Python: How to read csv file with different separators?
| 39,485,848 |
<p>This is the first line of my txt.file</p>
<pre><code>0.112296E+02-.121994E-010.158164E-030.158164E-030.000000E+000.340000E+030.328301E-010.000000E+00
</code></pre>
<p>There should be 8 columns, sometimes separated with '-', sometimes with '.'. It's very confusing, I just have to work with the file, I didn't generate it.</p>
<p>And second question: How can I work with the different columns? There is no header, so maybe:</p>
<p><code>df.iloc[:,0]</code> .. ? </p>
| 3 |
2016-09-14T08:30:33Z
| 39,486,193 |
<p>As I said in the comments, it is not a case of multiple separators; it is just a fixed-width format. <code>pandas</code> has a method to read such files. Try this:</p>
<pre><code>df = pd.read_fwf(myfile, widths=[12]*8, header=None)
print(df)
</code></pre>
<p>For the widths you have to provide the cell width, which looks like it is <em>12</em>, and the number of columns, which as you say must be <em>8</em>.</p>
<p>Note the <code>header=None</code>: without it pandas consumes the first data row as column names and appends <code>.1</code> to deduplicate repeated values, which is why the 4th and last elements looked wrong at first.</p>
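<p>For reference, here is a self-contained version of the same call, run against the sample line from the question with <code>io.StringIO</code> standing in for the real file path:</p>

```python
import pandas as pd
from io import StringIO

# the sample line from the question: 8 fields of 12 characters each
line = ('0.112296E+02-.121994E-010.158164E-030.158164E-03'
        '0.000000E+000.340000E+030.328301E-010.000000E+00')

# header=None keeps the first row as data instead of column names
df = pd.read_fwf(StringIO(line), widths=[12] * 8, header=None)
print(df.shape)        # (1, 8)
print(df.iloc[0, 0])   # 11.2296
print(df.iloc[0, 1])   # -0.0121994
```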
<hr>
<p>Alternatively, you can do it <em>"manually"</em> like so:</p>
<pre><code>myfile = r'C:\Users\user\Desktop\PythonScripts\a_file.csv'
width = 12
my_content = []
with open(myfile, 'r') as f_in:
    for line in f_in:
        data = [float(line[i * width:(i + 1) * width]) for i in range(len(line) // width)]
        my_content.append(data)

print(my_content)  # prints -> [[11.2296, -0.0121994, 0.000158164, 0.000158164, 0.0, 340.0, 0.0328301, 0.0]]
</code></pre>
<p>and every row would be a nested list.</p>
| 3 |
2016-09-14T08:48:57Z
|
[
"python",
"csv",
"pandas"
] |
Python: How to read csv file with different separators?
| 39,485,848 |
<p>This is the first line of my txt.file</p>
<pre><code>0.112296E+02-.121994E-010.158164E-030.158164E-030.000000E+000.340000E+030.328301E-010.000000E+00
</code></pre>
<p>There should be 8 columns, sometimes separated with '-', sometimes with '.'. It's very confusing, I just have to work with the file, I didn't generate it.</p>
<p>And second question: How can I work with the different columns? There is no header, so maybe:</p>
<p><code>df.iloc[:,0]</code> .. ? </p>
| 3 |
2016-09-14T08:30:33Z
| 39,486,509 |
<p>A possible solution is the following:</p>
<pre><code>row = '0.112296E+02-.121994E-010.158164E-030.158164E-030.000000E+000.340000E+030.328301E-010.000000E+00'
chunkLen = 12
for i in range(0, len(row), chunkLen):
    print(row[i:i + chunkLen])
</code></pre>
<p>You can easily extend the code to handle more general cases.</p>
| 1 |
2016-09-14T09:05:23Z
|
[
"python",
"csv",
"pandas"
] |
Python: How to read csv file with different separators?
| 39,485,848 |
<p>This is the first line of my txt.file</p>
<pre><code>0.112296E+02-.121994E-010.158164E-030.158164E-030.000000E+000.340000E+030.328301E-010.000000E+00
</code></pre>
<p>There should be 8 columns, sometimes separated with '-', sometimes with '.'. It's very confusing, I just have to work with the file, I didn't generate it.</p>
<p>And second question: How can I work with the different columns? There is no header, so maybe:</p>
<p><code>df.iloc[:,0]</code> .. ? </p>
| 3 |
2016-09-14T08:30:33Z
| 39,486,552 |
<p>As stated in comments, this is likely a list of numbers in scientific notation, that aren't separated by anything but simply glued together.
It could be interpreted as:</p>
<pre><code>0.112296E+02
-.121994E-010
.158164E-030
.158164E-030
.000000E+000
.340000E+030
.328301E-010
.000000E+00
</code></pre>
<p>or as </p>
<pre><code>0.112296E+02
-.121994E-01
0.158164E-03
0.158164E-03
0.000000E+00
0.340000E+03
0.328301E-01
0.000000E+00
</code></pre>
<p>Assuming the second interpretation is better, the trick is to split evenly every 12 characters.</p>
<pre><code>data = [line[i:i+12] for i in range(0, len(line), 12)]
</code></pre>
<p>If really the first interpretation is better, then I'd use a regex:</p>
<pre><code>import re
line = '0.112296E+02-.121994E-010.158164E-030.158164E-030.000000E+000.340000E+030.328301E-010.000000E+00'
pattern = r'[+-]?\d??\.\d+E[+-]\d+'
data = re.findall(pattern, line)
</code></pre>
<hr>
<p><strong>Edit</strong></p>
<p>Obviously, you'd need to iterate over each line in the file and add it to your dataframe. This is a rather inefficient thing to do in Pandas. Therefore, if your preferred interpretation is the fixed-width one, I'd go with @Ev. Kounis' answer: <code>df = pd.read_fwf(myfile, widths=[12]*8)</code></p>
<p>Otherwise, the inefficient way is:</p>
<pre><code>df = pd.DataFrame(columns=range(8))
with open(myfile, 'r') as f_in:
    for i, line in enumerate(f_in):
        data = re.findall(pattern, line)
        df.loc[i] = [float(d) for d in data]
</code></pre>
<p>The two things to notice here are that the DataFrame must be initialized with column names (here [0, 1, 2, 3..7], but perhaps you know of better identifiers), and that the regex gave us strings that must be cast to floats.</p>
| 4 |
2016-09-14T09:08:05Z
|
[
"python",
"csv",
"pandas"
] |
crontab to run python file if not running already
| 39,485,918 |
<p>I want to execute my python file via crontab only if it's down or not running already. I tried adding the below entry in crontab but it does not work</p>
<pre><code>24 07 * * * pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out
</code></pre>
<p>test.py works fine if I run <code>pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out</code> manually, and it also works in crontab if I remove <code>pgrep -f test.py ||</code> from my crontab and just keep <code>24 07 * * * nohup python /home/dp/script/test.py & > /var/tmp/test.out</code></p>
<p>Any idea why crontab does not work if I add <code>pgrep -f</code>? Is there any other way I can run test.py just one time, to avoid multiple running processes of test.py?
Thanks,
Deepak</p>
| 2 |
2016-09-14T08:34:09Z
| 39,569,235 |
<p>cron may be seeing the <code>pgrep -f test.py</code> process as well as the <code>test.py</code> process, giving you the wrong result from <code>pgrep</code>.<br>
Try the command without the <code>-f</code>; this should just look for an occurrence of <code>test.py</code>. Or replace <code>-f</code> with <code>-o</code>, which will look for the oldest occurrence.</p>
<p>Your other option is to insert into test.py something along the lines of:</p>
<pre><code>import os

Pid = os.popen('pgrep -f test.py').read(5).strip()
</code></pre>
<p>which will allow you to check within the code itself, if it is already running.</p>
| 0 |
2016-09-19T09:03:20Z
|
[
"python",
"crontab"
] |
crontab to run python file if not running already
| 39,485,918 |
<p>I want to execute my python file via crontab only if it's down or not running already. I tried adding the below entry in crontab but it does not work</p>
<pre><code>24 07 * * * pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out
</code></pre>
<p>test.py works fine if I run <code>pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out</code> manually, and it also works in crontab if I remove <code>pgrep -f test.py ||</code> from my crontab and just keep <code>24 07 * * * nohup python /home/dp/script/test.py & > /var/tmp/test.out</code></p>
<p>Any idea why crontab does not work if I add <code>pgrep -f</code>? Is there any other way I can run test.py just one time, to avoid multiple running processes of test.py?
Thanks,
Deepak</p>
| 2 |
2016-09-14T08:34:09Z
| 39,571,561 |
<h3>pgrep -f lists itself as a false match when run from cron</h3>
<p>I did the test with a <code>script.py</code> running an infinite loop. Then</p>
<pre class="lang-sh prettyprint-override"><code>pgrep -f script.py
</code></pre>
<p>...from the terminal, gave <strong><em>one</em></strong> pid, <code>13132</code> , while running from cron:</p>
<pre class="lang-sh prettyprint-override"><code>pgrep -f script.py > /path/to/out.txt
</code></pre>
<p>outputs <strong><em>two</em></strong> pids, <code>13132</code> and <code>13635</code>.</p>
<p>We can therefore conclude that the command <code>pgrep -f script.py</code> lists itself as a match, when run from cron. Not sure how and why, but most likely, this is indirectly caused by the fact that <code>cron</code> runs with a quite limited set of environment variables (HOME, LOGNAME, and SHELL).</p>
<h3>The solution</h3>
<p>Running <code>pgrep -f</code> from a (wrapper) script makes the command <em>not</em> list itself, even when run from <code>cron</code>. Subsequently, run the wrapper from <code>cron</code>:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
if ! pgrep -f 'test.py'
then
    nohup python /home/dp/script/test.py > /var/tmp/test.out &
# run the test, remove the two lines below afterwards
else
    echo "running" > ~/out_test.txt
fi
</code></pre>
<p>Note that the redirection has to come <em>before</em> the trailing <code>&</code>; writing <code>... & > file</code> backgrounds the command and then truncates the file as a separate, empty command.</p>
| 0 |
2016-09-19T11:01:02Z
|
[
"python",
"crontab"
] |
crontab to run python file if not running already
| 39,485,918 |
<p>I want to execute my python file via crontab only if it's down or not running already. I tried adding the below entry in crontab but it does not work</p>
<pre><code>24 07 * * * pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out
</code></pre>
<p>test.py works fine if I run <code>pgrep -f test.py || nohup python /home/dp/script/test.py & > /var/tmp/test.out</code> manually, and it also works in crontab if I remove <code>pgrep -f test.py ||</code> from my crontab and just keep <code>24 07 * * * nohup python /home/dp/script/test.py & > /var/tmp/test.out</code></p>
<p>Any idea why crontab does not work if I add <code>pgrep -f</code>? Is there any other way I can run test.py just one time, to avoid multiple running processes of test.py?
Thanks,
Deepak</p>
| 2 |
2016-09-14T08:34:09Z
| 39,571,775 |
<p>It is generally a good idea to check whether the application is running from within the application itself, rather than checking from outside and starting it: manage the process within the process rather than expecting another process to do it.</p>
<ol>
<li>Let the cron run the application always</li>
<li>At the start of execution of the application, have a lock file or similar mechanism that will tell whether the process is already running or not. </li>
<li>If the application is already running, update somewhere the last test time and exit the new process.</li>
<li>If the application is not running, log somewhere and notify someone (if required) and trigger the process start.</li>
</ol>
<p>This will ensure that you have more control over the lifetime of a process AND let you decide what to do in case of failures.</p>
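<p>A minimal sketch of the lock file from step 2 (the path, the pid check, and the stale-lock handling here are my own assumptions, not part of the question):</p>

```python
import os
import sys

LOCK_FILE = '/tmp/test_py.lock'  # hypothetical location

def already_running():
    """Return True if another instance holds the lock, else take the lock."""
    if os.path.exists(LOCK_FILE):
        with open(LOCK_FILE) as f:
            pid = int(f.read().strip())
        try:
            os.kill(pid, 0)   # signal 0 only checks that the pid exists
            return True
        except OSError:
            pass              # stale lock file: that process is gone
    with open(LOCK_FILE, 'w') as f:
        f.write(str(os.getpid()))
    return False

# at application start-up you would then do:
#     if already_running():
#         sys.exit(0)
```

<p>The stale-lock branch matters: without it, a crashed run would leave a lock file behind and block every later start.</p>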
<p>Not to mention, if you want to make sure that the application uptime is high, it is best to make use of monitoring services such as <a href="https://linux.die.net/man/1/monit" rel="nofollow">Monit</a>. Again, this will depend on your application adding a sanity layer that says whether it is OK or not.</p>
| 3 |
2016-09-19T11:10:46Z
|
[
"python",
"crontab"
] |
Unbound Local Error when Assigning to Function Arg
| 39,485,935 |
<pre><code>def make_accumulator(init):
    def accumulate(part):
        init = init + part
        return init
    return accumulate

A = make_accumulator(1)
print A(2)
</code></pre>
<p>gives me:-</p>
<pre><code>Traceback (most recent call last):
File "make-accumulator.py", line 8, in <module>
print A(2)
File "make-accumulator.py", line 3, in accumulate
init = init + part
UnboundLocalError: local variable 'init' referenced before assignment
</code></pre>
<p>Why is init not visible inside accumulate? </p>
| 0 |
2016-09-14T08:35:04Z
| 39,486,074 |
<p>That's because during parsing the inner function when Python sees the assignment <code>init = init + part</code> it thinks <code>init</code> is a local variable and it will only look for it in local scope when the function is actually invoked.</p>
<p>To fix it add <code>init</code> as an argument to <code>accumulate</code> with default value of <code>init</code>:</p>
<pre><code>def make_accumulator(init):
    def accumulate(part, init=init):
        init = init + part
        return init
    return accumulate
</code></pre>
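<p>Since the question is also tagged python-3.x: in Python 3 the cleaner fix is the <code>nonlocal</code> statement, which rebinds the enclosing variable instead of creating a local one:</p>

```python
def make_accumulator(init):
    def accumulate(part):
        nonlocal init       # rebind the enclosing init, not a new local
        init = init + part
        return init
    return accumulate

A = make_accumulator(1)
print(A(2))  # 3
print(A(5))  # 8 (the state persists between calls)
```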
<hr>
<p>Read: <a href="https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value" rel="nofollow">Why am I getting an UnboundLocalError when the variable has a value?</a></p>
| 1 |
2016-09-14T08:43:15Z
|
[
"python",
"function",
"python-3.x",
"scope",
"python-2.x"
] |
Unbound Local Error when Assigning to Function Arg
| 39,485,935 |
<pre><code>def make_accumulator(init):
    def accumulate(part):
        init = init + part
        return init
    return accumulate

A = make_accumulator(1)
print A(2)
</code></pre>
<p>gives me:-</p>
<pre><code>Traceback (most recent call last):
File "make-accumulator.py", line 8, in <module>
print A(2)
File "make-accumulator.py", line 3, in accumulate
init = init + part
UnboundLocalError: local variable 'init' referenced before assignment
</code></pre>
<p>Why is init not visible inside accumulate? </p>
| 0 |
2016-09-14T08:35:04Z
| 39,486,169 |
<pre><code>>>> def make_accumulator(init):
...     def accumulate(part):
...         return init + part
...     return accumulate
...
>>> A = make_accumulator(1)
>>> A(2)
3
</code></pre>
<p>Since you assign to <code>init</code> inside <code>accumulate</code>, Python interprets it as a local variable, which is therefore referenced before assignment. (Note that I removed the <code>init =</code> part.)</p>
<p>I am definitely not an expert about this, but got the hint from these posts : <a href="http://stackoverflow.com/questions/9264763/unboundlocalerror-in-python">Here</a> and <a href="http://stackoverflow.com/questions/4831680/function-inside-function">Here</a>.</p>
<p>I guess someone could explain it better...</p>
| 1 |
2016-09-14T08:47:27Z
|
[
"python",
"function",
"python-3.x",
"scope",
"python-2.x"
] |
pandas read_csv with pound sign in column headers
| 39,485,976 |
<p>I need to read data from a tab-delimited file where the 1st row contains column headers but the 1st character of that row is a pound sign/octothorpe/hashtag <code>#</code>.</p>
<p>The data look like this:</p>
<pre><code>FILE_CONTENTS = """\
# year-month-day spam eggs
1956-01-31 11 21
1985-03-20 12 22
1940-11-22 13 23
"""
</code></pre>
<p>I have a solution (answer posted below) but it feels like there is probably a better way.</p>
<p>There is a <a href="http://stackoverflow.com/questions/7086945">related question about doing this in R</a>.</p>
| 0 |
2016-09-14T08:37:15Z
| 39,485,977 |
<p>This gives the desired <code>DataFrame</code></p>
<pre><code>from io import StringIO
import pandas as pd
FILE_CONTENTS = """\
# year-month-day spam eggs
1956-01-31 11 21
1985-03-20 12 22
1940-11-22 13 23
"""
df = pd.read_csv(StringIO(FILE_CONTENTS), delim_whitespace=True, escapechar='#')
df.columns = df.columns.str.strip()
</code></pre>
<p>N.B. edited to include <a href="http://stackoverflow.com/users/704848/edchum">EdChum</a>'s fix for the leading white-space in initial column, supplied in <a href="http://stackoverflow.com/questions/39485976/pandas-read-csv-with-pound-sign-in-column-headers#comment66290633_39485977">comment</a>.</p>
<p>And seems preferable to various kludges I tried along the lines of:</p>
<pre><code>with open(filename) as f:
    header = f.readline()
cols = header.strip('#').split()
df = pd.read_csv(..., comment='#', names=cols)
</code></pre>
<p>edit: seeing <a href="http://stackoverflow.com/users/6207849/nickil-maveli">Nickil Maveli</a>'s answer I realize that I have to deal with both <code>#<space>year-month-day ...</code> <em>and</em> <code>#<tab>year-month-day ...</code> in the file headers.
So we will need a combination of Nickil's and EdChum's approaches.</p>
| 0 |
2016-09-14T08:37:15Z
|
[
"python",
"python-3.x",
"pandas"
] |
pandas read_csv with pound sign in column headers
| 39,485,976 |
<p>I need to read data from a tab-delimited file where the 1st row contains column headers but the 1st character of that row is a pound sign/octothorpe/hashtag <code>#</code>.</p>
<p>The data look like this:</p>
<pre><code>FILE_CONTENTS = """\
# year-month-day spam eggs
1956-01-31 11 21
1985-03-20 12 22
1940-11-22 13 23
"""
</code></pre>
<p>I have a solution (answer posted below) but it feels like there is probably a better way.</p>
<p>There is a <a href="http://stackoverflow.com/questions/7086945">related question about doing this in R</a>.</p>
| 0 |
2016-09-14T08:37:15Z
| 39,488,155 |
<p>You still have to shift the column names by a single position to the left to account for the empty column getting created due to the removal of <code>#</code> char. </p>
<p>Then, remove the extra column whose values are all <code>NaN</code>.</p>
<pre><code>from io import StringIO

import numpy as np
import pandas as pd

def column_cleaning(frame):
    frame.columns = np.roll(frame.columns, len(frame.columns)-1)
    return frame.dropna(how='all', axis=1)

FILE_CONTENTS = """\
# year-month-day spam eggs
1956-01-31 11 21
1985-03-20 12 22
1940-11-22 13 23
"""

df = pd.read_csv(StringIO(FILE_CONTENTS), delim_whitespace=True, escapechar="#")
column_cleaning(df)
</code></pre>
<p><a href="http://i.stack.imgur.com/Lucxf.png" rel="nofollow"><img src="http://i.stack.imgur.com/Lucxf.png" alt="Image"></a></p>
| 0 |
2016-09-14T10:29:09Z
|
[
"python",
"python-3.x",
"pandas"
] |
Inconsistent behaviour when importing my own module in Python 2.7
| 39,485,982 |
<p>I have created a module, it's sitting in its own folder with an <code>__init__.py</code> and four files that contain my classes.</p>
<p>When doing <code>from MyPackage import *</code> I'm getting the modules that I have written into the <code>__all__</code> statement in my <code>__init__.py</code> just as expected.</p>
<p>When doing <code>from MyPackage import ModuleX</code> I can import any module individually just fine.</p>
<p>When doing <code>import MyPackage</code> and then say <code>dir(MyPackage)</code> however, all I get is this:</p>
<pre><code>['__all__',
'__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__']
</code></pre>
<p>My modules aren't shown and I can't access them using <code>MyPackage.ModuleX</code> either.</p>
<p>The only thing I have written into my <code>__init__.py</code> is the <code>__all__ = [ModuleX]</code> statement.</p>
<p>Why does the last statement not see my modules? Do I have to set some more configuration?</p>
| 1 |
2016-09-14T08:37:54Z
| 39,486,219 |
<p><code>__all__</code> determines what names are <em>exported</em> from that module. However, in order to export them you would need to import them in the first place, which you haven't.</p>
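<p>For example, a minimal <code>__init__.py</code> would be (the module names here are placeholders for your own files):</p>

```python
# MyPackage/__init__.py
from . import ModuleX, ModuleY   # import the submodules so they become package attributes

__all__ = ['ModuleX', 'ModuleY']
```

<p>After this, <code>import MyPackage</code> followed by <code>MyPackage.ModuleX</code> works as expected, and the modules show up in <code>dir(MyPackage)</code>.</p>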
| 1 |
2016-09-14T08:50:09Z
|
[
"python",
"python-2.7",
"module"
] |
Scraping ajax page with Scrapy?
| 39,486,224 |
<p>I'm using Scrapy to scrape data from this page</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/magasins/gardena" rel="nofollow">https://www.bricoetloisirs.ch/magasins/gardena</a></p>
</blockquote>
<p>The product list appears dynamically.
I found the url that fetches the products:</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272" rel="nofollow">https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272</a></p>
</blockquote>
<p>But when I scrape it with Scrapy it gives me an empty page</p>
<pre><code><span class="pageSizeInformation" id="page0" data-page="0" data-pagesize="12">Page: 0 / Size: 12</span>
</code></pre>
<p>Here is my code </p>
<pre><code># -*- coding: utf-8 -*-
import scrapy
from v4.items import Product

class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"
    start_urls = [
        'https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
    ]

    def parse(self, response):
        print response.body
</code></pre>
| 0 |
2016-09-14T08:50:20Z
| 39,486,396 |
<p>As far as I know, websites use JavaScript to make Ajax calls.<br>
When you use <code>scrapy</code>, the page's JS does not load.</p>
<p>You will need to take a look at <a href="http://selenium-python.readthedocs.io/" rel="nofollow">Selenium</a> for scraping those kinds of pages.</p>
<p>Or find out what ajax calls are being made and send them yourself.<br>
Check this: <a href="http://stackoverflow.com/questions/8550114/can-scrapy-be-used-to-scrape-dynamic-content-from-websites-that-are-using-ajax?rq=1">Can scrapy be used to scrape dynamic content from websites that are using AJAX?</a> may help you as well.</p>
| 0 |
2016-09-14T08:58:44Z
|
[
"python",
"ajax",
"scrapy"
] |
Scraping ajax page with Scrapy?
| 39,486,224 |
<p>I'm using Scrapy to scrape data from this page</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/magasins/gardena" rel="nofollow">https://www.bricoetloisirs.ch/magasins/gardena</a></p>
</blockquote>
<p>The product list appears dynamically.
I found the url that fetches the products:</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272" rel="nofollow">https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272</a></p>
</blockquote>
<p>But when I scrape it with Scrapy it gives me an empty page</p>
<pre><code><span class="pageSizeInformation" id="page0" data-page="0" data-pagesize="12">Page: 0 / Size: 12</span>
</code></pre>
<p>Here is my code </p>
<pre><code># -*- coding: utf-8 -*-
import scrapy
from v4.items import Product

class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"
    start_urls = [
        'https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
    ]

    def parse(self, response):
        print response.body
</code></pre>
| 0 |
2016-09-14T08:50:20Z
| 39,487,265 |
<p>I believe you need to send an additional request just like a browser does. Try to modify your code as follows:</p>
<pre><code># -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from v4.items import Product

class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"
    start_urls = [
        'https://www.bricoetloisirs.ch/coop/ajax/nextPage/'
    ]

    def parse(self, response):
        request_body = '(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
        yield Request(url=response.url, body=request_body, callback=self.parse_page)

    def parse_page(self, response):
        print response.body
</code></pre>
| 0 |
2016-09-14T09:41:58Z
|
[
"python",
"ajax",
"scrapy"
] |
Scraping ajax page with Scrapy?
| 39,486,224 |
<p>I'm using Scrapy to scrape data from this page</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/magasins/gardena" rel="nofollow">https://www.bricoetloisirs.ch/magasins/gardena</a></p>
</blockquote>
<p>The product list appears dynamically.
I found the url that fetches the products:</p>
<blockquote>
<p><a href="https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272" rel="nofollow">https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272</a></p>
</blockquote>
<p>But when I scrape it with Scrapy it gives me an empty page</p>
<pre><code><span class="pageSizeInformation" id="page0" data-page="0" data-pagesize="12">Page: 0 / Size: 12</span>
</code></pre>
<p>Here is my code </p>
<pre><code># -*- coding: utf-8 -*-
import scrapy
from v4.items import Product

class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"
    start_urls = [
        'https://www.bricoetloisirs.ch/coop/ajax/nextPage/(cpgnum=1&layout=7.01-14_180_69_164_182&uiarea=2&carea=%24ROOT&fwrd=frwd0&cpgsize=12)/.do?page=2&_=1473841539272'
    ]

    def parse(self, response):
        print response.body
</code></pre>
| 0 |
2016-09-14T08:50:20Z
| 39,490,622 |
<p>I solved it:</p>
<pre><code># -*- coding: utf-8 -*-
import scrapy
from v4.items import Product

class GardenaCoopBricoLoisirsSpider(scrapy.Spider):
    name = "Gardena_Coop_Brico_Loisirs_py"
    start_urls = [
        'https://www.bricoetloisirs.ch/magasins/gardena'
    ]

    def parse(self, response):
        for page in xrange(1, 50):
            url = response.url + '/.do?page=%s&_=1473841539272' % page
            yield scrapy.Request(url, callback=self.parse_page)

    def parse_page(self, response):
        print response.body
</code></pre>
| 0 |
2016-09-14T12:35:52Z
|
[
"python",
"ajax",
"scrapy"
] |
Recognition of a sound (a word) with machine learning in python
| 39,486,341 |
<p>I'm preparing an experiment, and I want to write a program using python to recognize a certain word spoken by the participants.</p>
<p>I searched a lot about speech recognition in python but the results are complicated (e.g. CMUSphinx).</p>
<p>What I want to achieve is a program that receives a sound file (containing only one word, not English), and I tell the program what the sound means and what output I want to see.</p>
<p>I have seen the sklearn <a href="http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html" rel="nofollow">example</a> about recognizing hand-written digits. I want to know if I can do something like the example:</p>
<ol>
<li>training the program to return certain output (e.g. numbers) according to sound files from different people saying same word;</li>
<li>when take in new sound files from other person saying same word, return same values.</li>
</ol>
<p>Can I do this with python and sklearn?
If so, where should I start?</p>
<p>Thank you!</p>
| 0 |
2016-09-14T08:56:07Z
| 39,486,898 |
<p>I've written such a program for text recognition. I can tell you that if you choose to "teach" your program manually you will have a lot of work; think about the variation in voice due to accents etc.</p>
<p>You could start by <a href="https://wiki.python.org/moin/PythonInMusic" rel="nofollow">looking for a sound analyzer here</a> (Musical Analysis). Try to identify the waves of a simple word like "yes" and write an algorithm that scores the variation of the sound file as a percentage. This way you can build in a margin to protect yourself from false positives, and vice versa.</p>
<p>Also you might need to remove background noise from the sound file first, as it may interfere with your identification patterns.</p>
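<p>To the sklearn part of the question: yes, the digits recipe carries over, provided you first turn each recording into a fixed-length feature vector. The sketch below uses random numbers as stand-in features (real audio would need e.g. <code>scipy.io.wavfile</code> plus a spectral feature step); the class means and sizes are made up for illustration:</p>

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)

# stand-in feature vectors: 20 "recordings" of word A (label 0)
# and 20 of word B (label 1), 16 features each
X = np.vstack([rng.normal(0.0, 1.0, (20, 16)),
               rng.normal(3.0, 1.0, (20, 16))])
y = np.array([0] * 20 + [1] * 20)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# a new "recording" close to word B's features is labelled 1
print(clf.predict(np.full((1, 16), 3.0)))  # [1]
```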
| 0 |
2016-09-14T09:25:30Z
|
[
"python",
"audio",
"machine-learning"
] |
calculate avg response time from multiple curl commands using python
| 39,486,482 |
<p>I am trying to do load testing of my web server using the curl command.</p>
<p>I am able to run multiple curl commands, but now I also want to calculate the avg response time from all the curl commands which were executed</p>
<pre><code>from functools import partial
from multiprocessing.dummy import Pool
from subprocess import call

commands = []
command = "curl -s -w \"Time:%{time_total}\n\" -o /dev/null -k -X GET \"https://google.com\""
for i in range(10):  # run 10 curl commands in total
    commands.append(command)

pool = Pool(5)  # number of concurrent commands at a time
for i, returncode in enumerate(pool.imap(partial(call, shell=True), commands)):
    if returncode != 0:
        print("%d command failed: %d" % (i, returncode))
</code></pre>
<p>output</p>
<pre><code>Time:0.654
Time:0.689
Time:0.720
Time:0.725
Time:0.735
Time:0.624
Time:0.635
Time:0.633
Time:0.678
Time:0.708
</code></pre>
<p>How can I capture the <code>Time</code> and calculate the average response time?</p>
<p>Thanks</p>
| 0 |
2016-09-14T09:03:48Z
| 39,487,374 |
<p>Instead of relying on <code>call</code> you could create a separate function executed by <code>imap</code>. There you can use <a href="https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen" rel="nofollow"><code>Popen</code></a>, which allows you to communicate with the child process. The example below writes only the time to stdout, which is then captured and returned to the parent process:</p>
<pre><code>from multiprocessing.dummy import Pool
from subprocess import Popen, PIPE

def child(cmd):
    p = Popen(cmd, stdout=PIPE, shell=True)
    out, err = p.communicate()
    return out, p.returncode

commands = []
command = "curl -s -w \"%{time_total}\" -o /dev/null -k -X GET \"https://google.com\""
for i in range(10):  # run 10 curl commands in total
    commands.append(command)

pool = Pool(5)  # number of concurrent commands at a time
times = []
for i, (output, returncode) in enumerate(pool.imap(child, commands)):
    if returncode != 0:
        print("{} command failed: {}".format(i, returncode))
    else:
        print("{} success: {}".format(i, output))
        times.append(float(output))

print('Average: {}'.format(sum(times) / len(times) if times else 0))
</code></pre>
<p>Output:</p>
<pre><code>0 success: 0.109
1 success: 0.108
2 success: 0.103
3 success: 0.093
4 success: 0.085
5 success: 0.091
6 success: 0.109
7 success: 0.114
8 success: 0.092
9 success: 0.099
Average: 0.1003
</code></pre>
| 1 |
2016-09-14T09:48:00Z
|
[
"python",
"curl"
] |
Google maps API to extract company details
| 39,486,794 |
<p>I am trying to extract the details displayed when you search for a company name on google maps. As shown in the image below:</p>
<p><img src="http://i.stack.imgur.com/LwWBf.png" alt="Google maps search results"></p>
<p>I tried <code>http://maps.google.com/maps/api/geocode/json?address=</code> but this gives only the formatted_address. I need to get the website and contact details from the results. Can someone help me with which API to use for this? It would also be nice if the code is in R or python.</p>
| -1 |
2016-09-14T09:20:23Z
| 39,487,354 |
<p>The URL you mentioned is a <a href="https://developers.google.com/maps/documentation/geocoding/start" rel="nofollow">Google Geocoding API</a>, for company details you should use <strong>places</strong> method from <a href="https://developers.google.com/places/web-service/details" rel="nofollow">Google Places API</a>.</p>
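<p>A sketch of how the Place Details response could be consumed in Python. The field names (<code>website</code>, <code>formatted_phone_number</code>) come from the Place Details API; the place id, key, and sample payload below are placeholders:</p>

```python
def extract_details(details_json):
    """Pick the useful fields out of a Place Details JSON response."""
    result = details_json.get('result', {})
    return {
        'name': result.get('name'),
        'website': result.get('website'),
        'phone': result.get('formatted_phone_number'),
    }

# The live call would look roughly like this (needs a valid API key,
# so it is not run here):
# import requests
# resp = requests.get(
#     'https://maps.googleapis.com/maps/api/place/details/json',
#     params={'placeid': 'PLACE_ID', 'key': 'API_KEY'})
# print(extract_details(resp.json()))

# shape of the response, trimmed to the fields we read:
sample = {'result': {'name': 'Example Corp',
                     'website': 'http://example.com',
                     'formatted_phone_number': '+41 55 555 55 55'}}
print(extract_details(sample)['website'])  # http://example.com
```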
| 0 |
2016-09-14T09:46:58Z
|
[
"python",
"google-maps-api-3"
] |
How to fit an ellipse contour with 4 points?
| 39,486,869 |
<p>I have 4 coordinate points. Using these 4 points I want to fit an ellipse, but it seems the requirement for <code>cv2.fitEllipse()</code> is a minimum of 5 points. Is there any way to get around this and draw a contour with just 4 points?</p>
<p>How does fitEllipse draw a contour when <code>cv2.CHAIN_APPROX_SIMPLE</code> is used, which gives only 4 coordinate points?</p>
| 0 |
2016-09-14T09:24:00Z
| 39,488,635 |
<p>Four points aren't enough to fit an ellipse without ambiguity (don't forget that a general ellipse can have arbitrary rotation). You need at least five to get an exact solution or more to fit in a least square manner. For a more detailed explanation I found <a href="https://sarcasticresonance.wordpress.com/2012/05/14/how-many-points-does-it-take-to-define/" rel="nofollow">this</a>.</p>
<p>You can draw the contour itself (without fitting anything) with <code>drawContours</code> (or fit e.g. a circle).</p>
<p>So to answer your second question (assuming that I understand it right): it doesn't if fewer than 5 points are available, but <code>FindContours</code> in combination with <code>CHAIN_APPROX_SIMPLE</code> eventually returns more, depending on the particular detected contour.</p>
<p>See <a href="http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/bounding_rotated_ellipses/bounding_rotated_ellipses.html" rel="nofollow">here</a>. In this C++ example ellipses are only fitted if at least 5 points are available.</p>
| 1 |
2016-09-14T10:53:17Z
|
[
"python",
"opencv",
"image-processing"
] |
How to fit an ellipse contour with 4 points?
| 39,486,869 |
<p>I have 4 coordinate points. Using these 4 points I want to fit an ellipse, but it seems cv2.fitEllipse() requires a minimum of 5 points. Is there any way to get around this and draw a contour with just 4 points?</p>
<p>How does fitEllipse draw a contour when cv2.CHAIN_APPROX_SIMPLE is used, which gives only 4 coordinate points?</p>
| 0 |
2016-09-14T09:24:00Z
| 39,494,238 |
<p>If you take a look at the equation for ellipses:</p>
<p><a href="http://i.stack.imgur.com/EkEHr.gif" rel="nofollow"><img src="http://i.stack.imgur.com/EkEHr.gif" alt="enter image description here"></a></p>
<p>and the homogenous representation:</p>
<p><a href="http://i.stack.imgur.com/ehVZY.gif" rel="nofollow"><img src="http://i.stack.imgur.com/ehVZY.gif" alt="enter image description here"></a>.</p>
<p>The conic-matrix looks like this:</p>
<p><a href="http://i.stack.imgur.com/sA0u5.gif" rel="nofollow"><img src="http://i.stack.imgur.com/sA0u5.gif" alt="enter image description here"></a></p>
<p>you see that it has 6 parameters minus one projective degree of freedom, so 5 degrees of freedom (if f is not equal to 0).
Every point on C imposes one condition on the parameters a-f, so five points are needed to describe a conic:</p>
<p><a href="http://i.stack.imgur.com/U5U4i.gif" rel="nofollow"><img src="http://i.stack.imgur.com/U5U4i.gif" alt="enter image description here"></a></p>
<p>To solve this you need to compute the kernel of the matrix (for five points):</p>
<p><a href="http://i.stack.imgur.com/She4g.gif" rel="nofollow"><img src="http://i.stack.imgur.com/She4g.gif" alt="enter image description here"></a> </p>
<p>if you have more than five, you need the least-squares solution:
<a href="http://i.stack.imgur.com/icWFJ.gif" rel="nofollow"><img src="http://i.stack.imgur.com/icWFJ.gif" alt="enter image description here"></a></p>
<p>and x is the eigenvector from the smallest eigenvalue of <a href="http://i.stack.imgur.com/Nsl12.gif" rel="nofollow"><img src="http://i.stack.imgur.com/Nsl12.gif" alt="enter image description here"></a></p>
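<p>The least-squares step above can be sketched in a few lines of NumPy (an illustration of the math, not OpenCV's internal implementation):</p>

```python
import numpy as np

def fit_conic(xs, ys):
    """Least-squares conic fit: returns (a, b, c, d, e, f), up to scale, with
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # One row of the design matrix per point.
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    # The eigenvector of A^T A with the smallest eigenvalue is the last
    # right-singular vector of A.
    _, _, vt = np.linalg.svd(A)
    return vt[-1]

# Five points on the unit circle, whose conic is x^2 + y^2 - 1 = 0.
t = np.linspace(0, 2 * np.pi, 5, endpoint=False)
params = fit_conic(np.cos(t), np.sin(t))
params = params / params[0]  # normalise the arbitrary scale so a == 1
print(params)                # close to [1, 0, 1, 0, 0, -1]
```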
| 0 |
2016-09-14T15:24:44Z
|
[
"python",
"opencv",
"image-processing"
] |
Scrapy installed, but won't recognized in the command line
| 39,486,965 |
<p>I installed Scrapy in my Python 2.7 environment on Windows 7, but when I try to start a new Scrapy project using <code>scrapy startproject newProject</code> the command prompt shows this message:</p>
<pre><code>'scrapy' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Note:</p>
<ul>
<li>I also have python 3.5 but that do not have scrapy</li>
<li>This question is not duplicate of <a href="http://stackoverflow.com/questions/37757233/scrapy-installed-but-wont-run-from-the-command-line">this</a></li>
</ul>
| 0 |
2016-09-14T09:28:15Z
| 39,487,077 |
<p><a href="http://doc.scrapy.org/en/1.1/intro/install.html#intro-install-platform-notes" rel="nofollow">Scrapy should be in your environment variables</a>. You can check if it's there with the following on Windows:</p>
<pre><code>echo %PATH% # To print only the path
set # For all
</code></pre>
<p>or</p>
<pre><code>printenv # In linux
</code></pre>
<p>Make sure scrapy is on your <em>path</em>; if it's not, add it, and that should (probably) resolve your problem. I say probably, since it might be caused by other issues you have not mentioned.</p>
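<p>From Python 3.3+ you can check this programmatically with <code>shutil.which</code>, which performs the same lookup the shell does (on Python 2, <code>distutils.spawn.find_executable</code> is the rough equivalent):</p>

```python
import os
import shutil

# Full path of the scrapy executable if the shell would find it, else None.
scrapy_path = shutil.which("scrapy")

if scrapy_path is None:
    # Print the directories that are actually searched; the Scripts
    # directory of your Python install must be one of them.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        print(directory)
else:
    print("scrapy found at", scrapy_path)
```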
| 1 |
2016-09-14T09:34:10Z
|
[
"python",
"python-2.7",
"scrapy",
"scrapy-spider"
] |
Scrapy installed, but won't recognized in the command line
| 39,486,965 |
<p>I installed Scrapy in my Python 2.7 environment on Windows 7, but when I try to start a new Scrapy project using <code>scrapy startproject newProject</code> the command prompt shows this message:</p>
<pre><code>'scrapy' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Note:</p>
<ul>
<li>I also have python 3.5 but that do not have scrapy</li>
<li>This question is not duplicate of <a href="http://stackoverflow.com/questions/37757233/scrapy-installed-but-wont-run-from-the-command-line">this</a></li>
</ul>
| 0 |
2016-09-14T09:28:15Z
| 39,510,766 |
<p>See the <a href="http://doc.scrapy.org/en/1.1/intro/install.html" rel="nofollow">official documentation</a> or stackoverflow <a class='doc-link' href="http://stackoverflow.com/documentation/scrapy/2099/introduction-to-scrapy/22896/creating-a-project#t=201609151211472059058">documentation</a>.</p>
<ul>
<li><strong>Set environment variable</strong> </li>
<li><strong>Install pywin32</strong> </li>
</ul>
| 0 |
2016-09-15T12:13:35Z
|
[
"python",
"python-2.7",
"scrapy",
"scrapy-spider"
] |
Matplotlib: misaligned colorbar ticks?
| 39,486,999 |
<p>I'm trying to plot data in the range 0-69 with a bespoke colormap. Here is an example:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
colors = [(0.9, 0.9, 0.9), # Value = 0
(0.3, 0.3, 0.3), # Value = 9
(1.0, 0.4, 0.4), # Value = 10
(0.4, 0.0, 0.0), # Value = 19
(0.0, 0.7, 1.0), # Value = 20
(0.0, 0.1, 0.3), # Value = 29
(1.0, 1.0, 0.4), # Value = 30
(0.4, 0.4, 0.0), # Value = 39
(1.0, 0.4, 1.0), # Value = 40
(0.4, 0.0, 0.4), # Value = 49
(0.4, 1.0, 0.4), # Value = 50
(0.0, 0.4, 0.0), # Value = 59
(1.0, 0.3, 0.0), # Value = 60
(1.0, 0.8, 0.6)] # Value = 69
# Create the values specified above
max_val = 69
values = [n for n in range(max_val + 1) if n % 10 == 0 or n % 10 == 9]
# Create colormap, first normalise values
values = [v / float(max_val) for v in values]
values_and_colors = [(v, c) for v, c in zip(values, colors)]
cmap = LinearSegmentedColormap.from_list('my_cmap', values_and_colors,
N=max_val + 1)
# Create sample data in range 0-69
data = np.round(np.random.random((20, 20)) * max_val)
ax = plt.imshow(data, cmap=cmap, interpolation='nearest')
cb = plt.colorbar(ticks=range(0, max_val, 10))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/ZROy0.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZROy0.png" alt="enter image description here"></a></p>
<p>I'm thoroughly puzzled as to why the colorbar ticks do not line up with the distinct separations between the color gradients (for which there are 10 colors each).</p>
<p>I've tried setting the data and view intervals from [0, 69] to [0, 70]:</p>
<pre><code>cb.locator.axis.set_view_interval(0, 70)
cb.locator.axis.set_data_interval(0, 70)
cb.update_ticks()
</code></pre>
<p>but this doesn't appear to do anything.</p>
<p>Please can someone advise?</p>
| 1 |
2016-09-14T09:30:29Z
| 39,567,623 |
<p>The simplest way to solve my problem was to set <code>vmax</code> in the definition of the mappable:</p>
<pre><code>ax = plt.imshow(data, cmap=cmap, interpolation='nearest', vmax=max_val + 1)
</code></pre>
<p>It was being set at <code>max_val</code> because the Colorbar class has the call <code>mappable.autoscale_None()</code> in its <code>__init__</code>, which was setting <code>vmax</code> to <code>data.max()</code>, i.e. 69.</p>
<p>I <em>think</em> I am just a victim of using the <code>LinearSegmentedColormap</code> in the wrong way. I want discrete values assigned to specific colors, but the display of a colorbar associated with <code>LinearSegmentedColormap</code> assumes continuous data and therefore defaults to setting unspecified limits to <code>data.min()</code> and <code>data.max()</code>, i.e. in this case 0 and 69.</p>
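<p>For what it's worth, the matplotlib combination designed for exactly this discrete-bands case is <code>ListedColormap</code> plus <code>BoundaryNorm</code>. A sketch (the colours here are placeholders, not the ones from the question):</p>

```python
from matplotlib.colors import BoundaryNorm, ListedColormap

# One flat colour per decade: 0-9, 10-19, ..., 60-69.
colors = ["0.9", "0.3", "red", "darkred", "deepskyblue", "navy", "yellow"]
cmap = ListedColormap(colors)
boundaries = list(range(0, 80, 10))  # [0, 10, ..., 70]
norm = BoundaryNorm(boundaries, cmap.N)

# norm maps a data value to the index of its band; pass both to imshow:
# plt.imshow(data, cmap=cmap, norm=norm, interpolation='nearest')
band_indices = [int(norm(v)) for v in (5, 15, 69)]  # bands 0, 1 and 6
```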
| 0 |
2016-09-19T07:31:25Z
|
[
"python",
"matplotlib",
"colorbar"
] |
How to prevent Django's label_tag function from escaping the label?
| 39,487,079 |
<p>Example:</p>
<pre><code>>>> example.label
&#x3bb;<sub>blabla</sub>
>>> example.label_tag()
[...]&amp;#x3bb;&lt;blabla&gt;[...]
</code></pre>
<p>Even calling <code>mark_safe(example.label)</code> before <code>label_tag()</code> does not prevent Django from escaping the HTML. How can I get <code>label_tag()</code> to return unescaped labels?</p>
| 1 |
2016-09-14T09:34:12Z
| 39,487,162 |
<p>Try this:</p>
<pre><code>from HTMLParser import HTMLParser
h = HTMLParser()
unescaped = h.unescape(example.label_tag())
print unescaped
</code></pre>
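<p>On Python 3, the <code>HTMLParser</code> module was renamed; the standard <code>html</code> module provides the same unescaping:</p>

```python
import html

label = "&#x3bb;<sub>blabla</sub>"
unescaped = html.unescape(label)  # decodes &#x3bb; to the character λ
print(unescaped)
```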
| 1 |
2016-09-14T09:37:26Z
|
[
"python",
"django"
] |
How to prevent Django's label_tag function from escaping the label?
| 39,487,079 |
<p>Example:</p>
<pre><code>>>> example.label
&#x3bb;<sub>blabla</sub>
>>> example.label_tag()
[...]&amp;#x3bb;&lt;blabla&gt;[...]
</code></pre>
<p>Even calling <code>mark_safe(example.label)</code> before <code>label_tag()</code> does not prevent Django from escaping the HTML. How can I get <code>label_tag()</code> to return unescaped labels?</p>
| 1 |
2016-09-14T09:34:12Z
| 39,487,288 |
<p>There is a comment in the <a href="https://github.com/django/django/blob/2ced2f785d5aca0354abf5841d5449b7a49509dc/django/forms/boundfield.py#L135" rel="nofollow">code for <code>label_tag</code></a></p>
<pre><code>Wraps the given contents in a <label>, if the field has an ID attribute.
contents should be 'mark_safe'd to avoid HTML escaping. If contents
aren't given, uses the field's HTML-escaped label.
</code></pre>
<p>So </p>
<pre><code>example.label_tag(contents=mark_safe(example.label))
</code></pre>
<p>Should work. I can't see another way around this problem.</p>
| 1 |
2016-09-14T09:43:20Z
|
[
"python",
"django"
] |
How to prevent Django's label_tag function from escaping the label?
| 39,487,079 |
<p>Example:</p>
<pre><code>>>> example.label
&#x3bb;<sub>blabla</sub>
>>> example.label_tag()
[...]&amp;#x3bb;&lt;blabla&gt;[...]
</code></pre>
<p>Even calling <code>mark_safe(example.label)</code> before <code>label_tag()</code> does not prevent Django from escaping the HTML. How can I get <code>label_tag()</code> to return unescaped labels?</p>
| 1 |
2016-09-14T09:34:12Z
| 39,488,587 |
<p>You have to mark the label as safe when you define the field.</p>
<pre><code>class MyForm(forms.Form):
example = forms.Field(label=mark_safe('&#x3bb;<sub>blabla</sub>'))
</code></pre>
<p>Example:</p>
<pre><code>>>> f = MyForm({'example': 'foo'})
>>> str(f)
'<tr><th><label for="id_example">&#x3bb;<sub>blabla</sub>:</label></th><td><input id="id_example" name="example" type="text" value="foo" /></td></tr>'
</code></pre>
| 0 |
2016-09-14T10:50:44Z
|
[
"python",
"django"
] |
How to send email from python
| 39,487,103 |
<p>My Code</p>
<pre><code>import smtplib
import socket
import sys
from email.mime.text import MIMEText
fp = open("CR_new.txt", 'r')
msg = MIMEText(fp.read())
fp.close()
you = "rajiv@domain.com"
me = "rajiv@domain.com"
msg['Subject'] = 'The contents of %s' % "CR_new.txt"
msg['From'] = you
msg['To'] = me
s = smtplib.SMTP('127.0.0.1')
s.sendmail(you,me, msg.as_string())
s.quit()
</code></pre>
<blockquote>
<p>ConnectionRefusedError: [WinError 10061] No connection could be made
because the target machine actively refused it</p>
</blockquote>
<p><strong>Note:</strong></p>
<ul>
<li>Not having a SMTP server </li>
</ul>
| 0 |
2016-09-14T09:35:12Z
| 39,487,885 |
<p>This code will help you send the email.
You just need to provide your email id and password.</p>
<p>Most importantly: don't name your file email.py; it will shadow the standard library's <code>email</code> package.</p>
<pre><code>import smtplib
from email.mime.text import MIMEText

EMAIL_TO = ["rajiv@domain.com"]
EMAIL_FROM = "rajiv@domain.com"
EMAIL_SUBJECT = "Test Mail... "
EMAIL_SPACE = ", "

# Build a proper MIME message; a plain dict has no as_string() method.
msg = MIMEText("This is the body of the mail.")
msg['Subject'] = EMAIL_SUBJECT
msg['To'] = EMAIL_SPACE.join(EMAIL_TO)
msg['From'] = EMAIL_FROM
try:
    mail = smtplib.SMTP_SSL('smtp.bizmail.yahoo.com', 465)
    #
    # for Gmail use smtplib.SMTP_SSL('smtp.gmail.com', 465)
    # (587 is the STARTTLS port, used with smtplib.SMTP plus starttls())
    #
    mail.login(EMAIL_FROM, 'password')  # your account email password
    mail.sendmail(EMAIL_FROM, EMAIL_TO, msg.as_string())
    mail.quit()
except Exception, e:
    print "Error sending mail:", e
</code></pre>
| 0 |
2016-09-14T10:13:40Z
|
[
"python",
"email"
] |
How to send email from python
| 39,487,103 |
<p>My Code</p>
<pre><code>import smtplib
import socket
import sys
from email.mime.text import MIMEText
fp = open("CR_new.txt", 'r')
msg = MIMEText(fp.read())
fp.close()
you = "rajiv@domain.com"
me = "rajiv@domain.com"
msg['Subject'] = 'The contents of %s' % "CR_new.txt"
msg['From'] = you
msg['To'] = me
s = smtplib.SMTP('127.0.0.1')
s.sendmail(you,me, msg.as_string())
s.quit()
</code></pre>
<blockquote>
<p>ConnectionRefusedError: [WinError 10061] No connection could be made
because the target machine actively refused it</p>
</blockquote>
<p><strong>Note:</strong></p>
<ul>
<li>Not having a SMTP server </li>
</ul>
| 0 |
2016-09-14T09:35:12Z
| 39,497,595 |
<p>I would suggest using a package like <a href="https://github.com/kootenpv/yagmail" rel="nofollow">yagmail</a>, rather than trying to figure out how to get smtplib to work. Disclaimer: I'm the maintainer of yagmail.</p>
<p>Code would look like:</p>
<pre><code>import yagmail
yag = yagmail.SMTP(host="127.0.0.1")
yag.send(to="rajiv@domain.com", subject="subject", contents="content")
</code></pre>
| 0 |
2016-09-14T18:48:07Z
|
[
"python",
"email"
] |
Using matplotlib to solve Point in Polygone
| 39,487,194 |
<p>I am looking for an algorithm to check if a point is within a polygon or not.</p>
<p>I am currently using mplPath and contains_point() but it doesn't seem to work in some cases.</p>
<p>EDIT 16 Sept 2016:</p>
<p>Okay, so I improved my code by simply checking whether the points were also on the edges. I still have some issues with the rectangle and bow-tie examples though:</p>
<p>NEW CODE:</p>
<pre><code>#for PIP problem
import matplotlib.path as mplPath
import numpy as np
#for plot
import matplotlib.pyplot as plt
def plot(poly,points):
bbPath = mplPath.Path(poly)
#plot polygon
plt.plot(*zip(*poly))
#plot points
xs,ys,cs = [],[],[]
for point in points:
xs.append(point[0])
ys.append(point[1])
color = inPoly(poly,point)
cs.append(color)
print point,":", color
plt.scatter(xs,ys, c = cs , s = 20*4*2)
#setting limits
axes = plt.gca()
axes.set_xlim([min(xs)-5,max(xs)+50])
axes.set_ylim([min(ys)-5,max(ys)+10])
plt.show()
def isBetween(a, b, c): #is c between a and b ?
crossproduct = (c[1] - a[1]) * (b[0] - a[0]) - (c[0] - a[0]) * (b[1] - a[1])
if abs(crossproduct) > 0.01 : return False # (or != 0 if using integers)
dotproduct = (c[0] - a[0]) * (b[0] - a[0]) + (c[1] - a[1])*(b[1] - a[1])
if dotproduct < 0 : return False
squaredlengthba = (b[0] - a[0])*(b[0] - a[0]) + (b[1] - a[1])*(b[1] - a[1])
if dotproduct > squaredlengthba: return False
return True
def get_edges(poly):
# get edges
edges = []
for i in range(len(poly)-1):
t = [poly[i],poly[i+1]]
edges.append(t)
return edges
def inPoly(poly,point):
    bbPath = mplPath.Path(poly)  # bbPath was only local to plot(), so rebuild it here
    if bbPath.contains_point(point):
return 1
else:
for e in get_edges(poly):
if isBetween(e[0],e[1],point):
return 1
return 0
# TESTS ========================================================================
#set up poly
polys = {
1 : [[10,10],[10,50],[50,50],[50,80],[100,80],[100,10],[10,10]], # test rectangulary shape
2 : [[20,10],[10,20],[30,20],[20,10]], # test triangle
3 : [[0,0],[0,10],[20,0],[20,10],[0,0]], # test bow-tie
4 : [[0,0],[0,10],[20,10],[20,0],[0,0]] # test rect
}
#points to check
points = {
1 : [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)], # rectangulary shape test pts
2 : [[20,10],[10,20],[30,20],[-5,0],[20,15]] , # triangle test pts
3 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,5]], # bow-tie shape test pts
4 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,5]] # rect shape test pts
}
#print bbPath.contains_points(points) #0 if outside, 1 if inside
for data in zip(polys.itervalues(),points.itervalues()):
plot(data[0],data[1])
</code></pre>
<p>Outputs from new code:</p>
<p><a href="http://i.stack.imgur.com/tS9I9.png" rel="nofollow"><img src="http://i.stack.imgur.com/tS9I9.png" alt="enter image description here"></a></p>
<p>OLD CODE:</p>
<pre><code>#for PIP problem
import matplotlib.path as mplPath
import numpy as np
#for plot
import matplotlib.pyplot as plt
#set up poly
array = np.array([[10,10],[10,50],[50,50],[50,80],[100,80],[100,10]])
bbPath = mplPath.Path(array)
#points to check
points = [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)]
print bbPath.contains_points(points) #0 if outside, 1 if inside
#plot polygon
plt.plot(*zip(*array))
#plot points
xs,ys,cs = [],[],[]
for point in points:
xs.append(point[0])
ys.append(point[1])
cs.append(bbPath.contains_point(point))
plt.scatter(xs,ys, c = cs)
#setting limits
axes = plt.gca()
axes.set_xlim([0,120])
axes.set_ylim([0,100])
plt.show()
</code></pre>
<p>I come up with the following <a href="http://i.stack.imgur.com/MsJbg.png" rel="nofollow"><img src="http://i.stack.imgur.com/MsJbg.png" alt="graph"></a>. As you can see, the three points circled in red are indicated as being outside the polygon (in blue), when I would expect them to be inside.</p>
<p>I also tried changing the radius value of the path <code>bbPath.contains_points(points, radius = 1.)</code> but that didn't make any difference.</p>
<p>Any help would be welcome.</p>
<p>EDIT :</p>
<p><a href="http://i.stack.imgur.com/L7Qam.png" rel="nofollow"><img src="http://i.stack.imgur.com/L7Qam.png" alt="enter image description here"></a>screenshots from the algorithm proposed in the answer to this question seem to show that it fails for other cases.</p>
| 0 |
2016-09-14T09:39:15Z
| 39,513,209 |
<p>If doing it the long way isn't bad, you could go off the principle: <strong>a point is inside a polygon if the number of ray intersections is odd, and outside if it is even</strong>. Here's some sloppy Python I put together with the test case you gave above.</p>
<pre class="lang-python prettyprint-override"><code># polygon points: [[x,y],...]
points = [[10,10],[10,50],[50,50],[50,80],[100,80],[100,10]]
# get edges
edges = []
for i in range(len(points)-1):
t = [points[i],points[i+1]]
edges.append(t)
# get min and max x values to use as the length of ray
xmax = max(points, key=lambda item: item[0])[0]
xmin = min(points, key=lambda item: item[0])[0]
dist = xmax-xmin
# return True if p1,p2,p3 are on the same line
def colinear(p1,p2,p3):
return (p1[0]*(p2[1] - p3[1]) + p2[0]*(p3[1] - p1[1]) + p3[0]*(p1[1] - p2[1])) == 0
# return True if p1 is on the line segment p2-p3
def inRange(p1,p2,p3):
dx = abs(p3[0]-p2[0])
dy = abs(p3[1]-p2[1])
if abs(p3[0]-p1[0])+abs(p2[0]-p1[0])==dx and abs(p3[1]-p1[1])+abs(p2[1]-p1[1])==dy:
return True
return False
# line segment intersection between
# (x1,y1)-(x1+dist,y1) and
# (x3,y3)-(x4,y4)
def intersect(x1,y1,x3,y3,x4,y4):
x2 = x1+dist
B1 = x1-x2
C1 = B1*y1
A2 = y4-y3
B2 = x3-x4
C2 = A2*x3+B2*y3
det =-A2*B1
if(det == 0):
return False
x = (B2*C1 - B1*C2)/det
if inRange((x,y1),(x3,y3),(x4,y4)):
return True
return False
# return True if point (x,y) is inside
# polygon defined above
def isInside(x,y):
i = 0
for edge in edges:
# if (x,y) is on the edge return True
if colinear((x,y),edge[0],edge[1]) and inRange((x,y),edge[0],edge[1]):
return True
# if both x values of edge are to the left of (x,y)
# if both y values of edge are are above or bellow (x,y)
# then skip
if edge[0][0] < x and edge[1][0] < x:
continue
if edge[0][1] < y and edge[1][1] < y:
continue
# if ray intersects edge, increment i
if intersect(x,y,edge[0][0],edge[0][1],edge[1][0],edge[1][1]):
i+=1
if i%2==1:
return True
else:
return False
l = [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)]
for p in l:
print(isInside(p[0],p[1]))
</code></pre>
| 0 |
2016-09-15T14:06:02Z
|
[
"python",
"algorithm",
"python-2.7",
"matplotlib",
"point-in-polygon"
] |
Using matplotlib to solve Point in Polygone
| 39,487,194 |
<p>I am looking for an algorithm to check if a point is within a polygon or not.</p>
<p>I am currently using mplPath and contains_point() but it doesn't seem to work in some cases.</p>
<p>EDIT 16 Sept 2016:</p>
<p>Okay, so I improved my code by simply checking whether the points were also on the edges. I still have some issues with the rectangle and bow-tie examples though:</p>
<p>NEW CODE:</p>
<pre><code>#for PIP problem
import matplotlib.path as mplPath
import numpy as np
#for plot
import matplotlib.pyplot as plt
def plot(poly,points):
bbPath = mplPath.Path(poly)
#plot polygon
plt.plot(*zip(*poly))
#plot points
xs,ys,cs = [],[],[]
for point in points:
xs.append(point[0])
ys.append(point[1])
color = inPoly(poly,point)
cs.append(color)
print point,":", color
plt.scatter(xs,ys, c = cs , s = 20*4*2)
#setting limits
axes = plt.gca()
axes.set_xlim([min(xs)-5,max(xs)+50])
axes.set_ylim([min(ys)-5,max(ys)+10])
plt.show()
def isBetween(a, b, c): #is c between a and b ?
crossproduct = (c[1] - a[1]) * (b[0] - a[0]) - (c[0] - a[0]) * (b[1] - a[1])
if abs(crossproduct) > 0.01 : return False # (or != 0 if using integers)
dotproduct = (c[0] - a[0]) * (b[0] - a[0]) + (c[1] - a[1])*(b[1] - a[1])
if dotproduct < 0 : return False
squaredlengthba = (b[0] - a[0])*(b[0] - a[0]) + (b[1] - a[1])*(b[1] - a[1])
if dotproduct > squaredlengthba: return False
return True
def get_edges(poly):
# get edges
edges = []
for i in range(len(poly)-1):
t = [poly[i],poly[i+1]]
edges.append(t)
return edges
def inPoly(poly,point):
    bbPath = mplPath.Path(poly)  # bbPath was only local to plot(), so rebuild it here
    if bbPath.contains_point(point):
return 1
else:
for e in get_edges(poly):
if isBetween(e[0],e[1],point):
return 1
return 0
# TESTS ========================================================================
#set up poly
polys = {
1 : [[10,10],[10,50],[50,50],[50,80],[100,80],[100,10],[10,10]], # test rectangulary shape
2 : [[20,10],[10,20],[30,20],[20,10]], # test triangle
3 : [[0,0],[0,10],[20,0],[20,10],[0,0]], # test bow-tie
4 : [[0,0],[0,10],[20,10],[20,0],[0,0]] # test rect
}
#points to check
points = {
1 : [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)], # rectangulary shape test pts
2 : [[20,10],[10,20],[30,20],[-5,0],[20,15]] , # triangle test pts
3 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,5]], # bow-tie shape test pts
4 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,5]] # rect shape test pts
}
#print bbPath.contains_points(points) #0 if outside, 1 if inside
for data in zip(polys.itervalues(),points.itervalues()):
plot(data[0],data[1])
</code></pre>
<p>Outputs from new code:</p>
<p><a href="http://i.stack.imgur.com/tS9I9.png" rel="nofollow"><img src="http://i.stack.imgur.com/tS9I9.png" alt="enter image description here"></a></p>
<p>OLD CODE:</p>
<pre><code>#for PIP problem
import matplotlib.path as mplPath
import numpy as np
#for plot
import matplotlib.pyplot as plt
#set up poly
array = np.array([[10,10],[10,50],[50,50],[50,80],[100,80],[100,10]])
bbPath = mplPath.Path(array)
#points to check
points = [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)]
print bbPath.contains_points(points) #0 if outside, 1 if inside
#plot polygon
plt.plot(*zip(*array))
#plot points
xs,ys,cs = [],[],[]
for point in points:
xs.append(point[0])
ys.append(point[1])
cs.append(bbPath.contains_point(point))
plt.scatter(xs,ys, c = cs)
#setting limits
axes = plt.gca()
axes.set_xlim([0,120])
axes.set_ylim([0,100])
plt.show()
</code></pre>
<p>I come up with the following <a href="http://i.stack.imgur.com/MsJbg.png" rel="nofollow"><img src="http://i.stack.imgur.com/MsJbg.png" alt="graph"></a>. As you can see, the three points circled in red are indicated as being outside the polygon (in blue), when I would expect them to be inside.</p>
<p>I also tried changing the radius value of the path <code>bbPath.contains_points(points, radius = 1.)</code> but that didn't make any difference.</p>
<p>Any help would be welcome.</p>
<p>EDIT :</p>
<p><a href="http://i.stack.imgur.com/L7Qam.png" rel="nofollow"><img src="http://i.stack.imgur.com/L7Qam.png" alt="enter image description here"></a>screenshots from the algorithm proposed in the answer to this question seem to show that it fails for other cases.</p>
| 0 |
2016-09-14T09:39:15Z
| 39,530,141 |
<p>Okay so I finally managed to get it done using shapely instead.</p>
<pre><code>#for PIP problem
import matplotlib.path as mplPath
import numpy as np
#for plot
import matplotlib.pyplot as plt
import shapely.geometry as shapely
class MyPoly(shapely.Polygon):
def __init__(self,points):
super(MyPoly,self).__init__(points)
self.points = points
self.points_shapely = [shapely.Point(p[0],p[1]) for p in points]
def convert_to_shapely_points_and_poly(poly,points):
poly_shapely = MyPoly(poly)
points_shapely = [shapely.Point(p[0],p[1]) for p in points]
return poly_shapely,points_shapely
def plot(poly_init,points_init):
#convert to shapely poly and points
poly,points = convert_to_shapely_points_and_poly(poly_init,points_init)
#plot polygon
plt.plot(*zip(*poly.points))
#plot points
xs,ys,cs = [],[],[]
for point in points:
xs.append(point.x)
ys.append(point.y)
color = inPoly(poly,point)
cs.append(color)
print point,":", color
plt.scatter(xs,ys, c = cs , s = 20*4*2)
#setting limits
axes = plt.gca()
axes.set_xlim([min(xs)-5,max(xs)+50])
axes.set_ylim([min(ys)-5,max(ys)+10])
plt.show()
def isBetween(a, b, c): #is c between a and b ?
crossproduct = (c.y - a.y) * (b.x - a.x) - (c.x - a.x) * (b.y - a.y)
if abs(crossproduct) > 0.01 : return False # (or != 0 if using integers)
dotproduct = (c.x - a.x) * (b.x - a.x) + (c.y - a.y)*(b.y - a.y)
if dotproduct < 0 : return False
squaredlengthba = (b.x - a.x)*(b.x - a.x) + (b.y - a.y)*(b.y - a.y)
if dotproduct > squaredlengthba: return False
return True
def get_edges(poly):
# get edges
edges = []
for i in range(len(poly.points)-1):
t = [poly.points_shapely[i],poly.points_shapely[i+1]]
edges.append(t)
return edges
def inPoly(poly,point):
if poly.contains(point) == True:
return 1
else:
for e in get_edges(poly):
if isBetween(e[0],e[1],point):
return 1
return 0
# TESTS ========================================================================
#set up poly
polys = {
1 : [[10,10],[10,50],[50,50],[50,80],[100,80],[100,10],[10,10]], # test rectangulary shape
2 : [[20,10],[10,20],[30,20],[20,10]], # test triangle
3 : [[0,0],[0,10],[20,0],[20,10],[0,0]], # test bow-tie
4 : [[0,0],[0,10],[20,10],[20,0],[0,0]], # test rect clockwise
5 : [[0,0],[20,0],[20,10],[0,10],[0,0]] # test rect counter-clockwise
}
#points to check
points = {
1 : [(10,25),(50,75),(60,10),(20,20),(20,60),(40,50)], # rectangulary shape test pts
2 : [[20,10],[10,20],[30,20],[-5,0],[20,15]] , # triangle test pts
3 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,5]], # bow-tie shape test pts
4 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,2],[30,8]], # rect shape test pts
5 : [[0,0],[0,10],[20,0],[20,10],[10,0],[10,5],[15,2],[30,8]] # rect shape test pts
}
#print bbPath.contains_points(points) #0 if outside, 1 if inside
for data in zip(polys.itervalues(),points.itervalues()):
plot(data[0],data[1])
</code></pre>
| 0 |
2016-09-16T11:15:01Z
|
[
"python",
"algorithm",
"python-2.7",
"matplotlib",
"point-in-polygon"
] |
create a dynamic two dimensional array in python (loop)
| 39,487,235 |
<p>I am trying to create a two-dimensional array in Python. In my PHP I have this code:</p>
<pre><code>$i = 1;
$arr= array();
foreach($mes as $res){
$arr[$i]->type = $res->item;
$arr[$i]->action = $res->title;
$i++;
}
</code></pre>
<p>How can I write that code in Python? I need to build it dynamically in the loop, as you can see in the code above. I need to assign the values: arr[1]['type'] = 'blabla', arr[1]['action'] = 'blabla'. I hope you understand what I am trying to do.</p>
<p>i have done this but doesnt work:</p>
<pre><code>i = 1
for res in mes:
fa = [i]
fa[i].append(['type'])
fa[i]['type'].append(res['item'])
</code></pre>
| 0 |
2016-09-14T09:40:40Z
| 39,487,314 |
<p>A <em>list comprehension</em> with nested dictionaries is a simple way to do this:</p>
<pre><code>arr = [{'type': res['item'], 'action': res['title']} for res in mes]
</code></pre>
<hr>
<p>The first item (and others similarly by changing the index) in <code>arr</code> can then be accessed with:</p>
<pre><code>arr[0]['type'] # or arr[0]['action']
# ^ indices of lists in Python start from zero
</code></pre>
| 0 |
2016-09-14T09:44:44Z
|
[
"python"
] |
how to use variables in where clause of orientdb query using python
| 39,487,454 |
<p>code</p>
<pre><code>import pyorient
# create connection
client = pyorient.OrientDB("localhost", 2424)
# open databse
client.db_open( "Apple", "admin", "admin" )
requiredObj = client.command(" select out().question as qlist,out().seq as qseq,out().pattern as pattern,out().errormsg as errormsg from chat where app_cat='%s' and module='%s' and type='%s' and prob_cat='%s' ",(appCategory,module,type,problemCategory))
for data in requiredObj :
print data
</code></pre>
<p>The above is not working; please suggest an alternative way.</p>
| -2 |
2016-09-14T09:52:09Z
| 39,488,543 |
<p>You could use this command</p>
<pre><code>requiredObj = client.command("select from chat where name='%s'" % "chat 1");
</code></pre>
<p>or </p>
<pre><code>requiredObj = client.command("select from chat where name='%s' and room='%s'" % ("chat 1","1"));
</code></pre>
<p>Hope it helps</p>
| 0 |
2016-09-14T10:48:27Z
|
[
"python",
"orientdb"
] |
Python list concatenation with strings into new list
| 39,487,507 |
<p>I'm looking for the best way to take a list of strings and generate a new list with each item from the previous list concatenated with a specific string.</p>
<p><strong>Example pseudo code</strong></p>
<pre><code>list1 = ['Item1','Item2','Item3','Item4']
string = '-example'
NewList = ['Item1-example','Item2-example','Item3-example','Item4-example']
</code></pre>
<p><strong>Attempt</strong></p>
<pre><code>NewList = (string.join(list1))
#This of course makes one big string
</code></pre>
| 1 |
2016-09-14T09:54:43Z
| 39,487,525 |
<p>If you want to create a list, a list comprehension is usually the thing to do.</p>
<pre><code>new_list = ["{}{}".format(item, string) for item in list1]
</code></pre>
| 5 |
2016-09-14T09:55:22Z
|
[
"python"
] |
Python list concatenation with strings into new list
| 39,487,507 |
<p>I'm looking for the best way to take a list of strings and generate a new list with each item from the previous list concatenated with a specific string.</p>
<p><strong>Example pseudo code</strong></p>
<pre><code>list1 = ['Item1','Item2','Item3','Item4']
string = '-example'
NewList = ['Item1-example','Item2-example','Item3-example','Item4-example']
</code></pre>
<p><strong>Attempt</strong></p>
<pre><code>NewList = (string.join(list1))
#This of course makes one big string
</code></pre>
| 1 |
2016-09-14T09:54:43Z
| 39,487,557 |
<p>Use string concatenation in a list comprehension:</p>
<pre><code>>>> list1 = ['Item1', 'Item2', 'Item3', 'Item4']
>>> string = '-example'
>>> [x + string for x in list1]
['Item1-example', 'Item2-example', 'Item3-example', 'Item4-example']
</code></pre>
| 3 |
2016-09-14T09:56:40Z
|
[
"python"
] |
Python list concatenation with strings into new list
| 39,487,507 |
<p>I'm looking for the best way to take a list of strings and generate a new list with each item from the previous list concatenated with a specific string.</p>
<p><strong>Example pseudo code</strong></p>
<pre><code>list1 = ['Item1','Item2','Item3','Item4']
string = '-example'
NewList = ['Item1-example','Item2-example','Item3-example','Item4-example']
</code></pre>
<p><strong>Attempt</strong></p>
<pre><code>NewList = (string.join(list1))
#This of course makes one big string
</code></pre>
| 1 |
2016-09-14T09:54:43Z
| 39,487,899 |
<p>Concatenate each list item with the string:</p>
<pre><code>>>> list1 = ['Item1', 'Item2', 'Item3', 'Item4']  # avoid shadowing the builtin list
>>> newList = [i + '-example' for i in list1]
>>> newList
['Item1-example', 'Item2-example', 'Item3-example', 'Item4-example']
</code></pre>
| 1 |
2016-09-14T10:14:34Z
|
[
"python"
] |
Python list concatenation with strings into new list
| 39,487,507 |
<p>I'm looking for the best way to take a list of strings and generate a new list with each item from the previous list concatenated with a specific string.</p>
<p><strong>Example pseudo code</strong></p>
<pre><code>list1 = ['Item1','Item2','Item3','Item4']
string = '-example'
NewList = ['Item1-example','Item2-example','Item3-example','Item4-example']
</code></pre>
<p><strong>Attempt</strong></p>
<pre><code>NewList = (string.join(list1))
#This of course makes one big string
</code></pre>
| 1 |
2016-09-14T09:54:43Z
| 39,487,941 |
<p>An alternative to list comprehension is using <code>map()</code>:</p>
<pre><code>>>> map(lambda x: x+string,list1)
['Item1-example', 'Item2-example', 'Item3-example', 'Item4-example']
</code></pre>
<p>Note: in Python 3, <code>map()</code> returns an iterator, so wrap it in <code>list()</code>: <code>list(map(lambda x: x + string, list1))</code>.</p>
| 2 |
2016-09-14T10:17:40Z
|
[
"python"
] |
How to stop bokeh server?
| 39,487,574 |
<p>I do use bokeh to plot sensor data live on the local LAN. Bokeh is started from within my python application using popen: <code>Popen("bokeh serve --host=localhost:5006 --host=192.168.8.100:5006", shell=True)</code></p>
<p>I would like to close bokeh server from within the application. However, I cannot find anything in the <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/cli.html#userguide-cli" rel="nofollow">documentation</a>. Also <code>bokeh serve --help</code> does not give any hint how to do that.</p>
<p>EDIT: based on the accepted answer I came up with following solution:</p>
<pre><code> self.bokeh_serve = subprocess.Popen(shlex.split(command),
shell=False, stdout=subprocess.PIPE)
</code></pre>
<p>I used <code>self.bokeh_serve.kill()</code> for ending the process. Maybe <code>.terminate()</code> would be better. I will try it.</p>
| 1 |
2016-09-14T09:57:29Z
| 39,487,949 |
<p>Without knowing bokeh and assuming that you use Python >= 3.2 or Linux, you could try to kill the process with <code>SIGTERM</code>, <code>SIGINT</code> or <code>SIGHUP</code>, using <a href="https://docs.python.org/3/library/os.html#os.kill" rel="nofollow"><code>os.kill()</code></a> with <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.pid" rel="nofollow"><code>Popen.pid</code></a> <strong>or even better</strong> <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.send_signal" rel="nofollow"><code>Popen.send_signal()</code></a>. If bokeh has proper signal handlers, it will even shut down cleanly.</p>
<p>However, you may be better off using the option <code>shell=False</code>, because with <code>shell=True</code>, the signal is sent to the shell instead of the actual process.</p>
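<p>A hedged sketch of that approach — the long-running child process below is a stand-in so the snippet runs without bokeh installed; substitute the real <code>bokeh serve ...</code> command line:</p>

```python
import shlex
import signal
import subprocess
import sys

# Stand-in for "bokeh serve --host=localhost:5006"; replace with the real command.
command = "{} -c 'import time; time.sleep(60)'".format(shlex.quote(sys.executable))

# shell=False so the signal reaches the server process itself,
# not an intermediate shell
proc = subprocess.Popen(shlex.split(command), shell=False)

# ... later, from within the application:
proc.send_signal(signal.SIGTERM)  # ask the server to shut down
proc.wait()                       # reap the child process
print(proc.returncode)
```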
| 0 |
2016-09-14T10:17:58Z
|
[
"python",
"bokeh"
] |
How to write numpy arrays to .txt file, starting at a certain line? numpy version 1.6
| 39,487,605 |
<p>At: <a href="http://stackoverflow.com/questions/39483774/how-to-write-numpy-arrays-to-txt-file-starting-at-a-certain-line">How to write numpy arrays to .txt file, starting at a certain line?</a></p>
<p>People helped me to solve my problem - this works for numpy version 1.7 or later. Unfortunately I have to use version 1.6 - the following code (thank you @Praveen)</p>
<pre><code>extra_text = 'Answer to life, the universe and everything = 42'
header = '# Filexy\n# time operation1 operation2\n' + extra_text
np.savetxt('example.txt', np.c_[time, operation1, operation2],
           header=header, fmt='%d', delimiter='\t', comments='')
</code></pre>
<p>gives me an error with numpy 1.6:</p>
<pre><code>TypeError: savetxt() got an unexpected keyword argument 'header'
</code></pre>
<p>Is there a work-around for Version 1.6 that produces the same result:</p>
<pre><code># Filexy
# time operation1 operation2
Answer to life, the universe and everything = 42
0 12 100
60 23 123
120 68 203
180 26 301
</code></pre>
| 2 |
2016-09-14T09:58:38Z
| 39,488,326 |
<p>You write your header first, then you dump the data.
Note that you'll need to add the <code>#</code> to each line of the header yourself, as <code>np.savetxt</code> won't do it.</p>
<pre><code>time = np.array([0,60,120,180])
operation1 = np.array([12,23,68,26])
operation2 = np.array([100,123,203,301])
header='#Filexy\n#time operation1 operation2'
with open('example.txt', 'w') as f:
    f.write(header + '\n')  # newline so the data does not start on the header line
np.savetxt(f, np.c_[time, operation1, operation2],
fmt='%d',
delimiter='\t')
</code></pre>
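<p>If reproducing the exact desired output (including the uncommented extra line) matters more than using <code>np.savetxt</code> at all, the file can also be written with plain file I/O — a stdlib-only sketch, independent of the numpy version:</p>

```python
def write_table(path, header_lines, rows):
    """Write the given header lines verbatim, then tab-separated integer rows."""
    with open(path, 'w') as f:
        for line in header_lines:
            f.write(line + '\n')
        for row in rows:
            f.write('\t'.join(str(int(v)) for v in row) + '\n')

write_table('example.txt',
            ['# Filexy',
             '# time operation1 operation2',
             'Answer to life, the universe and everything = 42'],
            zip([0, 60, 120, 180], [12, 23, 68, 26], [100, 123, 203, 301]))

print(open('example.txt').read())
```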
| 2 |
2016-09-14T10:37:19Z
|
[
"python",
"numpy"
] |
Cassandra Python Driver Error callback shows No error
| 39,487,752 |
<p>I'm right now performing load tests on an API endpoint that saves data into cassandra. In general it works well, but when I perform async insert operations I get the following messages on the error callback:</p>
<pre><code>ERROR:root:Query '<BatchStatement type=UNLOGGED, statements=382, consistency=ONE>' failed: errors={}, last_host=XXXXX
</code></pre>
<p>I perform batch insert the following way:</p>
<pre><code>query_template = self.query_template(table, columns, values, ttl, insertion_timestamp)
statement = self.session.prepare(query_template)
statement.consistency_level = self.write_consistency_level
batch = BatchStatement(batch_type=BatchType.UNLOGGED, retry_policy=RetryPolicy.RETRY,
consistency_level=self.write_consistency_level)
for elem in list_of_dictionary:
values = [elem[key] for key in field_list]
batch.add(statement, values)
if async:
future = self.session.execute_async(batch, values)
future.add_errback(error_handler, batch)
else:
self.session.execute(batch, values)
</code></pre>
<p>With the error callback handler:</p>
<pre><code>def default_error_handler(exc, batch):
"""
Default callback function that is triggered when the cassandra async operation failed
    :param exc: the exception raised by the failed query
    :param batch: the batch statement that failed
"""
logging.error("Query '%s' failed: %s", batch, exc)
</code></pre>
<p>Does anyone have a clue?</p>
| 0 |
2016-09-14T10:06:03Z
| 39,491,691 |
<p>So I found out the issue.</p>
<p>It is a client-side error of type <code>OperationTimedOut</code>. You can find it here:</p>
<p><a href="https://github.com/datastax/python-driver/blob/1fd961a55a06a3ab739a3995d09c53a1b0e35fb5/cassandra/__init__.py" rel="nofollow">https://github.com/datastax/python-driver/blob/1fd961a55a06a3ab739a3995d09c53a1b0e35fb5/cassandra/<strong>init</strong>.py</a></p>
<p>I suggest additionally logging the type of the exception in your callback handler, like this:</p>
<pre><code>def default_error_handler(exc, batch):
"""
Default callback function that is triggered when the cassandra async operation failed
    :param exc: the exception raised by the failed query
    :param batch: the batch statement that failed
"""
logging.error("Batch '%s' failed with exception '%s' of type '%s' ", batch, exc, type(exc))
</code></pre>
<p>I will now try to solve our issues!</p>
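<p>Once the exception type is visible, the errback can branch on it — for example to retry timed-out batches with a smaller size. A driver-free sketch of that dispatch (the <code>OperationTimedOut</code> class below is only a stand-in for <code>cassandra.OperationTimedOut</code> so the snippet runs without the driver installed):</p>

```python
def classify_failure(exc):
    """Label a failed async query by its exception class name so the
    errback can decide between retrying and just logging."""
    if type(exc).__name__ == 'OperationTimedOut':
        return 'client-timeout'  # consider smaller batches or a higher timeout
    return 'other'

# Stand-in exception class; in real code this comes from the cassandra package.
class OperationTimedOut(Exception):
    pass

print(classify_failure(OperationTimedOut('errors={}, last_host=...')))
print(classify_failure(ValueError('boom')))
```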
| 0 |
2016-09-14T13:25:11Z
|
[
"python",
"cassandra",
"datastax"
] |
how to wrap automatically functions from certain file
| 39,487,769 |
<p>It's a well-known fact that there are many ways to get a function name using the Python standard library; here's a little example:</p>
<pre><code>import sys
import dis
import traceback
def get_name():
stack = traceback.extract_stack()
filename, codeline, funcName, text = stack[-2]
return funcName
def foo1():
print("Foo0 start")
print("Inside-_getframe {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo1 end")
def foo2():
print("Foo2 start")
print("Inside {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo2 end")
def foo3():
print("Foo3 start")
print("Inside {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo3 end")
for f in [foo1, foo2, foo3]:
print("Outside: {0}".format(f.__name__))
f()
print('-' * 80)
</code></pre>
<p>You can use <a href="https://docs.python.org/2/library/traceback.html" rel="nofollow">traceback</a>, <a href="https://docs.python.org/2/library/sys.html#sys._getframe" rel="nofollow">sys._getframe</a>, <a href="https://docs.python.org/2/library/dis.html" rel="nofollow">dis</a> and maybe there is a lot of more options... so far so good, python is awesome to do this kind of introspection.</p>
<p>Now, here's the thing, I'd like to know how to wrap automatically functions (at file level) to print its name and also measuring the execution time when they are executed. For instance, something like this:</p>
<pre><code>def foo1():
print("Foo0 processing")
def foo2():
print("Foo2 processing")
def foo3():
print("Foo3 processing")
wrap_function_from_this_file()
for f in [foo1, foo2, foo3]:
f()
print('-' * 80)
</code></pre>
<p>Would print something like:</p>
<pre><code>foo1 started
Foo1 processing
foo1 finished, elapsed time=1ms
--------------------------------------------------------------------------------
foo2 started
Foo2 processing
foo2 finished, elapsed time=2ms
--------------------------------------------------------------------------------
foo3 started
Foo3 processing
foo3 finished, elapsed time=3ms
--------------------------------------------------------------------------------
</code></pre>
<p>As you can see, the idea would be not adding any wrapper per-function manually to the file's functions. <strong>wrap_function_from_this_file</strong> would automagically introspect the file where it is executed and modify the functions, wrapping them somehow - in this case, with some code printing each function's name and execution time.</p>
<p>Just for the record, <strong>I'm not asking for any profiler</strong>. I'd like to know whether this is possible to do and how.</p>
| 1 |
2016-09-14T10:07:03Z
| 39,488,868 |
<p>A solution could be to use <code>globals()</code> for getting information about currently defined objects. Here is a simple wrapper function, which replaces the functions within the given globals data by a wrapped version of them:</p>
<pre><code>import types
def my_tiny_wrapper(glb):
def wrp(f):
# create a function which is not in
# local space of my_tiny_wrapper
def _inner(*args, **kwargs):
print('wrapped', f.__name__)
            result = f(*args, **kwargs)
            print('end wrap', f.__name__)
            return result
return _inner
for f in [f for f in glb.values() if type(f) == types.FunctionType
and f.__name__ != 'my_tiny_wrapper']:
print('WRAP FUNCTION', f.__name__)
glb[f.__name__] = wrp(f)
</code></pre>
<p>It can be used like this:</p>
<pre><code>def peter(): pass
def pan(a): print('salat and onions')
def g(a,b,c='A'): print(a,b,c)
# pass the current globals to the funcion
my_tiny_wrapper(globals())
g(4,b=2,c='D') # test keyword arguments
peter() # test no arguments
pan(4) # single argument
</code></pre>
<p>generating the following result:</p>
<pre><code>~ % python derp.py
('WRAP FUNCTION', 'g')
('WRAP FUNCTION', 'pan')
('WRAP FUNCTION', 'peter')
('wrapped', 'g')
(4, 2, 'D')
('end wrap', 'g')
('wrapped', 'peter')
('end wrap', 'peter')
('wrapped', 'pan')
salat and onions
('end wrap', 'pan')
</code></pre>
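<p>One refinement worth noting (an addition, not part of the answer above): the plain wrapper replaces each function's metadata, so <code>f.__name__</code> becomes <code>_inner</code> after wrapping. <code>functools.wraps</code> preserves it:</p>

```python
import functools
import types

def wrap_all(glb):
    def wrp(f):
        @functools.wraps(f)  # keep __name__, __doc__, etc. of the original
        def _inner(*args, **kwargs):
            print('wrapped', f.__name__)
            result = f(*args, **kwargs)
            print('end wrap', f.__name__)
            return result
        return _inner
    for name, obj in list(glb.items()):  # list() because we mutate glb
        if isinstance(obj, types.FunctionType) and obj.__name__ != 'wrap_all':
            glb[name] = wrp(obj)

def greet():
    return 'hi'

wrap_all(globals())
print(greet.__name__)  # still 'greet' thanks to functools.wraps
print(greet())
```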
| 1 |
2016-09-14T11:05:10Z
|
[
"python",
"python-2.7",
"python-3.x",
"introspection"
] |
how to wrap automatically functions from certain file
| 39,487,769 |
<p>It's a well-known fact that there are many ways to get a function name using the Python standard library; here's a little example:</p>
<pre><code>import sys
import dis
import traceback
def get_name():
stack = traceback.extract_stack()
filename, codeline, funcName, text = stack[-2]
return funcName
def foo1():
print("Foo0 start")
print("Inside-_getframe {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo1 end")
def foo2():
print("Foo2 start")
print("Inside {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo2 end")
def foo3():
print("Foo3 start")
print("Inside {0}".format(sys._getframe().f_code.co_name))
print("Inside-traceback {0}".format(get_name()))
print("Foo3 end")
for f in [foo1, foo2, foo3]:
print("Outside: {0}".format(f.__name__))
f()
print('-' * 80)
</code></pre>
<p>You can use <a href="https://docs.python.org/2/library/traceback.html" rel="nofollow">traceback</a>, <a href="https://docs.python.org/2/library/sys.html#sys._getframe" rel="nofollow">sys._getframe</a>, <a href="https://docs.python.org/2/library/dis.html" rel="nofollow">dis</a> and maybe there is a lot of more options... so far so good, python is awesome to do this kind of introspection.</p>
<p>Now, here's the thing, I'd like to know how to wrap automatically functions (at file level) to print its name and also measuring the execution time when they are executed. For instance, something like this:</p>
<pre><code>def foo1():
print("Foo0 processing")
def foo2():
print("Foo2 processing")
def foo3():
print("Foo3 processing")
wrap_function_from_this_file()
for f in [foo1, foo2, foo3]:
f()
print('-' * 80)
</code></pre>
<p>Would print something like:</p>
<pre><code>foo1 started
Foo1 processing
foo1 finished, elapsed time=1ms
--------------------------------------------------------------------------------
foo2 started
Foo2 processing
foo2 finished, elapsed time=2ms
--------------------------------------------------------------------------------
foo3 started
Foo3 processing
foo3 finished, elapsed time=3ms
--------------------------------------------------------------------------------
</code></pre>
<p>As you can see, the idea would be not adding any wrapper per-function manually to the file's functions. <strong>wrap_function_from_this_file</strong> would automagically introspect the file where it is executed and modify the functions, wrapping them somehow - in this case, with some code printing each function's name and execution time.</p>
<p>Just for the record, <strong>I'm not asking for any profiler</strong>. I'd like to know whether this is possible to do and how.</p>
| 1 |
2016-09-14T10:07:03Z
| 39,489,697 |
<p>Here's the solution I was looking for:</p>
<pre><code>import inspect
import time
import random
import sys
random.seed(1)
def foo1():
print("Foo0 processing")
def foo2():
print("Foo2 processing")
def foo3():
print("Foo3 processing")
def wrap_functions_from_this_file():
def wrapper(f):
def _inner(*args, **kwargs):
start = time.time()
print("{0} started".format(f.__name__))
result = f(*args, **kwargs)
time.sleep(random.random())
end = time.time()
print('{0} finished, elapsed time= {1:.4f}s'.format(
f.__name__, end - start))
            return result
        return _inner
for o in inspect.getmembers(sys.modules[__name__], inspect.isfunction):
globals()[o[0]] = wrapper(o[1])
wrap_functions_from_this_file()
for f in [foo1, foo2, foo3]:
f()
print('-' * 80)
</code></pre>
| 0 |
2016-09-14T11:49:19Z
|
[
"python",
"python-2.7",
"python-3.x",
"introspection"
] |
How to get the defining class from a method body?
| 39,487,829 |
<p>I have two classes:</p>
<pre><code>class A:
def foo(self):
print(<get the defining class>)
class B(A):
pass
</code></pre>
<p>Now, I need to replace <code><get the defining class></code> by a code such that this:</p>
<pre><code>a = A()
b = B()
a.foo()
b.foo()
</code></pre>
<p>produces this (i.e. this is the expected behaviour):</p>
<pre><code>A
A
</code></pre>
<p>I tried <code>self.__class__.__name__</code> but that obviously produces <code>B</code> for the last call as the <code>self</code> is, in fact, of class <code>B</code>.</p>
<p>So the ultimate question is: if I'm in a method body (which is not a class method), how can I get the name of the class the method is defined in?</p>
| 0 |
2016-09-14T10:10:08Z
| 39,488,947 |
<p>The simplest way to do this is by using the functions qualified name:</p>
<pre><code>class A:
def foo(self):
print(self.foo.__qualname__[0])
class B(A):
pass
</code></pre>
<p>The qualified name consists of the class defined and the function name in <code>cls_name.func_name</code> form. <code>__qualname__[0]</code> suits you here because the class name consists of a single character; it's better of course to split on the dot and return the first element <code>self.foo.__qualname__.split('.')[0]</code>.</p>
<p>For both, the result is:</p>
<pre><code>>>> a = A()
>>> b = B()
>>> a.foo()
A
>>> b.foo()
A
</code></pre>
<p>A more robust approach is climbing the <code>__mro__</code> and looking inside the <code>__dict__</code>s of every class for a function <code>type(self).foo</code>:</p>
<pre><code>def foo(self):
c = [cls.__name__ for cls in type(self).__mro__ if getattr(type(self), 'foo', None) in cls.__dict__.values()]
print(c[0])
</code></pre>
<p>This is a bit more complex but yields the same result.</p>
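<p>A small self-contained check (not from the original answer) that the <code>split('.')</code> variant also works for class names longer than one character — though note it would break for nested classes, whose <code>__qualname__</code> has additional dotted components:</p>

```python
class Animal:
    def speak(self):
        # __qualname__ is 'Animal.speak'; the part before the dot is
        # the class in which the method body was defined
        return self.speak.__qualname__.split('.')[0]

class Dog(Animal):
    pass

print(Animal().speak())  # Animal
print(Dog().speak())     # Animal
```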
| 2 |
2016-09-14T11:09:27Z
|
[
"python",
"class",
"python-3.x",
"methods"
] |
How to get the defining class from a method body?
| 39,487,829 |
<p>I have two classes:</p>
<pre><code>class A:
def foo(self):
print(<get the defining class>)
class B(A):
pass
</code></pre>
<p>Now, I need to replace <code><get the defining class></code> by a code such that this:</p>
<pre><code>a = A()
b = B()
a.foo()
b.foo()
</code></pre>
<p>produces this (i.e. this is the expected behaviour):</p>
<pre><code>A
A
</code></pre>
<p>I tried <code>self.__class__.__name__</code> but that obviously produces <code>B</code> for the last call as the <code>self</code> is, in fact, of class <code>B</code>.</p>
<p>So the ultimate question is: if I'm in a method body (which is not a class method), how can I get the name of the class the method is defined in?</p>
| 0 |
2016-09-14T10:10:08Z
| 39,489,750 |
<p>I'm assuming that you don't want to hard-code this logic into each function, since you could do so trivially in your example code.</p>
<p>If the <code>print</code> statement is always the first statement of the methods for which you want this behaviour, one possible solution would be to use a decorator.</p>
<pre><code>def print_defining_class(fn):
calling_class = fn.__qualname__.split('.')[0]
def decorated(*args, **kwargs):
print(calling_class)
return fn(*args, **kwargs)
return decorated
class A:
@print_defining_class
def method(self): pass
class B(A): pass
A().method() # A
B().method() # A
</code></pre>
| -1 |
2016-09-14T11:51:27Z
|
[
"python",
"class",
"python-3.x",
"methods"
] |
Google Cloud Platform - Datastore - Python ndb API
| 39,487,831 |
<p>I recently started working with the Google Cloud Platform, more precisely, with the ndb-Datastore-API. I tried to use following tutorial (<a href="https://github.com/GoogleCloudPlatform/appengine-guestbook-python.git" rel="nofollow">https://github.com/GoogleCloudPlatform/appengine-guestbook-python.git</a>) to get used to the API. </p>
<p>Unfortunately, I cannot figure out how to import the third party library tweepy into tweet.py. Google Cloud does not support tweepy, so I had to include the library manually in a folder /lib. But how do I now import the installed tweepy (pip install -t lib tweepy)? </p>
<p>I basically just try to put an Entity in the Google Datastore but I cannot figure out what I did wrong. </p>
<p>tweet.py: </p>
<pre><code> # [START imports]
from __future__ import absolute_import, print_function
import os
import urllib
from time import *
import jinja2
import webapp2
from google.appengine.api import users
from google.appengine.ext import ndb
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
# [END imports]
# [START globalvar]
# Go to http://apps.twitter.com and create an app.
# The consumer key and secret will be generated for you after
consumer_key="KEY"
consumer_secret="SECRET"
# After the step above, you will be redirected to your app's page.
# Create an access token under the the "Your access token" section
access_token="TOKEN"
access_token_secret="SECRET"
# [END globalvar]
USERNAME = "@Seeed"
def getDate():
# local Time
lt = localtime()
# get Date
year, month, day = lt[0:3]
date = "%02i.%02i.%04i" % (day,month,year)
return date
# [START tweet_count_entity]
class TweetCount(ndb.Model):
"""A main model for representing an individual TweetCount entry."""
date = ndb.DateProperty(auto_now_add=True)
tweets = ndb.IntegerProperty(indexed=False)
user_name = ndb.StringProperty(indexed=False)
# [END tweet_count_entity]
# [START tweet_counter]
class TweetCounter(webapp2.RequestHandler):
"""
# Create a key for the Entity
def tweetCount_key(date):
date = getDate()
return ndb.Key('TweetCount', date)"""
# Get TweetCount for actor "user_name"
def getTweetCount(self, user_name):
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
user = api.get_user(user_name)
return user.followers_count
def get(self):
count = self.getTweetCount(USERNAME)
tweet_count_user_name = USERNAME
tweet_count_tweets = count
tweet_count = TweetCount(tweets=tweet_count_tweets, user_name=tweet_count_user_name)
tweet_count.put()
self.response.headers['Content-Type'] = 'text/plain'
self.response.write("" + USERNAME + " TweetCount: " + str(count))
# [END tweet_counter]
# [START app]
app = webapp2.WSGIApplication([
('/', TweetCounter),
], debug=True)
# [END app]
</code></pre>
<p>app.yaml:</p>
<pre><code>runtime: python27
api_version: 1
threadsafe: true
# [START handlers]
handlers:
- url: /.*
script: tweet.app
# [END handlers]
# [START libraries]
libraries:
- name: webapp2
version: latest
- name: jinja2
version: latest
# [END libraries]
</code></pre>
<p>index.yaml:</p>
<pre><code>indexes:
- kind: TweetCount
properties:
- name: date
- name: tweets
- name: user_name
</code></pre>
<p>appengine_config.py: </p>
<pre><code>from google.appengine.ext import vendor
# Add any libraries installed in the "lib" folder.
vendor.add('lib')
</code></pre>
<p>Thank you very much for your help. I appreciate any help and an explanation about what I did wrong. </p>
| 2 |
2016-09-14T10:10:14Z
| 39,734,086 |
<p>Based on your <code>tweet.py</code> and <code>app.yaml</code>, I can see two things that may be the reason you cannot yet use <strong>tweepy</strong> in your Python application. For the sake of being thorough, I'll document the complete process.</p>
<h2>Acquire tweepy library</h2>
<p>As <strong>Tweepy</strong> is a <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27" rel="nofollow">third party library</a> and not a library that ships with Google App Engine, we have to package it with our Python application as you've already suggested with <a href="https://packaging.python.org/installing/#use-pip-for-installing" rel="nofollow"><code>pip install --target lib tweepy</code></a>. The directory specified after the <code>--target</code> option is where the third party library will be downloaded.</p>
<h2>Vendor in the third-party directory</h2>
<p>Now that we've downloaded the third party library, Python must be made to search this <code>lib</code> directory when attempting to import modules. This can be done as you've shown, using an <code>appengine_config.py</code> file in your application directory as indicated in <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">Installing a library</a> like so:</p>
<pre class="lang-python prettyprint-override"><code>from google.appengine.ext import vendor
vendor.add('lib')
</code></pre>
<h2>Import third-party module in the application</h2>
<p>Python will now look in the <code>lib</code> directory when attempting to import modules. Thus, we add our import statement where appropriate in our application code (e.g.: <code>tweet.py</code>):</p>
<pre class="lang-python prettyprint-override"><code>import tweepy
</code></pre>
<p><strong><em>This import is not currently found in your <code>tweet.py</code> example.</em></strong></p>
<h2>Ensure tweepy's dependencies are also imported</h2>
<p>Note that <strong>tweepy</strong> when imported will attempt to import the <strong>ssl</strong> module so it must be included in the <code>libraries</code> section of your <code>app.yaml</code> like so:</p>
<pre class="lang-yaml prettyprint-override"><code>libraries:
- name: ssl
version: latest
</code></pre>
<p><strong><em>This too, was not in your example <code>app.yaml</code>.</em></strong></p>
<p>With the above, one should be able to successfully deploy a Python GAE application which properly imports <strong>tweepy</strong>. Note that I've not actively tested any of <strong>tweepy</strong>'s features, but merely imported it successfully. Should you be seeing any errors with the above examples, I would suggest also including the following in your question: </p>
<ul>
<li>App Engine SDK version</li>
<li>Python version</li>
<li>pip version</li>
<li>Stack trace of the error you are receiving when deploying or serving a HTTP response</li>
</ul>
| 1 |
2016-09-27T20:50:40Z
|
[
"python",
"google-app-engine",
"google-cloud-platform",
"google-cloud-datastore"
] |
Django authentication and permissions
| 39,487,857 |
<p>I posted a question on SO regarding Django authentication and permissions even though it's already there. My question is ,</p>
<p>I have a backend ready with a lot of views, models and serializers from DRF. Now I want to apply authentication to my app and create RESTful apis that are consumed at the front-end. So the doubts that I have</p>
<ul>
<li>How does authentication work? Does Django create different model tables for each of its user?</li>
<li>If so, how do I retrieve data per user from Django?</li>
</ul>
<p>Now comes the second case</p>
<ul>
<li>If above two are true, then how do I use permission in DRF to serve only the data that is relevant to a particular user?</li>
</ul>
<p>I hope my questions are clear. If not, suggest me edits.
I'd also like some examples(if any) on how does this all happens.</p>
<p>Also, if you want to see the <a href="http://stackoverflow.com/questions/39484907/django-authentication-with-drf">original post</a>.</p>
| 0 |
2016-09-14T10:12:17Z
| 39,490,638 |
<p>Django uses table <code>auth_user</code> to save user and password</p>
<p>Django has an API to check <code>is_authenticated</code></p>
<hr>
<p>Django rest framework uses table <code>authtoken_token</code> </p>
<p>Django rest framework has an API <code>rest_framework.authtoken.views.obtain_auth_token</code> for the same purpose</p>
<p>In DRF, for permissions you can do something like below..</p>
<pre><code>class UserDetail(generics.RetrieveUpdateDestroyAPIView):
"""
User Detail API
"""
permission_classes = ("check_permission1", "check_permission2",) ### you can define your permissions using this
</code></pre>
| 0 |
2016-09-14T12:37:04Z
|
[
"python",
"django",
"django-rest-framework"
] |
Django authentication and permissions
| 39,487,857 |
<p>I posted a question on SO regarding Django authentication and permissions even though it's already there. My question is ,</p>
<p>I have a backend ready with a lot of views, models and serializers from DRF. Now I want to apply authentication to my app and create RESTful apis that are consumed at the front-end. So the doubts that I have</p>
<ul>
<li>How does authentication work? Does Django create different model tables for each of its user?</li>
<li>If so, how do I retrieve data per user from Django?</li>
</ul>
<p>Now comes the second case</p>
<ul>
<li>If above two are true, then how do I use permission in DRF to serve only the data that is relevant to a particular user?</li>
</ul>
<p>I hope my questions are clear. If not, suggest me edits.
I'd also like some examples(if any) on how does this all happens.</p>
<p>Also, if you want to see the <a href="http://stackoverflow.com/questions/39484907/django-authentication-with-drf">original post</a>.</p>
| 0 |
2016-09-14T10:12:17Z
| 39,492,849 |
<blockquote>
<p>How does authentication work? Does Django create different model tables for each of its user?</p>
</blockquote>
<p>No, Django does not create different model tables for each of its users. The users themselves are stored as entries in the <code>User</code> model. So to have data for each user, you have to add a foreign key in your data model to a particular user.</p>
<pre><code>class Post(models.Model):
author = models.ForeignKey(settings.AUTH_USER_MODEL)
text = models.TextField()
</code></pre>
<p>So once your user is logged in (authenticated), you simply filter the Post model according to the logged-in user.
In DRF you can do this filtering with permissions:</p>
<pre><code>class PostList(generics.ListCreateAPIView):
queryset = Post.objects.all()
serializer_class = PostSerializer
permission_classes = (IsOwner,)
class IsOwner(permissions.BasePermission):
"""
Custom permission to only allow owners of an object to see it.
"""
def has_object_permission(self, request, view, obj):
return obj.author == request.user
</code></pre>
<p>I hope this helps</p>
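<p>To see the permission check in isolation (plain-Python stand-ins for the request and the object — no DRF required, the names below are hypothetical):</p>

```python
from types import SimpleNamespace

class IsOwner:
    """Mirror of the DRF permission above, testable without the framework."""
    def has_object_permission(self, request, view, obj):
        return obj.author == request.user

post = SimpleNamespace(author='alice', text='hello')
perm = IsOwner()

print(perm.has_object_permission(SimpleNamespace(user='alice'), None, post))  # True
print(perm.has_object_permission(SimpleNamespace(user='bob'), None, post))    # False
```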
| 0 |
2016-09-14T14:21:14Z
|
[
"python",
"django",
"django-rest-framework"
] |
Interactive/Animated scatter plotting with matplotlib
| 39,487,901 |
<p>I want to animate the scatter plot based on the actual timestamp from the csv file (see below). I'm not so good with matplotlib and I know of the animation function and the ion()-function but I'm not sure how to implement it.
I read <a href="http://stackoverflow.com/questions/32268532/animate-a-scatterplot-with-pyplot">this</a> but it seemed very difficult to implement it in my way.
I have tried the code below, but each loop iteration opens a new window with the current data; I would like to have the animation in one window. Thanks in advance :):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
start_time = 86930.00
end_time = 86934.00
df = pd.read_csv('Data.csv', delimiter=',')
for timestamp in range(int(start_time), int(end_time), 1):
act_data = df.loc[df['timestamp'] == float(timestamp)]
X = act_data.x
Y = act_data.y
plt.scatter(X, Y)
plt.show()
</code></pre>
<p>Data.csv:</p>
<pre><code>timestamp,id,x,y
86930.00,1,1155.53,7155.05
86930.00,2,3495.08,8473.46
86931.00,1,3351.04,6402.27
86931.00,3,3510.59,8021.62
86931.00,2,2231.04,6221.27
86931.00,4,3710.59,8111.62
86932.00,2,3333.01,6221.27
86932.00,1,3532.59,3178.62
86933.00,3,1443.01,2323.27
86933.00,4,5332.59,1178.62
</code></pre>
<p>It would be cool if I could color the blobs by ID but not necessary :).</p>
| 0 |
2016-09-14T10:14:42Z
| 39,505,892 |
<p>The quickest way to animate on the same axis is to use interactive plots, <code>plt.ion</code></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
start_time = 86930.00
end_time = 86934.00
df = pd.read_csv('Data.csv', delimiter=',')
fig, ax = plt.subplots(1,1)
plt.ion()
plt.show()
for timestamp in range(int(start_time), int(end_time), 1):
act_data = df.loc[df['timestamp'] == float(timestamp)]
X = act_data.x
Y = act_data.y
ax.scatter(X, Y)
plt.pause(1.0)
</code></pre>
<p>Although, I suspect from your title that you want something interactive, in which case you can also use a <code>matplotlib</code> <a href="http://matplotlib.org/examples/widgets/slider_demo.html" rel="nofollow">slider widget</a>.
With your data,</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
start_time = 86930.00
end_time = 86934.00
df = pd.read_csv('Data.csv', delimiter=',')
fig, ax = plt.subplots(1,1)
plt.subplots_adjust(bottom=0.25)
sax = plt.axes([0.25, 0.1, 0.65, 0.03])
slide = Slider(sax, 'time', start_time, end_time, valinit=start_time)
#Initial plot
act_data = df.loc[df['timestamp'] == float(int(start_time))]
s, = ax.plot(act_data.x, act_data.y, 'o')
def update(timestamp):
act_data = df.loc[df['timestamp'] == float(int(timestamp))]
X = act_data.x
Y = act_data.y
#Update data based on slider
s.set_xdata(X)
s.set_ydata(Y)
#Reset axis limits
ax.set_xlim([X.min()*0.9,X.max()*1.1])
ax.set_ylim([Y.min()*0.9,Y.max()*1.1])
fig.canvas.draw()
slide.on_changed(update)
plt.show()
</code></pre>
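<p>On the side question of coloring the blobs by <code>id</code>: one library-agnostic way is to assign each id a stable color from a palette. The helper below is plain Python; the commented <code>ax.scatter</code> call shows how it would plug into the loops above (hypothetical usage):</p>

```python
def color_for_ids(ids, palette=('C0', 'C1', 'C2', 'C3', 'C4')):
    """Map each distinct id to a stable color, cycling through the palette."""
    mapping = {}
    out = []
    for i in ids:
        if i not in mapping:
            mapping[i] = palette[len(mapping) % len(palette)]
        out.append(mapping[i])
    return out

# hypothetical usage inside the update loop:
#   ax.scatter(X, Y, c=color_for_ids(act_data.id))
print(color_for_ids([1, 2, 1, 3]))  # ['C0', 'C1', 'C0', 'C2']
```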
| 1 |
2016-09-15T07:59:51Z
|
[
"python",
"pandas",
"matplotlib"
] |
Django database objects not displaying
| 39,487,918 |
<p>I keep getting <code><QuerySet [<Member: - >]></code> whenever I try to display all objects within a model class through <code>Member.objects.all()</code> in the shell.</p>
<p>This is my code, note that the Book class is displaying them correctly when I call all objects, but the Member class does not:</p>
<pre><code>class Book(models.Model):
    Titel = models.CharField(max_length=100)
    Author = models.CharField(max_length=100)
    Genre = models.CharField(max_length=100)
    ISBN13 = models.CharField(max_length=13)

    def __str__(self):
        return self.Author + ' - ' + self.Titel


class Member(models.Model):
    Name = models.CharField(max_length=15)
    Surname = models.CharField(max_length=15)
    Mailadres = models.CharField(max_length=15)
    Birth = models.DateField(auto_now_add=True, auto_now=False)
    Leerlingnummer = models.CharField(max_length=15)

    def __str__(self):
        return self.Name + ' - ' + self.Surname
</code></pre>
<p>As requested the way I add members:</p>
<pre><code>from dashboard.models import Member
a = Member()
Name = 'xxx'
Surname = 'xxx'
Mailadres = 'xxxxxxx'
Birth = 'xx-xx-xxxx'
Leerlingnummer = 'xxxxxxxxx'
a.save()
</code></pre>
| 0 |
2016-09-14T10:16:03Z
| 39,488,224 |
<p>You are just creating some variables. You have to assign to the attributes of the object you created and actually save it.</p>
<pre><code>from dashboard.models import Member
a = Member()
a.Name = 'xxx'
a.Surname = 'xxx'
a.Mailadres = 'xxxxxxx'
a.Birth = 'xx-xx-xxxx'
a.Leerlingnummer = 'xxxxxxxxx'
a.save()
</code></pre>
<p>Alternatively, you can use <code>Member.objects.create</code>:</p>
<pre><code>from dashboard.models import Member
a = Member.objects.create(
    Name='xxx',
    Surname='xxx',
    Mailadres='xxxxxxx',
    Birth='xx-xx-xxxx',
    Leerlingnummer='xxxxxxxxx')
</code></pre>
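<p>A Django-free sketch of the underlying mistake (my addition, using a plain class rather than a Django model): assigning to a bare name creates an ordinary variable and never touches the object, so the instance stays empty.</p>

```python
# Minimal sketch: bare-name assignment vs. attribute assignment.
class Member:
    pass

a = Member()
Name = 'xxx'                 # just a variable called Name
print(hasattr(a, 'Name'))    # False: the object was never populated

a.Name = 'xxx'               # attribute assignment on the instance
print(hasattr(a, 'Name'))    # True
```

This is why the records showed up as empty <code>Member: - </code> entries.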
| 0 |
2016-09-14T10:32:53Z
|
[
"python",
"django",
"database",
"shell",
"object"
] |
Which errors to catch when a module doesn't document all of its errors? (plistlib)
| 39,488,045 |
<p>TL;DR: What kind of errors to catch when a module doesn't document all of its errors?</p>
<p><strong>Scenario</strong>:</p>
<p>I'm trying to read a series of <a href="https://en.wikipedia.org/wiki/Property_list" rel="nofollow">property lists</a> using <a href="https://docs.python.org/2/library/plistlib.html" rel="nofollow"><code>plistlib</code></a>. I have no control over the files. If a file couldn't be read, I want to skip it.</p>
<p>My question is what kind of errors should I catch? </p>
<p><code>plistlib.readPlist</code> documents <code>IOError</code> and <code>xml.parsers.expat.ExpatError</code>.</p>
<p>But I also could produce at least <code>IndexError</code> and <code>AttributeError</code> just by malforming my input files. These are not documented in <code>plistlib</code>. Who knows what kind of additional errors other random input files could produce? I don't want my program to fail because of that.</p>
<p>So my question is. What should I catch? My understanding is that catching any error with a generic <code>except</code> is not preferred since it masks other errors such as <code>KeyboardInterrupt</code>. Since this is a command-line application I don't want to ignore such events.</p>
<p><strong>Code</strong>:</p>
<pre><code>import plistlib
import sys

def main():
    paths = []  # from sys.argv
    for path in paths:
        try:
            plist = plistlib.readPlist(path)
        except:  # What to catch here?
            sys.stderr.write('Couldnt read plist. Ignoring.')
            continue
        process(plist)
</code></pre>
<p>Python 2.7, OS X.</p>
| 2 |
2016-09-14T10:22:17Z
| 39,488,221 |
<p>If you can't do any better, then <code>except Exception:</code> avoids catching <code>KeyboardInterrupt</code> or <code>SystemExit</code>.</p>
<p>It does however catch <code>StopIteration</code> (note that <code>GeneratorExit</code> has derived from <code>BaseException</code> since Python 2.6, so it is not caught either). Possibly you can safely move down to catching <code>StandardError</code> (which excludes <code>StopIteration</code>), since it would normally be considered wrong for any code other than an iterator to let <code>StopIteration</code> escape. But who knows, maybe there's some input that causes the library to call <code>next</code> on an exhausted iterator and not catch that.</p>
<p><code>StandardError</code> still catches <code>SyntaxError</code> and <code>TypeError</code>, which are often indicators of programmer error rather than bad input. But there's no single class that catches both <code>LookupError</code> and <code>MemoryError</code> (both of which would be appropriate to catch here) and not <code>SyntaxError</code>. So that's about as far as you can go without either documentation or extensive testing to determine what the code really throws.</p>
<p>Note that <code>MemoryError</code> isn't enough to know whether the error is transient (it would work on another day or another machine) or permanent (the input file is so large that no conceivable machine will be able to handle it).</p>
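<p>To make the <code>BaseException</code>/<code>Exception</code> split concrete (the hierarchy is the same on 2.7 and 3 for these classes), here is a small sketch; the <code>read_safely</code> wrapper and its parser argument are illustrative, not part of <code>plistlib</code>:</p>

```python
# KeyboardInterrupt / SystemExit derive from BaseException, not Exception,
# so `except Exception` will not mask Ctrl-C or sys.exit():
assert not issubclass(KeyboardInterrupt, Exception)
assert not issubclass(SystemExit, Exception)
# ...but StopIteration is still swallowed by `except Exception`:
assert issubclass(StopIteration, Exception)

def read_safely(parse, source):
    # hypothetical wrapper around any parser callable
    try:
        return parse(source)
    except Exception:
        return None  # skip unreadable input

print(read_safely(float, "3.5"))   # 3.5
print(read_safely(float, "junk"))  # None
```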
| 2 |
2016-09-14T10:32:36Z
|
[
"python",
"python-2.7",
"error-handling",
"try-catch",
"plist"
] |
Odoo 9 report Task error
| 39,488,049 |
<p>I want create report from Project > Task.</p>
<p>When try load data from table <strong>account_analytic_line</strong> </p>
<p><a href="http://i.stack.imgur.com/9CBL1.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/9CBL1.jpg" alt="enter image description here"></a></p>
<p>get this error</p>
<p>"'project.task' object has no attribute 'timesheets_ids'" while evaluating
'doc.timesheets_ids'</p>
<pre><code> <tbody>
<t t-foreach="doc.timesheets_ids" t-as="l">
<tr>
<td>
<span t-field="l.name"/>
</td>
<td class="text-right">
<span t-field="l.name"/>
</td>
<td>
<span t-field="l.name"/>
</td>
<td class="text-right">
<span t-field="l.name"/>
</td>
</tr>
</t>
</tbody>
<template id="report_task">
<t t-call="report.html_container">
<t t-foreach="docs" t-as="doc">
<t t-call="project.report_task_document"/>
</t>
</t>
</template>
</code></pre>
<p>Any solution?</p>
| 0 |
2016-09-14T10:22:44Z
| 39,702,943 |
<p>It's <code>timesheet_ids</code>, not <code>timesheets_ids</code>; just a spelling mistake.</p>
| 0 |
2016-09-26T12:32:52Z
|
[
"python",
"openerp",
"odoo-9"
] |
Python Class instantiation: What does it mean when a class is called without accompanying arguments?
| 39,488,061 |
<p>I am trying to understand some code that I have found on github. Essentially there is a class called "HistoricCSVDataHandler" that has been imported from another module. In the main method this class is passed as a parameter to another class "backtest". </p>
<p>How can a class name that does not represent an instantiated variable be called without raising a NameError?</p>
<p>Or simply put :</p>
<p>Why/How is the class being called as:</p>
<pre><code>Backtest(HistoricCSVDataHandler)
</code></pre>
<p>Rather than:</p>
<pre><code>CSV_Handler = HistoricCSVDataHandler(foo,bar,etc)
Backtest(CSV_Handler)
</code></pre>
<p><a href="https://github.com/djunh1/event-driven-backtesting/blob/master/snp_forecast.py" rel="nofollow">See line 110 for code.</a></p>
<p>Regards</p>
| 2 |
2016-09-14T10:23:45Z
| 39,488,142 |
<p>A link to the code would be useful, but in Python you can pass uninstantiated classes around like this just fine, i.e.:</p>
<pre><code>def f(uninst_class, arg):
    return uninst_class(arg)
</code></pre>
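<p>Calling it with different (uninstantiated) classes then instantiates whichever one you passed; for example (restating <code>f</code> so the snippet is self-contained):</p>

```python
def f(uninst_class, arg):
    # f receives the class object itself and instantiates it here
    return uninst_class(arg)

print(f(str, 42))    # '42'
print(f(int, '42'))  # 42
```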
| 0 |
2016-09-14T10:28:14Z
|
[
"python",
"class"
] |
Python Class instantiation: What does it mean when a class is called without accompanying arguments?
| 39,488,061 |
<p>I am trying to understand some code that I have found on github. Essentially there is a class called "HistoricCSVDataHandler" that has been imported from another module. In the main method this class is passed as a parameter to another class "backtest". </p>
<p>How can a class name that does not represent an instantiated variable be called without raising a NameError?</p>
<p>Or simply put :</p>
<p>Why/How is the class being called as:</p>
<pre><code>Backtest(HistoricCSVDataHandler)
</code></pre>
<p>Rather than:</p>
<pre><code>CSV_Handler = HistoricCSVDataHandler(foo,bar,etc)
Backtest(CSV_Handler)
</code></pre>
<p><a href="https://github.com/djunh1/event-driven-backtesting/blob/master/snp_forecast.py" rel="nofollow">See line 110 for code.</a></p>
<p>Regards</p>
| 2 |
2016-09-14T10:23:45Z
| 39,488,194 |
<p>Think of Python variables as labels: you can label a class (not an instance); it's like giving an alias to that class.</p>
<pre><code>>>> class A(): pass
>>> A
<class __builtin__.A at 0x000001BE82279D08>
>>> b = A
>>> b
<class __builtin__.A at 0x000001BE82279D08>
>>> a = b()
>>> a
<__builtin__.A instance at 0x000001BE82269F48>
>>> A()
<__builtin__.A instance at 0x000001BE82269708>
</code></pre>
<p>It's the same with function parameters: you can pass a class (not an instance) to a function for later use.</p>
<pre><code>>>> def instantiator(c):
... return c()
...
>>> c = instantiator(b)
>>> c
<__builtin__.A instance at 0x000001BE8226A148>
>>> c = instantiator(A)
>>> c
<__builtin__.A instance at 0x000001BE8226A288>
</code></pre>
| 0 |
2016-09-14T10:31:26Z
|
[
"python",
"class"
] |
Python Class instantiation: What does it mean when a class is called without accompanying arguments?
| 39,488,061 |
<p>I am trying to understand some code that I have found on github. Essentially there is a class called "HistoricCSVDataHandler" that has been imported from another module. In the main method this class is passed as a parameter to another class "backtest". </p>
<p>How can a class name that does not represent an instantiated variable be called without raising a NameError?</p>
<p>Or simply put :</p>
<p>Why/How is the class being called as:</p>
<pre><code>Backtest(HistoricCSVDataHandler)
</code></pre>
<p>Rather than:</p>
<pre><code>CSV_Handler = HistoricCSVDataHandler(foo,bar,etc)
Backtest(CSV_Handler)
</code></pre>
<p><a href="https://github.com/djunh1/event-driven-backtesting/blob/master/snp_forecast.py" rel="nofollow">See line 110 for code.</a></p>
<p>Regards</p>
| 2 |
2016-09-14T10:23:45Z
| 39,488,252 |
<p>This is a technique called <a href="https://en.wikipedia.org/wiki/Dependency_injection" rel="nofollow"><em>dependency injection</em></a>. A class is an object just like any other, so there's nothing wrong with passing it as an argument to a function and then calling it inside the function.</p>
<p>Suppose I want to read a string and get either an <code>int</code> or a <code>float</code> back. I could write a function that takes the desired class as an argument:</p>
<pre><code>def convert(s, typ):
    return typ(s)
</code></pre>
<p>Calling this gives several possibilities:</p>
<pre><code>>>> convert(3, str)
'3'
>>> convert('3', int)
3
>>> convert('3', float)
3.0
</code></pre>
<p>So the <code>Backtest</code> function in your code is most likely creating an instance of whichever class you pass - <em>i.e.</em> it is calling <code>HistoricCSVDataHandler</code> internally to create an instance of that class.</p>
<p>We normally think of Python objects as instances of some class. Classes are similarly objects, and in fact are instances of their so-called <em>metaclass</em>, which by default will be <code>type</code> for classes inheriting from object.</p>
<pre><code>>>> class MyClass(object): pass
...
>>> type(MyClass)
<type 'type'>
</code></pre>
| 2 |
2016-09-14T10:34:26Z
|
[
"python",
"class"
] |
Asterisk AMI incoming calls returns no calls
| 39,488,128 |
<p>I have the following function with the aim is to get incoming calls from my Python web interface from the Asterisk server.</p>
<pre><code>def fetch_events(event, manager):
    with app.app_context():
        if event.name == 'CoreShowChannel':
            id = event.message['accountcode']
            data = {
                'user_id': id,
                'caller_id': event.message['CallerIDnum'],
                'channel': event.message['Channel'],
                'duration': event.message['Duration'],
                'context': event.message['Context'],
                'extension': event.message['Extension'],
                'line': event.message['ConnectedLineNum'],
                #'channel_state': event.message['ChannelState'],
                'channel_state': event.message['ChannelStateDesc'],
            }
            user = System().getUserById(id)
            if user:
                profile = {
                    'firstname': user['firstname'],
                    'lastname': user['lastname'],
                    'email': user['email']
                }
            else:
                profile = {
                    'first_name': "No firstname",
                    'last_name': "No lastname"
                }
            data.update(profile)
            g.channels.append(data)
        if event.name == 'CoreShowChannelsComplete':
            g.complete = True
        if not event.name:
            data = {
                "connectivity": "Not connected",
                "event-name": "No event name"
            }
            g.channels.append(data)
            g.complete = True

@app.route('/incoming-calls')
def incoming_calls():
    # I have already logged in and connected
    g.channels = []
    g.complete = False
    manager.register_event('*', fetch_events)
    res = manager.send_action({'Action': 'CoreShowChannels'})
    try:
        while not g.complete:
            time.sleep(0.5)
    finally:
        manager.close()
    return json.dumps(g.channels)
</code></pre>
<p>But when I try to get incoming-call events after registering the <code>fetch_events</code> handler, I get an empty array. It seems there is something wrong with the handler method, but I can't find the problem.</p>
| -1 |
2016-09-14T10:27:36Z
| 39,496,453 |
<p>You should listen for the 'Newexten' event and extract 'context' from it, then check whether the call comes from an incoming context; if it does, use the other variables from that event. Don't use the 'CoreShowChannels' action, because it only lists the currently active channels.</p>
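<p>A minimal sketch of such a handler, using the same event attributes the question's code already relies on (<code>event.name</code>, <code>event.message</code>); the context name <code>'from-trunk'</code> and the header keys are assumptions that depend on your dialplan and Asterisk version, and a stub event stands in for a live AMI connection:</p>

```python
def on_new_exten(event, manager):
    # React only to calls entering the (assumed) incoming-calls context
    if event.name != 'Newexten':
        return None
    if event.message.get('Context') != 'from-trunk':
        return None
    return {
        'caller': event.message.get('CallerIDNum'),
        'channel': event.message.get('Channel'),
    }

class FakeEvent:
    # Stub so the sketch runs without a live Asterisk server
    name = 'Newexten'
    message = {'Context': 'from-trunk',
               'CallerIDNum': '060444333',
               'Channel': 'SIP/trunk-0000ae7d'}

print(on_new_exten(FakeEvent(), None))
```

With a real connection you would register it the same way the question already does, e.g. <code>manager.register_event('Newexten', on_new_exten)</code>.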
| 0 |
2016-09-14T17:32:45Z
|
[
"python",
"asterisk"
] |
Asterisk AMI incoming calls returns no calls
| 39,488,128 |
<p>I have the following function with the aim is to get incoming calls from my Python web interface from the Asterisk server.</p>
<pre><code>def fetch_events(event, manager):
    with app.app_context():
        if event.name == 'CoreShowChannel':
            id = event.message['accountcode']
            data = {
                'user_id': id,
                'caller_id': event.message['CallerIDnum'],
                'channel': event.message['Channel'],
                'duration': event.message['Duration'],
                'context': event.message['Context'],
                'extension': event.message['Extension'],
                'line': event.message['ConnectedLineNum'],
                #'channel_state': event.message['ChannelState'],
                'channel_state': event.message['ChannelStateDesc'],
            }
            user = System().getUserById(id)
            if user:
                profile = {
                    'firstname': user['firstname'],
                    'lastname': user['lastname'],
                    'email': user['email']
                }
            else:
                profile = {
                    'first_name': "No firstname",
                    'last_name': "No lastname"
                }
            data.update(profile)
            g.channels.append(data)
        if event.name == 'CoreShowChannelsComplete':
            g.complete = True
        if not event.name:
            data = {
                "connectivity": "Not connected",
                "event-name": "No event name"
            }
            g.channels.append(data)
            g.complete = True

@app.route('/incoming-calls')
def incoming_calls():
    # I have already logged in and connected
    g.channels = []
    g.complete = False
    manager.register_event('*', fetch_events)
    res = manager.send_action({'Action': 'CoreShowChannels'})
    try:
        while not g.complete:
            time.sleep(0.5)
    finally:
        manager.close()
    return json.dumps(g.channels)
</code></pre>
<p>But when I try to get incoming-call events after registering the <code>fetch_events</code> handler, I get an empty array. It seems there is something wrong with the handler method, but I can't find the problem.</p>
| -1 |
2016-09-14T10:27:36Z
| 39,504,488 |
<p>When someone calls Asterisk (for example, an incoming call to the trunk number), Asterisk fires a 'Newexten' event. The data structure of that event is:</p>
<pre><code>{ event: 'Newexten',
privilege: 'call,all',
channel: 'SIP/NameOfyourSIPprovider-0000ae7d',//channel that will be bridged
channelstate: '0',
channelstatedesc: 'Down',//the channel is currently down because no one is answered
calleridnum: '060444333',// num of calling party
calleridname: '<unknown>',
connectedlinenum: '+34312345678',//your trunk num
connectedlinename: '<unknown>',
language: 'en',
accountcode: '',
context: 'from-trunk',//context(extensions.conf) for incoming calls
exten: '060444333',//num of calling party
priority: '1',
uniqueid: '1471265139.357602',//unique id of channel
linkedid: '1471265138.357596',//linked id of channel that will be bridged
extension: '060444333',//num of calling party
application: 'AppDial',
  appdata: '(Outgoing Line)' }
</code></pre>
| 0 |
2016-09-15T06:37:10Z
|
[
"python",
"asterisk"
] |
Import CSV to pandas with two delimiters
| 39,488,163 |
<p>I have a CSV with two delimiters (<code>;</code>) and (<code>,</code>) it looks like this:</p>
<pre><code>vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375
</code></pre>
<p>I want to import it into a pandas data frame, with the (<code>;</code>) acting as a column separator and (<code>,</code>) as a separator for a <code>list</code> or <code>array</code> using <code>float</code> as data type. So far I am using this method, but I am sure there is something easier out there.</p>
<pre><code>aa = 0
csv_import = pd.read_csv(folder + FileName, ';')
for col in csv_import.columns:
    aa = aa + 1
    if type(csv_import[col][0]) == str and aa > 3:
        # string to list of strings
        csv_import[col] = csv_import[col].apply(lambda x: x.split(','))
        # make the list of strings into a list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x])
</code></pre>
| 2 |
2016-09-14T10:29:23Z
| 39,488,190 |
<p>first read CSV using <code>;</code> as a delimiter:</p>
<pre><code>df = pd.read_csv(filename, sep=';')
</code></pre>
<p><strong>UPDATE:</strong></p>
<pre><code>In [67]: num_cols = df.columns.difference(['vin','vorgangid','eventkm'])
In [68]: num_cols
Out[68]: Index(['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value'], dtype='object')
In [69]: df[num_cols] = (df[num_cols].apply(lambda x: x.str.split(',', expand=True)
....: .stack()
....: .astype(float)
....: .unstack()
....: .values.tolist())
....: )
In [70]: df
Out[70]:
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
In [71]: type(df.ix[0, 'D_8_lamsoni_w_value'][0])
Out[71]: float
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>Now we can split numbers into lists in the "number" columns:</p>
<pre><code>In [20]: df[['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value']] = \
df[['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value']].apply(lambda x: x.str.split(','))
In [21]: df
Out[21]:
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
</code></pre>
| 3 |
2016-09-14T10:31:10Z
|
[
"python",
"csv",
"pandas",
"delimiter",
"csv-import"
] |
Import CSV to pandas with two delimiters
| 39,488,163 |
<p>I have a CSV with two delimiters (<code>;</code>) and (<code>,</code>) it looks like this:</p>
<pre><code>vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375
</code></pre>
<p>I want to import it into a pandas data frame, with the (<code>;</code>) acting as a column separator and (<code>,</code>) as a separator for a <code>list</code> or <code>array</code> using <code>float</code> as data type. So far I am using this method, but I am sure there is something easier out there.</p>
<pre><code>aa = 0
csv_import = pd.read_csv(folder + FileName, ';')
for col in csv_import.columns:
    aa = aa + 1
    if type(csv_import[col][0]) == str and aa > 3:
        # string to list of strings
        csv_import[col] = csv_import[col].apply(lambda x: x.split(','))
        # make the list of strings into a list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x])
</code></pre>
| 2 |
2016-09-14T10:29:23Z
| 39,488,230 |
<p>You can use parameter <code>converters</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> and define custom function for spliting:</p>
<pre><code>def f(x):
    return [float(i) for i in x.split(',')]
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp),
                 sep=";",
                 converters={'D_8_lamsoni_w_time': f, 'D_8_lamsoni_w_value': f})
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
</code></pre>
<p>Another solution working with <code>NaN</code> in <code>4.</code> and <code>5.</code> columns:</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> with separators <code>;</code>, then apply <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a> to <code>4.</code> and <code>5.</code> column selected by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a> and convert each value in <code>list</code> to <code>float</code>:</p>
<pre><code>import pandas as pd
import numpy as np
import io
temp=u"""vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep=";")
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 -1000.0,-980.0 7.9921875,11.984375
1 V346670 329781064 13 -960.0,-940.0 7.9921875,11.984375
#split 4.th and 5th column and convert to numpy array
df.iloc[:,3] = df.iloc[:,3].str.split(',').apply(lambda x: [float(i) for i in x])
df.iloc[:,4] = df.iloc[:,4].str.split(',').apply(lambda x: [float(i) for i in x])
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
</code></pre>
<p>If need <code>numpy arrays</code> instead <code>lists</code>:</p>
<pre><code>#split 4.th and 5th column and convert to numpy array
df.iloc[:,3] = df.iloc[:,3].str.split(',').apply(lambda x: np.array([float(i) for i in x]))
df.iloc[:,4] = df.iloc[:,4].str.split(',').apply(lambda x: np.array([float(i) for i in x]))
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
print (type(df.iloc[0,3]))
<class 'numpy.ndarray'>
</code></pre>
<hr>
<p>I tried to improve your solution:</p>
<pre><code>a = 0
csv_import = pd.read_csv(folder + FileName, ';')
for col in csv_import.columns:
    a += 1
    if type(csv_import.ix[0, col]) == str and a > 3:
        # string to list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x.split(',')])
</code></pre>
| 1 |
2016-09-14T10:33:04Z
|
[
"python",
"csv",
"pandas",
"delimiter",
"csv-import"
] |
Import CSV to pandas with two delimiters
| 39,488,163 |
<p>I have a CSV with two delimiters (<code>;</code>) and (<code>,</code>) it looks like this:</p>
<pre><code>vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375
</code></pre>
<p>I want to import it into a pandas data frame, with the (<code>;</code>) acting as a column separator and (<code>,</code>) as a separator for a <code>list</code> or <code>array</code> using <code>float</code> as data type. So far I am using this method, but I am sure there is something easier out there.</p>
<pre><code>aa = 0
csv_import = pd.read_csv(folder + FileName, ';')
for col in csv_import.columns:
    aa = aa + 1
    if type(csv_import[col][0]) == str and aa > 3:
        # string to list of strings
        csv_import[col] = csv_import[col].apply(lambda x: x.split(','))
        # make the list of strings into a list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x])
</code></pre>
| 2 |
2016-09-14T10:29:23Z
| 39,489,834 |
<p>Aside from the other fine answers here, which are more pandas-specific, it should be noted that Python itself is pretty powerful when it comes to string processing. You can just place the result of replacing <code>';'</code> with <code>','</code> in a <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow"><code>StringIO</code></a> object, and work normally from there:</p>
<pre><code>In [8]: import pandas as pd
In [9]: from cStringIO import StringIO
In [10]: pd.read_csv(StringIO(''.join(l.replace(';', ',') for l in open('stuff.csv'))))
Out[10]:
vin vorgangid eventkm D_8_lamsoni_w_time \
V345578 295234545 13 -1000.0 -980.0 7.992188
V346670 329781064 13 -960.0 -940.0 7.992188
D_8_lamsoni_w_value
V345578 295234545 11.984375
V346670 329781064 11.984375
</code></pre>
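<p>A note for Python 3 readers (my addition): <code>cStringIO</code> no longer exists there, but the same replace-and-wrap trick works with <code>io.StringIO</code>. Shown here on a literal string rather than a file, so the snippet is self-contained:</p>

```python
import io

raw = "a;b\n1;2\n"
# splitlines(True) keeps the newline on each line, like file iteration does
buf = io.StringIO(''.join(line.replace(';', ',')
                          for line in raw.splitlines(True)))
print(buf.getvalue())  # 'a,b\n1,2\n'
```

The resulting buffer can be passed straight to <code>pd.read_csv</code> just like the <code>cStringIO</code> object above.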
| 2 |
2016-09-14T11:55:07Z
|
[
"python",
"csv",
"pandas",
"delimiter",
"csv-import"
] |
Dumping pickle is not working with rb+
| 39,488,237 |
<p>Why is the pickle file not modified? But when I uncomment the line, it works.</p>
<pre><code>with open(PATH, "rb+") as fp:
    mocks_pickle = pickle.load(fp)
    mocks_pickle['aa'] = '123'
    # pickle.dump(mocks_pickle, open(PATH, 'wb'))
    pickle.dump(mocks_pickle, fp)
</code></pre>
| 0 |
2016-09-14T10:33:47Z
| 39,488,510 |
<p>You need to seek to the beginning of the file with <code>fp.seek(0)</code> before dumping the object.</p>
<p>If you don't seek, you append the new pickle to the end of the file, and when you <code>pickle.load</code> from the file you only get the first object in it.</p>
<pre><code>with open(PATH, "rb+") as fp:
    mocks_pickle = pickle.load(fp)
    mocks_pickle['aa'] = '123'
    fp.seek(0)
    pickle.dump(mocks_pickle, fp)
</code></pre>
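<p>One caveat worth adding (my addition, not part of the original answer): after seeking back to the start, the old pickle bytes are still in the file, so if the new dump happens to be shorter you leave stale trailing bytes behind. Calling <code>fp.truncate()</code> after the dump avoids that:</p>

```python
import os
import pickle
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'mocks.pickle')
with open(path, 'wb') as fp:
    pickle.dump({'key': 'a much longer initial value'}, fp)

with open(path, 'rb+') as fp:
    data = pickle.load(fp)
    data['key'] = 'x'   # the new pickle will be shorter than the old one
    fp.seek(0)
    pickle.dump(data, fp)
    fp.truncate()       # drop leftover bytes from the old pickle

with open(path, 'rb') as fp:
    print(pickle.load(fp))  # {'key': 'x'}
```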
| 1 |
2016-09-14T10:46:41Z
|
[
"python",
"pickle"
] |
total size of new array must be unchanged
| 39,488,282 |
<p>I have two arrays, x1 and x2, both 1×14; I am trying to zip them up and then reshape the result.</p>
<p>The code is as below:</p>
<pre><code>x1
</code></pre>
<p>Out[122]: array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])</p>
<pre><code>x2
</code></pre>
<p>Out[123]: array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])</p>
<pre><code>X = np.array(zip(x1, x2)).reshape(2, len(x1))
</code></pre>
<p>ValueErrorTraceback (most recent call last)
in ()
----> 1 X = np.array(zip(x1, x2)).reshape(2, len(x1))</p>
<p>ValueError: total size of new array must be unchanged</p>
| 1 |
2016-09-14T10:35:35Z
| 39,488,393 |
<p>I would assume you're on Python 3, in which case the result is a 0-d array wrapping a <code>zip</code> object. </p>
<p>You should call <code>list</code> on the <em>zipped</em> items:</p>
<pre><code>X = np.array(list(zip(x1, x2))).reshape(2, len(x1))
# ^^^^
print(X)
# [[1 1 2 3 3 2 1 2 5 8 6 6 5 7]
# [5 6 6 7 7 1 8 2 9 1 7 1 9 3]]
</code></pre>
<p>In Python 2, <code>zip</code> returns a list and not an iterator as with Python 3, and your previous code would work fine.</p>
| 1 |
2016-09-14T10:40:39Z
|
[
"python",
"arrays",
"numpy",
"reshape"
] |
total size of new array must be unchanged
| 39,488,282 |
<p>I have two arrays, x1 and x2, both 1×14; I am trying to zip them up and then reshape the result.</p>
<p>The code is as below:</p>
<pre><code>x1
</code></pre>
<p>Out[122]: array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])</p>
<pre><code>x2
</code></pre>
<p>Out[123]: array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])</p>
<pre><code>X = np.array(zip(x1, x2)).reshape(2, len(x1))
</code></pre>
<p>ValueErrorTraceback (most recent call last)
in ()
----> 1 X = np.array(zip(x1, x2)).reshape(2, len(x1))</p>
<p>ValueError: total size of new array must be unchanged</p>
| 1 |
2016-09-14T10:35:35Z
| 39,488,397 |
<p>You are on Python 3, so <code>zip</code> is evaluated lazily. </p>
<pre><code>>>> np.array(zip(x1,x2))
array(<zip object at 0x7f76d0befe88>, dtype=object)
</code></pre>
<p>You need to iterate over it:</p>
<pre><code>>>> np.array(list(zip(x1, x2))).reshape(2, len(x1))
array([[1, 1, 2, 3, 3, 2, 1, 2, 5, 8, 6, 6, 5, 7],
[5, 6, 6, 7, 7, 1, 8, 2, 9, 1, 7, 1, 9, 3]])
</code></pre>
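<p>If the goal is really a 2×14 array with x1 as the first row and x2 as the second (note this is a different layout from reshaping the zipped pairs, which interleaves the values as shown above), you can also skip <code>zip</code> entirely:</p>

```python
import numpy as np

x1 = np.array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])
x2 = np.array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])

X = np.vstack((x1, x2))  # rows are x1 and x2 themselves
print(X.shape)  # (2, 14)
```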
| 3 |
2016-09-14T10:40:52Z
|
[
"python",
"arrays",
"numpy",
"reshape"
] |
total size of new array must be unchanged
| 39,488,282 |
<p>I have two arrays, x1 and x2, both 1×14; I am trying to zip them up and then reshape the result.</p>
<p>The code is as below:</p>
<pre><code>x1
</code></pre>
<p>Out[122]: array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])</p>
<pre><code>x2
</code></pre>
<p>Out[123]: array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])</p>
<pre><code>X = np.array(zip(x1, x2)).reshape(2, len(x1))
</code></pre>
<p>ValueErrorTraceback (most recent call last)
in ()
----> 1 X = np.array(zip(x1, x2)).reshape(2, len(x1))</p>
<p>ValueError: total size of new array must be unchanged</p>
| 1 |
2016-09-14T10:35:35Z
| 39,488,686 |
<p><code>np.array</code> doesn't consume the iterator created by <code>zip</code>. It works fine if you force conversion to a list first:</p>
<pre><code>from array import array
import numpy as np
x1 = array('i', [1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9])
x2 = array('i', [1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3])
print(np.array(list(zip(x1, x2))).reshape(2, len(x1)))
</code></pre>
<p>gives</p>
<pre><code>[[1 1 2 3 3 2 1 2 5 8 6 6 5 7]
[5 6 6 7 7 1 8 2 9 1 7 1 9 3]]
</code></pre>
| 1 |
2016-09-14T10:56:00Z
|
[
"python",
"arrays",
"numpy",
"reshape"
] |
Correct way to implement placeholder variables (PEP8)
| 39,488,284 |
<p>I run into the problem of needing placeholder variables a lot. I try to code according to PEP8, and always follow it; I am also using PyCharm, which notifies me about mistakes. Currently I use <code>_</code>, as I have seen this in a lot of online code, but I guess it is wrong since I still get the warnings. What is the correct way of doing this?</p>
<p>Some examples:</p>
<p>I need a list (with given length) of tuples where each tuple is (0, None):</p>
<pre><code>bound = [(0, None) for _ in ENERGY_ATTRIBUTES]
</code></pre>
<p>An unordered multiprocess where the function does not return anything</p>
<pre><code>for _ in p.imap_unordered(partial(read_energies, round=i), PEPTIDE_KD.keys()):
pass
</code></pre>
<p>Also tried </p>
<pre><code>_ = [_ for _ in p.imap_unordered(partial(read_energies, round=i), PEPTIDE_KD.keys())]
</code></pre>
<p>Same warnings.</p>
| 0 |
2016-09-14T10:35:37Z
| 39,488,361 |
<p>Ignore the warnings if you know they are harmless. They are warnings, not errors or exceptions. If you know what you are doing, ignore or suppress the warnings!</p>
| 2 |
2016-09-14T10:38:59Z
|
[
"python",
"python-3.x",
"pep8"
] |
Python: Reproduce Splinter/ Selenium Behaviour for Testing a Website That Uses Javascript
| 39,488,311 |
<p>I have a bot which interacts with a website using Splinter and Selenium. The website uses Javascript and updates in real time. The bot works well 90% of the time, but due to random events it will sometimes raise an Exception. It is very hard for me to debug these events, by the time I am in the debugger the website has changed. </p>
<p>Is there any way I can record the website data and play it back, like with vcrpy? Or is there any way I can record the behaviour so I can debug and test? </p>
| -2 |
2016-09-14T10:36:40Z
| 39,672,863 |
<p>The closest thing you can do is take screenshots of the web page on various events. You will have to use EventFiringWebDriver; in whichever event hook you want a screenshot, call the <code>screen_shot</code> function there.</p>
<pre><code>from selenium.webdriver.support.events import EventFiringWebDriver
from selenium.webdriver.support.events import AbstractEventListener
import os
import time

class ScreenShotListener(AbstractEventListener):
    DIR_NAME = None

    def screen_shot(self, driver):
        dir = os.path.curdir
        unique_filename = str(int(time.time() * 1000)) + ".png"
        fpath = os.path.join(dir, unique_filename)
        driver.get_screenshot_as_file(fpath)

    def before_navigate_to(self, url, driver):
        pass
    def after_navigate_to(self, url, driver):
        pass
    def before_navigate_back(self, driver):
        pass
    def after_navigate_back(self, driver):
        pass
    def before_navigate_forward(self, driver):
        pass
    def after_navigate_forward(self, driver):
        pass
    def before_find(self, by, value, driver):
        pass
    def after_find(self, by, value, driver):
        pass
    def before_click(self, element, driver):
        pass
    def after_click(self, element, driver):
        pass
    def before_change_value_of(self, element, driver):
        pass
    def after_change_value_of(self, element, driver):
        pass
    def before_execute_script(self, script, driver):
        pass
    def after_execute_script(self, script, driver):
        pass
    def before_close(self, driver):
        pass
    def after_close(self, driver):
        pass
    def before_quit(self, driver):
        pass
    def after_quit(self, driver):
        pass
    def on_exception(self, exception, driver):
        pass

driver = EventFiringWebDriver(driver, ScreenShotListener())
</code></pre>
| 0 |
2016-09-24T05:26:06Z
|
[
"javascript",
"python",
"selenium",
"splinter"
] |
Why does Python's C3 MRO depend on a common base class?
| 39,488,324 |
<p>When I read about Python's C3 method resolution order, I often hear it reduced to "children come before parents, and the order of subclasses is respected". Yet that only seems to hold true if all the subclasses inherit from the same ancestor.</p>
<p>E.g.</p>
<pre><code>class X():
def FOO(self):
return 11
class A(X):
def doIt(self):
return super().FOO()
def FOO(self):
return 42
class B(X):
def doIt(self):
return super().FOO()
def FOO(self):
return 52
class KID(A,B):
pass
</code></pre>
<p>Here the MRO of KID is:
KID, A, B, X</p>
<p>However, if I changed B to be instead:</p>
<pre><code>class B(object):
</code></pre>
<p>The MRO of KID becomes:
KID, A, X, B</p>
<p>It seems we are searching A's superclass before we have finished searching all KID's parents.</p>
<p>So it seems a bit less intuitive now than "kids first, breadth first" to "kids first, breadth first if common ancestor else depth first".</p>
<p>It would be quite the gotcha that if a class stopped using a common ancestor the MRO changes (even though the overall hierarchy is the same apart from that one link), and you started calling a deeper ancestor method rather than the one in that class.</p>
| 2 |
2016-09-14T10:37:11Z
| 39,488,453 |
<p>All classes in Python 3 have a common base class, <code>object</code>. You can omit the class from the <code>class</code> definition, but it is there unless you already indirectly inherit from <code>object</code>. (In Python 2 you have to explicitly inherit from <code>object</code> to even have the use of <code>super()</code> as this is a new-style class feature).</p>
<p>You changed the base class of <code>B</code> from <code>X</code> to <code>object</code>, but <code>X</code> <em>also</em> inherits from <code>object</code>. The MRO changed to take this into account. The same simplification of the C3 rules (<em>children come before parents, and the order of subclasses is respected</em>) is still applicable here. <code>B</code> comes before <code>object</code>, as does <code>X</code>, and <code>A</code> and <code>B</code> are still listed in the same order. However, <code>X</code> should come before <code>B</code>, as both inherit from <code>object</code> and the subclass <code>A(X)</code> comes before <code>B</code> in <code>KID</code>.</p>
<p>Note that nowhere it is said C3 is breadth first. If anything, it is <strong>depth first</strong>. See <a href="https://www.python.org/download/releases/2.3/mro/" rel="nofollow"><em>The Python 2.3 Method Resolution Order</em></a> for an in-depth description of the algorithm and how it applies to Python, but the linearisation of any class is the result of merging the linearisations of the base classes plus the base classes themselves:</p>
<pre><code>L[KID] = KID + merge(L[A], L[B], (A, B))
</code></pre>
<p>where <code>L[..]</code> is the C3 linearisation of that class (their MRO).</p>
<p>So the linearisation of <code>A</code> comes before <code>B</code> when merging, making C3 look at hierarchies in depth rather than in breadth. Merging starts with the left-most list and takes any element that doesn't appear in the tails of the other lists (so everything but the first element), then takes the next, etc.</p>
<p>In your first example, <code>L[A]</code> and <code>L[B]</code> are almost the same (they both end in <code>(X, object)</code> as their MRO, with only the first element differing), so merging is simple; you merge <code>(A, X, object)</code> and <code>(B, X, object)</code>, and merging these gives you only <code>A</code> from the first list, then the whole second list, ending up with <code>(KID, A, B, X, object)</code> after prepending <code>KID</code>:</p>
<pre><code>L[KID] = KID + merge((A, X, object), (B, X, object), (A, B))
# ^ ^^^^^^
# \ & \ both removed as they appear in the next list
= KID + (A,) + (B, X, object)
= (KID, A, B, X, object)
</code></pre>
<p>In your second example, <code>L[A]</code> is <em>unchanged</em>, but <code>L[B]</code> is now <code>(B, object)</code> (dropping <code>X</code>), so merging prefers <code>X</code> before <code>B</code> as <code>(A, X, object)</code> comes first when merging and <code>X</code> doesn't appear in the second list. Thus </p>
<pre><code>L[KID] = KID + merge((A, X, object), (B, object), (A, B))
# ^^^^^^
# \removed as it appears in the next list
= KID + (A, X) + (B, object)
= (KID, A, X, B, object)
</code></pre>
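<p>These linearisations can be checked directly: every class exposes its computed MRO as <code>__mro__</code>. A trimmed-down sketch of the question's hierarchy (method bodies omitted; <code>B2</code> and <code>KID2</code> are illustrative names for the second variant):</p>

```python
class X:
    pass

class A(X):
    pass

class B(X):            # variant 1: B shares the common ancestor X
    pass

class KID(A, B):
    pass

print([c.__name__ for c in KID.__mro__])
# ['KID', 'A', 'B', 'X', 'object']

class B2:              # variant 2: B inherits straight from object
    pass

class KID2(A, B2):
    pass

print([c.__name__ for c in KID2.__mro__])
# ['KID2', 'A', 'X', 'B2', 'object']
```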
| 2 |
2016-09-14T10:43:41Z
|
[
"python",
"multiple-inheritance",
"method-resolution-order"
] |
pandas read_sql with parameters and wildcard operator
| 39,488,380 |
<p>I am trying to retrieve data from sql through python with the code:</p>
<pre><code>query = ("SELECT stuff FROM TABLE WHERE name like %%(this_name)s%")
result = pd.read_sql(query,con=cnx,params={'this_name':some_name})
</code></pre>
<p>The code above works perfectly when I don't have to pass the wildcard operator %.
However, in this case the code doesn't work. How can I pass in the query the wildcard operator? Thank you.</p>
| 1 |
2016-09-14T10:39:55Z
| 39,490,030 |
<p>Consider concatenating the wildcard operator, <code>%</code>, to passed in value:</p>
<pre><code>query = ("SELECT stuff FROM TABLE WHERE name LIKE %(this_name)s")
result = pd.read_sql(query,con=cnx, params={'this_name': '%'+ some_name +'%'})
</code></pre>
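<p>The same idea works with any DB-API driver: build the <code>%...%</code> pattern in Python and bind it as a normal parameter, so the wildcard lives in the <em>value</em> rather than in the SQL text. A self-contained sketch using the stdlib <code>sqlite3</code> module (the table, column, and data are made up for illustration; sqlite uses <code>:name</code> placeholders instead of <code>%(name)s</code>):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (stuff TEXT, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", "alice"), ("b", "bob"), ("c", "malice")])

some_name = "lic"
# The wildcards are part of the bound *value*, not the SQL string,
# so no string interpolation into the query is needed.
rows = conn.execute("SELECT stuff FROM t WHERE name LIKE :this_name",
                    {"this_name": "%" + some_name + "%"}).fetchall()
print(rows)  # [('a',), ('c',)]
```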
| 1 |
2016-09-14T12:06:38Z
|
[
"python",
"sql",
"pandas",
"wildcard"
] |
Create API key in AWS API Gateway from AWS Lambda using boto3
| 39,488,456 |
<p>I am using an AWS Lambda function to create an API key using Boto3. </p>
<p>Testing locally with the following is successful:</p>
<pre><code>import boto3
client = boto3.client('apigateway')
response = client.create_api_key(
name='test_user_from_boto',
description='This is the description',
enabled=True,
generateDistinctId=True,
value='',
stageKeys=[{
'restApiId':'aaa',
'stageName':'beta'
}]
)
</code></pre>
<p>This works no problem returning a dictionary as <a href="http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.create_api_key" rel="nofollow">expected</a>. The return dictionary includes a <code>value</code> key that has the generated api key value which is what I'm after.</p>
<p>When doing something similar in AWS Lambda, the return dictionary does not include the <code>value</code> key. </p>
<p>This is my Lambda hander function.</p>
<pre><code>import boto3
api_id = 'zzz'
plan_id_map = {
'trial': 'aaa',
'basic': 'bbb',
'professional': 'ccc'
}
def handler(event, context):
user_name = event['user_name']
stage = event['stage']
plan = event['plan']
client = boto3.client('apigateway')
api_key_response = client.create_api_key(
name=user_name,
description='',
enabled=True,
# generateDistinctId=True, # including this argument throws an error
# value='', # including this argument throws an error
stageKeys=[{
'restApiId': api_id,
'stageName': stage
}]
)
user_key_id = api_key_response['id']
user_api_key = api_key_response['value'] # throws a key error here
plan_response = client.create_usage_plan_key(
usagePlanId=plan_id_map[plan],
keyId=user_key_id,
keyType='API_KEY')
return {
'user_name': user_name,
'user_key_id': user_key_id,
'user_api_key': user_api_key
}
</code></pre>
<p>The results from printing <code>api_key_response</code> is the following:</p>
<pre><code>{
u'name': u'test_user_from_lambda',
'ResponseMetadata': {
'HTTPStatusCode': 201,
'RequestId': 'b8298d38-7aec-11e6-8322-5bc341fc4b73',
'HTTPHeaders': {
'x-amzn-requestid': 'b8298d38-7aec-11e6-8322-5bc341fc4b73',
'date': 'Thu, 15 Sep 2016 02:33:00 GMT',
'content-length': '203',
'content-type': 'application/json'
}
},
u'createdDate': datetime.datetime(2016, 9, 15, 2, 33, tzinfo=tzlocal()),
u'lastUpdatedDate': datetime.datetime(2016, 9, 15, 2, 33, tzinfo=tzlocal()),
u'enabled': True,
u'id': u'xyzxyz',
u'stageKeys': [u'abcabc/beta']
}
</code></pre>
<p>When attempting to use <code>get_api_key</code>, I get a parameter validation error:</p>
<pre><code>get_api_key_response = client.get_api_key(
apiKey='585yw0f1tk',
includeValue=True
)
Unknown parameter in input: "includeValue", must be one of: apiKey: ParamValidationError
</code></pre>
<p>Is the AWS <code>boto3</code> module modified to exclude the <code>value</code> key? How can I return the generated API key?</p>
| 0 |
2016-09-14T10:44:00Z
| 39,495,821 |
<p>The difference here can be attributed to different versions of the AWS SDK in your Lambda environment vs your development environment.</p>
<p>In newer versions of the SDK, the API key value is omitted from certain responses as a security measure. You can retrieve the API Key value via a separate call to <a href="http://boto3.readthedocs.io/en/latest/reference/services/apigateway.html#APIGateway.Client.get_api_key" rel="nofollow">get_api_key</a> with includeValue=True</p>
| 1 |
2016-09-14T16:50:41Z
|
[
"python",
"amazon-web-services",
"aws-lambda",
"aws-api-gateway",
"boto3"
] |
unable to open excel document using Python through openpyxl
| 39,488,501 |
<p>I'm new to Python, so sorry if this is annoyingly simple. I'm trying to simply open an excel document using this,</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import openpyxl
from openpyxl.reader.excel import load_workbook
wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></pre>
</div>
</div>
</p>
<p>I didn't get any error, however i don't understand why file doesn't open ?</p>
<p>The location of the file is correct as I can open it using, </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>file = "C:\Users\ my file location here.xlsx"
# os.startfile(file)</code></pre>
</div>
</div>
</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:46:15Z
| 39,488,728 |
<p>you incorrectly invoke the load_workbook function</p>
<p><code>wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></p>
<p>you should use just load_workbook</p>
<p><code>wb = load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></p>
| -1 |
2016-09-14T10:57:59Z
|
[
"python",
"excel"
] |
unable to open excel document using Python through openpyxl
| 39,488,501 |
<p>I'm new to Python, so sorry if this is annoyingly simple. I'm trying to simply open an excel document using this,</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import openpyxl
from openpyxl.reader.excel import load_workbook
wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></pre>
</div>
</div>
</p>
<p>I didn't get any error, however i don't understand why file doesn't open ?</p>
<p>The location of the file is correct as I can open it using, </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>file = "C:\Users\ my file location here.xlsx"
# os.startfile(file)</code></pre>
</div>
</div>
</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:46:15Z
| 39,488,744 |
<p>1) Try this first.</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx')
type(wb)
</code></pre>
<p>2) else put your .py file in the same directory where .xlsx file is present and change code in .py as shown below.</p>
<pre><code>import openpyxl
wb = openpyxl.load_workbook('urfilename.xlsx')
type(wb)
</code></pre>
| 0 |
2016-09-14T10:58:29Z
|
[
"python",
"excel"
] |
unable to open excel document using Python through openpyxl
| 39,488,501 |
<p>I'm new to Python, so sorry if this is annoyingly simple. I'm trying to simply open an excel document using this,</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import openpyxl
from openpyxl.reader.excel import load_workbook
wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></pre>
</div>
</div>
</p>
<p>I didn't get any error, however i don't understand why file doesn't open ?</p>
<p>The location of the file is correct as I can open it using, </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>file = "C:\Users\ my file location here.xlsx"
# os.startfile(file)</code></pre>
</div>
</div>
</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:46:15Z
| 39,489,719 |
<p>Could there be an issue with how the modules are installed?</p>
<p>As this isn't working, I tried to use the xlrd module.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import xlrd
from xlrd import open_workbook
wb = xlrd.open_workbook('C:\Users\ my file location.xlsx')</code></pre>
</div>
</div>
</p>
<p>However, this doesn't work either. It simply does nothing when I press run.</p>
| 0 |
2016-09-14T11:50:15Z
|
[
"python",
"excel"
] |
unable to open excel document using Python through openpyxl
| 39,488,501 |
<p>I'm new to Python, so sorry if this is annoyingly simple. I'm trying to simply open an excel document using this,</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import openpyxl
from openpyxl.reader.excel import load_workbook
wb = openpyxl.load_workbook('C:\Users\ my file location here.xlsx') #with my real location</code></pre>
</div>
</div>
</p>
<p>I didn't get any error, however i don't understand why file doesn't open ?</p>
<p>The location of the file is correct as I can open it using, </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>file = "C:\Users\ my file location here.xlsx"
# os.startfile(file)</code></pre>
</div>
</div>
</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:46:15Z
| 39,540,900 |
<p>It sounds like you want to open the file <em>in Excel</em>. That is, you want to launch the Excel application, and have the file open in that application. OpenPyXL and xlrd are not designed to do this. In fact, they are specifically designed for working with the files when you can't or don't want to launch Excel. (For example, OpenPyXL and xlrd both work on Linux machines, which can't even run Excel.)</p>
<p>You probably want some kind of Excel automation, like you would get with VBA or VBScript or a .NET language, except you want to do it in Python. If so, the package you are looking for is <strong><a href="https://www.xlwings.org/" rel="nofollow">xlwings</a></strong>.</p>
| 0 |
2016-09-16T22:24:17Z
|
[
"python",
"excel"
] |
Use of timestamp or datetime object for colorcoding of a scatterplot?
| 39,488,506 |
<p>I do have a list of data: </p>
<pre><code>a = [[Timestamp('2015-01-01 15:00:00', tz=None), 53.0, 958.0],
[Timestamp('2015-01-01 16:00:00', tz=None), 0.0, 900.0],
[Timestamp('2015-01-02 11:00:00', tz=None), 543.0, 820.0], .....]
</code></pre>
<p>My goal is to plot the second element of each list entry vs. the third element of each list entry colorcoded by the timestamp.</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
smap = ax.scatter(a[:,1], a[:,2])
plt.show()
</code></pre>
<p>As soon as I change the line for plotting to
<code>smap = ax.scatter(a[:,1], a[:,2], c = a[:,0])</code></p>
<p>I receive the error message: </p>
<blockquote>
<p>'Timestamp' object has no attribute 'view'.</p>
</blockquote>
<p>I think my general question is:
Is there any solution in Python to plot two columns of data color-coded by date using a third column which is a timestamp or datetime-object?</p>
| 0 |
2016-09-14T10:46:29Z
| 39,490,991 |
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

a = [[pd.Timestamp('2015-01-01 15:00:00', tz=None), 53.0, 958.0],
[pd.Timestamp('2015-01-01 16:00:00', tz=None), 0.0, 900.0],
[pd.Timestamp('2015-01-02 11:00:00', tz=None), 543.0, 820.0]]
df = pd.DataFrame(a).add_prefix('Col_')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/Gw5X8.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gw5X8.png" alt="Image"></a></p>
<pre><code>df.dtypes
Col_0 datetime64[ns]
Col_1 float64
Col_2 float64
dtype: object
</code></pre>
<p>Map each color to every value present in the <code>datetime</code> column by defining a list of desired colors inside a dictionary.</p>
<pre><code>c_dict = df['Col_0'].map(pd.Series(data=list('rgb'), index=df['Col_0'].values).to_dict())
df.plot.scatter(x='Col_1', y='Col_2', c=c_dict, alpha=0.8, title='Scatter-Plot')
</code></pre>
<p><a href="http://i.stack.imgur.com/Jsoeq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Jsoeq.png" alt="Image"></a></p>
<p>It wouldn't be practical to use a <code>list</code> and populate colors for every value to be mapped against. In such cases, you are better off using colormaps to do the mapping.</p>
<pre><code>c_dict = df['Col_0'].map(pd.Series(data=np.arange(3), index=df['Col_0'].values).to_dict())
df.plot.scatter(x='Col_1', y='Col_2', c=c_dict, title='Scatter-Plot', cmap=plt.cm.rainbow)
</code></pre>
<p><a href="http://i.stack.imgur.com/5pniu.png" rel="nofollow"><img src="http://i.stack.imgur.com/5pniu.png" alt="Image"></a></p>
| 0 |
2016-09-14T12:54:42Z
|
[
"python",
"pandas",
"matplotlib"
] |
pip: How to deal with different python versions to install Flask?
| 39,488,603 |
<p>I have different versions of Python installed on my Ubuntu machine (2.7.11, 2.7.12, 3.5). I would like to install Flask on the 2.7.12 as it used by Apache. </p>
<p>I have three pip{version} in my PATH which are: pip, pip2, and pip2.7. How do I know which one is for which Python version?</p>
<p>I have already read <a href="http://stackoverflow.com/questions/2812520/pip-dealing-with-multiple-python-versions">Here</a> but it didn't help my case as I need to differentiate between minor version number 2.7.11 and 2.7.12.</p>
<p>One thing is that I tried pip{version} install Flask for all three pips but the 2.7.12 still can't import Flask.</p>
<p>Any help is much appreciated.</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:51:20Z
| 39,488,693 |
<p>You can find it out by trying to run this: <code>pip --version</code>. Output will be something like this: <code>pip 8.1.2 from /usr/lib/python2.7/site-packages (python 2.7)</code>. This way we can see that it is for <code>python2.7</code> in my case.</p>
| 0 |
2016-09-14T10:56:23Z
|
[
"python",
"python-2.7",
"pip"
] |
pip: How to deal with different python versions to install Flask?
| 39,488,603 |
<p>I have different versions of Python installed on my Ubuntu machine (2.7.11, 2.7.12, 3.5). I would like to install Flask on the 2.7.12 as it used by Apache. </p>
<p>I have three pip{version} in my PATH which are: pip, pip2, and pip2.7. How do I know which one is for which python version. </p>
<p>I have already read <a href="http://stackoverflow.com/questions/2812520/pip-dealing-with-multiple-python-versions">Here</a> but it didn't help my case as I need to differentiate between minor version number 2.7.11 and 2.7.12.</p>
<p>One thing is that I tried pip{version} install Flask for all three pips but the 2.7.12 still can't import Flask.</p>
<p>Any help is much appreciate it.</p>
<p>Thanks</p>
| 0 |
2016-09-14T10:51:20Z
| 39,488,765 |
<p><a href="http://flask.pocoo.org/docs/0.11/installation/#virtualenv" rel="nofollow">You should always create virtualenvs for your projects</a>. You can create one like</p>
<pre><code>virtualenv -p <path/to/python2.7.12> <path/to/new/virtualenv/>
</code></pre>
<p>inside that virtualenv <code>pip</code> and <code>python</code> will always select the right interpreter and path.</p>
| 1 |
2016-09-14T10:59:33Z
|
[
"python",
"python-2.7",
"pip"
] |
why python's pickle is not serializing a method as default argument?
| 39,488,675 |
<p>I am trying to use pickle to transfer Python objects over the wire between 2 servers. I created a simple class that subclasses <code>dict</code>, and I am trying to use pickle for the marshalling:</p>
<pre><code>def value_is_not_none(value):
return value is not None
class CustomDict(dict):
def __init__(self, cond=lambda x: x is not None):
super().__init__()
self.cond = cond
def __setitem__(self, key, value):
if self.cond(value):
dict.__setitem__(self, key, value)
</code></pre>
<p>I first tried to use <code>pickle</code> for the marshalling, but when I un-marshalled I received an error related to the <code>lambda</code> expression.</p>
<p>Then I tried to do the marshalling with <code>dill</code> but it seemed the <code>__init__</code> was not called.</p>
<p>Then I tried again with <code>pickle</code>, but I passed the <code>value_is_not_none()</code> function as the <code>cond</code> parameter - again the <code>__init__()</code> does not seemed to be invoked and the un-marshalling failed on the <code>__setitem__()</code> (<code>cond</code> is <code>None</code>).</p>
<p>Why is that? What am I missing here?</p>
<p>If I try to run the following code:</p>
<pre><code>obj = CustomDict(cond=value_is_not_none)
obj['hello'] = ['world']
payload = pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)
obj2 = pickle.loads(payload)
</code></pre>
<p>it fails with </p>
<pre><code>AttributeError: 'CustomDict' object has no attribute 'cond'
</code></pre>
<p>This is a different question than: <a href="http://stackoverflow.com/questions/16626429/python-cpickle-pickling-lambda-functions">Python, cPickle, pickling lambda functions</a>
as I tried using <code>dill</code> with <code>lambda</code> and it failed to work, and I also tried passing a function and it also failed.</p>
| 3 |
2016-09-14T10:55:40Z
| 39,491,671 |
<p><code>pickle</code> is loading your dictionary data <em>before</em> it has restored the attributes on your instance. As such the <code>self.cond</code> attribute is not yet set when <code>__setitem__</code> is called for the dictionary key-value pairs.</p>
<p>Note that <code>pickle</code> will never call <code>__init__</code>; instead it'll create an entirely <em>blank</em> instance and restore the <code>__dict__</code> attribute namespace on that directly.</p>
<p>You have two options:</p>
<ul>
<li><p>default to <code>cond=None</code> and ignore the condition if it is still set to <code>None</code>:</p>
<pre><code>class CustomDict(dict):
def __init__(self, cond=None):
super().__init__()
self.cond = cond
def __setitem__(self, key, value):
if getattr(self, 'cond', None) is None or self.cond(value):
dict.__setitem__(self, key, value)
</code></pre>
<p>The <code>getattr()</code> there is needed because a blank instance has no <code>cond</code> attribute at all (it is not set to <code>None</code>, the attribute is entirely missing). You could add <code>cond = None</code> to the class:</p>
<pre><code>class CustomDict(dict):
cond = None
</code></pre>
<p>and then just test for <code>if self.cond is None or self.cond(value):</code>.</p></li>
<li><p>Define a custom <a href="https://docs.python.org/3/library/pickle.html#object.__reduce__" rel="nofollow"><code>__reduce__</code> method</a> to control how the initial object is created when restored:</p>
<pre><code>def _default_cond(v): return v is not None
class CustomDict(dict):
def __init__(self, cond=_default_cond):
super().__init__()
self.cond = cond
def __setitem__(self, key, value):
if self.cond(value):
dict.__setitem__(self, key, value)
def __reduce__(self):
return (CustomDict, (self.cond,), None, None, iter(self.items()))
</code></pre>
<p><code>__reduce__</code> is expected to return a tuple with:</p>
<ul>
<li>A callable that can be pickled directly (here the class does fine)</li>
<li>A tuple of positional arguments for that callable; on unpickling the first element is called passing in the second as arguments, so by setting this to <code>(self.cond,)</code> we ensure that the new instance is created with <code>cond</code> passed in as an argument and <em>now</em> <code>CustomDict.__init__()</code> <em>will</em> be called.</li>
<li>The next 2 positions are for a <code>__setstate__</code> method (ignored here) and for list-like types, so we set these to <code>None</code>.</li>
<li>The last element is an iterator for the key-value pairs that pickle then will restore for us.</li>
</ul>
<p></p>
<p>Note that I replaced the default value for <code>cond</code> with a function here too so you don't have to rely on <code>dill</code> for the pickling.</p></li>
</ul>
| 2 |
2016-09-14T13:24:18Z
|
[
"python",
"dictionary",
"lambda",
"pickle"
] |
GtkToolButton are disabled by default in glade3 + pygtk
| 39,488,717 |
<p>I've created a simple app skeleton in python 2.7 using pyGtk and Glade (3.16.1) on Ubuntu 14.04 LTS.
I've added a ToolBar and some buttons, but the GtkToolButtons are always disabled. How can I enable them from Glade? </p>
<p><a href="http://i.stack.imgur.com/ABmjQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/ABmjQ.png" alt="my gui with toolbutton disabled"></a></p>
<p>I've also tried using "set_sensitive" in Python, but nothing works.</p>
<p>Can You help me? Thank You very much!</p>
<p>This is a snippet from glade file:</p>
<pre><code><child>
<object class="GtkToolbar" id="toolbar1">
<property name="visible">True</property>
<property name="can_focus">False</property>
<property name="toolbar_style">both-horiz</property>
<property name="show_arrow">False</property>
<style>
<class name="primary-toolbar"/>
</style>
<child>
<object class="GtkToolButton" id="toolbutton1">
<property name="name">bt1</property>
<property name="visible">True</property>
<property name="can_focus">False</property>
<property name="is_important">True</property>
<property name="action_name">bt1</property>
<property name="label" translatable="yes">toolbutton1</property>
<property name="use_underline">True</property>
<property name="stock_id">gtk-connect</property>
<signal name="clicked" handler="on_toolbutton1_clicked" swapped="no"/>
</object>
</code></pre>
| 0 |
2016-09-14T10:57:29Z
| 39,660,685 |
<p>GtkToolbars are not very straightforward to use. You have to create a corresponding action. This is a simple example of how to use them:</p>
<p>The glade file:</p>
<pre><code><interface>
<requires lib="gtk+" version="3.16"/>
<object class="GtkWindow" id="window">
<property name="can_focus">False</property>
<child>
<object class="GtkToolbar">
<property name="visible">True</property>
<property name="can_focus">False</property>
<child>
<object class="GtkToolButton">
<property name="visible">True</property>
<property name="can_focus">False</property>
<property name="action_name">app.zoom</property>
<property name="label" translatable="yes">zoom</property>
<property name="use_underline">True</property>
<property name="stock_id">gtk-zoom-in</property>
</object>
<packing>
<property name="expand">False</property>
<property name="homogeneous">True</property>
</packing>
</child>
</object>
</child>
</object>
</interface>
</code></pre>
<p>The python file:</p>
<pre><code>from gi.repository import Gtk, Gio
class Application(Gtk.Application):
def do_activate(self):
action = Gio.SimpleAction.new('zoom', None)
action.connect('activate', self.on_zoom)
self.add_action(action)
builder = Gtk.Builder.new_from_file('window.glade')
window = builder.get_object('window')
window.show_all()
self.add_window(window)
def on_zoom(self, action, param):
print('clicked')
if __name__ == '__main__':
Application().run()
</code></pre>
<p>You can also have a look at an <a href="http://python-gtk-3-tutorial.readthedocs.io/en/latest/application.html" rel="nofollow">example for the application menu</a> as it is similar</p>
| 0 |
2016-09-23T12:23:01Z
|
[
"python",
"toolbar",
"pygtk",
"gtk3",
"glade"
] |
Parent constructor called by default?
| 39,488,875 |
<p>I have the following example from python <code>page_object</code> docs:</p>
<pre><code> from page_objects import PageObject, PageElement
from selenium import webdriver
class LoginPage(PageObject):
username = PageElement(id_='username')
password = PageElement(name='password')
login = PageElement(css='input[type="submit"]')
driver = webdriver.PhantomJS()
driver.get("http://example.com")
page = LoginPage(driver)
page.username = 'secret'
page.password = 'squirrel'
assert page.username.text == 'secret'
page.login.click()
</code></pre>
<p>What bothers me is that we create a <code>LoginPage</code> by providing a <code>driver</code> to its constructor, but we haven't defined an <code>__init__</code> method in the <code>LoginPage</code> class. </p>
<p>Does that mean that the parent class <code>PageObject</code>'s constructor is called with the <code>driver</code> parameter? I thought that Python doesn't implicitly call parent constructors?</p>
| 1 |
2016-09-14T11:05:27Z
| 39,489,022 |
<p>The <code>__init__</code> method is just a method and as such python performs the same kind of lookup for it as other methods. If class <code>B</code> does not define a method/attribute <code>x</code> then python looks up it's base class <code>A</code> and so on, until it either finds the attribute/method or fails.</p>
<p>A simple example:</p>
<pre><code>>>> class A:
... def method(self):
... print('A')
...
>>> class B(A): pass
...
>>> class C(B):
... def method(self):
... print('C')
...
>>> a = A()
>>> b = B()
>>> c = C()
>>> a.method()
A
>>> b.method() # doesn't find B.method, and so uses A.method
A
>>> c.method()
C
</code></pre>
<p>The same is with <code>__init__</code>: since <code>LoginPage</code> does not define <code>__init__</code> python looks up the <code>PageObject</code> class and finds its definition there.</p>
<p>What is meant when we say that "python doesn't implicitly call parent class constructors" is that <strong>if you define an <code>__init__</code> method</strong> the interpreter will just call that method and <em>not</em> call all the parent class <code>__init__</code>s, and as such if you want to call the parent class constructor you have to do so explicitly.</p>
<p>Note the difference among these classes:</p>
<pre><code>>>> class A:
... def __init__(self):
... print('A')
...
>>> class B(A):
... pass
...
>>> class B2(A):
... def __init__(self):
... print('B')
...
>>> class B3(A):
... def __init__(self):
... print('B3')
... super().__init__()
...
>>> A()
A
<__main__.A object at 0x7f5193267eb8>
>>> B() # B.__init__ does not exists, uses A.__init__
A
<__main__.B object at 0x7f5193267ef0>
>>> B2() # B2.__init__ exists, no call to A.__init__
B
<__main__.B2 object at 0x7f5193267eb8>
>>> B3() # B3.__init__exists, and calls to A.__init__ too
B3
A
<__main__.B3 object at 0x7f5193267ef0>
</code></pre>
| 1 |
2016-09-14T11:14:12Z
|
[
"python",
"oop",
"selenium",
"selenium-webdriver"
] |
Django user_passes_test usage
| 39,488,991 |
<p>I have a function-based view in Django:</p>
<pre><code>@login_required
def bout_log_update(request, pk):
...
</code></pre>
<p>While it's protected from people who aren't logged in, I need to be able to restrict access to this view based on:</p>
<ol>
<li>The user currently logged in</li>
<li>Which user created the object (referred to by pk)</li>
</ol>
<p>It needs to be accessible only if the currently logged in user created the object being accessed, or is a superuser.</p>
<p>Can the standard @user_passes_test decorator accomplish this? Or a custom decorator? Or another method entirely?</p>
<p>I'd re-write it as a class-based view and use UserPassesTestMixin if I could, but I don't know that it's possible for this particular view.</p>
| 0 |
2016-09-14T11:12:03Z
| 39,490,140 |
<p>You can achieve this quite easily with a custom decorator <a href="https://docs.djangoproject.com/en/1.10/_modules/django/contrib/auth/decorators/#user_passes_test" rel="nofollow">based on <code>user_passes_test</code> source</a>:</p>
<pre><code># imports as in django.contrib.auth.decorators (added here for completeness):
from functools import wraps

from django.conf import settings
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.shortcuts import resolve_url
from django.utils.decorators import available_attrs
from django.utils.six.moves.urllib.parse import urlparse


def my_user_passes_test(test_func, login_url=None, redirect_field_name=REDIRECT_FIELD_NAME):
"""
Decorator for views that checks that the user passes the given test,
redirecting to the log-in page if necessary. The test should be a callable
that takes the user object and returns True if the user passes.
"""
def decorator(view_func):
@wraps(view_func, assigned=available_attrs(view_func))
def _wrapped_view(request, *args, **kwargs):
# the following line is the only change with respect to
# user_passes_test:
if test_func(request.user, *args, **kwargs):
return view_func(request, *args, **kwargs)
path = request.build_absolute_uri()
resolved_login_url = resolve_url(login_url or settings.LOGIN_URL)
# If the login url is the same scheme and net location then just
# use the path as the "next" url.
login_scheme, login_netloc = urlparse(resolved_login_url)[:2]
current_scheme, current_netloc = urlparse(path)[:2]
if ((not login_scheme or login_scheme == current_scheme) and
(not login_netloc or login_netloc == current_netloc)):
path = request.get_full_path()
from django.contrib.auth.views import redirect_to_login
return redirect_to_login(
path, resolved_login_url, redirect_field_name)
return _wrapped_view
return decorator
</code></pre>
<p>Note that just one line is changed from <code>test_func(request.user)</code> to <code>test_func(request.user, *args, **kwargs)</code> so that all arguments passed to the view are passed to the test function too.</p>
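<p>With that change in place, the test function receives the view's <code>pk</code> and can check ownership there. Below is a framework-free sketch of the idea; the <code>User</code>/<code>Request</code> classes and the <code>OWNERS</code> lookup are made-up stand-ins for Django's user, request and a model query (e.g. filtering the bout-log model by its creator field, whose name is an assumption):</p>

```python
from functools import wraps

def user_passes_test_with_args(test_func):
    # Simplified sketch of the modified decorator: no redirect handling,
    # it just forwards the view's args/kwargs into the test function.
    def decorator(view_func):
        @wraps(view_func)
        def _wrapped_view(request, *args, **kwargs):
            if test_func(request.user, *args, **kwargs):
                return view_func(request, *args, **kwargs)
            return "403"  # stand-in for redirect_to_login(...)
        return _wrapped_view
    return decorator

class User:
    def __init__(self, name, is_superuser=False):
        self.name, self.is_superuser = name, is_superuser

class Request:
    def __init__(self, user):
        self.user = user

OWNERS = {1: "alice", 2: "bob"}  # pk -> creator; stand-in for a model lookup

def owns_or_super(user, pk):
    # passes if the current user created the object, or is a superuser
    return user.is_superuser or OWNERS.get(pk) == user.name

@user_passes_test_with_args(owns_or_super)
def bout_log_update(request, pk):
    return "ok %d" % pk
```

<p>With the real decorator above, the same <code>owns_or_super</code>-style test (querying the model instead of <code>OWNERS</code>) covers both of the question's conditions.</p>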
| 1 |
2016-09-14T12:11:29Z
|
[
"python",
"django"
] |
Cannot import from package in same namespace tree until pkg_resources has been imported
| 39,489,027 |
<p>I have a strange problem that I somehow cannot reproduce separately, yet it shows up in production code, and of course the production code cannot be shared publicly.</p>
<p>I have two packages, for argument's sake <code>ns.server</code> and <code>ns.protobuf</code>, where the latter implements protobuf specific extensions for the project. Both packages properly declare the namespace packages in setup.py, and both have the boilerplate pkg_resources stuff in <code>__init__.py</code>:</p>
<pre><code>try:
__import__('pkg_resources').declare_namespace(__name__)
except ImportError:
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
</code></pre>
<p>Now for some strange reason, I get this:</p>
<pre><code>>>> import server.protobuf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named protobuf
>>> import pkg_resources
>>> import server.protobuf
>>>
</code></pre>
<p>So it appears that my namespaces are all screwy until I import pkg_resources and then it is fixed. It's not too bad, the workaround is then simply to import pkg_resources first. I would just like to understand what is going on.</p>
| 0 |
2016-09-14T11:14:34Z
| 39,489,265 |
<p>Ugh, second question I self-answer in as many days. I had a stale egg-info directory lying around in <code>lib/python2.7/site-packages</code>, from a previous install where I accidentally neglected to pass -e (development mode) to pip. Completely clearing everything and reinstalling fixed it.</p>
| 0 |
2016-09-14T11:26:16Z
|
[
"python",
"python-2.7",
"pkg-resources"
] |
Interpolating elements of a color matrix on the basis of some given reference elements
| 39,489,089 |
<p>This is more or less a follow up question to <a href="http://stackoverflow.com/questions/39485178/two-dimensional-color-ramp-256x256-matrix-interpolated-from-4-corner-colors?noredirect=1#comment66289716_39485178">Two dimensional color ramp (256x256 matrix) interpolated from 4 corner colors</a> that was profoundly answered by jadsq today.</p>
<p>For linear gradients the previous answer works very well. However, if one wants to have better control of the stop colors of the gradient, this method seems not to be very practical. What might help in this situation is to have some reference color points in a matrix (lookup table) which are used to interpolate color values for the empty positions in the look-up table. What I mean might be easier to see in the image below.</p>
<p><a href="http://i.stack.imgur.com/AwPYk.png" rel="nofollow"><img src="http://i.stack.imgur.com/AwPYk.png" alt="enter image description here"></a></p>
<p>The whole idea is taken from <a href="http://cartography.oregonstate.edu/pdf/2006_JennyHurni_SwissStyleShading.pdf" rel="nofollow">http://cartography.oregonstate.edu/pdf/2006_JennyHurni_SwissStyleShading.pdf</a> page 4 to 6. I've read through the paper, I understand theoretically what is going on, but I am failing miserably because of my low experience with interpolation methods and, to be honest, general math skills. What might also be of interest is that they use a sigmoid Gaussian bell as interpolation method (page 6). They argue that Gaussian weighting yielded the visually best results and was simple to compute (equation 1, with k=0.0002 for a table of 256 by 256 cells).</p>
<hr>
<p>Edit (better illustrations):</p>
<p><a href="http://i.stack.imgur.com/cQVW1.png" rel="nofollow"><img src="http://i.stack.imgur.com/cQVW1.png" alt="Weighting functions for interpolating colours"></a></p>
<p><a href="http://i.stack.imgur.com/hGXfz.png" rel="nofollow"><img src="http://i.stack.imgur.com/hGXfz.png" alt="Equation 1"></a></p>
<hr>
<p>I have the other parts of their presented methods in place but filling the empty values in the matrix really is a key part and keeps me from continuing. Once again, thank you for your help!</p>
<p>What I have right now:</p>
<pre><code>#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
# the matrix with the reference color elements
ref=np.full([7, 7, 3], [255,255,255], dtype=np.uint8)
ref[0][6] = (239,238,185)
ref[1][1] = (120,131,125)
ref[4][6] = (184,191,171)
ref[6][2] = (150,168,158)
ref[6][5] = (166,180,166)
# s = ref.shape
#
# from scipy.ndimage.interpolation import zoom
# zooming as in http://stackoverflow.com/a/39485650/1230358 doesn't seem to work here anymore, because we have no corner point as reference but randomly distributed points within the matrix. As far as I know ...
# zoomed=zoom(ref,(256/s[0],256/s[1],1),order=1)
plt.subplot(211)
plt.imshow(ref,interpolation='nearest')
# plt.subplot(212)
# plt.imshow(zoomed,interpolation='nearest')
plt.show()
</code></pre>
| 5 |
2016-09-14T11:18:00Z
| 39,506,280 |
<p>First some questions to better clarify your problem:</p>
<ul>
<li>What kind of interpolation do you want: linear/cubic/other?</li>
<li>What are the constraints on the points? For example, will there always be just a single region encapsulated by these control points, or could there also be points inside?</li>
</ul>
<p>For simple linear interpolation and arbitrary control points (at least 3 points not on a single line) I would try this:</p>
<ol>
<li><p><strong>Triangulate control points area</strong></p>
<p>Into non-overlapping triangles covering the whole defined area.</p></li>
<li><p><strong>render triangles</strong></p>
<p>So just rasterize them; see <a href="http://stackoverflow.com/a/39062479/2521214">Algorithm to fill triangle</a> and all the sublinks. You should also interpolate the <code>R,G,B</code> along with the coordinates.</p></li>
<li><p><strong>Create 2 copies of the gradient and extrapolate one with H and the second with V lines</strong></p>
<p>So scan all the H-horizontal lines of the gradient, and if you find 2 known pixels far enough from each other (for example a quarter or half of the gradient size), extrapolate the unknown colors along the whole line. So if the found known endpoints (red) are <code>(x0,y,r0,g0,b0),(x1,y,r1,g1,b1)</code> then set all unknown colors in the same line as:</p>
<pre><code>r = r0+(r1-r0)*(x-x0)/(x1-x0)
g = g0+(g1-g0)*(x-x0)/(x1-x0)
b = b0+(b1-b0)*(x-x0)/(x1-x0)
</code></pre>
<p>Similarly do the same in the copy of the gradient for V-vertical lines. So the points are now <code>(x,y0,r0,g0,b0),(x,y1,r1,g1,b1)</code> and the extrapolation is:</p>
<pre><code>r = r0+(r1-r0)*(y-y0)/(y1-y0)
g = g0+(g1-g0)*(y-y0)/(y1-y0)
b = b0+(b1-b0)*(y-y0)/(y1-y0)
</code></pre>
<p>After this, compare both copies, and if an unknown point was computed in both, set it to the average of the two colors in the target gradient image. Loop this whole process (<strong>#3</strong>) until no new gradient pixel is added.</p></li>
<li><p><strong>use single extrapolated color for the rest</strong></p>
<p>Depending on how you define the control points, some areas will have only 1 extrapolated color (either from H or V lines, but not both), so use the single computed color for those (after <strong>#3</strong> is done).</p></li>
</ol>
<p>Here an example of what I mean by all this:</p>
<p><a href="http://i.stack.imgur.com/l6zBg.png" rel="nofollow"><img src="http://i.stack.imgur.com/l6zBg.png" alt="overview"></a></p>
<p>If you want something simple instead (but not exact) then you can bleed the known control points colors (with smooth filters) to neighboring pixels until the whole gradient is filled and saturated.</p>
<ol>
<li><strong>fill unknown gradient pixels with predefined color meaning not computed</strong></li>
<li><p><strong>set each pixel to average of its computed neighbors</strong></p>
<p>you may do this in separate image to avoid shifting.</p></li>
<li><p><strong>set control points back to original color</strong></p></li>
<li><p><strong>loop #2 until area filled/saturated/or predefined number of iterations</strong></p></li>
</ol>
<p><strong>[Edit1] second solution</strong></p>
<p>OK, I put it together in <strong>C++</strong> with your points/colors and gradient size; here is how it looks (I bleed 100 times with 4-neighbor bleeding, without weights):</p>
<p><a href="http://i.stack.imgur.com/lYYyu.png" rel="nofollow"><img src="http://i.stack.imgur.com/lYYyu.png" alt="bleeding"></a></p>
<p>The image on the left is the input matrix, where I encode in the alpha channel (highest 8 bits) whether a pixel is a reference point, computed, or still undefined. The image on the right is the result after applying the bleeding 100 times. The bleed is simple: take any non-reference point and recompute it as the average of itself and all usable pixels around it (ignoring any undefined colors).</p>
<p>Here is the <strong>C++</strong> code; you can ignore the <strong>GDI</strong> stuff for rendering (beware: my gradient map has the <code>x</code> coordinate first, while yours has <code>y</code> first!)</p>
<pre class="lang-cpp prettyprint-override"><code>//---------------------------------------------------------------------------
const int mxs=7,mys=7,msz=16; // gradient resolution x,y and square size for render
DWORD map[mxs][mys]; // gradient matrix ... undefined color is >= 0xFF000000
// 0x00?????? - reference color
// 0xFF?????? - uncomputed color
// 0xFE?????? - bleeded color
//---------------------------------------------------------------------------
void map_clear() // set all pixels as uncomputed (white with alpha=255)
{
int x,y;
for (x=0;x<mxs;x++)
for (y=0;y<mys;y++)
map[x][y]=0xFFFFFFFF;
}
void map_bleed() // bleed computed colors
{
 int x,y,xx,yy,i,r,g,b,n;
 DWORD tmp[mxs][mys],c;
 const int dx[5]={ 0, 1, 0,-1, 0 };     // offsets of self + 4 neighbors
 const int dy[5]={ 0, 0,-1, 0, 1 };
 for (x=0;x<mxs;x++)
  for (y=0;y<mys;y++)
    {
    c=map[x][y];
    if (DWORD(c&0xFF000000)==0) { tmp[x][y]=c; continue; }  // keep reference/already computed colors
    n=0; r=0; g=0; b=0;
    for (i=0;i<5;i++)                                       // accumulate self + 4 neighbors, skipping undefined colors
        {
        xx=x+dx[i]; yy=y+dy[i];
        if ((xx>=0)&&(xx<mxs)&&(yy>=0)&&(yy<mys)) c=map[xx][yy]; else c=0xFF000000;
        if (DWORD(c&0xFF000000)!=0xFF000000) { r+=c&255; g+=(c>>8)&255; b+=(c>>16)&255; n++; }
        }
    if (!n) { tmp[x][y]=0xFFFFFFFF; continue; }             // no usable neighbor yet -> stays undefined
    tmp[x][y]=((r/n)|((g/n)<<8)|((b/n)<<16))&0x00FFFFFF;
    }
// copy tmp back to map
for (x=0;x<mxs;x++)
for (y=0;y<mys;y++)
map[x][y]=tmp[x][y];
}
void map_draw(TCanvas *can,int x0,int y0) // just renders actual gradient map onto canvas (can ignore this)
{
int x,y,xx,yy;
for (x=0,xx=x0;x<mxs;x++,xx+=msz)
for (y=0,yy=y0;y<mys;y++,yy+=msz)
{
can->Pen->Color=clBlack;
can->Brush->Color=map[x][y]&0x00FFFFFF;
can->Rectangle(xx,yy,xx+msz,yy+msz);
}
}
//---------------------------------------------------------------------------
</code></pre>
<p>And here the usage (your example):</p>
<pre class="lang-cpp prettyprint-override"><code>// clear backbuffer
bmp->Canvas->Brush->Color=clBlack;
bmp->Canvas->FillRect(TRect(0,0,xs,ys));
// init your gradient with reference points
map_clear();
// x y R G B
map[6][0] = (239)|(238<<8)|(185<<16);
map[1][1] = (120)|(131<<8)|(125<<16);
map[6][4] = (184)|(191<<8)|(171<<16);
map[2][6] = (150)|(168<<8)|(158<<16);
map[5][6] = (166)|(180<<8)|(166<<16);
map_draw(bmp->Canvas,msz,msz); // render result (left)
// bleed
for (int i=0;i<100;i++) map_bleed();
map_draw(bmp->Canvas,(mxs+2)*msz,msz); // render result (right)
// refresh window with backbufer (anti-flickering)
Main->Canvas->Draw(0,0,bmp);
</code></pre>
<p>Again, you can ignore all the rendering stuff. The number of bleeds should be at least 2x bigger than the number of pixels in the diagonal so the bleeding covers all the pixels. The more iterations, the more saturated the result; I tried <code>100</code> just as an example and the result looks good, so I did not play with it any more...</p>
<p><strong>[Edit2] and here the algorithm for the second approach</strong></p>
<ol>
<li><p><strong>add flags to interpolated matrix</strong></p>
<p>You need to know if each pixel is <code>reference</code>, <code>undefined</code> or <code>interpolated</code>. You can encode this into the alpha channel, or use a mask (a separate 2D matrix).</p></li>
<li><p><strong>bleed/smooth matrix</strong></p>
<p>Basically, for each non-<code>reference</code> pixel compute its new value as the average of itself and all non-<code>undefined</code> pixels around it (4/8 neighbors). Do not use <code>undefined</code> pixels, and store the computed value into a temporary matrix (so you do not mess up the next pixels; otherwise the bleeding/smoothing would shift the pixels, usually diagonally). This way the undefined areas shrink by 1 pixel per pass. After the whole matrix is done, copy the content of the temporary matrix back to the original one (or swap pointers).</p></li>
<li><p><strong>loop #2 until result is saturated or specific count of iterations</strong></p>
<p>The number of iterations should be at least 2x bigger than the number of diagonal pixels to propagate the reference pixels into the whole matrix. The saturation check can be done in <strong>#2</strong> while copying the temp array into the original one (compute the absolute difference between frames and stop if it is zero or near zero).</p></li>
</ol>
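<p>A minimal Python sketch of the three steps above, on a grid of scalars (for RGB just run it per channel; <code>None</code> plays the role of the undefined marker and <code>reference</code> holds the fixed cells):</p>

```python
def bleed_pass(grid, reference):
    # Step #2: one smoothing pass into a temporary grid. Each non-reference
    # cell becomes the average of itself and its defined 4-neighbours;
    # a cell with no defined neighbour yet stays undefined (None).
    h, w = len(grid), len(grid[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (y, x) in reference:
                out[y][x] = grid[y][x]
                continue
            vals = [grid[y + dy][x + dx]
                    for dy, dx in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))
                    if 0 <= y + dy < h and 0 <= x + dx < w
                    and grid[y + dy][x + dx] is not None]
            out[y][x] = sum(vals) / len(vals) if vals else None
    return out

# Step #1: undefined everywhere except the reference cells.
reference = {(0, 0): 0.0, (2, 2): 1.0}
grid = [[reference.get((y, x)) for x in range(3)] for y in range(3)]

# Step #3: iterate until saturated (here simply a fixed iteration count).
for _ in range(10):
    grid = bleed_pass(grid, reference)
```

<p>This is a simplified scalar variant of the <strong>C++</strong> <code>map_bleed()</code> above; here already-computed cells keep being re-averaged on every pass, which gives the smoothing/saturation behavior the steps describe.</p>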
| 5 |
2016-09-15T08:22:20Z
|
[
"python",
"numpy",
"image-processing",
"matrix",
"scipy"
] |
Interpolating elements of a color matrix on the basis of some given reference elements
| 39,489,089 |
<p>This is more or less a follow up question to <a href="http://stackoverflow.com/questions/39485178/two-dimensional-color-ramp-256x256-matrix-interpolated-from-4-corner-colors?noredirect=1#comment66289716_39485178">Two dimensional color ramp (256x256 matrix) interpolated from 4 corner colors</a> that was profoundly answered by jadsq today.</p>
<p>For linear gradients the previous answer works very well. However, if one wants to have better control of the stop colors of the gradient, this method seems not to be very practical. What might help in this situation is to have some reference color points in a matrix (lookup table) which are used to interpolate color values for the empty position in the look-up table. What I mean might be easier read out of the below image.</p>
<p><a href="http://i.stack.imgur.com/AwPYk.png" rel="nofollow"><img src="http://i.stack.imgur.com/AwPYk.png" alt="enter image description here"></a></p>
<p>The whole idea is taken from <a href="http://cartography.oregonstate.edu/pdf/2006_JennyHurni_SwissStyleShading.pdf" rel="nofollow">http://cartography.oregonstate.edu/pdf/2006_JennyHurni_SwissStyleShading.pdf</a> page 4 to 6. I've read through the paper, I understand theoretically what is going on but failing miserably because of my low experience with interpolation methods and to be honest, general math skills. What might also be of interest is, that they use a sigmoid Gaussian bell as interpolation method (page 6). They argue that Gaussian weighting yielded the visually best results and was simple to compute (equation 1, with k=0.0002 for a table of 256 per 256 cells).</p>
<hr>
<p>Edit (better illustrations):</p>
<p><a href="http://i.stack.imgur.com/cQVW1.png" rel="nofollow"><img src="http://i.stack.imgur.com/cQVW1.png" alt="Weighting functions for interpolating colours"></a></p>
<p><a href="http://i.stack.imgur.com/hGXfz.png" rel="nofollow"><img src="http://i.stack.imgur.com/hGXfz.png" alt="Equation 1"></a></p>
<hr>
<p>I have the other parts of their presented methods in place but filling the empty values in the matrix really is a key part and keeps me from continuing. Once again, thank you for your help!</p>
<p>What I have right now:</p>
<pre><code>#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
# the matrix with the reference color elements
ref=np.full([7, 7, 3], [255,255,255], dtype=np.uint8)
ref[0][6] = (239,238,185)
ref[1][1] = (120,131,125)
ref[4][6] = (184,191,171)
ref[6][2] = (150,168,158)
ref[6][5] = (166,180,166)
# s = ref.shape
#
# from scipy.ndimage.interpolation import zoom
# zooming as in http://stackoverflow.com/a/39485650/1230358 doesn't seem to work here anymore, because we have no corner point as reference but randomly distributed points within the matrix. As far as I know ...
# zoomed=zoom(ref,(256/s[0],256/s[1],1),order=1)
plt.subplot(211)
plt.imshow(ref,interpolation='nearest')
# plt.subplot(212)
# plt.imshow(zoomed,interpolation='nearest')
plt.show()
</code></pre>
| 5 |
2016-09-14T11:18:00Z
| 39,545,478 |
<p>I'm here again (a bit late, sorry, I just found the question) with a fairly short solution using <code>griddata</code> from <code>scipy.interpolate</code>. That function is meant to do precisely what you want: interpolate values on a grid from just a few points. There are two issues: you won't be able to use fancy weights, only the predefined interpolation methods, and the holes around the border can't be directly interpolated either, so here I completed them with the nearest values.</p>
<p>Here's the demo code :</p>
<pre><code># the matrix with the reference color elements
ref=np.full([7, 7, 3], 0 , dtype=np.uint8)
#Note I fill with 0 instead of 255
ref[0][6] = (239,238,185)
ref[1][1] = (120,131,125)
ref[4][6] = (184,191,171)
ref[6][2] = (150,168,158)
ref[6][5] = (166,180,166)
from scipy.interpolate import griddata
#we format the data to feed in griddata
points=np.where(ref != 0)
values=ref[points]
grid_x,grid_y,grid_z=np.mgrid[0:7,0:7,0:3]
#we compute the interpolation
filled_grid=griddata(points, values, (grid_x, grid_y, grid_z), method='linear')
filled_grid=np.array(filled_grid,dtype=np.uint8) #we convert the float64 to uint8
#filled_grid still has holes around the border
#here i'll complete the holes with the nearest value
points=np.where(filled_grid != 0)
values=filled_grid[points]
near_grid=griddata(points, values, (grid_x, grid_y, grid_z), method='nearest')
completed_grid=(near_grid*(filled_grid == 0))+filled_grid
plt.subplot(131)
plt.imshow(ref,interpolation='nearest')
plt.subplot(132)
plt.imshow(filled_grid,interpolation='nearest')
plt.subplot(133)
plt.imshow(completed_grid,interpolation='nearest')
plt.show()
</code></pre>
<p><strong>Output:</strong>
<a href="http://i.stack.imgur.com/ZTSrr.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZTSrr.png" alt="output of the code"></a></p>
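<p>If one does want the Gaussian bell weighting from the paper (equation 1), which <code>griddata</code> cannot express, it is short enough to write directly. Below is a sketch for a single scalar channel (run it once per R/G/B channel); note that <code>k=0.0002</code> is the paper's value for a 256 by 256 table, so a tiny grid needs a much larger <code>k</code>:</p>

```python
import math

def gaussian_table(refs, size, k):
    # refs maps (row, col) -> scalar value; every cell of the size x size
    # table becomes a Gaussian-distance-weighted average of all references.
    table = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            num = den = 0.0
            for (ry, rx), v in refs.items():
                w = math.exp(-k * ((x - rx) ** 2 + (y - ry) ** 2))
                num += w * v
                den += w
            table[y][x] = num / den
    return table
```

<p>At a reference cell its own weight is 1 while far-away references contribute almost nothing, so the table reproduces the reference values closely and blends smoothly in between.</p>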
| 1 |
2016-09-17T10:02:07Z
|
[
"python",
"numpy",
"image-processing",
"matrix",
"scipy"
] |
gnuplot: the "sum [<var> = <start>:<end>] <expression>" strategy
| 39,489,107 |
<p>In the following script, I would like to plot <code>y(x)</code>:</p>
<p><a href="http://i.stack.imgur.com/LrUF9.png" rel="nofollow"><img src="http://i.stack.imgur.com/LrUF9.png" alt="enter image description here"></a></p>
<p>and this function <code>u(x)</code>:</p>
<p><a href="http://i.stack.imgur.com/KiqoI.png" rel="nofollow"><img src="http://i.stack.imgur.com/KiqoI.png" alt="enter image description here"></a></p>
<p><strong>EDIT</strong></p>
<p>Plotting <code>y(x)</code> is easy, but I am having problems plotting the function <code>u(x)</code>.</p>
<p><code>u(x)</code> is just the same function as <code>y(x)</code>, but summing every step.</p>
<p>Therefore, in order to plot <code>u(x)</code>, I have tried the <code>sum [<var> = <start>:<end>] <expression></code> strategy. I have implemented this notation as: </p>
<p><code>replot sum[x=1:6] y(x) with line lt -1 lw 1 lc 2 title "u(x)"</code></p>
<p>in the following script:</p>
<pre><code> #
set ylabel "y" font ", 20"
set xlabel 'x' font ", 20"
set format y "%9.4f"
set xrange [1:6]
set yrange [0:20]
set xtics font ", 15"
set ytics font ", 15"
set key font ",17" # Changes the font of the letters of the legend
y(x) = (2*x)/(x**2)
plot y(x) with line lt -1 lw 1 lc 1 title "y(x)"
replot sum[x=1:6] y(x) with line lt -1 lw 1 lc 2 title "u(x)"
pause -1
set term post enh color eps
set output "y_x.eps"
replot
</code></pre>
<p>I am not sure if the <code>sum[x=1:6] y(x)</code> strategy is indeed plotting <code>u(x)</code>.</p>
<p>In order to check this, we can do the following: </p>
<p>we know that:</p>
<p><a href="http://i.stack.imgur.com/mq3O0.png" rel="nofollow"><img src="http://i.stack.imgur.com/mq3O0.png" alt="enter image description here"></a></p>
<p>So, what is the value in gnuplot for <code>u(6)</code>? If you run that script, you get:</p>
<p><a href="http://i.stack.imgur.com/XoFFx.png" rel="nofollow"><img src="http://i.stack.imgur.com/XoFFx.png" alt="enter image description here"></a></p>
<p>zooming:</p>
<p>I see that <code>u(6)</code> is reaching the value of <code>2.0000</code>, and not <code>3.5835</code>.</p>
<p>This makes me think that the <code>replot sum[x=1:6] u(x)</code> is not plotting u(x_i) (second formula)</p>
<p>How could I plot <code>u(x)</code> ?.</p>
<p><a href="http://i.stack.imgur.com/vIg6P.png" rel="nofollow"><img src="http://i.stack.imgur.com/vIg6P.png" alt="enter image description here"></a></p>
<p><strong>EDIT 2</strong></p>
<p>Running <code>replot sum[i=1:6] y(i)</code> in this script:</p>
<pre><code>set ylabel "y" font ", 20"
set xlabel 'x' font ", 20"
set format y "%9.4f"
set xrange [1:6]
set yrange [0:20]
set xtics font ", 15"
set ytics font ", 15"
set key font ",17" # Changes the font of the letters of the legend
y(x) = (2*x)/(x**2)
plot y(x) with line lt -1 lw 1 lc 1 title "y(x)"
replot sum[i=1:6] y(i) with line lt -1 lw 1 lc 2 title "u(x)"
pause -1
set term post enh color eps
set output "y_x.eps"
replot
</code></pre>
<p>produces the following: <code>u(6) = 3.000</code>:</p>
<p><a href="http://i.stack.imgur.com/j9WMA.png" rel="nofollow"><img src="http://i.stack.imgur.com/j9WMA.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/w3r72.png" rel="nofollow"><img src="http://i.stack.imgur.com/w3r72.png" alt="enter image description here"></a></p>
<p><strong>EDIT 3</strong></p>
<p>Using <code>y(x) = (2.*x)/(x**2)</code> or <code>y(x) = (2.*x)/(x**2.)</code>, I get <code>u(6) = 4.9</code>:</p>
<p><a href="http://i.stack.imgur.com/0rTqQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/0rTqQ.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/YSe3P.png" rel="nofollow"><img src="http://i.stack.imgur.com/YSe3P.png" alt="enter image description here"></a></p>
<p><strong>EDIT 4</strong></p>
<p>Making:</p>
<pre><code>N=100
replot sum[i=0:N-1] y(1. + (i+0.5)*5./N)*5./N with line lt -1 lw 1 lc 5 title "sum(x)"
</code></pre>
<p>produces a constant (cyan line) <code>y=3.58</code>. This is the result of the numerical approximation of the summation.</p>
<p><a href="http://i.stack.imgur.com/uoqAP.png" rel="nofollow"><img src="http://i.stack.imgur.com/uoqAP.png" alt="enter image description here"></a></p>
<p>What I really want to achieve is to plot the function <code>u(x)</code> for all the values of <code>x_{i}</code>... in which at every step <code>i</code>, the summation over all the previous steps is performed, and a new value of <code>u</code> is generated. I would like to plot the function <code>u(x)</code>...</p>
| 0 |
2016-09-14T11:18:54Z
| 39,489,728 |
<p>In fact, what is actually plotted as <code>u(x)</code> is just <code>6*y(x)</code>; you can check this by replacing in your script the line</p>
<pre><code>plot y(x) with line lt -1 lw 1 lc 1 title "y(x)"
</code></pre>
<p>with</p>
<pre><code>plot 6*y(x) with line lt -1 lw 1 lc 1 title "y(x)"
</code></pre>
<p>that the line will then coincide with <code>u(x)</code>.</p>
<p>Also, note that the function <code>y(x)</code> is not integrable on the interval <code>[0, 6]</code> since it behaves at <code>0</code> like <code>1/x</code> (in your formula, you would obtain the expression <code>ln(0)</code>).</p>
| 1 |
2016-09-14T11:50:36Z
|
[
"python",
"sum",
"gnuplot"
] |
How can I check how much of a path exists in python
| 39,489,111 |
<p>Let's say on my filesystem the following directory exists:</p>
<pre><code>/foo/bar/
</code></pre>
<p>In my python code I have the following path:</p>
<pre><code>/foo/bar/baz/quix/
</code></pre>
<p>How can I tell that only the <code>/foo/bar/</code> part of the path exists? </p>
<p>I can walk the path recursively and check it step by step, but is there an easier way?</p>
| 2 |
2016-09-14T11:19:10Z
| 39,489,433 |
<p>I don't quite get your requirements, i.e. whether you want every path component checked or only up to some specific level. But for a simple sanity check you can just iterate through the full path, building it up component by component and testing each prefix:</p>
<pre><code>import os

_path = '/'
for i in filter(lambda s: s, sample_path.split('/')):
    _path = os.path.join(_path, i)
    if os.path.exists(_path):
        print "correct path"
</code></pre>
| 1 |
2016-09-14T11:35:44Z
|
[
"python",
"path",
"directory"
] |
How can I check how much of a path exists in python
| 39,489,111 |
<p>Let's say on my filesystem the following directory exists:</p>
<pre><code>/foo/bar/
</code></pre>
<p>In my python code I have the following path:</p>
<pre><code>/foo/bar/baz/quix/
</code></pre>
<p>How can I tell that only the <code>/foo/bar/</code> part of the path exists? </p>
<p>I can walk the path recursively and check it step by step, but is there an easier way?</p>
| 2 |
2016-09-14T11:19:10Z
| 39,489,473 |
<p>Well, I think the only way is to work recursively, though I would work up the directory tree. The code isn't too hard to implement:</p>
<pre><code>import os

def doesItExist(directory):
    if not os.path.exists(directory):
        return doesItExist(os.path.dirname(directory))
    else:
        print "Found: " + directory
        return directory
</code></pre>
| 0 |
2016-09-14T11:37:52Z
|
[
"python",
"path",
"directory"
] |
How can I check how much of a path exists in python
| 39,489,111 |
<p>Let's say on my filesystem the following directory exists:</p>
<pre><code>/foo/bar/
</code></pre>
<p>In my python code I have the following path:</p>
<pre><code>/foo/bar/baz/quix/
</code></pre>
<p>How can I tell that only the <code>/foo/bar/</code> part of the path exists? </p>
<p>I can walk the path recursively and check it step by step, but is there an easier way?</p>
| 2 |
2016-09-14T11:19:10Z
| 39,489,505 |
<p>No easy function in the standard lib but not really a difficult one to make yourself.</p>
<p>Here's a function that takes a path and returns only the path that does exist.</p>
<pre><code>In [129]: def exists(path):
...: if os.path.exists(path): return path
...: return exists(os.path.split(path)[0])
...:
In [130]: exists("/home/sevanteri/src/mutta/oisko/siellä/jotain/mitä/ei ole/")
Out[130]: '/home/sevanteri/src'
</code></pre>
| 4 |
2016-09-14T11:39:22Z
|
[
"python",
"path",
"directory"
] |
How can I check how much of a path exists in python
| 39,489,111 |
<p>Let's say on my filesystem the following directory exists:</p>
<pre><code>/foo/bar/
</code></pre>
<p>In my python code I have the following path:</p>
<pre><code>/foo/bar/baz/quix/
</code></pre>
<p>How can I tell that only the <code>/foo/bar/</code> part of the path exists? </p>
<p>I can walk the path recursively and check it step by step, but is there an easier way?</p>
| 2 |
2016-09-14T11:19:10Z
| 39,489,512 |
<p>I think a simple <code>while</code> loop with <code>os.path.dirname()</code> will satisfy the requirement:</p>
<pre><code>path_string = '/home/moin/Desktop/my/dummy/path'
while path_string:
if not os.path.exists(path_string):
path_string = os.path.dirname(path_string)
else:
break
# path_string = '/home/moin/Desktop' # which is valid path in my system
</code></pre>
| 1 |
2016-09-14T11:39:50Z
|
[
"python",
"path",
"directory"
] |
Issue with pie chart matplotlib - startangle / len()
| 39,489,150 |
<p>I have a script which creates a pie chart based on CSV files. My problem started when I was reading a CSV that had only one row (e.g. <code>percent = [100]</code>). Is there any limitation of using a pie chart where it will not show 100% for only one item? It seems that the error is related to either the <code>startangle</code> or <code>explode</code> arguments.</p>
<p>My code is:</p>
<pre><code>percent = [100]
plt.pie(percent, # data
explode=(0), # offset parameters
#labels=country, # slice labels - removed to hid labels and added labels=country in legend()
colors=colors, # array of colours
autopct='%1.0f%%', # print the values inside the wedges - add % to the values
shadow=False, # enable shadow
startangle=70 # starting angle
)
plt.axis('equal')
plt.legend(loc='best', labels=country)
plt.tight_layout()
</code></pre>
<p>Error reported at the line with <code>startangle=70</code>:</p>
<pre><code>if len(x) != len(explode):
TypeError: object of type 'float' has no len()
</code></pre>
<p>Thanks!</p>
| -2 |
2016-09-14T11:21:01Z
| 39,490,979 |
<p>Change the <code>explode</code> parameter to a <code>list</code>:</p>
<pre><code>percent = [100]
explode = [0]
plt.pie(percent, explode=explode, ...)
</code></pre>
<p>If you have more values you can use a <code>tuple</code>, but a single parenthesized value such as <code>(0)</code> is seen as a plain <code>int</code>, not a tuple:</p>
<pre><code>>>> type((0))
<type 'int'>
>>> type((0, 1))
<type 'tuple'>
>>> type([0])
<type 'list'>
</code></pre>
| 1 |
2016-09-14T12:54:16Z
|
[
"python",
"matplotlib",
"charts"
] |
How to scrape real time streaming data with Python?
| 39,489,168 |
<p>I was trying to scrape the number of flights for this webpage <a href="https://www.flightradar24.com/56.16,-49.51" rel="nofollow">https://www.flightradar24.com/56.16,-49.51</a></p>
<p>The number is highlighted in the picture below:
<a href="http://i.stack.imgur.com/Zvsmf.png" rel="nofollow"><img src="http://i.stack.imgur.com/Zvsmf.png" alt="enter image description here"></a></p>
<p>The number is updated every 8 seconds.</p>
<p>This is what I tried with BeautifulSoup:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
r=requests.get("https://www.flightradar24.com/56.16,-49.51")
c=r.content
soup=BeautifulSoup(c,"html.parser")
value=soup.find_all("span",{"class":"choiceValue"})
print(value)
</code></pre>
<p>But that always returns 0:</p>
<pre><code>[<span class="choiceValue" id="menuPlanesValue">0</span>]
</code></pre>
<p>View source also shows 0, so I understand why BeautifulSoup returns 0 too.</p>
<p>Anyone know any other method to get the current value?</p>
| 0 |
2016-09-14T11:22:10Z
| 39,489,328 |
<p>The problem with your approach is that the page first loads a static view and then performs periodic requests to refresh the data. If you look at the network tab in the developer console in Chrome (for example), you'll see the requests to <a href="https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=59.09,52.64,-58.77,-47.71&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1">https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=59.09,52.64,-58.77,-47.71&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1</a></p>
<p>The response is regular json:</p>
<pre><code>{
"full_count": 11879,
"version": 4,
"afefdca": [
"A86AB5",
56.4288,
-56.0721,
233,
38000,
420,
"0000",
"T-F5M",
"B763",
"N641UA",
1473852497,
"LHR",
"ORD",
"UA929",
0,
0,
"UAL929",
0
],
...
"aff19d9": [
"A12F78",
56.3235,
-49.3597,
251,
36000,
436,
"0000",
"F-EST",
"B752",
"N176AA",
1473852497,
"DUB",
"JFK",
"AA291",
0,
0,
"AAL291",
0
],
"stats": {
"total": {
"ads-b": 8521,
"mlat": 2045,
"faa": 598,
"flarm": 152,
"estimated": 464
},
"visible": {
"ads-b": 0,
"mlat": 0,
"faa": 6,
"flarm": 0,
"estimated": 3
}
}
}
</code></pre>
<p>I'm not sure if this API is protected in any way, but it seems like I can access it without any issues using curl.</p>
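<p>Once you have that JSON (for example via <code>requests.get(url).json()</code>), the total flight count can be derived from the <code>stats</code> block. A minimal sketch using the structure shown above rather than a live request:</p>

```python
# Sketch: sum the per-source counts from the "stats" block of the feed JSON.
# The dict below mimics the response structure shown above; in practice it
# would come from requests.get(url).json().
data = {
    "full_count": 11879,
    "stats": {
        "total": {"ads-b": 8521, "mlat": 2045, "faa": 598,
                  "flarm": 152, "estimated": 464},
    },
}

total_flights = sum(data["stats"]["total"].values())
print(total_flights)  # 11780
```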
<p>More info:</p>
<ul>
<li><a href="http://aviation.stackexchange.com/questions/3052/is-there-an-api-to-get-real-time-faa-flight-data">aviation.stackexchange - Is there an API to get real-time FAA flight data?</a></li>
<li><a href="http://forum.flightradar24.com/threads/24-API-access/page3">Flightradar24 Forum - API access</a> (meaning your use case is probably discouraged)</li>
</ul>
| 5 |
2016-09-14T11:30:16Z
|
[
"python",
"web-scraping",
"beautifulsoup"
] |
How to scrape real time streaming data with Python?
| 39,489,168 |
<p>I was trying to scrape the number of flights for this webpage <a href="https://www.flightradar24.com/56.16,-49.51" rel="nofollow">https://www.flightradar24.com/56.16,-49.51</a></p>
<p>The number is highlighted in the picture below:
<a href="http://i.stack.imgur.com/Zvsmf.png" rel="nofollow"><img src="http://i.stack.imgur.com/Zvsmf.png" alt="enter image description here"></a></p>
<p>The number is updated every 8 seconds.</p>
<p>This is what I tried with BeautifulSoup:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
r=requests.get("https://www.flightradar24.com/56.16,-49.51")
c=r.content
soup=BeautifulSoup(c,"html.parser")
value=soup.find_all("span",{"class":"choiceValue"})
print(value)
</code></pre>
<p>But that always returns 0:</p>
<pre><code>[<span class="choiceValue" id="menuPlanesValue">0</span>]
</code></pre>
<p>View source also shows 0, so I understand why BeautifulSoup returns 0 too.</p>
<p>Anyone know any other method to get the current value?</p>
| 0 |
2016-09-14T11:22:10Z
| 39,489,711 |
<p>So based on what @Andre has found out, I wrote this code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
def get_count():
url = "https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=59.09,52.64,-58.77,-47.71&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1"
# Request with fake header, otherwise you will get an 403 HTTP error
r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# Parse the JSON
data = r.json()
counter = 0
# Iterate over the elements to get the number of total flights
for element in data["stats"]["total"]:
counter += data["stats"]["total"][element]
return counter
while True:
print(get_count())
time.sleep(8)
</code></pre>
<p>The code should be self-explanatory: all it does is print the current flight count every 8 seconds :)</p>
<p><strong>Note:</strong> <em>The values are similar to the ones on the website, but not the same. This is because it is unlikely that the Python script and the website send a request at exactly the same time. If you want more accurate results, just make a request every 4 seconds, for example.</em></p>
<p>Use this code as you want, extend it or whatever. Hope this helps!</p>
| 2 |
2016-09-14T11:49:53Z
|
[
"python",
"web-scraping",
"beautifulsoup"
] |
Text Classification for multiple label
| 39,489,197 |
<p>I am doing text classification with a convolutional neural network. I used health documents (ICD-9-CM codes) for my project and I used the same model as <a href="http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/" rel="nofollow">dennybritz</a>, but my data has 36 labels. I used one-hot encoding to encode my labels.</p>
<p>Here is my problem: when I run my code on data that has one label per document, the accuracy is good (0.8 to 1). If I run it on data that has more than one label, the accuracy is significantly reduced. </p>
<p>For example: a document has single label as <code>"782.0"</code>: <code>[0 0 1 0 ... 0]</code>,<br>
a document has multiple label as <code>"782.0 V13.09 593.5"</code>: <code>[1 0 1 0 ... 1]</code>.</p>
<p>Could anyone suggest why this happen and how to improve it?</p>
| 0 |
2016-09-14T11:23:23Z
| 39,520,251 |
<p>The label encoding seems correct. If you have multiple correct labels, <code>[1 0 1 0 ... 1]</code> looks totally fine. The loss function used in Denny's <a href="http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/" rel="nofollow">post</a> is <code>tf.nn.softmax_cross_entropy_with_logits</code>, which is <strong>not</strong> the loss function you want for a multi-label problem, since its documentation states: </p>
<blockquote>
<p>Computes softmax cross entropy between logits and labels.</p>
<p>Measures the probability error in discrete classification tasks in
which the classes are <strong>mutually exclusive</strong> (each entry is in exactly one class).</p>
</blockquote>
<h3>Multiple predictions</h3>
<p>First you might want to have multiple predictions (for example, the top k). Then you should consider replacing </p>
<pre><code>self.predictions = tf.argmax(self.scores, 1, name="predictions")
</code></pre>
<p>with <code>tf.nn.top_k()</code>, for example top 2 labels.</p>
<pre><code>values, self.predictions = tf.nn.top_k(self.scores, 2)
</code></pre>
<p>Further, you can remove predictions if their <code>values</code> are less than a threshold.</p>
<h3>Fix the accuracy measure</h3>
<p>In order to measure the accuracy correctly for a multi-label problem, the code below needs to be changed.</p>
<pre><code># Calculate Accuracy
with tf.name_scope("accuracy"):
correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
</code></pre>
<p>The logic of <code>correct_predictions</code> above is incorrect when you can have multiple correct labels. For example, say <code>num_classes=4</code> and labels 0 and 2 are correct, so your <code>input_y=[1, 0, 1, 0]</code>. The <code>correct_predictions</code> would need to break the tie between index 0 and index 2. I am not sure how <code>tf.argmax</code> breaks ties, but if it does so by choosing the smaller index, a prediction of label 2 is always considered wrong, which definitely hurts your accuracy measure. </p>
<p>Actually in a multi-label problem, <a href="https://en.wikipedia.org/wiki/Precision_and_recall" rel="nofollow">precision and recall</a> are better metrics than accuracy. Also you can consider using precision@k (<code>tf.nn.in_top_k</code>) to report classifier performance.</p>
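<p>To make those metrics concrete, here is a small framework-agnostic sketch of precision and recall for a single multi-label example (the TensorFlow ops mentioned above compute the same quantities in-graph):</p>

```python
# Multi-label precision/recall for a single example, with plain Python sets.
true_labels = {0, 2}        # from input_y = [1, 0, 1, 0]
predicted = {2, 3}          # e.g. the top-2 indices returned by tf.nn.top_k

tp = len(true_labels & predicted)   # labels predicted correctly
precision = tp / len(predicted)     # fraction of predictions that were right
recall = tp / len(true_labels)      # fraction of true labels that were found
print(precision, recall)  # 0.5 0.5
```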
| 1 |
2016-09-15T20:57:47Z
|
[
"python",
"machine-learning",
"tensorflow",
"text-classification"
] |
How to debug / correctly setup git-multimail
| 39,489,274 |
<p>I would like to use <a href="https://github.com/git-multimail/git-multimail" rel="nofollow">git-multimail</a> as post receive hook in one of my git repositories (no gitolite used). Unfortunately, I cannot get it work, and I have hardly any experience using Python.</p>
<h2>What I did so far:</h2>
<ol>
<li>I added the following block to the <code>project.git/config</code> file:</li>
</ol>
<p> </p>
<pre><code>[multimailhook]
mailingList = email@example.com
from = email@example.com
envelopeSender = email@example.com
mailer = smtp
smtpServer = smtp.mydomain.com
smtpUser = myUser
smtpPass = myPassword
</code></pre>
<p>Please note that I do not know whether "smtp", which is defined in the <code>mailer</code> variable, is installed on my machine.</p>
<ol start="2">
<li>I copied the current <code>git_multimail.py</code> file into <code>project.git/hooks</code>.</li>
<li>I created a <code>project.git/hook/post-receive</code> file with the following content. The file is executable, I copied this from <a href="https://github.com/git-multimail/git-multimail/blob/master/git-multimail/post-receive.example" rel="nofollow">https://github.com/git-multimail/git-multimail/blob/master/git-multimail/post-receive.example</a></li>
</ol>
<p> </p>
<pre><code>#! /usr/bin/env python
import sys
import os
import git_multimail
config = git_multimail.Config('multimailhook')
try:
environment = git_multimail.GenericEnvironment(config=config)
#environment = git_multimail.GitoliteEnvironment(config=config)
except git_multimail.ConfigurationException:
sys.stderr.write('*** %s\n' % sys.exc_info()[1])
sys.exit(1)
mailer = git_multimail.choose_mailer(config, environment)
git_multimail.run_as_post_receive_hook(environment, mailer)
</code></pre>
<h2>What happens:</h2>
<p>When I push a change, a file <code>project.git/hooks/git_multimail.pyc</code> is created, but no email is sent.</p>
<p>Doing a configuration test using <code>GIT_MULTIMAIL_CHECK_SETUP=true python git_multimail.py</code> as described on <a href="https://github.com/git-multimail/git-multimail/blob/master/doc/troubleshooting.rst" rel="nofollow">https://github.com/git-multimail/git-multimail/blob/master/doc/troubleshooting.rst</a> tells me that <code>git-multimail seems properly set up</code></p>
<p>Is there a way to log something like an output of the script? What can I do to find out what is not working? Are there even errors in my files?</p>
<p>Thanks in advance.</p>
| 2 |
2016-09-14T11:26:46Z
| 39,494,389 |
<p>OK guys, the error was about as small as it could be. I made just one very small mistake in the post-receive hook file: the <code>sys.exit(1)</code> command is not indented.</p>
<p>So, the WRONG version from my question:</p>
<pre><code>try:
    environment = git_multimail.GenericEnvironment(config=config)
except git_multimail.ConfigurationException:
    sys.stderr.write('*** %s\n' % sys.exc_info()[1])
sys.exit(1)
</code></pre>
<p>CORRECT is (compare last line):</p>
<pre><code>try:
    environment = git_multimail.GenericEnvironment(config=config)
except git_multimail.ConfigurationException:
    sys.stderr.write('*** %s\n' % sys.exc_info()[1])
    sys.exit(1)
</code></pre>
<p>Like I said, I hardly know Python, so I did not pay attention to the indents. After correcting this, the email was sent, so feel free to use the above steps as a little tutorial for setting up git-multimail the "easiest" way. (I did not find a tutorial for this exact solution.)</p>
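<p>To see the effect of that one indent: at top level the exit call runs unconditionally, so even a successful setup terminates the hook before it can send any mail. A tiny sketch, with a stand-in for <code>sys.exit</code> so it can run to completion:</p>

```python
log = []

def fake_exit():
    # stand-in for sys.exit(1) so the sketch can run to completion
    log.append("exited")

try:
    environment = "ok"          # stand-in for git_multimail.GenericEnvironment(...)
except Exception:
    log.append("error logged")
fake_exit()                     # unindented: reached on success and failure alike

print(log)  # ['exited'] -- the hook exits before any mail is sent
```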
| 0 |
2016-09-14T15:31:23Z
|
[
"python",
"git",
"githooks",
"post-receive-email"
] |
How to validate if a user input is both a float and with a given range in Python?
| 39,489,400 |
<p>I'm trying to validate a user input to check that it is both a number (float) and within a range (0-1). I have used try/except to check whether the input is a float, as below:</p>
<pre><code>while True:
try:
rate=input(": ")
rate=float(rate)
break
except ValueError:
print("That was not a valid numerical value, please try again")
</code></pre>
<p>This works for checking if the input is numerical (floats are accepted) however I cannot make it check both if its numerical and within a range (0,1) this needs to return the rate to my main code.</p>
<p>I am able to validate whether an input is within a range; I just can't work out how to do both checks, so that, for example, if a user enters 3.8 they get an error message and can re-enter a value, and if they then enter a string it does not crash the code.</p>
| 0 |
2016-09-14T11:34:07Z
| 39,489,500 |
<p>You can consider using <code>try-except-else</code> in the following manner:</p>
<pre><code>min_val = 1
max_val = 10
while True:
rate = input(": ")
try:
rate = float(rate)
except ValueError:
print("That was not a valid numerical value, please try again")
else:
if min_val < rate < max_val:
break
else:
print("This number is not in the required range")
</code></pre>
<p>This will require the input to be a number in the range <code>min_val < rate < max_val</code>. Note that the <code>else</code> block is executed only if no <code>exception</code> was raised.</p>
<p>Another approach would be to use the already catched <code>ValueError</code> to raise your own:</p>
<pre><code>min_val = 1
max_val = 10
while True:
rate = input(": ")
try:
rate = float(rate)
if not min_val < rate < max_val:
raise ValueError
except ValueError:
print("That was not a valid numerical value, please try again")
else:
break
</code></pre>
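<p>If you want the check reusable (and testable without <code>input()</code>), the parsing and range test can also be factored into a helper that returns <code>None</code> on bad input; the prompt loop then just retries until it gets a value. A sketch using the 0-1 range from the question:</p>

```python
def parse_rate(text, lo=0.0, hi=1.0):
    """Return text as a float if it parses and lies strictly in (lo, hi), else None."""
    try:
        rate = float(text)
    except ValueError:
        return None
    return rate if lo < rate < hi else None

print(parse_rate("0.5"))   # 0.5
print(parse_rate("3.8"))   # None -- out of range
print(parse_rate("abc"))   # None -- not a number
```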
| 3 |
2016-09-14T11:39:08Z
|
[
"python",
"python-3.x"
] |
Access m2m relationships on the save method of a newly created instance
| 39,489,539 |
<p>I'd like to send emails only when Order instances are created. In the email template, I need to access the m2m relationships. Unfortunately, it seems the m2m relations are not yet populated, and the <code>itemmembership_set.all()</code> method returns an empty list.</p>
<p>Here is my code:</p>
<pre><code>class Item(models.Model):
...
class Order(models.Model):
...
items = models.ManyToManyField(Item, through='ItemMembership')
def save(self, *args, **kwargs):
pk = self.pk
super(Order, self).save(*args, **kwargs)
        # If the instance is being created, send an email with the order
        # details.
if not pk:
self.send_details_email()
def send_details_email(self):
assert len(self.itemmembership_set.all()) != 0
class ItemMembership(models.Model):
order = models.ForeignKey(Order)
item = models.ForeignKey(Item)
quantity = models.PositiveSmallIntegerField(default=1)
</code></pre>
| 0 |
2016-09-14T11:41:01Z
| 39,503,978 |
<p>Some of the comments suggested using signals. While you can use signals, specifically the <code>m2m_changed</code> signal, this will always fire whenever you modify the m2m fields. As far as I know, there is no way for the sender model (in your sample, that is <code>ItemMembership</code>) to know if the associated <code>Order</code> instance was just created or not.</p>
<p>Sure, you can probably use the <code>cache</code> framework to set a temporary flag upon calling <code>save()</code> of an <code>Order</code> object, then read that same flag on the <code>m2m_changed</code> signal and delete the flag when it is over. The downside is you have to validate the process, and it defeats the purpose of using signals, which is to decouple things.</p>
<p>My suggestion is to totally remove all those email sending functionalities from your models. Implement it as a helper function instead, and then just invoke the helper function explicitly after an <code>Order</code> object with its associated <code>ItemMembership</code> objects have been successfully created. IMHO, it also makes debugging a lot easier.</p>
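<p>A rough sketch of that approach, with the Django specifics stubbed out as plain dicts (all names here are illustrative, not taken from your models):</p>

```python
# Hypothetical shape of the suggested refactor: creation and notification are
# separate steps, and the email helper is called explicitly only after all
# related rows exist.
def send_order_details_email(order, memberships, send=print):
    lines = ["Order %s" % order["id"]]
    for m in memberships:
        lines.append("  %s x %d" % (m["item"], m["quantity"]))
    send("\n".join(lines))

# ... in the view, once the Order and its ItemMembership rows are saved:
order = {"id": 42}
memberships = [{"item": "widget", "quantity": 2}]
send_order_details_email(order, memberships)
```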
| 1 |
2016-09-15T05:55:18Z
|
[
"python",
"django",
"django-models",
"m2m"
] |
Python Selenium get image name from website
| 39,489,612 |
<p>I am trying to scrape a website for my project, but I'm having trouble scraping image names via Selenium from this <a href="http://comparefirst.sg/wap/productsListEvent.action?prodGroup=whole&pageAction=prodlisting" rel="nofollow">website</a></p>
<p><a href="http://i.stack.imgur.com/CR0l6.png" rel="nofollow"><img src="http://i.stack.imgur.com/CR0l6.png" alt="enter image description here"></a></p>
<p>with the below code, I am able to use selenium to return me the <code>text</code> data from the website</p>
<pre><code>results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
company_name = result.find_element_by_tag_name("h3").get_attribute("innerText")
product_name = result.find_element_by_id('sProdName').get_attribute("innerText")
product_paymode = result.find_element_by_id('paymode').get_attribute("innerText")
</code></pre>
<p>I was told to use <code>get_attribute("innerText")</code> because there are several items hidden, and <code>get_attribute("innerText")</code> would help me get the hidden items. (True enough, it works)</p>
<p>my question is: How do I scrape the <code>prod-feature-icon</code> class, to tell me if that picture is <code>active</code> or not??</p>
| 1 |
2016-09-14T11:45:17Z
| 39,489,693 |
<p>Why not use <a href="http://selenium-python.readthedocs.io/locating-elements.html" rel="nofollow">find_element_by_class_name</a> ?</p>
<pre><code>feature_icon = result.find_element_by_class_name("prod-feature-icon")
</code></pre>
<p>However, it's worth noting that the element with this class name is actually a <code>UL</code>; within it there are several images, so you need to decide which image exactly you want to work with. Alternatively, you could iterate through them with</p>
<pre><code>for item in feature_icon.find_elements_by_tag_name('img'):
print(item.get_attribute('src'))
</code></pre>
<p>Of course, this still wouldn't tell you whether the item is active or inactive, because that doesn't seem to be dictated by the CSS but rather by the shading of the image.</p>
| 2 |
2016-09-14T11:49:06Z
|
[
"python",
"selenium"
] |
Encoding / formatting issues with python kafka library
| 39,489,649 |
<p>I've been trying to use the <a href="http://kafka-python.readthedocs.io/en/master/index.html" rel="nofollow">python kafka</a> library for a bit now and can't get a producer to work.</p>
<p>After a bit of research I've found out that Kafka sends (and, I'm guessing, expects as well) an additional 5-byte header (one 0 byte, then a long containing a schema id for the schema registry) to consumers. I've managed to get a consumer working by simply stripping these first bytes.</p>
<p>Am I supposed to prepend a similar header when writing a producer?</p>
<p>Below the exception that comes out:</p>
<pre><code> [2016-09-14 13:32:48,684] ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
</code></pre>
<p>I'm using the latest stable releases of both kafka and python-kafka.</p>
<p><strong>EDIT</strong></p>
<p><strong>Consumer</strong>
</p>
<pre><code>from kafka import KafkaConsumer
import avro.io
import avro.schema
import io
import requests
import struct
# To consume messages
consumer = KafkaConsumer('hadoop_00',
group_id='my_group',
bootstrap_servers=['hadoop-master:9092'])
schema_path = "resources/f1.avsc"
for msg in consumer:
value = bytearray(msg.value)
schema_id = struct.unpack(">L", value[1:5])[0]
response = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
schema = response.json()["schema"]
schema = avro.schema.parse(schema)
bytes_reader = io.BytesIO(value[5:])
# bytes_reader = io.BytesIO(msg.value)
decoder = avro.io.BinaryDecoder(bytes_reader)
reader = avro.io.DatumReader(schema)
temp = reader.read(decoder)
print(temp)
</code></pre>
<p><strong>Producer</strong>
</p>
<pre><code>from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter
producer = KafkaProducer(bootstrap_servers="hadoop-master")
# Kafka topic
topic = "hadoop_00"
# Path to user.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())
range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in range:
producer.send(topic, b'{"f1":"value_' + str(i))
</code></pre>
| 1 |
2016-09-14T11:46:52Z
| 39,491,227 |
<p>Since you are reading with <code>BinaryDecoder</code> and <code>DatumReader</code>, if you send the data in the reverse way (using <code>DatumWriter</code> with <code>BinaryEncoder</code> as the encoder), your messages will be fine, I suppose. </p>
<p>Something like this:</p>
<p><strong>Producer</strong></p>
<pre><code>from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter, BinaryEncoder
producer = KafkaProducer(bootstrap_servers="hadoop-master")
# Kafka topic
topic = "hadoop_00"
# Path to user.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())
# range is a bad variable name. I changed it here
value_range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in value_range:
datum_writer = DatumWriter(schema)
byte_writer = io.BytesIO()
datum_encoder = BinaryEncoder(byte_writer)
datum_writer.write({"f1" : "value_%d" % (i)}, datum_encoder)
producer.send(topic, byte_writer.getvalue())
</code></pre>
<p>The few changes I made are:</p>
<ul>
<li>use the DatumWriter and BinaryEncoder</li>
<li>Instead of a json, I am sending a dictionary in the byte stream(You might have to check your code with a normal dictionary and it might work too; but I am not sure)</li>
<li>Sending the message to the kafka topic using the byte stream(For me, sometimes it failed and in those cases, I assigned the .getvalue method to a variable and use the variable in producer.send. I don't know the reason for failure but assigning to a variable always worked)</li>
</ul>
<p>I could not test the code I added. But that's the piece of code I wrote while using avro previously. If it's not working for you, please let me know in the comments. It might be because of my rusty memory. I will update this answer with a working one once I reach my home where I can test the code.</p>
| 0 |
2016-09-14T13:05:22Z
|
[
"python",
"apache-kafka",
"python-kafka"
] |
Encoding / formatting issues with python kafka library
| 39,489,649 |
<p>I've been trying to use the <a href="http://kafka-python.readthedocs.io/en/master/index.html" rel="nofollow">python kafka</a> library for a bit now and can't get a producer to work.</p>
<p>After a bit of research I've found out that Kafka sends (and, I'm guessing, expects as well) an additional 5-byte header (one 0 byte, then a long containing a schema id for the schema registry) to consumers. I've managed to get a consumer working by simply stripping these first bytes.</p>
<p>Am I supposed to prepend a similar header when writing a producer?</p>
<p>Below the exception that comes out:</p>
<pre><code> [2016-09-14 13:32:48,684] ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
</code></pre>
<p>I'm using the latest stable releases of both kafka and python-kafka.</p>
<p><strong>EDIT</strong></p>
<p><strong>Consumer</strong>
</p>
<pre><code>from kafka import KafkaConsumer
import avro.io
import avro.schema
import io
import requests
import struct
# To consume messages
consumer = KafkaConsumer('hadoop_00',
group_id='my_group',
bootstrap_servers=['hadoop-master:9092'])
schema_path = "resources/f1.avsc"
for msg in consumer:
value = bytearray(msg.value)
schema_id = struct.unpack(">L", value[1:5])[0]
response = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
schema = response.json()["schema"]
schema = avro.schema.parse(schema)
bytes_reader = io.BytesIO(value[5:])
# bytes_reader = io.BytesIO(msg.value)
decoder = avro.io.BinaryDecoder(bytes_reader)
reader = avro.io.DatumReader(schema)
temp = reader.read(decoder)
print(temp)
</code></pre>
<p><strong>Producer</strong>
</p>
<pre><code>from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter
producer = KafkaProducer(bootstrap_servers="hadoop-master")
# Kafka topic
topic = "hadoop_00"
# Path to user.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())
range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in range:
producer.send(topic, b'{"f1":"value_' + str(i))
</code></pre>
| 1 |
2016-09-14T11:46:52Z
| 39,536,659 |
<p>I'm able to have my python producer sending messages to Kafka-Connect with Schema-Registry:</p>
<pre><code>...
import avro.datafile
import avro.io
import avro.schema
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='kafka:9092')
with open('schema.avsc') as f:
schema = avro.schema.Parse(f.read())
def post_message():
bytes_writer = io.BytesIO()
# Write the Confluent "Magic Byte"
bytes_writer.write(bytes([0]))
# Should get or create the schema version with Schema-Registry
...
schema_version = 1
bytes_writer.write(
int.to_bytes(schema_version, 4, byteorder='big'))
# and then the standard Avro bytes serialization
writer = avro.io.DatumWriter(schema)
encoder = avro.io.BinaryEncoder(bytes_writer)
writer.write({'key': 'value'}, encoder)
producer.send('topic', value=bytes_writer.getvalue())
</code></pre>
<p>Documentation about the "Magic Byte":
<a href="https://github.com/confluentinc/schema-registry/blob/master/docs/serializer-formatter.rst" rel="nofollow">https://github.com/confluentinc/schema-registry/blob/master/docs/serializer-formatter.rst</a></p>
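<p>The framing can be sanity-checked offline: the first byte must be 0 and the next four a big-endian schema id, which is exactly what the consumer in the question unpacks with <code>struct</code>. A minimal round-trip sketch:</p>

```python
import io
import struct

schema_id = 1
buf = io.BytesIO()
buf.write(bytes([0]))                                  # the Confluent "magic byte"
buf.write(int.to_bytes(schema_id, 4, byteorder='big'))  # big-endian schema id
header = buf.getvalue()

# The consumer side of the same framing, as in the question's code:
assert header[0] == 0
assert struct.unpack(">L", header[1:5])[0] == schema_id
print(len(header))  # 5 header bytes precede the Avro-encoded body
```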
| 0 |
2016-09-16T16:57:58Z
|
[
"python",
"apache-kafka",
"python-kafka"
] |
Why won't the list delete function delete spaces?
| 39,489,793 |
<pre><code>string = "hello world i'm a new program"
words_length = []
length = 21
</code></pre>
<p>I'm using <code>re.split</code> to produce a list of words and spaces:</p>
<pre><code>words = re.split(r"(\W)", string)
</code></pre>
<p>so: </p>
<pre><code>words = ['hello', ' ', 'world', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
for x in words:
words_length.append(len(x))
for x in range(len(words)):
if words_length < length:
words_length += letters_length[x]
line += words[x]
del words[x]
</code></pre>
<p>but at the end when I print the variables I get:</p>
<pre><code>line = "helloworldi'manew"
words = [' ', ' ', ' ', ' ', ' ', 'program']
</code></pre>
<p>But what I want is:</p>
<pre><code>line = "hello world i'm a new"
words = ['program']
</code></pre>
<p>How can I manage to do that?</p>
| -1 |
2016-09-14T11:53:19Z
| 39,489,912 |
<p>You are <em>skipping indices</em> because you are deleting elements from your list.</p>
<p>Each time you delete an element, everything to the <em>right</em> of that element shifts one step to the left, and its index goes down by one. But your <code>x</code> index still goes up by one, so now you are referencing a later element in the list:</p>
<ol>
<li><p>first iteration of the for loop:</p>
<pre><code>x == 0
words == ['hello', ' ', 'world', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
words[x] == 'hello'
del words[x]
words == [' ', 'world', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
</code></pre></li>
<li><p>second iteration of your loop:</p>
<pre><code>x == 1
words == [' ', 'world', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
words[x] == 'world'
del words[x]
words == [' ', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
</code></pre></li>
<li><p>third iteration of your loop</p>
<pre><code>x == 2
words == [' ', ' ', 'i', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
words[x] == 'i'
del words[x]
words == [' ', ' ', "'", 'm', ' ', 'a', ' ', 'new', ' ', 'program']
# 0 1 2 3 4 5 ...
</code></pre></li>
</ol>
<p>Don't remove entries from your list until at least <em>after</em> looping; you don't need to have them removed during the loop:</p>
<pre><code>line = []
current_length = 0
for i, word in enumerate(words):
current_length += len(word)
if current_length > length:
i -= 1
break
line.append(word)
# here i is the index of the last element of words actually used
words = words[i + 1:] # remove the elements that were used.
line = ''.join(line)
</code></pre>
<p>or you could remove words (from a reversed list for efficiency), but then use a <code>while</code> loop and test for the accumulated length instead:</p>
<pre><code>line = []
current_length = 0
reversed_words = words[::-1]
while reversed_words:
l = len(reversed_words[-1])
if current_length + l > length:
break
line.append(reversed_words.pop())
current_length += l
words = reversed_words[::-1]
line = ''.join(line)
</code></pre>
<p>However, if you are trying to apply line-length wrapping to a Python string, you could avoid re-inventing that wheel by using the <a href="https://docs.python.org/2/library/textwrap.html" rel="nofollow"><code>textwrap</code> module</a> instead. It can do line-wrapping within a maximum length for you with ease:</p>
<pre><code>wrapped = textwrap.fill(string, length)
</code></pre>
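<p>With the string and width from the question, that gives:</p>

```python
import textwrap

string = "hello world i'm a new program"
wrapped = textwrap.fill(string, 21)
print(wrapped)
# hello world i'm a new
# program
```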
| 3 |
2016-09-14T11:59:31Z
|
[
"python",
"list",
"python-3.x",
"for-loop"
] |
Counting Characters in a varible python
| 39,489,897 |
<p>I am trying to create a program to print a line of text a certain number of times. I want to limit the number of letters in the first text entry and I can't figure out how.</p>
<p>code: </p>
<pre><code># Hello World Script 2.0
import random
# ------------------------------------------------
# i want to limit the amount of characters, how?
# ------------------------------------------------
print("What do you want to be printed?(Max 20 Characters)")
var0 = input("> ")
print("Please enter the amount of times you want that printed(max 100000)")
print('Or enter "R" for a random number')
var2 = input("> ")
if int(var2) > 100000:
    print("That number is too high, please restart the program and enter something smaller.")
exit()
if var2 == "r":
var2 = random.randint(1,100000)
var1 = var0 * int(var2)
print(var1, var2)
</code></pre>
| 0 |
2016-09-14T11:58:34Z
| 39,490,015 |
<p>You could check the length of the string with the function len().</p>
<pre><code>import random
# ------------------------------------------------
# i want to limit the amount of characters, how?
# ------------------------------------------------
print("What do you want to be printed?(Max 20 Characters)")
var0 = input("> ")
if len(var0) > 20:
print("That is too long")
exit()
print("Please enter the amount of times you want that printed(max 100000)")
print('Or enter "R" for a random number')
var2 = input("> ")
if var2.lower() == "r":
    var2 = random.randint(1, 100000)
elif int(var2) > 100000:
    print("That number is too high, please restart the program and enter something smaller.")
    exit()
var1 = var0 * int(var2)
print(var1, var2)
</code></pre>
| 0 |
2016-09-14T12:05:32Z
|
[
"python",
"variables"
] |
How to find column values which contains unique value in another column from same dataframe?
| 39,489,909 |
<p>I have a dataframe: </p>
<pre><code> Id name value
0 1 aaa x
1 2 aaa y
2 3 aaa z
3 4 ddd t
4 5 ddd t
5 6 fff j
6 7 ggg m
7 8 ggg n
</code></pre>
<p>I want to find only those rows whose <code>name</code> is duplicated and, among those duplicated rows, the <code>value</code> column has more than one distinct value.</p>
<p><strong>Expected output :</strong> </p>
<pre><code> Id name value
0 1 aaa x
1 2 aaa y
2 3 aaa z
3 7 ggg m
4 8 ggg n
</code></pre>
<p>I'm trying with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">groupby</a>:</p>
<pre><code>df.groupby('name')
</code></pre>
<p>Is <code>groupby</code> useful for this task? How can I achieve exactly what I want?</p>
| 1 |
2016-09-14T11:59:25Z
| 39,490,027 |
<p>This line of code counts the number of unique values per <code>name</code>:</p>
<pre><code>df.groupby('name')['value'].transform(pd.Series.nunique)
Out[8]:
0 3
1 3
2 3
3 1
4 1
5 1
6 2
7 2
</code></pre>
<p>Note that I use <code>.transform(pd.Series.nunique)</code> rather than simply <code>.nunique()</code> on the <code>groupby</code> object. This way, the result will be of the same length as the original dataframe, and you can use it directly for filtering:</p>
<pre><code>df[df.groupby('name')['value'].transform(pd.Series.nunique) > 1]
Out[9]:
Id name value
0 1 aaa x
1 2 aaa y
2 3 aaa z
6 7 ggg m
7 8 ggg n
</code></pre>
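<p>An equivalent way to express the same filter is <code>groupby().filter()</code>; a sketch that rebuilds the example dataframe from the question:</p>

```python
import pandas as pd

# rebuild the example dataframe from the question
df = pd.DataFrame({
    'Id': range(1, 9),
    'name': ['aaa', 'aaa', 'aaa', 'ddd', 'ddd', 'fff', 'ggg', 'ggg'],
    'value': ['x', 'y', 'z', 't', 't', 'j', 'm', 'n'],
})

# keep only groups whose 'value' column has more than one unique value
out = df.groupby('name').filter(lambda g: g['value'].nunique() > 1)
print(out)
```

<p><code>filter</code> returns the matching rows directly, at the cost of a Python-level function call per group.</p>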
| 1 |
2016-09-14T12:06:30Z
|
[
"python",
"pandas",
"dataframe"
] |
How to connect Android app to python-socketio backend?
| 39,489,919 |
<p>I am currently running a Python Socket.IO server that connects perfectly to my JavaScript client. I am using the <a href="http://socket.io/blog/native-socket-io-and-android/" rel="nofollow">Socket.IO Android example chat app</a> to write the Android code; it worked perfectly with a Node.js server, but when I switch over to the Python server it won't connect.</p>
<p><strong>How can I connect to the Python-SocketIO server from Android?</strong></p>
<p><strong>Android code:</strong></p>
<pre><code>public class HomeActivity extends AppCompatActivity
implements NavigationView.OnNavigationItemSelectedListener {
private final String TAG = "MainActivity";
Button btnCore0, btnCore1, btnCPUUsage;
private ProgressBar progressBar;
private Socket mSocket;
{
try {
mSocket = IO.socket(Constants.SERVER_URL);
} catch (URISyntaxException e) {
Log.e(TAG, e.getMessage());
}
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_home);
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG)
.setAction("Action", null).show();
}
});
DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
this, drawer, toolbar, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
drawer.setDrawerListener(toggle);
toggle.syncState();
NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
navigationView.setNavigationItemSelectedListener(this);
btnCore0 = (Button) findViewById(R.id.btnCore0);
btnCore1 = (Button) findViewById(R.id.btnCore1);
btnCPUUsage = (Button) findViewById(R.id.btnCPUUsage);
progressBar = (ProgressBar) findViewById(R.id.progressBar);
// Make buttons invisible
btnCore0.setVisibility(View.INVISIBLE);
btnCore1.setVisibility(View.INVISIBLE);
btnCPUUsage.setVisibility(View.INVISIBLE);
// Make progress bar visible
progressBar.setVisibility(View.VISIBLE);
mSocket.on("status-update", onNewMessage);
mSocket.on(Socket.EVENT_DISCONNECT, onSocketDisconnected);
mSocket.connect();
}
private Emitter.Listener onNewMessage = new Emitter.Listener() {
@Override
public void call(final Object... args) {
HomeActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
Log.d(TAG, "New message 090909***");
JSONObject data = (JSONObject) args[0];
int core0 = 0;
int core1 = 0;
int cpu_usage_in = 0;
try {
core0 = data.getInt("core0_in");
core1 = data.getInt("core1_in");
cpu_usage_in = data.getInt("cpu_usage_in");
} catch (JSONException e) {
Log.e(TAG, e.getMessage());
}
btnCore0.setText(getResources().getString(R.string.core0, String.valueOf(core0)));
btnCore1.setText(getResources().getString(R.string.core1, String.valueOf(core1)));
btnCPUUsage.setText(getResources().getString(R.string.cpu_usge, String.valueOf(cpu_usage_in)));
updateButtonBackgroundColor(btnCore0, core0);
updateButtonBackgroundColor(btnCore1, core1);
updateButtonBackgroundColor(btnCPUUsage, cpu_usage_in);
onServerDataReceived();
}
});
}
};
</code></pre>
<p>Next is the Python server, which emits every second. I know this works fine because I can connect to it from a JavaScript app.
<strong>Python code:</strong></p>
<pre><code>import threading
import time
import eventlet
import eventlet.wsgi
import psutil
import sensors
from flask import Flask, render_template
from flask_socketio import SocketIO
from gcm import GCM
eventlet.monkey_patch()
app = Flask(__name__)
socket = SocketIO(app, logger=True, engineio_logger=True)
class Server(threading.Thread):
def __init__(self, thread_id):
threading.Thread.__init__(self)
self.threadID = thread_id
def run(self):
print("Starting " + self.name)
serve()
print("Exiting " + self.name)
def serve():
if __name__ == '__main__':
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', 8000)), certfile='/home/garthtee/cert.pem', keyfile='/home/garthtee/privkey.pem'), app)
server_thread = Server("Server-thread")
server_thread.start()
threads.append(server_thread)
print("Started @ " + str(get_time()))
while True:
sensors.init()
try:
for chip in sensors.iter_detected_chips():
# print('%s at %s' % (chip, chip.adapter_name))
for feature in chip:
if feature.label == 'Core 0':
core0 = feature.get_value()
elif feature.label == 'Core 1':
core1 = feature.get_value()
for x in range(1):
cpu_usage = str(psutil.cpu_percent(interval=1))
finally:
socket.emit('status-update', {'core0_in': core0, 'core1_in': core1, 'cpu_usage_in': cpu_usage, 'users': users})
alert_checker(avg_temp, users)
sensors.cleanup()
time.sleep(1)
</code></pre>
<p><strong>The following error shows up:</strong></p>
<pre><code>SSLError: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1754)
</code></pre>
| 1 |
2016-09-14T11:59:44Z
| 39,489,994 |
<p>I downloaded the python-socketio library from <a href="https://github.com/miguelgrinberg/python-socketio" rel="nofollow">GitHub</a>.</p>
<p>I modified the example code like this:</p>
<pre><code>import socketio
import eventlet
import eventlet.wsgi
from flask import Flask, render_template
sio = socketio.Server()
app = Flask(__name__)
@app.route('/')
def index():
"""Serve the client-side application."""
return render_template('index.html')
@sio.on('connect', namespace='/')
def connect(sid, environ):
print("connect ", sid)
@sio.on('add user', namespace='/')
def login(sid, environ):
print("login ", sid)
sio.emit('login', room=sid)
@sio.on('new message', namespace='/')
def message(sid, data):
print("message ", data)
sio.emit('reply', room=sid)
@sio.on('disconnect', namespace='/')
def disconnect(sid):
print('disconnect ', sid)
if __name__ == '__main__':
# wrap Flask application with engineio's middleware
app = socketio.Middleware(sio, app)
# deploy as an eventlet WSGI server
eventlet.wsgi.server(eventlet.listen(('', 8000)), app)
</code></pre>
<p>Then I cloned the <a href="https://github.com/nkzawa/socket.io-android-chat" rel="nofollow">Android example chat</a> project, and the only thing what I changed in the Constants.java:</p>
<pre><code>public static final String CHAT_SERVER_URL = "http://MY_LOCAL_IP:8000";
</code></pre>
<p>With that, the Android app can connect. I can see it in the app and also in the Python console.
If you remove the unnecessary parsing code (the app crashes because the response format is different), you can also see your messages on the Python side.</p>
<p>Have you tried to run your server app without SSL first?</p>
<p>Maybe that is the problem.
On Android you can use <code>IO.setDefaultSSLContext(SSLContext sslContext)</code> to setup SSL.</p>
| 1 |
2016-09-14T12:04:20Z
|
[
"android",
"python",
"flask",
"socket.io"
] |
How to break conversation data into pairs of (Context, Response)
| 39,489,933 |
<p>I'm using the Gensim Doc2Vec model, trying to cluster portions of customer support conversations. My goal is to give the support team automatic response suggestions.</p>
<p><strong>Figure 1:</strong> shows a sample conversations where the user question is answered in the next conversation line, making it easy to extract the data:</p>
<p><a href="http://i.stack.imgur.com/N4ri4.gif"><img src="http://i.stack.imgur.com/N4ri4.gif" alt="Figure 1"></a> </p>
<p><sup>during the conversation <strong>"hello"</strong> and <strong>"Our offices are located in NYC"</strong> should be suggested</sup></p>
<hr>
<p><strong>Figure 2:</strong> describes a conversation where the questions and answers are not in sync</p>
<p><a href="http://i.stack.imgur.com/oHUQu.gif"><img src="http://i.stack.imgur.com/oHUQu.gif" alt="Figure 2"></a></p>
<p><sup>during the conversation <strong>"hello"</strong> and <strong>"Our offices are located in NYC"</strong> should be suggested</sup></p>
<hr>
<p><strong>Figure 3:</strong> describes a conversation where the context for the answer is built over time, and for classification purpose (I'm assuming) some of the lines are redundant.</p>
<p><a href="http://i.stack.imgur.com/muf6Y.gif"><img src="http://i.stack.imgur.com/muf6Y.gif" alt="Figure 3"></a></p>
<p><sup>during the conversation <strong>"here is a link for the free trial account"</strong> should be suggested</sup></p>
<hr>
<p>I have the following data per conversation line (simplified):<br>
who wrote the line (user or agent), text, time stamp</p>
<p>I'm using the following code to train my model:</p>
<pre><code>from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedLineDocument
import datetime
print('Creating documents',datetime.datetime.now().time())
context = TaggedLineDocument('./test_data/context.csv')
print('Building model',datetime.datetime.now().time())
model = Doc2Vec(context,size = 200, window = 10, min_count = 10, workers=4)
print('Training...',datetime.datetime.now().time())
for epoch in range(10):
print('Run number :',epoch)
model.train(context)
model.save('./test_data/model')
</code></pre>
<p><strong>Q</strong>: How should I structure my training data and what heuristics could be applied in order to extract it from the raw data?</p>
| 14 |
2016-09-14T12:00:31Z
| 39,599,388 |
<p>To train a model I would start by concatenating consecutive sequences of messages: using the timestamps, merge every run of messages that has no message from the other entity in between.</p>
<p>For instance:</p>
<pre><code>Hello
I have a problem
I cannot install software X
Hi
What error do you get?
</code></pre>
<p>would be:</p>
<pre><code>Hello I have a problem I cannot install software X
Hi What error do you get?
</code></pre>
<p>Then I would train a model with sentences in that format. I would do that because I am assuming that the conversations have a "single topic" all the time between interactions from the entities. And in that scenario suggesting a single message <code>Hi What error do you get?</code> would be totally fine.</p>
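<p>A minimal sketch of that concatenation heuristic, assuming each message is a <code>(speaker, text)</code> pair already in time order (so only the ordering, not the raw timestamps, is needed):</p>

```python
from itertools import groupby

# hypothetical minimal message format: (speaker, text), already time-ordered
messages = [
    ('user', 'Hello'),
    ('user', 'I have a problem'),
    ('user', 'I cannot install software X'),
    ('agent', 'Hi'),
    ('agent', 'What error do you get?'),
]

# merge each consecutive run of messages from the same speaker into one turn
turns = [' '.join(text for _, text in group)
         for speaker, group in groupby(messages, key=lambda m: m[0])]
print(turns)
```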
<p>Also, take a look at the data. If the questions from the users are usually single-sentenced (as in the examples), sentence detection could help a lot. In that case I would apply sentence detection on the concatenated strings (<code>nltk</code> could be an option) and use only single-sentenced questions for training. That way you can avoid the out-of-sync problem when training the model, at the price of reducing the size of the dataset.</p>
<p>On the other hand, I would <em>really</em> consider to start with a very simple method. For example you could score questions by tf-idf and, to get a suggestion, you can take the most similar question in your dataset wrt some metric (e.g. cosine similarity) and suggest the answer for that question. That will perform very bad in sentences with context information (e.g. <code>how do you do it?</code>) but can perform well in sentences like <code>where are you based?</code>.</p>
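<p>A sketch of that tf-idf baseline using scikit-learn (the question/answer pairs here are made-up placeholders):</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# placeholder dataset of past questions and the answers the agents gave
questions = ['where are you based?', 'how do I install the software?']
answers = ['Our offices are located in NYC', 'Here is a link for the free trial']

vec = TfidfVectorizer()
q_matrix = vec.fit_transform(questions)

def suggest(new_question):
    # suggest the answer attached to the most similar past question
    sims = cosine_similarity(vec.transform([new_question]), q_matrix)
    return answers[sims.argmax()]

print(suggest('where is your office based?'))
```

<p>As noted, this will do badly on context-dependent questions, but it gives you a baseline to compare the Doc2Vec model against.</p>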
<p>My last suggestion is because <a href="https://arxiv.org/abs/1509.01626" rel="nofollow">traditional methods perform even better than complex NN methods when the dataset is small</a>. How big is your dataset?</p>
<p><em>How</em> you train a NN method is also crucial: there are a lot of hyper-parameters, and tuning them properly can be difficult, which is why having a baseline from a simple method helps you check how well you are doing. In this other <a href="http://arxiv.org/abs/1607.05368" rel="nofollow">paper</a> they compare the different hyper-parameters for doc2vec; maybe you will find it useful.</p>
<p><strong>Edit:</strong> a completely different option would be to train a model to "link" questions with answers. But for that you should manually tag each question with the corresponding answer and then train a supervised learning model on that data. That could potentially generalize better but with the added effort of manually labelling the sentences and still it doesn't look like an easy problem to me.</p>
| 5 |
2016-09-20T16:30:32Z
|
[
"python",
"text-mining",
"doc2vec",
"gensym"
] |