title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Navigating pagination with selenium
| 39,534,584 |
<p>I am getting stuck on a weird case of pagination. I am scraping search results from <a href="https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx" rel="nofollow">https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx</a></p>
<p>I have search results that fall into 4 categories.</p>
<p>1) There are no search results</p>
<p>2) There is one results page</p>
<p>3) There is more than one results page but less than 12 results pages</p>
<p>4) There are more than 12 results pages.</p>
<p>For case 1, that is easy, I am just passing.</p>
<pre><code>results = driver.find_elements_by_class_name('GridView')
if len(results) == 0:
    pass
</code></pre>
<p>For cases 2 and 3, I am checking whether the list of links in the containing element has at least one entry and then clicking through it.</p>
<pre><code>else:
    results_table = bsObj.find('table', {'class':'GridView'})
    sub_tables = results_table.find_all('table')
    next_page_links = sub_tables[1].find_all('a')
    if len(next_page_links) == 0:
        scrapeResults()
    else:
        scrapeResults()
        ####GO TO NEXT PAGE UNTIL THERE IS NO NEXT PAGE
</code></pre>
<p>Question for cases 2 and 3: what could I possibly check for here as my control?</p>
<p>The links are hrefs to pages 2, 3, etc. But the tricky part is: if I am on a current page, say page 1, how do I make sure I am going to page 2, and when I am on page 2, how do I make sure I am going to page 3? The HTML for the results list on page 1 is as follows:</p>
<pre><code><table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;">
<tr>
<td>Page: <span>1</span></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$2&#39;)">2</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$3&#39;)">3</a></td>
</tr>
</table>
</code></pre>
<p>I can zero in on this table specifically using <code>sub_tables[1]</code>; see the bs4 code for case 2 above.</p>
<p>The problem is there is no next button that I could utilize. Nothing changes along the results pages in the HTML. There is nothing to isolate the current page besides the number in the <code>span</code> right before the links. And I would like it to stop when it reaches the last page.</p>
<p>For case 4, the html looks like this:</p>
<pre><code><table cellspacing="0" cellpadding="0" border="0" style="border-collapse:collapse;">
<tr>
<td>Page: <span>1</span></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$2&#39;)">2</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$3&#39;)">3</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$4&#39;)">4</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$5&#39;)">5</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$6&#39;)">6</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$7&#39;)">7</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$8&#39;)">8</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$9&#39;)">9</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$10&#39;)">10</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$11&#39;)">...</a></td>
<td><a href="javascript:__doPostBack(&#39;ctl00$cphMain$lrrgResults$cgvNamesDir&#39;,&#39;Page$Last&#39;)">Last</a></td>
</tr>
</table>
</code></pre>
<p>The last two links are <code>...</code> to show that there are more results pages and <code>Last</code> to signify the last page. However, the <code>Last</code> link exists on every page, and it is only on the last page itself that it is not an active link.</p>
<p>Question for case 4: how could I check whether the <code>Last</code> link is clickable and use this as my stopping point?</p>
<p>Bigger question for case 4: how do I maneuver the <code>...</code> to go through the other results pages? The results page list shows 12 values at most, i.e. the nearest ten pages to the current page, the <code>...</code> link to more pages, and the <code>Last</code> link. So I don't know what to do if my results have, say, 88 pages.</p>
<p>I am linking a dump of a full sample page: <a href="https://ghostbin.com/paste/nrb27" rel="nofollow">https://ghostbin.com/paste/nrb27</a></p>
| 1 |
2016-09-16T14:59:24Z
| 39,536,073 |
<p>Click on the "Last" link to find out how many pages there are, and then click through each page number in turn.</p>
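<p>A sketch of one way to implement this without a next button: parse the target page token out of the <code>__doPostBack</code> hrefs shown in the question, so the scraper always knows which page each link leads to (the function name here is made up for illustration):</p>

```python
import re

def target_page(href):
    """Extract the pagination token from an ASP.NET __doPostBack href.

    Returns an int for numeric pages, the literal token (e.g. 'Last')
    for non-numeric targets, or None if the href is not a pager link.
    """
    m = re.search(r"Page\$(\w+)", href)
    if m is None:
        return None
    token = m.group(1)
    return int(token) if token.isdigit() else token

# Example href copied from the question's page 1 markup
href = "javascript:__doPostBack('ctl00$cphMain$lrrgResults$cgvNamesDir','Page$2')"
```

<p>With this, the loop can click the link whose token equals the current page number plus one, and stop once no such link (and no active <code>Last</code> link) remains.</p>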
| 0 |
2016-09-16T16:20:05Z
|
[
"python",
"loops",
"selenium",
"selenium-webdriver",
"pagination"
] |
TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"
| 39,534,676 |
<p>I have a big dataframe and I try to split it into chunks and then <code>concat</code> them back together.
I use</p>
<pre><code>df2 = pd.read_csv('et_users.csv', header=None, names=names2, chunksize=100000)
for chunk in df2:
    chunk['ID'] = chunk.ID.map(rep.set_index('member_id')['panel_mm_id'])
df2 = pd.concat(chunk, ignore_index=True)
</code></pre>
<p>But it returns an error</p>
<pre><code>TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"
</code></pre>
<p>How can I fix that?</p>
| 1 |
2016-09-16T15:03:14Z
| 39,534,745 |
<p>IIUC you want the following:</p>
<pre><code>df2 = pd.read_csv('et_users.csv', header=None, names=names2, chunksize=100000)
chunks = []
for chunk in df2:
    chunk['ID'] = chunk.ID.map(rep.set_index('member_id')['panel_mm_id'])
    chunks.append(chunk)
df2 = pd.concat(chunks, ignore_index=True)
</code></pre>
<p>You need to append each chunk to a list and then use <code>concat</code> to concatenate them all. Also, I think the <code>ignore_index</code> may not be necessary, but I may be wrong.</p>
| 1 |
2016-09-16T15:06:38Z
|
[
"python",
"pandas"
] |
Python requests SSL error [SSL: UNKNOWN_PROTOCOL] while getting https://www.nfm.com
| 39,534,804 |
<p>Using Python 3.4 and requests 2.11.1 package to get '<a href="https://www.nfm.com" rel="nofollow">https://www.nfm.com</a>' website, requests throws an SSLError [SSL: UNKNOWN_PROTOCOL]. I'm able to get a valid response from other https sites such as pythonanywhere.com and amazon.com. The error is only encountered trying to <code>requests.get('https://www.nfm.com')</code> or <code>requests.get('https://www.nfm.com', verify=False)</code>.</p>
<p>I checked NFM's certificate in Chrome and it's a valid thawte SHA256 SSL CA. Is this a problem with NFM or is there something I need to configure on my end to get a response object from this website?</p>
| 1 |
2016-09-16T15:09:40Z
| 39,535,211 |
<p>According to <a href="https://www.ssllabs.com/ssltest/analyze.html?d=www.nfm.com" rel="nofollow">SSLLabs</a> the server supports only TLS 1.1 and higher. My guess is that you have an older OpenSSL version which is not able to speak this version. Support for TLS 1.1 and TLS 1.2 was added with OpenSSL 1.0.1 years ago but for example Apple ships only the very old version 0.9.8 on Mac OS X which only supports at most TLS 1.0 (which the server does not support).</p>
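<p>A quick diagnostic (not a fix) is to check which OpenSSL your Python is linked against; anything older than 1.0.1 cannot negotiate TLS 1.1/1.2:</p>

```python
import ssl

# The OpenSSL version string Python was compiled against, e.g.
# "OpenSSL 1.0.2h  3 May 2016".
print(ssl.OPENSSL_VERSION)

# OPENSSL_VERSION_INFO is a tuple like (1, 0, 2, 8, 15); TLS 1.1 and
# TLS 1.2 support arrived in OpenSSL 1.0.1.
supports_tls11 = ssl.OPENSSL_VERSION_INFO >= (1, 0, 1)
print("TLS 1.1+ capable:", supports_tls11)
```

<p>If this reports 0.9.8, upgrading Python (or the OpenSSL it links against) should make the request succeed.</p>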
| 1 |
2016-09-16T15:31:27Z
|
[
"python",
"ssl",
"https",
"python-requests"
] |
Is there a Python equivalent to the C# ?. and ?? operators?
| 39,534,935 |
<p>For instance, in C# (starting with v6) I can say:</p>
<pre><code>mass = (vehicle?.Mass / 10) ?? 150;
</code></pre>
<p>to set mass to a tenth of the vehicle's mass if there is a vehicle, but 150 if the vehicle is null (or has a null mass, if the Mass property is of a nullable type).</p>
<p>Is there an equivalent construction in Python (specifically IronPython) that I can use in scripts for my C# app?</p>
<p>This would be particularly useful for displaying defaults for values that can be modified by other values - for instance, I might have an armor component defined in script for my starship that is always consumes 10% of the space available on the ship it's installed on, and its other attributes scale as well, but I want to display defaults for the armor's size, hitpoints, cost, etc. so you can compare it with other ship components. Otherwise I might have to write a convoluted expression that does a null check or two, like I had to in C# before v6.</p>
| 6 |
2016-09-16T15:16:18Z
| 39,535,290 |
<p>No, Python does not (yet) have NULL-coalescing operators. </p>
<p>There is a <em>proposal</em> (<a href="https://www.python.org/dev/peps/pep-0505/">PEP 505 – <em>None-aware operators</em></a>) to add such operators, but no consensus exists on whether or not these should be added to the language at all and, if so, what form they would take.</p>
<p>From the <em>Implementation</em> section:</p>
<blockquote>
<p>Given that the need for None -aware operators is questionable and the spelling of said operators is almost incendiary, the implementation details for CPython will be deferred unless and until we have a clearer idea that one (or more) of the proposed operators will be approved.</p>
</blockquote>
<p>Note that Python doesn't really <em>have</em> a concept of <code>null</code>. Python names and attributes <em>always</em> reference <em>something</em>, they are never a <code>null</code> reference. <code>None</code> is just another object in Python, and the community is reluctant to make that one object so special as to need its own operators.</p>
<p>Until such time this gets implemented (if ever, and IronPython catches up to that Python release), you can use Python's <a href="https://docs.python.org/3/reference/expressions.html#conditional-expressions">conditional expression</a> to achieve the same:</p>
<pre><code>mass = 150 if vehicle is None or vehicle.Mass is None else vehicle.Mass / 10
</code></pre>
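<p>If the conditional expression gets unwieldy, the two operators can also be approximated with small helper functions (a sketch; the helper names are made up):</p>

```python
def coalesce(value, default):
    # Rough analogue of C#'s ?? operator.
    return default if value is None else value

def maybe_attr(obj, name):
    # Rough analogue of C#'s ?. operator: propagates None instead of
    # raising AttributeError when obj is None.
    return None if obj is None else getattr(obj, name, None)

class Vehicle(object):
    def __init__(self, mass):
        self.mass = mass

def effective_mass(vehicle):
    m = maybe_attr(vehicle, 'mass')
    return coalesce(m / 10 if m is not None else None, 150)
```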
| 9 |
2016-09-16T15:35:23Z
|
[
"python",
"ironpython"
] |
python pandas Ignore Nan in integer comparisons
| 39,534,941 |
<p>I am trying to create dummy variables based on integer comparisons in series where NaN is common. A > comparison raises errors if there are any NaN values, but I want the comparison to return a NaN. I understand that I could use fillna() to replace NaN with a value that I know will be false, but I would hope there is a more elegant way to do this. I would need to change the value in fillna() if I used less than, or used a variable that could be positive or negative, and that is one more opportunity to create errors. Is there any way to make 30 < NaN evaluate to NaN?</p>
<p>To be clear, I want this:</p>
<pre><code>df['var_dummy'] = df[df['var'] >= 30].astype('int')
</code></pre>
<p>to return a null if var is null, 1 if it is 30+, and 0 otherwise. Currently I get ValueError: cannot reindex from a duplicate axis.</p>
| 0 |
2016-09-16T15:16:37Z
| 39,535,163 |
<p>Here's a way:</p>
<pre><code>s1 = pd.Series([1, 3, 4, 2, np.nan, 5, np.nan, 7])
s2 = pd.Series([2, 1, 5, 5, np.nan, np.nan, 2, np.nan])
(s1 < s2).mask(s1.isnull() | s2.isnull(), np.nan)
Out:
0 1.0
1 0.0
2 1.0
3 1.0
4 NaN
5 NaN
6 NaN
7 NaN
dtype: float64
</code></pre>
<p>This masks the boolean array returned from <code>(s1 < s2)</code> wherever either input is NaN. In that case, it returns NaN. But you cannot have NaNs in a boolean array, so the result is cast to float.</p>
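<p>Applied to the dummy-variable case in the question, the same masking trick would look like this (a sketch with made-up data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'var': [10, 45, np.nan, 30]})
# Compare first (NaN >= 30 evaluates to False), cast to 0/1 floats,
# then put NaN back wherever the input was missing.
df['var_dummy'] = (df['var'] >= 30).astype(float).mask(df['var'].isnull())
```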
| 3 |
2016-09-16T15:28:40Z
|
[
"python",
"pandas"
] |
Using PhraseQuery to run searches
| 39,535,046 |
<p>Is there any way to use PhraseQuery with Python?
Until now I was using a parser, but I would like to know how to use PhraseQuery.</p>
<pre><code>parser = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer)
parser.setDefaultOperator(QueryParser.Operator.AND)
query = parser.parse(command)
scoreDocs = searcher.search(query, 10000).scoreDocs
</code></pre>
| 0 |
2016-09-16T15:22:31Z
| 39,535,425 |
<p>Firstly, you should understand that when you cut out the QueryParser, you lose the analyzer. <code>PhraseQuery</code> won't analyze for you, like the QueryParser does, so it's on you to tokenize and normalize your phrase to match the index-time analysis. You may be better off sticking with the parser.</p>
<p>That said, constructing a PhraseQuery manually looks something like this:</p>
<pre><code>query = PhraseQuery()
query.add(Term("contents", "lorem"))
query.add(Term("contents", "ipsum"))
query.add(Term("contents", "sit"))
query.add(Term("contents", "amet"))
</code></pre>
<p>You can set the slop for the query using <code>setSlop</code>.</p>
<p>You can also specify the position of each term. For instance, if "sit" were a stopword in my index, I might do something like:</p>
<pre><code>query = PhraseQuery()
query.add(Term("contents", "lorem"), 0)
query.add(Term("contents", "ipsum"), 1)
query.add(Term("contents", "amet"), 3)
</code></pre>
| 0 |
2016-09-16T15:43:08Z
|
[
"python",
"lucene"
] |
Python Iteration in list
| 39,535,049 |
<p>Is there a way to test whether an item in a list has a repetition of 5 digits or more, with the repeated digits adjacent to each other?</p>
<pre><code>#!/usr/bin/env python
import itertools
from collections import Counter

mylist = ['000002345', '1112345', '11122222345', '1212121212']
# some function code here
# expected output:
# ['000002345', '11122222345']  # 1 and 2 repeat five times, next to each other

# method 1
v = list(mylist[0])
for i in v:
    if v[0] == v[1] and v[0] == v[1]...

# method 2
v = list(mylist[0])
Counter(v)
</code></pre>
<p>I can only think of using if statements, but my actual list is pretty long, and it will be inefficient if the item contains repetitions in the middle, such as '1123333345', which would require me to write never-ending ifs.</p>
<p>With my second method in mind, I'm not too sure how to proceed after knowing how many repetitions there are, and even so, it will return items having five repetitions that are not adjacent to each other, such as '1212121212'.</p>
<p>Any ideas?</p>
| 0 |
2016-09-16T15:22:42Z
| 39,535,161 |
<blockquote>
<p>The condition is that i only want the items with a repetition of 5
digits and above</p>
</blockquote>
<p>Use a <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a>:</p>
<pre><code>>>> import re
>>> mylist = ['000002345', '1112345', '11122222345', '1212121212']
>>> for item in mylist:
... if re.search(r'(\d)\1{4,}', item):
... print(item)
...
000002345
11122222345
</code></pre>
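<p>The same search can be written as a list comprehension to build the filtered list in one step:</p>

```python
import re

mylist = ['000002345', '1112345', '11122222345', '1212121212']
# (\d)\1{4,} matches a digit followed by four or more copies of itself,
# i.e. a run of at least five identical adjacent digits.
matched = [item for item in mylist if re.search(r'(\d)\1{4,}', item)]
```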
| 2 |
2016-09-16T15:28:33Z
|
[
"python"
] |
Python Iteration in list
| 39,535,049 |
<p>Is there a way to test whether an item in a list has a repetition of 5 digits or more, with the repeated digits adjacent to each other?</p>
<pre><code>#!/usr/bin/env python
import itertools
from collections import Counter

mylist = ['000002345', '1112345', '11122222345', '1212121212']
# some function code here
# expected output:
# ['000002345', '11122222345']  # 1 and 2 repeat five times, next to each other

# method 1
v = list(mylist[0])
for i in v:
    if v[0] == v[1] and v[0] == v[1]...

# method 2
v = list(mylist[0])
Counter(v)
</code></pre>
<p>I can only think of using if statements, but my actual list is pretty long, and it will be inefficient if the item contains repetitions in the middle, such as '1123333345', which would require me to write never-ending ifs.</p>
<p>With my second method in mind, I'm not too sure how to proceed after knowing how many repetitions there are, and even so, it will return items having five repetitions that are not adjacent to each other, such as '1212121212'.</p>
<p>Any ideas?</p>
| 0 |
2016-09-16T15:22:42Z
| 39,535,374 |
<p>You could use <code>itertools.groupby</code></p>
<pre><code>>>> from itertools import groupby
>>> [item for item in mylist if any(len(list(y))>=5 for x,y in groupby(item))]
['000002345', '11122222345']
</code></pre>
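<p>To see why this works: <code>groupby</code> splits a string into runs of adjacent equal characters, so the filter simply asks whether any run has length 5 or more. A small illustration of the runs it produces:</p>

```python
from itertools import groupby

# Each pair is (digit, length of its adjacent run) for '11122222345'.
runs = [(digit, len(list(group))) for digit, group in groupby('11122222345')]
print(runs)
```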
| 1 |
2016-09-16T15:40:10Z
|
[
"python"
] |
Confusion with comparison error
| 39,535,140 |
<p>When i run the following </p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>I get the error
<code>TypeError: unorderable types: int() > list()</code> at line 4. The confusion comes when I try to get the <code>len()</code> of both members. So from the error message it's reasonable to assume <code>m</code> is the list, so I run</p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        len(m)
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>and get the error <code>len(m) TypeError: object of type 'int' has no len()</code>. OK, so the only option left is that <code>item[0]</code> is the list... so, similarly,</p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        len(item[0])
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>and I get the same error: <code>len(item[0]) TypeError: object of type 'int' has no len()</code>. Since I'm somewhat certain you can compare two ints, I have a hard time understanding what to do about the error originally stated.</p>
| -4 |
2016-09-16T15:27:25Z
| 39,535,264 |
<p>In the first iteration of your <em>for</em> loop, doing <code>m = item</code> makes <code>m</code> reference a <code>list</code> which afterwards cannot be compared with an <code>int</code> (viz. <code>item[0] > m</code>) in the next iteration. </p>
<p>You should instead assign <code>m</code> to one of the elements in <code>item</code>, say <code>m = item[0]</code> (to find maximum from first element in each sublist), depending on how exactly you want to compute your maximum value.</p>
<p>On a lighter note, if you're looking for a global maximum, you could simply <a href="http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python"><em>flatten</em></a> the list and make this easier.</p>
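<p>The flattening suggestion at the end can be sketched with a generator expression, using the data from the question:</p>

```python
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
# Flatten the list of lists, then take the global maximum.
global_max = max(x for sub in L for x in sub)
```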
| 2 |
2016-09-16T15:34:21Z
|
[
"python",
"list",
"int",
"comparison"
] |
Confusion with comparison error
| 39,535,140 |
<p>When i run the following </p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>I get the error
<code>TypeError: unorderable types: int() > list()</code> at line 4. The confusion comes when I try to get the <code>len()</code> of both members. So from the error message it's reasonable to assume <code>m</code> is the list, so I run</p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        len(m)
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>and get the error <code>len(m) TypeError: object of type 'int' has no len()</code>. OK, so the only option left is that <code>item[0]</code> is the list... so, similarly,</p>
<pre class="lang-python prettyprint-override"><code>def max(L):
    m = L[0][0]
    for item in L:
        len(item[0])
        if item[0] > m:
            m = item
    return m
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max(L))
</code></pre>
<p>and I get the same error: <code>len(item[0]) TypeError: object of type 'int' has no len()</code>. Since I'm somewhat certain you can compare two ints, I have a hard time understanding what to do about the error originally stated.</p>
| -4 |
2016-09-16T15:27:25Z
| 39,535,464 |
<p>As Moses Koledoye says, you are getting that <code>TypeError: unorderable types: int() > list()</code> error because the first assignment in the loop assigns the whole <code>item</code> to <code>m</code>, so the next time you attempt to compare you're comparing the list <code>m</code> with the integer <code>item[0]</code>. So you just need to assign <code>item[0]</code> to <code>m</code>. Like this:</p>
<pre><code>def max0(L):
    m = L[0][0]
    for item in L:
        if item[0] > m:
            m = item[0]
    return m

L = [[20, 10], [10, 20], [30, 20], [12, 16]]
print(max0(L))
</code></pre>
<p><strong>output</strong></p>
<pre><code>30
</code></pre>
<p>But there's a better way to do this: use the built-in <code>max</code> with a key function that grabs the first element from each list in the sequence you pass to <code>max</code>.</p>
<pre><code>from operator import itemgetter
L = [[20, 10], [10, 20], [30, 20], [12, 16]]
m = max(L, key=itemgetter(0))
print(m)
</code></pre>
<p><strong>output</strong></p>
<pre><code>[30, 20]
</code></pre>
<p>You can also do this with a simple <code>lambda</code> function, rather than importing <code>itemgetter</code>, but <code>itemgetter</code> is more efficient.</p>
<pre><code>m = max(L, key=lambda u:u[0])
</code></pre>
<p>In fact, you don't really need to supply a key function here because Python will happily compare 2 lists (or tuples). It compares the corresponding elements in the two lists, stopping as soon as it finds a pair of elements that are unequal. So <code>[30, 20] > [30, 19]</code> evaluates to <code>True</code>, and so does <code>[30, 20] > [29, 1000]</code>. The 2 lists don't have to be the same length; <code>[30, 20, 0] > [30, 20]</code> evaluates to <code>True</code>.</p>
<p>So you could just do </p>
<pre><code>m = max(L)
</code></pre>
<p>but using <code>itemgetter</code> is better (and probably more efficient) because it explicitly says to only compare the sublists by their first element.</p>
| 0 |
2016-09-16T15:45:31Z
|
[
"python",
"list",
"int",
"comparison"
] |
Stuck with nested for loop issue
| 39,535,377 |
<p>A website changes content dynamically, through the use of two date filters (year / week), without the need of a get request (it is handled asynchronously on the client side). Each filter option produces a different page_source with td elements I would like to extract.</p>
<p>Currently, I am using a nested for-loop to iterate through the filters (and thus through the different page sources containing different td elements), iterate through the contents of each page source, and then append the desired td elements to an empty list.</p>
<pre><code>store = []

def getData():
    year = ['2015', '2014']
    for y in year:
        values = y
        yearid = Select(browser.find_element_by_id('yearid'))
        yearid.select_by_value(values)
        weeks = ['1', '2']
        for w in weeks:
            value = w
            frange = Select(browser.find_element_by_id('frange'))
            frange.select_by_value('WEEKS')
            selectElement = Select(browser.find_element_by_id('fweek'))
            selectElement.select_by_value(value)
            pressFilter = browser.find_element_by_name('submit')
            pressFilter.submit()

            # scrape data from page source
            html = browser.page_source
            soup = BeautifulSoup(html, "lxml")
            for el in soup.find_all('td'):
                store.append(el.get_text())
</code></pre>
<p>So far so good, and I have a for loop that constructs a single list of all the td elements that I would like. </p>
<p>Instead, I would like to store separate lists, one for each page source (i.e. one per filter combination), in a list of lists. I can do that after the fact i.e. in a secondary step I could then extract the items from the list according to some criteria. </p>
<p>However, can I do that at the point of the original appending? Something like...</p>
<pre><code>store = [[], [], [], []]
...
counter = 0
for el in soup.find_all('td'):
    store[counter].append(el.get_text())
counter = counter + 1
</code></pre>
<p>This isn't quite right as it only appends to the first object in the store list. If I put the counter in the td for-loop, then it will increase for each time td element is iterated, when in actual fact I only want it to increase when I have finished iterating through a particular page source ( which is itself an iteration of a filter combination).</p>
<p>I am stumped. Is what I am trying even possible? If so, where should I put the counter? Or should I use some other technique?</p>
| 0 |
2016-09-16T15:40:29Z
| 39,535,671 |
<p>Create a new list object per filter combination, so inside the <code>for w in weeks:</code> loop. Append your cell text to <em>that</em> list, and append the per-filter list this produces to <code>store</code>:</p>
<pre><code>def getData():
    store = []
    year = ['2015', '2014']
    for y in year:
        # ... elided for brevity
        weeks = ['1', '2']
        for w in weeks:
            perfilter = []
            store.append(perfilter)
            # ... elided for brevity
            for el in soup.find_all('td'):
                perfilter.append(el.get_text())
</code></pre>
| 1 |
2016-09-16T15:56:59Z
|
[
"python",
"for-loop"
] |
PyKCS11 unhashable list
| 39,535,387 |
<p>A python script of mine is designed to get detailed information of slots/tokens in a particular .so library. The output looks like this:</p>
<pre><code>Library manufacturerID: Safenet, Inc.
Available Slots: 4
Slot no: 0
slotDescription: ProtectServer K5E:00045
manufacturerID: SafeNet Inc.
TokenInfo
label: CKM
manufacturerID: SafeNet Inc.
model: K5E:PL25
Opened session 0x00000002
Found 38 objects: [5021, 5022, 5014, 5016, 4, 5, 6, 7, 8, 9, 16, 18, 23, 24, 26, 27, 29, 30, 32, 33, 35, 36, 38, 39, 5313, 5314, 4982, 5325, 5326, 5328, 5329, 5331, 5332, 5335, 5018, 4962, 5020, 4963]
</code></pre>
<p>I am able to open the session and get the information. Where I run into problems is retrieving the attributes of said keys in the library.</p>
<p>I created my own template for desired attributes needed for my specifications, the following:</p>
<pre><code>all_attributes = PyKCS11.CKA.keys()
# only use the integer values and not the strings like 'CKM_RSA_PKCS'
all_attributes = [e for e in all_attributes if isinstance(e, int)]
attributes = [
    ["CKA_ENCRYPT", PyKCS11.CKA_ENCRYPT],
    ["CKA_CLASS", PyKCS11.CKA_CLASS],
    ["CKA_DECRYPT", PyKCS11.CKA_DECRYPT],
    ["CKA_SIGN", PyKCS11.CKA_SIGN],
    ["CKA_VERIFY", PyKCS11.CKA_VERIFY],
    ["CKA_ID", PyKCS11.CKA_ID],
    ["CKA_MODULUS", PyKCS11.CKA_MODULUS],
    ["CKA_MODULUS_BITS", PyKCS11.CKA_MODULUS_BITS],
    ["CKA_PUBLIC_EXPONENT", PyKCS11.CKA_PUBLIC_EXPONENT],
    ["CKA_PRIVATE_EXPONENT", PyKCS11.CKA_PRIVATE_EXPONENT],
]
</code></pre>
<p>I'm getting an unhashable type: 'list' TypeError when trying to dump the attributes on the following block: </p>
<pre><code>print "Dumping attributes:"
for q, a in zip(all_attributes, attributes):
    if a == None:
        # undefined (CKR_ATTRIBUTE_TYPE_INVALID) attribute
        continue
    if q == PyKCS11.CKA_CLASS:
        print format_long % (PyKCS11.CKA[q], PyKCS11.CKO[a], a)
    elif q == PyKCS11.CKA_CERTIFICATE_TYPE:
        print format_long % (PyKCS11.CKA[q], PyKCS11.CKC[a], a)
    elif q == PyKCS11.CKA_KEY_TYPE:
        print format_long % (PyKCS11.CKA[q], PyKCS11.CKK[a], a)
    elif session.isBin(q):
        print format_binary % (PyKCS11.CKA[q], len(a))
        if a:
            print dump(''.join(map(chr, a)), 16),
    elif q == PyKCS11.CKA_SERIAL_NUMBER:
        print format_binary % (PyKCS11.CKA[q], len(a))
        if a:
            print hexdump(a, 16),
    else:
        print format_normal % (PyKCS11.CKA[q], a)
</code></pre>
<p>This line specifically is generating the error: </p>
<pre><code>if q == PyKCS11.CKA_CLASS:
    print format_long % (PyKCS11.CKA[q], PyKCS11.CKO[a], a)
</code></pre>
<p>I understand that you can't use a list as the key in a dict, since dict keys need to be immutable. How would I use a tuple in this situation?</p>
| 0 |
2016-09-16T15:40:53Z
| 40,052,063 |
<p>(This answer was put together in the context of your other questions)</p>
<p>To read attributes of a PKCS#11 object <code>o</code> you can use the following code:</p>
<pre><code># List which attributes you want to read
attributeIds = [
    CKA_ENCRYPT,
    CKA_CLASS,
    CKA_DECRYPT,
    CKA_SIGN,
    CKA_VERIFY,
    CKA_ID,
    CKA_MODULUS,
    CKA_MODULUS_BITS,
    CKA_PUBLIC_EXPONENT,
    CKA_PRIVATE_EXPONENT
]

# Read them
attributeValues = session.getAttributeValue(o, attributeIds)

# Print them (variant 1 -- more readable)
for i in range(0, len(attributeIds)):
    attributeName = CKA[attributeIds[i]]
    print("Attribute %s: %s" % (attributeName, attributeValues[i]))

# Print them (variant 2 -- more concise)
for curAttrId, curAttrValue in zip(attributeIds, attributeValues):
    attributeName = CKA[curAttrId]
    print("Attribute %s: %s" % (attributeName, curAttrValue))
</code></pre>
<p>Some additional (random) notes:</p>
<ul>
<li><p>the <a href="http://pkcs11wrap.sourceforge.net/api/PyKCS11.Session-class.html#getAttributeValue" rel="nofollow">Session.getAttributeValue() method</a> method requires a list of attribute ids. You are constructing a list of "lists containing <em>Attribute name (string)</em> and <em>Attribute id (int)</em>" -- without any conversion -- this can't work</p></li>
<li><p>the <code>CKA_PRIVATE_EXPONENT</code> attribute is sensitive for RSA private keys. You probably won't be able to read it unless the <code>CKA_SENSITIVE</code> attribute is set to <code>False</code> (see e.g. <a href="http://stackoverflow.com/a/14933297/5128464">here</a>)</p></li>
<li><p>be sure to read only valid attributes for specific object (based on type, mechanism, sensitivity...)</p></li>
<li><p>the snippet above does not use the <code>PyKCS11.</code> prefix to reference PyKCS11 object members as it assumes they are imported with <code>from PyKCS11 import *</code> directive (I am not enough into python to tell you which way is the good one)</p></li>
<li><p>the attribute id <-> attribute name mapping is based on fact, that the <code>PKCS11.CKA</code> dictionary contains both string keys with int values and int keys with string keys (you can dump this dictionary yourself or check the <a href="https://bitbucket.org/PyKCS11/pykcs11/src/856b967cefb0e946d4b03614f71ddcaa9cbd8153/PyKCS11/__init__.py?at=1.3.2&fileviewer=file-view-default#__init__.py-77" rel="nofollow">source code</a>)</p></li>
<li><p>it might be much easier to dump the attributes with <code>print(o)</code></p></li>
<li><p>I would recommend reading relevant parts of the <a href="https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/pkcs-11-cryptographic-token-interface-standard.htm" rel="nofollow">PKCS#11 standard</a></p></li>
<li><p>(you might get your answer faster if you referenced <a href="https://bitbucket.org/PyKCS11/pykcs11/src/856b967cefb0e946d4b03614f71ddcaa9cbd8153/samples/dumpit.py?at=1.3.2&fileviewer=file-view-default#dumpit.py-224" rel="nofollow">the origins of your thoughts</a>)</p></li>
</ul>
<p>Good luck!</p>
| 0 |
2016-10-14T21:11:41Z
|
[
"python",
"list",
"dictionary",
"typeerror",
"pkcs#11"
] |
AttributeError: 'DataFrame' object has no attribute 'map'
| 39,535,447 |
<p>I wanted to convert the Spark data frame to an RDD using the code below:</p>
<pre><code>from pyspark.mllib.clustering import KMeans
spark_df = sqlContext.createDataFrame(pandas_df)
rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")
</code></pre>
<p>The detailed error message is:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-a19a1763d3ac> in <module>()
1 from pyspark.mllib.clustering import KMeans
2 spark_df = sqlContext.createDataFrame(pandas_df)
----> 3 rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
4 model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")
/home/edamame/spark/spark-2.0.0-bin-hadoop2.6/python/pyspark/sql/dataframe.pyc in __getattr__(self, name)
842 if name not in self.columns:
843 raise AttributeError(
--> 844 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
845 jc = self._jdf.apply(name)
846 return Column(jc)
AttributeError: 'DataFrame' object has no attribute 'map'
</code></pre>
<p>Does anyone know what I did wrong here? Thanks!</p>
| 2 |
2016-09-16T15:44:46Z
| 39,536,218 |
<p>You can't <code>map</code> a dataframe, but you can convert the dataframe to an RDD and map that by doing <code>spark_df.rdd.map()</code>. Prior to Spark 2.0, <code>spark_df.map</code> would alias to <code>spark_df.rdd.map()</code>. With Spark 2.0, you must explicitly call <code>.rdd</code> first. </p>
| 3 |
2016-09-16T16:28:47Z
|
[
"python",
"apache-spark",
"pyspark",
"spark-dataframe",
"apache-spark-mllib"
] |
appending to python lists
| 39,535,481 |
<p>I have a web scraper that returns values like the example below. </p>
<pre><code># Other code above here.
test = []
results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
# Other code about result.find_element_by_blah_blah
product_feature = result.find_element_by_class_name("prod-feature-icon")
for each_item in product_feature.find_elements_by_tag_name('img'):
zz = test.append(each_item.get_attribute('src')[34:-4]) # returning the values I want
print(zz)
</code></pre>
<p>The code above would print out the results like this: (Which is the values I want)</p>
<pre><code>TCP_active
CI
DOH_active
TCP_active
CI
DOH
TCP
CI_active
DOH_active
</code></pre>
<p>I want to achieve the results below:</p>
<pre><code>[TCP_active, CI, DOH_active]
[TCP_active, CI, DOH]
[TCP, CI_active, DOH_active]
</code></pre>
<p>how should I be doing it?</p>
<p>I tried:</p>
<pre><code>test.append(each_item.get_attribute('src')[34:-4])
</code></pre>
<p>But this gives me:</p>
<pre><code>[TCP_active]
[TCP_active, CI]
[TCP_active, CI, DOH_active]
[TCP_active, CI, DOH_active, TCP]
...
</code></pre>
<p>Hope my explanation is clear </p>
| 0 |
2016-09-16T15:46:16Z
| 39,535,621 |
<p>Rather than <code>print</code>, append your results to lists; one new list per iteration of the outer loop:</p>
<pre><code>test = []
results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
# Other code about result.find_element_by_blah_blah
product_feature = result.find_element_by_class_name("prod-feature-icon")
features = []
for each_item in product_feature.find_elements_by_tag_name('img'):
features.append(each_item.get_attribute('src')[34:-4])
test.append(features)
</code></pre>
<p>You can print <code>features</code> if you want to, or <code>test</code>, just to see what is happening at each level of your <code>for</code> loops.</p>
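<p>A minimal self-contained sketch of the same nested-list pattern, with hypothetical literal data standing in for the Selenium calls:</p>

```python
# Hypothetical stand-in for the scraped page: each inner tuple is the set
# of image names found under one "li.result_content" element.
scraped = [('TCP_active', 'CI', 'DOH_active'),
           ('TCP_active', 'CI', 'DOH'),
           ('TCP', 'CI_active', 'DOH_active')]

test = []
for result in scraped:
    features = []                 # fresh inner list per outer iteration
    for item in result:
        features.append(item)     # append the value; do not assign the result
    test.append(features)

for row in test:
    print(row)
# ['TCP_active', 'CI', 'DOH_active']
# ['TCP_active', 'CI', 'DOH']
# ['TCP', 'CI_active', 'DOH_active']
```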
| 2 |
2016-09-16T15:54:34Z
|
[
"python",
"list"
] |
appending to python lists
| 39,535,481 |
<p>I have a web scraper that returns values like the example below. </p>
<pre><code># Other code above here.
test = []
results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
# Other code about result.find_element_by_blah_blah
product_feature = result.find_element_by_class_name("prod-feature-icon")
for each_item in product_feature.find_elements_by_tag_name('img'):
zz = test.append(each_item.get_attribute('src')[34:-4]) # returning the values I want
print(zz)
</code></pre>
<p>The code above would print out the results like this: (Which is the values I want)</p>
<pre><code>TCP_active
CI
DOH_active
TCP_active
CI
DOH
TCP
CI_active
DOH_active
</code></pre>
<p>I want to achieve the results below:</p>
<pre><code>[TCP_active, CI, DOH_active]
[TCP_active, CI, DOH]
[TCP, CI_active, DOH_active]
</code></pre>
<p>how should I be doing it?</p>
<p>I tried:</p>
<pre><code>test.append(each_item.get_attribute('src')[34:-4])
</code></pre>
<p>But this gives me:</p>
<pre><code>[TCP_active]
[TCP_active, CI]
[TCP_active, CI, DOH_active]
[TCP_active, CI, DOH_active, TCP]
...
</code></pre>
<p>Hope my explanation is clear </p>
| 0 |
2016-09-16T15:46:16Z
| 39,535,716 |
<p>The code below will give the output you want. Note that <code>list.append</code> returns <code>None</code>, so its result must not be assigned back to <code>zz</code>:</p>
<pre><code>test = []
results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
    # Other code about result.find_element_by_blah_blah
    product_feature = result.find_element_by_class_name("prod-feature-icon")
    zz = []
    for each_item in product_feature.find_elements_by_tag_name('img'):
        zz.append(each_item.get_attribute('src')[34:-4])  # the values you want
    test.append(zz)
    print(zz)
</code></pre>
<p>If you want to store the data and not print it, use a dictionary something like this:</p>
<pre><code>zz_store = {}
results = driver.find_elements_by_css_selector("li.result_content")
for result in results:
    # Other code about result.find_element_by_blah_blah
    product_feature = result.find_element_by_class_name("prod-feature-icon")
    zz = []
    for each_item in product_feature.find_elements_by_tag_name('img'):
        zz.append(each_item.get_attribute('src')[34:-4])  # the values you want
    zz_store[result] = zz
    print(zz)
</code></pre>
| 0 |
2016-09-16T15:59:03Z
|
[
"python",
"list"
] |
QT Creator converted ui to py working in windows terminal but not working with spyder IDE
| 39,535,486 |
<p>I am following a tutorial on making a basic GUI with QT creator and python. I have successfully created a window with a button that closes the window when pressed. It works fine in QT Creator and I converted the ui file to py and it runs and creates the window as expected when I open a command prompt window and call it with <code>python main.py</code>. I enjoy working in the spyder IDE and would like to continue to do so. The issue is that if I open the main.py in spyder and run it will first give the error </p>
<pre><code>An exception has occurred, use %tb to see the full traceback.
SystemExit: -1
</code></pre>
<p>then if I attempt to run it a second time the kernel will hang. </p>
<p>What is required to run this script in spyder successfully? Here is the code:</p>
<pre><code>from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(400, 300)
self.centralWidget = QtGui.QWidget(MainWindow)
self.centralWidget.setObjectName(_fromUtf8("centralWidget"))
self.horizontalLayout = QtGui.QHBoxLayout(self.centralWidget)
self.horizontalLayout.setMargin(11)
self.horizontalLayout.setSpacing(6)
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.pushButton = QtGui.QPushButton(self.centralWidget)
self.pushButton.setObjectName(_fromUtf8("pushButton"))
self.horizontalLayout.addWidget(self.pushButton)
MainWindow.setCentralWidget(self.centralWidget)
self.menuBar = QtGui.QMenuBar(MainWindow)
self.menuBar.setGeometry(QtCore.QRect(0, 0, 400, 21))
self.menuBar.setObjectName(_fromUtf8("menuBar"))
MainWindow.setMenuBar(self.menuBar)
self.mainToolBar = QtGui.QToolBar(MainWindow)
self.mainToolBar.setObjectName(_fromUtf8("mainToolBar"))
MainWindow.addToolBar(QtCore.Qt.TopToolBarArea, self.mainToolBar)
self.statusBar = QtGui.QStatusBar(MainWindow)
self.statusBar.setObjectName(_fromUtf8("statusBar"))
MainWindow.setStatusBar(self.statusBar)
self.retranslateUi(MainWindow)
QtCore.QObject.connect(self.pushButton, QtCore.SIGNAL(_fromUtf8("clicked()")), MainWindow.close)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
self.pushButton.setText(_translate("MainWindow", "Close", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
MainWindow = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
| 0 |
2016-09-16T15:46:24Z
| 39,537,547 |
<p>It's hard to say without the actual traceback. I'm guessing it's one of two things, both related to the fact that Spyder is also built on Qt.</p>
<ol>
<li><p>You're trying to run your app within the Spyder environment. Your app tries to create a <code>QApplication</code>, but because Spyder already runs on Qt, a <code>QApplication</code> already exists and it errors.</p></li>
<li><p>Mismatched versions of PyQt/Qt. I'm assuming Spyder ships with its own version of Qt/PyQt and you didn't have to install them separately to run Spyder. But I'm guessing you also installed your own version of Qt/PyQt, and PyQt is loading the wrong DLLs (the ones shipped with Spyder rather than the ones you installed).</p></li>
</ol>
<p>Some things to check:</p>
<ul>
<li><p>Check to make sure you're launching an external process from Spyder (i.e. in a brand new shell with a new python process) and not simply running the code within the Spyder embedded python process. I don't know Spyder well enough to know how to do this, but most IDEs have a setting that controls how they launch external processes.</p></li>
<li><p>Check the <code>PATH</code> environment variable from your launched script. It's possible that the Spyder directory is being added before your installation of Qt, causing the Spyder Qt dll's to load instead of your system install of Qt when importing PyQt.</p></li>
</ul>
| 0 |
2016-09-16T17:58:56Z
|
[
"python",
"user-interface",
"pyqt",
"pyqt4",
"spyder"
] |
Accessing rows and columns in dictionary
| 39,535,522 |
<p>I have a dictionary that has its keys as tuples and values assigned to these keys.</p>
<p>I want to perform a set of actions based on the keys positions.</p>
<p>Here is my code that I have written so far.</p>
<p>Dictionary:</p>
<pre><code>p={(0, 1): 2, (1, 2): 6, (0, 0): 1, (2, 0): 7, (1, 0): 4, (2, 2): 9, (1, 1): 5, (2, 1): 8, (0, 2): 3}
</code></pre>
<p>Desired output is:</p>
<p>I want the values of individual rows as shown below.</p>
<pre><code>q=[[1,2,3], [4,5,6], [7,8,9]]
</code></pre>
<p>I wrote this code that can do this trick for columns:</p>
<pre><code>r=[[p[(x,y)] for x in range(3)] for y in range(3)]
</code></pre>
<p>for which the output looks like this:</p>
<pre><code>r=[[1, 4, 7], [2, 5, 8], [3, 6, 9]]
</code></pre>
<p>I know how to do this with the following set of code:</p>
<pre><code>z=[]
for i in range(3):
z.append([p[i,j] for j in range(3)])
</code></pre>
<p>Which gives me:</p>
<pre><code>z=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
</code></pre>
<p>My question is, can I do that operation in just one list comprehension?</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-16T15:49:05Z
| 39,535,662 |
<p>If you flip <code>x</code> and <code>y</code> in your code for <code>r</code>, it seems to output the way you desire. For example,</p>
<pre><code>r=[[p[(x,y)] for y in range(3)] for x in range(3)]
</code></pre>
<p>results in </p>
<blockquote>
<p>[[1, 2, 3], [4, 5, 6], [7, 8, 9]]</p>
</blockquote>
| 1 |
2016-09-16T15:56:39Z
|
[
"python",
"dictionary"
] |
Accessing rows and columns in dictionary
| 39,535,522 |
<p>I have a dictionary that has its keys as tuples and values assigned to these keys.</p>
<p>I want to perform a set of actions based on the keys positions.</p>
<p>Here is my code that I have written so far.</p>
<p>Dictionary:</p>
<pre><code>p={(0, 1): 2, (1, 2): 6, (0, 0): 1, (2, 0): 7, (1, 0): 4, (2, 2): 9, (1, 1): 5, (2, 1): 8, (0, 2): 3}
</code></pre>
<p>Desired output is:</p>
<p>I want the values of individual rows as shown below.</p>
<pre><code>q=[[1,2,3], [4,5,6], [7,8,9]]
</code></pre>
<p>I wrote this code that can do this trick for columns:</p>
<pre><code>r=[[p[(x,y)] for x in range(3)] for y in range(3)]
</code></pre>
<p>for which the output looks like this:</p>
<pre><code>r=[[1, 4, 7], [2, 5, 8], [3, 6, 9]]
</code></pre>
<p>I know how to do this with the following set of code:</p>
<pre><code>z=[]
for i in range(3):
z.append([p[i,j] for j in range(3)])
</code></pre>
<p>Which gives me:</p>
<pre><code>z=[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
</code></pre>
<p>My question is, can I do that operation in just one list comprehension?</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-16T15:49:05Z
| 39,535,750 |
<p>It turns out I had already written the solution in one of my earlier attempts without realizing it, and kept trying different combinations of code.</p>
<p>Here is the answer:</p>
<pre><code>z=[[p[(i,j)] for j in range(3)] for i in range(3)]
</code></pre>
<p>Thanks!</p>
| 1 |
2016-09-16T16:00:52Z
|
[
"python",
"dictionary"
] |
Expecting property name enclosed in double quotes - converting a string to a json object using Python
| 39,535,527 |
<p>I am trying to convert a string to JSON using python using the following code:</p>
<pre><code>myStr = '[{u"total": "54", u"value": "54", u"label": u"16 Sep"}, {u"total": "58", u"value": "4", u"label": u"16 Sep"}, {u"total": "65", u"value": "7", u"label": u"16 Sep"}, {u"total": "65", u"value": "0", u"label": u"16 Sep"}]'
import json
json.loads(myStr)
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
</code></pre>
<p>This makes no sense as every property has double quotes, not single ones. Any help?</p>
| 0 |
2016-09-16T15:49:23Z
| 39,535,598 |
<p>Remove the <code>u</code> unicode prefixes from the string. They are Python string-literal syntax, not valid JSON, so the parser fails at the first <code>u"..."</code> property name.</p>
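<p>A workaround sketch, assuming the string really is a Python literal (as the <code>u</code> prefixes suggest): parse it with <code>ast.literal_eval</code>, then re-serialize with <code>json.dumps</code> to obtain genuine JSON:</p>

```python
import ast
import json

# The u"..." prefixes are Python literal syntax, not JSON, so json.loads fails.
my_str = '[{u"total": "54", u"value": "54", u"label": u"16 Sep"}]'

data = ast.literal_eval(my_str)   # safely evaluate the Python literal
valid = json.dumps(data)          # now a genuine JSON string
print(json.loads(valid)[0]["label"])  # 16 Sep
```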
| 2 |
2016-09-16T15:53:24Z
|
[
"python",
"json"
] |
Expecting property name enclosed in double quotes - converting a string to a json object using Python
| 39,535,527 |
<p>I am trying to convert a string to JSON using python using the following code:</p>
<pre><code>myStr = '[{u"total": "54", u"value": "54", u"label": u"16 Sep"}, {u"total": "58", u"value": "4", u"label": u"16 Sep"}, {u"total": "65", u"value": "7", u"label": u"16 Sep"}, {u"total": "65", u"value": "0", u"label": u"16 Sep"}]'
import json
json.loads(myStr)
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
</code></pre>
<p>This makes no sense as every property has double quotes, not single ones. Any help?</p>
| 0 |
2016-09-16T15:49:23Z
| 39,535,692 |
<p>Putting together the puzzle pieces from this question and <a href="http://stackoverflow.com/questions/39534841/getting-syntaxerror-json-parse-error-expected-when-trying-to-convert-a-str">Getting SyntaxError: JSON Parse error: Expected '}' when trying to convert a string into JSON using javascript</a>, you want to do:</p>
<pre><code>myStr = [{u"total": "54", u"value": "54", u"label": u"16 Sep"}, {u"total": "58", u"value": "4", u"label": u"16 Sep"}, {u"total": "65", u"value": "7", u"label": u"16 Sep"}, {u"total": "65", u"value": "0", u"label": u"16 Sep"}]
import json
print(json.dumps(myStr))
</code></pre>
<p>Then copy-paste the output string into JavaScript.</p>
| 0 |
2016-09-16T15:57:45Z
|
[
"python",
"json"
] |
Expecting property name enclosed in double quotes - converting a string to a json object using Python
| 39,535,527 |
<p>I am trying to convert a string to JSON using python using the following code:</p>
<pre><code>myStr = '[{u"total": "54", u"value": "54", u"label": u"16 Sep"}, {u"total": "58", u"value": "4", u"label": u"16 Sep"}, {u"total": "65", u"value": "7", u"label": u"16 Sep"}, {u"total": "65", u"value": "0", u"label": u"16 Sep"}]'
import json
json.loads(myStr)
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
</code></pre>
<p>This makes no sense as every property has double quotes, not single ones. Any help?</p>
| 0 |
2016-09-16T15:49:23Z
| 39,535,754 |
<pre><code>import json
myStr = '[{"total": 54, "value": 54, "label": "u16 Sep"}, {"total": 58, "value": 4, "label": "u16 Sep"}, {"total": 65, "value": 7, "label":" u16 Sep"}, {"total": 65, "value": 0,"label": "u16 Sep"}]'
obj = json.loads(myStr)
print(repr(obj))
</code></pre>
<p>You are trying to load invalid JSON; you can validate it <a href="http://json.parser.online.fr/" rel="nofollow">here</a>. I edited your JSON above so that it parses.</p>
<p><a href="http://www.w3schools.com/json/json_syntax.asp" rel="nofollow">Here You find JSON Syntax rules</a></p>
| 0 |
2016-09-16T16:01:01Z
|
[
"python",
"json"
] |
python: DataFrame.append does not append element
| 39,535,597 |
<p>I am working one week with python and I need some help.
I want that if certain condition is fulfilled, it adds a value to a database.
My program doesn't give an error but it doesn't append an element to my database </p>
<pre><code>import pandas as pd
noTEU = pd.DataFrame() # empty database
index_TEU = 0
for vessel in list:
if condition is fullfilled:
imo_vessel = pd.DataFrame({'imo': vessel}, index=[index_TEU])
noTEU.append(imo_vessel) # I want here to add an element to my database
index_TEU = index_TEU + 1
</code></pre>
<p>If I run this, at the end I still get an empty dataframe. I have no idea why it doesn't do what I want it to do </p>
| 1 |
2016-09-16T15:53:21Z
| 39,535,717 |
<p>You should reassign the dataframe, like this:</p>
<pre><code>import pandas as pd
noTEU = pd.DataFrame() # empty database
index_TEU = 0
for vessel in vessels:
    if condition_is_fulfilled:
        imo_vessel = pd.DataFrame({'imo': vessel}, index=[index_TEU])
        noTEU = noTEU.append(imo_vessel)  # append returns a new DataFrame
        index_TEU = index_TEU + 1
</code></pre>
<p>Also, don't use the name <code>list</code> for your own variable (renamed to <code>vessels</code> above), because it shadows the built-in <code>list</code> type.</p>
| 0 |
2016-09-16T15:59:04Z
|
[
"python",
"pandas"
] |
Comparing two lists of dictionaries in Python
| 39,535,712 |
<p>I would like to make a function that compares two lists of dictionaries in python by looking at their keys. When list A contains a dictionary that has an entry with the same key as an entry in the dictionary in list B, the function should return True.</p>
<p>Here's an example of list A and B:</p>
<pre><code>listA = [{'key1':'value1'}, {'key2':'value2'}]
listB = [{'key1':'value3'}, {'key3':'value4'}]
</code></pre>
<p>In this example the function should return True, because key1 is a match.</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-16T15:58:50Z
| 39,535,889 |
<p>First you have to take the keys out of each list of dictionaries, then compare:</p>
<pre><code>keysA = [k for x in listA for k in x.keys()]
keysB = [k for x in listB for k in x.keys()]
any(k in keysB for k in keysA)
</code></pre>
| 2 |
2016-09-16T16:08:06Z
|
[
"python",
"list",
"dictionary"
] |
Comparing two lists of dictionaries in Python
| 39,535,712 |
<p>I would like to make a function that compares two lists of dictionaries in python by looking at their keys. When list A contains a dictionary that has an entry with the same key as an entry in the dictionary in list B, the function should return True.</p>
<p>Here's an example of list A and B:</p>
<pre><code>listA = [{'key1':'value1'}, {'key2':'value2'}]
listB = [{'key1':'value3'}, {'key3':'value4'}]
</code></pre>
<p>In this example the function should return True, because key1 is a match.</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-16T15:58:50Z
| 39,536,071 |
<p>Is this what you are looking for?</p>
<pre><code>def cmp_dict(a, b):
return any(frozenset(c) & frozenset(d) for c, d in zip(a, b))
</code></pre>
<p>Here is a demonstration of its usage:</p>
<pre><code>>>> listA = [{'key1':'value1'}, {'key2':'value2'}]
>>> listB = [{'key1':'value3'}, {'key3':'value4'}]
>>> cmp_dict(listA, listB)
True
>>>
</code></pre>
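<p>Note that the <code>zip</code>-based version only compares dictionaries sitting at the same index in each list. If a shared key at <em>different</em> positions should also count, flatten the keys first. A sketch:</p>

```python
listA = [{'key1': 'value1'}, {'key2': 'value2'}]
listB = [{'key3': 'value3'}, {'key1': 'value4'}]  # shared key, different position

# Pairwise (zip) check misses the match:
pairwise = any(frozenset(c) & frozenset(d) for c, d in zip(listA, listB))
print(pairwise)  # False

# Any-to-any check over all keys:
keys_a = {k for d in listA for k in d}
keys_b = {k for d in listB for k in d}
print(bool(keys_a & keys_b))  # True
```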
| 0 |
2016-09-16T16:20:03Z
|
[
"python",
"list",
"dictionary"
] |
Setup pjsip for Python
| 39,535,729 |
<p>I'm trying to install pjsip's Python bindings but am running into a build error that I feel is environmental; I'm just not able to figure out what's wrong.</p>
<p>I'm able to build pjsip without issue but run into a problem when trying to build the python bindings -- I'm getting an error from ld about a bad value in one of the static libraries.</p>
<p>Any thoughts?</p>
<pre><code>root@0fcbc7b108af:/src/pjproject-2.5.5/pjsip-apps/src/python# python setup.py install
running install
running build
running build_py
running build_ext
building '_pjsua' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DPJ_AUTOCONF=1 -I/src/pjproject-2.5.5/pjlib/include -I/src/pjproject-2.5.5/pjlib-util/include -I/src/pjproject-2.5.5/pjnath/include -I/src/pjproject-2.5.5/pjmedia/include -I/src/pjproject-2.5.5/pjsip/include -I/usr/include/python2.7 -c _pjsua.c -o build/temp.linux-x86_64-2.7/_pjsua.o
_pjsua.c: In function 'py_pjsua_enum_transports':
_pjsua.c:1202:17: warning: variable 'status' set but not used [-Wunused-but-set-variable]
pj_status_t status;
^
_pjsua.c: In function 'py_pjsua_conf_get_port_info':
_pjsua.c:2338:9: warning: variable 'status' set but not used [-Wunused-but-set-variable]
int status;
^
_pjsua.c: In function 'py_pjsua_get_snd_dev':
_pjsua.c:2714:9: warning: variable 'status' set but not used [-Wunused-but-set-variable]
int status;
^
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/_pjsua.o -L/src/pjproject-2.5.5/pjlib/lib -L/src/pjproject-2.5.5/pjlib-util/lib -L/src/pjproject-2.5.5/pjnath/lib -L/src/pjproject-2.5.5/pjmedia/lib -L/src/pjproject-2.5.5/pjsip/lib -L/src/pjproject-2.5.5/third_party/lib -lpjsua-x86_64-unknown-linux-gnu -lpjsip-ua-x86_64-unknown-linux-gnu -lpjsip-simple-x86_64-unknown-linux-gnu -lpjsip-x86_64-unknown-linux-gnu -lpjmedia-codec-x86_64-unknown-linux-gnu -lpjmedia-x86_64-unknown-linux-gnu -lpjmedia-videodev-x86_64-unknown-linux-gnu -lpjmedia-audiodev-x86_64-unknown-linux-gnu -lpjmedia-x86_64-unknown-linux-gnu -lpjnath-x86_64-unknown-linux-gnu -lpjlib-util-x86_64-unknown-linux-gnu -lsrtp-x86_64-unknown-linux-gnu -lresample-x86_64-unknown-linux-gnu -lgsmcodec-x86_64-unknown-linux-gnu -lspeex-x86_64-unknown-linux-gnu -lilbccodec-x86_64-unknown-linux-gnu -lg7221codec-x86_64-unknown-linux-gnu -lyuv-x86_64-unknown-linux-gnu -lpj-x86_64-unknown-linux-gnu -lm -lrt -lpthread -lasound -o build/lib.linux-x86_64-2.7/_pjsua.so
/usr/bin/ld: /src/pjproject-2.5.5/pjsip/lib/libpjsua-x86_64-unknown-linux-gnu.a(pjsua_acc.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/src/pjproject-2.5.5/pjsip/lib/libpjsua-x86_64-unknown-linux-gnu.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre>
| 0 |
2016-09-16T15:59:53Z
| 39,573,027 |
<p>I was able to find an answer in an old mailing list entry (<a href="http://lists.pjsip.org/pipermail/pjsip_lists.pjsip.org/2008-August/004456.html" rel="nofollow">http://lists.pjsip.org/pipermail/pjsip_lists.pjsip.org/2008-August/004456.html</a>).</p>
<p>Set CFLAGS=-fPIC and rebuild the PJSIP library (make clean && ./configure && make dep && make), then follow the directions for making the Python library.</p>
| 0 |
2016-09-19T12:19:26Z
|
[
"python",
"pjsip"
] |
apply vector of functions to vector of arguments
| 39,535,756 |
<p>I'd like to take in a list of functions, <code>funclist</code>, and return a new function which takes in a list of arguments, <code>arglist</code>, and applies the <code>i</code>th function in <code>funclist</code> to the <code>i</code>th element of <code>arglist</code>, returning the results in a list:</p>
<pre><code>def myfunc(funclist):
return lambda arglist: [ funclist[i](elt) for i, elt in enumerate(arglist) ]
</code></pre>
<p>This is not optimized for parallel/vectorized application of the independent functions in <code>funclist</code> to the independent arguments in <code>arglist</code>. Is there a built-in function in python or numpy (or otherwise) that will return a more optimized version of the <code>lambda</code> above? It would be similar in spirit to <code>map</code> or <code>numpy.vectorize</code> (but obviously not the same), and so far I haven't found anything.</p>
| 0 |
2016-09-16T16:01:04Z
| 39,536,845 |
<p>In <code>numpy</code> terms, true vectorization means performing the iterative work in compiled code. Usually that requires using <code>numpy</code> functions that operate on whole arrays, doing things like addition and indexing.</p>
<p><code>np.vectorize</code> is a way of iterating over several arrays and using their elements in a function that does not handle arrays. It doesn't do much in compiled code, so it does not improve speed much. It's most valuable as a way of applying <code>numpy</code> broadcasting rules to your own scalar function.</p>
<p><code>map</code> is a variant on list comprehension and has basically the same speed. A list comprehension has more expressive power, working with several lists.</p>
<p>@Tore's zipped comprehension is a clear expression of this task:</p>
<pre><code>[f(args) for f, args in zip(funclist, arglist)]
</code></pre>
<p><code>map</code> can work with several input lists:</p>
<pre><code>In [415]: arglist=[np.arange(3),np.arange(1,4)]
In [416]: fnlist=[np.sum, np.prod]
In [417]: [f(a) for f,a in zip(fnlist, arglist)]
Out[417]: [3, 6]
In [418]: list(map(lambda f,a: f(a), fnlist, arglist))
Out[418]: [3, 6]
</code></pre>
<p>Your version is a little wordier, but functionally the same.</p>
<pre><code>In [423]: def myfunc(funclist):
...: return lambda arglist: [ funclist[i](elt) for i, elt in enumerate(arglist) ]
In [424]: myfunc(fnlist)
Out[424]: <function __main__.myfunc.<locals>.<lambda>>
In [425]: myfunc(fnlist)(arglist)
Out[425]: [3, 6]
</code></pre>
<p>It has the advantage of generating a function that can be applied to different arglists:</p>
<pre><code>In [426]: flist=myfunc(fnlist)
In [427]: flist(arglist)
Out[427]: [3, 6]
In [428]: flist(arglist[::-1])
Out[428]: [6, 0]
</code></pre>
<p>I would have written <code>myfunc</code> more like:</p>
<pre><code>def altfun(funclist):
def foo(arglist):
return [f(a) for f,a in zip(funclist, arglist)]
return foo
</code></pre>
<p>but the differences are just stylistic.</p>
<p>================</p>
<p>Time test for <code>zip</code> v <code>enumerate</code>:</p>
<pre><code>In [154]: funclist=[sum]*N
In [155]: arglist=[list(range(N))]*N
In [156]: sum([funclist[i](args) for i,args in enumerate(arglist)])
Out[156]: 499500000
In [157]: sum([f(args) for f,args in zip(funclist, arglist)])
Out[157]: 499500000
In [158]: timeit [funclist[i](args) for i,args in enumerate(arglist)]
10 loops, best of 3: 43.5 ms per loop
In [159]: timeit [f(args) for f,args in zip(funclist, arglist)]
10 loops, best of 3: 43.1 ms per loop
</code></pre>
<p>Basically the same. But <code>map</code> is 2x faster</p>
<pre><code>In [161]: timeit list(map(lambda f,a: f(a), funclist, arglist))
10 loops, best of 3: 23.1 ms per loop
</code></pre>
<p>Packaging the iteration in a callable is also faster</p>
<pre><code>In [165]: timeit altfun(funclist)(arglist)
10 loops, best of 3: 23 ms per loop
In [179]: timeit myfunc(funclist)(arglist)
10 loops, best of 3: 22.6 ms per loop
</code></pre>
| 1 |
2016-09-16T17:11:16Z
|
[
"python",
"numpy",
"vectorization"
] |
Python version compatibility with Windows Server 2003
| 39,535,762 |
<p>Which versions of python are compatible with Windows Server 2003? I'm using 32-bit (x86)</p>
<p>Can versions 3.x be installed?</p>
<p>Why I'm asking is because I've installed Python 3.5.2 on Windows Server 2003 and when I try to run it, it gives me an error "python.exe is not a valid Win32 application" </p>
| 0 |
2016-09-16T16:01:25Z
| 39,536,032 |
<p>The <a href="https://www.python.org/" rel="nofollow">official site</a> (hover over Downloads) states that Python 3.5+ cannot be used on Windows XP or earlier (implying that earlier Python versions are usable). Windows Server 2003 shares much of its codebase with XP, so 3.5+ will <em>probably</em> not work there either.</p>
<p>Note that 64-bit builds do not run on 32-bit operating systems, regardless of whether the hardware supports 64-bit programs.</p>
<p>To be on the safe side, just install <a href="https://www.python.org/downloads/release/python-344/" rel="nofollow">3.4</a> for Windows x86 or <a href="https://www.python.org/downloads/release/python-2712/" rel="nofollow">2.7</a> for Windows x86.</p>
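<p>A quick way to check which interpreter and bitness you are actually running (helpful when several installs coexist):</p>

```python
import sys
import platform

# Interpreter version as a (major, minor) tuple.
print(sys.version_info[:2])        # e.g. (2, 7) or (3, 4)
# Bitness of the running Python build.
print(platform.architecture()[0])  # '32bit' or '64bit'
```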
| 0 |
2016-09-16T16:17:27Z
|
[
"python",
"version",
"windows-server-2003"
] |
Calculate cumulative p_value hourly in pandas
| 39,535,777 |
<p>I am wondering if there is a way to calculate the cumulative p_value for each hour of data in a dataframe. For example if you have 24 hours of data there would be 24 measurements of p_value, but they would be cumulative for all hours before the current hour.</p>
<p>I have been able to get the p_value for each hour by grouping my data by hour and then applying an agg_func that I wrote to calculate all of the relevant statistics necessary to calculate p. However, this approach does not produce a cumulative result, only the p for each individual hour. </p>
<p>Given a df with columns id, ts (as unix timestamp), ab_group, result. I ran the following code to compute p_values on the hour. </p>
<pre><code>df['time'] = pd.to_datetime(df.ts, unit='s').values.astype('<m8[h]')
def calc_p(group):
df_old_len = len(group[group.ab_group == 0])
df_new_len = len(group[group.ab_group == 1])
ctr_old = float(len(group[(group.ab_group == 0) & (df.result == 1)]))/ df_old_len
ctr_new = float(len(group[(group.ab_group == 1) & (df.converted == 1)]))/ df_new_len
nobs_old = df_old_len
nobs_new = df_new_len
z_score, p_val, null = z_test.z_test(ctr_old, ctr_new, nobs_old, nobs_new, effect_size=0.001)
return p_val
grouped = df.groupby(by='time').agg(calc_p)
</code></pre>
<p>N.B. z_test is my own module containing an implementation of a z_test. </p>
<p>Any advice on how to modify this for a cumulative p is much appreciated.</p>
| 0 |
2016-09-16T16:02:21Z
| 39,537,492 |
<p>So I came up with a workaround on my own for this one.</p>
<p>What I came up with was modifying <code>calc_p()</code> such that it utilized global variables and thus could use updated values each time it was called by the aggfunc. Below is the edited code:</p>
<pre><code>def calc_p(group):
global df_old_len, df_new_len, clicks_old, clicks_new
clicks_old += len(group[(group.landing_page == 'old_page') & (group.converted == 1)])
clicks_new += len(group[(group.landing_page == 'new_page') & (group.converted == 1)])
df_old_len += len(group[group.landing_page == 'old_page'])
df_new_len += len(group[group.landing_page == 'new_page'])
ctr_old = float(clicks_old)/df_old_len
ctr_new = float(clicks_new)/df_new_len
z_score, p_val, null = z_test.z_test(ctr_old, ctr_new, df_old_len, df_new_len, effect_size=0.001)
return p_val
# Initialize global values to 0 for cumulative calc_p
df_old_len = 0
df_new_len = 0
clicks_old = 0
clicks_new = 0
grouped = df.groupby(by='time').agg(calc_p)
</code></pre>
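<p>As a side note, the running totals here are just cumulative sums, so the same bookkeeping could be sketched without globals using <code>itertools.accumulate</code> (hypothetical per-hour counts):</p>

```python
from itertools import accumulate

# Hypothetical per-hour click counts for one variant.
hourly_clicks = [5, 3, 7, 2]

# Cumulative totals after each hour, ready to feed into a z-test.
cumulative_clicks = list(accumulate(hourly_clicks))
print(cumulative_clicks)  # [5, 8, 15, 17]
```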
| 0 |
2016-09-16T17:54:48Z
|
[
"python",
"pandas",
"grouping",
"p-value"
] |
closing MYSQL JDBC connection in Spark
| 39,535,814 |
<p>I am loading data from a MySQL server into Spark through JDBC, but I need to close that connection after loading the data. What is the exact syntax for closing the connection?</p>
<pre><code>df_mysql = sqlContext.read.format("jdbc").options(
url="jdbc:mysql://***/****",
driver="com.mysql.jdbc.Driver",
dbtable="((SELECT jobid, system FROM Jobs LIMIT 500) as T)",
user="*****",
password="*****").load()
</code></pre>
<p>I have tried <code>dbtable.close()</code>. That doesn't work.</p>
| 0 |
2016-09-16T16:04:16Z
| 39,536,107 |
<p>There is really nothing to be closed here. A <code>DataFrame</code> object is not a JDBC connection, and <code>load</code> doesn't really load any data: it simply fetches the metadata required to build the <code>DataFrame</code>.</p>
<p>Actual data processing takes place only when you execute a job that contains tasks depending on this particular input, and it is handled by the corresponding executors. They are responsible for managing connections and fetching data, and this process is not exposed to the user.</p>
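<p>As a loose analogy (plain Python, not Spark code), generators show the same deferred-execution behavior: building the pipeline does no work until something consumes it:</p>

```python
def fetch_rows():
    # In Spark terms: the connection would be opened by the executor,
    # and only when a job actually runs.
    print("connection opened")
    for i in range(3):
        yield i
    print("connection closed")

rows = fetch_rows()    # nothing printed yet -- like .load()
result = list(rows)    # consumption triggers the work -- like an action
print(result)  # [0, 1, 2]
```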
| 2 |
2016-09-16T16:22:18Z
|
[
"python",
"mysql",
"jdbc",
"apache-spark",
"pyspark"
] |
Installing Scala kernel (or Spark/Toree) for Jupyter (Anaconda)
| 39,535,858 |
<p>I'm running RHEL 6.7, and have Anaconda set up. (anaconda 4.10). Jupyter is working OOTB, and it by default has the Python kernel. Everything is dandy so I can select "python notebook" in Jupyter.</p>
<p>I'm now looking to get Scala set up with Jupyter as well. (which it seems like Spark kernel - now Toree will work?)</p>
<p>Every question/answer I've seen in regards to it - is not referencing the issue I'm running into. </p>
<p>I was trying to install Toree, and did </p>
<pre><code>sudo pip install toree
</code></pre>
<p>and it worked. But then the next step is to </p>
<pre><code>jupyter toree install
</code></pre>
<p>And the <strong>error</strong> I get is:</p>
<pre><code>jupyter toree install
Traceback (most recent call last):
  File "/usr/app/anaconda/bin/jupyter-toree", line 7, in <module>
    from toree.toreeapp import main
ImportError: No module named toree.toreeapp
</code></pre>
<p>Am I missing a step? Anything I'm doing wrong? If I need to provide more information, I will be glad to. Thanks!</p>
<p>Edit: What is the standard/easiest/reliable way to get a Scala notebook in Jupyter? (tl;dr)</p>
| 2 |
2016-09-16T16:06:09Z
| 39,554,112 |
<p>First, make sure you set the SPARK_HOME variable in your shell environment to point to where spark is located, for example:</p>
<pre><code>export SPARK_HOME=$HOME/Downloads/spark-2.0.0-bin-hadoop2.7
</code></pre>
<p>next install toree with</p>
<pre><code>sudo jupyter toree install --spark_home=$SPARK_HOME
</code></pre>
| 0 |
2016-09-18T04:29:24Z
|
[
"python",
"scala",
"jupyter",
"jupyter-notebook",
"apache-toree"
] |
Installing Scala kernel (or Spark/Toree) for Jupyter (Anaconda)
| 39,535,858 |
<p>I'm running RHEL 6.7, and have Anaconda set up. (anaconda 4.10). Jupyter is working OOTB, and it by default has the Python kernel. Everything is dandy so I can select "python notebook" in Jupyter.</p>
<p>I'm now looking to get Scala set up with Jupyter as well. (which it seems like Spark kernel - now Toree will work?)</p>
<p>Every question/answer I've seen in regards to it - is not referencing the issue I'm running into. </p>
<p>I was trying to install Toree, and did </p>
<pre><code>sudo pip install toree
</code></pre>
<p>and it worked. But then the next step is to </p>
<pre><code>jupyter toree install
</code></pre>
<p>And the <strong>error</strong> I get is:</p>
<pre><code>jupyter toree install
Traceback (most recent call last):
  File "/usr/app/anaconda/bin/jupyter-toree", line 7, in <module>
    from toree.toreeapp import main
ImportError: No module named toree.toreeapp
</code></pre>
<p>Am I missing a step? Anything I'm doing wrong? If I need to provide more information, I will be glad to. Thanks!</p>
<p>Edit: What is the standard/easiest/reliable way to get a Scala notebook in Jupyter? (tl;dr)</p>
| 2 |
2016-09-16T16:06:09Z
| 39,628,777 |
<p>If you are trying to get Spark 2.0 with Scala 2.11 you may get strange messages.
You need to update to the latest Toree, 0.2.0.
For Ubuntu 16.04 64-bit, I have a package & tgz file at
<a href="https://anaconda.org/hyoon/toree" rel="nofollow">https://anaconda.org/hyoon/toree</a></p>
<p>That's for Python 2.7 & you will need conda. If you don't know how, then just download the tgz, then:</p>
<pre><code>tar zxvf toree-0.2.0.dev1.tar.gz
pip install -e toree-0.2.0.dev1
</code></pre>
<p>And I prefer to:</p>
<pre><code>jupyter toree install --interpreters=Scala --spark_home=/opt/spark --user --kernel_name=apache_toree --interpreters=PySpark,SparkR,Scala,SQL
</code></pre>
<p>which will create kernels in <code>~/.local/share/jupyter/kernels</code> (<code>--user</code> is the key).</p>
<p>Happy sparking!</p>
| 0 |
2016-09-22T01:15:05Z
|
[
"python",
"scala",
"jupyter",
"jupyter-notebook",
"apache-toree"
] |
Python break from if statement to else
| 39,535,877 |
<p>(I'm a Python newbie, so apologies for this basic question I for some reason couldn't find an answer to.)</p>
<p>I have a nested if statement within the if statement of an if/else block. In the <em>nested</em> if statement, if it meets the criteria, I'd like the code to break to the else statement. When I put a break in the nested if, though, I'm not sure it's breaking to the else statement.</p>
<p>I'd like to find the longest substring in alphabetical order of a given string, s. Here's my code:</p>
<pre><code>s = 'lugabcdeczsswabcdefghij'
longest = 1
alpha_count = 1
longest_temp = 1
longest_end = 1
for i in range(len(s)-1):
    if (s[i] <= s[i+1]):
        alpha_count += 1
        if (i+1 == (len(s)-1)):
            break
    else:
        longest_check = alpha_count
        if longest_check > longest:
            longest = longest_check
            longest_end = i+1
        alpha_count = 1
print(longest)
print('Longest substring in alphabetical order is: ' +
      s[(longest_end-longest):longest_end])
</code></pre>
<p>(Yes, I realize there's surely lots of unnecessary code here. Still learning!)</p>
<p>At this nested if:</p>
<pre><code>if (i+1 == (len(s)-1)):
    break
</code></pre>
<p>...if True, I'd like the code to break to the 'else' statement. It doesn't seem to break to that section, though. Any help?</p>
| 0 |
2016-09-16T16:07:25Z
| 39,536,021 |
<p><code>break</code> is used when you want to break out of loops, not if statements. You can have another if statement that executes this logic for you, like this:</p>
<pre><code>if (s[i] <= s[i+1]):
    alpha_count += 1
elif (i+1 == (len(s)-1)) or (s[i] > s[i+1]):  # second test is the boolean for the original else part
    longest_check = alpha_count
    if longest_check > longest:
        longest = longest_check
        longest_end = i+1
    alpha_count = 1
</code></pre>
<p>This snippet evaluates two booleans in the <code>elif</code>, both standing in for the else part: the branch executes either in the original else case (<code>s[i] > s[i+1]</code>) or in the case of <code>(i+1 == (len(s)-1))</code>.</p>
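<p>For completeness, the whole scan can be collapsed into one small function that tracks the current run directly, which sidesteps the end-of-string special case altogether (a sketch, not code from the original question):</p>

```python
def longest_alpha_substring(s):
    """Return the longest substring whose characters are in non-decreasing order."""
    if not s:
        return ""
    best = current = s[0]
    for prev, ch in zip(s, s[1:]):
        # extend the run while characters stay in alphabetical order, else restart it
        current = current + ch if prev <= ch else ch
        if len(current) > len(best):
            best = current
    return best

print(longest_alpha_substring('lugabcdeczsswabcdefghij'))  # -> abcdefghij
```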
| 1 |
2016-09-16T16:16:19Z
|
[
"python",
"if-statement",
"break"
] |
Find a more efficient way to code an If statement in python
| 39,535,878 |
<p>it's the second time I'm confronted with this kind of code: </p>
<pre><code> if "associé" in gender or "gérant" in gender or "président" in gender or "directeur" in gender:
gen = "male"
elif "associée" in gender or "gérante" in gender or "présidente" in gender or "directrice" in gender:
gen = "female"
else:
gen = "error"
</code></pre>
<p>I'd like to find a more efficient way to write this code because it looks really bad. </p>
| 1 |
2016-09-16T16:07:25Z
| 39,535,982 |
<p>If you cannot have multiple of these strings in one <code>gender</code> string, maybe something like this?</p>
<pre><code>gen = "error"
for g in ["associé", "gérant", "président", "directeur"]:
if g in gender:
gen = "male"
break
for g in ["associée" "gérante", "présidente", "directrice"]:
if g in gender:
gen = "female"
break
</code></pre>
| 0 |
2016-09-16T16:14:05Z
|
[
"python",
"if-statement"
] |
Find a more efficient way to code an If statement in python
| 39,535,878 |
<p>it's the second time I'm confronted with this kind of code: </p>
<pre><code> if "associé" in gender or "gérant" in gender or "président" in gender or "directeur" in gender:
gen = "male"
elif "associée" in gender or "gérante" in gender or "présidente" in gender or "directrice" in gender:
gen = "female"
else:
gen = "error"
</code></pre>
<p>I'd like to find a more efficient way to write this code because it looks really bad. </p>
| 1 |
2016-09-16T16:07:25Z
| 39,535,985 |
<p>I personally like doing this with <a href="https://docs.python.org/2/library/sets.html#set-objects" rel="nofollow"><code>sets</code></a>. For example:</p>
<pre><code>opts = ["associé", "gérant", "président", "directeur"]
if set(opts) & set(gender):
    ...
</code></pre>
<p><code>&</code> is used for the set intersection operation which returns a new <code>set</code> with the items shared by the sets on either side of the <code>&</code>. This will execute the <code>if</code> block only if there is overlap in <code>gender</code> and <code>opts</code>. You can repeat the process for your <code>elif</code> as well, creating a list of the possible options and checking for overlap between that list and <code>gender</code>. All together, you could do something like this:</p>
<pre><code>male_opts = ["associé", "gérant", "président", "directeur"]
female_opts = ["associée", "gérante", "présidente", "directrice"]
if set(male_opts) & set(gender):
    gen = "male"
elif set(female_opts) & set(gender):
    gen = "female"
else:
    gen = "error"
</code></pre>
<p>Also, as <strong>@Copperfield</strong> points out, you could increase efficiency even more by making the <code>*_opts</code> variables (and potentially even <code>gender</code>) sets to begin with:</p>
<pre><code>male_opts = {"associé", "gérant", "président", "directeur"}
female_opts = {"associée", "gérante", "présidente", "directrice"}
gender = set(gender)
if male_opts & gender:
    ...
</code></pre>
<h1>Edit:</h1>
<p>The code above assumes that <code>gender</code> is an iterable, but it seems from the comments that it is a string instead (e.g., <code>'associé gérant'</code>). Although the accepted answer is better at this point, you could still use this solution by making <code>gender</code> a set of the words that make up the string:</p>
<pre><code>gender = set(gender.split())
</code></pre>
| 3 |
2016-09-16T16:14:13Z
|
[
"python",
"if-statement"
] |
Find a more efficient way to code an If statement in python
| 39,535,878 |
<p>it's the second time I'm confronted with this kind of code: </p>
<pre><code> if "associé" in gender or "gérant" in gender or "président" in gender or "directeur" in gender:
gen = "male"
elif "associée" in gender or "gérante" in gender or "présidente" in gender or "directrice" in gender:
gen = "female"
else:
gen = "error"
</code></pre>
<p>I'd like to find a more efficient way to write this code because it looks really bad. </p>
| 1 |
2016-09-16T16:07:25Z
| 39,536,009 |
<p>Using lists and <code>any</code>:</p>
<pre><code>males = ["associé", "gérant", "président", "directeur"]
females = ["associée", "gérante", "présidente", "directrice"]
if any(m in gender for m in males):
    gen = "male"
elif any(m in gender for m in females):
    gen = "female"
else:
    gen = "Error"
</code></pre>
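<p>If this kind of dispatch comes up repeatedly, a table-driven variant keeps the word lists in one place (a sketch, not part of the original answer; note it matches whole words via <code>split()</code>, which also keeps <code>"associé"</code> from matching inside <code>"associée"</code>):</p>

```python
GENDER_WORDS = {
    "male":   {"associé", "gérant", "président", "directeur"},
    "female": {"associée", "gérante", "présidente", "directrice"},
}

def classify(text):
    """Map a job-title string to a gender label by exact word membership."""
    words = set(text.split())
    for gen, keys in GENDER_WORDS.items():
        if words & keys:
            return gen
    return "error"
```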
| 4 |
2016-09-16T16:15:42Z
|
[
"python",
"if-statement"
] |
Printing Column Names and Values in Dataframe
| 39,535,939 |
<p>The end goal of this question is to plot X and Y for a graph using a dataframe. </p>
<p>I have a dataframe like so:</p>
<pre><code> Open High Low Close Volume stock symbol
Date
2000-10-19 1.37 1.42 1.24 1.35 373590000 AAPL
2000-10-20 1.36 1.46 1.35 1.39 195836200 AAPL
2000-10-23 1.39 1.49 1.39 1.46 129851400 AAPL
2000-10-24 1.48 1.49 1.34 1.35 192711400 AAPL
2000-10-25 1.36 1.37 1.30 1.32 163448600 AAPL
2000-10-26 1.34 1.42 1.25 1.32 178110800 AAPL
2000-10-27 1.35 1.37 1.28 1.33 181242600 AAPL
2000-10-30 1.37 1.42 1.34 1.38 152558000 AAPL
</code></pre>
<p>And I am trying to plot <code>Date</code> vs. <code>Open</code>. I know there is a way to simply plot, but I will be applying this concept to larger dataframes and would like to know how to do it "long-hand".</p>
<p><strong>What I've tried:</strong></p>
<p><code>print(some_DF['Open'])</code></p>
<p>Result: </p>
<pre><code> Date
2000-10-19 1.37
2000-10-20 1.36
2000-10-23 1.39
2000-10-24 1.48
2000-10-25 1.36
2000-10-26 1.34
</code></pre>
<p><strong>Problem:</strong> </p>
<p>Date seems to be my index, but the column header 'Open' does not appear.</p>
<p><strong>Question:</strong></p>
<p>How do I print the above Dataframe while having <code>'Open'</code> as my header, then make some value <code>x</code> = <code>Date</code>'s column and some value <code>y</code> = <code>'Open'</code>'s values?</p>
<p><strong>"Expected Code to work":</strong></p>
<p>I'm thinking something like</p>
<pre><code>print([some_DF['Open'] headers = 'date','open')
x = some_DF['Date'] #So that this becomes first column of dataframe
y = some_DF['Open'] #So that this becomes second column of dataframe
</code></pre>
| 0 |
2016-09-16T16:11:03Z
| 39,536,154 |
<p>You can <code>reset_index</code> on the data-frame and then print the subset dataframe consisting of the two columns</p>
<pre><code>>>> df
a b
Date
2000-10-19 1 3
2000-10-20 2 4
2000-10-21 3 5
2000-10-22 4 6
2000-10-23 5 7
>>> print(df.reset_index()[['Date', 'a']])
Date a
0 2000-10-19 1
1 2000-10-20 2
2 2000-10-21 3
3 2000-10-22 4
4 2000-10-23 5
</code></pre>
<p>Like IanS mentioned, you shouldn't worry about how the output looks in pandas. Date was an index and Open a column. The difference in the print statement illustrates that distinction.</p>
<p>Edit:</p>
<p>The <code>df[[list_of_column_names]]</code> is the same as <code>df.loc[:, [list_of_column_names]]</code>. It gives a list of columns to subset the original dataframe.</p>
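<p>To make the "x from the index, y from a column" step concrete, here is a sketch on an invented three-row frame shaped like the question's data (the values are illustrative only):</p>

```python
import pandas as pd

# toy frame mirroring the question's shape: Date as the index, Open as a column
df = pd.DataFrame(
    {"Open": [1.37, 1.36, 1.39]},
    index=pd.to_datetime(["2000-10-19", "2000-10-20", "2000-10-23"]),
)
df.index.name = "Date"

x = df.index.to_numpy()    # the Date index supplies the x values
y = df["Open"].to_numpy()  # the Open column supplies the y values
```

<p>These two arrays can then be handed to a plotting call such as <code>plt.plot(x, y)</code>.</p>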
| 3 |
2016-09-16T16:24:57Z
|
[
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Migration clashes with forms.py
| 39,535,983 |
<p>The command <code>python manage.py makemigrations</code> fails most of the time due to the <code>forms.py</code>, in which new models or new fields are referenced at class-definition level.</p>
<p>So I have to comment out each such definition for the migration to work. It's a painful task.</p>
<p>I don't understand why the migration process imports the <code>forms.py</code> module. I think that importing the models modules should be sufficient.</p>
<p>Is there a way to avoid those errors ? </p>
| 0 |
2016-09-16T16:14:09Z
| 39,547,578 |
<p>Thanks to @alasdair I understood my problem and found a workaround: I replace the original code in the <code>views.py</code> file</p>
<pre><code>from MyApp import forms
</code></pre>
<p>with</p>
<pre><code>import sys
if 'makemigrations' not in sys.argv and 'migrate' not in sys.argv:
    from MyApp import forms
</code></pre>
<p>It works fine in my case, but I suppose there is a better way to know if the current process is a migration or not. If so, please advise.</p>
| 0 |
2016-09-17T13:50:32Z
|
[
"python",
"django",
"django-forms",
"django-migrations"
] |
How to convert a datetime.time type to float in python?
| 39,536,000 |
<p>I am performing a data analysis with Python. I need to convert a data type from datetime.time to float, something like what Excel does when we change the cell format from "time" to "number".</p>
<p><a href="http://i.stack.imgur.com/fwvHt.png" rel="nofollow"><img src="http://i.stack.imgur.com/fwvHt.png" alt="enter image description here"></a></p>
<p>I could load the data on excel, change the columns format, export them again to CSV and finally load again into a dataframe, but I don't want to perform all that process again.</p>
<p>Thanks for your help!</p>
| 0 |
2016-09-16T16:14:59Z
| 39,536,110 |
<p>The decimal number used by Excel is simply the fraction of a day that a time represents, with midnight being 0.0. You simply take the hours, minutes, and seconds in the time and divide by the fraction of a day they represent:</p>
<pre><code>def excel_time(time):
    return (time.hour / 24.0
            + time.minute / (24.0 * 60.0)
            + time.second / (24.0 * 60.0 * 60.0)
            + time.microsecond / (24.0 * 60.0 * 60.0 * 1000000.0))
</code></pre>
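<p>For instance, restating the function so the snippet runs on its own (the spot checks mirror Excel's convention, where noon is 0.5):</p>

```python
import datetime

def excel_time(t):
    """Fraction of a day elapsed, matching Excel's time-as-number convention."""
    return (t.hour / 24.0
            + t.minute / (24.0 * 60.0)
            + t.second / (24.0 * 60.0 * 60.0)
            + t.microsecond / (24.0 * 60.0 * 60.0 * 1000000.0))

print(excel_time(datetime.time(12, 0)))  # -> 0.5
print(excel_time(datetime.time(18, 0)))  # -> 0.75
```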
| 2 |
2016-09-16T16:22:35Z
|
[
"python",
"python-3.x",
"time"
] |
In python what is the correct nomenclature for accessing a class attribute using dot notation?
| 39,536,013 |
<p>This is a semantics question. I'm writing a tutorial for a module that I hacked together for my lab, and I would like to know if there is a correct term for accessing a class attribute by using dot notation. For example:</p>
<pre><code>class funzo:
    def __init__(self, string):
        self.attribute = string

fun = funzo("Sweet sweet data")
fun.attribute
</code></pre>
<blockquote>
<p>Now one can access the string by ??? the object named fun.</p>
</blockquote>
<p>I've been using the term 'dot referencing' but that seems wrong. I googled around and checked the python glossary but I can't seem to find anything.</p>
| 2 |
2016-09-16T16:16:02Z
| 39,536,348 |
<p>Attribute Access, this is from the language reference section on <a href="https://docs.python.org/3/reference/datamodel.html#customizing-attribute-access" rel="nofollow">customizing it</a>:</p>
<blockquote>
<p>The following methods can be defined to customize the meaning of <em>attribute access</em> (use of, assignment to, or deletion of <code>x.name</code>) for class instances.</p>
</blockquote>
<p><sup>(emphasis mine)</sup></p>
<p>So your quote could be better worded as:</p>
<blockquote>
<p>Now, one can access the attribute <code>attribute</code> by using dotted expressions on the instance named <code>fun</code>.</p>
</blockquote>
<p>Where the term <em>dotted expressions</em> is found from the entry on 'attribute' in the <a href="https://docs.python.org/3/glossary.html" rel="nofollow">Python Glossary</a>.</p>
<p>Another choice employed in the docs as an alternative to <em>dotted expressions</em> is, as in your title, <em>dot-notation</em>. This is found in the <em>Callables</em> section of the <a href="https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy" rel="nofollow">Standard Type Hierarchy</a>, so another option you could consider is:</p>
<blockquote>
<p>Now, one can access the attribute <code>attribute</code> using dot-notation on the instance named <code>fun</code>.</p>
</blockquote>
<p>Both are, in my opinion, completely understandable if someone has at least some experience in programming languages. I prefer the term <em>dot-notation</em> and as such, would use the later.</p>
| 6 |
2016-09-16T16:36:08Z
|
[
"python",
"python-3.x",
"oop"
] |
Python code working differently after a print statement
| 39,536,076 |
<p>I'm trying to use regex to search the keys in a dict and return the matches. The following code is simplified from the real code but shows the problem.</p>
<pre><code>#!/bin/python
# Import standard Python modules
import os, sys, string, pdb, re
key=""
pat=""
steps = {"pcb":"xxx","aoi":"xxx","pcb-pec":"xxx","pcb_1":"xxx"}
pat = "pcb"
print"***Search the dict***"
for key in steps:
    print re.search(key,pat)
print"***Search the key***"
key = "pcb-pec"
pat = "pcb"
print re.search(key,pat)
print"***Search the key after printing it***"
key = "pcb-pec"
pat = "pcb"
print 'key:' + key+ ' ,pattern:' + pat
print re.search(pat,key)
exit()
</code></pre>
<p>And the output is this:</p>
<pre><code>***Search the dict***
<_sre.SRE_Match object at 0x00000000031FBC60>
None
None
None
***Search the key***
None
***Search the key after printing it***
key:pcb-pec ,pattern:pcb
<_sre.SRE_Match object at 0x00000000031FBC60>
</code></pre>
<p>I don't understand why the pattern isn't found on the 3rd and 4th keys.</p>
<p>I don't understand why the pattern isn't found in the second test either.</p>
<p>I REALLY don't understand why it is found in the third test which is the same as the second, but with a print statement.</p>
<p>This is my first post, but I've learned a lot by searching and reading here. Many thanks to you all.</p>
| -1 |
2016-09-16T16:20:17Z
| 39,536,118 |
<p>The signature of <code>re.search</code> (given as <a href="https://docs.python.org/2/library/re.html#re.search" rel="nofollow"><code>re.search(pattern, string, flags=0)</code></a>) takes the pattern first, then the string. </p>
<p>You should swap the order of the parameters:</p>
<pre><code>re.search(pat, key)
# ^^^^^^^^
</code></pre>
<p>And then the other keys will work:</p>
<pre><code>In [27]: pat = "pcb"
In [28]: key = "pcb-pec"
In [29]: re.search(key,pat) # wrong order
In [30]: re.search(pat,key) # right order
Out[30]: <_sre.SRE_Match object; span=(0, 3), match='pcb'>
</code></pre>
| 2 |
2016-09-16T16:22:55Z
|
[
"python",
"regex",
"python-2.7"
] |
Python code working differently after a print statement
| 39,536,076 |
<p>I'm trying to use regex to search the keys in a dict and return the matches. The following code is simplified from the real code but shows the problem.</p>
<pre><code>#!/bin/python
# Import standard Python modules
import os, sys, string, pdb, re
key=""
pat=""
steps = {"pcb":"xxx","aoi":"xxx","pcb-pec":"xxx","pcb_1":"xxx"}
pat = "pcb"
print"***Search the dict***"
for key in steps:
    print re.search(key,pat)
print"***Search the key***"
key = "pcb-pec"
pat = "pcb"
print re.search(key,pat)
print"***Search the key after printing it***"
key = "pcb-pec"
pat = "pcb"
print 'key:' + key+ ' ,pattern:' + pat
print re.search(pat,key)
exit()
</code></pre>
<p>And the output is this:</p>
<pre><code>***Search the dict***
<_sre.SRE_Match object at 0x00000000031FBC60>
None
None
None
***Search the key***
None
***Search the key after printing it***
key:pcb-pec ,pattern:pcb
<_sre.SRE_Match object at 0x00000000031FBC60>
</code></pre>
<p>I don't understand why the pattern isn't found on the 3rd and 4th keys.</p>
<p>I don't understand why the pattern isn't found in the second test either.</p>
<p>I REALLY don't understand why it is found in the third test which is the same as the second, but with a print statement.</p>
<p>This is my first post, but I've learned a lot by searching and reading here. Many thanks to you all.</p>
| -1 |
2016-09-16T16:20:17Z
| 39,536,119 |
<p>You change the order of parameters in your last case. You have them out of order the first couple of times, and in the correct order the last time</p>
<pre><code>re.search(pat,key)
</code></pre>
<p>is the correct order.</p>
<p>In the loop, you're getting a match the one time the pattern and the string happen to be the same.</p>
| 2 |
2016-09-16T16:23:01Z
|
[
"python",
"regex",
"python-2.7"
] |
dictionary value operations shortcut
| 39,536,191 |
<p>I am wondering why arithmetic operations on dictionary values cannot be shortened with <code>=+</code> or <code>=-</code> as normal python variables can:</p>
<pre><code>for item in myDict:
    myDict[item] =+ 1
</code></pre>
<p>doesn't seem to work, but instead I'm told to use:</p>
<pre><code>for item in myDict:
    myDict[item] = myDict[item] + 1
</code></pre>
<p>It doesn't seem very Pythonic to me, but perhaps there is a great explanation for this convention.</p>
| -1 |
2016-09-16T14:44:42Z
| 39,536,245 |
<p>The order of the operators is <code>+=</code> and <code>-=</code>, not the other way around:</p>
<pre><code>In [31]: my_dict = {'key1': 1, 'key2': 2}
In [32]: for item in my_dict:
....: my_dict[item] += 1
....:
In [33]: my_dict
Out[33]: {'key1': 2, 'key2': 3} # values have been incremented by one
</code></pre>
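<p>The reason <code>=+</code> fails silently rather than raising an error is that it parses as a plain assignment of a unary plus, as this sketch shows:</p>

```python
x = 5
x =+ 1    # parsed as x = (+1): unary plus, not an increment
print(x)  # -> 1, the old value is simply overwritten

y = 5
y += 1    # augmented assignment: the actual increment operator
print(y)  # -> 6
```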
| 2 |
2016-09-16T16:30:23Z
|
[
"python",
"shortcuts",
"dictionary"
] |
CSV joining based on keys
| 39,536,224 |
<p>This may be a simple/repeat question, but I couldn't find/figure out yet how to do it. </p>
<p>I have two csv files:</p>
<p><strong>info.csv:</strong></p>
<pre><code>"Last Name", First Name, ID, phone, adress, age X [Total age: 100] |009076
abc, xyz, 1234, 982-128-0000, pqt,
bcd, uvw, 3124, 813-222-1111, tre,
poi, ccc, 9087, 123-45607890, weq,
</code></pre>
<p>and then </p>
<p><strong>age.csv:</strong></p>
<pre><code>student_id,age_1
3124,20
9087,21
1234,45
</code></pre>
<p>I want to compare the two csv files, based on the columns "<code>id</code>" from <strong>info.csv</strong> and "<code>student_id</code>" from <strong>age.csv</strong> and take the corresponding "<code>age_1</code>" data and put it into the "<code>age</code>" column in <strong>info.csv</strong>.</p>
<p>So the final output should be:</p>
<p><strong>info.csv:</strong></p>
<pre><code>"Last Name", First Name, ID, phone, adress, age X [Total age: 100] |009076
abc, xyz, 1234, 982-128-0000, pqt,45
bcd, uvw, 3124, 813-222-1111, tre,20
poi, ccc, 9087, 123-45607890, weq,21
</code></pre>
<p>I am able to simply join the tables based on the keys into a <strong>new.csv</strong>, but can't put the data in the column titled "<code>age</code>". I used "<code>csvkit</code>" to do that. </p>
<p>Here is what I used:</p>
<pre><code>csvjoin -c 3,1 info.csv age.csv > new.csv
</code></pre>
| 1 |
2016-09-16T16:29:08Z
| 39,536,954 |
<p>Try this...</p>
<pre><code>import csv
info = list(csv.reader(open("info.csv", 'rb')))
age = list(csv.reader(open("age.csv", 'rb')))
def copyCSV(age, info, outFileName = 'out.csv'):
    # put age into dict, indexed by ID
    # assumes no duplicate entries
    # 1 - build a dict ageDict to represent data
    ageDict = dict([(entry[0].replace(' ',''), entry[1]) for entry in age[1:] if entry != []])
    # 2 - setup output
    with open(outFileName, 'wb') as outFile:
        outwriter = csv.writer(outFile)
        # 3 - run through info and slot in ages and write to output
        # nb: had to use .replace(' ','') to strip out whitespaces - these may not be in original .csv
        outwriter.writerow(info[0])
        for entry in info[1:]:
            if entry != []:
                key = entry[2].replace(' ','')
                if key in ageDict:  # checks that you have data from age.csv
                    entry[5] = ageDict[key]
                outwriter.writerow(entry)

copyCSV(age, info)
</code></pre>
<p>Let me know if it works or if anything is unclear. I've used a dict because it should be faster if your files are massive, as you only have to loop through the data in age.csv once.</p>
<p>There may be a simpler way / something already implemented...but this should do the trick.</p>
| 1 |
2016-09-16T17:17:27Z
|
[
"python",
"csv",
"inner-join",
"csvkit"
] |
CSV joining based on keys
| 39,536,224 |
<p>This may be a simple/repeat question, but I couldn't find/figure out yet how to do it. </p>
<p>I have two csv files:</p>
<p><strong>info.csv:</strong></p>
<pre><code>"Last Name", First Name, ID, phone, adress, age X [Total age: 100] |009076
abc, xyz, 1234, 982-128-0000, pqt,
bcd, uvw, 3124, 813-222-1111, tre,
poi, ccc, 9087, 123-45607890, weq,
</code></pre>
<p>and then </p>
<p><strong>age.csv:</strong></p>
<pre><code>student_id,age_1
3124,20
9087,21
1234,45
</code></pre>
<p>I want to compare the two csv files, based on the columns "<code>id</code>" from <strong>info.csv</strong> and "<code>student_id</code>" from <strong>age.csv</strong> and take the corresponding "<code>age_1</code>" data and put it into the "<code>age</code>" column in <strong>info.csv</strong>.</p>
<p>So the final output should be:</p>
<p><strong>info.csv:</strong></p>
<pre><code>"Last Name", First Name, ID, phone, adress, age X [Total age: 100] |009076
abc, xyz, 1234, 982-128-0000, pqt,45
bcd, uvw, 3124, 813-222-1111, tre,20
poi, ccc, 9087, 123-45607890, weq,21
</code></pre>
<p>I am able to simply join the tables based on the keys into a <strong>new.csv</strong>, but can't put the data in the column titled "<code>age</code>". I used "<code>csvkit</code>" to do that. </p>
<p>Here is what I used:</p>
<pre><code>csvjoin -c 3,1 info.csv age.csv > new.csv
</code></pre>
| 1 |
2016-09-16T16:29:08Z
| 39,537,149 |
<p>You can use <code>Pandas</code> and update the <code>info dataframe</code> using the <code>age</code> data. You do it by setting the index of both data frames to <code>ID</code> and <code>student_id</code> respectively, then update the age column in the <code>info dataframe</code>. After that you reset the index so <code>ID</code> becomes a column again.</p>
<pre><code>from StringIO import StringIO
import pandas as pd
info = StringIO("""Last Name,First Name,ID,phone,adress,age X [Total age: 100] |009076
abc, xyz, 1234, 982-128-0000, pqt,
bcd, uvw, 3124, 813-222-1111, tre,
poi, ccc, 9087, 123-45607890, weq,""")
age = StringIO("""student_id,age_1
3124,20
9087,21
1234,45""")
info_df = pd.read_csv(info, sep=",", engine='python')
age_df = pd.read_csv(age, sep=",", engine='python')
info_df = info_df.set_index('ID')
age_df = age_df.set_index('student_id')
info_df['age X [Total age: 100] |009076'].update(age_df.age_1)
info_df.reset_index(level=0, inplace=True)
info_df
</code></pre>
<p>outputs:</p>
<pre><code> ID Last Name First Name phone adress age X [Total age: 100] |009076
0 1234 abc xyz 982-128-0000 pqt 45
1 3124 bcd uvw 813-222-1111 tre 20
2 9087 poi ccc 123-45607890 weq 21
</code></pre>
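<p>The same fill can also be written more compactly with <code>Series.map</code>, looking each <code>ID</code> up in an <code>age_1</code> Series indexed by <code>student_id</code> (a sketch using the question's values, not part of the original answer):</p>

```python
import pandas as pd

info = pd.DataFrame({
    "ID":  [1234, 3124, 9087],
    "age": [None, None, None],
})
age = pd.DataFrame({
    "student_id": [3124, 9087, 1234],
    "age_1":      [20, 21, 45],
})

# map each ID through a student_id -> age_1 lookup
info["age"] = info["ID"].map(age.set_index("student_id")["age_1"])
print(info["age"].tolist())  # -> [45, 20, 21]
```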
| 3 |
2016-09-16T17:29:31Z
|
[
"python",
"csv",
"inner-join",
"csvkit"
] |
Issue with null character received by qpython/pandas from kdb
| 39,536,229 |
<p>This question is pretty much directed at @Maciej Lach but if anyone else has experienced this issue please let me know.</p>
<p>The issue is simple - qpython crashes (when pandas is set to true) whenever kdb sends it a single-row table where one of the columns has a blank char. </p>
<p>I'm using: python version 2.7.11, qpython version qPython-1.2.0, pandas-0.18.1</p>
<p>To reproduce:</p>
<pre><code>from qpython import qconnection
q = qconnection.QConnection(pandas = True, host = 'myhost', port = myport)
print 'initiating connection(s)'
q.open()
while True:
    msg = q.receive(data_only = True, raw = False)
    print 'message received'
</code></pre>
<p>Now, on the kdb side:</p>
<pre><code>/send a table (which just so happens to have blank/null chars)
neg[4](`upd;`trade;([] col1:`a`b`c;col2:" a"))
/no problem
/send last row of that table
neg[4](`upd;`trade;-1#([] col1:`a`b`c;col2:" a"))
/no problem
/send two rows (2 blank chars)
neg[4](`upd;`trade;2#([] col1:`a`b`c;col2:" a"))
/no problem
/send first row of that table (one blank char)
neg[4](`upd;`trade;1#([] col1:`a`b`c;col2:" a"))
/crashes with error "AttributeError: 'float' object has no attribute 'meta'"
</code></pre>
<p>It seems to only have a problem when I send a single row table which has a null char.
It's fine with more than one null char. It's fine with a single row table with non-null char.
And everything is fine without the Pandas=True option (but I need pandas=True for my use case).</p>
<p>Any ideas?</p>
| 1 |
2016-09-16T16:29:16Z
| 39,608,894 |
<p>This is a bug in the <code>qPython</code> library in version < 1.2.1.</p>
<p>I've contributed a <a href="https://github.com/exxeleron/qPython/pull/41" rel="nofollow">pull request</a> with the fix to the maintainer.</p>
| 2 |
2016-09-21T06:24:08Z
|
[
"python",
"pandas",
"kdb",
"q-lang",
"exxeleron-q"
] |
Django models in Many2Many relationship: filter by a group of member
| 39,536,262 |
<p>I have these models:</p>
<pre><code>class Skill(models.Model):
    name = models.CharField(max_length=20)

class Engineer(models.Model):
    name = models.CharField(max_length=20)
    skill = models.ManyToManyField(Skill)
    city = models.ForeignKey(City)

class City(models.Model):
    city = models.CharField(max_length=20)
</code></pre>
<p>I have 2 questions, please spare your time to help me. Thank you :)</p>
<p>1) I would like to filter Engineer by a group of skills.
Let's say I need to filter engineers, who have these skills ['HTML', 'python', 'CSS']. How can I do that? </p>
<p>2) I would like to filter Engineer by a group of skills <strong>AND</strong> in a specific area.
Let's say I need to filter engineers, who have these skills ['HTML', 'python', 'CSS'] <strong>AND</strong> this engineer must live in Anaheim. How can I do that? </p>
| 1 |
2016-09-16T16:31:13Z
| 39,536,485 |
<p>You should read the <em><a href="https://docs.djangoproject.com/en/1.10/topics/db/queries/#lookups-that-span-relationships" rel="nofollow">queries that span relationships</a></em> part of the docs. Basically, querying is done with the same kind of syntax as <em>ForeignKey</em> lookups. (In case you don't use <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.ManyToManyField.through" rel="nofollow">m2m through</a> though) </p>
<p>Also you don't have any relation between an <em>Engineer</em> and a <em>City</em>, If you want to be able to answer to queries like your second question, you need to add a <em>ForeignKey</em>, from <em>Engineer</em> to <em>City</em>.</p>
<p>1.</p>
<pre><code>skills = Skill.objects.filter(name__in=['HTML', 'Python', 'Css'])
engineers = Engineer.objects.filter(skill__in=skills)
</code></pre>
<p>2.</p>
<pre><code>city = City.objects.get(name='Anaheim')
engineers = Engineer.objects.filter(skill__in=skills, city=city) # Considering you put a ForeignKey from Engineer to City
</code></pre>
| 1 |
2016-09-16T16:46:29Z
|
[
"python",
"django",
"django-models"
] |
Variable dimensionality of a meshgrid with numpy
| 39,536,288 |
<p>I try to create a meshgrid with n dimensions.
Is there a nicer way to call meshgrid with n column vectors than with the if clause I am using?</p>
<p>Edit: The goal is to use it for user-defined n (2-100) without writing 100 if clauses.</p>
<p>The second line in the if clauses reduces the grid so column(n) < column(n+1)</p>
<p>Example:</p>
<pre><code>import numpy as np

dimension = 2
r = np.arange(0.2, 2.4, 0.1)  # named r to avoid shadowing the built-in range()
if dimension == 2:
    grid = np.array(np.meshgrid(r, r)).T.reshape(-1, dimension)
    grid = np.array(grid[[i for i in range(grid.shape[0]) if grid[i,0] < grid[i,1]]])
elif dimension == 3:
    grid = np.array(np.meshgrid(r, r, r)).T.reshape(-1, dimension)
    grid = np.array(grid[[i for i in range(grid.shape[0]) if grid[i,0] < grid[i,1]]])
    grid = np.array(grid[[i for i in range(grid.shape[0]) if grid[i,1] < grid[i,2]]])
</code></pre>
<p>Edit: The solution was posted below:</p>
<pre><code>n = 2
r = np.arange(0.2, 2.4, 0.1)
grid = np.array(np.meshgrid(*[r]*n)).T.reshape(-1, n)
for i in range(0, n-1):
    grid = np.array([g for g in grid if g[i] < g[i+1]])
</code></pre>
| 0 |
2016-09-16T16:32:31Z
| 39,536,518 |
<p>I haven't fully absorbed your approach and goals, but here's a partial simplification</p>
<pre><code>In [399]: r=np.arange(3) # simpler range for example
In [400]: grid=np.meshgrid(*[r]*2) # use `[r]*3` for 3d case
In [401]: grid=np.array(grid).T.reshape(-1,2)
In [402]: np.array([g for g in grid if g[0]<g[1]]) # simpler comprehensions
Out[402]:
array([[0, 1],
[0, 2],
[1, 2]])
</code></pre>
<p><code>itertools.product</code> makes that 2 column grid easier:</p>
<pre><code>In [403]: from itertools import product
In [404]: np.array([g for g in product(r,r) if g[0]<g[1]])
Out[404]:
array([[0, 1],
[0, 2],
[1, 2]])
</code></pre>
<p>That is, your <code>grid</code> before filtering is</p>
<pre><code>In [407]: grid
Out[407]:
array([[0, 0],
[0, 1],
[0, 2],
[1, 0],
[1, 1],
[1, 2],
[2, 0],
[2, 1],
[2, 2]])
</code></pre>
<p>and <code>product</code> is</p>
<pre><code>In [406]: list(product(r,r))
Out[406]: [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
</code></pre>
<p><code>product</code> has a <code>repeat</code> parameter that makes this even easier:</p>
<pre><code>In [411]: list(product(r,repeat=2))
Out[411]: [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
</code></pre>
<p>You still need the <code>if</code> clause to apply the 2-step filtering for dim=3. I guess that could be written iteratively:</p>
<pre><code>for i in range(0, dimension-1):
    grid = [g for g in grid if g[i] < g[i+1]]
</code></pre>
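<p>A further shortcut (an editorial addition, not from the original answer): because the filtering keeps exactly the strictly increasing rows, <code>itertools.combinations</code> generates those rows directly for any dimension, with no post-filtering pass:</p>

```python
from itertools import combinations

r = range(3)  # stand-in for np.arange(0.2, 2.4, 0.1)
n = 2
# combinations() emits only strictly increasing tuples, so the
# filtering passes above are not needed at all
grid = [list(c) for c in combinations(r, n)]
print(grid)  # [[0, 1], [0, 2], [1, 2]]
```

<p><code>np.array(list(combinations(r, n)))</code> then gives the same array as the filtered meshgrid.</p>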
| 1 |
2016-09-16T16:48:49Z
|
[
"python",
"numpy"
] |
How to read Twitter's Streaming API and reply an user based on a certain keyword?
| 39,536,364 |
<p>I am implementing a Twitter bot for fun purposes using <code>Tweepy</code>.</p>
<p>What I am trying to code is a bot that tracks a certain keyword and, based on it, replies to the user that tweeted the given string.</p>
<p>I considered storing the Twitter Stream on a <code>.json</code> file and looping over the Tweet objects for every user, but it seems impractical, as receiving the stream locks the program in a loop.</p>
<p>So, how can I track tweets with the Twitter Streaming API based on a certain keyword and reply to the users that tweeted it?</p>
<p>Current code:</p>
<pre><code>import json
import random

import tweepy
from tweepy import OAuthHandler
from tweepy import Stream
from tweepy.streaming import StreamListener

# 'auth' and 'exp' are defined elsewhere in the script

class MyListener(StreamListener):

    def on_data(self, data):
        try:
            with open("caguei.json", 'a+') as f:
                f.write(data)
                data = f.readline()
                tweet = json.loads(data)
                text = str("@%s acabou de. %s " % (tweet['user']['screen_name'], random.choice(exp)))
                tweepy.API.update_status(status=text, in_reply_to_status_id=tweet['user']['id'])
                #time.sleep(300)
            return True
        except BaseException as e:
            print("Error on_data: %s" % str(e))
            return True

    def on_error(self, status):
        print(status)
        return True

api = tweepy.API(auth)
twitter_stream = Stream(auth, MyListener())
twitter_stream.filter(track=['dengue'])  # Executing it, the program locks in a loop
</code></pre>
| 0 |
2016-09-16T16:37:17Z
| 39,537,331 |
<p>Tweepy's <code>StreamListener</code> class allows you to override its <code>on_data</code> method. That's where you should be doing your logic.</p>
<p>As per the code</p>
<pre><code>class StreamListener(object):
    ...
    def on_data(self, raw_data):
        """Called when raw data is received from connection.

        Override this method if you wish to manually handle
        the stream data. Return False to stop stream and close connection.
        """
        ...
</code></pre>
<p>So in your listener, you can override this method and do your custom logic.</p>
<pre><code>class MyListener(StreamListener):

    def on_data(self, data):
        do_whatever_with_data(data)
</code></pre>
<p>You can also override several other methods (<em>on_direct_message</em>, etc) and I encourage you to take a look at the code of <strong>StreamListener</strong>. </p>
<p>Update</p>
<p>Okay, you can do what you intent to do with the following:</p>
<pre><code>class MyListener(StreamListener):

    def __init__(self, *args, **kwargs):
        super(MyListener, self).__init__(*args, **kwargs)
        self.file = open("whatever.json", "a+")

    def _persist_to_file(self, data):
        try:
            self.file.write(data)
        except BaseException:
            pass

    def on_data(self, data):
        try:
            tweet = json.loads(data)
            text = str("@%s acabou de. %s " % (tweet['user']['screen_name'], random.choice(exp)))
            # use an API instance (e.g. api = tweepy.API(auth)), and reply to
            # the tweet's own id rather than the user's id
            api.update_status(status=text, in_reply_to_status_id=tweet['id'])
            self._persist_to_file(data)
            return True
        except BaseException as e:
            print("Error on_data: %s" % str(e))
            return True

    def on_error(self, status):
        print(status)
        return True
</code></pre>
| 1 |
2016-09-16T17:42:53Z
|
[
"python",
"api",
"twitter",
"stream",
"tweepy"
] |
Match unicode emoji in python regex
| 39,536,390 |
<p>I need to extract the text between a number and an emoticon in a text</p>
<p>example text:</p>
<pre><code>blah xzuyguhbc ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv
</code></pre>
<p>output:</p>
<pre><code>extract1
extract2
</code></pre>
<p>The regex code that I wrote extracts the text between 2 numbers, I need to change the part where it identifies the unicode emoji characters and extracts text between them.</p>
<pre><code>(?<=[\s][\d])(.*?)(?=[\d])
</code></pre>
<p>Please suggest a Python-friendly method; I need it to work with all emojis, not only the ones given in the example.</p>
<p><a href="https://regex101.com/r/uT1fM0/1" rel="nofollow">https://regex101.com/r/uT1fM0/1</a></p>
| 3 |
2016-09-16T16:38:59Z
| 39,536,900 |
<p>So this may or may not work depending on your needs. If you know the emojis ahead of time, though, this will probably work; you just need a list of the types of emoticons to expect.</p>
<p>Anyway without more information, this is what I'd do.</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re

my_regex = re.compile(r'\d\s*([^☺️|^🙅]+)')
string = "blah xzuyguhbc ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv"

m = my_regex.findall(string)
if m:
    print m
</code></pre>
| 0 |
2016-09-16T17:14:28Z
|
[
"python",
"regex",
"unicode",
"emoji"
] |
Match unicode emoji in python regex
| 39,536,390 |
<p>I need to extract the text between a number and an emoticon in a text</p>
<p>example text:</p>
<pre><code>blah xzuyguhbc ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv
</code></pre>
<p>output:</p>
<pre><code>extract1
extract2
</code></pre>
<p>The regex code that I wrote extracts the text between 2 numbers, I need to change the part where it identifies the unicode emoji characters and extracts text between them.</p>
<pre><code>(?<=[\s][\d])(.*?)(?=[\d])
</code></pre>
<p>Please suggest a Python-friendly method; I need it to work with all emojis, not only the ones given in the example.</p>
<p><a href="https://regex101.com/r/uT1fM0/1" rel="nofollow">https://regex101.com/r/uT1fM0/1</a></p>
| 3 |
2016-09-16T16:38:59Z
| 39,537,131 |
<p>Here's my stab at the solution. Not sure if it will work in all circumstances. The trick is to convert all unicode emojis into normal text. This could be done by following <a href="http://stackoverflow.com/questions/25707222/print-python-emoji-as-unicode-string">this post</a>. Then you can match the emoji just as any normal text. Note that it won't work if the <em>literal</em> strings <code>\u</code> or <code>\U</code> are in your searched text.</p>
<p>Example: Copy your string into a file, let's call it <code>emo</code>.
In terminal:</p>
<pre><code>Chip chip@ 03:24:33@ ~: cat emo | python stackoverflow.py
blah xzuyguhbc ibcbb bqw 2 extract1 \u263a\ufe0f jbjhcb 6 extract2 \U0001f645 bjvcvvv\n
------------------------
[' extract1 ', ' extract2 ']
</code></pre>
<p>Where <code>stackoverflow.py</code> file is:</p>
<pre><code>import fileinput
import re

a = fileinput.input()
for line in a:
    teststring = unicode(line, 'utf-8')
    teststring = teststring.encode('unicode-escape')

    print teststring
    print "------------------------"
    m = re.findall('(?<=[\s][\d])(.*?)(?=\\\\[uU])', teststring)
    print m
</code></pre>
| 0 |
2016-09-16T17:28:23Z
|
[
"python",
"regex",
"unicode",
"emoji"
] |
Match unicode emoji in python regex
| 39,536,390 |
<p>I need to extract the text between a number and an emoticon in a text</p>
<p>example text:</p>
<pre><code>blah xzuyguhbc ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv
</code></pre>
<p>output:</p>
<pre><code>extract1
extract2
</code></pre>
<p>The regex code that I wrote extracts the text between 2 numbers, I need to change the part where it identifies the unicode emoji characters and extracts text between them.</p>
<pre><code>(?<=[\s][\d])(.*?)(?=[\d])
</code></pre>
<p>Please suggest a Python-friendly method; I need it to work with all emojis, not only the ones given in the example.</p>
<p><a href="https://regex101.com/r/uT1fM0/1" rel="nofollow">https://regex101.com/r/uT1fM0/1</a></p>
| 3 |
2016-09-16T16:38:59Z
| 39,537,145 |
<p>Since there are a lot of emoji <a href="http://apps.timwhitlock.info/emoji/tables/unicode" rel="nofollow">with different unicode values</a>, you have to explicitly specify them in your regex, or if they are within a specific range you can use a character class. In this case your second symbol is not a standard emoji, it's just a unicode character, but since it's greater than <code>\u263a</code> (the unicode representation of ☺️) you can put it in a range with <code>\u263a</code>:</p>
<pre><code>In [71]: s = 'blah xzuyguhbc ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv'

In [72]: regex = re.compile(r'\d+(.*?)(?:\u263a|\U0001f645)')

In [74]: regex.findall(s)
Out[74]: [' extract1 ', ' extract2 ']
</code></pre>
<p>Or if you want to match more emojies you can use a character range (here is a good reference which shows you the proper range for different emojies <a href="http://apps.timwhitlock.info/emoji/tables/unicode" rel="nofollow">http://apps.timwhitlock.info/emoji/tables/unicode</a>):</p>
<pre><code>In [75]: regex = re.compile(r'\d+(.*?)[\u263a-\U0001f645]')
In [76]: regex.findall(s)
Out[76]: [' extract1 ', ' extract2 ']
</code></pre>
<p>Note that in the second case you have to make sure that all the characters within the aforementioned range are emojis that you want.</p>
<p>Here is another example:</p>
<pre><code>In [77]: s = "blah 4 xzuyguhbc 😺 ibcbb bqw 2 extract1 ☺️ jbjhcb 6 extract2 🙅 bjvcvvv"

In [78]: regex = re.compile(r'\d+(.*?)[\u263a-\U0001f645]')

In [79]: regex.findall(s)
Out[79]: [' xzuyguhbc ', ' extract1 ', ' extract2 ']
</code></pre>
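<p>As a supplementary sketch (an editorial addition, not part of the original answer): on Python 3, where <code>str</code> is unicode by default, the same range-class idea can be written with literal escapes. The exact ranges below are an assumption and should be widened or narrowed to the emoji you expect:</p>

```python
import re

# Character class covers Misc Symbols (U+2600..) up through the emoji
# blocks used in this question; adjust the ranges for other emoji.
pattern = re.compile('\\d+\\s*(.*?)\\s*[\u2600-\u27bf\U0001f300-\U0001f6ff]')
s = 'blah xzuyguhbc ibcbb bqw 2 extract1 \u263a\ufe0f jbjhcb 6 extract2 \U0001f645 bjvcvvv'
print(pattern.findall(s))  # ['extract1', 'extract2']
```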
| 1 |
2016-09-16T17:29:16Z
|
[
"python",
"regex",
"unicode",
"emoji"
] |
String split into dictionary containing lists in python
| 39,536,469 |
<p>I have to split a string that looks like:</p>
<pre><code>'{ a:[(b,c), (d,e)] , b: [(a,c), (c,d)]}'
</code></pre>
<p>and convert it to a dict whose values are lists containing tuples, like:</p>
<pre><code>{'a':[('b','c'), ('d','e')] , 'b': [('a','c'), ('c','d')]}
</code></pre>
<p>In my case, the above string was just an example. What I am actually doing is receiving a response from a server. At the server side, the response is a proper dictionary with lists and such, but it arrives at my client in string format somehow. For example:</p>
<pre><code>u"{'write_times': [ (1.658935546875, 1474049078179.095), (1.998779296875, 1474049078181.098)], 'read_times': [(0.825927734375, 1474049447696.7249), (1.4638671875, 1474049447696.7249)]}"
</code></pre>
<p>I want it to be just like it was at the server side.</p>
<pre><code>{'write_times': [ ('1.658935546875', '1474049078179.095'), ('1.998779296875', '1474049078181.098')], 'read_times': [('0.825927734375', '1474049447696.7249'), ('1.4638671875', '1474049447696.7249')]}
</code></pre>
<p>The solution you proposed may not work. Any ideas?</p>
| 0 |
2016-09-16T16:44:59Z
| 39,536,583 |
<p>It is important to know where this string is coming from, but, assuming this is what you have and you cannot change that, you can pre-process the string putting alphanumerics into quotes and using <a href="https://docs.python.org/2/library/ast.html#ast.literal_eval" rel="nofollow"><code>ast.literal_eval()</code></a> to <em>safely</em> eval it:</p>
<pre><code>>>> from ast import literal_eval
>>> import re
>>>
>>> s = '{ a:[(b,c), (d,e)] , b: [(a,c), (c,d)]}'
>>> literal_eval(re.sub(r"(\w+)", r"'\1'", s))
{'a': [('b', 'c'), ('d', 'e')], 'b': [('a', 'c'), ('c', 'd')]}
</code></pre>
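<p>A side note on the asker's edit (an editorial addition): the actual server response already has quoted keys and plain numeric values, so it is a valid Python literal and <code>literal_eval</code> can parse it directly, with no regex pre-processing; the numbers then stay floats rather than becoming strings:</p>

```python
from ast import literal_eval

# The asker's payload, shortened: keys are quoted and values are plain
# numbers, so the string is already a valid Python literal.
resp = u"{'write_times': [(1.658935546875, 1474049078179.095)], 'read_times': [(0.825927734375, 1474049447696.7249)]}"
data = literal_eval(resp)
print(data['write_times'][0][0])  # 1.658935546875
```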
| 3 |
2016-09-16T16:52:45Z
|
[
"python",
"split"
] |
PyQT Buttons has Incorrect Size in QGridLayout
| 39,536,497 |
<p>I just started a simple PyQt app on Windows 7 and Python 2.7. There are 2 buttons and a table. The Apple button should be 5x taller than the Orange button, and the table should be the same height as the Apple button.</p>
<p>However both buttons are drawn the same height despite using <code>grid.addWidget(appleBtn, 0, 0, 5, 1)</code> to define its height.</p>
<p>Any suggestions?</p>
<p><a href="http://i.stack.imgur.com/cSAJg.png" rel="nofollow"><img src="http://i.stack.imgur.com/cSAJg.png" alt="enter image description here"></a></p>
<pre><code>from PyQt4.QtGui import *
from PyQt4.QtCore import *
import sys

def main():
    app = QApplication(sys.argv)
    w = QTabWidget()

    # Tab
    grid = QGridLayout()
    tab = QWidget()
    tab.setLayout(grid)
    w.addTab(tab, "Hello World")

    # Button 1
    appleBtn = QPushButton("Apples")
    appleBtn.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Expanding)
    grid.addWidget(appleBtn, 0, 0, 5, 1)

    # Button 2
    orangeBtn = QPushButton("Oranges")
    orangeBtn.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Expanding)
    grid.addWidget(orangeBtn, 5, 0, 1, 1)

    # Table
    fruitTable = QTableWidget()
    fruitTable.setRowCount(5)
    fruitTable.setColumnCount(2)
    fruitTable.setHorizontalHeaderLabels(QString("Fruit;Color;").split(";"))
    fruitTable.horizontalHeader().setResizeMode(QHeaderView.Stretch)
    grid.addWidget(fruitTable, 6, 0, 1, 1)

    w.resize(400, 300)
    w.setWindowTitle('Test')
    w.show()
    sys.exit(app.exec_())

main()
</code></pre>
| 1 |
2016-09-16T16:47:09Z
| 39,537,083 |
<p>The <code>addWidget</code> method doesn't work in the way you assume it does. The second and third arguments specify the row/column, whilst the fourth and fifth specify how many rows/columns to span.</p>
<p>The correct way to specify proportional heights is with <code>setRowStretch</code>:</p>
<pre><code>grid.addWidget(appleBtn, 0, 0)
grid.setRowStretch(0, 5)
...
grid.addWidget(orangeBtn, 1, 0)
...
grid.addWidget(fruitTable, 2, 0)
grid.setRowStretch(2, 5)
</code></pre>
| 1 |
2016-09-16T17:25:44Z
|
[
"python",
"python-2.7",
"qt",
"pyqt"
] |
Select certain row values and make them columns in pandas
| 39,536,529 |
<p>I have a dataset that looks like the below:</p>
<pre><code>+-------------------------+-------------+------+--------+-------------+--------+--+
| | impressions | name | shares | video_views | diff | |
+-------------------------+-------------+------+--------+-------------+--------+--+
| _ts | | | | | | |
| 2016-09-12 23:15:04.120 | 1 | Vidz | 7 | 10318 | 15mins | |
| 2016-09-12 23:16:45.869 | 2 | Vidz | 7 | 10318 | 16mins | |
| 2016-09-12 23:30:03.129 | 3 | Vidz | 18 | 29291 | 30mins | |
| 2016-09-12 23:32:08.317 | 4 | Vidz | 18 | 29291 | 32mins | |
+-------------------------+-------------+------+--------+-------------+--------+--+
</code></pre>
<p>I am trying to build a dataframe to feed to a regression model, and I'd like to parse out specific rows as features. To do this I would like the dataframe to resemble this</p>
<pre><code>+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
| | name | 15min_shares | 15min_impressions | 15min_video_views | 30min_shares | 30min_impressions | 30min_video_views |
+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
| _ts | | | | | | | |
| 2016-09-12 23:15:04.120 | Vidz | 7 | 1 | 10318 | 18 | 3 | 29291 |
+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
</code></pre>
<p>What would be the best way to do this? I think this would be easier if I were only trying to select 1 row (15mins), just parse out the unneeded rows and pivot.</p>
<p>However, I need both 15min and 30min features and am unsure how to proceed given the need for these columns.</p>
| 2 |
2016-09-16T16:49:27Z
| 39,537,258 |
<p>You could take subsets of your <code>DF</code> to include rows for 15mins and 30mins and concatenate them, backfilling <code>NaN</code> values of the first row (15mins) with those of its next row (30mins) and dropping off the next row (30mins), as shown:</p>
<pre><code>prefix_15="15mins"
prefix_30="30mins"
fifteen_mins = (df['diff']==prefix_15)
thirty_mins = (df['diff']==prefix_30)
df = df[fifteen_mins|thirty_mins].drop(['diff'], axis=1)
df_ = pd.concat([df[fifteen_mins].add_prefix(prefix_15+'_'), \
df[thirty_mins].add_prefix(prefix_30+'_')], axis=1) \
.fillna(method='bfill').dropna(how='any')
del(df_['30mins_name'])
df_.rename(columns={'15mins_name':'name'}, inplace=True)
df_
</code></pre>
<p><a href="http://i.stack.imgur.com/A5RT2.png" rel="nofollow"><img src="http://i.stack.imgur.com/A5RT2.png" alt="Image"></a></p>
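<p>A third option (an editorial addition, not from either answer) on a reasonably recent pandas (<code>pivot</code> accepts a list of value columns since pandas 1.1) builds the wide table directly; the sample frame below is made up to mirror the question, and the <code>(measure, diff)</code> MultiIndex is then flattened into the <code>15mins_shares</code>-style names the question asks for:</p>

```python
import pandas as pd

# Made-up rows mirroring the question's 15mins/30mins snapshots
df = pd.DataFrame({
    'name': ['Vidz', 'Vidz'],
    'impressions': [1, 3],
    'shares': [7, 18],
    'video_views': [10318, 29291],
    'diff': ['15mins', '30mins'],
})

wide = df.pivot(index='name', columns='diff',
                values=['shares', 'impressions', 'video_views'])
# Flatten the (measure, diff) MultiIndex into '15mins_shares' style names
wide.columns = ['{}_{}'.format(d, m) for m, d in wide.columns]
print(wide.loc['Vidz', '15mins_shares'])  # 7
```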
| 2 |
2016-09-16T17:37:26Z
|
[
"python",
"pandas"
] |
Select certain row values and make them columns in pandas
| 39,536,529 |
<p>I have a dataset that looks like the below:</p>
<pre><code>+-------------------------+-------------+------+--------+-------------+--------+--+
| | impressions | name | shares | video_views | diff | |
+-------------------------+-------------+------+--------+-------------+--------+--+
| _ts | | | | | | |
| 2016-09-12 23:15:04.120 | 1 | Vidz | 7 | 10318 | 15mins | |
| 2016-09-12 23:16:45.869 | 2 | Vidz | 7 | 10318 | 16mins | |
| 2016-09-12 23:30:03.129 | 3 | Vidz | 18 | 29291 | 30mins | |
| 2016-09-12 23:32:08.317 | 4 | Vidz | 18 | 29291 | 32mins | |
+-------------------------+-------------+------+--------+-------------+--------+--+
</code></pre>
<p>I am trying to build a dataframe to feed to a regression model, and I'd like to parse out specific rows as features. To do this I would like the dataframe to resemble this</p>
<pre><code>+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
| | name | 15min_shares | 15min_impressions | 15min_video_views | 30min_shares | 30min_impressions | 30min_video_views |
+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
| _ts | | | | | | | |
| 2016-09-12 23:15:04.120 | Vidz | 7 | 1 | 10318 | 18 | 3 | 29291 |
+-------------------------+------+--------------+-------------------+-------------------+--------------+-------------------+-------------------+
</code></pre>
<p>What would be the best way to do this? I think this would be easier if I were only trying to select 1 row (15mins), just parse out the unneeded rows and pivot.</p>
<p>However, I need both 15min and 30min features and am unsure how to proceed given the need for these columns.</p>
| 2 |
2016-09-16T16:49:27Z
| 39,537,341 |
<p>stacking to pivot and collapsing your columns</p>
<pre><code>df1 = df.set_index('diff', append=True).stack().unstack(0).T
df1.columns = df1.columns.map('_'.join)
</code></pre>
<hr>
<p>To see just the first row</p>
<pre><code>df1.iloc[[0]].dropna(1)
</code></pre>
<p><a href="http://i.stack.imgur.com/kpRmc.png" rel="nofollow"><img src="http://i.stack.imgur.com/kpRmc.png" alt="enter image description here"></a></p>
| 0 |
2016-09-16T17:43:27Z
|
[
"python",
"pandas"
] |
Substitute Function call with sympy
| 39,536,540 |
<p>I want to receive input from a user, parse it, then perform some substitutions on the resulting expression. I know that I can use <code>sympy.parsing.sympy_parser.parse_expr</code> to parse arbitrary input from the user. However, I am having trouble substituting in function definitions. Is it possible to make substitutions in this manner, and if so, how would I do so?</p>
<p>The overall goal is to allow a user to provide a function of <code>x</code>, which is then used to fit data. <code>parse_expr</code> gets me 95% of the way there, but I would like to provide some convenient expansions, such as shown below.</p>
<pre><code>import sympy
from sympy.parsing.sympy_parser import parse_expr
x,height,mean,sigma = sympy.symbols('x height mean sigma')
gaus = height*sympy.exp(-((x-mean)/sigma)**2 / 2)
expr = parse_expr('gaus(100, 5, 0.2) + 5')
print expr.subs('gaus',gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus'),gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus')(height,mean,sigma),gaus) # prints 'gaus(100, 5, 0.2) + 5'
# Desired output: '100 * exp(-((x-5)/0.2)**2 / 2) + 5'
</code></pre>
<p>This is done using python 2.7.9, sympy 0.7.5.</p>
| 1 |
2016-09-16T16:50:01Z
| 39,538,381 |
<p>After some experimentation, while I did not find a built-in solution, it was not difficult to build one that satisfies simple cases. I am not a sympy expert, and so there may be edge cases that I haven't considered.</p>
<pre><code>import sympy
from sympy.core.function import AppliedUndef

def func_sub_single(expr, func_def, func_body):
    """
    Given an expression and a function definition,
    find/expand an instance of that function.

    Ex:
        linear, m, x, b = sympy.symbols('linear m x b')
        func_sub_single(linear(2, 1), linear(m, b), m*x+b) # returns 2*x+1
    """
    # Find the expression to be replaced, return if not there
    for unknown_func in expr.atoms(AppliedUndef):
        if unknown_func.func == func_def.func:
            replacing_func = unknown_func
            break
    else:
        return expr
    # Map of argument name to argument passed in
    arg_sub = {from_arg: to_arg for from_arg, to_arg in
               zip(func_def.args, replacing_func.args)}
    # The function body, now with the arguments included
    func_body_subst = func_body.subs(arg_sub)
    # Finally, replace the function call in the original expression.
    return expr.subs(replacing_func, func_body_subst)

def func_sub(expr, func_def, func_body):
    """
    Given an expression and a function definition,
    find/expand all instances of that function.

    Ex:
        linear, m, x, b = sympy.symbols('linear m x b')
        func_sub(linear(linear(2,1), linear(3,4)),
                 linear(m, b), m*x+b) # returns x*(2*x+1) + 3*x + 4
    """
    if any(func_def.func == body_func.func for body_func in func_body.atoms(AppliedUndef)):
        raise ValueError('Function may not be recursively defined')
    while True:
        prev = expr
        expr = func_sub_single(expr, func_def, func_body)
        if prev == expr:
            return expr
</code></pre>
| 0 |
2016-09-16T18:54:08Z
|
[
"python",
"sympy"
] |
Substitute Function call with sympy
| 39,536,540 |
<p>I want to receive input from a user, parse it, then perform some substitutions on the resulting expression. I know that I can use <code>sympy.parsing.sympy_parser.parse_expr</code> to parse arbitrary input from the user. However, I am having trouble substituting in function definitions. Is it possible to make substitutions in this manner, and if so, how would I do so?</p>
<p>The overall goal is to allow a user to provide a function of <code>x</code>, which is then used to fit data. <code>parse_expr</code> gets me 95% of the way there, but I would like to provide some convenient expansions, such as shown below.</p>
<pre><code>import sympy
from sympy.parsing.sympy_parser import parse_expr
x,height,mean,sigma = sympy.symbols('x height mean sigma')
gaus = height*sympy.exp(-((x-mean)/sigma)**2 / 2)
expr = parse_expr('gaus(100, 5, 0.2) + 5')
print expr.subs('gaus',gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus'),gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus')(height,mean,sigma),gaus) # prints 'gaus(100, 5, 0.2) + 5'
# Desired output: '100 * exp(-((x-5)/0.2)**2 / 2) + 5'
</code></pre>
<p>This is done using python 2.7.9, sympy 0.7.5.</p>
| 1 |
2016-09-16T16:50:01Z
| 39,647,654 |
<p>You can use the <code>replace</code> method. For instance</p>
<pre><code>gaus = Function("gaus") # gaus is parsed as a Function
expr.replace(gaus, Lambda((height, mean, sigma), height*sympy.exp(-((x-mean)/sigma)**2 / 2)))
</code></pre>
<p><code>replace</code> also has other options, such as pattern matching. </p>
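<p>Putting the pieces together (an editorial sketch using the question's setup; the printed form of the result may vary by sympy version):</p>

```python
import sympy
from sympy import Function, Lambda, symbols
from sympy.parsing.sympy_parser import parse_expr

x, height, mean, sigma = symbols('x height mean sigma')
gaus = Function('gaus')  # parse_expr sees gaus(...) as an undefined Function
expr = parse_expr('gaus(100, 5, 0.2) + 5')
result = expr.replace(gaus,
                      Lambda((height, mean, sigma),
                             height*sympy.exp(-((x - mean)/sigma)**2 / 2)))
print(result)
```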
| 0 |
2016-09-22T19:44:47Z
|
[
"python",
"sympy"
] |
Command to run a bat script against all filename in a folder and segmented databases and merging output
| 39,536,601 |
<p>I have a random number of .txt files (with random names) within a folder named 'seq', such as:</p>
<pre><code>NP_4500.1.txt
NP_4568.1.txt
NP_45981.3.txt
XM_we679.txt
36498746.txt
</code></pre>
<p>In another folder named 'db', I have a database fragmented into 20 segments (due to my computational limitations), arranged as:</p>
<pre><code>hg.part-01.db
hg.part-02.db
hg.part-03.db
..
..
hg.part-20.db
</code></pre>
<p>Now I want to run the following command for each .txt file against each fragmented database segment and generate fragmented results; for one .txt file (NP_4500.1.txt):</p>
<pre><code>script.exe ./seq/NP_4500.1.txt -db ./db/hg.part-01.db -out NP_4500.1_part-01.out
script.exe ./seq/NP_4500.1.txt -db ./db/hg.part-02.db -out NP_4500.1_part-02.out
script.exe ./seq/NP_4500.1.txt -db ./db/hg.part-03.db -out NP_4500.1_part-03.out
...
...
script.exe ./seq/NP_4500.1.txt -db ./db/hg.part-20.db -out NP_4500.1_part-20.out
</code></pre>
<p>After that, I want to merge the results into a single file as:</p>
<pre><code>join NP_4500.1_part-01.out NP_4500.1_part-02.out .. NP_4500.1_part-20.out > NP_4500.1.out
</code></pre>
<p>similarly for next .txt file:</p>
<pre><code>NP_4568.1.txt
...
</code></pre>
<p>Now, I can run a cmd script for each .txt file as:</p>
<pre><code>for %%F in ("*.txt") do script.exe ./seq/%%F .......
</code></pre>
<p>But my question is, how can I integrate this command with each of the fragmented database segments and merge the .out files to generate the result for a single .txt file before proceeding to the next?</p>
<p>I am using windows 7 (32 bit machine). I can use cmd, perl or python script. Thanks for ur consideration.</p>
| 3 |
2016-09-16T16:53:51Z
| 39,548,916 |
<pre class="lang-dos prettyprint-override"><code>@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
SET "sourcedir=U:\sourcedir"
FOR /f "delims=" %%a IN (
'dir /b /a-d "%sourcedir%\*.txt" '
) DO (
SET "join="
FOR /L %%d IN (101,1,120) DO (
SET /a segment=%%d
SET "segment=!segment:~-2!"
ECHO(script.exe %sourcedir%\%%a -db .\db\hg.part-!segment!.db -out %%~na_part-!segment!.out
SET "join=!join! %%~na_part-!segment!.out"
)
ECHO(join !join! ^>%%~na.out
)
GOTO :EOF
</code></pre>
<p>You would need to change the setting of <code>sourcedir</code> to suit your circumstances.</p>
<p>Note that in Windows the directory separator is <code>\</code>; <code>/</code> is a switch indicator.</p>
<p>The outer <code>for</code> assigns each filename in turn to <code>%%a</code>.</p>
<p><code>%%d</code> is assigned 101..120 in turn. This is assigned to <code>segment</code> and then only the last 2 characters of <code>segment</code> are used, yielding 01..20.</p>
<p>The <code>script.exe</code> is then run for each segment, and the <code>join</code> string builds up each <code>.out</code> filename.</p>
<p>Finally, the <code>join</code> command is executed.</p>
<p>The required commands are merely <code>ECHO</code>ed for testing purposes. <strong>After you've verified that the commands are correct</strong>, delete the string <code>ECHO(</code> which appears before your command to actually execute the commands. You would also need to change the <code>^></code> in the final <code>echo</code> to <code>></code> - the caret escapes the special meaning of <code>></code> for the purpose of being <code>echo</code>ed.</p>
<p>The <code>%%~na</code> hieroglyph means the name part only of the string in <code>%%a</code>.</p>
| 0 |
2016-09-17T16:08:16Z
|
[
"python",
"perl",
"batch-file",
"cmd"
] |
Math Domain Error Quadratic Formula
| 39,536,727 |
<p>I figured out the errors except for this last one. I'm now getting the error message below and can't figure out why. I'm using the exact formulas for x1 and x2 that my teacher gave us to use, and I'm not able to figure the error out.</p>
<pre><code># Quadratic Formula
# Import the math library to use sqrt() function to find the square root
import math

print("This equation solves for x in the binomial equation: ax^2 + bx + c = 0")

# Get the equation's coefficients and constant from the user
a = 0
while a == 0:
    try:
        a = float(input("Enter the first coefficeint, or a, value: "))
        if a == 0:
            raise ValueError
    except ValueError:
        print("The value you entered is invalid. Zero is not allowed")
    else:
        break

while (True):
    try:
        b = float(input("Enter the Second coefficeint, or b, value: "))
    except ValueError:
        print("The value you entered is invalid. only real numbers")
    else:
        break

while (True):
    try:
        c = float(input("Enter the last coefficeint, or c, value: "))
    except ValueError:
        print("The value you entered is invalid. only real numbers")
    else:
        break

d = (b**2) - (4*a*c)
x1 = ((-b) + math.sqrt(b**2 - 4*a*c)) / (2*a)
x2 = ((-b) - math.sqrt(b**2 - 4*a*c)) / (2*a)
print("X is: ", x1, " or ", x2)

do_calculation = True
while (do_calculation):
    another_calculation = input("Do you want to perform another calculation? (y/n):")
    if (another_calculation != "y"):
        do_calculation = False
</code></pre>
<p>Sample run and traceback:</p>
<pre><code>This equation solves for x in the binomial equation: ax^2 + bx + c = 0
Enter the first coefficeint, or a, value: 2
Enter the Second coefficeint, or b, value: 3
Enter the last coefficeint, or c, value: 4
Traceback (most recent call last):
  File "/Users/cadenhastie/Downloads/Jtwyp6QuadraticEqnCalc/improvedquadraticeqncalc.py", line 34, in <module>
    x1 = ((-b) + math.sqrt(b**2 - 4*a*c)) / (2*a)
ValueError: math domain error
</code></pre>
| 0 |
2016-09-16T17:03:16Z
| 39,536,927 |
<p>You have <code>try</code> statements with no corresponding <code>except</code> statement. You should generally avoid <code>while True:</code>. The indentation in your code has many issues</p>
<p>To get one value from the user with error handling, you could do something like the code below. You would then repeat this for each coefficient you want the user to enter. You probably want to wrap this in a function at some point so you are not writing duplicate code.</p>
<pre><code>a = 0
while a == 0:
    try:
        a = float(input("Enter the first coefficeint, or a, value: "))
        if a == 0:
            raise ValueError
    except ValueError:
        print("The value you entered is invalid. Zero is not allowed")
</code></pre>
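<p>A note beyond the input-validation advice above (an editorial addition): the traceback itself comes from a negative discriminant. With a=2, b=3, c=4, b**2 - 4*a*c = 9 - 32 = -23, and <code>math.sqrt</code> raises <code>ValueError: math domain error</code> for negative input. A hedged sketch of a guard (use <code>cmath.sqrt</code> instead if complex roots are wanted):</p>

```python
import math

def solve_quadratic(a, b, c):
    d = b**2 - 4*a*c
    if d < 0:
        return None  # complex roots: math.sqrt(d) would raise "math domain error"
    root = math.sqrt(d)
    return ((-b + root) / (2*a), (-b - root) / (2*a))

print(solve_quadratic(2, 3, 4))   # None -- the question's failing input
print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
```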
| 0 |
2016-09-16T17:16:03Z
|
[
"python",
"quadratic"
] |
Limits testing auth.sendCode in the telegram.org API
| 39,536,739 |
<p>Does the telegram server have a limit on the number of times a client can call auth.sendCode to receive a new <code>phone code</code>?</p>
<p>While testing I have to make this call many times until I get my code fully debugged, but it seems to be limited to maybe three invokations per day (or less in some cases). After making the auth.sendCode method call I receive a response that seems ok:</p>
<pre><code>('sentCode: ', {u'req_msg_id': 6330970917330544640L, u'result': {u'phone_code_hash': '7140824e8db63141ab', u'type': {u'length': 5}, u'next_type': {}, u'flags': 3, u'phone_registered': True}})
</code></pre>
<p>But after receiving a new <code>phone code</code> several times it stops sending me anything, as if it is ignoring my request for a new <code>phone code</code>. Today it only worked once before ignoring further <code>phone code</code> requests. Last week I was able to receive five new <code>phone codes</code> before it ignored my requests.</p>
<p>If this is a limit imposed by the server is there a way to reset it so I can continue debugging my client code?</p>
| 1 |
2016-09-16T17:03:59Z
| 39,537,796 |
<p>You can try this out with telegram on your phone to see how many times you can do a request for phone_codes from the same number before it stops responding.</p>
<p>Also, during testing ensure that you are using the range of test IP addresses, not the live IP addresses.</p>
<p>During my testing I received way over 3 phone codes per day, all on the same number, and I never ran into any limits; however, that was on Layer 42. Today we are on Layer 55. What layer are you presently working with?</p>
<p>Keep in mind too that recently Telegram has been under several DoS attacks, so if your client looks like it's doing something fishy then it is indeed possible that some restrictions have now been put in place :(</p>
<p>One more thing: from your phone, look for the menu option that shows you the number of active sessions you currently have open, and close all the ones you don't need.</p>
<p>That might also help narrow down your issue.</p>
| 1 |
2016-09-16T18:19:06Z
|
[
"python",
"api",
"telegram"
] |
How to determine which Python class provides attributes when inheriting
| 39,536,792 |
<p>What's the easiest way to determine which Python class defines an attribute when inheriting? For example, say I have:</p>
<pre><code>class A(object):
defined_in_A = 123
class B(A):
pass
a = A()
b = B()
</code></pre>
<p>and I wanted this code to pass:</p>
<pre><code>assert hasattr(a, 'defined_in_A')
assert hasattr(A, 'defined_in_A')
assert hasattr(b, 'defined_in_A')
assert hasattr(B, 'defined_in_A')
assert defines_attribute(A, 'defined_in_A')
assert not defines_attribute(B, 'defined_in_A')
</code></pre>
<p>How would I implement the fictional <code>defines_attribute</code> function? My first thought would be to walk through the entire inheritance chain, and use <code>hasattr</code> to check for the attribute's existence, with the deepest match assumed to be the definer. Is there a simpler way?</p>
| 3 |
2016-09-16T17:08:19Z
| 39,536,988 |
<p>(Almost) every Python object stores its own instance variables (instance variables of a class object are usually called class variables). To get these as a dictionary you can use the <a href="https://docs.python.org/3/library/functions.html?highlight=built#vars" rel="nofollow"><code>vars</code></a> function and check for membership in it:</p>
<pre><code>>>> "defined_in_A" in vars(A)
True
>>> "defined_in_A" in vars(B)
False
>>> "defined_in_A" in vars(a) or "defined_in_A" in vars(b)
False
</code></pre>
<p>The issue with this is that it does not work on instances when a class uses <code>__slots__</code>, or on built-in objects, since that changes how the instance variables are stored:</p>
<pre><code>class A(object):
__slots__ = ("x","y")
defined_in_A = 123
>>> A.x
<member 'x' of 'A' objects>
>>> "x" in vars(a)
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
"x" in vars(a)
TypeError: vars() argument must have __dict__ attribute
>>> vars(1) #or floats or strings will raise the same error
Traceback (most recent call last):
...
TypeError: vars() argument must have __dict__ attribute
</code></pre>
<p>I'm not sure there is a simple workaround for this case.</p>
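<p>Given that, the fictional <code>defines_attribute</code> from the question can be a one-liner when restricted to classes; this is only a sketch, and the slotted-instance case discussed above remains awkward:</p>

```python
def defines_attribute(cls, name):
    # True only if `name` lives in cls's own namespace rather than being
    # inherited. This also works for classes that use __slots__, because
    # slot descriptors are stored in the defining class's __dict__.
    return name in vars(cls)

class A(object):
    defined_in_A = 123

class B(A):
    pass

assert defines_attribute(A, 'defined_in_A')
assert not defines_attribute(B, 'defined_in_A')
```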
| 2 |
2016-09-16T17:19:47Z
|
[
"python",
"inheritance"
] |
Python - How to pass quantity in Pinax Stripe subscription?
| 39,536,798 |
<p>I am using the pinax stripe package in my Django project where a user can enroll multiple people for subscriptions. </p>
<p>I know that stripe has a quantity parameter which can be passed for subscriptions but could not find it in pinax package. </p>
<p>Can someone guide me on how can I pass the quantity parameter in the pinax-stripe package? </p>
| 0 |
2016-09-16T17:08:52Z
| 39,538,462 |
<p><a href="https://github.com/pinax/pinax-stripe/blob/master/pinax/stripe/tests/test_actions.py#L479-L486" rel="nofollow">Like this</a>:</p>
<pre><code>Subscription.objects.create(
customer=self.customer,
plan=plan,
quantity=1
)
</code></pre>
| 0 |
2016-09-16T18:59:15Z
|
[
"python",
"django",
"stripe-payments",
"pinax"
] |
join two array in Python
| 39,536,809 |
<p>I am new to Python. I use np.lib.recfunctions.join_by to join two arrays, but the result is wrong. Here is the example and results:</p>
<pre><code>a = np.array([('a',1),('b',2),('b',2),('c',3)],dtype=[('key','<U1'),('x','<i4')])
b = np.array([('a',-1),('b',-2)],dtype=[('key','<U1'),('y','<i4')])
np.lib.recfunctions.join_by('key', a, b, jointype='outer').data
array([(u'a', 1, -1), (u'b', 2, -2), (u'b', 2, 0), (u'c', 3, 0)],
dtype=[])
</code></pre>
<p>Why does the second joined row from b show 0 and not -2?</p>
<p>what I want is </p>
<pre><code>(a,1,-1),(b,2,-2),(b,2,-2),(c,3,0)
</code></pre>
<p>How to do it?</p>
<p>Thanks</p>
| 1 |
2016-09-16T17:09:32Z
| 39,541,317 |
<p>Had some time to examine the options a bit more. I think part of the information is being hidden by using '.data' to get your return. Consider the following:</p>
<pre><code>from numpy.lib import recfunctions as rfn
a # array a and dtype
array([('a', 1), ('b', 2), ('b', 2), ('c', 3)],
dtype=[('key', '<U5'), ('x', '<i4')])
a_u # np.unique(a) and dtype
array([('a', 1), ('b', 2), ('c', 3)],
dtype=[('key', '<U5'), ('x', '<i4')])
b # array b and dtype
array([('a', -1), ('b', -2)],
dtype=[('key', '<U5'), ('x', '<i4')])
# join examples
rfn.join_by('key', a, b, jointype='outer') # join b to a
masked_array(data = [('a', 1, -1) ('b', 2, -2) ('b', 2, --) ('c', 3, --)],
mask = [(False, False, False) (False, False, False) (False, False, True) (False, False, True)],
fill_value = ('N/A', 999999, 999999),
dtype = [('key', '<U5'), ('x1', '<i4'), ('x2', '<i4')])
rfn.join_by('key', a_u, b, jointype='outer') # join b to a_u
masked_array(data = [('a', 1, -1) ('b', 2, -2) ('c', 3, --)],
mask = [(False, False, False) (False, False, False) (False, False, True)],
fill_value = ('N/A', 999999, 999999),
dtype = [('key', '<U5'), ('x1', '<i4'), ('x2', '<i4')])
rfn.join_by('key', a, b, jointype='inner').data # join b to a, with data
array([('a', 1, -1), ('b', 2, -2), ('b', 2, 0)],
dtype=[('key', '<U5'), ('x1', '<i4'), ('x2', '<i4')])
rfn.join_by('key', a_u, b, jointype='inner').data # join b to a_u, with data
array([('a', 1, -1), ('b', 2, -2)],
dtype=[('key', '<U5'), ('x1', '<i4'), ('x2', '<i4')])
</code></pre>
<p>If you skip <code>.data</code>, more (possibly useful) information is returned.
In your simple case, just returning the unique records in array 'a' takes care of any issues you seem to have with interpretation.</p>
<p>I think you will have to investigate in more detail what you need returned given you have a limited number of fields participating in the join.</p>
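<p>If the goal is simply the rows the question asks for (duplicate keys in <code>a</code> all filled from <code>b</code>, missing keys as 0), a plain dictionary lookup sidesteps the duplicate-key behaviour of <code>join_by</code> entirely. A minimal sketch:</p>

```python
import numpy as np

a = np.array([('a', 1), ('b', 2), ('b', 2), ('c', 3)],
             dtype=[('key', '<U1'), ('x', '<i4')])
b = np.array([('a', -1), ('b', -2)],
             dtype=[('key', '<U1'), ('y', '<i4')])

# Map each key in b to its y value, then fill y for every row of a,
# so duplicate keys all receive the matching value (0 when absent).
lookup = {rec['key']: rec['y'] for rec in b}
result = np.empty(len(a), dtype=[('key', '<U1'), ('x', '<i4'), ('y', '<i4')])
result['key'] = a['key']
result['x'] = a['x']
result['y'] = [lookup.get(k, 0) for k in a['key']]
```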
| 0 |
2016-09-16T23:19:39Z
|
[
"python",
"arrays",
"numpy"
] |
Python - Where is the pip executable after install?
| 39,536,831 |
<p>Since I am on a server and do not have admin privileges, I need to install my own version of python and pip locally. After installing python, I used the command <code>python get-pip.py --user</code> which is on the <a href="https://pip.pypa.io/en/stable/installing/#id10" rel="nofollow">official site</a>. I get the following output and it seems pip is successfully installed. But I do not know where the pip executable is, so I cannot add it to my environment. So where is it installed? </p>
<pre><code>Collecting pip
Using cached pip-8.1.2-py2.py3-none-any.whl
Collecting setuptools
Using cached setuptools-27.2.0-py2.py3-none-any.whl
Collecting wheel
Using cached wheel-0.29.0-py2.py3-none-any.whl
Installing collected packages: pip, setuptools, wheel
Successfully installed pip setuptools wheel
</code></pre>
| 1 |
2016-09-16T17:10:49Z
| 39,536,914 |
<p>On Unix, <code>pip install --user</code> ... drops scripts into <code>~/.local/bin</code>.</p>
<p>pip should be somewhere under <code>~/.local</code>.</p>
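<p>Concretely (the exact location can differ per platform; on macOS framework builds it may be under <code>~/Library/Python/X.Y/bin</code> instead):</p>

```shell
# pip installed with --user typically lands in ~/.local/bin on Linux.
# Add that directory to your PATH:
export PATH="$HOME/.local/bin:$PATH"

# Or bypass PATH entirely and invoke pip through the interpreter
# you installed it for:
#   python -m pip --version
```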
| 2 |
2016-09-16T17:15:28Z
|
[
"python",
"install",
"pip"
] |
Create an automated script that login in into netflix
| 39,536,901 |
<p>I'd like to make an automated script in Python using Selenium that logs in here: <a href="https://www.netflix.com/Login" rel="nofollow">https://www.netflix.com/Login</a>.
I've tried with this code:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("ui-text-input[name = 'password']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>But it raises an error in this line of code:</p>
<pre><code>Traceback (most recent call last):
File "netflixlogin.py", line 6, in <module>
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 437, in find_element_by_css_selector
return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 752, in find_element
'value': value})['value']
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"css selector","selector":"ui-text-input[name = 'email']"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12608)
</code></pre>
<p>So there's a problem in this line of code:</p>
<pre><code>element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
</code></pre>
<p>Maybe wrong syntax? I started using Selenium recently so I am still not very experienced with it.</p>
<p>UPDATED SCRIPT with no errors, but it still doesn't work:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("form.login-form input[name=email]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("form.login-form input[name=password]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>SOLVED. WORKING SCRIPT:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/it/")
login = driver.find_element_by_css_selector(".authLinks.signupBasicHeader")
login.click()
element = driver.find_element_by_name("email")
element.send_keys("test1@gmail.com")
submit = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit.click()
element2 = driver.find_element_by_name("password")
element2.send_keys("test1")
submit2 = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit2.click()
</code></pre>
| -1 |
2016-09-16T17:14:28Z
| 39,536,926 |
<p>Yeah, the <code>ui-text-input[name = 'email']</code> selector is not going to match anything on this page since there is no <code><ui-text-input></code> element on the page, but there is an <code>input</code> with <code>ui-text-input</code> class. Fix your selector:</p>
<pre><code>form.login-form input[name=email]
</code></pre>
<p>By the way, instead, of <code>while True</code> approach, use the <a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow">built into selenium mechanism to wait for specific conditions to happen on a page</a>.</p>
<p>And shouldn't you be choosing the login type (email/phone) first?</p>
| 0 |
2016-09-16T17:16:02Z
|
[
"python",
"python-3.x",
"selenium"
] |
Create an automated script that login in into netflix
| 39,536,901 |
<p>I'd like to make an automated script in Python using Selenium that logs in here: <a href="https://www.netflix.com/Login" rel="nofollow">https://www.netflix.com/Login</a>.
I've tried with this code:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("ui-text-input[name = 'password']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>But it raises an error in this line of code:</p>
<pre><code>Traceback (most recent call last):
File "netflixlogin.py", line 6, in <module>
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 437, in find_element_by_css_selector
return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 752, in find_element
'value': value})['value']
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"css selector","selector":"ui-text-input[name = 'email']"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12608)
</code></pre>
<p>So there's a problem in this line of code:</p>
<pre><code>element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
</code></pre>
<p>Maybe wrong syntax? I started using Selenium recently so I am still not very experienced with it.</p>
<p>UPDATED SCRIPT with no errors, but it still doesn't work:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("form.login-form input[name=email]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("form.login-form input[name=password]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>SOLVED. WORKING SCRIPT:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/it/")
login = driver.find_element_by_css_selector(".authLinks.signupBasicHeader")
login.click()
element = driver.find_element_by_name("email")
element.send_keys("test1@gmail.com")
submit = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit.click()
element2 = driver.find_element_by_name("password")
element2.send_keys("test1")
submit2 = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit2.click()
</code></pre>
| -1 |
2016-09-16T17:14:28Z
| 39,538,694 |
<p>I think below is the syntax for a CSS selector in Selenium's Java bindings; the Python equivalent is <code>driver.find_element_by_css_selector(...)</code>:</p>
<pre><code>driver.findElement(By.cssSelector("ui-text-input[name = 'email']"))
</code></pre>
| 0 |
2016-09-16T19:15:50Z
|
[
"python",
"python-3.x",
"selenium"
] |
Create an automated script that login in into netflix
| 39,536,901 |
<p>I'd like to make an automated script in Python using Selenium that logs in here: <a href="https://www.netflix.com/Login" rel="nofollow">https://www.netflix.com/Login</a>.
I've tried with this code:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("ui-text-input[name = 'password']")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>But it raises an error in this line of code:</p>
<pre><code>Traceback (most recent call last):
File "netflixlogin.py", line 6, in <module>
element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 437, in find_element_by_css_selector
return self.find_element(by=By.CSS_SELECTOR, value=css_selector)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 752, in find_element
'value': value})['value']
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/home/user/.local/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"css selector","selector":"ui-text-input[name = 'email']"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///tmp/tmpk_odq6d0/extensions/fxdriver@googlecode.com/components/command-processor.js:12608)
</code></pre>
<p>So there's a problem in this line of code:</p>
<pre><code>element = driver.find_element_by_css_selector("ui-text-input[name = 'email']")
</code></pre>
<p>Maybe wrong syntax? I started using Selenium recently so I am still not very experienced with it.</p>
<p>UPDATED SCRIPT with no errors, but it still doesn't work:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/Login")
while True:
element = driver.find_element_by_css_selector("form.login-form input[name=email]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element)
element2 = driver.find_element_by_css_selector("form.login-form input[name=password]")
driver.execute_script("arguments[0].setAttribute('value','test1')", element2)
driver.refresh()
</code></pre>
<p>SOLVED. WORKING SCRIPT:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/it/")
login = driver.find_element_by_css_selector(".authLinks.signupBasicHeader")
login.click()
element = driver.find_element_by_name("email")
element.send_keys("test1@gmail.com")
submit = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit.click()
element2 = driver.find_element_by_name("password")
element2.send_keys("test1")
submit2 = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit2.click()
</code></pre>
| -1 |
2016-09-16T17:14:28Z
| 39,539,387 |
<p>Solved.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.netflix.com/it/")
login = driver.find_element_by_css_selector(".authLinks.signupBasicHeader")
login.click()
element = driver.find_element_by_name("email")
element.send_keys("test1@gmail.com")
submit = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit.click()
element2 = driver.find_element_by_name("password")
element2.send_keys("test1")
submit2 = driver.find_element_by_css_selector(".btn.login-button.btn-submit.btn-small")
submit2.click()
</code></pre>
| -1 |
2016-09-16T20:10:31Z
|
[
"python",
"python-3.x",
"selenium"
] |
Risks of keeping file open forever
| 39,536,902 |
<p>I am writing an application that requires reading the next line from a 1GB file exactly every 5 minutes; when the end is reached it should start from the top.</p>
<p>I had 2 solutions in mind but I'm unsure which one is the best</p>
<p><strong>Solution 1</strong></p>
<pre><code>class I:
def __init__(self):
self.count = 0
def lineFromFile(self) -> str:
with open('file.txt') as file:
for i in range(self.count):
file.readline()
line = file.readline()
if not line:
file.seek(0)
self.count = 0
line = file.readline()
self.count += 1
return line
</code></pre>
<p><strong>Solution 2</strong></p>
<pre><code>class I:
def __init__(self):
self.file = open('file.txt')
def lineFromFile(self) -> str:
line = self.file.readline()
if not line:
self.file.seek(0)
line = self.file.readline()
return line
</code></pre>
| 1 |
2016-09-16T17:14:30Z
| 39,537,037 |
<p>Use Solution 1, but don't read line-by-line every time you open the file. Save the last offset read, and <code>seek</code> there directly. Also, you only want to call <code>file.readline()</code> a second time if the first call returned the empty string.</p>
<pre><code>class I:
    def __init__(self):
        self.count = 0
        self.offset = 0

    def lineFromFile(self) -> str:
        with open('file.txt') as file:
            file.seek(self.offset)
            line = file.readline()
            if not line:            # reached EOF: wrap around to the top
                file.seek(0)
                self.count = 0
                line = file.readline()
            self.count += 1
            self.offset = file.tell()
            return line
</code></pre>
| 2 |
2016-09-16T17:22:44Z
|
[
"python",
"memory",
"io"
] |
Risks of keeping file open forever
| 39,536,902 |
<p>I am writing an application that requires reading the next line from a 1GB file exactly every 5 minutes; when the end is reached it should start from the top.</p>
<p>I had 2 solutions in mind but I'm unsure which one is the best</p>
<p><strong>Solution 1</strong></p>
<pre><code>class I:
def __init__(self):
self.count = 0
def lineFromFile(self) -> str:
with open('file.txt') as file:
for i in range(self.count):
file.readline()
line = file.readline()
if not line:
file.seek(0)
self.count = 0
line = file.readline()
self.count += 1
return line
</code></pre>
<p><strong>Solution 2</strong></p>
<pre><code>class I:
def __init__(self):
self.file = open('file.txt')
def lineFromFile(self) -> str:
line = self.file.readline()
if not line:
self.file.seek(0)
line = self.file.readline()
return line
</code></pre>
| 1 |
2016-09-16T17:14:30Z
| 39,537,041 |
<p>Generally, the biggest risk of <em>lazily</em> reading from a file is another process writing to the file while you're reading from it.</p>
<p>Do the contents of the file change? Is the file massive? If not, just read the whole file at startup.</p>
<p>Does the file change a lot? Are lots of other processes writing to it? Can other processes delete lines? If that's the case, you should probably just store your <code>seek</code>/line number position and then reopen and close the file every 5 minutes, check if you're at the end of file and keep reading. In this case you should also use some type of <em>lock file</em> or other synchronization mechanism to prevent multiple processes from trying to read and write from the same file at the same time.</p>
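<p>A sketch of that reopen-each-time approach, keeping only the byte offset between calls (the function and variable names here are made up):</p>

```python
def next_line(path, offset):
    # Reopen the file on every call, resume at `offset`, and wrap to the
    # start when EOF is reached. Returns (line, new_offset).
    with open(path) as f:
        f.seek(offset)
        line = f.readline()
        if not line:          # empty string means EOF: start over
            f.seek(0)
            line = f.readline()
        return line, f.tell()
```

<p>Because the file is closed between calls, other writers are only racing you for the brief read, and the stored offset survives even if your process restarts (provided you persist it).</p>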
| 2 |
2016-09-16T17:22:53Z
|
[
"python",
"memory",
"io"
] |
Filtering a pandas dataframe based on a match to partial strings
| 39,536,940 |
<p>I have a pandas dataframe that contains strings of varying length and characters.</p>
<p>For example:</p>
<pre><code>print df['name'][0]
print df['name'][1]
print df['name'][2]
print df['name'][3]
</code></pre>
<p>would return something like this:</p>
<pre><code>UserId : Z5QF1X33A
loginId : test.user
UserId : 0000012348; searchText : Cap
accountSampleToExclude : 0; accountSampleName : Sample Text; UserId : Z5QF1X33A; accountSampleType : Test; accountSample : Test
</code></pre>
<p>What I want to do is be able to parse through the column and only return the actual relevant id so based on the above example:</p>
<pre><code>Z5QF1X33A
test.user
0000012348
Z5QF1X33A
</code></pre>
<p>I figured regex would be an easy approach to solving this, but so far I've only been able to come up with a hardcoded pseudo-solution that covers only partial cases:</p>
<pre><code> df['name'] = df['name'].str.strip(r'(?<=\UserId :).*')
df['name'] = df['name'].str.strip(r'(?<=\loginId :).*')
</code></pre>
<p>That would work for the rows that are similar to </p>
<pre><code>df['name'][0]
df['name'][1]
</code></pre>
<p>but wouldn't work for the other cases. Any help would be much appreciated, I realize that one could solve it without regex, maybe just with the str.split() method, but unsure of how to proceed in a pythonic and/or pandas way. </p>
| 1 |
2016-09-16T17:16:52Z
| 39,537,090 |
<p>try this:</p>
<pre><code>In [31]: df.name.str.extract(r'\b(?:UserId|loginId)\s*:\s*\b([^\s]+)\b', expand=True)
Out[31]:
0
0 Z5QF1X33A
1 test.user
2 0000012348
3 Z5QF1X33A
</code></pre>
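<p>A self-contained version with the sample rows from the question; the character class here also excludes <code>;</code>, which is an assumption about what can follow an id:</p>

```python
import pandas as pd

df = pd.DataFrame({'name': [
    'UserId : Z5QF1X33A',
    'loginId : test.user',
    'UserId : 0000012348; searchText : Cap',
    'accountSampleToExclude : 0; accountSampleName : Sample Text; '
    'UserId : Z5QF1X33A; accountSampleType : Test; accountSample : Test',
]})

# Take whatever follows "UserId :" or "loginId :", stopping at
# whitespace or the ';' field separator.
ids = df['name'].str.extract(r'(?:UserId|loginId)\s*:\s*([^\s;]+)', expand=False)
```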
| 0 |
2016-09-16T17:26:05Z
|
[
"python",
"regex",
"string",
"pandas",
"split"
] |
gevent monkey patching order
| 39,537,004 |
<p>At work we're using gevent to create some asynchronous servers, and there's some debate about when to perform monkey patching in relation to other modules. The gevent documentation shows things like this:</p>
<pre><code>from gevent import monkey
monkey.patch_socket()
import socket
</code></pre>
<p>Where the monkey patching happens before the library modules have been imported. </p>
<p>However, my manager feels the order of monkey patching should be this:</p>
<pre><code>import socket
from gevent import monkey
monkey.patch_socket()
</code></pre>
<p>Where monkey patching is called after the library module is imported. Which makes it look like monkey patching sees the socket module has been imported, and patches it at that point.</p>
<p>I've found some discussions that say do it one way, and others that say to do it the other. My own simple testing seems to say it doesn't matter. Does anyone have an opinion on this, with some clear reasons why, or references that would say why?</p>
<p>Thanks in advance!!
Doug</p>
| 1 |
2016-09-16T17:20:43Z
| 39,537,693 |
<p>Well, according to the source code (see below), <code>patch_socket</code> calls <code>patch_module</code>, which imports the <code>socket</code> module for you.</p>
<pre><code>def patch_module(name, items=None):
gevent_module = getattr(__import__('gevent.' + name), name)
module_name = getattr(gevent_module, '__target__', name)
module = __import__(module_name)
if items is None:
items = getattr(gevent_module, '__implements__', None)
if items is None:
raise AttributeError('%r does not have __implements__' % gevent_module)
for attr in items:
patch_item(module, attr, getattr(gevent_module, attr))
return module
</code></pre>
<p>See that in <code>gevent</code> repository on GitHub.</p>
<p>So, you don't need to import socket at all (unless you use it of course). </p>
| 0 |
2016-09-16T18:09:47Z
|
[
"python",
"gevent",
"monkeypatching"
] |
gevent monkey patching order
| 39,537,004 |
<p>At work we're using gevent to create some asynchronous servers, and there's some debate about when to perform monkey patching in relation to other modules. The gevent documentation shows things like this:</p>
<pre><code>from gevent import monkey
monkey.patch_socket()
import socket
</code></pre>
<p>Where the monkey patching happens before the library modules have been imported. </p>
<p>However, my manager feels the order of monkey patching should be this:</p>
<pre><code>import socket
from gevent import monkey
monkey.patch_socket()
</code></pre>
<p>Where monkey patching is called after the library module is imported. Which makes it look like monkey patching sees the socket module has been imported, and patches it at that point.</p>
<p>I've found some discussions that say do it one way, and others that say to do it the other. My own simple testing seems to say it doesn't matter. Does anyone have an opinion on this, with some clear reasons why, or references that would say why?</p>
<p>Thanks in advance!!
Doug</p>
| 1 |
2016-09-16T17:20:43Z
| 39,552,387 |
<p>As the current maintainer of gevent, I will point to <a href="http://www.gevent.org/intro.html#beyond-sockets" rel="nofollow">the documentation</a> which specifically says (<a href="http://www.gevent.org/gevent.monkey.html#patching" rel="nofollow">multiple times</a>) that the recommended way to monkey-patch is to do it <em>as early as possible</em>, and preferably <em>before</em> any other imports. </p>
<p>Now, with most standard library modules you can get away with monkey-patching after they're imported. But third-party libraries are not necessarily safe that way. In general, it's just safer and reduces trouble to monkey-patch ASAP.</p>
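<p>A tiny demonstration of why this matters: patching mutates the module object, which plain attribute access always sees, but a <code>from module import name</code> executed before patching keeps a reference to the unpatched object. This uses a fake module rather than gevent or socket:</p>

```python
import sys
import types

mod = types.ModuleType("demo")
mod.f = lambda: "original"
sys.modules["demo"] = mod        # register the fake module

import demo                      # resolves to the same object via sys.modules
from demo import f as f_bound    # binds the *current* function object

demo.f = lambda: "patched"       # simulate monkey-patching after import

assert demo.f() == "patched"     # attribute access sees the patch
assert f_bound() == "original"   # the early `from` import does not
```

<p>Third-party libraries routinely do <code>from socket import socket</code> at import time, which is exactly the second case, so patching before any other imports is the safe default.</p>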
| 1 |
2016-09-17T22:40:29Z
|
[
"python",
"gevent",
"monkeypatching"
] |
Generate random strings of fixed length from given characters with equal occurrence
| 39,537,009 |
<p>I want to generate random strings from a list of characters C (e.g. C = ['A', 'B', 'C', 'D']). This random string shall have length N (e.g. N = 32). Every character should occur equally often - in that example 8 times.</p>
<p>How can I implement that each character occurs equally often in here:</p>
<pre><code>''.join(random.choice(C) for i in range(N))
</code></pre>
<p>Or is there a better way?</p>
| 0 |
2016-09-16T17:21:03Z
| 39,537,068 |
<p>I don't think you can guarantee that each item is picked with the same frequency if you use <code>random.choice</code>. Each choice is equally likely, which isn't the same thing.</p>
<p>The best way to do this would be to maintain a list of characters and shuffle it...</p>
<pre><code>characters = C * 8
random.shuffle(characters)
print(''.join(characters))
</code></pre>
<p>Or, if you want a bunch of them:</p>
<pre><code>def get_random_strings(characters, count, N):
"""Yield `N` strings that contain each character in `characters` `count` times."""
characters = list(characters) * count
for _ in xrange(N):
random.shuffle(characters)
yield ''.join(characters)
</code></pre>
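<p>Generalized for any alphabet and length (a sketch; it assumes the length is an exact multiple of the number of characters):</p>

```python
import random

def random_balanced_string(chars, length):
    # Build a pool containing each character length // len(chars) times,
    # then shuffle it, so every character occurs equally often.
    if length % len(chars):
        raise ValueError("length must be a multiple of len(chars)")
    pool = list(chars) * (length // len(chars))
    random.shuffle(pool)
    return ''.join(pool)

s = random_balanced_string(['A', 'B', 'C', 'D'], 32)
```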
| 1 |
2016-09-16T17:25:08Z
|
[
"python",
"random",
"fixed"
] |
Setting up React in a large python project without Node
| 39,537,015 |
<p>I am trying to implement the front-end of a very large Python project with React.
It seems that most of the tutorials ask that we use Node to access the packages; is there any way to get by without it? </p>
<p>Initially I thought I could use it similarly to bootstrap or jquery where I just download the files or use the CDN and tag them in the HTML file, but it is not working.</p>
<p>Where do I go from here? Is there an easy way for me to install React?</p>
<p>Thanks!</p>
<p>Edit: I should probably add the code of what I am currently doing. I have tried to access the files which are on react's website, but nothing seems to be working, and from what I read in other questions and tutorials, they always ask to install via npm to make it all work, or so it seems...</p>
<pre><code> <div id='app'></div>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.1.0/jquery.js"></script>
<script type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<script src="https://unpkg.com/react@15.3.1/dist/react.js"></script>
<script src="https://unpkg.com/react-dom@15.3.1/dist/react-dom.js"></script>
<script type="text/babel" src="//cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
<script type="text/babel">
var React = require ('react');
var ReactDOM = require ('react-dom');
var Test = React.createClass({
render: function(){
return(<h1>it is working! </h1>);
}
});
ReactDOM.render(<Test />, document.getElementById('app'));
</code></pre>
<p> </p>
| 0 |
2016-09-16T17:21:13Z
| 39,537,661 |
<p>You can certainly use React with your own flavor of Python framework (Tornado, Flask, Django, etc.). In the final deploy, you don't have to have any Node dependencies. I've run Tornado with React and just used NPM and webpack locally to manage package dependencies and transpile.</p>
| 1 |
2016-09-16T18:07:37Z
|
[
"python",
"reactjs",
"react-router"
] |
Request for help numpy array syntax
| 39,537,032 |
<p>I am using a template script to learn data analysis with numpy and I don't understand this syntax. There are two arrays, <code>dist_data</code> and <code>dataArray</code>, and <code>l</code> is a loop dummy variable (as in <code>for l in range(0,k):</code>). I don't understand the expression, specifically the purpose of the separation by <code>,</code> in the second bracket <code>[l, self.dataArray.shape[1]-1]</code>, because I assumed it represented a column of <code>dist_data</code>:</p>
<p><code>dist_data[dist_data[:,-1].argsort()][l, self.dataArray.shape[1]-1]</code></p>
| -1 |
2016-09-16T17:22:32Z
| 39,537,467 |
<pre><code>dist_data[dist_data[:,-1].argsort()][l, self.dataArray.shape[1]-1]
</code></pre>
<p><code>dist_data[:,-1]</code> last column of 2d <code>dist_data</code>. Sort on that and get the indices</p>
<p>So <code>dist_data[dist_data[:,-1].argsort()]</code> is <code>dist_data</code> sorted on the last column.</p>
<p><code>[l, self.dataArray.shape[1]-1]</code> is just an indexing on a 2d array; the <code>l</code> row, and the <code>self...</code> column. It looks like the column that corresponds to the last of <code>self.dataArray</code>.</p>
<p>So in sum - sort <code>dist_data</code> on the last column, and pick the <code>l'th</code> row, and some column.</p>
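<p>A tiny runnable illustration of the same indexing, with made-up data:</p>

```python
import numpy as np

dist_data = np.array([[10.0, 0.3],
                      [20.0, 0.1],
                      [30.0, 0.2]])
l = 0

# Sort the rows of dist_data by the last column:
sorted_rows = dist_data[dist_data[:, -1].argsort()]
# [l, ncols-1] then picks row l of the sorted array, last column:
value = sorted_rows[l, dist_data.shape[1] - 1]
print(value)   # 0.1 -- the smallest entry of the last column
```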
| 1 |
2016-09-16T17:52:56Z
|
[
"python",
"arrays",
"numpy"
] |
Sales commission program using while loop. Value not updating
| 39,537,144 |
<p>My while loop isn't updating the total in my sales commission program every time it runs. Here is my program:</p>
<pre><code> #this program calculates sales commissions and sums the total of all sales commissions the user has entered
print("Welcom to the program sales commission loop")
keep_going='y'
while keep_going=='y':
#get a salespersons sales and commission rate
sales=float(input('Enter the amount of sales'))
comm_rate=float(input('Enter commission rate'))
total=0
#calculate the commission
commission=sales*comm_rate
print("commission is",commission)
keep_going=input('Enter y for yes')
total=total+commission
print("Total is",total)
print("You have exited the program. Thet total is",total)
</code></pre>
<p>Here is the output of the program: Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.</p>
<pre><code>Welcom to the program sales commission loop
Enter the amount of sales899
Enter commission rate.09
commission is 80.91
Enter y for yesy
Total is 80.91
Enter the amount of sales933
Enter commission rate.04
commission is 37.32
Enter y for yesy
Total is 37.32
Enter the amount of sales9909
Enter commission rate.10
commission is 990.9000000000001
Enter y for yesn
Total is 990.9000000000001
You have exited the program. Thet total is 990.9000000000001
>>>
</code></pre>
<p>What am I doing wrong? I cannot figure it out </p>
| 0 |
2016-09-16T17:29:11Z
| 39,537,174 |
<p>Every time you loop you are setting total to zero. Move your initialization of total to outside of the loop as I show below.</p>
<pre><code>#this program calculates sales commissions and sums the total of all sales commissions the user has entered
print("Welcom to the program sales commission loop")
keep_going='y'
total=0
while keep_going=='y':
#get a salespersons sales and commission rate
sales=float(input('Enter the amount of sales'))
comm_rate=float(input('Enter commission rate'))
#calculate the commission
commission=sales*comm_rate
print("commission is",commission)
keep_going=input('Enter y for yes')
total=total+commission
print("Total is",total)
print("You have exited the program. Thet total is",total)
</code></pre>
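<p>The underlying pattern is the accumulator: initialize once, update inside the loop. A minimal function version of the same calculation (a hypothetical helper, not part of the original program):</p>

```python
def total_commission(entries):
    """Sum sales * rate over an iterable of (sales, rate) pairs."""
    total = 0.0                    # initialized once, outside the loop
    for sales, rate in entries:
        total += sales * rate      # updated on every iteration
    return total

print(total_commission([(899, 0.09), (933, 0.04), (9909, 0.10)]))
```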
| 2 |
2016-09-16T17:31:53Z
|
[
"python",
"while-loop",
"updating"
] |
Sales commission program using while loop. Value not updating
| 39,537,144 |
<p>My while loop isn't updating the total in my sales commission program every time it runs. Here is my program:</p>
<pre><code> #this program calculates sales commissions and sums the total of all sales commissions the user has entered
print("Welcom to the program sales commission loop")
keep_going='y'
while keep_going=='y':
#get a salespersons sales and commission rate
sales=float(input('Enter the amount of sales'))
comm_rate=float(input('Enter commission rate'))
total=0
#calculate the commission
commission=sales*comm_rate
print("commission is",commission)
keep_going=input('Enter y for yes')
total=total+commission
print("Total is",total)
print("You have exited the program. Thet total is",total)
</code></pre>
<p>Here is the output of the program: Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.</p>
<pre><code>Welcom to the program sales commission loop
Enter the amount of sales899
Enter commission rate.09
commission is 80.91
Enter y for yesy
Total is 80.91
Enter the amount of sales933
Enter commission rate.04
commission is 37.32
Enter y for yesy
Total is 37.32
Enter the amount of sales9909
Enter commission rate.10
commission is 990.9000000000001
Enter y for yesn
Total is 990.9000000000001
You have exited the program. Thet total is 990.9000000000001
>>>
</code></pre>
<p>What am I doing wrong? I cannot figure it out </p>
| 0 |
2016-09-16T17:29:11Z
| 39,537,373 |
<p>The problem is that you are reinitializing <code>total</code> every time the loop repeats. You must initialize it once, outside the while loop. The corrected code would be: </p>
<pre><code>#this program calculates sales commissions and sums the total of all sales commissions the user has entered
print("Welcome to the program sales commission loop")
keep_going='y'
total=0
while keep_going=='y':
#get a salespersons sales and commission rate
sales=float(input( 'Enter the amount of sales ' ))
comm_rate=float(input( 'Enter commission rate ' ))
#calculate the commission
comission= sales * comm_rate
print( "commission is {}".format(comission) )
keep_going=input('Enter y for yes: ')
total += comission
print( "Total is {}".format(total) )
print("You have exited the program. Thet total is",total)
</code></pre>
| 0 |
2016-09-16T17:46:04Z
|
[
"python",
"while-loop",
"updating"
] |
python maintain two different random instances
| 39,537,148 |
<p>I am trying to do some analysis and for 'reasons' I want objects in my program to each have their own seeds, but no global seed. Can I accomplish something like this? </p>
<pre><code>a = random.seed(seed1)
b = random.seed(seed2)
for a in range(5) :
print a.random() b.random()
</code></pre>
<p>The expected output would be </p>
<pre><code>0.23 0.23
0.45 0.45
0.56 0.56
0.34 0.34
</code></pre>
<p>etc...
Obviously a super contrived example -- These separate seed will be buried in objects and correspond to specific things. But first step is getting something like this to work.</p>
<p>How can I have Python maintain multiple seeded random generators? </p>
| -2 |
2016-09-16T17:29:27Z
| 39,537,246 |
<p>I hope you are just looking for random numbers as mentioned in above output:</p>
<p>Here is some code. Please check, if can be helpful for you:</p>
<pre><code>>>> for i in range(5):
print(random.randint(1,100), random.randint(1,100))
</code></pre>
<p>Output will be as:</p>
<pre><code>14 93
51 62
20 12
9 3
52 71
</code></pre>
| -1 |
2016-09-16T17:36:37Z
|
[
"python"
] |
python maintain two different random instances
| 39,537,148 |
<p>I am trying to do some analysis and for 'reasons' I want objects in my program to each have their own seeds, but no global seed. Can I accomplish something like this? </p>
<pre><code>a = random.seed(seed1)
b = random.seed(seed2)
for a in range(5) :
print a.random() b.random()
</code></pre>
<p>The expected output would be </p>
<pre><code>0.23 0.23
0.45 0.45
0.56 0.56
0.34 0.34
</code></pre>
<p>etc...
Obviously a super contrived example -- These separate seed will be buried in objects and correspond to specific things. But first step is getting something like this to work.</p>
<p>How can I have Python maintain multiple seeded random generators? </p>
| -2 |
2016-09-16T17:29:27Z
| 39,537,309 |
<p>You need to use a <code>random.Random</code> class object.</p>
<pre><code>from random import Random
a = Random()
b = Random()
a.seed(0)
b.seed(0)
for _ in range(5):
print(a.randrange(10), b.randrange(10))
# Output:
# 6 6
# 6 6
# 0 0
# 4 4
# 8 8
</code></pre>
<p>The <a href="https://docs.python.org/2/library/random.html" rel="nofollow">documentation</a> states explicitly about your problem:</p>
<blockquote>
<p>The functions supplied by this module are actually bound methods of a
hidden instance of the <code>random.Random</code> class. You can instantiate your
own instances of <code>Random</code> to get generators that don't share state.</p>
</blockquote>
| 2 |
2016-09-16T17:41:01Z
|
[
"python"
] |
python maintain two different random instances
| 39,537,148 |
<p>I am trying to do some analysis and for 'reasons' I want objects in my program to each have their own seeds, but no global seed. Can I accomplish something like this? </p>
<pre><code>a = random.seed(seed1)
b = random.seed(seed2)
for a in range(5) :
print a.random() b.random()
</code></pre>
<p>The expected output would be </p>
<pre><code>0.23 0.23
0.45 0.45
0.56 0.56
0.34 0.34
</code></pre>
<p>etc...
Obviously a super contrived example -- These separate seed will be buried in objects and correspond to specific things. But first step is getting something like this to work.</p>
<p>How can I have Python maintain multiple seeded random generators? </p>
| -2 |
2016-09-16T17:29:27Z
| 39,537,375 |
<p>Then, this may help you:</p>
<pre><code>>>> for _ in range(5):
rn=random.randint(1,100)
print(rn, rn)
</code></pre>
<p>Output is:</p>
<pre><code>38 38
98 98
8 8
29 29
67 67
</code></pre>
| 0 |
2016-09-16T17:46:09Z
|
[
"python"
] |
Read Strings in Python
| 39,537,311 |
<p>I have some values in a string </p>
<pre><code>[AD6:0.02] [AD7:0.03] [AD8:0.19][AD3:6][AD0:22][AD1:22][AD4:48.00][AD5:0.01] [AD6:0.03]
</code></pre>
<p>I just want to read the values of each 'AD' like 0.02 in AD6 for example. The string changes each time, so I cannot use 'substring'.</p>
<p>Here is my code</p>
<pre><code>while True:
data = ser.read(9999)
for x in data:
if ((x==':') & (x+1=='0')):
print 'Achou'
</code></pre>
<p>Does someone have an idea how I can extract the value of each AD and put it in a variable? (This is in a loop.)</p>
| -1 |
2016-09-16T17:41:02Z
| 39,537,350 |
<p>Use <code>re.findall()</code> with a proper regex:</p>
<pre><code>In [80]: s = "[AD6:0.02] [AD7:0.03] [AD8:0.19][AD3:6][AD0:22][AD1:22][AD4:48.00][AD5:0.01] [AD6:0.03]"
In [81]: regex = re.compile(r'AD\d:([\d.]+)')
In [82]: regex.findall(s)
Out[82]: ['0.02', '0.03', '0.19', '6', '22', '22', '48.00', '0.01', '0.03']
</code></pre>
<p>If you want to convert the values to float you better to use <code>finditer()</code> that returns an iterator-like object and convert the matched groups to float using <code>float()</code> within a list comprehension:</p>
<pre><code>In [85]: [float(x.group(1)) for x in regex.finditer(s)]
Out[85]: [0.02, 0.03, 0.19, 6.0, 22.0, 22.0, 48.0, 0.01, 0.03]
</code></pre>
<p>But note that this might raise a <code>ValueError</code> if your number are not valid digits. In that case you better to use a regular loop and handle the exceptions with a <code>try-except</code> expression. </p>
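<p>A sketch of that loop-with-exception-handling variant, keeping the channel names alongside the values:</p>

```python
import re

s = "[AD6:0.02] [AD7:0.03] [AD8:0.19]"
regex = re.compile(r'(AD\d):([\d.]+)')

values = {}
for m in regex.finditer(s):
    name, raw = m.groups()
    try:
        values[name] = float(raw)
    except ValueError:
        pass   # skip a malformed number instead of aborting the whole read
print(values)   # {'AD6': 0.02, 'AD7': 0.03, 'AD8': 0.19}
```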
| 3 |
2016-09-16T17:43:58Z
|
[
"python",
"string"
] |
Redirect back to referrer page
| 39,537,329 |
<p>I am building an account settings page. I was thinking of having a few routes that only accept POST requests, edit the records, and then go back to the account settings page.</p>
<p>The problem is that there are two account settings pages: one for users and one for an admin account.</p>
<p>The admin account settings can use the same logic from the user account settings routes, but if I POST to the user/account-settings route it returns back to the user/account-settings page instead of the admin/user-account-settings one. </p>
<p>I was wondering how Flask can return back to the page it was on.</p>
| -2 |
2016-09-16T17:42:45Z
| 39,537,400 |
<p>People usually solve this problem with session cookies (which you should have access to given that the user will be logged into an admin panel).</p>
<p>This is of course safer than using <code>HTTP_REFERER</code> (a header sent by the client), as you control the contents of the session cookie entirely.</p>
<p>You could also pass a <code>?continue=http://...</code> thing in the URL.</p>
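<p>The <code>?continue=...</code> idea can be sketched framework-agnostically with the standard library (the URLs below are made up):</p>

```python
from urllib.parse import urlencode, parse_qs, urlparse

# When rendering the form, embed the page to return to:
referrer = "/admin/user-account-settings"
action_url = "/account/update?" + urlencode({"continue": referrer})

# In the POST handler, read it back and fall back to a default:
query = parse_qs(urlparse(action_url).query)
target = query.get("continue", ["/account-settings"])[0]
print(target)   # /admin/user-account-settings
```

In a real app the target should be validated (e.g. against a whitelist of your own paths) before redirecting, to avoid open-redirect vulnerabilities.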
| 1 |
2016-09-16T17:47:31Z
|
[
"python",
"flask"
] |
Redirect back to referrer page
| 39,537,329 |
<p>I am building an account settings page. I was thinking of having a few routes that only accept POST requests, edit the records, and then go back to the account settings page.</p>
<p>The problem is that there are two account settings pages: one for users and one for an admin account.</p>
<p>The admin account settings can use the same logic from the user account settings routes, but if I POST to the user/account-settings route it returns back to the user/account-settings page instead of the admin/user-account-settings one. </p>
<p>I was wondering how Flask can return back to the page it was on.</p>
| -2 |
2016-09-16T17:42:45Z
| 39,552,793 |
<p><code>request.referrer</code> will return the previous page. <a href="http://flask.pocoo.org/docs/0.11/reqcontext/" rel="nofollow">http://flask.pocoo.org/docs/0.11/reqcontext/</a></p>
| -1 |
2016-09-17T23:48:36Z
|
[
"python",
"flask"
] |
Kivy position of a GridLayout's children always returns (0,0)
| 39,537,411 |
<p>When I add an element to my GridLayout and want to get the element's position, Kivy always returns (0,0), but that is not true, because the elements are correctly positioned in my window.</p>
<pre><code>class ImageButton(ButtonBehavior, Label):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.text = 'hello'
def add_sources(self, sources):
print(self.pos) #(0,0), but is not (0,0)
self.add_widget(Label(text='foo', pos=self.pos))
</code></pre>
<p>And this is my main class.</p>
<pre><code>class MyClass(Widget):
my_layout = ObjectProperty(GridLayout())
def __init__(self):
super().__init__()
self.load_layout()
def load_map(self):
self.my_layout.rows = 2
self.my_layout.cols = 2
self.draw_ui()
def draw_ui(self):
a = ImageButton()
b = ImageButton()
c = ImageButton()
d = ImageButton()
self.my_layout.add_widget(a)
self.my_layout.add_widget(b)
self.my_layout.add_widget(c)
self.my_layout.add_widget(d)
a.add_sources(0)
b.add_sources(1)
c.add_sources(0)
d.add_sources(1)
</code></pre>
<p>Why does getting the widget's position return (0,0)? What am I doing wrong?</p>
<p>This is what I get:</p>
<p><a href="http://i.stack.imgur.com/mXFOa.png" rel="nofollow"><img src="http://i.stack.imgur.com/mXFOa.png" alt="I'm getting this"></a></p>
<p>But I want the "foo" string in front of each "hello" string.</p>
<p>How can I do this?</p>
| 0 |
2016-09-16T17:48:19Z
| 39,544,071 |
<p>The pos is equal to [0,0] at the first frame of the kivy GUI loop, when you print it. It changes a bit later. You have two ways of solving this problem:</p>
<ol>
<li>Wait till the second frame, when pos is updated as expected. </li>
<li>Bind the pos instead of just assigning it once at the beginning.</li>
</ol>
<p>Solution 1) example:</p>
<pre><code>from kivy.clock import mainthread
class ImageButton(ButtonBehavior, Label):
...
@mainthread
def add_sources(self, sources):
self.add_widget(Label(text='foo', pos=self.pos))
</code></pre>
<p>Solution 2) example:</p>
<pre><code>class ImageButton(ButtonBehavior, Label):
    ...
    def add_sources(self, sources):
        label = Label(text='foo', pos=self.pos)
        # keep the child's pos in sync whenever this widget moves
        self.bind(pos=label.setter('pos'))
        self.add_widget(label)
</code></pre>
| 0 |
2016-09-17T07:25:05Z
|
[
"python",
"python-3.x",
"kivy",
"kivy-language"
] |
Module cross reference design issue
| 39,537,416 |
<p>I have a Python file called <code>testlib.py</code>; its intention is to define some utility classes and global functions used by other modules. <code>uselib.py</code> is designed as a client which uses the classes/global functions from <code>testlib.py</code>.</p>
<p>Due to some design issues, <code>testlib.py</code> needs to refer to some class <code>Goo</code> defined in <code>uselib.py</code>. If I just import directly, there will be an error message (posted below). </p>
<p>Just wondering how to handle this cross-reference situation elegantly in Python 2.7. </p>
<p><strong>uselib.py</strong>,</p>
<pre><code>import testlib
class Goo:
def __init__(self):
pass
def checkValue(self):
return "check value in Goo"
print testlib.globalFoo()
f = testlib.Foo()
print f.getValue()
</code></pre>
<p><strong>testlib.py</strong>,</p>
<pre><code>import uselib
class Foo:
def __init__(self):
pass
def getValue(self):
g = uselib.Goo()
g.checkValue()
return 'get value in class Foo'
def globalFoo():
return 'in global foo'
</code></pre>
<p><strong>Error Message</strong>,</p>
<pre><code>Traceback (most recent call last):
File "/Users/foo/personal/uselib.py", line 1, in <module>
import testlib
File "/Users/foo/personal/testlib.py", line 1, in <module>
import uselib
File "/Users/foo/personal/uselib.py", line 9, in <module>
print testlib.globalFoo()
AttributeError: 'module' object has no attribute 'globalFoo'
</code></pre>
| 0 |
2016-09-16T17:48:29Z
| 39,537,577 |
<p>I came up with a sneaky hack: only <code>import testlib</code> when you are already inside the <code>__main__</code> block of <code>uselib.py</code>. The check <code>if __name__ == "__main__"</code> in <code>uselib.py</code> is important in this case. That way, you avoid the circular import. <code>testlib.py</code> still sees all the classes in <code>uselib.py</code>, but <code>uselib.py</code> only loads <code>testlib.py</code> when it actually needs to call it.</p>
<p>Code for <strong>uselib.py</strong>:</p>
<pre><code>#import testlib
class Goo:
def __init__(self):
pass
def checkValue(self):
return "check value in Goo"
if __name__ == "__main__":
import testlib
print testlib.globalFoo()
f = testlib.Foo()
print f.getValue()
</code></pre>
<p>Code for <strong>testlib.py</strong>:</p>
<pre><code>import uselib
class Foo:
def __init__(self):
pass
def getValue(self):
g = uselib.Goo()
g.checkValue()
return 'get value in class Foo'
def globalFoo():
return 'in global foo'
</code></pre>
<p>Output:</p>
<pre><code>Chip chip@ 04:00:00@ ~: python uselib.py
in global foo
get value in class Foo
</code></pre>
<p>Note that <code>import testlib</code> could also be called in any arbitrary function in <code>uselib.py</code>; it does not need to be in <code>__main__</code>. E.g.:</p>
<p>Code for another <strong>uselib.py</strong>:</p>
<pre><code>#import testlib
class Goo:
def __init__(self):
pass
def checkValue(self):
return "check value in Goo"
def moretest():
import testlib
print testlib.globalFoo()
f = testlib.Foo()
print f.getValue()
#if __name__ == "__main__":
#import testlib
#print testlib.globalFoo()
#f = testlib.Foo()
#print f.getValue()
</code></pre>
<p>Code for <strong>stackoverflow.py</strong>:</p>
<pre><code>import uselib
uselib.moretest()
</code></pre>
<p>Calling <strong>stackoverflow.py</strong>:</p>
<pre><code>Chip chip@ 04:30:06@ ~: python stackoverflow.py
in global foo
get value in class Foo
</code></pre>
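<p>The deferred-import trick can even be demonstrated in a single file by building the two modules programmatically (a contrived sketch, not how you would normally structure real code):</p>

```python
import sys
import types

# "uselib": defers its import of testlib until moretest() is called.
uselib = types.ModuleType("uselib")
exec("""
class Goo:
    def checkValue(self):
        return 'check value in Goo'

def moretest():
    import testlib   # deferred import: the cycle never bites
    return testlib.globalFoo()
""", uselib.__dict__)
sys.modules['uselib'] = uselib

# "testlib": imports uselib at the top, which is safe by now.
testlib = types.ModuleType("testlib")
exec("""
import uselib

def globalFoo():
    return uselib.Goo().checkValue()
""", testlib.__dict__)
sys.modules['testlib'] = testlib

print(uselib.moretest())   # check value in Goo
```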
| 1 |
2016-09-16T18:01:01Z
|
[
"python",
"python-2.7"
] |
Counting overlapping values of two 2D binary numpy arrays for a specific value
| 39,537,439 |
<p>I start with two images of the same size. I convert them to binary black/white numpy arrays (0 = black 1 = white). I'd like to find how many of the black pixels overlap (0 value at same position in both arrays).</p>
<p>I know how to do this with for loops, but I'm trying to learn how to use numpy properly, and I imagine there's a much better way of doing this.</p>
<p>A minimal example would be as follows:</p>
<p>ArrayA:</p>
<pre><code>[ 1 1 0 ]
[ 1 0 0 ]
[ 0 1 1 ]
</code></pre>
<p>ArrayB:</p>
<pre><code>[ 1 0 0 ]
[ 1 1 0 ]
[ 0 1 1 ]
</code></pre>
<p>I want to know how many times both arrays have a '0' value in the same position.</p>
<p>In this case, once in the 1st row, 3rd column; once in the 2nd row, 3rd column; and once in the 3rd row, 1st column. Total overlap of '0' values: 3.</p>
<p>I was thinking of something along the lines of</p>
<pre><code>np.where(arrayA == 0 and arrayB == 0)
</code></pre>
<p>but that doesn't work.</p>
| 2 |
2016-09-16T17:50:42Z
| 39,537,633 |
<p>You can use a simple comparison with a logical and:</p>
<pre><code>>>> A
array([[1, 1, 0],
[1, 0, 0],
[0, 1, 1]])
>>> B
array([[1, 0, 0],
[1, 1, 0],
[0, 1, 1]])
>>> np.logical_and(A == 0, B == 0)
array([[False, False, True],
[False, False, True],
[ True, False, False]], dtype=bool)
</code></pre>
<p>And use <code>np.where()</code> and <code>column_stack()</code> in order to get the indices of the intended items:</p>
<pre><code>>>> np.column_stack(np.where(np.logical_and(A == 0, B == 0)))
array([[0, 2],
[1, 2],
[2, 0]])
</code></pre>
<p>Or as a pretty Numpythonic way as suggested in comment use <code>np.argwhere</code>:</p>
<pre><code>>>> np.argwhere(np.logical_and(A == 0, B == 0))
array([[0, 2],
[1, 2],
[2, 0]])
</code></pre>
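<p>And since the question asks for <em>how many</em> positions overlap, the boolean mask can be counted directly:</p>

```python
import numpy as np

A = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1]])
B = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1]])

# Count positions where both arrays are 0 at the same time:
count = np.count_nonzero(np.logical_and(A == 0, B == 0))
print(count)   # 3
```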
| 3 |
2016-09-16T18:05:42Z
|
[
"python",
"arrays",
"numpy",
"multidimensional-array"
] |
Counting overlapping values of two 2D binary numpy arrays for a specific value
| 39,537,439 |
<p>I start with two images of the same size. I convert them to binary black/white numpy arrays (0 = black 1 = white). I'd like to find how many of the black pixels overlap (0 value at same position in both arrays).</p>
<p>I know how to do this with for loops, but I'm trying to learn how to use numpy properly, and I imagine there's a much better way of doing this.</p>
<p>A minimal example would be as follows:</p>
<p>ArrayA:</p>
<pre><code>[ 1 1 0 ]
[ 1 0 0 ]
[ 0 1 1 ]
</code></pre>
<p>ArrayB:</p>
<pre><code>[ 1 0 0 ]
[ 1 1 0 ]
[ 0 1 1 ]
</code></pre>
<p>I want to know how many times both arrays have a '0' value in the same position.</p>
<p>In this case, once in the 1st row, 3rd column; once in the 2nd row, 3rd column; and once in the 3rd row, 1st column. Total overlap of '0' values: 3.</p>
<p>I was thinking of something along the lines of</p>
<pre><code>np.where(arrayA == 0 and arrayB == 0)
</code></pre>
<p>but that doesn't work.</p>
| 2 |
2016-09-16T17:50:42Z
| 39,576,605 |
<p>For the record, your original try just lacked the right operator (the element-wise <code>&</code> instead of <code>and</code>) and some parentheses:</p>
<pre><code>np.where( (arrayA==0) & (arrayB==0))
</code></pre>
| 2 |
2016-09-19T15:14:49Z
|
[
"python",
"arrays",
"numpy",
"multidimensional-array"
] |
How to calculate a partial Area Under the Curve (AUC)
| 39,537,443 |
<p>In scikit learn you can compute the area under the curve for a binary classifier with</p>
<pre><code>roc_auc_score( Y, clf.predict_proba(X)[:,1] )
</code></pre>
<p>I am only interested in the part of the curve where the false positive rate is less than 0.1.</p>
<blockquote>
<p>Given such a threshold false positive rate, how can I compute the AUC
only for the part of the curve up the threshold?</p>
</blockquote>
<p>Here is an example with several ROC-curves, for illustration:</p>
<p><a href="http://i.stack.imgur.com/Xg4uc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xg4uc.png" alt="Illustration of ROC-curves plot for several types of a classifier."></a></p>
<p>The scikit learn docs show how to use roc_curve</p>
<pre><code>>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([ 0. , 0.5, 0.5, 1. ])
>>> tpr
array([ 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([ 0.8 , 0.4 , 0.35, 0.1 ]
</code></pre>
<p>Is there a simple way to go from this to the partial AUC?</p>
<hr>
<p>It seems the only problem is how to compute the tpr value at fpr = 0.1 as roc_curve doesn't necessarily give you that.</p>
| 4 |
2016-09-16T17:51:09Z
| 39,538,650 |
<p>That depends on whether the FPR is the <strong>x</strong>-axis or <strong>y</strong>-axis (independent or dependent variable).</p>
<p>If it's <strong>x</strong>, the calculation is trivial: calculate only over the range [0.0, 0.1].</p>
<p>If it's <strong>y</strong>, then you first need to solve the curve for <strong>y = 0.1</strong>. This partitions the x-axis into areas you need to calculate, and those that are simple rectangles with a height of 0.1.</p>
<p>For illustration, assume that you find the function exceeding 0.1 in two ranges: [x1, x2] and [x3, x4]. Calculate the area under the curve over the ranges</p>
<pre><code>[0, x1]
[x2, x3]
[x4, ...]
</code></pre>
<p>To this, add the rectangles under y=0.1 for the two intervals you found:</p>
<pre><code>area += (x2-x1 + x4-x3) * 0.1
</code></pre>
<p>Is that what you need to move you along?</p>
| 1 |
2016-09-16T19:12:32Z
|
[
"python",
"machine-learning",
"statistics",
"scikit-learn"
] |
How to calculate a partial Area Under the Curve (AUC)
| 39,537,443 |
<p>In scikit learn you can compute the area under the curve for a binary classifier with</p>
<pre><code>roc_auc_score( Y, clf.predict_proba(X)[:,1] )
</code></pre>
<p>I am only interested in the part of the curve where the false positive rate is less than 0.1.</p>
<blockquote>
<p>Given such a threshold false positive rate, how can I compute the AUC
only for the part of the curve up the threshold?</p>
</blockquote>
<p>Here is an example with several ROC-curves, for illustration:</p>
<p><a href="http://i.stack.imgur.com/Xg4uc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xg4uc.png" alt="Illustration of ROC-curves plot for several types of a classifier."></a></p>
<p>The scikit learn docs show how to use roc_curve</p>
<pre><code>>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([ 0. , 0.5, 0.5, 1. ])
>>> tpr
array([ 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([ 0.8 , 0.4 , 0.35, 0.1 ]
</code></pre>
<p>Is there a simple way to go from this to the partial AUC?</p>
<hr>
<p>It seems the only problem is how to compute the tpr value at fpr = 0.1 as roc_curve doesn't necessarily give you that.</p>
| 4 |
2016-09-16T17:51:09Z
| 39,678,975 |
<p>Calculate your fpr and tpr values only over the range [0.0, 0.1].</p>
<p>Then, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html" rel="nofollow">numpy.trapz</a> to evaluate the partial AUC (pAUC) like so:</p>
<pre><code>pAUC = numpy.trapz(tpr_array, fpr_array)
</code></pre>
<p>This function uses the composite trapezoidal rule to evaluate the area under the curve.</p>
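<p>For example, with a made-up curve and a cut-off at FPR = 0.1 (note that the trapezoidal rule interpolates linearly between points, which is not exactly what sklearn's rectangle-based AUC does):</p>

```python
import numpy as np

fpr = np.array([0.0, 0.05, 0.1, 0.5, 1.0])   # assumed sorted, FPR on the x-axis
tpr = np.array([0.0, 0.40, 0.6, 0.9, 1.0])

mask = fpr <= 0.1
x, y = fpr[mask], tpr[mask]

# Composite trapezoidal rule, i.e. what numpy.trapz(y, x) computes:
pAUC = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2)
print(pAUC)
```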
| 1 |
2016-09-24T17:16:35Z
|
[
"python",
"machine-learning",
"statistics",
"scikit-learn"
] |
How to calculate a partial Area Under the Curve (AUC)
| 39,537,443 |
<p>In scikit learn you can compute the area under the curve for a binary classifier with</p>
<pre><code>roc_auc_score( Y, clf.predict_proba(X)[:,1] )
</code></pre>
<p>I am only interested in the part of the curve where the false positive rate is less than 0.1.</p>
<blockquote>
<p>Given such a threshold false positive rate, how can I compute the AUC
only for the part of the curve up the threshold?</p>
</blockquote>
<p>Here is an example with several ROC-curves, for illustration:</p>
<p><a href="http://i.stack.imgur.com/Xg4uc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xg4uc.png" alt="Illustration of ROC-curves plot for several types of a classifier."></a></p>
<p>The scikit learn docs show how to use roc_curve</p>
<pre><code>>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([ 0. , 0.5, 0.5, 1. ])
>>> tpr
array([ 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([ 0.8 , 0.4 , 0.35, 0.1 ]
</code></pre>
<p>Is there a simple way to go from this to the partial AUC?</p>
<hr>
<p>It seems the only problem is how to compute the tpr value at fpr = 0.1 as roc_curve doesn't necessarily give you that.</p>
| 4 |
2016-09-16T17:51:09Z
| 39,687,168 |
<p>Say we start with</p>
<pre><code>import numpy as np
from sklearn import metrics
</code></pre>
<p>Now we set the true <code>y</code> and predicted <code>scores</code>:</p>
<pre><code>y = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
</code></pre>
<p>(Note that <code>y</code> has shifted down by 1 from your problem. This is inconsequential: the exact same results (fpr, tpr, thresholds, etc.) are obtained whether predicting 1, 2 or 0, 1, but some <code>sklearn.metrics</code> functions are a drag if not using 0, 1.)</p>
<p>Let's see the AUC here:</p>
<pre><code>>>> metrics.roc_auc_score(y, scores)
0.75
</code></pre>
<p>As in your example:</p>
<pre><code>fpr, tpr, thresholds = metrics.roc_curve(y, scores)
>>> fpr, tpr
(array([ 0. , 0.5, 0.5, 1. ]), array([ 0.5, 0.5, 1. , 1. ]))
</code></pre>
<p>This gives the following plot:</p>
<pre><code>plot([0, 0.5], [0.5, 0.5], [0.5, 0.5], [0.5, 1], [0.5, 1], [1, 1]);
</code></pre>
<p><a href="http://i.stack.imgur.com/A142Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/A142Q.png" alt="enter image description here"></a></p>
<p>By construction, the ROC for a finite-length <em>y</em> will be composed of rectangles:</p>
<ul>
<li><p>For low enough threshold, everything will be classified as negative.</p></li>
<li><p>As the threshold increases continuously, at <em>discrete points</em>, some negative classifications will be changed to positive.</p></li>
</ul>
<p>So, for a finite <em>y</em>, the ROC will always be characterized by a sequence of connected horizontal and vertical lines leading from <em>(0, 0)</em> to <em>(1, 1)</em>. </p>
<p>The AUC is the sum of these rectangles. Here, as shown above, the AUC is 0.75, as the rectangles have areas 0.5 * 0.5 + 0.5 * 1 = 0.75. </p>
<p>In some cases, people choose to calculate the AUC by linear interpolation. Say the length of <em>y</em> is much larger than the actual number of points calculated for the FPR and TPR. Then, in this case, a linear interpolation is an approximation of what the points in between <em>might</em> have been. In some cases people also follow the <em>conjecture</em> that, had <em>y</em> been large enough, the points in between would be interpolated linearly. <code>sklearn.metrics</code> does not use this conjecture, and to get results consistent with <code>sklearn.metrics</code>, it is necessary to use rectangle, not trapezoidal, summation.</p>
<p>Let's write our own function to calculate the AUC directly from <code>fpr</code> and <code>tpr</code>:</p>
<pre><code>import itertools
import operator
def auc_from_fpr_tpr(fpr, tpr, trapezoid=False):
inds = [i for (i, (s, e)) in enumerate(zip(fpr[: -1], fpr[1: ])) if s != e] + [len(fpr) - 1]
fpr, tpr = fpr[inds], tpr[inds]
area = 0
    ft = list(zip(fpr, tpr))  # materialize: zip() is lazy in Python 3 and cannot be sliced
for p0, p1 in zip(ft[: -1], ft[1: ]):
area += (p1[0] - p0[0]) * ((p1[1] + p0[1]) / 2 if trapezoid else p0[1])
return area
</code></pre>
<p>This function takes the FPR and TPR, and an optional parameter stating whether to use trapezoidal summation. Running it, we get:</p>
<pre><code>>>> auc_from_fpr_tpr(fpr, tpr), auc_from_fpr_tpr(fpr, tpr, True)
(0.75, 0.875)
</code></pre>
<p>We get the same result as <code>sklearn.metrics</code> for the rectangle summation, and a different, higher, result for trapezoid summation.</p>
<p>So, now we just need to see what would happen to the FPR/TPR points if we terminated at an FPR of 0.1. We can do this with the <a href="https://docs.python.org/2/library/bisect.html" rel="nofollow"><code>bisect</code> module</a>.</p>
<pre><code>import bisect
def get_fpr_tpr_for_thresh(fpr, tpr, thresh):
p = bisect.bisect_left(fpr, thresh)
fpr = fpr.copy()
fpr[p] = thresh
return fpr[: p + 1], tpr[: p + 1]
</code></pre>
<p>How does this work? It simply finds the insertion point of <code>thresh</code> in <code>fpr</code>. Given the properties of the FPR (it must start at 0), the insertion point necessarily falls within a horizontal segment. Thus all rectangles before this one are unaffected, all rectangles after this one are removed, and the one containing the insertion point is shortened.</p>
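<p>As a quick, standalone illustration of that insertion-point logic, using the FPR values from above:</p>

```python
import bisect

fpr = [0.0, 0.5, 0.5, 1.0]
# 0.1 would be inserted at index 1, i.e. inside the first horizontal segment
p = bisect.bisect_left(fpr, 0.1)
print(p)  # 1
```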
<p>Let's apply it:</p>
<pre><code>fpr_thresh, tpr_thresh = get_fpr_tpr_for_thresh(fpr, tpr, 0.1)
>>> fpr_thresh, tpr_thresh
(array([ 0. , 0.1]), array([ 0.5, 0.5]))
</code></pre>
<p>Finally, we just need to calculate the AUC from the updated versions:</p>
<pre><code>>>> auc_from_fpr_tpr(fpr_thresh, tpr_thresh), auc_from_fpr_tpr(fpr_thresh, tpr_thresh, True)
(0.050000000000000003, 0.050000000000000003)
</code></pre>
<p>In this case, both the rectangle and trapezoid summations give the same results. Note that in general, they will not. For consistency with <code>sklearn.metrics</code>, the first one should be used.</p>
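<p>Putting the two pieces together, here is a self-contained sketch of a partial-AUC helper (rectangle summation, with the threshold handling inlined; it assumes <code>max_fpr</code> falls within the FPR range, and the function name is my own):</p>

```python
import bisect

import numpy as np

def partial_auc(fpr, tpr, max_fpr):
    """Rectangle-summation AUC of the ROC curve up to FPR == max_fpr."""
    fpr = np.asarray(fpr, dtype=float).copy()
    tpr = np.asarray(tpr, dtype=float)
    # insertion point of the cutoff; it lands in a horizontal segment
    p = bisect.bisect_left(fpr.tolist(), max_fpr)
    fpr[p] = max_fpr  # shorten the rectangle containing the cutoff
    fpr, tpr = fpr[:p + 1], tpr[:p + 1]
    # left-rectangle summation, consistent with the discussion above
    return float(np.sum(np.diff(fpr) * tpr[:-1]))

fpr = [0.0, 0.5, 0.5, 1.0]
tpr = [0.5, 0.5, 1.0, 1.0]
print(round(partial_auc(fpr, tpr, 0.1), 10))  # 0.05
print(round(partial_auc(fpr, tpr, 1.0), 10))  # 0.75 -- full range recovers the AUC
```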
| 2 |
2016-09-25T13:06:56Z
|
[
"python",
"machine-learning",
"statistics",
"scikit-learn"
] |
pyspark: append/merge PythonRDD to a pyspark dataframe
| 39,537,505 |
<p>I am using the following code to create a clustering model, then assign each record to a cluster:</p>
<pre><code>from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors
spark_df = sqlContext.createDataFrame(pandas_df)
rdd = spark_df.rdd.map(lambda data: Vectors.dense([float(c) for c in data]))
model = KMeans.train(rdd, 2, maxIterations=10, initializationMode="random")
result = model.predict(rdd)
</code></pre>
<p>How do I append the predicted result back to the spark_df as an additional column? Thanks!</p>
| 0 |
2016-09-16T17:55:48Z
| 39,548,417 |
<p><code>pyspark.mllib.clustering.KMeansModel</code> is one of the rare models that can be used directly inside a PySpark transformation, so you can simply <code>map</code> with <code>predict</code>:</p>
<pre><code>rdd.map(lambda point: (model.predict(point), point))
</code></pre>
<p>In the general case, when that is not possible, <code>zip</code> is the right tool for the job:</p>
<pre><code>rdd.zip(model.predict(rdd))
</code></pre>
| 1 |
2016-09-17T15:16:19Z
|
[
"python",
"apache-spark",
"pyspark",
"spark-dataframe",
"apache-spark-mllib"
] |
python: nested classes: access outer class class member
| 39,537,508 |
<p>This does not work:</p>
<pre><code>class A:
a1 = 4
class B:
b1 = A.a1 # Fails
b2 = 6
class C:
c1 = A.B.b2 # Fails
</code></pre>
<p>Is there any non-cryptic way to solve this? I know I could take B and C out of A, but I would like to keep them nested. It would also be easier without class members, since the data could simply be passed to the nested classes as constructor arguments, but since these are all class members I do not know how to do anything similar here. I have also read in some threads that this usage resembles using classes as namespaces and should be solved with modules rather than classes, but the classes above are real classes to me (I construct instances of them), with additional class data that I would like to share among them.</p>
<p>UPDATE: I found a really dirty way to share class members, I am not happy yet:</p>
<pre><code>global_tal = None
global_tal2 = None
class A:
a1 = 4
global global_tal
global_tal = a1
class B:
global global_tal
b0 = global_tal # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
global global_tal2
global_tal2 = B
class C:
global global_tal2
c1 = global_tal2.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
<p>UPDATE 2: Cleaner way (IMHO). Thanks to dhke.</p>
<pre><code>class _SharedReferences:
ref1 = None
ref2 = None
class A(_SharedReferences):
a1 = 4
_SharedReferences.ref1 = a1
class B:
b0 = _SharedReferences.ref1 # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
_SharedReferences.ref1 = B
class C:
c1 = _SharedReferences.ref1.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
| 3 |
2016-09-16T17:56:13Z
| 39,537,602 |
<p>This fails for two different reasons. One is that <code>A</code> is not yet defined when you try to access <code>A.a1</code> inside <code>B</code>, giving you a <code>NameError</code>.</p>
<p>If you solve this using a subclass, the following will work:</p>
<pre><code>class A:
a1 = 4
class _A(A):
class B:
        b1 = A.a1 # works: A is fully defined here
b2 = 6
</code></pre>
<p>However accessing <code>A.B.b2</code> in <code>C</code> will still not work as <code>A</code> has no attribute <code>B</code>. You will get an <code>AttributeError</code> on that.</p>
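<p>To make <code>A.B.b2</code> accessible afterwards, one possible follow-up (a sketch, not part of the original trick) is to attach the nested class back onto <code>A</code> once <code>_A</code> is defined:</p>

```python
class A:
    a1 = 4

class _A(A):
    class B:
        b1 = A.a1  # works: A is fully defined by the time _A's body runs
        b2 = 6

# attach B back onto A so that A.B.b2 resolves as originally intended
A.B = _A.B

print("A.B.b2 =", A.B.b2)  # A.B.b2 = 6
```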
| 1 |
2016-09-16T18:02:46Z
|
[
"python",
"nested",
"inner-classes"
] |
python: nested classes: access outer class class member
| 39,537,508 |
<p>This does not work:</p>
<pre><code>class A:
a1 = 4
class B:
b1 = A.a1 # Fails
b2 = 6
class C:
c1 = A.B.b2 # Fails
</code></pre>
<p>Is there any non-cryptic way to solve this? I know I could take B and C out of A, but I would like to keep them nested. It would also be easier without class members, since the data could simply be passed to the nested classes as constructor arguments, but since these are all class members I do not know how to do anything similar here. I have also read in some threads that this usage resembles using classes as namespaces and should be solved with modules rather than classes, but the classes above are real classes to me (I construct instances of them), with additional class data that I would like to share among them.</p>
<p>UPDATE: I found a really dirty way to share class members, I am not happy yet:</p>
<pre><code>global_tal = None
global_tal2 = None
class A:
a1 = 4
global global_tal
global_tal = a1
class B:
global global_tal
b0 = global_tal # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
global global_tal2
global_tal2 = B
class C:
global global_tal2
c1 = global_tal2.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
<p>UPDATE 2: Cleaner way (IMHO). Thanks to dhke.</p>
<pre><code>class _SharedReferences:
ref1 = None
ref2 = None
class A(_SharedReferences):
a1 = 4
_SharedReferences.ref1 = a1
class B:
b0 = _SharedReferences.ref1 # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
_SharedReferences.ref1 = B
class C:
c1 = _SharedReferences.ref1.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
| 3 |
2016-09-16T17:56:13Z
| 39,537,673 |
<p>You might defer the definition of <code>B</code> until <code>A</code> is fully defined, and defer <code>C</code> until both <code>A</code> and <code>A.B</code> are defined.</p>
<pre><code>class A:
a1 = 4
class B:
b1 = A.a1
b2 = 6
A.B = B
del B
class C:
c1 = A.B.b2
A.C = C
del C
assert A.B.b1 == 4
assert A.C.c1 == 6
</code></pre>
<p>Alternatively, you could define <code>B.b1</code> outside of <code>B</code>'s definition:</p>
<pre><code>class A:
a1 = 4
class B:
pass
B.b1 = a1
B.b2 = 6
class C:
pass
C.c1 = B.b2
assert A.B.b1 == 4
assert A.C.c1 == 6
</code></pre>
| 1 |
2016-09-16T18:08:19Z
|
[
"python",
"nested",
"inner-classes"
] |
python: nested classes: access outer class class member
| 39,537,508 |
<p>This does not work:</p>
<pre><code>class A:
a1 = 4
class B:
b1 = A.a1 # Fails
b2 = 6
class C:
c1 = A.B.b2 # Fails
</code></pre>
<p>Is there any non-cryptic way to solve this? I know I could take B and C out of A, but I would like to keep them nested. It would also be easier without class members, since the data could simply be passed to the nested classes as constructor arguments, but since these are all class members I do not know how to do anything similar here. I have also read in some threads that this usage resembles using classes as namespaces and should be solved with modules rather than classes, but the classes above are real classes to me (I construct instances of them), with additional class data that I would like to share among them.</p>
<p>UPDATE: I found a really dirty way to share class members, I am not happy yet:</p>
<pre><code>global_tal = None
global_tal2 = None
class A:
a1 = 4
global global_tal
global_tal = a1
class B:
global global_tal
b0 = global_tal # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
global global_tal2
global_tal2 = B
class C:
global global_tal2
c1 = global_tal2.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
<p>UPDATE 2: Cleaner way (IMHO). Thanks to dhke.</p>
<pre><code>class _SharedReferences:
ref1 = None
ref2 = None
class A(_SharedReferences):
a1 = 4
_SharedReferences.ref1 = a1
class B:
b0 = _SharedReferences.ref1 # it works
# Desired way: b1 = A.a1 # Fails
b2 = 6
_SharedReferences.ref1 = B
class C:
c1 = _SharedReferences.ref1.b2 # it works
# Desired way: c1 = A.B.b2 # Fails
pass
</code></pre>
| 3 |
2016-09-16T17:56:13Z
| 39,537,909 |
<p>One trick is to put the common parameters into a parameter class and inherit from that:</p>
<pre><code>class Params:
p = 4
class A(Params):
# A has p
class B(Params):
# B has p
pass
class C(Params):
# C has p
pass
</code></pre>
<p>Or, if you need the params with different names in the inner classes:</p>
<pre><code>class A(Params):
# A has p
class B:
b = Params.p
class C:
c = Params.p
</code></pre>
<p>This avoids having to monkey-patch the class after creation.</p>
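<p>A quick usage sketch of the first variant, showing that every class in the hierarchy sees the shared parameter:</p>

```python
class Params:
    p = 4

class A(Params):
    class B(Params):
        pass
    class C(Params):
        pass

# every class (and instance) inherits the shared parameter
print(A.p, A.B.p, A.C.p)  # 4 4 4
```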
| 0 |
2016-09-16T18:24:51Z
|
[
"python",
"nested",
"inner-classes"
] |
Why doesn't my selenium webdriver authentication work (Django 1.9)
| 39,537,615 |
<p>I am going through a process of registering and logging in for a list of users.</p>
<p>I am using the same username and password to make things simple. </p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import re
for address in geolocations:
# register
browser.get("http://127.0.0.1:8000/register")
username = browser.find_element_by_id("id_username")
print("username is being set as " + re.sub(' ', '', address))
password = browser.find_element_by_id("id_password")
print("password is being set as " + re.sub(' ', '', address))
location = browser.find_element_by_id("location")
submit = browser.find_element_by_id("register")
username.clear()
password.clear()
location.clear()
username.send_keys(re.sub(' ', '', address))
password.send_keys(re.sub(' ', '', address))
location.send_keys(address)
location.send_keys(Keys.RETURN)
submit.click()
browser.implicitly_wait(1)
# login
browser.get("http://127.0.0.1:8000/login")
username = browser.find_element_by_id("username")
password = browser.find_element_by_id("password")
username.clear()
password.clear()
password.send_keys(re.sub(' ', '', address)) # addresses have spaces
print("password is being set as: " + re.sub(' ', '', address))
browser.implicitly_wait(2)
submit = browser.find_element_by_id("submit")
submit.click()
browser.implicitly_wait(2)
browser.quit()
</code></pre>
<p>Even though the same string is being used for registering and logging in, the login authentication is not working. But the same username/password combos work when I register/login manually.</p>
<p>Can anyone tell me what's causing this?</p>
| 0 |
2016-09-16T18:03:36Z
| 39,537,694 |
<p>In the login section of your test, it looks like you have forgotten to call <code>username.send_keys()</code>.</p>
| 2 |
2016-09-16T18:09:53Z
|
[
"python",
"django",
"selenium",
"authentication",
"login"
] |
How to get a Logger that Prints to Console and to File in fewest lines
| 39,537,683 |
<p>This is how I obtain a logger that prints to both console and a file:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
handler2 = logging.FileHandler(os.path.join(os.path.split(os.path.abspath(__file__))[0], 'my.log'), mode='w')
handler2.setFormatter(formatter)
logger.addHandler(handler)
logger.addHandler(handler2)
</code></pre>
<p>Is there a way to accomplish something similar to this in fewer operations? I don't particularly care about the mode of the file or the filename.</p>
<p>Edit: The use case for this is when I'm prototyping a new script and do not wish to spend the time writing a configuration file until later.</p>
| 0 |
2016-09-16T18:09:11Z
| 39,539,294 |
<p>I'm curious: why do you want that?</p>
<p>You can use an external configuration in an INI file and load it with <code>logging.config.fileConfig</code>.</p>
<p>Or</p>
<p>Create your own handler which combines the file and console handlers.</p>
<p>Or</p>
<p>Create a file-like object which writes both to a file and to the console, then use that object as the stream for <code>logging.basicConfig</code>. </p>
| 0 |
2016-09-16T20:01:46Z
|
[
"python",
"logging"
] |
How to get a Logger that Prints to Console and to File in fewest lines
| 39,537,683 |
<p>This is how I obtain a logger that prints to both console and a file:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
handler2 = logging.FileHandler(os.path.join(os.path.split(os.path.abspath(__file__))[0], 'my.log'), mode='w')
handler2.setFormatter(formatter)
logger.addHandler(handler)
logger.addHandler(handler2)
</code></pre>
<p>Is there a way to accomplish something similar to this in fewer operations? I don't particularly care about the mode of the file or the filename.</p>
<p>Edit: The use case for this is when I'm prototyping a new script and do not wish to spend the time writing a configuration file until later.</p>
| 0 |
2016-09-16T18:09:11Z
| 39,581,852 |
<p>My opinion: when you are prototyping, you want a log file for each module.</p>
<h2>A contextual handler</h2>
<p>To do that, you can proceed as follows:</p>
<ul>
<li>Create a <code>LogNameFileHandler</code>: this is a subclass of <code>logging.StreamHandler</code> which behave like <code>logging.FileHandler</code>, but which create a log file for each logger (based on their name).</li>
<li>Create a singleton <code>LOGGER</code> which set up a (the) root logger with this handler and the classic console logger (the way you suggest).</li>
<li>Use you logger in different module the simplest way: <code>logger = LOGGER.getChild(__name__)</code>.</li>
</ul>
<h2>Implementation details</h2>
<p>In <code>log_handler.py</code>:</p>
<pre><code>import io
import logging
import os
class LogNameFileHandler(logging.StreamHandler):
def __init__(self, root_dir, mode='a', encoding=None):
super(LogNameFileHandler, self).__init__(stream=None)
self.root_dir = os.path.abspath(root_dir)
self.mode = mode
self.encoding = encoding
# Set stream to None, because StreamHandler set it to sys.stderr
self.stream = None
#: :type log_path: str
self.log_path = None
def _open(self):
log_dir = os.path.dirname(self.log_path)
if not os.path.isdir(log_dir):
os.makedirs(log_dir)
return io.open(self.log_path, mode=self.mode, encoding=self.encoding)
def close(self):
self.acquire()
try:
try:
if self.stream:
try:
self.flush()
finally:
stream = self.stream
self.stream = None
if hasattr(stream, "close"):
stream.close()
finally:
super(LogNameFileHandler, self).close()
finally:
self.release()
def emit(self, record):
name = record.name.replace(".", os.sep)
log_name = name + ".log"
self.log_path = os.path.join(self.root_dir, log_name)
self.stream = self._open()
super(LogNameFileHandler, self).emit(record)
</code></pre>
<p>This handler opens a new file for each <code>emit</code>; of course, it's up to you to handle caching. For prototyping, it should be OK. You can use a mapping <code>record.name</code> => <code>stream</code>.</p>
<p>In <code>log_setup.py</code>:</p>
<pre><code>import logging
import os
from log_handler import LogNameFileHandler
def _init_logger(log_dir, name=None):
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(u'%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
handler1 = logging.StreamHandler()
handler1.setFormatter(formatter)
handler2 = LogNameFileHandler(log_dir, mode="a+", encoding="utf8")
handler2.setFormatter(formatter)
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
LOGGER = _init_logger(os.path.dirname(__file__))
</code></pre>
<p>This module defines the <code>LOGGER</code> singleton, which you can use in all your Python modules.</p>
<h2>Usage</h2>
<p>In your <code>package/module.py</code>:</p>
<pre><code>from log_setup import LOGGER
logger = LOGGER.getChild(__name__)
logger.info("hello solution3")
</code></pre>
<p>That's all (all the dust is swept under the rug).</p>
<p>If you set up the <code>root</code> logger, you can also write:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
...
</code></pre>
| 1 |
2016-09-19T20:48:03Z
|
[
"python",
"logging"
] |
Comparing dictionaries in Python lists
| 39,537,687 |
<p>I have two lists which contain dictionaries. Each dictionary has only one entry. I would like to check whether a key from a dictionary in listA also exists in a dictionary in listB. If it does, the key and the values belonging to it should be printed.</p>
<p>Example:</p>
<pre><code>listA = [{key1: value1}, {key2: value2}]
listB = [{key1: value3}, {key4: value4}]
</code></pre>
<p>In this case the output should be:</p>
<pre><code>key1: value1, value3
</code></pre>
<p>Thanks in advance.</p>
| -3 |
2016-09-16T18:09:20Z
| 39,540,051 |
<p>A very simple way to do it would be:</p>
<pre><code>l1 = [{'1': "one"}, {'2': "two"}]
l2 = [{'3': "three"}, {'1': "one_too"}]

def compare(l1, l2):
    # each dict has exactly one entry, so zip pairs them up directly
    for i in l1:
        for j in l2:
            for (key1, value1), (key2, value2) in zip(i.items(), j.items()):
                if key1 == key2:
                    print(key1 + ": " + value1 + ", " + value2)
                    break

compare(l1, l2)
</code></pre>
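<p>Since every dictionary holds exactly one entry, another sketch is to flatten each list into a single dict first and intersect the key sets (dict view operations, Python 3):</p>

```python
listA = [{'1': 'one'}, {'2': 'two'}]
listB = [{'3': 'three'}, {'1': 'one_too'}]

# flatten the one-entry dicts into a single mapping per list
a = {k: v for d in listA for k, v in d.items()}
b = {k: v for d in listB for k, v in d.items()}

for key in sorted(a.keys() & b.keys()):  # keys present in both lists
    print(f"{key}: {a[key]}, {b[key]}")  # 1: one, one_too
```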
| 0 |
2016-09-16T21:04:46Z
|
[
"python",
"list",
"dictionary"
] |
How to assign unique identifier to DataFrame row
| 39,537,689 |
<p>I have a <code>.csv</code> file that is created from an <code>nd.array</code> after the input data is processed by <code>sklearn.cluster.DBSCAN()</code>. I would like to be able to "tie" every point in a cluster to a unique identifier given by a column in my input file. </p>
<p>This is how I'm reading my <code>input_data</code>:</p>
<pre><code># Generate sample data
col_1 ="RL15_LONGITUDE"
col_2 ="RL15_LATITUDE"
data = pd.read_csv("2004_Charley_data.csv")
coords = data.as_matrix(columns=[col_1, col_2])
data = data[[col_1,col_2]].dropna()
data = data.as_matrix().astype('float16',copy=False)
</code></pre>
<p>And this is what it looks like:</p>
<pre><code>RecordID Storm RL15_LATITUDE RL15_LONGITUDE
2004_Charley95104-257448 2004_Charley 25.81774 -80.25079
2004_Charley93724-254950 2004_Charley 26.116338 -81.74986
2004_Charley93724-254949 2004_Charley 26.116338 -81.74986
2004_Charley75496-215198 2004_Charley 26.11817 -81.75756
</code></pre>
<p>With some help I was able to take the output of <code>DBSCAN</code> and save it as a <code>.CSV</code> file like this: </p>
<pre><code>clusters = (pd.concat([pd.DataFrame(c, columns=[col_2,col_1]).assign(cluster=i)
for i,c in enumerate(clusters)])
.reset_index()
.rename(columns={'index':'point'})
.set_index(['cluster','point'])
)
clusters.to_csv('output.csv')
</code></pre>
<p>My output now is multi-index, but I would like to know if there's a way I could change the column point to <code>RecordID</code> instead of just a number? :</p>
<pre><code>cluster point RL15_LATITUDE RL15_LONGITUDE
0 0 -81.0625 29.234375
0 1 -81.0625 29.171875
0 2 -81.0625 29.359375
1 0 -81.0625 29.25
1 1 -81.0625 29.21875
1 2 -81.0625 29.25
1 3 -81.0625 29.21875
</code></pre>
| 3 |
2016-09-16T18:09:29Z
| 39,539,808 |
<h2>UPDATE:</h2>
<p><strong>Code:</strong></p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler
fn = r'D:\temp\.data\2004_Charley_data.csv'
df = pd.read_csv(fn)
cols = ['RL15_LONGITUDE','RL15_LATITUDE']
eps_=4
min_samples_=13
db = DBSCAN(eps=eps_/6371., min_samples=min_samples_, algorithm='ball_tree', metric='haversine').fit(np.radians(df[cols]))
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
df['cluster'] = labels
res = df[df.cluster >= 0]
print('--------------------------------')
print(res)
print('--------------------------------')
print(res.cluster.value_counts())
</code></pre>
<p>Output:</p>
<pre><code>--------------------------------
RecordID Storm RL15_LATITUDE RL15_LONGITUDE cluster
5 2004_Charley73944-211787 2004_Charley 29.228560 -81.034440 0
13 2004_Charley72308-208134 2004_Charley 29.442692 -81.109528 0
18 2004_Charley68044-198941 2004_Charley 29.442692 -81.109528 0
19 2004_Charley67753-198272 2004_Charley 29.270940 -81.097300 0
22 2004_Charley64829-191531 2004_Charley 29.313223 -81.101620 0
.. ... ... ... ... ...
812 2004_Charley94314-256039 2004_Charley 28.287827 -81.353285 1
813 2004_Charley93913-255344 2004_Charley 26.532980 -82.194400 7
814 2004_Charley93913-255346 2004_Charley 27.210467 -81.863720 5
815 2004_Charley93913-255357 2004_Charley 26.935550 -82.054447 4
816 2004_Charley93913-255354 2004_Charley 26.935550 -82.054447 4
[688 rows x 5 columns]
--------------------------------
1 217
0 170
2 145
4 94
7 18
6 16
5 14
3 14
Name: cluster, dtype: int64
</code></pre>
<p><strong>Old answer:</strong></p>
<p>If I understood your code correctly, you can do it this way:</p>
<pre><code># read CSV (you provided a space-delimited file with one unnamed column, so I converted it to something similar to the one in your question)
fn = r'D:\temp\.data\2004_Charley_data.csv'
df = pd.read_csv(fn, sep='\s+', index_col=0)
df.index = df.index.values + df.RecordID.map(str)
del df['RecordID']
</code></pre>
<p>first 10 rows:</p>
<pre><code>In [148]: df.head(10)
Out[148]:
Storm RL15_LATITUDE RL15_LONGITUDE
RecordID
2004_Charley67146-196725 2004_Charley 33.807550 -78.701172
2004_Charley73944-211790 2004_Charley 33.618435 -78.993407
2004_Charley73944-211793 2004_Charley 28.609200 -80.818880
2004_Charley73944-211789 2004_Charley 29.383210 -81.160100
2004_Charley73944-211786 2004_Charley 33.691235 -78.895129
2004_Charley73944-211787 2004_Charley 29.228560 -81.034440
2004_Charley73944-211795 2004_Charley 28.357253 -80.701632
2004_Charley73944-211792 2004_Charley 34.204490 -77.924700
2004_Charley66636-195501 2004_Charley 33.436717 -79.132074
2004_Charley66631-195496 2004_Charley 33.646292 -78.977968
</code></pre>
<p>clustering:</p>
<pre><code>cols = ['RL15_LONGITUDE','RL15_LATITUDE']
eps_=4
min_samples_=13
db = DBSCAN(eps=eps_/6371., min_samples=min_samples_, algorithm='ball_tree', metric='haversine').fit(np.radians(df[cols]))
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
</code></pre>
<p>setting cluster info to our DF - we can simply assign it as <code>labels</code> has the same length as our DF:</p>
<pre><code>df['cluster'] = labels
</code></pre>
<p>filter: keep only those rows where <code>cluster >= 0</code>:</p>
<pre><code>res = df[df.cluster >= 0]
</code></pre>
<p>result:</p>
<pre><code>In [152]: res.head(10)
Out[152]:
Storm RL15_LATITUDE RL15_LONGITUDE cluster
RecordID
2004_Charley73944-211787 2004_Charley 29.228560 -81.034440 0
2004_Charley72308-208134 2004_Charley 29.442692 -81.109528 0
2004_Charley68044-198941 2004_Charley 29.442692 -81.109528 0
2004_Charley67753-198272 2004_Charley 29.270940 -81.097300 0
2004_Charley64829-191531 2004_Charley 29.313223 -81.101620 0
2004_Charley67376-197429 2004_Charley 29.196990 -80.993800 0
2004_Charley73720-211013 2004_Charley 29.171450 -81.037170 0
2004_Charley73705-210991 2004_Charley 28.308746 -81.424273 1
2004_Charley65157-192371 2004_Charley 28.308746 -81.424273 1
2004_Charley65126-192326 2004_Charley 28.308746 -81.424273 1
</code></pre>
<p>stats:</p>
<pre><code>In [151]: res.cluster.value_counts()
Out[151]:
1 217
0 170
2 145
4 94
7 18
6 16
5 14
3 14
Name: cluster, dtype: int64
</code></pre>
<p>if you don't want to have <code>RecordID</code> as index:</p>
<pre><code>In [153]: res = res.reset_index()
In [154]: res.head(10)
Out[154]:
RecordID Storm RL15_LATITUDE RL15_LONGITUDE cluster
0 2004_Charley73944-211787 2004_Charley 29.228560 -81.034440 0
1 2004_Charley72308-208134 2004_Charley 29.442692 -81.109528 0
2 2004_Charley68044-198941 2004_Charley 29.442692 -81.109528 0
3 2004_Charley67753-198272 2004_Charley 29.270940 -81.097300 0
4 2004_Charley64829-191531 2004_Charley 29.313223 -81.101620 0
5 2004_Charley67376-197429 2004_Charley 29.196990 -80.993800 0
6 2004_Charley73720-211013 2004_Charley 29.171450 -81.037170 0
7 2004_Charley73705-210991 2004_Charley 28.308746 -81.424273 1
8 2004_Charley65157-192371 2004_Charley 28.308746 -81.424273 1
9 2004_Charley65126-192326 2004_Charley 28.308746 -81.424273 1
</code></pre>
| 1 |
2016-09-16T20:43:07Z
|
[
"python",
"pandas",
"dataframe"
] |
Errno 24: Too many open files. But I am not opening files?
| 39,537,731 |
<p>I am using treq (<a href="https://github.com/twisted/treq" rel="nofollow">https://github.com/twisted/treq</a>) to query some other api from my web service. Today when I was doing stress testing of my own services, It shows an error</p>
<p><code>twisted.internet.error.DNSLookupError: DNS lookup failed: address 'api.abc.com' not found: [Errno 24] Too many open files.</code></p>
<p>But the problem is, nowhere in my code do I open any file. I suspect it could be caused by the API I query going down or blocking me (api.abc.com), since my stress testing could look like a DDoS against that endpoint. Still, in that case shouldn't I get something like a connection refused error? I don't know why it gives that <code>Too many open files</code> error. Or is it caused by creating too many querying threads?</p>
| 0 |
2016-09-16T18:13:04Z
| 39,537,952 |
<p>"Files" include network sockets, which are a type of file on Unix-based systems. The maximum number of open files is configurable with <code>ulimit -n</code></p>
<pre><code># Check current limit
$ ulimit -n
256
# Raise limit to 2048
$ ulimit -n 2048
</code></pre>
<p>It is not surprising to run out of file handles and have to raise the limit. But if the limit is already high, you may be leaking file handles (not closing them quickly enough). In garbage-collected languages like Python, the finalizer does not always close files fast enough, which is why you should be careful to use <code>with</code> blocks or other systems to close the files as soon as you are done with them.</p>
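<p>On Unix you can also inspect (and, within the hard limit, raise) the descriptor limit from inside Python via the <code>resource</code> module; a sketch:</p>

```python
import resource

# RLIMIT_NOFILE governs the maximum number of open file descriptors
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# a process may raise its soft limit up to the hard limit without root;
# 4096 is an arbitrary fallback when the hard limit is unlimited
new_soft = hard if hard != resource.RLIM_INFINITY else 4096
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```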
| 1 |
2016-09-16T18:27:22Z
|
[
"python",
"asynchronous",
"twisted"
] |