title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Apply weight formula over rows of Dataframe Pandas
| 39,519,167 |
<p>I have a <code>df1</code> below. I make a copy of it to <code>df2</code> to preserve <code>df1</code>; then I use <code>df3</code> to compute over <code>df2</code>.</p>
<pre><code>df2=df1.copy()
</code></pre>
<p>I want to calculate a weight such that <code>Weight(A) = Price(A) / Sum(row_Prices)</code> and return it to <code>df2</code> below the prices, so that for each row I get 3 lines of data: the price, the std and the weight. I also want to calculate the std over the row, and I suppose it takes a similar form.</p>
<p>I have tried this</p>
<pre><code>df3 = df2.iloc[1:,1:].div(df2.iloc[1:,1:].sum(axis=1), axis=0)
</code></pre>
<p>to get the weights and then print <code>df3</code> but it does not work.</p>
<p>For getting 2 extra rows for each date I tried stacking with <code>.stack()</code>, but I am probably doing it wrong. Help! Thank you</p>
<pre class="lang-none prettyprint-override"><code> A B C D E
2006-04-27 00:00:00
2006-04-28 00:00:00 69.62 69.62 6.518 65.09 69.62
2006-05-01 00:00:00 71.5 71.5 6.522 65.16 71.5
2006-05-02 00:00:00 72.34 72.34 6.669 66.55 72.34
2006-05-03 00:00:00 70.22 70.22 6.662 66.46 70.22
2006-05-04 00:00:00 68.32 68.32 6.758 67.48 68.32
2006-05-05 00:00:00 68 68 6.805 67.99 68
2006-05-08 00:00:00 67.88 67.88 6.768 67.56 67.88
</code></pre>
<p>I would like it to output nicely as such:</p>
<pre class="lang-none prettyprint-override"><code> A B C D E
2006-04-27 00:00:00
2006-04-28 00:00:00
price 69.62 69.62 6.518 65.09 69.62
weight
std
2006-05-01 00:00:00
price 71.5 71.5 6.522 65.16 71.5
weight
std
2006-05-02 00:00:00
price 72.34 72.34 6.669 66.55 72.34
weight
std
</code></pre>
| -2 |
2016-09-15T19:44:23Z
| 39,525,049 |
<p>As far as I know, there's no one-liner-quick-and-dirty way to achieve what you are trying to do.
You need to calculate all your data and then merge it all into a <code>DataFrame</code> that uses a multi-level index:</p>
<pre><code>import pandas as pd

# Making weight/std DataFrames
cols = list('ABCDE')
weight = pd.DataFrame([df[col] / df.sum(axis=1) for col in df], index=cols).T
std = pd.DataFrame([df.std(axis=1) for col in df], index=cols).T

# Making MultiIndex DataFrame
mindex = pd.MultiIndex.from_product([['price', 'weight', 'std'], df.index])
new_df = pd.DataFrame(index=mindex, columns=cols)

# Inserting data (.loc replaces the long-deprecated .ix indexer)
new_df.loc['price'] = df.values
new_df.loc['weight'] = weight.values
new_df.loc['std'] = std.values

# Swapping levels so each date groups its three rows
new_df = new_df.swaplevel(0, 1).sort_index()
</code></pre>
<p>The resulting <code>new_df</code> should look somewhat like this:</p>
<pre><code>2006-04-28 price 69.62 69.62 6.518 65.09 69.62
std 27.7829 27.7829 27.7829 27.7829 27.7829
weight 0.248228 0.248228 0.0232397 0.232076 0.248228
2006-05-01 price 71.5 71.5 6.522 65.16 71.5
std 28.4828 28.4828 28.4828 28.4828 28.4828
weight 0.249841 0.249841 0.0227897 0.227687 0.249841
2006-05-02 price 72.34 72.34 6.669 66.55 72.34
std 28.8308 28.8308 28.8308 28.8308 28.8308
weight 0.249243 0.249243 0.0229776 0.229294 0.249243
2006-05-03 price 70.22 70.22 6.662 66.46 70.22
std 28.0509 28.0509 28.0509 28.0509 28.0509
weight 0.247443 0.247443 0.0234758 0.234194 0.247443
2006-05-04 price 68.32 68.32 6.758 67.48 68.32
std 27.4399 27.4399 27.4399 27.4399 27.4399
weight 0.244701 0.244701 0.024205 0.241692 0.244701
2006-05-05 price 68 68 6.805 67.99 68
std 27.3661 27.3661 27.3661 27.3661 27.3661
weight 0.243907 0.243907 0.0244086 0.243871 0.243907
2006-05-08 price 67.88 67.88 6.768 67.56 67.88
std 27.2947 27.2947 27.2947 27.2947 27.2947
weight 0.244201 0.244201 0.0243481 0.24305 0.244201
</code></pre>
<p>As a side note, I am not sure what kind of std you want to calculate, so I just assumed it was the row-wise price std (which will be a single/repeated value for each row).</p>
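<p>A sketch of the same layout on modern pandas (where <code>.ix</code> no longer exists), using <code>pd.concat</code> with keys to build the MultiIndex in one step; the tiny <code>df</code> below is only a stand-in for the question's data:</p>

```python
import pandas as pd

# Stand-in for the question's price data (two dates, two columns).
df = pd.DataFrame({'A': [69.62, 71.5], 'B': [6.518, 6.522]},
                  index=['2006-04-28', '2006-05-01'])

weight = df.div(df.sum(axis=1), axis=0)              # row-wise weights
std = pd.DataFrame({c: df.std(axis=1) for c in df})  # row std, repeated per column

# Concatenating with keys creates the outer 'kind' level directly.
out = pd.concat({'price': df, 'weight': weight, 'std': std}).swaplevel(0, 1).sort_index()
```

Each date then carries its three rows (price, std, weight) under a two-level index, as in the answer's output.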
| 1 |
2016-09-16T06:29:46Z
|
[
"python",
"pandas",
"dataframe",
"apply"
] |
Special characters/kanji problems using Python unicode
| 39,519,180 |
<p>I want to use <code>VideoFileClip()</code>, but a <code>UnicodeDecodeError</code> occurs.
The video file names include Japanese kanji or special characters.</p>
<p>My example code:</p>
<pre><code>#-*- coding: utf-8 -*-
import os
import sys
from moviepy.editor import VideoFileClip

reload(sys)
sys.setdefaultencoding('utf-8')

a = 'H:\\kittens.mkv'
clip1 = VideoFileClip(a)
b = 'H:\\âÄÄ«â â.mp4'
clip2 = VideoFileClip(b)

if clip1.fps >= clip2.fps:
    os.remove(b)
else:
    os.remove(a)
</code></pre>
<p>'a' works fine:</p>
<pre><code>>>> a='H:\\kittens.mkv'
>>> clip=VideoFileClip(a)
>>>
</code></pre>
<p>but 'b' doesn't work: </p>
<pre><code>>>> b='H:\\âÄÄ«â â.mp4'
>>> clip=VideoFileClip(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 5
5, in __init__
reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt)
File "C:\Python27\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 3
2, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration)
File "C:\Python27\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 2
70, in ffmpeg_parse_infos
filename, infos))
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa1 in position 54: invalid
start byte
>>> b
'H:\\\xa1\xb0??\xa8\xe7\xa1\xb1.mp4'
>>> print b
H:\â??â â.mp4
>>> print b.decode('cp949')
H:\â??â â.mp4
>>>
</code></pre>
<p>I've tried these, but they also don't work.</p>
<pre><code>b=b.decode('cp949')
b=b.decode('cp949').encode('utf-8')
b=unicode(b.decode('cp949'))
</code></pre>
<p>I think Windows 7 supports Unicode file names (including Japanese kanji and special characters), but the default character set of my Python 2.x setup (cp949) does not. What can I do about this problem?</p>
| 1 |
2016-09-15T19:44:54Z
| 39,599,706 |
<p>Here's a workaround using the <a href="https://sourceforge.net/projects/pywin32" rel="nofollow">pywin32</a> extensions.
Basically, you use the <a href="http://timgolden.me.uk/pywin32-docs/win32api__GetShortPathName_meth.html" rel="nofollow"><code>GetShortPathName</code></a> function to generate a legacy <a href="https://en.wikipedia.org/wiki/8.3_filename" rel="nofollow">8.3 filename</a> from a unicode path.</p>
<pre><code># -*- coding: utf-8 -*-
import os
import win32api
from moviepy.editor import VideoFileClip

def short_path(unicode_path):
    return win32api.GetShortPathName(unicode_path)

v1 = 'âÄÄ«â â.mp4'
print os.path.isfile(v1)  # False

v2 = u'âÄÄ«â â.mp4'
print os.path.isfile(v2)  # True

# clip = VideoFileClip(v1)  # IOError
# clip = VideoFileClip(v2)  # UnicodeEncodeError
clip = VideoFileClip(short_path(v2))  # OK
print clip.duration
</code></pre>
| 0 |
2016-09-20T16:50:45Z
|
[
"python",
"unicode",
"special-characters",
"moviepy"
] |
Data Conversion using pyodbc to query iSeries database - Conversion error
| 39,519,204 |
<p>I am trying to filter records based on a zoned decimal value that comes back as <code>Decimal(160919, )</code>. How can I use this to filter against a date (i.e. 160919)?
Below is the code I'm using to extract the order data:</p>
<pre><code># connect to APlus
import pyodbc
import time

today = int(time.strftime("%y%m%d"))
whatisit = type(today)
print whatisit

cnxn = pyodbc.connect('DSN=aplus; uid=username;pwd=password')
cursor = cnxn.cursor()
query = """ select OHORNO, OHRSDT
            from ORHED
            where OHCSNO = 206576 and CAST(OHRSDT AS INT) = '$[today]'"""
cursor.execute(query)
row = cursor.fetchall()

if row:
    print(row)
    print("Today : " + str(today))
</code></pre>
| 1 |
2016-09-15T19:46:41Z
| 39,521,359 |
<p>There ended up being a space at the end of the date in the record. I used <code>Left(OHEXDT, 6)</code> to compare, and everything was working as expected.</p>
<p>This actually only worked for an isolated occurrence, and then failed.</p>
<p>I am now using <code>substr</code> to pull out the digits in the format I need for the comparison:</p>
<p><code>where OHCSNO = 206576 and integer(substr(OHESDT,1,6)) = '160926'</code></p>
<p>Thanks!</p>
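<p>The trimming fix can be illustrated in plain Python (a sketch; the raw value with a trailing space is a hypothetical stand-in for what the zoned decimal column returned):</p>

```python
from decimal import Decimal

raw = '160919 '                      # zoned decimal came back with a trailing space
assert raw[:6] == '160919'           # what Left(OHEXDT, 6) / substr(..., 1, 6) extract
assert int(raw.strip()) == 160919    # numeric comparison works after trimming
assert Decimal(raw.strip()) == Decimal('160919')
```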
| 0 |
2016-09-15T22:38:45Z
|
[
"python",
"sql",
"ibm-midrange",
"pyodbc",
"db2400"
] |
Regex to remove bit signal noise spikes
| 39,519,207 |
<p>I am dealing with RF signals that sometimes have noise spikes.<br>
The input is something like this:<br>
<code>00000001111100011110001111100001110000001000001111000000111001111000
</code> </p>
<p>Before parsing the data in the signal, I need to remove the spike bits — that is, sequences of 0's and 1's with a length lower than (in this example) 3.</p>
<p>So basically I need to match <code>0000000111110001111000111110000111000000(1)000001111000000111(00)1111000
</code><br>
After match, I replace it by the bit before it, so a clean signal look like this:
<code>00000001111100011110001111100001110000000000001111000000111111111000
</code> </p>
<p>So far I achieved this with two different Regex:</p>
<pre><code>self.re_one_spikes = re.compile("(?:[^1])(?P<spike>1{1,%d})(?=[^1])" % (self._SHORTEST_BIT_LEN - 1))
self.re_zero_spikes = re.compile("(?:[^0])(?P<spike>0{1,%d})(?=[^0])" % (self._SHORTEST_BIT_LEN - 1))
</code></pre>
<p>Then I iterate on the matches and replace. </p>
<p>How can I do this with a single regex? And can I use regex to replace different-sized matches?<br>
I tried something like this with no success:</p>
<pre><code>re.compile("(?![\1])([01]{1,2})(?![\1])")
</code></pre>
| 4 |
2016-09-15T19:46:55Z
| 39,519,271 |
<pre><code>import re

THRESHOLD = 3

def fixer(match):
    ones = match.group(0)
    if len(ones) < THRESHOLD: return "0"*len(ones)
    return ones

my_string = '00000001111100011110001111100001110000001000001111000000111001111000'
print(re.sub("(1+)",fixer,my_string))
</code></pre>
<p>If you want to also remove "spikes" of zeros:</p>
<pre><code>def fixer(match):
    items = match.group(0)
    if len(items) < THRESHOLD: return "10"[int(items[0])]*len(items)
    return items

print(re.sub("(1+)|(0+)",fixer,my_string))
</code></pre>
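<p>The same run-length idea handles both bit values in one self-contained pass; a sketch (assuming, as in the question, that a short run is replaced by the neighbouring bit — note a short run at the very start or end of the signal flips too):</p>

```python
import re

def despike(bits, shortest=3):
    """Replace any run of 0s or 1s shorter than `shortest` with the opposite
    bit, which (since runs alternate) equals the bit on either side of the spike."""
    def fix(match):
        run = match.group(0)
        if len(run) < shortest:
            return ('0' if run[0] == '1' else '1') * len(run)
        return run
    return re.sub(r'1+|0+', fix, bits)

noisy = '00000001111100011110001111100001110000001000001111000000111001111000'
clean = '00000001111100011110001111100001110000000000001111000000111111111000'
assert despike(noisy) == clean   # matches the expected output in the question
```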
| 6 |
2016-09-15T19:51:27Z
|
[
"python",
"regex"
] |
Regex to remove bit signal noise spikes
| 39,519,207 |
<p>I am dealing with RF signals that sometimes have noise spikes.<br>
The input is something like this:<br>
<code>00000001111100011110001111100001110000001000001111000000111001111000
</code> </p>
<p>Before parsing the data in the signal, I need to remove the spike bits — that is, sequences of 0's and 1's with a length lower than (in this example) 3.</p>
<p>So basically I need to match <code>0000000111110001111000111110000111000000(1)000001111000000111(00)1111000
</code><br>
After match, I replace it by the bit before it, so a clean signal look like this:
<code>00000001111100011110001111100001110000000000001111000000111111111000
</code> </p>
<p>So far I achieved this with two different Regex:</p>
<pre><code>self.re_one_spikes = re.compile("(?:[^1])(?P<spike>1{1,%d})(?=[^1])" % (self._SHORTEST_BIT_LEN - 1))
self.re_zero_spikes = re.compile("(?:[^0])(?P<spike>0{1,%d})(?=[^0])" % (self._SHORTEST_BIT_LEN - 1))
</code></pre>
<p>Then I iterate on the matches and replace. </p>
<p>How can I do this with a single regex? And can I use regex to replace different-sized matches?<br>
I tried something like this with no success:</p>
<pre><code>re.compile("(?![\1])([01]{1,2})(?![\1])")
</code></pre>
| 4 |
2016-09-15T19:46:55Z
| 39,519,769 |
<p>Alternative approach without using <code>regex</code>, and by using <code>replace()</code> instead (in case someone might find it useful in future):</p>
<pre><code>>>> my_signal = '00000001111100011110001111100001110000001000001111000000111001111000'
>>> my_threshold = 3
>>> for i in range(my_threshold):
...     my_signal = my_signal.replace('0{}0'.format('1'*(i+1)), '0{}0'.format('0'*(i+1)))
...
>>> my_signal
'00000001111100011110001111100000000000000000001111000000000001111000'
</code></pre>
| 0 |
2016-09-15T20:25:48Z
|
[
"python",
"regex"
] |
Regex to remove bit signal noise spikes
| 39,519,207 |
<p>I am dealing with RF signals that sometimes have noise spikes.<br>
The input is something like this:<br>
<code>00000001111100011110001111100001110000001000001111000000111001111000
</code> </p>
<p>Before parsing the data in the signal, I need to remove the spike bits — that is, sequences of 0's and 1's with a length lower than (in this example) 3.</p>
<p>So basically I need to match <code>0000000111110001111000111110000111000000(1)000001111000000111(00)1111000
</code><br>
After match, I replace it by the bit before it, so a clean signal look like this:
<code>00000001111100011110001111100001110000000000001111000000111111111000
</code> </p>
<p>So far I achieved this with two different Regex:</p>
<pre><code>self.re_one_spikes = re.compile("(?:[^1])(?P<spike>1{1,%d})(?=[^1])" % (self._SHORTEST_BIT_LEN - 1))
self.re_zero_spikes = re.compile("(?:[^0])(?P<spike>0{1,%d})(?=[^0])" % (self._SHORTEST_BIT_LEN - 1))
</code></pre>
<p>Then I iterate on the matches and replace. </p>
<p>How can I do this with a single regex? And can I use regex to replace different-sized matches?<br>
I tried something like this with no success:</p>
<pre><code>re.compile("(?![\1])([01]{1,2})(?![\1])")
</code></pre>
| 4 |
2016-09-15T19:46:55Z
| 39,520,001 |
<pre><code>def fix_noise(s, noise_thold=3):
    pattern = re.compile(r'(?P<before>1|0)(?P<noise>(?<=0)1{1,%d}(?=0)|(?<=1)0{1,%d}(?=1))' % (noise_thold-1, noise_thold-1))
    result = s
    for noise_match in pattern.finditer(s):
        beginning = result[:noise_match.start()+1]
        end = result[noise_match.end():]
        replaced = noise_match.group('before')*len(noise_match.group('noise'))
        result = beginning + replaced + end
    return result
</code></pre>
<p>Jordan's <code>int(items[0])</code> indexing idea is awesome!</p>
| 0 |
2016-09-15T20:39:40Z
|
[
"python",
"regex"
] |
Regex to remove bit signal noise spikes
| 39,519,207 |
<p>I am dealing with RF signals that sometimes have noise spikes.<br>
The input is something like this:<br>
<code>00000001111100011110001111100001110000001000001111000000111001111000
</code> </p>
<p>Before parsing the data in the signal, I need to remove the spike bits — that is, sequences of 0's and 1's with a length lower than (in this example) 3.</p>
<p>So basically I need to match <code>0000000111110001111000111110000111000000(1)000001111000000111(00)1111000
</code><br>
After match, I replace it by the bit before it, so a clean signal look like this:
<code>00000001111100011110001111100001110000000000001111000000111111111000
</code> </p>
<p>So far I achieved this with two different Regex:</p>
<pre><code>self.re_one_spikes = re.compile("(?:[^1])(?P<spike>1{1,%d})(?=[^1])" % (self._SHORTEST_BIT_LEN - 1))
self.re_zero_spikes = re.compile("(?:[^0])(?P<spike>0{1,%d})(?=[^0])" % (self._SHORTEST_BIT_LEN - 1))
</code></pre>
<p>Then I iterate on the matches and replace. </p>
<p>How can I do this with a single regex? And can I use regex to replace different-sized matches?<br>
I tried something like this with no success:</p>
<pre><code>re.compile("(?![\1])([01]{1,2})(?![\1])")
</code></pre>
| 4 |
2016-09-15T19:46:55Z
| 39,520,346 |
<p>To match both cases <code>[01]</code> in a single regex, it's simply this: </p>
<p><code>(?<=([01]))(?:(?!\1)[01]){1,2}(?=\1)</code> </p>
<p>Expanded </p>
<pre><code> (?<= # Lookbehind for 0 or 1
( [01] ) # (1), Capture behind 0 or 1
)
(?: # Match spike, one to %d times in length
(?! \1 ) # Cannot be the 0 or 1 from lookbehind
[01]
){1,2}
(?= \1 ) # Lookahead, can only be 0 or 1 from capture (1)
</code></pre>
<p>Replace with <code>$1</code> times length of the match ( i.e. length of group 0 ). </p>
<p>Matches </p>
<pre><code> ** Grp 0 - ( pos 40 , len 1 )
1
** Grp 1 - ( pos 39 , len 1 )
0
----------------------------------------
** Grp 0 - ( pos 59 , len 2 )
00
** Grp 1 - ( pos 58 , len 1 )
1
</code></pre>
<p>Benchmark </p>
<pre><code>Regex1: (?<=([01]))(?:(?!\1)[01]){1,2}(?=\1)
Options: < none >
Completed iterations: 50 / 50 ( x 1000 )
Matches found per iteration: 2
Elapsed Time: 2.06 s, 2058.02 ms, 2058018 µs
50,000 iterations * 2 matches/iteration = 100,000 matches
100,000 matches / 2 sec's = 50,000 matches per second
</code></pre>
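<p>For Python specifically, the replace-by-length step needs a function replacement, since <code>re.sub</code> cannot repeat <code>$1</code> by match length in a plain replacement string; a sketch wiring this answer's pattern into <code>re.sub</code>:</p>

```python
import re

# The pattern from this answer; {1,2} means spikes shorter than 3 bits.
pattern = re.compile(r'(?<=([01]))(?:(?!\1)[01]){1,2}(?=\1)')

def clean(signal):
    # group(1) is the surrounding bit; repeat it over the matched spike.
    return pattern.sub(lambda m: m.group(1) * len(m.group(0)), signal)

noisy = '00000001111100011110001111100001110000001000001111000000111001111000'
assert clean(noisy) == '00000001111100011110001111100001110000000000001111000000111111111000'
```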
| 1 |
2016-09-15T21:04:05Z
|
[
"python",
"regex"
] |
how to get value from returned instance of deferred
| 39,519,240 |
<p>I use the txmongo lib as the driver for MongoDB.
In its limited docs, the find function in txmongo returns an instance of <code>Deferred</code>, but how can I get the actual result (like {"IP":11.12.59.119})? I tried yield, str() and repr(), but they do not work.</p>
<pre><code>def checkResource(self, resource):
    """ use the message to inquire database
    then set the result to a ip variable
    """
    d = self.units.find({'$and': [{'baseIP':resource},{'status':'free'}]},limit=1,fields={'_id':False,'baseIP':True})
    # Here above, how can I retrieve the result in this deferred instance??
    d.addCallback(self.handleReturnedValue)
    d.addErrback(log.err)
    return d

def handleReturnedValue(self, returned):
    for ip in returned:
        if ip is not None:
            d = self.updateData(ip,'busy')
            return d
        else:
            return "NA"
</code></pre>
| 0 |
2016-09-15T19:49:13Z
| 39,520,688 |
<p>If you want to write asynchronous code in Twisted that looks more like synchronous code, try using <code>defer.inlineCallbacks</code>.</p>
<p>This is from the docs:
<a href="http://twisted.readthedocs.io/en/twisted-16.2.0/core/howto/defer-intro.html#inline-callbacks-using-yield" rel="nofollow">http://twisted.readthedocs.io/en/twisted-16.2.0/core/howto/defer-intro.html#inline-callbacks-using-yield</a></p>
<blockquote>
<p>Consider the following function written in the traditional Deferred
style:</p>
</blockquote>
<pre><code>def getUsers():
    d = makeRequest("GET", "/users")
    d.addCallback(json.loads)
    return d
</code></pre>
<blockquote>
<p>using inlineCallbacks, we can write this as:</p>
</blockquote>
<pre><code>from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def getUsers(self):
    responseBody = yield makeRequest("GET", "/users")
    returnValue(json.loads(responseBody))
</code></pre>
<p>EDIT:</p>
<pre><code>@inlineCallbacks  # required for yield/returnValue to work
def checkResource(self, resource):
    """ use the message to inquire database
    then set the result to a ip variable
    """
    returned = yield self.units.find({'$and': [{'baseIP':resource},{'status':'free'}]},limit=1,fields={'_id':False,'baseIP':True})
    # replacing the callback function
    for ip in returned:
        if ip is not None:
            d = self.updateData(ip,'busy')  # if this returns a deferred, use yield again
            returnValue(d)
    returnValue("NA")
</code></pre>
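<p>What <code>inlineCallbacks</code> does under the hood can be pictured with a toy driver (an illustration only, not Twisted's actual implementation): the decorator runs the generator and sends each resolved value back in at the <code>yield</code>. Here every yielded value "resolves" to itself, standing in for a fired <code>Deferred</code>:</p>

```python
import json

def run_inline(gen):
    """Toy stand-in for @inlineCallbacks: resume the generator with each
    'resolved' value; here a yielded value resolves to itself immediately."""
    result = None
    try:
        while True:
            result = gen.send(result)
    except StopIteration as stop:
        return getattr(stop, 'value', None)

def get_users():
    # stands in for: responseBody = yield makeRequest("GET", "/users")
    body = yield '{"users": ["alice"]}'
    return json.loads(body)

assert run_inline(get_users()) == {'users': ['alice']}
```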
| 1 |
2016-09-15T21:32:12Z
|
[
"python",
"mongodb",
"twisted"
] |
passing items from list to an empty object in another list
| 39,519,289 |
<p>I want to pass items from lists to the empty object in store i.e. I want:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>I am getting an unintended result: </p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
counter = 0
for l in lists:
    for s in store:
        s.append(l)
</code></pre>
<p>which gives me:</p>
<pre><code>store = [[['a', 'b'], ['c', 'd']], [['a', 'b'], ['c', 'd']]]
</code></pre>
| -1 |
2016-09-15T19:52:03Z
| 39,519,348 |
<p>Store has two empty lists, and you're adding on to both of them. If you only want to add onto the first then</p>
<pre><code>for l in lists:
    store[0].append(l)
</code></pre>
| 0 |
2016-09-15T19:55:29Z
|
[
"python",
"list",
"nested-loops"
] |
passing items from list to an empty object in another list
| 39,519,289 |
<p>I want to pass items from lists to the empty object in store i.e. I want:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>I am getting an unintended result: </p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
counter = 0
for l in lists:
    for s in store:
        s.append(l)
</code></pre>
<p>which gives me:</p>
<pre><code>store = [[['a', 'b'], ['c', 'd']], [['a', 'b'], ['c', 'd']]]
</code></pre>
| -1 |
2016-09-15T19:52:03Z
| 39,519,380 |
<p>The nested for loop is overkill. You should instead <code>extend</code> the first sublist in <code>store</code> with <code>lists</code>:</p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
store[0].extend(lists)
# ^ indexing starts from 0
print(store)
# [[['a', 'b'], ['c', 'd']], []]
</code></pre>
<p>Have a look at more on <a href="https://docs.python.org/2/tutorial/datastructures.html#more-on-lists" rel="nofollow"><code>lists</code></a></p>
| 1 |
2016-09-15T19:57:32Z
|
[
"python",
"list",
"nested-loops"
] |
passing items from list to an empty object in another list
| 39,519,289 |
<p>I want to pass items from lists to the empty object in store i.e. I want:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>I am getting an unintended result: </p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
counter = 0
for l in lists:
    for s in store:
        s.append(l)
</code></pre>
<p>which gives me:</p>
<pre><code>store = [[['a', 'b'], ['c', 'd']], [['a', 'b'], ['c', 'd']]]
</code></pre>
| -1 |
2016-09-15T19:52:03Z
| 39,519,427 |
<p>This is very simple, man.
The best way to do this would be to simply assign <code>lists</code> to <code>store[0]</code>:</p>
<pre><code>store[0] = lists
</code></pre>
<p>is all you need to do.</p>
| 0 |
2016-09-15T20:01:51Z
|
[
"python",
"list",
"nested-loops"
] |
passing items from list to an empty object in another list
| 39,519,289 |
<p>I want to pass items from lists to the empty object in store i.e. I want:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>I am getting an unintended result: </p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
counter = 0
for l in lists:
    for s in store:
        s.append(l)
</code></pre>
<p>which gives me:</p>
<pre><code>store = [[['a', 'b'], ['c', 'd']], [['a', 'b'], ['c', 'd']]]
</code></pre>
| -1 |
2016-09-15T19:52:03Z
| 39,519,625 |
<p>How about simply doing this:</p>
<pre><code>store = [lists, []]
# value of 'store' = [[['a', 'b'], ['c', 'd']], []]
</code></pre>
<p>I believe, based on your question, that the desired output is:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>from the input list:</p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
</code></pre>
| 0 |
2016-09-15T20:15:22Z
|
[
"python",
"list",
"nested-loops"
] |
passing items from list to an empty object in another list
| 39,519,289 |
<p>I want to pass items from lists to the empty object in store i.e. I want:</p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>I am getting an unintended result: </p>
<pre><code>lists = [['a', 'b'], ['c', 'd']]
store = [[], []]
counter = 0
for l in lists:
    for s in store:
        s.append(l)
</code></pre>
<p>which gives me:</p>
<pre><code>store = [[['a', 'b'], ['c', 'd']], [['a', 'b'], ['c', 'd']]]
</code></pre>
| -1 |
2016-09-15T19:52:03Z
| 39,519,688 |
<p>If </p>
<pre><code>store = [ [['a', 'b'], ['c', 'd']], [] ]
</code></pre>
<p>is indeed what you want, then you've overshot the mark. The inner loop is unnecessary and will execute your code for <em>each</em> item in <code>store</code>. You have two empty lists in <code>store</code>, thus creating two populated lists in <code>store</code> after the code has run. To just do the first one, you want</p>
<pre><code>for l in lists:
    store[0].append(l)
</code></pre>
<p>Reading your question though I'm not 100% positive that's what you're actually after, esp. given your otherwise mysterious inner loop.</p>
<p>I read "I want to pass items from lists to the empty object in store" as possibly meaning you're trying to take items from two lists in <code>lists</code> and make them one list in <code>store</code>. If that's what you want, something like this would do the trick:</p>
<pre><code>for l in lists:
    for i in l:
        store[0].append(i)
</code></pre>
<p>which gives you:</p>
<pre><code>[['a', 'b', 'c', 'd'], []]
</code></pre>
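<p>The flattening variant in the last snippet can also be written without the explicit nested loop, using <code>itertools.chain</code> (a stylistic alternative, same result):</p>

```python
from itertools import chain

lists = [['a', 'b'], ['c', 'd']]
store = [[], []]

# chain.from_iterable yields a, b, c, d in order; extend consumes them.
store[0].extend(chain.from_iterable(lists))
assert store == [['a', 'b', 'c', 'd'], []]
```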
| 1 |
2016-09-15T20:19:42Z
|
[
"python",
"list",
"nested-loops"
] |
How do I improve my quick sort pivot selection in python?
| 39,519,343 |
<p>I was originally using only a single random pivot given by </p>
<pre><code>pivots = random.randrange(l,r)
</code></pre>
<p>Here l and r will be integers that define my range</p>
<p>I wanted to improve the run time by greatly increasing the likelihood that my pivot would be a good one, by selecting the median of three random pivots. Below is the code I used; it caused my run time to increase by 20%-30%.</p>
<pre><code>rr = random.randrange
pivots = [ rr(l,r) for i in range(3) ]
pivots.sort()
</code></pre>
<p>How do I implement the above to be much faster?</p>
<p>Edit: Entire code added below</p>
<pre><code>import random

def quicksort(array, l=0, r=-1):
    # array is the list to sort; it is passed by reference (this is new to me, try not to suck)
    # l is the left bound of the range to act on
    # r is the right bound of the range to act on
    if r == -1:
        r = len(array)
    # base case
    if r-l <= 1:
        return
    # pick the median of 3 possible pivots
    #pivots = [ random.randrange(l,r) for i in range(3) ]
    rr = random.randrange
    pivots = [ rr(l,r) for i in range(3) ]
    pivots.sort()
    i = l+1  # barrier between below and above pivot, first higher element
    array[l], array[pivots[1]] = array[pivots[1]], array[l]
    for j in range(l+1,r):
        if array[j] < array[l]:
            array[i], array[j] = array[j], array[i]
            i = i+1
    array[l], array[i-1] = array[i-1], array[l]
    quicksort(array, l, i-1)
    quicksort(array, i, r)
    return array
</code></pre>
<p>Edit 2:
This is the corrected code. There was an error in the algorithm for picking the 3 pivots.</p>
<pre><code>import random

def quicksort(array, l=0, r=-1):
    # array is the list to sort; it is passed by reference (this is new to me, try not to suck)
    # l is the left bound of the range to act on
    # r is the right bound of the range to act on
    if r == -1:
        r = len(array)
    # base case
    if r-l <= 1:
        return
    # pick the median of 3 possible pivots
    mid = int((l+r)*0.5)
    pivot = 0
    #pivots = [ l, mid, r-1]
    if array[l] > array[mid]:
        if array[r-1] > array[l]:
            pivot = l
        elif array[mid] > array[r-1]:
            pivot = mid
    else:
        if array[r-1] > array[mid]:
            pivot = mid
        else:
            pivot = r-1
    i = l+1  # barrier between below and above pivot, first higher element
    array[l], array[pivot] = array[pivot], array[l]
    for j in range(l+1,r):
        if array[j] < array[l]:
            array[i], array[j] = array[j], array[i]
            i = i+1
    array[l], array[i-1] = array[i-1], array[l]
    quicksort(array, l, i-1)
    quicksort(array, i, r)
    return array
</code></pre>
</code></pre>
| 4 |
2016-09-15T19:54:59Z
| 39,519,585 |
<p>Though it can be outperformed by random choice on occasion, it's still worth looking into the <a href="https://en.wikipedia.org/wiki/Median_of_medians" rel="nofollow">median-of-medians</a> algorithm for pivot selection (and rank selection in general), which runs in O(n) time. It's not too far off of what you are currently doing, but there is a stronger assurance behind it that it picks a "good" pivot as opposed to just taking the median of three random numbers.</p>
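<p>For reference, a compact (unoptimized) sketch of median-of-medians rank selection; this illustrates the linked algorithm, not drop-in pivot code for the question's quicksort:</p>

```python
def select(a, k):
    """Return the k-th smallest element of a (0-based), O(n) worst case."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Medians of groups of five; the pivot is their recursive median.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    low = [x for x in a if x < pivot]
    high = [x for x in a if x > pivot]
    if k < len(low):
        return select(low, k)
    elif k < len(a) - len(high):
        return pivot               # k falls among elements equal to the pivot
    else:
        return select(high, k - (len(a) - len(high)))

perm = [(i * 37) % 101 for i in range(101)]   # a permutation of 0..100
assert select(perm, 50) == 50
```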
| 1 |
2016-09-15T20:12:04Z
|
[
"python",
"pivot",
"quicksort"
] |
How do I improve my quick sort pivot selection in python?
| 39,519,343 |
<p>I was originally using only a single random pivot given by </p>
<pre><code>pivots = random.randrange(l,r)
</code></pre>
<p>Here l and r will be integers that define my range</p>
<p>I wanted to improve the run time by greatly increasing the likelihood that my pivot would be a good one, by selecting the median of three random pivots. Below is the code I used; it caused my run time to increase by 20%-30%.</p>
<pre><code>rr = random.randrange
pivots = [ rr(l,r) for i in range(3) ]
pivots.sort()
</code></pre>
<p>How do I implement the above to be much faster?</p>
<p>Edit: Entire code added below</p>
<pre><code>import random

def quicksort(array, l=0, r=-1):
    # array is the list to sort; it is passed by reference (this is new to me, try not to suck)
    # l is the left bound of the range to act on
    # r is the right bound of the range to act on
    if r == -1:
        r = len(array)
    # base case
    if r-l <= 1:
        return
    # pick the median of 3 possible pivots
    #pivots = [ random.randrange(l,r) for i in range(3) ]
    rr = random.randrange
    pivots = [ rr(l,r) for i in range(3) ]
    pivots.sort()
    i = l+1  # barrier between below and above pivot, first higher element
    array[l], array[pivots[1]] = array[pivots[1]], array[l]
    for j in range(l+1,r):
        if array[j] < array[l]:
            array[i], array[j] = array[j], array[i]
            i = i+1
    array[l], array[i-1] = array[i-1], array[l]
    quicksort(array, l, i-1)
    quicksort(array, i, r)
    return array
</code></pre>
<p>Edit 2:
This is the corrected code. There was an error in the algorithm for picking the 3 pivots.</p>
<pre><code>import random

def quicksort(array, l=0, r=-1):
    # array is the list to sort; it is passed by reference (this is new to me, try not to suck)
    # l is the left bound of the range to act on
    # r is the right bound of the range to act on
    if r == -1:
        r = len(array)
    # base case
    if r-l <= 1:
        return
    # pick the median of 3 possible pivots
    mid = int((l+r)*0.5)
    pivot = 0
    #pivots = [ l, mid, r-1]
    if array[l] > array[mid]:
        if array[r-1] > array[l]:
            pivot = l
        elif array[mid] > array[r-1]:
            pivot = mid
    else:
        if array[r-1] > array[mid]:
            pivot = mid
        else:
            pivot = r-1
    i = l+1  # barrier between below and above pivot, first higher element
    array[l], array[pivot] = array[pivot], array[l]
    for j in range(l+1,r):
        if array[j] < array[l]:
            array[i], array[j] = array[j], array[i]
            i = i+1
    array[l], array[i-1] = array[i-1], array[l]
    quicksort(array, l, i-1)
    quicksort(array, i, r)
    return array
</code></pre>
| 4 |
2016-09-15T19:54:59Z
| 39,519,787 |
<p>You could choose the pivot in this way:</p>
<pre><code>alen = len(array)
pivots = [[array[0], 0], [array[alen//2], alen//2], [array[alen-1], alen-1]]
pivots.sort(key=lambda tup: tup[0])  # sort by the first element of each pair (the value)
pivot = pivots[1][1]                 # index of the median-valued candidate
</code></pre>
<p>Example:</p>
<p><a href="http://i.stack.imgur.com/ps9d9.png" rel="nofollow"><img src="http://i.stack.imgur.com/ps9d9.png" alt="enter image description here"></a></p>
| 2 |
2016-09-15T20:26:33Z
|
[
"python",
"pivot",
"quicksort"
] |
How to get the response of calling a redis command
| 39,519,369 |
<p>I'm using Python 2.6.6 and I can't upgrade.</p>
<p>I had it working fine with <code>subprocess.check_output</code>, but I didn't realize we are using Python 2.6.6 (which doesn't have <code>check_output</code>), and I can't upgrade it on my end.</p>
<p>I tried this:</p>
<pre><code>command = "redis-cli hget some_key some_field"
command_output = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
</code></pre>
<p>But I don't think I am passing in the commands correctly; the docs have something like:</p>
<pre><code>subprocess.Popen( ['ls', 'li'], ..)
</code></pre>
| 0 |
2016-09-15T19:56:28Z
| 39,519,415 |
<p>Yup, you should use lists to give your command line.</p>
<pre><code>command = ["redis-cli", "hget", "some_key", "some_field"]
command_output = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
</code></pre>
<p>A relevant note from Moses Koledoye:</p>
<blockquote>
<p><code>command.split()</code> does the conversion to list.</p>
</blockquote>
<p>The reason they did this is pretty simple: every argument given will be directly forwarded to the <code>redis-cli</code> program. Let's go into detail:</p>
<p>Imagine you have this:</p>
<pre><code>key = raw_input("Please give me the key:")
os.system('redis-cli bla bla %s' % key) # DON'T DO THIS AT HOME.
</code></pre>
<p>Now imagine I am a malicious guy, and I input the following: <code>|| echo "hacked by h4x0r"</code>. The final command will look like this:</p>
<pre><code>redis-cli bla bla || echo "hacked by h4x0r"
</code></pre>
<p>Use list. Really.</p>
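<p>If the command ever arrives as a single string, <code>shlex.split</code> converts it to a list while respecting quoting, unlike a plain <code>.split()</code> (a side note, using a hypothetical quoted key for illustration):</p>

```python
import shlex

command = 'redis-cli hget "some key" some_field'
assert shlex.split(command) == ['redis-cli', 'hget', 'some key', 'some_field']
assert command.split() != shlex.split(command)  # naive split breaks the quoted key
```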
| 1 |
2016-09-15T20:00:33Z
|
[
"python"
] |
Python Selenium: Firefox neverAsk.saveToDisk when downloading from Blob URL
| 39,519,518 |
<p>I wish to have Firefox, driven by <code>selenium</code> for Python, download the <em>Master data (Download, XLSX)</em> Excel file from this <a href="http://www.xetra.com/xetra-en/instruments/etf-exchange-traded-funds/list-of-tradable-etfs" rel="nofollow">Frankfurt stock exchange webpage</a>.</p>
<p>The problem: I can't get Firefox to download the file without asking where to save it first.</p>
<p>Let me first point out that the URL I'm trying to get the Excel file from, is really a Blob URL:</p>
<blockquote>
<p><a href="http://www.xetra.com/blob/1193366/b2f210876702b8e08e40b8ecb769a02e/data/All-tradable-ETFs-ETCs-and-ETNs.xlsx" rel="nofollow">http://www.xetra.com/blob/1193366/b2f210876702b8e08e40b8ecb769a02e/data/All-tradable-ETFs-ETCs-and-ETNs.xlsx</a></p>
</blockquote>
<p>Perhaps the Blob is causing my problem? Or, perhaps the problem is in my MIME handling?</p>
<pre><code>from selenium import webdriver
profile_dir = "path/to/ff_profile"
dl_dir = "path/to/dl/folder"
ff_profile = webdriver.FirefoxProfile(profile_dir)
ff_profile.set_preference("browser.download.folderList", 2)
ff_profile.set_preference("browser.download.manager.showWhenStarting", False)
ff_profile.set_preference("browser.download.dir", dl_dir)
ff_profile.set_preference('browser.helperApps.neverAsk.saveToDisk', "text/plain, application/vnd.ms-excel, text/csv, text/comma-separated-values, application/octet-stream")
driver = webdriver.Firefox(ff_profile)
url = "http://www.xetra.com/xetra-en/instruments/etf-exchange-traded-funds/list-of-tradable-etfs"
driver.get(url)
dl_link = driver.find_element_by_partial_link_text("Master data")
dl_link.click()
</code></pre>
| 1 |
2016-09-15T20:07:57Z
| 39,519,560 |
<p>The actual mime-type to be used in this case is:</p>
<pre><code>application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
</code></pre>
<hr>
<p>How do I know that? Here is what I've done:</p>
<ul>
<li>opened Firefox manually and navigated to the target site</li>
<li>when downloading the file, checked the checkbox to save these kind of files automatically</li>
<li>went to Help -> Troubleshooting Information and navigated to the "Profile Folder"</li>
<li>in the profile folder, found and opened <code>mimetypes.rdf</code></li>
<li>inside the <code>mimetypes.rdf</code> found the record/resource corresponding to the excel file I've recently downloaded</li>
</ul>
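<p>A sketch of the question's profile setup with that MIME type added (the download directory is a placeholder; the Selenium import is kept inside the function so the preference string itself can be inspected without a browser installed):</p>

```python
XLSX_MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
NEVER_ASK = ", ".join([
    "text/plain",
    "application/vnd.ms-excel",
    "text/csv",
    XLSX_MIME,  # the type actually served for the Blob .xlsx download
])

def make_profile(dl_dir):
    # Requires Selenium + Firefox, hence the local import.
    from selenium import webdriver
    profile = webdriver.FirefoxProfile()
    profile.set_preference("browser.download.folderList", 2)
    profile.set_preference("browser.download.dir", dl_dir)
    profile.set_preference("browser.helperApps.neverAsk.saveToDisk", NEVER_ASK)
    return profile
```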
| 1 |
2016-09-15T20:10:47Z
|
[
"python",
"selenium",
"firefox",
"blob"
] |
Using variable as part of name of new file in python
| 39,519,599 |
<p>I'm fairly new to python and I'm having an issue with my python script (split_fasta.py). Here is an example of my issue:</p>
<pre><code>list = ["1.fasta", "2.fasta", "3.fasta"]
for file in list:
contents = open(file, "r")
for line in contents:
if line[0] == ">":
new_file = open(file + "_chromosome.fasta", "w")
new_file.write(line)
</code></pre>
<p>I've left the bottom part of the program out because it's not needed. My issue is that when I run this program in the same directory as my fasta files, it works great:</p>
<blockquote>
<p>python split_fasta.py *.fasta</p>
</blockquote>
<p>But if I'm in a different directory and I want the program to output the new files (eg. 1.fasta_chromosome.fasta) to my current directory...it doesn't:</p>
<blockquote>
<p>python /home/bin/split_fasta.py /home/data/*.fasta</p>
</blockquote>
<p>This still creates the new files in the same directory as the fasta files. The issue here I'm sure is with this line:</p>
<pre><code>new_file = open(file + "_chromosome.fasta", "w")
</code></pre>
<p>Because if I change it to this:</p>
<pre><code>new_file = open("seq" + "_chromosome.fasta", "w")
</code></pre>
<p>It creates an output file in my current directory.</p>
<p>I hope this makes sense to some of you and that I can get some suggestions.</p>
| 0 |
2016-09-15T20:13:11Z
| 39,519,785 |
<p>You are giving the full path of the old file, plus a new name. So basically, if <code>file == /home/data/something.fasta</code>, the output file will be <code>file + "_chromosome.fasta"</code> which is <code>/home/data/something.fasta_chromosome.fasta</code></p>
<p>If you use <code>os.path.basename</code> on <code>file</code>, you will get the name of the file (i.e. in my example, <code>something.fasta</code>)</p>
<p>From @Adam Smith</p>
<blockquote>
<p>You can use <code>os.path.splitext</code> to get rid of the <code>.fasta</code></p>
<pre><code>basename, _ = os.path.splitext(os.path.basename(file))
</code></pre>
</blockquote>
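<p>A quick illustration of what the two calls return, using a hypothetical path:</p>

```python
import os

path = "/home/data/1.fasta"          # hypothetical input path
name = os.path.basename(path)        # '1.fasta'
base, ext = os.path.splitext(name)   # ('1', '.fasta')
new_name = base + "_chromosome.fasta"
print(new_name)                      # 1_chromosome.fasta
```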
<hr>
<p>Getting back to the code example, I saw many things not recommended in Python. I'll go in details.</p>
<p>Avoid shadowing builtin names, such as <code>list</code>, <code>str</code>, <code>int</code>... It is not explicit and can lead to potential issues later.</p>
<p>When opening a file for reading or writing, you should use the <code>with</code> syntax. This is highly recommended since it takes care to close the file.</p>
<pre><code>with open(filename, "r") as f:
data = f.read()
with open(new_filename, "w") as f:
f.write(data)
</code></pre>
<p>If you have an empty line in your file, <code>line[0] == ...</code> will result in an <code>IndexError</code> exception. Use <code>line.startswith(...)</code> instead.</p>
<p>Final code :</p>
<pre><code>import os

files = ["1.fasta", "2.fasta", "3.fasta"]
for file in files:
    with open(file, "r") as input:
        for line in input:
            if line.startswith(">"):
                # splitext returns a (root, ext) tuple, so keep only the root
                base, _ = os.path.splitext(os.path.basename(file))
                new_name = base + "_chromosome.fasta"
                with open(new_name, "w") as output:
                    output.write(line)
</code></pre>
<p>Often, people come at me and say "<em>that's ugly</em>". Not really :). The levels of indentation make it clear which context is which.</p>
| 2 |
2016-09-15T20:26:27Z
|
[
"python",
"fasta"
] |
Python compare md5 hash
| 39,519,734 |
<p>I'm using the following code I found on stackoverflow which suggested is an effective way to get the md5 hash of the contents of a text file and comparing with the generated md5 hash I got from <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a></p>
<p>However, it isn't returning the same md5 hash and I'm not sure where I've gone wrong. The file contents are an exact match of the text I used to generate the md5 hash, so it should match, but it doesn't.</p>
<p>Basically, I wanted to generate a md5 hash of some text and compare it with the contents of a text file to see if it matches.</p>
<pre><code>def md5Checksum(filePath):
with open(filePath, 'rb') as fh:
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
</code></pre>
<p>If I create a text file with the contents "test" and also go to <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a> and type in "test" and generate a hash then compare both they are both different.</p>
<p>The hash I'm getting back is always the same no matter the contents of the file.</p>
<p>code to compare hash</p>
<pre><code>filetext = 'LOCATIONTOFILE.txt'
filemd5 = '098f6bcd4621d373cade4e832627b4f6'
if not filemd5 == md5Checksum(filetxt):
</code></pre>
<p>I've tried printing the data and both data are exactly the same too.</p>
<p>hash of <code>test</code> from website: 098f6bcd4621d373cade4e832627b4f6</p>
<p>hash of text file with the contents <code>test</code> d41d8cd98f00b204e9800998ecf8427e</p>
<p><strong>UPDATE</strong></p>
<p>Fixed the issue thanks to Adam Smith.</p>
<p>It was an indentation typo, so the updated hash wasn't being returned.</p>
| 1 |
2016-09-15T20:23:22Z
| 39,519,897 |
<p>The issue could be with newlines. If your file ends in a newline <code>"test\n"</code>, the MD5 hash would be <code>d8e8fca2dc0f896fd7cb4cb0031ba249</code>.</p>
<p>Line endings can also differ whether you are on a Windows or Unix system.</p>
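<p>You can verify this directly; the digests below match the ones quoted in the question and above:</p>

```python
import hashlib

h_bare = hashlib.md5(b"test").hexdigest()
h_newline = hashlib.md5(b"test\n").hexdigest()
print(h_bare)     # 098f6bcd4621d373cade4e832627b4f6
print(h_newline)  # d8e8fca2dc0f896fd7cb4cb0031ba249
```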
| 2 |
2016-09-15T20:33:22Z
|
[
"python",
"hash",
"md5sum"
] |
Python compare md5 hash
| 39,519,734 |
<p>I'm using the following code I found on stackoverflow which suggested is an effective way to get the md5 hash of the contents of a text file and comparing with the generated md5 hash I got from <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a></p>
<p>However, it isn't returning the same md5 hash and I'm not sure where I've gone wrong. The file contents are an exact match of the text I used to generate the md5 hash, so it should match, but it doesn't.</p>
<p>Basically, I wanted to generate a md5 hash of some text and compare it with the contents of a text file to see if it matches.</p>
<pre><code>def md5Checksum(filePath):
with open(filePath, 'rb') as fh:
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
</code></pre>
<p>If I create a text file with the contents "test" and also go to <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a> and type in "test" and generate a hash then compare both they are both different.</p>
<p>The hash I'm getting back is always the same no matter the contents of the file.</p>
<p>code to compare hash</p>
<pre><code>filetext = 'LOCATIONTOFILE.txt'
filemd5 = '098f6bcd4621d373cade4e832627b4f6'
if not filemd5 == md5Checksum(filetxt):
</code></pre>
<p>I've tried printing the data and both data are exactly the same too.</p>
<p>hash of <code>test</code> from website: 098f6bcd4621d373cade4e832627b4f6</p>
<p>hash of text file with the contents <code>test</code> d41d8cd98f00b204e9800998ecf8427e</p>
<p><strong>UPDATE</strong></p>
<p>Fixed the issue thanks to Adam Smith.</p>
<p>It was an indentation typo, so the updated hash wasn't being returned.</p>
| 1 |
2016-09-15T20:23:22Z
| 39,519,914 |
<p>With only the text of <code>test</code>, (no blank line after) in both the web generator and Python I get the MD5 hash of:</p>
<pre><code>098f6bcd4621d373cade4e832627b4f6
</code></pre>
<p>If I add a carriage return / new line (\n) afterwards I get:</p>
<pre><code>d8e8fca2dc0f896fd7cb4cb0031ba249 # Using the web site
9f06243abcb89c70e0c331c61d871fa7 # Using a Windows machine
d8e8fca2dc0f896fd7cb4cb0031ba249 # Using a Linux machine
</code></pre>
<p>The difference is caused by type of carriage return / line feed. DOS/Windows <code>('\r\n')</code> -- Linux <code>('\n')</code></p>
<p><a href="http://www.cs.toronto.edu/~krueger/csc209h/tut/line-endings.html" rel="nofollow">http://www.cs.toronto.edu/~krueger/csc209h/tut/line-endings.html</a> </p>
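<p>If you want the file digest to match the bare text regardless of platform, one option (a sketch, not part of the original checksum code) is to normalize line endings and strip trailing newlines before hashing:</p>

```python
import hashlib

def md5_of_text(data):
    # Normalize Windows (\r\n) and old-Mac (\r) endings, then drop any
    # trailing newlines so "test", "test\n" and "test\r\n" all hash alike.
    data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n").rstrip(b"\n")
    return hashlib.md5(data).hexdigest()
```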
| 2 |
2016-09-15T20:34:13Z
|
[
"python",
"hash",
"md5sum"
] |
Python compare md5 hash
| 39,519,734 |
<p>I'm using the following code I found on stackoverflow which suggested is an effective way to get the md5 hash of the contents of a text file and comparing with the generated md5 hash I got from <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a></p>
<p>However, it isn't returning the same md5 hash and I'm not sure where I've gone wrong. The file contents are an exact match of the text I used to generate the md5 hash, so it should match, but it doesn't.</p>
<p>Basically, I wanted to generate a md5 hash of some text and compare it with the contents of a text file to see if it matches.</p>
<pre><code>def md5Checksum(filePath):
with open(filePath, 'rb') as fh:
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
</code></pre>
<p>If I create a text file with the contents "test" and also go to <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a> and type in "test" and generate a hash then compare both they are both different.</p>
<p>The hash I'm getting back is always the same no matter the contents of the file.</p>
<p>code to compare hash</p>
<pre><code>filetext = 'LOCATIONTOFILE.txt'
filemd5 = '098f6bcd4621d373cade4e832627b4f6'
if not filemd5 == md5Checksum(filetxt):
</code></pre>
<p>I've tried printing the data and both data are exactly the same too.</p>
<p>hash of <code>test</code> from website: 098f6bcd4621d373cade4e832627b4f6</p>
<p>hash of text file with the contents <code>test</code> d41d8cd98f00b204e9800998ecf8427e</p>
<p><strong>UPDATE</strong></p>
<p>Fixed the issue thanks to Adam Smith.</p>
<p>It was an indentation typo, so the updated hash wasn't being returned.</p>
| 1 |
2016-09-15T20:23:22Z
| 39,519,916 |
<p>Are you sure that your size param is large enough (I can't imagine it wouldn't be, but worth checking)? When I test your code above with a simple value and compare with a standard MD5 hash (using miraclesalad or whatever), I get back a correct response. Carriage returns or special characters could be of some concern too.</p>
| 1 |
2016-09-15T20:34:19Z
|
[
"python",
"hash",
"md5sum"
] |
Python compare md5 hash
| 39,519,734 |
<p>I'm using the following code I found on stackoverflow which suggested is an effective way to get the md5 hash of the contents of a text file and comparing with the generated md5 hash I got from <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a></p>
<p>However, it isn't returning the same md5 hash and I'm not sure where I've gone wrong. The file contents are an exact match of the text I used to generate the md5 hash, so it should match, but it doesn't.</p>
<p>Basically, I wanted to generate a md5 hash of some text and compare it with the contents of a text file to see if it matches.</p>
<pre><code>def md5Checksum(filePath):
with open(filePath, 'rb') as fh:
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
</code></pre>
<p>If I create a text file with the contents "test" and also go to <a href="http://www.miraclesalad.com/webtools/md5.php" rel="nofollow">http://www.miraclesalad.com/webtools/md5.php</a> and type in "test" and generate a hash then compare both they are both different.</p>
<p>The hash I'm getting back is always the same no matter the contents of the file.</p>
<p>code to compare hash</p>
<pre><code>filetext = 'LOCATIONTOFILE.txt'
filemd5 = '098f6bcd4621d373cade4e832627b4f6'
if not filemd5 == md5Checksum(filetxt):
</code></pre>
<p>I've tried printing the data and both data are exactly the same too.</p>
<p>hash of <code>test</code> from website: 098f6bcd4621d373cade4e832627b4f6</p>
<p>hash of text file with the contents <code>test</code> d41d8cd98f00b204e9800998ecf8427e</p>
<p><strong>UPDATE</strong></p>
<p>Fixed the issue thanks to Adam Smith.</p>
<p>It was a ident typo and so wasn't returning the updated hashlib.</p>
| 1 |
2016-09-15T20:23:22Z
| 39,519,990 |
<p>On windows, I did the following to reproduce.</p>
<pre><code>C:\Users\adsmith\tmp>echo test>test.txt
</code></pre>
<p>Then in Python:</p>
<pre><code>>>> import hashlib
>>> a = hashlib.md5()
>>> b = hashlib.md5()
>>> with open("test.txt", "rb") as fh:
... data = fh.read()
... a.update(data)
...
>>> with open("test.txt", "rb") as fh:
... data = fh.read().strip()
... b.update(data)
...
>>> print(a.hexdigest(), "\n", b.hexdigest())
'9f06243abcb89c70e0c331c61d871fa7' # from b'test\r\n'
'098f6bcd4621d373cade4e832627b4f6' # from b'test'
</code></pre>
<p>The issue is clearly caused by the line terminator in your file. This should also be a warning not to use lower-level constructs like <code>file.read(bytecount)</code> unless you have to!</p>
<pre><code>>>> open("test.txt", 'rb').read()
# b'test\r\n'
</code></pre>
| 1 |
2016-09-15T20:39:01Z
|
[
"python",
"hash",
"md5sum"
] |
Django can't convert 'Recipe_instruction' object to str implicitly
| 39,519,796 |
<p>I am developing an application in django. Here is my models.py and views.py code:</p>
<pre><code>#models.py
class Recipe_instruction(models.Model):
content = models.TextField(max_length=500)
recipe = models.ForeignKey(Recipe, on_delete=models.CASCADE)
order = models.IntegerField(max_length=500)
class Meta:
app_label='recipe_base'
def __str__(self):
return self.content
</code></pre>
<h1>views.py</h1>
<pre><code>#create recipes_dict
...
recipe_instructions = Recipe_instruction.objects.filter(recipe = recipe)
recipe_instructions_string = ""
for recipe_instruction in recipe_instructions:
recipe_instructions_string = recipe_instructions_string + recipe_instruction.content
...
</code></pre>
<p>My goal is to get all the recipe instructions and pin them together into a single string <code>recipe_instructions_string</code></p>
<p>But when I run my views.py, it gives me the following error:</p>
<pre><code>recipe_instructions_string = recipe_instructions_string + recipe_instruction.content
TypeError: Can't convert 'Recipe_instruction' object to str implicitly
</code></pre>
<p>Can anyone tell me whats going on?</p>
<p>As recipe_instruction.content is a text field so I shouldn't need to convert it again into a string as its already a string.</p>
<p><strong>TRACEBACK:</strong></p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/root/worker/worker/views.py", line 500, in Task1
recipe_instructions_string = recipe_instructions_string + recipe_instruction.content
TypeError: Can't convert 'Recipe_instruction' object to str implicitly
</code></pre>
| 1 |
2016-09-15T20:27:26Z
| 39,520,226 |
<p>The problem isn't with the code shown here, but while we're looking at it, try changing the entire block to</p>
<pre><code>instructions = Recipe_instruction.objects.filter(recipe=recipe).values_list('content',
flat=True)
recipe_instructions_string = "".join(instructions)
</code></pre>
<p>This would stop the error from occurring if it were here, and be more efficient.</p>
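<p>With <code>flat=True</code> the queryset iterates as plain strings rather than 1-tuples, so the join works directly. With a hypothetical query result it behaves like:</p>

```python
# What values_list('content', flat=True) effectively yields: bare strings.
instructions = ["Preheat the oven. ", "Mix flour and eggs. "]  # hypothetical
recipe_instructions_string = "".join(instructions)
```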
| 1 |
2016-09-15T20:56:32Z
|
[
"python",
"django"
] |
Making Combination from lists using itertools
| 39,519,871 |
<pre><code>import itertools
a = [[2, 3], [3, 4]]
b = [[5, 6], [7, 8], [9, 10]]
c = [[11, 12], [13, 14]]
d = [[15, 16], [17, 18]]
e = [[12,16],[13,17],[14,18],[15,19]]
q=[]
q = list(itertools.combinations((a, b, b, c, c, d, e), 7))
print q
</code></pre>
<p>How would I go about using the combination function from itertools properly to use list a one time, b 2 times without replacement, c 2 times without replacement, and d and e one time each?</p>
<pre><code>[[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[13,17]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[14,18]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[15,19]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],...
[[3, 4],[7, 8],[9, 10],[11, 12], [13, 14],[17, 18],[15,19]]]
</code></pre>
| 2 |
2016-09-15T20:32:09Z
| 39,520,003 |
<p><strong>Updated given clarification of expected output</strong>:</p>
<p><a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow">You want <code>itertools.product</code></a>:</p>
<pre><code>itertools.product(a, b, b, c, c, c, c, d, e)
</code></pre>
<p>Which will pick one element from each of its arguments on each iteration, cycling the rightmost element the fastest, leftmost slowest.</p>
<p>You can use extended argument unpacking to express the repetition of certain arguments a little more obviously in Python 3:</p>
<pre><code>itertools.product(a, *[b]*2, *[c]*4, d, e)
</code></pre>
<p>Or use <a href="http://stackoverflow.com/a/39520252/364696">tobias_k's solution</a> for more general repetition of sequences (that will also work on Py2).</p>
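<p>A small check of the cycling behaviour with the question's <code>a</code> and <code>b</code> (note this allows the same element of <code>b</code> twice, which is why the without-replacement case needs <code>combinations</code>):</p>

```python
import itertools

a = [[2, 3], [3, 4]]
b = [[5, 6], [7, 8], [9, 10]]
combos = list(itertools.product(a, b, b))
# 2 * 3 * 3 = 18 tuples; the rightmost b cycles fastest.
```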
| 1 |
2016-09-15T20:39:50Z
|
[
"python",
"list",
"combinations",
"itertools"
] |
Making Combination from lists using itertools
| 39,519,871 |
<pre><code>import itertools
a = [[2, 3], [3, 4]]
b = [[5, 6], [7, 8], [9, 10]]
c = [[11, 12], [13, 14]]
d = [[15, 16], [17, 18]]
e = [[12,16],[13,17],[14,18],[15,19]]
q=[]
q = list(itertools.combinations((a, b, b, c, c, d, e), 7))
print q
</code></pre>
<p>How would I go about using the combination function from itertools properly to use list a one time, b 2 times without replacement, c 2 times without replacement, and d and e one time each?</p>
<pre><code>[[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[13,17]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[14,18]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[15,19]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],...
[[3, 4],[7, 8],[9, 10],[11, 12], [13, 14],[17, 18],[15,19]]]
</code></pre>
| 2 |
2016-09-15T20:32:09Z
| 39,520,252 |
<p>It seems like you are looking for a combination of <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow"><code>combinations</code></a> and <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>product</code></a>: Use <code>combinations</code> to get the possible combinations without replacement for the repeated lists, then use <code>product</code> to combine all those combinations. You can put the lists and counts in two lists, <code>zip</code> those lists, and use a generator expression to get all the combinations.</p>
<pre><code>from itertools import product, combinations, chain
lists = [a,b,c,d,e]
counts = [1,2,2,1,1]
combs = product(*(combinations(l, c) for l, c in zip(lists, counts)))
</code></pre>
<p>For this example, the <code>combs</code> generator has 48 elements, among others:</p>
<pre><code>[(([2, 3],), ([5, 6], [7, 8]), ([11, 12], [13, 14]), ([15, 16],), ([12, 16],)),
...
(([2, 3],), ([5, 6], [7, 8]), ([11, 12], [13, 14]), ([17, 18],), ([15, 19],)),
(([2, 3],), ([5, 6], [9, 10]),([11, 12], [13, 14]), ([15, 16],), ([12, 16],)),
...
(([3, 4],), ([5, 6], [7, 8]), ([11, 12], [13, 14]), ([15, 16],), ([12, 16],)),
...
(([3, 4],), ([5, 6], [7, 8]), ([11, 12], [13, 14]), ([17, 18],), ([15, 19],)),
...
(([3, 4],), ([7, 8], [9, 10]),([11, 12], [13, 14]), ([17, 18],), ([15, 19],))]
</code></pre>
<p>If you want <a href="http://stackoverflow.com/q/952914/1639625">flattened lists</a>, just <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><code>chain</code></a> them:</p>
<pre><code>>>> combs = (list(chain(*p)) for p in product(*(combinations(l, c) for l, c in zip(lists, counts))))
>>> list(combs)
[[[2, 3], [5, 6], [7, 8], [11, 12], [13, 14], [15, 16], [12, 16]],
...
[[3, 4], [7, 8], [9, 10], [11, 12], [13, 14], [17, 18], [15, 19]]]
</code></pre>
| 2 |
2016-09-15T20:57:56Z
|
[
"python",
"list",
"combinations",
"itertools"
] |
Making Combination from lists using itertools
| 39,519,871 |
<pre><code>import itertools
a = [[2, 3], [3, 4]]
b = [[5, 6], [7, 8], [9, 10]]
c = [[11, 12], [13, 14]]
d = [[15, 16], [17, 18]]
e = [[12,16],[13,17],[14,18],[15,19]]
q=[]
q = list(itertools.combinations((a, b, b, c, c, d, e), 7))
print q
</code></pre>
<p>How would I go about using the combination function from itertools properly to use list a one time, b 2 times without replacement, c 2 times without replacement, and d and e one time each?</p>
<pre><code>[[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[13,17]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[14,18]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[15,19]],
[[2, 3],[5, 6],[7, 8],[11, 12],[13, 14],[15, 16],[12,16]],...
[[3, 4],[7, 8],[9, 10],[11, 12], [13, 14],[17, 18],[15,19]]]
</code></pre>
| 2 |
2016-09-15T20:32:09Z
| 39,520,465 |
<p>What you are trying to achieve is the <code>Cartesian product of input iterables</code> and not the combinations of the items present in the list. Hence you have to use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product()</code></a> instead.</p>
<p>In case repetition is allowed among the lists which used more than once, answer is simple: </p>
<pre><code>>>> import itertools
>>> a = [1,2]
>>> b = [3,4]
>>> [i for i in itertools.product(a, b, b)]
[(1, 3, 3), (1, 3, 4), (1, 4, 3), (1, 4, 4), (2, 3, 3), (2, 3, 4), (2, 4, 3), (2, 4, 4)]
</code></pre>
<p>But in case <code>repetition is not allowed</code> within the same lists, it becomes a little nastier and you need to combine the above answer with <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow"><code>combinations()</code></a> and <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><code>chain()</code></a> (same as mentioned by <a href="http://stackoverflow.com/users/1639625/tobias-k">tobias_k</a>). This code will give the list of all <code>combinations</code>:</p>
<pre><code>>>> from itertools import chain, product, combinations
>>> lists, counts = [a, b], [1, 2] # to track that a is to be used once, and b twice
>>> list(list(chain(*p)) for p in product(*(combinations(l, c) for l, c in zip(lists, counts))))
[[1, 3, 4], [2, 3, 4]]
</code></pre>
<p>However, in case you need permutations instead of combinations, you have to update the above code with <a href="https://docs.python.org/3/library/itertools.html#itertools.permutations" rel="nofollow"><code>permutations()</code></a>:</p>
<pre><code>>>> from itertools import chain, product, permutations
>>> list(list(chain(*p)) for p in product(*(permutations(l, c) for l, c in zip(lists, counts))))
[[1, 3, 4], [1, 4, 3], [2, 3, 4], [2, 4, 3]]
</code></pre>
| 1 |
2016-09-15T21:13:39Z
|
[
"python",
"list",
"combinations",
"itertools"
] |
Update dictionary if in list
| 39,519,910 |
<p>I'm running through an excel file reading line by line to create dictionaries and append them to a list, so I have a list like:</p>
<pre><code>myList = []
</code></pre>
<p>and a dictionary in this format:</p>
<pre><code>dictionary = {'name': 'John', 'code': 'code1', 'date': [123,456]}
</code></pre>
<p>so I do this: <code>myList.append(dictionary)</code>, so far so good. Now I'll go into the next line where I have a pretty similar dictionary:</p>
<pre><code>dictionary_two = {'name': 'John', 'code': 'code1', 'date': [789]}
</code></pre>
<p>I'd like to check if I already have a dictionary with <code>'name' = 'John'</code> in <code>myList</code> so I check it with this function:</p>
<pre><code>def checkGuy(dude_name):
return any(d['name'] == dude_name for d in myList)
</code></pre>
<p>Currently I'm writing this function to add the guys to the list:</p>
<pre><code>def addGuy(row_info):
if not checkGuy(row_info[1]):
myList.append({'name':row_info[1],'code':row_info[0],'date':[row_info[2]]})
else:
#HELP HERE
</code></pre>
<p>in this else I'd like to <code>dict.update(updated_dict)</code> but I don't know how to get the dictionary here.</p>
<p>Could someone help so <code>dictionary</code> appends the values of <code>dictionary_two</code>?</p>
| 0 |
2016-09-15T20:33:55Z
| 39,520,040 |
<p>I would modify <code>checkGuy</code> to something like:</p>
<pre><code>def findGuy(dude_name):
for d in myList:
if d['name'] == dude_name:
return d
else:
return None # or use pass
</code></pre>
<p>And then do:</p>
<pre><code>def addGuy(row_info):
guy = findGuy(row_info[1])
if guy is None:
myList.append({'name':row_info[1],'code':row_info[0],'date':[row_info[2]]})
else:
guy.update(updated_dict)
</code></pre>
| 2 |
2016-09-15T20:42:56Z
|
[
"python",
"dictionary"
] |
Update dictionary if in list
| 39,519,910 |
<p>I'm running through an excel file reading line by line to create dictionaries and append them to a list, so I have a list like:</p>
<pre><code>myList = []
</code></pre>
<p>and a dictionary in this format:</p>
<pre><code>dictionary = {'name': 'John', 'code': 'code1', 'date': [123,456]}
</code></pre>
<p>so I do this: <code>myList.append(dictionary)</code>, so far so good. Now I'll go into the next line where I have a pretty similar dictionary:</p>
<pre><code>dictionary_two = {'name': 'John', 'code': 'code1', 'date': [789]}
</code></pre>
<p>I'd like to check if I already have a dictionary with <code>'name' = 'John'</code> in <code>myList</code> so I check it with this function:</p>
<pre><code>def checkGuy(dude_name):
return any(d['name'] == dude_name for d in myList)
</code></pre>
<p>Currently I'm writing this function to add the guys to the list:</p>
<pre><code>def addGuy(row_info):
if not checkGuy(row_info[1]):
myList.append({'name':row_info[1],'code':row_info[0],'date':[row_info[2]]})
else:
#HELP HERE
</code></pre>
<p>in this else I'd like to <code>dict.update(updated_dict)</code> but I don't know how to get the dictionary here.</p>
<p>Could someone help so <code>dictionary</code> appends the values of <code>dictionary_two</code>?</p>
| 0 |
2016-09-15T20:33:55Z
| 39,520,233 |
<p>This answer builds on the comments, where it was suggested that if "name" is the only criterion to search on, it could be used as a key in a dictionary instead of using a list.</p>
<pre><code>master = {"John" : {'code': 'code1', 'date': [123,456]}}
def addGuy(row_info):
key = row_info[1]
code = row_info[0]
date = row_info[2]
if master.get(key):
master.get(key).update({"code": code, "date": date})
else:
master[key] = {"code": code, "date": date}
</code></pre>
| 0 |
2016-09-15T20:56:53Z
|
[
"python",
"dictionary"
] |
Update dictionary if in list
| 39,519,910 |
<p>I'm running through an excel file reading line by line to create dictionaries and append them to a list, so I have a list like:</p>
<pre><code>myList = []
</code></pre>
<p>and a dictionary in this format:</p>
<pre><code>dictionary = {'name': 'John', 'code': 'code1', 'date': [123,456]}
</code></pre>
<p>so I do this: <code>myList.append(dictionary)</code>, so far so good. Now I'll go into the next line where I have a pretty similar dictionary:</p>
<pre><code>dictionary_two = {'name': 'John', 'code': 'code1', 'date': [789]}
</code></pre>
<p>I'd like to check if I already have a dictionary with <code>'name' = 'John'</code> in <code>myList</code> so I check it with this function:</p>
<pre><code>def checkGuy(dude_name):
return any(d['name'] == dude_name for d in myList)
</code></pre>
<p>Currently I'm writing this function to add the guys to the list:</p>
<pre><code>def addGuy(row_info):
if not checkGuy(row_info[1]):
myList.append({'name':row_info[1],'code':row_info[0],'date':[row_info[2]]})
else:
#HELP HERE
</code></pre>
<p>in this else I'd like to <code>dict.update(updated_dict)</code> but I don't know how to get the dictionary here.</p>
<p>Could someone help so <code>dictionary</code> appends the values of <code>dictionary_two</code>?</p>
| 0 |
2016-09-15T20:33:55Z
| 39,520,309 |
<p>If you <em>dict.update</em> the existing data each time you see a repeated name, your code can be reduced to a dict of dicts right where you read the file. Calling update on existing dicts with the same keys is going to overwrite the values leaving you with the last occurrence so even if you had multiple "John" dicts they would all contain the exact same data by the end.</p>
<pre><code>def read_file():
results = {name: {"code": code, "date": date}
for code, name, date in how_you_read_into_rows}
</code></pre>
<p>If you actually think that the values get appended somehow, you are wrong. If you wanted to do that you would need a very different approach. If you actually want to gather the dates and codes per user then use a <em>defaultdict</em>, appending the code,date pair to a list with the name as the key:</p>
<pre><code>from collections import defaultdict
d = defaultdict(list)
def read_file():
for code, name, date in how_you_read_into_rows:
        d[name].append([code, date])
</code></pre>
<p>Or some variation depending on what you want the final output to look like.</p>
| 0 |
2016-09-15T21:01:41Z
|
[
"python",
"dictionary"
] |
PySpark reduceByKey with multiple values
| 39,519,922 |
<p>So while I have the identical title as this question: <a href="http://stackoverflow.com/questions/30831530/pyspark-reducebykey-on-multiple-values">PySpark reduceByKey on multiple values</a></p>
<p>I cannot get the answer to work for what I want to do.</p>
<pre><code>A = sc.parallelize([("a", (1,0)), ("b", (4,2)),("a", (11,2)), ("b", (4,10))])
A.reduceByKey(lambda x, y: x[0]+y[0],x[1]+y[1]).collect()
</code></pre>
<p>Gives me the error:</p>
<pre><code>name 'x' is not defined
</code></pre>
<p>Whats going on here? </p>
| 0 |
2016-09-15T20:34:27Z
| 39,520,267 |
<p>I found the problem. Some parentheses were missing:</p>
<pre><code>A.reduceByKey(lambda x, y: (x[0]+y[0] ,x[1]+y[1])).collect()
</code></pre>
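<p>The fix matters because the two-argument lambda must return a <em>single</em> value (here, one tuple) that gets fed back in as <code>x</code> on the next step. Without the extra parentheses, <code>x[1]+y[1]</code> is parsed as a second argument to <code>reduceByKey</code>, outside the lambda's scope, hence the <code>name 'x' is not defined</code> error. A plain-Python sketch of the same reduction (no Spark required) shows the corrected lambda at work:</p>

```python
from collections import defaultdict
from functools import reduce

pairs = [("a", (1, 0)), ("b", (4, 2)), ("a", (11, 2)), ("b", (4, 10))]

# Group the values per key, roughly as reduceByKey would shuffle them
grouped = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)

# The corrected lambda: the extra parentheses make it return ONE tuple
combine = lambda x, y: (x[0] + y[0], x[1] + y[1])

result = {key: reduce(combine, values) for key, values in grouped.items()}
# result == {"a": (12, 2), "b": (8, 12)}
```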
| 0 |
2016-09-15T20:58:43Z
|
[
"python",
"apache-spark",
"pyspark"
] |
How can I get my gradle test task to use python pip install for library that isn't on maven central?
| 39,519,939 |
<p>I am trying to set up a gradle task that will run Robot tests. Robot uses a python library to interact with Selenium in order to test a web page through a browser. But unfortunately it seems the only way to install the <a href="https://github.com/robotframework/Selenium2Library" rel="nofollow">https://github.com/robotframework/Selenium2Library</a> is via pip - <code>pip install robotframework-selenium2library</code>. Is there a way to get Gradle to run this command in my task?</p>
<p>Here's what I have:</p>
<p>build.gradle:</p>
<pre><code>configurations {
//...
acceptanceTestRuntime {extendsFrom testCompile, runtime}
}
dependencies {
//...
acceptanceTestRuntime group: 'org.robotframework', name: 'robotframework', version: '2.8.7'
//The following doesn't work, apparently this library isn't on maven...
//acceptanceTestRuntime group: 'org.robotframework', name: 'Selenium2Library', version: '1.+'
}
sourceSets {
//...
acceptanceTest {
runtimeClasspath = sourceSets.test.output + configurations.acceptanceTestRuntime
}
}
task acceptanceTest(type: JavaExec) {
classpath = sourceSets.acceptanceTest.runtimeClasspath
main = 'org.robotframework.RobotFramework'
args '--variable', 'BROWSER:gc'
args '--outputdir', 'target'
args 'src/testAcceptance'
}
</code></pre>
<p>My robot resources file - login.resource.robot:</p>
<pre><code>*** Settings ***
Documentation A resource file for my example login page test.
Library Selenium2Library
*** Variables ***
${SERVER} localhost:8080
(etc.)
*** Keywords ***
Open Browser to Login Page
Open Browser ${LOGIN_URL} ${BROWSER}
Maximize Browser Window
Set Selenium Speed ${DELAY}
Login Page Should Be Open
Login Page Should Be Open
Location Should Be ${LOGIN_URL}
</code></pre>
<p>And when I run this task, my robot tests are run, BUT they fail, because certain keywords that are defined in the robotframework-selenium2library, such as "Open Browser", aren't recognized, and an exception is thrown. </p>
<p>How can I get gradle to import this selenium library for this task? Can I install and call pip via some python plugin?</p>
| 2 |
2016-09-15T20:35:44Z
| 39,661,942 |
<p>I had to use a gradle Exec task to run a python script that then kicked off the robot tests. So it looked like this:</p>
<p>build.gradle</p>
<pre><code>task acceptanceTest(type: Exec) {
workingDir 'src/testAcceptance'
commandLine 'python', 'run.py'
}
</code></pre>
<p>src/testAcceptance/run.py</p>
<pre><code>import os
import robot
import setup
#Which runs setup.py
os.environ['ROBOT_OPTIONS'] = '--variable BROWSER.gc --outputdir results'
robot.run('.')
</code></pre>
<p>src/testAcceptance/setup.py</p>
<pre><code>import os
import sys
import pip
import re
pip.main(['install', 'robotframework==3.0'])
pip.main(['install', 'robotframework-selenium2library==1.8.0'])
# Checksums can be looked up by chromedriver version here - http://chromedriver.storage.googleapis.com/index.html
pip.main(['install', '--upgrade', 'chromedriver_installer',
'--install-option=--chromedriver-version=2.24',
'--install-option=--chromedriver-checksums=1a46c83926f891d502427df10b4646b9,d117b66fac514344eaf80691ae9a4687,' +
'c56e41bdc769ad2c31225b8495fc1a93,8e6b6d358f1b919a0d1369f90d61e1a4'])
#Add the Scripts dir to the path, since that's where the chromedriver is installed
scriptsDir = re.sub('[A-Za-z0-9\\.]+$', '', sys.executable) + 'Scripts'
os.environ['PATH'] += os.pathsep + scriptsDir
</code></pre>
| 0 |
2016-09-23T13:24:40Z
|
[
"python",
"selenium",
"gradle",
"robotframework",
"selenium2library"
] |
Print values from list based from separate text file
| 39,519,978 |
<p>How do I print a list of words from a separate text file? I want to print all the words unless the word has a length of 4 characters. </p>
<p>words.txt file looks like this:</p>
<p>abate chicanery disseminate gainsay latent aberrant coagulate dissolution garrulous laud</p>
<p>It has 334 total words in it. I'm trying to display the list until it reaches a word with a length of 4 and stops. </p>
<pre><code>wordsFile = open("words.txt", 'r')
words = wordsFile.read()
wordsFile.close()
wordList = words.split()
#List outputs length of words in list
lengths= [len(i) for i in wordList]
for i in range(10):
if i >= len(lengths):
break
print(lengths[i], end = ' ')
# While loop displays names based on length of words in list
while words != 4:
if words in wordList:
print("\nSelected words are:", words)
break
</code></pre>
<h1>output</h1>
<pre><code>5 9 11 7 6 8 9 11 9 4
</code></pre>
<h1>sample desired output</h1>
<p>Selected words are:</p>
<p>Abate </p>
<p>Chicanery </p>
<p>disseminate </p>
<p>gainsay </p>
<p>latent </p>
<p>aberrant </p>
<p>coagulate </p>
<p>dissolution </p>
<p>garrulous </p>
| -1 |
2016-09-15T20:38:18Z
| 39,521,850 |
<p>To read all words from a text file, and print each of them unless they have a length of 4:</p>
<pre><code>with open("words.txt","r") as wordsFile:
words = wordsFile.read()
wordsList = words.split()
print ("Selected words are:")
for word in wordsList:
if len(word) != 4: # ..unless it has a length of 4
print (word)
</code></pre>
<p>Later in your question you write, "I'm trying to display the first 10 words...". If so, add a counter, and add a condition to print only while its value is <= 10.</p>
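<p>The counter idea might look like this (a sketch; the sample list stands in for the file contents):</p>

```python
# Sample data standing in for the contents of words.txt
wordsList = ["abate", "chicanery", "laud", "gainsay", "latent",
             "aberrant", "coagulate", "dissolution", "garrulous",
             "acme", "zeal"]

selected = []
for i, word in enumerate(wordsList):
    if i >= 10:          # only look at the first 10 words
        break
    if len(word) != 4:   # ...skipping any word of length 4
        selected.append(word)

print("Selected words are:")
for word in selected:
    print(word)
```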
| 1 |
2016-09-15T23:38:25Z
|
[
"python",
"python-3.x"
] |
Print values from list based from separate text file
| 39,519,978 |
<p>How do I print a list of words from a separate text file? I want to print all the words unless the word has a length of 4 characters. </p>
<p>words.txt file looks like this:</p>
<p>abate chicanery disseminate gainsay latent aberrant coagulate dissolution garrulous laud</p>
<p>It has 334 total words in it. I'm trying to display the list until it reaches a word with a length of 4 and stops. </p>
<pre><code>wordsFile = open("words.txt", 'r')
words = wordsFile.read()
wordsFile.close()
wordList = words.split()
#List outputs length of words in list
lengths= [len(i) for i in wordList]
for i in range(10):
if i >= len(lengths):
break
print(lengths[i], end = ' ')
# While loop displays names based on length of words in list
while words != 4:
if words in wordList:
print("\nSelected words are:", words)
break
</code></pre>
<h1>output</h1>
<pre><code>5 9 11 7 6 8 9 11 9 4
</code></pre>
<h1>sample desired output</h1>
<p>Selected words are:</p>
<p>Abate </p>
<p>Chicanery </p>
<p>disseminate </p>
<p>gainsay </p>
<p>latent </p>
<p>aberrant </p>
<p>coagulate </p>
<p>dissolution </p>
<p>garrulous </p>
| -1 |
2016-09-15T20:38:18Z
| 39,521,961 |
<p>Given that you only want the first 10 words. There isn't much point reading all 4 lines. You can safely read just the 1st and save yourself some time.</p>
<pre><code>#from itertools import chain
with open('words.txt') as f:
# could raise `StopIteration` if file is empty
words = next(f).strip().split()
# to read all lines
#words = []
#for line in f:
# words.extend(line.strip().split())
# more functional way
# words = list(chain.from_iterable(line.strip().split() for line in f))
print("Selected words are:")
for word in words[:10]:
if len(word) != 4:
print(word)
</code></pre>
<p>There are a few alternative methods I left in there but commented out. </p>
<p>Edit using a while loop. </p>
<pre><code>i = 0
while i < 10:
if len(words[i]) != 4:
print(words[i])
i += 1
</code></pre>
<p>Since you know how many iterations you can do, you can hide the mechanics of the iteration using a <code>for</code> loop. A while does not facilitate this very well and is better used when you don't know how many iterations you will do.</p>
| 1 |
2016-09-15T23:54:02Z
|
[
"python",
"python-3.x"
] |
Print values from list based from separate text file
| 39,519,978 |
<p>How do I print a list of words from a separate text file? I want to print all the words unless the word has a length of 4 characters. </p>
<p>words.txt file looks like this:</p>
<p>abate chicanery disseminate gainsay latent aberrant coagulate dissolution garrulous laud</p>
<p>It has 334 total words in it. I'm trying to display the list until it reaches a word with a length of 4 and stops. </p>
<pre><code>wordsFile = open("words.txt", 'r')
words = wordsFile.read()
wordsFile.close()
wordList = words.split()
#List outputs length of words in list
lengths= [len(i) for i in wordList]
for i in range(10):
if i >= len(lengths):
break
print(lengths[i], end = ' ')
# While loop displays names based on length of words in list
while words != 4:
if words in wordList:
print("\nSelected words are:", words)
break
</code></pre>
<h1>output</h1>
<pre><code>5 9 11 7 6 8 9 11 9 4
</code></pre>
<h1>sample desired output</h1>
<p>Selected words are:</p>
<p>Abate </p>
<p>Chicanery </p>
<p>disseminate </p>
<p>gainsay </p>
<p>latent </p>
<p>aberrant </p>
<p>coagulate </p>
<p>dissolution </p>
<p>garrulous </p>
| -1 |
2016-09-15T20:38:18Z
| 39,522,408 |
<p>While I'd use a for or a while loop, like Paul Rooney suggested, you can also adapt your code.
When you create the list lengths[], you create a list with ALL the lengths of the words contained in wordList.</p>
<p>You then cycle the first 10 lengths in lengths[] with the for loop;
If you need to use this method, you can nest a for loop, comparing words and lengths:</p>
<pre><code>#lengths[] contains all the lengths of the words in wordList
lengths= [len(i) for i in wordList]
#foo[] cointains all the words in wordList
foo = [i for i in wordList]
#for the first 10 elements of lengths, if the elements isn't 4 char long
#print foo[] element with same index
for i in range(10):
    if i >= len(lengths):
        break
    if lengths[i] != 4:
        print(foo[i])
</code></pre>
<p>I hope this is clear and it's the answer you were looking for</p>
| 1 |
2016-09-16T01:01:49Z
|
[
"python",
"python-3.x"
] |
Nested comprehensions in Elixir
| 39,520,002 |
<p>In Python, one can do this <code>[e2 for e1 in edits1(word) for e2 in edits1(e1)]</code>. What is the equivalent (and right) form of this construct in Elixir?</p>
<p>What I tried, is:</p>
<pre><code>def edits2(word) do
(for e1 <- edits1(word), do: edits1(e1))
|> Enum.reduce(MapSet.new, fn(item, acc) -> MapSet.union(item, acc) end)
end
</code></pre>
<p>but this is awfully slow since, as it happens, I need to build a MapSet from hundreds of lists, each containing 500+ elements.</p>
| 1 |
2016-09-15T20:39:41Z
| 39,521,510 |
<p>Ok, so the answer to my initial question is just what @Dogbert suggested: <code>for e1 <- edits1(word), e2 <- edits1(e1), into: MapSet.new, do: e2</code></p>
<p>But the bottleneck wasn't this particular line. See <a href="https://github.com/visar/spell_check/commit/857653593ca98310db028601e9cfc59dc1ac13a4?diff=split" rel="nofollow">https://github.com/visar/spell_check/commit/857653593ca98310db028601e9cfc59dc1ac13a4?diff=split</a> for some optimizations, which cut the running time of the particular test to under 2s on my machine.</p>
<p>The killer was that <code>known/1</code> was recalculating the word keys every time - and there are thousands of them - but it could safely be a constant, so it takes a little longer to compile but it runs much faster.</p>
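<p>The same pitfall is easy to reproduce in Python terms (a loose analogue, not the Elixir code): rebuilding the key set inside the function on every call, versus building it once as a constant:</p>

```python
WORDS_LIST = ["word%d" % i for i in range(50000)]

# Built once, analogous to the compile-time constant in the Elixir fix
WORDS_SET = set(WORDS_LIST)

def known_slow(candidates):
    # Rebuilds the set on every call -- the bottleneck pattern
    keys = set(WORDS_LIST)
    return {w for w in candidates if w in keys}

def known_fast(candidates):
    # Reuses the precomputed constant
    return {w for w in candidates if w in WORDS_SET}
```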
| 0 |
2016-09-15T22:53:34Z
|
[
"python",
"elixir",
"list-comprehension"
] |
numpy - return index If value inside one of 3d array
| 39,520,060 |
<p>How to do this in Numpy : Thank you!</p>
<p><strong>Input :</strong> </p>
<pre><code>A = np.array([0, 1, 2, 3])
B = np.array([[3, 2, 0], [0, 2, 1], [2, 3, 1], [3, 0, 1]])
</code></pre>
<hr>
<p><strong>Output :</strong></p>
<pre><code>result = [[0, 1, 3], [1, 2, 3], [0, 1, 2], [0, 2, 3]]
</code></pre>
<hr>
<p>in Python :</p>
<pre><code>A = np.array([0 ,1 ,2 ,3])
B = np.array([[3 ,2 ,0], [0 ,2 ,1], [2 ,3 ,1], [3 ,0 ,1]])
result = []
for x , valA in enumerate (A) :
inArray = []
for y , valB in enumerate (B) :
if valA in valB:
inArray.append (y)
result.append (inArray)
print result
# result = [[0, 1, 3], [1, 2, 3], [0, 1, 2], [0, 2, 3]]
</code></pre>
| 3 |
2016-09-15T20:44:14Z
| 39,520,541 |
<p><strong>Approach #1</strong></p>
<p>Here's a NumPy vectorized approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p>
<pre><code>R,C = np.where((A[:,None,None] == B).any(-1))
out = np.split(C,np.flatnonzero(R[1:]>R[:-1])+1)
</code></pre>
<p><strong>Approach #2</strong></p>
<p>Assuming <code>A</code> and <code>B</code> to hold positive numbers, we can consider those to represent indices on a <code>2D</code> grid, such that <code>B</code> could be considered to hold the column indices on a per-row basis. Once that <code>2D</code> grid corresponding to <code>B</code> is in place, we just need to consider only the columns that are intersected by <code>A</code>. Finally, we get the indices of <code>True</code> values in such a <code>2D</code> grid to give us <code>R</code> and <code>C</code> values. This should be much more memory-efficient. </p>
<p>Thus, the alternative approach would look something like this -</p>
<pre><code>ncols = B.max()+1
nrows = B.shape[0]
mask = np.zeros((nrows,ncols),dtype=bool)
mask[np.arange(nrows)[:,None],B] = 1
mask[:,~np.in1d(np.arange(mask.shape[1]),A)] = 0
R,C = np.where(mask.T)
out = np.split(C,np.flatnonzero(R[1:]>R[:-1])+1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [43]: A
Out[43]: array([0, 1, 2, 3])
In [44]: B
Out[44]:
array([[3, 2, 0],
[0, 2, 1],
[2, 3, 1],
[3, 0, 1]])
In [45]: out
Out[45]: [array([0, 1, 3]), array([1, 2, 3]), array([0, 1, 2]), array([0, 2, 3])]
</code></pre>
<p><strong>Runtime test</strong></p>
<p>Scaling up the dataset sizes by <code>100x</code>, here's a quick runtime test result -</p>
<pre><code>In [85]: def index_1din2d(A,B):
...: R,C = np.where((A[:,None,None] == B).any(-1))
...: out = np.split(C,np.flatnonzero(R[1:]>R[:-1])+1)
...: return out
...:
...: def index_1din2d_initbased(A,B):
...: ncols = B.max()+1
...: nrows = B.shape[0]
...: mask = np.zeros((nrows,ncols),dtype=bool)
...: mask[np.arange(nrows)[:,None],B] = 1
...: mask[:,~np.in1d(np.arange(mask.shape[1]),A)] = 0
...: R,C = np.where(mask.T)
...: out = np.split(C,np.flatnonzero(R[1:]>R[:-1])+1)
...: return out
...:
In [86]: A = np.unique(np.random.randint(0,10000,(400)))
...: B = np.random.randint(0,10000,(400,300))
...:
In [87]: %timeit [np.where((B == x).sum(axis = 1))[0] for x in A]
1 loop, best of 3: 161 ms per loop # @Psidom's soln
In [88]: %timeit index_1din2d(A,B)
10 loops, best of 3: 91.5 ms per loop
In [89]: %timeit index_1din2d_initbased(A,B)
10 loops, best of 3: 33.4 ms per loop
</code></pre>
<p><strong>Further performance-boost!</strong></p>
<p>Well, alternatively we can create the <code>2D</code> grid in the second approach in a transposed way. The idea is to avoid the transpose in <code>R,C = np.where(mask.T)</code>, which seemed like the bottleneck. So, a modified version of the second approach and the associated runtimes would look something like this -</p>
<pre><code>In [135]: def index_1din2d_initbased_v2(A,B):
...: nrows = B.max()+1
...: ncols = B.shape[0]
...: mask = np.zeros((nrows,ncols),dtype=bool)
...: mask[B,np.arange(ncols)[:,None]] = 1
...: mask[~np.in1d(np.arange(mask.shape[0]),A)] = 0
...: R,C = np.where(mask)
...: out = np.split(C,np.flatnonzero(R[1:]>R[:-1])+1)
...: return out
...:
In [136]: A = np.unique(np.random.randint(0,10000,(400)))
...: B = np.random.randint(0,10000,(400,300))
...:
In [137]: %timeit index_1din2d_initbased(A,B)
10 loops, best of 3: 57.5 ms per loop
In [138]: %timeit index_1din2d_initbased_v2(A,B)
10 loops, best of 3: 25.9 ms per loop
</code></pre>
| 3 |
2016-09-15T21:20:01Z
|
[
"python",
"arrays",
"numpy",
"indexing",
"collect"
] |
numpy - return index If value inside one of 3d array
| 39,520,060 |
<p>How to do this in Numpy : Thank you!</p>
<p><strong>Input :</strong> </p>
<pre><code>A = np.array([0, 1, 2, 3])
B = np.array([[3, 2, 0], [0, 2, 1], [2, 3, 1], [3, 0, 1]])
</code></pre>
<hr>
<p><strong>Output :</strong></p>
<pre><code>result = [[0, 1, 3], [1, 2, 3], [0, 1, 2], [0, 2, 3]]
</code></pre>
<hr>
<p>in Python :</p>
<pre><code>A = np.array([0 ,1 ,2 ,3])
B = np.array([[3 ,2 ,0], [0 ,2 ,1], [2 ,3 ,1], [3 ,0 ,1]])
result = []
for x , valA in enumerate (A) :
inArray = []
for y , valB in enumerate (B) :
if valA in valB:
inArray.append (y)
result.append (inArray)
print result
# result = [[0, 1, 3], [1, 2, 3], [0, 1, 2], [0, 2, 3]]
</code></pre>
| 3 |
2016-09-15T20:44:14Z
| 39,520,550 |
<p>An option with a combination of <code>numpy</code> and <code>list-comprehension</code>:</p>
<pre><code>import numpy as np
[np.where((B == x).sum(axis = 1))[0] for x in A]
# [array([0, 1, 3]), array([1, 2, 3]), array([0, 1, 2]), array([0, 2, 3])]
</code></pre>
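<p>A quick sanity check (my addition, not part of the original answer) comparing this one-liner against the question's pure-Python double loop:</p>

```python
import numpy as np

A = np.array([0, 1, 2, 3])
B = np.array([[3, 2, 0], [0, 2, 1], [2, 3, 1], [3, 0, 1]])

# (B == x) is a boolean grid; summing each row counts matches, and
# np.where picks the rows with at least one match
vectorized = [np.where((B == x).sum(axis=1))[0].tolist() for x in A]

# Reference: the pure-Python double loop from the question
expected = [[y for y, row in enumerate(B) if x in row] for x in A]
```

<p>Both produce <code>[[0, 1, 3], [1, 2, 3], [0, 1, 2], [0, 2, 3]]</code>.</p>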
| 0 |
2016-09-15T21:20:33Z
|
[
"python",
"arrays",
"numpy",
"indexing",
"collect"
] |
Docker, pylibmc, memcached
| 39,520,061 |
<p>I've a project that uses memcached. So when docker tries to "pip install pylibmc", the library can't find libmemcached because it's not installed yet. How can I organise my docker-compose.yml, or maybe I have to do something with the dockerfile?</p>
<p>Now my docker-compose.yml looks like (I've deleted memcached container lines):</p>
<pre><code>version: '2'
services:
app:
build: .
volumes:
- ./app:/usr/src/app
- ./logs:/var/log
expose:
- "8000"
links:
- db:db
networks:
tickets-api:
ipv4_address: 172.25.0.100
extra_hosts:
- "db:172.25.0.102"
webserver:
image: nginx:latest
links:
- app
- db
volumes_from:
- app
volumes:
- ./nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
- ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
ports:
- "80:80"
networks:
tickets-api:
ipv4_address: 172.25.0.101
db:
restart: always
image: postgres
volumes:
- ./postgresql/pgdata:/pgdata
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- PGDATA=/pgdata
networks:
tickets-api:
ipv4_address: 172.25.0.102
networks:
tickets-api:
driver: bridge
ipam:
config:
- subnet: 172.25.0.0/24
</code></pre>
| 0 |
2016-09-15T20:44:14Z
| 39,522,252 |
<p>You have two options: installing it within your app container, or running memcached as an isolated container. </p>
<p><strong>OPTION 1</strong></p>
<p>You can add a command to install <code>libmemcached</code> in your app's <code>Dockerfile</code>. </p>
<p>If you are using some kind of Ubuntu- or Debian-based image (Alpine uses <code>apk</code> instead of <code>apt-get</code>):</p>
<p>Just add</p>
<pre><code>RUN apt-get update && apt-get install -y \
libmemcached11 \
libmemcachedutil2 \
libmemcached-dev \
libz-dev
</code></pre>
<p>Then, you can do <code>pip install pylibmc</code></p>
<p><strong>OPTION 2</strong></p>
<p>You can add memcached as a separated container. Just add in your docker-compose</p>
<pre><code>memcached:
image: memcached
ports:
- "11211:11211"
</code></pre>
<p>Of course, you need to link your app container with the memcached container. </p>
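<p>A sketch of what that linking might look like in the question's <code>docker-compose.yml</code> (service names follow the file in the question; a fragment, not a complete file):</p>

```yaml
services:
  app:
    build: .
    links:
      - db:db
      - memcached
  memcached:
    image: memcached
    ports:
      - "11211:11211"
```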
| 0 |
2016-09-16T00:39:07Z
|
[
"python",
"docker",
"memcached",
"docker-compose",
"pylibmc"
] |
Docker, pylibmc, memcached
| 39,520,061 |
<p>I've a project that uses memcached. So when docker tries to "pip install pylibmc", the library can't find libmemcached because it's not installed yet. How can I organise my docker-compose.yml, or maybe I have to do something with the dockerfile?</p>
<p>Now my docker-compose.yml looks like (I've deleted memcached container lines):</p>
<pre><code>version: '2'
services:
app:
build: .
volumes:
- ./app:/usr/src/app
- ./logs:/var/log
expose:
- "8000"
links:
- db:db
networks:
tickets-api:
ipv4_address: 172.25.0.100
extra_hosts:
- "db:172.25.0.102"
webserver:
image: nginx:latest
links:
- app
- db
volumes_from:
- app
volumes:
- ./nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
- ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
ports:
- "80:80"
networks:
tickets-api:
ipv4_address: 172.25.0.101
db:
restart: always
image: postgres
volumes:
- ./postgresql/pgdata:/pgdata
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- PGDATA=/pgdata
networks:
tickets-api:
ipv4_address: 172.25.0.102
networks:
tickets-api:
driver: bridge
ipam:
config:
- subnet: 172.25.0.0/24
</code></pre>
| 0 |
2016-09-15T20:44:14Z
| 39,548,568 |
<p>The easiest way to solve this problem is to update the <code>Dockerfile</code> for the app and install the development dependencies required to build the python package. </p>
<p>On ubuntu/debian that might be something like:</p>
<pre><code>apt-get install libmemcached-dev gcc python-dev
</code></pre>
<p>A second (more advanced) option is to build a wheel for this package in a separate container, then install the wheel instead of the source tarball. That way you don't have to install any other packages, and your final image will be much smaller. However it requires more work to set up.</p>
| 0 |
2016-09-17T15:31:24Z
|
[
"python",
"docker",
"memcached",
"docker-compose",
"pylibmc"
] |
Creating summary rows in pivot tables
| 39,520,069 |
<p>I have a dataframe: </p>
<pre><code>df = pd.DataFrame({'State': {0: "AZ", 1: "AZ", 2:"AZ", 4: "AZ", 5: "AK", 6: "AK", 7 : "AK", 8: "AK"},
'City': {0: "A", 1: "A", 2:"B", 4: "B", 5: "C", 6: "C", 7 : "D", 8: "D"},
'Area': {0: "North", 1: "South", 2:"North", 4: "South", 5: "North", 6: "South", 7 : "North", 8: "South"},
'Restaurant': {0: "Rest1", 1: "Rest2", 2:"Rest3", 4: "Rest4", 5: "Rest5", 6: "Rest6", 7 : "Rest7", 8: "Rest8"},
'Price': {0: 2343, 1: 23445, 2:34536, 4: 7456, 5: 6584, 6: 64563, 7 : 54745, 8: 436345}},
columns=['State','City','Area','Restaurant','Price'])
print(df)
State City Area Restaurant Price
0 AZ A North Rest1 2343
1 AZ A South Rest2 23445
2 AZ B North Rest3 34536
...
</code></pre>
<p>I also have the following Pivot Table:</p>
<pre><code>pivo=pd.pivot_table(df,values=["Price"],
columns=['State',"City", 'Area'],
margins=True,
aggfunc=[len, np.mean])
print(pivo)
len mean
State City Area
Price AK C North 1 6584.000
South 1 64563.000
D North 1 54745.000
South 1 436345.000
AZ A North 1 2343.000
South 1 23445.000
B North 1 34536.000
South 1 7456.000
All 8 78752.125
</code></pre>
<p>I want to be able to calculate an "All" row that aggregates each state and each city so that it looks like this:</p>
<pre><code> len mean
State City Area
Price AK All 4 281118.5
C All 2 35573.5
North 1 6584.000
South 1 64563.000
D All 2 245545
North 1 54745.000
South 1 436345.000
...
</code></pre>
<p>I've been playing with unstack/stack but I haven't produced anything close.</p>
<p>Thank you!</p>
<p>Edit: This is the closest I've gotten:</p>
<pre><code>pivo=pd.pivot_table(df,values=["Price"],
index=['State'],
columns=["City", 'Area'],
margins=True,
aggfunc=[len, np.mean])
len mean
Price Price
State City Area
AK All 4.0 140559.000
C North 1.0 6584.000
South 1.0 64563.000
D North 1.0 54745.000
South 1.0 436345.000
AZ A North 1.0 2343.000
South 1.0 23445.000
All 4.0 16945.000
B North 1.0 34536.000
South 1.0 7456.000
All A North 1.0 2343.000
South 1.0 23445.000
All 8.0 78752.125
B North 1.0 34536.000
South 1.0 7456.000
C North 1.0 6584.000
South 1.0 64563.000
D North 1.0 54745.000
South 1.0 436345.000
</code></pre>
| 1 |
2016-09-15T20:44:59Z
| 39,523,064 |
<p>Edit: missed the fact that you wanted the state margins in there as well. I'm leaving the original answer up just in case -- it might still be useful. Scroll down for some hacky pandas.</p>
<hr>
<p>Does this help?</p>
<pre><code>In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: df = pd.DataFrame({'State': {0: "AZ", 1: "AZ", 2:"AZ", 4: "AZ", 5: "AK", 6: "AK", 7 : "AK", 8: "AK"},
...:
...: 'City': {0: "A", 1: "A", 2:"B", 4: "B", 5: "C", 6: "C", 7 : "D", 8: "D"},
...: 'Area': {0: "North", 1: "South", 2:"North", 4: "South", 5: "North", 6: "South", 7 : "No
...: rth", 8: "South"},
...: 'Restaurant': {0: "Rest1", 1: "Rest2", 2:"Rest3", 4: "Rest4", 5: "Rest5", 6: "Rest6", 7
...: : "Rest7", 8: "Rest8"},
...: 'Price': {0: 2343, 1: 23445, 2:34536, 4: 7456, 5: 6584, 6: 64563, 7 : 54745, 8: 436345}
...: },
...: columns=['State','City','Area','Restaurant','Price'])
In [4]: pv = (df.pivot_table(index=['State', 'City'],
...: columns=['Area'],
...: values=['Price'],
...: margins=True,
...: aggfunc=[len, np.mean]))
In [5]: pv
Out[5]:
len mean
Price Price
Area North South All North South All
State City
AK C 1.0 1.0 2.0 6584.0 64563.0 35573.500
D 1.0 1.0 2.0 54745.0 436345.0 245545.000
AZ A 1.0 1.0 2.0 2343.0 23445.0 12894.000
B 1.0 1.0 2.0 34536.0 7456.0 20996.000
All 4.0 4.0 8.0 24552.0 132952.0 78752.125
In [6]: pv.stack()
Out[6]:
len mean
Price Price
State City Area
AK C All 2.0 35573.500
North 1.0 6584.000
South 1.0 64563.000
D All 2.0 245545.000
North 1.0 54745.000
South 1.0 436345.000
AZ A All 2.0 12894.000
North 1.0 2343.000
South 1.0 23445.000
B All 2.0 20996.000
North 1.0 34536.000
South 1.0 7456.000
All All 8.0 78752.125
North 4.0 24552.000
South 4.0 132952.000
</code></pre>
<p>As a one-liner:</p>
<pre><code>In [7]: pv = (df.pivot_table(index=['State', 'City'],
...: columns=['Area'],
...: values=['Price'],
...: margins=True,
...: aggfunc=[len, np.mean])
...: .stack())
In [8]: pv
Out[8]:
len mean
Price Price
State City Area
AK C All 2.0 35573.500
North 1.0 6584.000
South 1.0 64563.000
D All 2.0 245545.000
North 1.0 54745.000
South 1.0 436345.000
AZ A All 2.0 12894.000
North 1.0 2343.000
South 1.0 23445.000
B All 2.0 20996.000
North 1.0 34536.000
South 1.0 7456.000
All All 8.0 78752.125
North 4.0 24552.000
South 4.0 132952.000
</code></pre>
<hr>
<p>Adding in the state margins was a bit of a chore and it's not at all elegant. I'd love to see improvement on this.</p>
<hr>
<pre><code>In [9]: pv = (df.pivot_table(index=['State', 'City'],
...: columns=['Area'],
...: values=['Price'],
...: margins=True,
...: aggfunc=[len, np.mean]))
In [10]: state_agg = (df[['Price', 'State']]
...: .pivot_table(index='State', aggfunc=[len, np.mean], margins=True)
...: .assign(City= 'state_margin').assign(Area="")
...: )
...: state_agg.loc['All', 'City'] = 'total'
...:
In [11]: state_agg
Out[11]:
len mean City Area
Price Price
State
AK 4.0 140559.000 state_margin
AZ 4.0 16945.000 state_margin
All 8.0 78752.125 total
</code></pre>
<p>The following <code>iloc[0:-1]</code> drops the margin row on the first pivoted table.</p>
<pre><code>In [12]: results = (pd.concat([pv.iloc[0:-1].stack().reset_index(),
...: state_agg.reset_index()
...: ])
...: ).set_index(['State', 'City', 'Area']).sort_index()
In [13]: results
Out[13]:
len mean
Price Price
State City Area
AK C All 2.0 35573.500
North 1.0 6584.000
South 1.0 64563.000
D All 2.0 245545.000
North 1.0 54745.000
South 1.0 436345.000
state_margin 4.0 140559.000
AZ A All 2.0 12894.000
North 1.0 2343.000
South 1.0 23445.000
B All 2.0 20996.000
North 1.0 34536.000
South 1.0 7456.000
state_margin 4.0 16945.000
All total 8.0 78752.125
In [14]: idx = pd.IndexSlice
...: results.loc[idx[:, 'state_margin'], :]
...:
Out[14]:
len mean
Price Price
State City Area
AK state_margin 4.0 140559.0
AZ state_margin 4.0 16945.0
</code></pre>
| 1 |
2016-09-16T02:47:14Z
|
[
"python",
"pandas"
] |
Python Selenium: Firefox set_preference to overwrite files on download?
| 39,520,095 |
<p>I am using these Firefox preference setting for <code>selenium</code> in Python 2.7:</p>
<pre><code>ff_profile = webdriver.FirefoxProfile(profile_dir)
ff_profile.set_preference("browser.download.folderList", 2)
ff_profile.set_preference("browser.download.manager.showWhenStarting", False)
ff_profile.set_preference("browser.download.dir", dl_dir)
ff_profile.set_preference('browser.helperApps.neverAsk.saveToDisk', "text/plain, application/vnd.ms-excel, text/csv, text/comma-separated-values, application/octet-stream")
</code></pre>
<p>With Selenium, I want to repeatedly download the same file and overwrite it, thus keeping the same filename, without me having to confirm the download.</p>
<p>With the settings above, it will download without asking for a location, but all downloads will create duplicates with the filenames <code>filename (1).ext</code>, <code>filename (2).ext</code> etc. on macOS.</p>
<p>I'm guessing there might not be a setting to allow overwriting from within Firefox, to prevent accidents(?).</p>
<p>(In that case, I suppose the solution would be to handle the overwriting on the disk with other Python modules; another topic).</p>
| 1 |
2016-09-15T20:47:05Z
| 39,520,192 |
<p>This is something that is <em>out of Selenium's scope</em> and is handled by the operating system.</p>
<p>Judging by the context of this and your <a href="http://stackoverflow.com/q/39519518/771848">previous question</a>, you know (or can determine from the link text) the filename beforehand. If this is really the case, before hitting the "download" link, make sure you remove the existing file:</p>
<pre><code>import os
filename = "All-tradable-ETFs-ETCs-and-ETNs.xlsx" # or extract it dynamically from the link
filepath = os.path.join(dl_dir, filename)
if os.path.exists(filepath):
os.remove(filepath)
</code></pre>
| 1 |
2016-09-15T20:54:25Z
|
[
"python",
"selenium",
"firefox"
] |
Tkinter not displaying Unicode characters properly on Linux
| 39,520,096 |
<p>While on Windows tkinter seems to display the characters properly, the same does not happen with the same code on Linux.</p>
<p>I've tried the method shown <a href="http://tkinter.unpythonic.net/wiki/UnicodeSupport" rel="nofollow">here</a>, adding a <code>.encode("utf-8")</code> after the character, but that just makes the char go haywire on both systems. I've also tried just to copy and paste the character instead of using the unicode representation, and while that works on Windows, the same can't be said for Linux.</p>
<p>A snippet of code that shows my problem:</p>
<pre><code># -*- coding: utf-8 -*-
from tkinter import *
master = Tk()
previous_button = Button(master,
text=u'\u23EE',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
borderwidth=0,
bd=0,
highlightthickness=0,
font='arial 11',
)
next_button = Button(master,
text=u'\u23ED',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
bd=0,
highlightthickness=0,
borderwidth=0,
font='arial 11',
)
previous_button.grid()
next_button.grid()
mainloop()
</code></pre>
<p>Windows:
<a href="http://i.stack.imgur.com/qQUXp.png" rel="nofollow"><img src="http://i.stack.imgur.com/qQUXp.png" alt="The buttons as seen on windows"></a></p>
<p>Linux:
<a href="http://i.stack.imgur.com/nxg5j.png" rel="nofollow"><img src="http://i.stack.imgur.com/nxg5j.png" alt="The same buttons on Linux"></a></p>
<p>How to make tkinter render these unicodes on Linux?</p>
| 0 |
2016-09-15T20:47:06Z
| 39,520,160 |
<p>The font "arial" don't support unicode characters U+23EE and U+23ED on your Linux installation. Can you check that with a font manager?</p>
| 1 |
2016-09-15T20:51:31Z
|
[
"python",
"python-3.x",
"unicode",
"tkinter"
] |
Tkinter not displaying Unicode characters properly on Linux
| 39,520,096 |
<p>While on Windows tkinter seems to display the characters properly, the same does not happen with the same code on Linux.</p>
<p>I've tried the method shown <a href="http://tkinter.unpythonic.net/wiki/UnicodeSupport" rel="nofollow">here</a>, adding a <code>.encode("utf-8")</code> after the character, but that just makes the char go haywire on both systems. I've also tried just to copy and paste the character instead of using the unicode representation, and while that works on Windows, the same can't be said for Linux.</p>
<p>A snippet of code that shows my problem:</p>
<pre><code># -*- coding: utf-8 -*-
from tkinter import *
master = Tk()
previous_button = Button(master,
text=u'\u23EE',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
borderwidth=0,
bd=0,
highlightthickness=0,
font='arial 11',
)
next_button = Button(master,
text=u'\u23ED',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
bd=0,
highlightthickness=0,
borderwidth=0,
font='arial 11',
)
previous_button.grid()
next_button.grid()
mainloop()
</code></pre>
<p>Windows:
<a href="http://i.stack.imgur.com/qQUXp.png" rel="nofollow"><img src="http://i.stack.imgur.com/qQUXp.png" alt="The buttons as seen on windows"></a></p>
<p>Linux:
<a href="http://i.stack.imgur.com/nxg5j.png" rel="nofollow"><img src="http://i.stack.imgur.com/nxg5j.png" alt="The same buttons on Linux"></a></p>
<p>How can I make tkinter render these Unicode characters on Linux?</p>
| 0 |
2016-09-15T20:47:06Z
| 39,530,226 |
<p>The Arial on your Linux Mint machine doesn't appear to support those characters.</p>
<p>I would suggest adding the font from Windows on Linux.</p>
<p>A simple guide can be found here:
<a href="https://community.linuxmint.com/tutorial/view/29" rel="nofollow">https://community.linuxmint.com/tutorial/view/29</a></p>
<p>Additionally, I would recommend using the expanded version of the font attribute in <code>tkinter</code> as follows:</p>
<pre><code>button = Button(parent, text=u'\u23ED', font=('Font Name', size, 'decoration'))
button = Button(parent, text=u'\u23ED', font=('Arial', 12, 'bold'))
</code></pre>
<p>This way you can easily support fonts with spaces in their names.</p>
| 1 |
2016-09-16T11:20:26Z
|
[
"python",
"python-3.x",
"unicode",
"tkinter"
] |
Tkinter not displaying Unicode characters properly on Linux
| 39,520,096 |
<p>While on Windows tkinter seems to display the characters properly, the same does not happen with the same code on Linux.</p>
<p>I've tried the method shown <a href="http://tkinter.unpythonic.net/wiki/UnicodeSupport" rel="nofollow">here</a>, adding a <code>.encode("utf-8")</code> after the character, but that just makes the char go haywire on both systems. I've also tried just to copy and paste the character instead of using the unicode representation, and while that works on Windows, the same can't be said for Linux.</p>
<p>A snippet of code that shows my problem:</p>
<pre><code># -*- coding: utf-8 -*-
from tkinter import *
master = Tk()
previous_button = Button(master,
text=u'\u23EE',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
borderwidth=0,
bd=0,
highlightthickness=0,
font='arial 11',
)
next_button = Button(master,
text=u'\u23ED',
relief='flat',
activebackground='#282828',
activeforeground='#1DB954',
bg='#282828',
fg='#1DB954',
bd=0,
highlightthickness=0,
borderwidth=0,
font='arial 11',
)
previous_button.grid()
next_button.grid()
mainloop()
</code></pre>
<p>Windows:
<a href="http://i.stack.imgur.com/qQUXp.png" rel="nofollow"><img src="http://i.stack.imgur.com/qQUXp.png" alt="The buttons as seen on windows"></a></p>
<p>Linux:
<a href="http://i.stack.imgur.com/nxg5j.png" rel="nofollow"><img src="http://i.stack.imgur.com/nxg5j.png" alt="The same buttons on Linux"></a></p>
<p>How can I make tkinter render these Unicode characters on Linux?</p>
| 0 |
2016-09-15T20:47:06Z
| 39,544,538 |
<p>All modern Linux GUI toolkits have supported font substitution since about 2003 (the toolkit will use other fonts to complete the selected one if it's missing glyphs), but tk is not a modern toolkit: it is windows-oriented and butt-ugly on Linux, and it largely missed this change. I'm not sure how stuck in the past it currently is (see also fontconfig, harfbuzz-ng, pango).</p>
<p>You need to make sure you've installed a font that includes the glyphs you need (check with a right-click in gucharmap). Common fonts with broad glyph coverage on Linux are DejaVu, for example.</p>
<p>If your tk version finally caught up with the rest of the toolkits it will use this font transparently to complete the one you selected. If not you need to select explicitly the correct font in your code.</p>
<p>It is quite unlikely you'll find common fonts on Linux and Windows. Linux systems deploy free and open fonts, Microsoft deploys proprietary fonts with restrictive licensing. The Arial you find on the system is probably an obsolete incomplete buggy version with dubious licensing, that most Linux versions refuse to deploy. Don't count on it existing elsewhere and don't ask others to deploy it if you want to avoid legal problems.</p>
<p>Conversely, most fonts you find on Linux can be deployed without restrictive conditions on Windows, but won't be by default. Check the license of the fonts you want to use before making a final choice.</p>
| 1 |
2016-09-17T08:15:35Z
|
[
"python",
"python-3.x",
"unicode",
"tkinter"
] |
Pandas pd.cut Field Where Other Field Equals Value
| 39,520,101 |
<p>I am trying to use pd.cut to create a new field. The creation/population of this new field is dependent on a value in another field, however. </p>
<pre><code>hdl_bins = [0,40,59,300]
hdl_labels = ['hdl_high risk','hdl_borderline','hdl_protective']
df['hdl'] = pd.cut(df['value'],bins=hdl_bins,labels=hdl_labels)
</code></pre>
<p>I would only like this to populate the new field "hdl" when the following criterion is met:</p>
<pre><code>df[df['name']=='HDL']
</code></pre>
<p>How would I best add the "where" criteria to the pd.cut operation? Thanks!</p>
<p>EDIT:</p>
<p>Here is an example of the input:</p>
<pre><code>id,date,name,value
1,1/1/11,Weight,76.3
1,1/2/11,Height,152.7
1,1/3/11,Body mass index (BMI) [Ratio],32.7
1,1/4/11,Temperature,98.6
1,1/5/11,Systolic,118.9
1,1/6/11,Diastolic,69.8
1,1/7/11,LDL,98
1,1/8/11,HDL,63.2
1,1/9/11,Total Cholesterol,263.1
1,1/10/11,Trigl SerPl-mCnc,509.7
1,1/11/11,LDL,98
1,1/12/11,HDL,63.2
1,1/13/11,Total Cholesterol,263.1
1,1/14/11,Trigl SerPl-mCnc,509.7
</code></pre>
<p>Desired Output:</p>
<pre><code>id,date,name,value,hdl
1,1/1/11,Weight,76.3,0
1,1/2/11,Height,152.7,0
1,1/3/11,Body mass index (BMI) [Ratio],32.7,0
1,1/4/11,Temperature,98.6,0
1,1/5/11,Systolic,118.9,0
1,1/6/11,Diastolic,69.8,0
1,1/7/11,LDL,98,0
1,1/8/11,HDL,63.2,hdl_protective
1,1/9/11,Total Cholesterol,263.1,0
1,1/10/11,Trigl SerPl-mCnc,509.7,0
1,1/11/11,LDL,98,0
1,1/12/11,HDL,63.2,hdl_protective
1,1/13/11,Total Cholesterol,263.1,0
1,1/14/11,Trigl SerPl-mCnc,509.7,0
</code></pre>
| 0 |
2016-09-15T20:47:15Z
| 39,520,301 |
<p><strong>UPDATE:</strong></p>
<p>Pay attention at the last row (<code>value == 'XXX'</code>): </p>
<pre><code>In [55]: df
Out[55]:
id date name value
0 1 1/1/11 Weight 76.3
1 1 1/2/11 Height 152.7
2 1 1/3/11 Body mass index (BMI) [Ratio] 32.7
3 1 1/4/11 Temperature 98.6
4 1 1/5/11 Systolic 118.9
5 1 1/6/11 Diastolic 69.8
6 1 1/7/11 LDL 98
7 1 1/8/11 HDL 63.2
8 1 1/9/11 Total Cholesterol 263.1
9 1 1/10/11 Trigl SerPl-mCnc 509.7
10 1 1/11/11 LDL 98
11 1 1/12/11 HDL 63.2
12 1 1/13/11 Total Cholesterol 263.1
13 1 1/14/11 Trigl SerPl-mCnc 509.7
14 1 12/12/12 HDL XXX
</code></pre>
<h2>Solution:</h2>
<pre><code>In [56]: df['hdl'] = '0'
In [57]: df.ix[df['name']=='HDL', 'hdl'] = \
....:     pd.cut(pd.to_numeric(df.ix[df['name']=='HDL','value'], errors='coerce'),bins=hdl_bins,labels=hdl_labels)
In [58]: df
Out[58]:
id date name value hdl
0 1 1/1/11 Weight 76.3 0
1 1 1/2/11 Height 152.7 0
2 1 1/3/11 Body mass index (BMI) [Ratio] 32.7 0
3 1 1/4/11 Temperature 98.6 0
4 1 1/5/11 Systolic 118.9 0
5 1 1/6/11 Diastolic 69.8 0
6 1 1/7/11 LDL 98 0
7 1 1/8/11 HDL 63.2 hdl_protective
8 1 1/9/11 Total Cholesterol 263.1 0
9 1 1/10/11 Trigl SerPl-mCnc 509.7 0
10 1 1/11/11 LDL 98 0
11 1 1/12/11 HDL 63.2 hdl_protective
12 1 1/13/11 Total Cholesterol 263.1 0
13 1 1/14/11 Trigl SerPl-mCnc 509.7 0
14 1 12/12/12 HDL XXX NaN
</code></pre>
<p><strong>Old answer:</strong></p>
<pre><code>In [13]: df
Out[13]:
value name
0 123 XXX
1 18 LDL
2 195 LDL
3 25 XXX
4 70 LDL
5 11 LDL
6 199 XXX
7 163 LDL
8 32 LDL
9 85 XXX
In [14]: hdl_bins = [0,40,59,300]
In [15]: hdl_labels = ['hdl_high risk','hdl_borderline','hdl_protective']
In [16]: df['hdl'] = ''
In [22]: df.ix[df['name']=='LDL', 'hdl'] = \
....: pd.cut(df.ix[df['name']=='LDL','value'],bins=hdl_bins,labels=hdl_labels)
In [23]: df
Out[23]:
value name hdl
0 123 XXX
1 18 LDL hdl_high risk
2 195 LDL hdl_protective
3 25 XXX
4 70 LDL hdl_protective
5 11 LDL hdl_high risk
6 199 XXX
7 163 LDL hdl_protective
8 32 LDL hdl_high risk
9 85 XXX
</code></pre>
| 1 |
2016-09-15T21:00:48Z
|
[
"python",
"pandas",
"dataframe"
] |
Get directory dynamically with python script
| 39,520,149 |
<p>I run my python script in the background from a terminal with:</p>
<pre><code>python myscript.py &
</code></pre>
<p>In the script I have a loop which gets the current directory with os.getcwd(). If I change my working directory in the terminal though, the script doesn't get the new directory because as far as I have understood the script is attached to the original directory from which it was launched. </p>
<p>How can I update the current directory from a python script, i.e. how can I keep track of the current working directory of the process that launched the script?</p>
| 0 |
2016-09-15T20:50:49Z
| 39,520,287 |
<p><em>Disclaimer:</em> don't do this. </p>
<pre><code>import os
import subprocess
from time import sleep
ppid = os.getppid()
print "parent process id: ", ppid
subprocess.check_call(['pwdx', str(ppid)])
sleep(5) # do `cd other` in the parent process here
subprocess.check_call(['pwdx', str(ppid)])
</code></pre>
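<p>On Linux specifically, the same lookup can be done without shelling out to <code>pwdx</code>: <code>/proc/&lt;pid&gt;/cwd</code> is a symlink to that process's current working directory. A minimal sketch (Linux-only, and with the same caveats as above):</p>

```python
import os

# Linux-only: /proc/<pid>/cwd is a symlink to that process's
# current working directory, so the parent's cwd can be read directly.
def parent_cwd():
    return os.readlink('/proc/%d/cwd' % os.getppid())

print(parent_cwd())
```

<p>This avoids spawning a subprocess, but like the <code>pwdx</code> approach it only sees the directory at the moment of the call.</p>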
| 1 |
2016-09-15T21:00:00Z
|
[
"python",
"terminal",
"directory"
] |
Using SQLAlchemy, determine which list values are not in a DB column
| 39,520,159 |
<p>Using SQLAlchemy, given a list, I'd like to determine which values in the list are not present in a given column in an sqlite DB table. One way to do it is the following:</p>
<pre><code>def get_user_ids_not_in_DB(self, user_ids):
query__belongs = User_DB.user_id.in_(user_ids)
select__user_ids_in_DB = self.SQL_Helper.db.query(User_DB.user_id).filter(query__belongs)
user_ids_in_DB = zip(*select__user_ids_in_DB.all())[0]
return list(set(user_ids) - set(user_ids_in_DB))
</code></pre>
<p>Is there a faster/more efficient way to accomplish the same thing?</p>
| 1 |
2016-09-15T20:51:24Z
| 39,520,248 |
<p>Select all user ids, then outer-join them to an aliased subquery of <code>User_DB</code> restricted to your list, then filter for rows where the aliased <code>user_id</code> is null.</p>
<pre><code>    # needs: from sqlalchemy.orm import aliased
    # an alias to a subquery on the table: all user ids in your list
    subq = self.SQL_Helper.db.query(User_DB.user_id)\
        .filter(User_DB.user_id.in_(user_ids)).subquery()
    ualias = aliased(User_DB, subq)
    results = self.SQL_Helper.db.query(User_DB.user_id)\
        .outerjoin(ualias, ualias.user_id == User_DB.user_id)\
        .filter(ualias.user_id == None)
</code></pre>
<p>Pardon any typos, but that's the gist of it.</p>
| 2 |
2016-09-15T20:57:39Z
|
[
"python",
"python-2.7",
"sqlalchemy"
] |
Using SQLAlchemy, determine which list values are not in a DB column
| 39,520,159 |
<p>Using SQLAlchemy, given a list, I'd like to determine which values in the list are not present in a given column in an sqlite DB table. One way to do it is the following:</p>
<pre><code>def get_user_ids_not_in_DB(self, user_ids):
query__belongs = User_DB.user_id.in_(user_ids)
select__user_ids_in_DB = self.SQL_Helper.db.query(User_DB.user_id).filter(query__belongs)
user_ids_in_DB = zip(*select__user_ids_in_DB.all())[0]
return list(set(user_ids) - set(user_ids_in_DB))
</code></pre>
<p>Is there a faster/more efficient way to accomplish the same thing?</p>
| 1 |
2016-09-15T20:51:24Z
| 39,520,393 |
<p>That's the most efficient I can think of (quite close to yours):</p>
<pre><code>from future_builtins import zip, map
from operator import itemgetter
def get_user_ids_not_in_DB(self, user_ids):
unique_ids = set(user_ids)
query__belongs = User_DB.user_id.in_(unique_ids)
select__user_ids_in_DB = self.SQL_Helper.db.query(User_DB.user_id).filter(query__belongs)
user_ids_in_DB = set(map(itemgetter(0), select__user_ids_in_DB))
return (unique_ids - user_ids_in_DB)
</code></pre>
| 1 |
2016-09-15T21:08:46Z
|
[
"python",
"python-2.7",
"sqlalchemy"
] |
Add constraint to OpenMDAO at the driver/prob level
| 39,520,186 |
<p>Is it possible to add a constraint to an OpenMDAO problem? In the following example, I would like to constrain the objective function to lying below <code>-3.16</code>. I have imported the sellar problem from another file, <code>sellar_backend.py</code>. Is it possible for me to add this constraint without modifying <code>sellar_backend.py</code>?</p>
<p><strong>sellar_backend.py</strong></p>
<pre><code>import numpy as np
from openmdao.api import Problem, ScipyOptimizer, Group, ExecComp, IndepVarComp, Component
from openmdao.api import Newton, ScipyGMRES
class SellarDis1(Component):
"""Component containing Discipline 1."""
def __init__(self):
super(SellarDis1, self).__init__()
# Global Design Variable
self.add_param('z', val=np.zeros(2))
# Local Design Variable
self.add_param('x', val=0.)
# Coupling parameter
self.add_param('y2', val=1.0)
# Coupling output
self.add_output('y1', val=1.0)
def solve_nonlinear(self, params, unknowns, resids):
"""Evaluates the equation
y1 = z1**2 + z2 + x1 - 0.2*y2"""
z1 = params['z'][0]
z2 = params['z'][1]
x1 = params['x']
y2 = params['y2']
unknowns['y1'] = z1**2 + z2 + x1 - 0.2*y2
def linearize(self, params, unknowns, resids):
""" Jacobian for Sellar discipline 1."""
J = {}
J['y1','y2'] = -0.2
J['y1','z'] = np.array([[2*params['z'][0], 1.0]])
J['y1','x'] = 1.0
return J
class SellarDis2(Component):
"""Component containing Discipline 2."""
def __init__(self):
super(SellarDis2, self).__init__()
# Global Design Variable
self.add_param('z', val=np.zeros(2))
# Coupling parameter
self.add_param('y1', val=1.0)
# Coupling output
self.add_output('y2', val=1.0)
def solve_nonlinear(self, params, unknowns, resids):
"""Evaluates the equation
y2 = y1**(.5) + z1 + z2"""
z1 = params['z'][0]
z2 = params['z'][1]
y1 = params['y1']
# Note: this may cause some issues. However, y1 is constrained to be
# above 3.16, so lets just let it converge, and the optimizer will
# throw it out
y1 = abs(y1)
unknowns['y2'] = y1**.5 + z1 + z2
def linearize(self, params, unknowns, resids):
""" Jacobian for Sellar discipline 2."""
J = {}
J['y2', 'y1'] = .5*params['y1']**-.5
#Extra set of brackets below ensure we have a 2D array instead of a 1D array
# for the Jacobian; Note that Jacobian is 2D (num outputs x num inputs).
J['y2', 'z'] = np.array([[1.0, 1.0]])
return J
class StateConnection(Component):
""" Define connection with an explicit equation"""
def __init__(self):
super(StateConnection, self).__init__()
# Inputs
self.add_param('y2_actual', 1.0)
# States
self.add_state('y2_command', val=1.0)
def apply_nonlinear(self, params, unknowns, resids):
""" Don't solve; just calculate the residual."""
y2_actual = params['y2_actual']
y2_command = unknowns['y2_command']
resids['y2_command'] = y2_actual - y2_command
def solve_nonlinear(self, params, unknowns, resids):
""" This is a dummy comp that doesn't modify its state."""
pass
def linearize(self, params, unknowns, resids):
"""Analytical derivatives."""
J = {}
# State equation
J[('y2_command', 'y2_command')] = -1.0
J[('y2_command', 'y2_actual')] = 1.0
return J
class SellarStateConnection(Group):
""" Group containing the Sellar MDA. This version uses the disciplines
with derivatives."""
def __init__(self):
super(SellarStateConnection, self).__init__()
self.add('px', IndepVarComp('x', 1.0), promotes=['x'])
self.add('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])
self.add('state_eq', StateConnection())
self.add('d1', SellarDis1(), promotes=['x', 'z', 'y1'])
self.add('d2', SellarDis2(), promotes=['z', 'y1'])
self.connect('state_eq.y2_command', 'd1.y2')
self.connect('d2.y2', 'state_eq.y2_actual')
self.add('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
z=np.array([0.0, 0.0]), x=0.0, y1=0.0, y2=0.0),
promotes=['x', 'z', 'y1', 'obj'])
self.connect('d2.y2', 'obj_cmp.y2')
self.add('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
self.add('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2'])
self.connect('d2.y2', 'con_cmp2.y2')
self.nl_solver = Newton()
self.ln_solver = ScipyGMRES()
</code></pre>
<p><strong>example.py</strong></p>
<pre><code>from sellar_backend import *
top = Problem()
top.root = SellarStateConnection()
top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'
top.driver.options['tol'] = 1.0e-8
top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
upper=np.array([10.0, 10.0]))
top.driver.add_desvar('x', lower=0.0, upper=10.0)
# This is my best attempt so far at adding a constraint at this level
top.add('new_constraint', ExecComp('new_con = -3.16 - obj'), promotes=['*'])
top.driver.add_constraint('new_constraint', upper=0.0)
top.driver.add_objective('obj')
top.driver.add_constraint('con1', upper=0.0)
top.driver.add_constraint('con2', upper=0.0)
top.setup()
top.run()
print("\n")
print( "Minimum found at (%f, %f, %f)" % (top['z'][0], \
top['z'][1], \
top['x']))
print("Coupling vars: %f, %f" % (top['y1'], top['d2.y2']))
print("Minimum objective: ", top['obj'])
</code></pre>
<p>This fails with <code>AttributeError: 'Problem' object has no attribute 'add'</code>. It would be very very convenient to add this new constraint at the Problem level.</p>
| 0 |
2016-09-15T20:53:59Z
| 39,520,407 |
<p>You were close. The <code>add</code> method is on top.root, not top. You add components to Groups, in this case the root group of the problem.</p>
<pre><code># This is my best attempt so far at adding a constraint at this level
top.root.add('new_constraint', ExecComp('new_con = -3.16 - obj'), promotes=['*'])
top.driver.add_constraint('new_con', upper=0.0)
</code></pre>
<p>Also, since you promoted '*' in the first line, the quantity you want to constrain is called 'new_con'.</p>
| 1 |
2016-09-15T21:09:31Z
|
[
"python",
"optimization",
"constraints",
"openmdao"
] |
How to subscribe to a list of multiple kafka wildcard patterns using kafka-python?
| 39,520,222 |
<p>I'm subscribing to Kafka using a pattern with a wildcard, as shown below. The wildcard represents a dynamic customer id.</p>
<pre><code>consumer.subscribe(pattern='customer.*.validations')
</code></pre>
<p>This works well, because I can pluck the customer Id from the topic string. But now I need to expand on the functionality to listen to a similar topic for a slightly different purpose. Let's call it <code>customer.*.additional-validations</code>. The code needs to live in the same project because so much functionality is shared, but I need to be able to take a different path based on the type of queue.</p>
<p>In the <a href="http://kafka-python.readthedocs.io/en/master/#kafkaconsumer" rel="nofollow">Kafka documentation</a> I can see that it is possible to subscribe to an array of topics. However these are hard-coded strings. Not patterns that allow for flexibility.</p>
<pre><code>>>> # Deserialize msgpack-encoded values
>>> consumer = KafkaConsumer(value_deserializer=msgpack.loads)
>>> consumer.subscribe(['msgpackfoo'])
>>> for msg in consumer:
... assert isinstance(msg.value, dict)
</code></pre>
<p>So I'm wondering if it is possible to somehow do a combination of the two? Kind of like this (non-working):</p>
<pre><code>consumer.subscribe(pattern=['customer.*.validations', 'customer.*.additional-validations'])
</code></pre>
| 3 |
2016-09-15T20:56:21Z
| 39,574,630 |
<p>In the KafkaConsumer code, <code>subscribe</code> supports either a list of topics or a single regex pattern,</p>
<p><a href="https://github.com/dpkp/kafka-python/blob/68c8fa4ad01f8fef38708f257cb1c261cfac01ab/kafka/consumer/group.py#L717" rel="nofollow">https://github.com/dpkp/kafka-python/blob/68c8fa4ad01f8fef38708f257cb1c261cfac01ab/kafka/consumer/group.py#L717</a></p>
<pre><code> def subscribe(self, topics=(), pattern=None, listener=None):
"""Subscribe to a list of topics, or a topic regex pattern
Partitions will be dynamically assigned via a group coordinator.
Topic subscriptions are not incremental: this list will replace the
current assignment (if there is one).
</code></pre>
<p>So you can create a single regex that combines both patterns with an OR condition using <code>|</code>; this effectively subscribes to multiple dynamic topic patterns, since the matching is done internally with the <code>re</code> module.</p>
<p><code>(customer.*.validations)|(customer.*.additional-validations)</code></p>
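<p>As a quick sanity check outside Kafka, you can verify with plain <code>re</code> that the combined pattern matches both topic shapes (the topic names below are made up):</p>

```python
import re

# the same pattern string you would pass to consumer.subscribe(pattern=...)
pattern = re.compile('(customer.*.validations)|(customer.*.additional-validations)')

# both topic forms match, so one subscription covers both
print(bool(pattern.match('customer.42.validations')))
print(bool(pattern.match('customer.42.additional-validations')))
```

<p>You can then branch on the topic name of each received message to take the appropriate code path.</p>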
| 4 |
2016-09-19T13:38:16Z
|
[
"python",
"apache-kafka",
"python-kafka"
] |
Weird behaviour with threads and processes mixing
| 39,520,236 |
<p>I'm running the following python code:</p>
<pre><code>import threading
import multiprocessing
def forever_print():
while True:
print("")
def main():
t = threading.Thread(target=forever_print)
t.start()
return
if __name__=='__main__':
p = multiprocessing.Process(target=main)
p.start()
p.join()
print("main process on control")
</code></pre>
<p>It terminates. </p>
<p>When I unwrapped <code>main</code> from the new process, and just ran it directly, like this:</p>
<pre><code>if __name__ == '__main__':
main()
</code></pre>
<p>The script went on forever, as I thought it should. Am I wrong to assume that, given that <code>t</code> is a non-daemon thread, <code>p</code> shouldn't halt in the first case?</p>
<p>I basically set up this little test because i've been developing an app in which threads are spawned inside subprocesses, and it's been showing some weird behaviour (sometimes it terminates properly, sometimes it doesn't). I guess what I wanted to know, in a broader sense, is if there is some sort of "gotcha" when mixing these two python libs.</p>
<p>My running environment: python 2.7 @ Ubuntu 14.04 LTS</p>
| 2 |
2016-09-15T20:57:02Z
| 39,520,316 |
<p>You need to call <code>t.join()</code> in your <code>main</code> function.</p>
<p>As your <code>main</code> function returns, the process gets terminated with both its threads.</p>
<p><code>p.join()</code> blocks the main thread waiting for the spawned process to end. Your spawned process then creates a thread but does not wait for it to end. It returns immediately, thus trashing the thread itself.</p>
<p>While threads share memory, processes don't. Therefore, the thread you create in the newly spawned process remains relegated to that process. The parent process is not aware of it.</p>
| 2 |
2016-09-15T21:02:08Z
|
[
"python",
"multithreading",
"multiprocessing"
] |
Weird behaviour with threads and processes mixing
| 39,520,236 |
<p>I'm running the following python code:</p>
<pre><code>import threading
import multiprocessing
def forever_print():
while True:
print("")
def main():
t = threading.Thread(target=forever_print)
t.start()
return
if __name__=='__main__':
p = multiprocessing.Process(target=main)
p.start()
p.join()
print("main process on control")
</code></pre>
<p>It terminates. </p>
<p>When I unwrapped <code>main</code> from the new process, and just ran it directly, like this:</p>
<pre><code>if __name__ == '__main__':
main()
</code></pre>
<p>The script went on forever, as I thought it should. Am I wrong to assume that, given that <code>t</code> is a non-daemon thread, <code>p</code> shouldn't halt in the first case?</p>
<p>I basically set up this little test because i've been developing an app in which threads are spawned inside subprocesses, and it's been showing some weird behaviour (sometimes it terminates properly, sometimes it doesn't). I guess what I wanted to know, in a broader sense, is if there is some sort of "gotcha" when mixing these two python libs.</p>
<p>My running environment: python 2.7 @ Ubuntu 14.04 LTS</p>
| 2 |
2016-09-15T20:57:02Z
| 39,520,414 |
<p>For now, threads created <em>by</em> <code>multiprocessing</code> worker processes act like daemon threads with respect to process termination: the worker process exits without waiting for the threads it created to terminate. This is due to worker processes using <code>os._exit()</code> to shut down, which skips most normal shutdown processing (and in particular skips the normal exit processing code (<code>sys.exit()</code>) that <code>.join()</code>'s non-daemon <code>threading.Thread</code>s).</p>
<p>The easiest workaround is for worker processes to explicitly <code>.join()</code> the non-daemon threads they create.</p>
<p>There's an open bug report about this behavior, but it hasn't made much progress: <a href="http://bugs.python.org/issue18966" rel="nofollow">http://bugs.python.org/issue18966</a></p>
| 2 |
2016-09-15T21:09:45Z
|
[
"python",
"multithreading",
"multiprocessing"
] |
Weird behaviour with threads and processes mixing
| 39,520,236 |
<p>I'm running the following python code:</p>
<pre><code>import threading
import multiprocessing
def forever_print():
while True:
print("")
def main():
t = threading.Thread(target=forever_print)
t.start()
return
if __name__=='__main__':
p = multiprocessing.Process(target=main)
p.start()
p.join()
print("main process on control")
</code></pre>
<p>It terminates. </p>
<p>When I unwrapped <code>main</code> from the new process, and just ran it directly, like this:</p>
<pre><code>if __name__ == '__main__':
main()
</code></pre>
<p>The script went on forever, as I thought it should. Am I wrong to assume that, given that <code>t</code> is a non-daemon thread, <code>p</code> shouldn't halt in the first case?</p>
<p>I basically set up this little test because i've been developing an app in which threads are spawned inside subprocesses, and it's been showing some weird behaviour (sometimes it terminates properly, sometimes it doesn't). I guess what I wanted to know, in a broader sense, is if there is some sort of "gotcha" when mixing these two python libs.</p>
<p>My running environment: python 2.7 @ Ubuntu 14.04 LTS</p>
| 2 |
2016-09-15T20:57:02Z
| 39,520,417 |
<p>The gotcha is that the <code>multiprocessing</code> machinery calls <a href="https://docs.python.org/2/library/os.html#os._exit" rel="nofollow"><code>os._exit()</code></a> after your target function exits, which violently kills the child process, even if it has background threads running.</p>
<p>The code for <code>Process.start()</code> looks like this:</p>
<pre><code>def start(self):
'''
Start child process
'''
assert self._popen is None, 'cannot start a process twice'
assert self._parent_pid == os.getpid(), \
'can only start a process object created by current process'
assert not _current_process._daemonic, \
'daemonic processes are not allowed to have children'
_cleanup()
if self._Popen is not None:
Popen = self._Popen
else:
from .forking import Popen
self._popen = Popen(self)
_current_process._children.add(self)
</code></pre>
<p><code>Popen.__init__</code> looks like this:</p>
<pre><code> def __init__(self, process_obj):
sys.stdout.flush()
sys.stderr.flush()
self.returncode = None
self.pid = os.fork() # This forks a new process
if self.pid == 0: # This if block runs in the new process
if 'random' in sys.modules:
import random
random.seed()
code = process_obj._bootstrap() # This calls your target function
sys.stdout.flush()
sys.stderr.flush()
os._exit(code) # Violent death of the child process happens here
</code></pre>
<p>The _bootstrap method is the one that actually executes the <code>target</code> function you passed to the <code>Process</code> object. In your case, that's <code>main</code>. <code>main</code> returns right after you start your background thread; by itself that wouldn't end the <em>process</em>, because there's still a non-daemon thread running.</p>
<p>However, as soon as execution hits <code>os._exit(code)</code>, the child process is killed, regardless of any non-daemon threads still executing.</p>
| 2 |
2016-09-15T21:10:06Z
|
[
"python",
"multithreading",
"multiprocessing"
] |
Ignore preceding values for a given column when calculating rolling.mean using Pandas
| 39,520,283 |
<p>Here is a <em>very small</em> subset of time series data that I have:</p>
<pre><code>Date Client Value
01-Sep-2016T ABC 160000
02-Sep-2016T ABC 150000
03-Sep-2016T ABC 190000
04-Sep-2016T ABC 200000
05-Sep-2016T ABC 140000
06-Sep-2016T ABC 120000
07-Sep-2016T ABC 185000
08-Sep-2016T ABC 119000
01-Sep-2016T DEF 200
02-Sep-2016T DEF 100
03-Sep-2016T DEF 150
04-Sep-2016T DEF 10
05-Sep-2016T DEF 5
06-Sep-2016T DEF 160
07-Sep-2016T DEF 150
08-Sep-2016T DEF 3
</code></pre>
<p>I create a data frame, as follows:</p>
<pre><code>dataFrame = pd.read_csv('test_data_02.csv')
</code></pre>
<p>Then, I attempt to add a moving average of the <code>Value</code> column, as follows:</p>
<pre><code>dataFrame['Value_MovingAverage'] = dataFrame['Value'].rolling(window=3, min_periods=1, center=False).mean()
</code></pre>
<p>Then, when I call <code>dataFrame.head(20)</code> to see the resulting <code>Value_MovingAverage</code> column, I see:</p>
<pre><code> Date Client Value Value_MovingAverage
0 01-Sep ABC 160000 160000.000000
1 02-Sep ABC 150000 155000.000000
2 03-Sep ABC 190000 166666.666667
3 04-Sep ABC 200000 180000.000000
4 05-Sep ABC 140000 176666.666667
5 06-Sep ABC 120000 153333.333333
6 07-Sep ABC 185000 148333.333333
7 08-Sep ABC 119000 141333.333333
8 01-Sep DEF 200 **101400.000000**
9 02-Sep DEF 100 39766.666667
10 03-Sep DEF 150 150.000000
11 04-Sep DEF 10 86.666667
12 05-Sep DEF 5 55.000000
13 06-Sep DEF 160 58.333333
14 07-Sep DEF 150 105.000000
15 08-Sep DEF 3 104.333333
</code></pre>
<p>As we can see, the <code>Value_MovingAverage</code> for the 'DEF' clients is affected by the very high values for the preceding two 'ABC' clients. For example, index # 8 is showing a 3-day moving average for 'DEF' of 101400.000000, because it's using the following values:</p>
<p>185,000
119,000
200</p>
<p>average --> 101400</p>
<p>I'm trying to get the Value_MovingAverage for index # 8 to show nothing (because there are no preceding values for client 'DEF') and index # 14 to show a Value_MovingAverage of 58.33333, because it's referencing the following:</p>
<p>160
10
5
average --> 58.33333</p>
<p>My questions are:</p>
<p>1) how do I tell Pandas to ignore the values for 'ABC' when computing the moving average for the 'DEF' clients (and so on for all other 'Client' values in the entire data frame)? Note that I have hundreds of 'Client' values, so creating different frames (one for each 'Client') and then applying the rolling average is not really an option.</p>
<p>2) how do I offset the moving average by one row so that the average for a given number of rows doesn't take <strong>itself</strong> into account?</p>
<p>Thanks in advance!</p>
| 1 |
2016-09-15T20:59:48Z
| 39,520,351 |
<p>I have a solution for you that doesn't directly answer the specific question you asked, but will probably solve the problem you actually have ;)</p>
<p>To wit: The <code>groupby</code> feature of Pandas. </p>
<p>Obviously your dataframe is <em>not</em> just a simple time series. It is instead a bunch of time series, concatenated for different values of the <code>Client</code> column ('ABC', 'DEF', and so on).</p>
<p>It looks like in the grand scheme of things you know how to use pandas stuff (e.g. <code>rolling</code>) so I leave it to you to figure out how to use <code>groupby</code>, but feel free to return with more questions if you can't get it to work :)</p>
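<p>For what it's worth, a minimal sketch of that approach on a cut-down version of your data (column names taken from your sample). The <code>shift(1)</code> line also covers your second question, since it makes each row's average exclude the row itself:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Client': ['ABC'] * 3 + ['DEF'] * 3,
    'Value':  [160000, 150000, 190000, 200, 100, 150],
})

# restart the rolling window for each client, so 'DEF' never sees 'ABC' values
df['Value_MovingAverage'] = (df.groupby('Client')['Value']
                               .transform(lambda s: s.rolling(3, min_periods=1).mean()))

# shift by one row within each client so a row's average doesn't include itself
df['Value_MovingAverage_Lagged'] = df.groupby('Client')['Value_MovingAverage'].shift(1)

print(df)
```

<p>The first 'DEF' row now gets its own value (200) as the window average instead of being contaminated by 'ABC', and the lagged column is NaN at the start of each client's series.</p>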
| 1 |
2016-09-15T21:04:44Z
|
[
"python",
"pandas",
"moving-average"
] |
Ignore preceding values for a given column when calculating rolling.mean using Pandas
| 39,520,283 |
<p>Here is a <em>very small</em> subset of time series data that I have:</p>
<pre><code>Date Client Value
01-Sep-2016T ABC 160000
02-Sep-2016T ABC 150000
03-Sep-2016T ABC 190000
04-Sep-2016T ABC 200000
05-Sep-2016T ABC 140000
06-Sep-2016T ABC 120000
07-Sep-2016T ABC 185000
08-Sep-2016T ABC 119000
01-Sep-2016T DEF 200
02-Sep-2016T DEF 100
03-Sep-2016T DEF 150
04-Sep-2016T DEF 10
05-Sep-2016T DEF 5
06-Sep-2016T DEF 160
07-Sep-2016T DEF 150
08-Sep-2016T DEF 3
</code></pre>
<p>I create a data frame, as follows:</p>
<pre><code>dataFrame = pd.read_csv('test_data_02.csv')
</code></pre>
<p>Then, I attempt to add a moving average of the <code>Value</code> column, as follows:</p>
<pre><code>dataFrame['Value_MovingAverage'] = dataFrame['Value'].rolling(window=3, min_periods=1, center=False).mean()
</code></pre>
<p>Then, when I call <code>dataFrame.head(20)</code> to see the resulting <code>Value_MovingAverage</code> column, I see:</p>
<pre><code> Date Client Value Value_MovingAverage
0 01-Sep ABC 160000 160000.000000
1 02-Sep ABC 150000 155000.000000
2 03-Sep ABC 190000 166666.666667
3 04-Sep ABC 200000 180000.000000
4 05-Sep ABC 140000 176666.666667
5 06-Sep ABC 120000 153333.333333
6 07-Sep ABC 185000 148333.333333
7 08-Sep ABC 119000 141333.333333
8 01-Sep DEF 200 **101400.000000**
9 02-Sep DEF 100 39766.666667
10 03-Sep DEF 150 150.000000
11 04-Sep DEF 10 86.666667
12 05-Sep DEF 5 55.000000
13 06-Sep DEF 160 58.333333
14 07-Sep DEF 150 105.000000
15 08-Sep DEF 3 104.333333
</code></pre>
<p>As we can see, the <code>Value_MovingAverage</code> for the 'DEF' clients is affected by the very high values for the preceding two 'ABC' clients. For example, index # 8 is showing a 3-day moving average for 'DEF' of 101400.000000, because it's using the following values:</p>
<p>185,000
119,000
200</p>
<p>average --> 101400</p>
<p>I'm trying to get the Value_MovingAverage for index # 8 to show nothing (because there are no preceding values for client 'DEF') and index # 14 to show a Value_MovingAverage of 58.33333, because it's referencing the following:</p>
<p>160
10
5
average --> 58.33333</p>
<p>My questions are:</p>
<p>1) how do I tell Pandas to ignore the values for 'ABC' when computing the moving average for the 'DEF' clients (and so on for all other 'Client' values in the entire data frame)? Note that I have hundreds of 'Client' values, so creating different frames (one for each 'Client') and then applying the rolling average is not really an option.</p>
<p>2) how do I offset the moving average by one row so that the average for a given number of rows doesn't take <strong>itself</strong> into account?</p>
<p>Thanks in advance!</p>
| 1 |
2016-09-15T20:59:48Z
| 39,520,500 |
<p><strong>UPDATE:</strong></p>
<pre><code>In [41]: df['new'] = (df.groupby('Client', as_index=False)
....: .rolling(3, min_periods=1, center=False)
....: .Value.mean()
....: .reset_index(drop=True))
In [42]: df
Out[42]:
Date Client Value new
0 01-Sep-2016T ABC 160000 160000.000000
1 02-Sep-2016T ABC 150000 155000.000000
2 03-Sep-2016T ABC 190000 166666.666667
3 04-Sep-2016T ABC 200000 180000.000000
4 05-Sep-2016T ABC 140000 176666.666667
5 06-Sep-2016T ABC 120000 153333.333333
6 07-Sep-2016T ABC 185000 148333.333333
7 08-Sep-2016T ABC 119000 141333.333333
8 01-Sep-2016T DEF 200 200.000000
9 02-Sep-2016T DEF 100 150.000000
10 03-Sep-2016T DEF 150 150.000000
11 04-Sep-2016T DEF 10 86.666667
12 05-Sep-2016T DEF 5 55.000000
13 06-Sep-2016T DEF 160 58.333333
14 07-Sep-2016T DEF 150 105.000000
15 08-Sep-2016T DEF 3 104.333333
</code></pre>
<p><strong>Old answer:</strong></p>
<pre><code>In [28]: df.groupby('Client').rolling(3, min_periods=1, center=False).mean()
Out[28]:
Date Client Value
Client
ABC 0 01-Sep-2016T ABC 160000.000000
1 02-Sep-2016T ABC 155000.000000
2 03-Sep-2016T ABC 166666.666667
3 04-Sep-2016T ABC 180000.000000
4 05-Sep-2016T ABC 176666.666667
5 06-Sep-2016T ABC 153333.333333
6 07-Sep-2016T ABC 148333.333333
7 08-Sep-2016T ABC 141333.333333
DEF 8 01-Sep-2016T DEF 200.000000
9 02-Sep-2016T DEF 150.000000
10 03-Sep-2016T DEF 150.000000
11 04-Sep-2016T DEF 86.666667
12 05-Sep-2016T DEF 55.000000
13 06-Sep-2016T DEF 58.333333
14 07-Sep-2016T DEF 105.000000
15 08-Sep-2016T DEF 104.333333
</code></pre>
<p>or:</p>
<pre><code>In [31]: df.groupby('Client', as_index=False).rolling(3, min_periods=1, center=False).mean().reset_index(drop=True)
Out[31]:
Date Client Value
0 01-Sep-2016T ABC 160000.000000
1 02-Sep-2016T ABC 155000.000000
2 03-Sep-2016T ABC 166666.666667
3 04-Sep-2016T ABC 180000.000000
4 05-Sep-2016T ABC 176666.666667
5 06-Sep-2016T ABC 153333.333333
6 07-Sep-2016T ABC 148333.333333
7 08-Sep-2016T ABC 141333.333333
8 01-Sep-2016T DEF 200.000000
9 02-Sep-2016T DEF 150.000000
10 03-Sep-2016T DEF 150.000000
11 04-Sep-2016T DEF 86.666667
12 05-Sep-2016T DEF 55.000000
13 06-Sep-2016T DEF 58.333333
14 07-Sep-2016T DEF 105.000000
15 08-Sep-2016T DEF 104.333333
</code></pre>
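<p>Question 2 (offsetting the average so a row's own value is excluded) isn't covered above. A minimal sketch, using a small hypothetical frame shaped like the question's data, is to <code>shift</code> each group's rolling mean down by one row:</p>

```python
import math
import pandas as pd

# Hypothetical sample mirroring the question's layout.
df = pd.DataFrame({
    'Client': ['ABC'] * 4 + ['DEF'] * 4,
    'Value': [160000, 150000, 190000, 200000, 200, 100, 150, 10],
})

# Per-group rolling mean, shifted one row so each row's average
# only uses the rows before it; the first row of each group is NaN.
df['lagged_ma'] = df.groupby('Client')['Value'].transform(
    lambda s: s.rolling(3, min_periods=1).mean().shift())
print(df)
```

<p><code>transform</code> keeps the result aligned with the original index, so the shifted averages never leak across the 'Client' boundary.</p>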
| 1 |
2016-09-15T21:16:28Z
|
[
"python",
"pandas",
"moving-average"
] |
Filtering / smoothing a step function to retrieve biggest increments
| 39,520,349 |
<p>I have a pandas Series where the index are date time.</p>
<p>I can plot my function with the <code>step()</code> function which plots each point of the Series relatively to the time (x is the time).</p>
<p>I want a less precise view of the evolution in time, so I need to reduce the number of steps and ignore the smallest increments.
<a href="http://i.stack.imgur.com/pETzx.png" rel="nofollow"><img src="http://i.stack.imgur.com/pETzx.png" alt="enter image description here"></a>
The only way I found is to use the <code>poly1d()</code> function from numpy to interpolate the points as a polynomial, and then to step the function. Unfortunately I am losing the time index during the transformation because the index of a polynomial is x. </p>
<p>Is there a way to "simplify" my function to only get the dates (x values) of the biggest changes on the y axis, instead of having all the dates for any change?
As I wrote above, I'd like to have only the biggest increments and not the small changes.</p>
<p>Here is the exact data:</p>
<pre><code>2016-01-02 -5.418440
2016-01-09 -9.137942
2016-01-16 -9.137942
2016-01-23 -9.137942
2016-01-30 -9.137942
2016-02-06 -11.795107
2016-02-13 -11.795107
2016-02-20 -11.795107
2016-02-27 -11.795107
2016-03-05 -11.795107
2016-03-12 -13.106988
2016-03-19 -13.106988
2016-03-26 -13.106988
2016-04-02 -13.106988
2016-04-09 -13.106988
2016-04-16 -13.106988
2016-04-23 -13.106988
2016-04-30 -11.458878
2016-05-07 0.051123
2016-05-14 2.010179
2016-05-21 -3.210870
2016-05-28 -0.726291
2016-06-04 5.841818
2016-06-11 5.067061
2016-06-18 5.789375
2016-06-25 16.455159
2016-07-02 22.518294
2016-07-09 39.834977
2016-07-16 54.685965
2016-07-23 54.685965
2016-07-30 55.169290
2016-08-06 55.169290
2016-08-13 55.169290
2016-08-20 53.366569
2016-08-27 45.758675
2016-09-03 10.976592
2016-09-10 -0.554887
2016-09-17 -8.653451
2016-09-24 -18.198305
2016-10-01 -22.218711
2016-10-08 -21.158434
2016-10-15 -11.723798
2016-10-22 -9.928957
2016-10-29 -17.498315
2016-11-05 -22.850454
2016-11-12 -25.190656
2016-11-19 -27.250960
2016-11-26 -27.250960
2016-12-03 -27.250960
2016-12-10 -27.250960
</code></pre>
| 1 |
2016-09-15T21:04:31Z
| 39,521,031 |
<p>One way is to create a mask from your original series where the absolute difference in value from the previous value in the series is compared against your sensitivity threshold. The mask is simply a boolean selection array (matrix) for filtering your original series.</p>
<pre><code># my_series is your Series
threshold = 10.0
diff_series = my_series.diff().abs()
mask = diff_series > threshold
# now plot the masked values only, or create a new series from them, etc.
my_series[mask].plot()
</code></pre>
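<p>A runnable sketch of the same idea, using an abbreviated stand-in for the question's weekly series:</p>

```python
import pandas as pd

# Abbreviated sample of the question's weekly data (values shortened).
s = pd.Series([-5.4, -9.1, -9.1, -11.8, -11.8, 0.05, 2.0, 16.5, 54.7, 54.7],
              index=pd.date_range('2016-01-02', periods=10, freq='W-SAT'))

threshold = 10.0
# True only where the jump from the previous week exceeds the threshold.
mask = s.diff().abs() > threshold
big_steps = s[mask]
print(big_steps)
```

<p>Note the first element of <code>diff()</code> is NaN, so the very first date can never be selected.</p>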
| 0 |
2016-09-15T22:05:38Z
|
[
"python",
"pandas",
"numpy"
] |
Filtering / smoothing a step function to retrieve biggest increments
| 39,520,349 |
<p>I have a pandas Series where the index are date time.</p>
<p>I can plot my function with the <code>step()</code> function which plots each point of the Series relatively to the time (x is the time).</p>
<p>I want a less precise view of the evolution in time, so I need to reduce the number of steps and ignore the smallest increments.
<a href="http://i.stack.imgur.com/pETzx.png" rel="nofollow"><img src="http://i.stack.imgur.com/pETzx.png" alt="enter image description here"></a>
The only way I found is to use the <code>poly1d()</code> function from numpy to interpolate the points as a polynomial, and then to step the function. Unfortunately I am losing the time index during the transformation because the index of a polynomial is x. </p>
<p>Is there a way to "simplify" my function to only get the dates (x values) of the biggest changes on the y axis, instead of having all the dates for any change?
As I wrote above, I'd like to have only the biggest increments and not the small changes.</p>
<p>Here is the exact data:</p>
<pre><code>2016-01-02 -5.418440
2016-01-09 -9.137942
2016-01-16 -9.137942
2016-01-23 -9.137942
2016-01-30 -9.137942
2016-02-06 -11.795107
2016-02-13 -11.795107
2016-02-20 -11.795107
2016-02-27 -11.795107
2016-03-05 -11.795107
2016-03-12 -13.106988
2016-03-19 -13.106988
2016-03-26 -13.106988
2016-04-02 -13.106988
2016-04-09 -13.106988
2016-04-16 -13.106988
2016-04-23 -13.106988
2016-04-30 -11.458878
2016-05-07 0.051123
2016-05-14 2.010179
2016-05-21 -3.210870
2016-05-28 -0.726291
2016-06-04 5.841818
2016-06-11 5.067061
2016-06-18 5.789375
2016-06-25 16.455159
2016-07-02 22.518294
2016-07-09 39.834977
2016-07-16 54.685965
2016-07-23 54.685965
2016-07-30 55.169290
2016-08-06 55.169290
2016-08-13 55.169290
2016-08-20 53.366569
2016-08-27 45.758675
2016-09-03 10.976592
2016-09-10 -0.554887
2016-09-17 -8.653451
2016-09-24 -18.198305
2016-10-01 -22.218711
2016-10-08 -21.158434
2016-10-15 -11.723798
2016-10-22 -9.928957
2016-10-29 -17.498315
2016-11-05 -22.850454
2016-11-12 -25.190656
2016-11-19 -27.250960
2016-11-26 -27.250960
2016-12-03 -27.250960
2016-12-10 -27.250960
</code></pre>
| 1 |
2016-09-15T21:04:31Z
| 39,521,035 |
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow">pandas resample function</a>. </p>
<p>Import the data and set the columns to 'Date' and 'Values'. The rest parses the Date column as datetime. </p>
<pre><code>import pandas as pd
from datetime import datetime
df.columns = ['Date','Values']
df.Date = df.Date.map(lambda x: datetime.strptime(x,'%Y-%m-%d'))
df.set_index('Date',inplace=True)
</code></pre>
<p>You can now resample the timeseries. E.g by month:</p>
<pre><code>resampled_df = df.resample('M').mean()
resampled_df.head()
</code></pre>
<p>And finally, to plot it.</p>
<pre><code>resampled_df.plot()
</code></pre>
| -1 |
2016-09-15T22:06:01Z
|
[
"python",
"pandas",
"numpy"
] |
Filtering / smoothing a step function to retrieve biggest increments
| 39,520,349 |
<p>I have a pandas Series where the index are date time.</p>
<p>I can plot my function with the <code>step()</code> function which plots each point of the Series relatively to the time (x is the time).</p>
<p>I want a less precise view of the evolution in time, so I need to reduce the number of steps and ignore the smallest increments.
<a href="http://i.stack.imgur.com/pETzx.png" rel="nofollow"><img src="http://i.stack.imgur.com/pETzx.png" alt="enter image description here"></a>
The only way I found is to use the <code>poly1d()</code> function from numpy to interpolate the points as a polynomial, and then to step the function. Unfortunately I am losing the time index during the transformation because the index of a polynomial is x. </p>
<p>Is there a way to "simplify" my function to only get the dates (x values) of the biggest changes on the y axis, instead of having all the dates for any change?
As I wrote above, I'd like to have only the biggest increments and not the small changes.</p>
<p>Here is the exact data:</p>
<pre><code>2016-01-02 -5.418440
2016-01-09 -9.137942
2016-01-16 -9.137942
2016-01-23 -9.137942
2016-01-30 -9.137942
2016-02-06 -11.795107
2016-02-13 -11.795107
2016-02-20 -11.795107
2016-02-27 -11.795107
2016-03-05 -11.795107
2016-03-12 -13.106988
2016-03-19 -13.106988
2016-03-26 -13.106988
2016-04-02 -13.106988
2016-04-09 -13.106988
2016-04-16 -13.106988
2016-04-23 -13.106988
2016-04-30 -11.458878
2016-05-07 0.051123
2016-05-14 2.010179
2016-05-21 -3.210870
2016-05-28 -0.726291
2016-06-04 5.841818
2016-06-11 5.067061
2016-06-18 5.789375
2016-06-25 16.455159
2016-07-02 22.518294
2016-07-09 39.834977
2016-07-16 54.685965
2016-07-23 54.685965
2016-07-30 55.169290
2016-08-06 55.169290
2016-08-13 55.169290
2016-08-20 53.366569
2016-08-27 45.758675
2016-09-03 10.976592
2016-09-10 -0.554887
2016-09-17 -8.653451
2016-09-24 -18.198305
2016-10-01 -22.218711
2016-10-08 -21.158434
2016-10-15 -11.723798
2016-10-22 -9.928957
2016-10-29 -17.498315
2016-11-05 -22.850454
2016-11-12 -25.190656
2016-11-19 -27.250960
2016-11-26 -27.250960
2016-12-03 -27.250960
2016-12-10 -27.250960
</code></pre>
| 1 |
2016-09-15T21:04:31Z
| 39,524,763 |
<p>so this is my idea:</p>
<pre><code># Load the data
import pandas as pd
a = pd.read_table('<your_data_file>', delim_whitespace=True, names=['value'], index_col=0)
# Create and additional column containing the difference
#+between two consecutive values:
a['diff'] = a.value.diff()
# select only the value of the 'diff' column higher than a certain threshold
#+and copy them to a new frame:
b = a[abs(a['diff']) > .5] # The threshold (.5) could be what you think is the best
# Plot your new graph
b.value.plot()
</code></pre>
<p>Hope this is helpful...</p>
| 1 |
2016-09-16T06:09:37Z
|
[
"python",
"pandas",
"numpy"
] |
Filtering a multiindex dataframe based on column values dropping all rows inside level
| 39,520,362 |
<p>I am trying to filter a DataFrame based on one or more values. Here is an example CSV:</p>
<pre><code>AlignmentId,TranscriptId,classifier,value
ENSMUST00000025010-1,ENSMUST00000025010,AlnCoverage,0.99612
ENSMUST00000025010-1,ENSMUST00000025010,AlnIdentity,0.93553
ENSMUST00000025010-1,ENSMUST00000025010,Badness,0.06749
ENSMUST00000025014-1,ENSMUST00000025014,AlnCoverage,1.0
ENSMUST00000025014-1,ENSMUST00000025014,AlnIdentity,0.96382
ENSMUST00000025014-1,ENSMUST00000025014,Badness,0.03618
</code></pre>
<p>And when loaded:</p>
<pre><code>>>> df = pd.read_csv('tmp.csv', index_col=['AlignmentId', 'TranscriptId'])
>>> df
classifier value
AlignmentId TranscriptId
ENSMUST00000025010-1 ENSMUST00000025010 AlnCoverage 0.99612
ENSMUST00000025010 AlnIdentity 0.93553
ENSMUST00000025010 Badness 0.06749
ENSMUST00000025014-1 ENSMUST00000025014 AlnCoverage 1.00000
ENSMUST00000025014 AlnIdentity 0.96382
ENSMUST00000025014 Badness 0.03618
</code></pre>
<p>I want to drop every <code>AlignmentId</code> group that fails a series of <code>classifiers</code>. For this example, let's say that I want to drop <code>ENSMUST00000025010</code> because <code>AlnCoverage < 1.0</code>. Thus, I want to end up with this dataframe:</p>
<pre><code>ENSMUST00000025014-1 ENSMUST00000025014 AlnCoverage 1.00000
ENSMUST00000025014 AlnIdentity 0.96382
ENSMUST00000025014 Badness 0.03618
</code></pre>
<p>How can I do so? </p>
| 2 |
2016-09-15T21:06:20Z
| 39,520,885 |
<p>try this:</p>
<pre><code>In [169]: df = df.drop(df[(df.classifier=='AlnCoverage') & (df.value < 1)].index)
In [170]: df
Out[170]:
classifier value
AlignmentId TranscriptId
ENSMUST00000025014-1 ENSMUST00000025014 AlnCoverage 1.00000
ENSMUST00000025014 AlnIdentity 0.96382
ENSMUST00000025014 Badness 0.03618
</code></pre>
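<p>If a group should be dropped whenever any of several classifier cutoffs fails, <code>groupby(...).filter</code> is another option. A minimal sketch, with hypothetical short ids standing in for the question's AlignmentIds:</p>

```python
import pandas as pd

# Hypothetical ids standing in for the question's AlignmentIds.
df = pd.DataFrame({
    'AlignmentId': ['a1'] * 3 + ['a2'] * 3,
    'classifier': ['AlnCoverage', 'AlnIdentity', 'Badness'] * 2,
    'value': [0.99612, 0.93553, 0.06749, 1.0, 0.96382, 0.03618],
})

# Keep a group only if all of its AlnCoverage rows meet the cutoff;
# additional cutoffs can be and-ed together inside the lambda.
kept = df.groupby('AlignmentId').filter(
    lambda g: (g.loc[g['classifier'] == 'AlnCoverage', 'value'] >= 1.0).all())
print(kept)
```

<p>Unlike row-wise <code>drop</code>, <code>filter</code> decides per group, which scales naturally to many cutoff conditions.</p>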
| 2 |
2016-09-15T21:50:09Z
|
[
"python",
"pandas",
"dataframe",
"multi-index"
] |
Plotting dates and associated values from a dictionary with Matplotlib
| 39,520,366 |
<p>I have a dictionary containing instances of Python's <code>datetime.date</code> and associated numeric values (integers). Something like this but a lot larger of course:</p>
<pre><code>{datetime.date(2016, 5, 31): 27, datetime.date(2016, 9, 1): 87}
</code></pre>
<p>I am trying to use Matplotlib in order to build a line graph that would display these numeric values (<code>y</code>) against these dates (<code>x</code>), in chronological order.</p>
<p>Something like this:</p>
<p><a href="http://i.stack.imgur.com/IFHdP.png" rel="nofollow">Example line graph</a> (I can't embed images.)</p>
<p>I am new to Matplotlib and fairly new to Python as well. I tried a few solutions but they wouldn't add anything meaningful to the question.</p>
<p>Any help would be appreciated.</p>
<p>Thank you</p>
| 0 |
2016-09-15T21:06:31Z
| 39,520,576 |
<p>Using only <code>matplotlib</code>:</p>
<pre><code>In [1]: import datetime; import matplotlib.pyplot as plt
In [2]: time_dict = {datetime.date(2016, 5, 31): 27, datetime.date(2016, 8, 1): 88, datetime.date(2016, 2, 5): 42, datetime.date(2016, 9, 1): 87}
In [3]: x,y = zip(*sorted(time_dict.items()))
In [4]: plt.plot(x,y)
Out[4]: [<matplotlib.lines.Line2D at 0x7f460689ee48>]
</code></pre>
<p>This is the plot:</p>
<p><a href="http://i.stack.imgur.com/ruOBD.png" rel="nofollow"><img src="http://i.stack.imgur.com/ruOBD.png" alt="enter image description here"></a></p>
<p>If you can use <code>pandas</code>, this task is relatively trivial as well:</p>
<pre><code>In [6]: import pandas as pd
In [7]: df = pd.DataFrame.from_items([(k,[v]) for k,v in time_dict.items()], orient='index', columns=['values'])
In [8]: df
Out[8]:
values
2016-05-31 27
2016-09-01 87
2016-02-05 42
2016-08-01 88
In [9]: df.sort_index(inplace=True)
In [10]: df
Out[10]:
values
2016-02-05 42
2016-05-31 27
2016-08-01 88
2016-09-01 87
In [11]: df.plot()
Out[11]: <matplotlib.axes._subplots.AxesSubplot at 0x7f4611879160>
</code></pre>
<p><a href="http://i.stack.imgur.com/g070Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/g070Q.png" alt="enter image description here"></a></p>
| 1 |
2016-09-15T21:22:29Z
|
[
"python",
"datetime",
"matplotlib"
] |
Phone 'MA' is mising in the acoustic model; word 'masoud' - pocketsphinx
| 39,520,456 |
<p>My name is <code>masoud</code>.
Now I want my app to print a console log when I say <code>masoud</code>. </p>
<p>to do this I made a <code>mdic.txt</code> file and I put my name inside it :</p>
<blockquote>
<p>masoud MA S O D</p>
</blockquote>
<p>I changed <code>mdic.txt</code> to <code>mdic.dict</code> and put it in the <code>assets/sync</code> directory.</p>
<p>I made a <code>cm.txt</code> file and I put a string inside it:</p>
<pre><code>#JSGF V1.0;
/**
* JSGF Grammar for Hello World example
*/
grammar masoud;
public <greet> = (good morning | masoud) ( bhiksha | evandro | paul | philip | rita | will );
</code></pre>
<p>and I changed <code>cm.txt</code> to <code>cm.gram</code> .</p>
<p>in my MainActivity</p>
<pre><code>private void setupRecognizer(File assetsDir) throws IOException {
// The recognizer can be configured to perform multiple searches
// of different kind and switch between them
recognizer = SpeechRecognizerSetup.defaultSetup()
.setAcousticModel(new File(assetsDir, "en-us-ptm"))
//.setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
.setDictionary(new File(assetsDir, "mdic.dict"))
//.setRawLogDir(assetsDir) // To disable logging of raw audio comment out this call (takes a lot of space on the device)
.setKeywordThreshold(1e-45f) // Threshold to tune for keyphrase to balance between false alarms and misses
.setBoolean("-allphone_ci", true) // Use context-independent phonetic search, context-dependent is too slow for mobile
.getRecognizer();
recognizer.addListener(this);
/** In your application you might not need to add all those searches.
* They are added here for demonstration. You can leave just one.
*/
// Create keyword-activation search.
//recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);
recognizer.addKeywordSearch(KWS_SEARCH, new File(assetsDir, "mdic.dict"));
</code></pre>
<p>now I got this message:</p>
<blockquote>
<p>"dict.c", line 195: Line 1: Phone 'MA' is mising in the acoustic
model; word 'masoud' ignored "kws_search.c", line 171: The word
'masoud' is missing in the dictionary</p>
</blockquote>
<p>I got this error at <code>recognizer.addKeywordSearch(KWS_SEARCH, new File(assetsDir, "mdic.dict"));</code> line .</p>
| 0 |
2016-09-15T21:13:08Z
| 39,525,224 |
<p>The correct transcription for masoud is "M AH S UW D". </p>
<p>There is no phone MA in acoustic model. The error says about that.</p>
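<p>So, using the transcription above, the line in <code>mdic.dict</code> should read:</p>
<pre><code>masoud M AH S UW D
</code></pre>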
| 1 |
2016-09-16T06:42:21Z
|
[
"android",
"python",
"pocketsphinx",
"pocketsphinx-android"
] |
How to measure the memory footprint of importing pandas?
| 39,520,532 |
<p>I am running Python on a low memory system.</p>
<p>I want to know whether or not importing pandas will increase memory usage significantly.</p>
<p>At present I just want to import pandas so that I can use the date_range function.</p>
| 0 |
2016-09-15T21:18:58Z
| 39,520,634 |
<p>You can try to use the <code>info()</code> method of <code>pd.DataFrame</code>; this will give you an idea of the memory usage.</p>
<pre><code>In [56]: df = pd.DataFrame(data=np.random.rand(5,5), columns=list('ABCDE'))
In [57]: df
Out[57]:
A B C D E
0 0.229201 0.145442 0.214964 0.205609 0.182592
1 0.709232 0.714943 0.983360 0.635155 0.949378
2 0.741204 0.532559 0.646229 0.649971 0.686386
3 0.073047 0.382106 0.121190 0.721732 0.146408
4 0.904605 0.115031 0.377635 0.377796 0.005747
In [58]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 5 columns):
A 5 non-null float64
B 5 non-null float64
C 5 non-null float64
D 5 non-null float64
E 5 non-null float64
dtypes: float64(5)
memory usage: 280.0 bytes ## <- check here!
</code></pre>
<p>I hope this helps!</p>
| 2 |
2016-09-15T21:26:44Z
|
[
"python",
"pandas"
] |
How to measure the memory footprint of importing pandas?
| 39,520,532 |
<p>I am running Python on a low memory system.</p>
<p>I want to know whether or not importing pandas will increase memory usage significantly.</p>
<p>At present I just want to import pandas so that I can use the date_range function.</p>
| 0 |
2016-09-15T21:18:58Z
| 39,520,649 |
<p>You may also want to use a Memory Profiler to get an idea of how much memory is allocated to your Pandas objects. There are several Python Memory Profilers you can use (a simple Google search can give you an idea). PySizer is one that I used a while ago.</p>
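<p>If what you care about is the cost of the import itself rather than any DataFrame, a rough Unix-only sketch is to compare the process's peak RSS before and after importing. The module measured below is just a stand-in; swap in <code>import pandas</code> on your own system:</p>

```python
import resource

def peak_rss_kb():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
import json  # stand-in; replace with `import pandas` to measure pandas
after = peak_rss_kb()
print("import grew peak RSS by about %d KB" % (after - before))
```

<p>Peak RSS only ever grows, so this gives an upper-bound style estimate; a dedicated memory profiler will be more precise.</p>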
| 3 |
2016-09-15T21:28:21Z
|
[
"python",
"pandas"
] |
Matplotlib colored sphere
| 39,520,555 |
<p>I have a data set which maps a tuple of phi and theta to
a value which represents the strength of the signal.
I want to plot these on a sphere. I simply followed
a demo from matplotlib and adjusted the code to my
use case. </p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
u = phi
v = theta
vals =vals/vals.max()
Map = cm.coolwarm
facecolors = Map(vals[:])
x = 10 * np.outer(np.cos(u), np.sin(v))
y = 10 * np.outer(np.sin(u), np.sin(v))
z = 10 * np.outer(np.ones(np.size(u)), np.cos(v))
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False, facecolors=facecolors)
plt.show()
</code></pre>
<p>This generates an error message <code>IndexError: index 4 is out of bounds for axis 0 with size 4</code>. I also looked into the source code, which seems
to indicate to me that facecolors isn't formatted correctly, but I'm
struggling to figure out what formatting is needed exactly.</p>
<p>Any help or other ways to achieve this goal would be greatly
appreciated.</p>
<p>Greetings</p>
| 0 |
2016-09-15T21:20:59Z
| 39,535,813 |
<p>If your question is: "How to get rid of this IndexError?", I modified your code and now it works. <code>plot_surface</code> takes X,Y,Z and facecolors as 2D arrays of corresponding <strong>values on a 2D grid</strong>. Facecolors in your case weren't and this was the source of your error. </p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm, colors
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
u, v = np.mgrid[0:np.pi:50j, 0:2*np.pi:50j]
strength = u
norm=colors.Normalize(vmin = np.min(strength),
vmax = np.max(strength), clip = False)
x = 10 * np.sin(u) * np.cos(v)
y = 10 * np.sin(u) * np.sin(v)
z = 10 * np.cos(u)
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False,
facecolors=cm.coolwarm(norm(strength)))
plt.show()
</code></pre>
<p>Result is this <a href="http://i.stack.imgur.com/pIpSd.png" rel="nofollow">image</a> of a sphere.</p>
<p>However, if your data is <strong>not on a 2D grid</strong> you are in trouble. Additionally if your grid is not regular the sphere you plot will look irregular as well. So if your question is: "<em>How to plot a heatmap on a sphere?</em>", there is already such a question and solution <a href="http://stackoverflow.com/questions/22128909/plotting-the-temperature-distribution-on-a-sphere-with-python">here</a> using <code>Basemap</code> package produces this result:
<a href="http://i.stack.imgur.com/8QDxC.png" rel="nofollow"><img src="http://i.stack.imgur.com/8QDxC.png" alt="enter image description here"></a></p>
| 2 |
2016-09-16T16:04:14Z
|
[
"python",
"matplotlib"
] |
allPrimes printing out a empty tuple Python
| 39,520,586 |
<p>So I have this code and it is supposed to print out a tuple with all the prime numbers in it. But instead, it's just printing out an empty tuple... </p>
<blockquote>
<p>Can anyone tell me why?
I also MUST USE A TUPLE.</p>
</blockquote>
<pre><code>def isPrime(number):
for i in range(2,int(number**(0.5))+1):
if number % i == 0:
return False
else:
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
</code></pre>
<blockquote>
<p>Here is the correct code</p>
</blockquote>
<pre><code>def isPrime(number):
if number < 2:
return False
for i in range(2, int(number ** (0.5)) + 1):
if number % i == 0:
return False
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
out[1]: (2, 3, 5, 7, 11, 13, 17, 19, 23)
</code></pre>
| 0 |
2016-09-15T21:23:06Z
| 39,520,638 |
<p>It's because your isPrime function doesn't work.</p>
<p><code>number % 1</code>, i.e. 'remainder when divided by 1', will always be zero for integers. </p>
| 1 |
2016-09-15T21:27:13Z
|
[
"python",
"tuples",
"primes"
] |
allPrimes printing out a empty tuple Python
| 39,520,586 |
<p>So I have this code and it is supposed to print out a tuple with all the prime numbers in it. But instead, it's just printing out an empty tuple... </p>
<blockquote>
<p>Can anyone tell me why?
I also MUST USE A TUPLE.</p>
</blockquote>
<pre><code>def isPrime(number):
for i in range(2,int(number**(0.5))+1):
if number % i == 0:
return False
else:
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
</code></pre>
<blockquote>
<p>Here is the correct code</p>
</blockquote>
<pre><code>def isPrime(number):
if number < 2:
return False
for i in range(2, int(number ** (0.5)) + 1):
if number % i == 0:
return False
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
out[1]: (2, 3, 5, 7, 11, 13, 17, 19, 23)
</code></pre>
| 0 |
2016-09-15T21:23:06Z
| 39,520,681 |
<p>Your <code>isPrime()</code> function starts at 1. Every integer is evenly divisible by 1, so it always returns <code>False</code>. Start at 2 instead.</p>
<pre><code>def isPrime(number):
if number < 2:
return False
for i in range(2, int(number ** (0.5)) + 1):
if number % i == 0:
return False
return True
</code></pre>
<p>Also, your allPrimes should probably use a list rather than a tuple, and you could use just <code>isPrime(i)</code> instead of <code>isPrime(i) == True</code>, but it'll work the way it is.</p>
| 2 |
2016-09-15T21:32:09Z
|
[
"python",
"tuples",
"primes"
] |
allPrimes printing out a empty tuple Python
| 39,520,586 |
<p>So I have this code and it is supposed to print out a tuple with all the prime numbers in it. But instead, it's just printing out an empty tuple... </p>
<blockquote>
<p>Can anyone tell me why?
I also MUST USE A TUPLE.</p>
</blockquote>
<pre><code>def isPrime(number):
for i in range(2,int(number**(0.5))+1):
if number % i == 0:
return False
else:
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
</code></pre>
<blockquote>
<p>Here is the correct code</p>
</blockquote>
<pre><code>def isPrime(number):
if number < 2:
return False
for i in range(2, int(number ** (0.5)) + 1):
if number % i == 0:
return False
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
out[1]: (2, 3, 5, 7, 11, 13, 17, 19, 23)
</code></pre>
| 0 |
2016-09-15T21:23:06Z
| 39,520,755 |
<p>There are a few issues in your code:</p>
<p>1) In <code>isPrime</code>, <code>True</code> is returned on the wrong line</p>
<p>2) You are printing <code>tup</code> outside the function scope</p>
<p>3) You are not handling case of <code>1</code> (in <code>isPrime</code>)</p>
<p>4) You are using <code>tuple</code> to store primes, <code>list</code> is better; it is much more efficient.</p>
<p>5) Use snake case for functions names in Python.</p>
<p>Making the changes:</p>
<pre><code>def is_prime(number):
if number < 2:
return False
for i in range(2,int(number**(0.5))+1):
if number % i == 0:
return False
return True
def all_primes(number):
my_primes = []
for i in range(1,number):
if is_prime(i):
my_primes.append(i)
return my_primes
if __name__ == "__main__":
print all_primes(40)
</code></pre>
| 1 |
2016-09-15T21:38:17Z
|
[
"python",
"tuples",
"primes"
] |
allPrimes printing out a empty tuple Python
| 39,520,586 |
<p>So I have this code and it is supposed to print out a tuple with all the prime numbers in it. But instead, it's just printing out an empty tuple... </p>
<blockquote>
<p>Can anyone tell me why?
I also MUST USE A TUPLE.</p>
</blockquote>
<pre><code>def isPrime(number):
for i in range(2,int(number**(0.5))+1):
if number % i == 0:
return False
else:
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
</code></pre>
<blockquote>
<p>Here is the correct code</p>
</blockquote>
<pre><code>def isPrime(number):
if number < 2:
return False
for i in range(2, int(number ** (0.5)) + 1):
if number % i == 0:
return False
return True
def allPrimes(number):
tup=()
for i in range(1,number):
if isPrime(i) == True:
tup += (i,)
print(tup)
allPrimes(26)
out[1]: (2, 3, 5, 7, 11, 13, 17, 19, 23)
</code></pre>
| 0 |
2016-09-15T21:23:06Z
| 39,520,770 |
<p>Because in the first iteration, if <code>number % i != 0</code> the function decides the number is prime. You have to return <code>True</code> only at the end of the loop, and break out of it if <code>number % i == 0</code>:</p>
<pre><code>def isPrime(number):
res = True
if number < 2:
res = False
else:
for i in range(2,int(number**0.5)+1):
if number % i == 0:
res = False
break
return res
</code></pre>
| 0 |
2016-09-15T21:39:23Z
|
[
"python",
"tuples",
"primes"
] |
Determining where certain text comes from on website
| 39,520,633 |
<p>I'm trying to write a bash script that downloads the Photo of the Day from <a href="http://www.nationalgeographic.com/photography/photo-of-the-day/" rel="nofollow">National Geographic</a>, sets it as the desktop background, and puts the description of the picture found on the page in a text file on the desktop. (I'm aware there are scripts out there that do this, but NG recently changed their POTD page and they no longer work.)</p>
<p>I've gotten the picture to download and become the desktop background, but am stuck as to how to download the image's full description (the one found below the picture on the website, not the shorter version in the metadata in the header). Trouble is, the description doesn't appear in the page that my script downloads with <code>curl</code> (or <code>wget</code> for that matter). It's obviously there when viewed in the browser, though.</p>
<p>Where is the description text coming from if it's not in the html file? How can I download/parse the description, preferably with bash or python?</p>
<p>Thanks for any help.</p>
| 2 |
2016-09-15T21:26:42Z
| 39,520,838 |
<p>Buried within the html for that National Geographic page is the following attribute:</p>
<pre><code>data-platform-endpoint="http://www.nationalgeographic.com/photography/photo-of-the-day/_jcr_content/.gallery.2016-09.json"
</code></pre>
<p>The caption that you seek is in the JSON file that that URL points to. For example, in today's version of that JSON file, we find:</p>
<pre><code>"caption":"<p>A giraffe leads a herd of zebras as the animals stamede from a threat unseen. Your Shot photographer Mohammed AlNaser captured this image in Tanzania\u2019s Serengeti National Park. The zebras \u201cemerged from nowhere,\u201d AlNaser writes. \u201cThey were obviously drinking water and something scared them and created a few seconds of a chaos.\u201d<\/p>\n"
</code></pre>
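<p>Since the question asks for bash or Python, here is a minimal Python sketch of the two parsing steps above, using only the standard library. The <code>data-platform-endpoint</code> attribute and the <code>caption</code> key come from the page as described; the <code>example.com</code> URL and the sample strings are made up for the demo, and a real script would fetch the page and the JSON with <code>urllib2</code> or <code>requests</code> instead.</p>

```python
import json
import re

def find_json_endpoint(html):
    """Return the URL from the data-platform-endpoint attribute, or None."""
    m = re.search(r'data-platform-endpoint="([^"]+)"', html)
    return m.group(1) if m else None

def caption_from_item(raw):
    """Strip the HTML tags from the 'caption' field of one gallery item."""
    item = json.loads(raw)
    return re.sub(r'<[^>]+>', '', item['caption']).strip()

# Demo with inline samples; a real script would download these instead:
page = '<div data-platform-endpoint="http://example.com/gallery.json"></div>'
item = '{"caption": "<p>A giraffe leads a herd of zebras.</p>\\n"}'
print(find_json_endpoint(page))   # http://example.com/gallery.json
print(caption_from_item(item))    # A giraffe leads a herd of zebras.
```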
| 1 |
2016-09-15T21:46:48Z
|
[
"python",
"html",
"bash"
] |
Reading XML on python 2.1
| 39,520,702 |
<p>So, I'm trying to process an XML file in Python.</p>
<p>I'm using minidom as I'm on Python 2.1 and there's no chance of updating to 3.6. Currently, I have this:</p>
<pre><code>import xml.dom.minidom as minidom
import socket
print 'Getting the xml file'
# Get the xml contents
file = open('<filepath>')
#print file
# Get the root of the configuration file
print 'Parsing the xml'
procs = minidom.parse(file)
</code></pre>
<p>But I'm getting this error</p>
<p><a href="http://i.stack.imgur.com/ozT8A.png" rel="nofollow"><img src="http://i.stack.imgur.com/ozT8A.png" alt="Error getting"></a></p>
<p>Any idea? Or, better yet, another way to parse xml without me having to write my own parser...</p>
| 0 |
2016-09-15T21:33:13Z
| 39,537,678 |
<p>So, I was able to get this working.</p>
<p>For starters, after trying to convince them to either update or install a plugin, I was notified that all Python scripts are run on <code>jython</code>, which means I have several Java libraries at my disposal (wish they could have told me this sooner).</p>
<p>So, after some investigation into XML processing on <code>jython</code>, I found out that using <code>Xerces</code> and <code>sax</code> was the key.</p>
<p>This is the code I finally used, if anyone would like to know:</p>
<pre><code>from java.io import StringReader
import org.xml.sax as sax
import org.apache.xerces.parsers.DOMParser as domparser
parser = domparser()
document = open('<path to file>').read()
parser.reset()
documentIS = sax.InputSource(StringReader(document))
parser.parse(documentIS)
domtree = parser.getDocument()
results = domtree.getElementsByTagName('<tag name>')
for ix in range(results.getLength()):
item = results.item(ix).getAttribute("<attribute name>")
</code></pre>
<p>Hope someone else finds this useful.</p>
| 1 |
2016-09-16T18:08:56Z
|
[
"python",
"xml",
"python-2.1"
] |
Using selenium on calendar date picker
| 39,520,708 |
<p>I am trying to pick a date (01/01/2011) from a calendar on this page. <a href="https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx" rel="nofollow">https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx</a></p>
<p>The calendar is on the part of the form that says <code>Date: FROM</code>. When I click it, a calendar pops up for you to pick dates. However, the field also allows you to type in a date. Given the complexity of calendars, I have chosen to use <code>send_keys()</code>, but it is not working.</p>
<p>I have identified the empty date field by its ID, but for some reason it does not fill the form when I try:</p>
<pre><code>driver.find_element_by_id('ctl00_cphMain_SrchDates1_txtFiledFrom').send_keys("01012011")
</code></pre>
<p>Any ideas on how I can maneuver it differently? I'm using Python 2.7 with Selenium and ChromeDriver</p>
| 1 |
2016-09-15T21:33:57Z
| 39,525,969 |
<p>I use the Actions class in Java.
The Java code, which ran successfully, is as below:</p>
<pre><code>package stackoverflow;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import org.testng.annotations.Test;

public class question1 {
    @Test
    public void a() throws InterruptedException {
        System.setProperty("webdriver.chrome.driver", "D:\\Selenium\\CP-SAT\\Chromedriver\\chromedriver.exe");
        WebDriver a = new ChromeDriver();
        a.manage().window().maximize();
        a.get("https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx");
        Thread.sleep(2000L);
        a.findElement(By.id("ctl00_cphMain_blkLogin_btnGuestLogin")).click();
        Thread.sleep(2000L);
        a.findElement(By.id("ctl00_cphMain_SrchDates1_txtFiledFrom")).click();
        Actions b = new Actions(a);
        for (int i = 0; i < 6; i++) {
            for (int j = 0; j < 6; j++) {
                WebElement c = a.findElement(By.xpath("//*[@id='ctl00_cphMain_SrchDates1_ceFiledFrom_day_" + i + "_" + j + "']"));
                Thread.sleep(2000L);
                b.moveToElement(c).build().perform();
            }
        }
    }
}
</code></pre>
</code></pre>
<p>I tried to convert it for you in python, but I'm not sure about the syntax.
Below is the code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

a = webdriver.Chrome()
a.get("https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx")
a.implicitly_wait(3)
a.find_element_by_id("ctl00_cphMain_blkLogin_btnGuestLogin").click()
a.find_element_by_id("ctl00_cphMain_SrchDates1_txtFiledFrom").click()
actions = ActionChains(a)
for i in range(6):
    for j in range(6):
        c = a.find_element_by_xpath(
            "//*[@id='ctl00_cphMain_SrchDates1_ceFiledFrom_day_%d_%d']" % (i, j))
        actions.move_to_element(c).perform()
</code></pre>
<p>Try it at your end and let me know for further issues.
Happy learning :-)</p>
| 0 |
2016-09-16T07:26:58Z
|
[
"python",
"selenium",
"datepicker"
] |
Using selenium on calendar date picker
| 39,520,708 |
<p>I am trying to pick a date (01/01/2011) from a calendar on this page. <a href="https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx" rel="nofollow">https://cotthosting.com/NYRocklandExternal/LandRecords/protected/SrchQuickName.aspx</a></p>
<p>The calendar is on the part of the form that says <code>Date: FROM</code>. When I click it, a calendar pops up for you to pick dates. However, the field also allows you to type in a date. Given the complexity of calendars, I have chosen to use <code>send_keys()</code>, but it is not working.</p>
<p>I have identified the empty date field by its ID, but for some reason it does not fill the form when I try:</p>
<pre><code>driver.find_element_by_id('ctl00_cphMain_SrchDates1_txtFiledFrom').send_keys("01012011")
</code></pre>
<p>Any ideas on how I can maneuver it differently? I'm using Python 2.7 with Selenium and ChromeDriver</p>
| 1 |
2016-09-15T21:33:57Z
| 39,532,746 |
<p>To get this to work, add one extra step of clicking the element before sending the keys:</p>
<pre><code>datefield = driver.find_element_by_id('ctl00_cphMain_SrchDates1_txtFiledFrom')
datefield.click()
datefield.send_keys("01012011")
</code></pre>
<h1>Update:</h1>
<p>It looks like you might have to use <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains" rel="nofollow"><code>ActionChains</code></a> after all in your case, which will allow you to chain a series of actions together, and then perform them one after the other:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Chrome()
driver.get("https://cotthosting.com/NYRocklandExternal/User/Login.aspx")
driver.find_element_by_id('ctl00_cphMain_blkLogin_btnGuestLogin').click()
driver.find_element_by_id('ctl00_cphMain_SrchNames1_txtFirmSurName').send_keys("Adam")
datefield = driver.find_element_by_id('ctl00_cphMain_SrchDates1_txtFiledFrom')
ActionChains(driver).move_to_element(datefield).click().send_keys('01012011').perform()
search_btn = driver.find_element_by_id('ctl00_cphMain_btnSearchAll')
ActionChains(driver).move_to_element(search_btn).click().click().perform()
</code></pre>
<p>I am not sure why two <code>click()</code> calls were necessary in this case, but it seems that they were. I tried a few other things including <code>double_click()</code>, but this was the only thing that worked for me to get the datefield unfocused and then click the search button.</p>
| 0 |
2016-09-16T13:30:10Z
|
[
"python",
"selenium",
"datepicker"
] |
How do I check if a function is being called by a method?
| 39,520,806 |
<p>How do I check if a function b() is being called by a method?</p>
<pre><code>class hello():
    def delete(self):
        b()
</code></pre>
<p>I want to use mock in a unit test.</p>
| 0 |
2016-09-15T21:42:20Z
| 39,521,115 |
<p>If you are just looking for a way to mock objects <a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow">this</a> is probably the best way to do it.</p>
<p>To answer the first part of your question, there is indeed a way to get information about the caller function. <a href="https://stackoverflow.com/questions/2529859/get-parent-function">This</a> might help you</p>
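<p>To make the mock part concrete, here is a self-contained sketch (the layout is an assumption; on Python 2.7 install the standalone <code>mock</code> package and use <code>import mock</code> instead). The key is to patch <code>b</code> in the module where it is <em>looked up</em>, which in this demo is <code>__main__</code>:</p>

```python
from unittest import mock  # Python 2.7: pip install mock, then `import mock`

def b():
    print("the real b")

class hello(object):
    def delete(self):
        b()  # module-level lookup, so patching this module's b intercepts it

# In real tests you would patch 'yourmodule.b'; here the module is __main__:
with mock.patch(__name__ + '.b') as mocked_b:
    hello().delete()

print("b called %d time(s)" % mocked_b.call_count)  # b called 1 time(s)
```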
| 0 |
2016-09-15T22:14:08Z
|
[
"python",
"django",
"python-2.7",
"mocking"
] |
How do I check if a function is being called by a method?
| 39,520,806 |
<p>How do I check if a function b() is being called by a method?</p>
<pre><code>class hello():
    def delete(self):
        b()
</code></pre>
<p>I want to use mock in a unit test.</p>
| 0 |
2016-09-15T21:42:20Z
| 39,524,728 |
<p>Assuming you have a test environment, you can fire up the Python (or IPython) shell and use the debugger to call the function.</p>
<p>If this is your function:</p>
<pre><code>def multiplier(a,b):
print a*b
</code></pre>
<p>then you can do this in the shell and debug the code:</p>
<pre><code>import ipdb
ipdb.runcall(multiplier, 10, 10)
</code></pre>
| 0 |
2016-09-16T06:06:23Z
|
[
"python",
"django",
"python-2.7",
"mocking"
] |
Flask WTF forms adding a glyphicon in a form field
| 39,520,899 |
<p>I have a Flask Wtf form as follows:</p>
<pre><code>class URL_Status(Form):
url = URLField("Enter URL",
validators=[url(), DataRequired()],
render_kw={"placeholder": "http://www.example.com"},)
submit = SubmitField('Search', render_kw={"onclick": "loading()"})
</code></pre>
<p>Now I would like to add a Bootstrap glyphicon in the input field, i.e. 'url'. As far as I know from this <a href="http://stackoverflow.com/questions/18838964/add-bootstrap-glyphicon-to-input-box">link</a>, we need to write the code as follows:</p>
<pre><code><div class="form-group has-feedback">
<label class="control-label">Username</label>
<input type="text" class="form-control" placeholder="Username" />
<i class="glyphicon glyphicon-user form-control-feedback"></i>
</div>
</code></pre>
<p>When I render the form in HTML, it creates all the code except this line:</p>
<pre><code><i class="glyphicon glyphicon-user form-control-feedback"></i>
</code></pre>
<p>Any suggestions on how I can add this line from my class 'URL_Status' so that I can see a glyphicon with my input field? Thanks</p>
| 0 |
2016-09-15T21:51:21Z
| 39,521,319 |
<p>I haven't used wtforms in a while. I think you need a custom widget:</p>
<pre><code>class CustomURLInput(URLInput):
    def __call__(self, field, **kwargs):
        ...
</code></pre>
<p>Take a look at this <a href="https://github.com/wtforms/wtforms/blob/9be964158fbcd1af52b345451bbd14751127dd37/wtforms/widgets/core.py#L159" rel="nofollow">https://github.com/wtforms/wtforms/blob/9be964158fbcd1af52b345451bbd14751127dd37/wtforms/widgets/core.py#L159</a>
for details.</p>
<p>and your url field:</p>
<pre><code>url = URLField(
"Enter URL",
validators=[url(), DataRequired()],
render_kw={"placeholder": "http://www.example.com"},
    widget=CustomURLInput()
)
</code></pre>
<p>Or you can do it in your template.</p>
| 1 |
2016-09-15T22:34:45Z
|
[
"python",
"html",
"twitter-bootstrap",
"flask",
"wtforms"
] |
Flask WTF forms adding a glyphicon in a form field
| 39,520,899 |
<p>I have a Flask Wtf form as follows:</p>
<pre><code>class URL_Status(Form):
url = URLField("Enter URL",
validators=[url(), DataRequired()],
render_kw={"placeholder": "http://www.example.com"},)
submit = SubmitField('Search', render_kw={"onclick": "loading()"})
</code></pre>
<p>Now I would like to add a Bootstrap glyphicon in the input field, i.e. 'url'. As far as I know from this <a href="http://stackoverflow.com/questions/18838964/add-bootstrap-glyphicon-to-input-box">link</a>, we need to write the code as follows:</p>
<pre><code><div class="form-group has-feedback">
<label class="control-label">Username</label>
<input type="text" class="form-control" placeholder="Username" />
<i class="glyphicon glyphicon-user form-control-feedback"></i>
</div>
</code></pre>
<p>When I render the form in HTML, it creates all the code except this line:</p>
<pre><code><i class="glyphicon glyphicon-user form-control-feedback"></i>
</code></pre>
<p>Any suggestions on how I can add this line from my class 'URL_Status' so that I can see a glyphicon with my input field? Thanks</p>
| 0 |
2016-09-15T21:51:21Z
| 39,558,171 |
<p>Here is a little trick to solve this problem.</p>
<p>CSS:</p>
<pre><code>.user {
padding-left:30px;
background-repeat: no-repeat;
background-position-x: 4px;
background-position-y: 4px;
background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABcAAAAWCAYAAAArdgcFAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAA+5pVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtbG5zOmRjPSJodHRwOi8vcHVybC5vcmcvZGMvZWxlbWVudHMvMS4xLyIgeG1wTU06T3JpZ2luYWxEb2N1bWVudElEPSJ1dWlkOjY1RTYzOTA2ODZDRjExREJBNkUyRDg4N0NFQUNCNDA3IiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOkIzOUVGMUYxMDY3MTExRTI5OUZEQTZGODg4RDc1ODdCIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOkIzOUVGMUYwMDY3MTExRTI5OUZEQTZGODg4RDc1ODdCIiB4bXA6Q3JlYXRvclRvb2w9IkFkb2JlIFBob3Rvc2hvcCBDUzYgKE1hY2ludG9zaCkiPiA8eG1wTU06RGVyaXZlZEZyb20gc3RSZWY6aW5zdGFuY2VJRD0ieG1wLmlpZDowMTgwMTE3NDA3MjA2ODExODA4M0ZFMkJBM0M1RUU2NSIgc3RSZWY6ZG9jdW1lbnRJRD0ieG1wLmRpZDowNjgwMTE3NDA3MjA2ODExODA4M0U3NkRBMDNEMDVDMSIvPiA8ZGM6dGl0bGU+IDxyZGY6QWx0PiA8cmRmOmxpIHhtbDpsYW5nPSJ4LWRlZmF1bHQiPmdseXBoaWNvbnM8L3JkZjpsaT4gPC9yZGY6QWx0PiA8L2RjOnRpdGxlPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/PkX/peQAAACrSURBVHja7JSLCYAwDEQbJ3AER+kouoFu0FEcqSM4gk4QE4ggVRPxg1A8OFCSvkqC5xDRaSZ5ciTjyvzuzbMnwKjY34FHAx618yCQXQHAcVFE5+GoVijgyt3UN1/+hPKFd0a9ubxQa6naMjOdOY2jJAdjZIH7tJ8gzRNuZuho5MriUfpLNbhINXk4Cd27pN3AJVqvQlMPSxSz+oegqXuQhz9bNvDpJfY0CzAA6Ncngv5RALIAAAAASUVORK5CYII=);}
</code></pre>
<p>Template:</p>
<pre><code><div class="col-md-4">
<form class="form form-horizontal" method="POST">
{{ form.hidden_tag() }}
{{ wtf.form_field(form.username, class="form-control user") }}
{{ wtf.form_field(form.password) }}
{{ wtf.form_field(form.submit) }}
</form>
</div>
</code></pre>
<p>This solution is based on <a href="http://stackoverflow.com/a/29705837/5511849">this answer</a>. It just embeds the image representation of the glyphicon directly in the CSS using base64 URI encoding.</p>
<p>You can get the base64 data for a glyphicon on this <a href="http://andrew.hedges.name/experiments/glyphicons/" rel="nofollow">site</a>. Alternatively, you can use an image (25x25) in place of the base64 data, like this:</p>
<pre><code>background-image: url({{ url_for('static', filename='user.png') }});
</code></pre>
| 1 |
2016-09-18T13:31:24Z
|
[
"python",
"html",
"twitter-bootstrap",
"flask",
"wtforms"
] |
"Cannot deserialize instance of string from START_ARRAY value" Salesforce API issue
| 39,520,902 |
<p>Trying to create a SF Contact with values from an .xlsx sheet.</p>
<p>I can create a contact if I manually type in a fake email address, last name and first name, but cannot set it to a value I have defined from an xlsx sheet.
The print commands are working fine and reading the appropriate data I want them to read.</p>
<p>Only been doing Python for 2 weeks now and have already been able to read, write and save data to/from MySQLdb without issue but now running into this issue and not finding much info on this specifically with SalesForce. Any help would be greatly appreciated. </p>
<p>So the full error is:</p>
<p>File "C:\Python27\lib\site-packages\simple_salesforce-0.70-py2.7.egg\simple_salesforce\api.py", line 749, in _exception_handler
raise exc_cls(result.url, result.status_code, name, response_content)
simple_salesforce.api.SalesforceMalformedRequest: Malformed request <a href="https://na48.salesforce.com/services/data/v37.0/sobjects/Contact/" rel="nofollow">https://na48.salesforce.com/services/data/v37.0/sobjects/Contact/</a>. Response content: [{u'errorCode': u'JSON_PARSER_ERROR', u'message': u'Cannot deserialize instance of string from START_ARRAY value [line:1, column:2]'}]</p>
<pre><code>Email = sheet.col_values(1, 1)
Last = sheet.col_values(2, 1)
First = sheet.col_values(3, 1)
print Email
print Last
print First
sf.Contact.create({'LastName' : Last,'FirstName' : First,'Email' : Email})
</code></pre>
<p>Okay, the error is fixed, but it only creates one contact/case on Salesforce (the last row in the xlsx sheet) rather than one for each row. It reads everything correctly for the most part and does in fact create a contact the correct way, but only for the last row.</p>
<p>Current Code:</p>
<pre><code>for c in range(sheet.ncols):
for r in range(sheet.nrows):
Email = sheet.col_values(1,r)[0]
print Email
Last = sheet.col_values(2,r)[0]
print Last
First = sheet.col_values(3,r)[0]
print First
Phone = sheet.col_values(4,r)[0]
print Phone
Street = sheet.col_values(5,r)[0]
print Street
City = sheet.col_values(6,r)[0]
print City
Postal = sheet.col_values(7,r)[0]
print Postal
Product = sheet.col_values(8,r)[0]
print Product
Store = sheet.col_values(9,r)[0]
print Store
SN = sheet.col_values(10,r)[0]
print SN
Name = sheet.col_values(3,r)[0]+sheet.col_values(2,r)[0]
sf.Contact.create({'FirstName' : First, 'LastName' : Last, 'Email' : Email, 'Phone' : Phone, 'MailingStreet' : Street, 'MailingCity' : City, 'MailingPostalCode' : Postal})
</code></pre>
| 0 |
2016-09-15T21:51:39Z
| 39,522,501 |
<p>The error message from the server says </p>
<blockquote>
<p>Cannot deserialize instance of string from START_ARRAY value [line:1,
column:2]</p>
</blockquote>
<p>meaning that the server is expecting a field value to be a string, but the request has an array instead.</p>
<p>Therefore guessing that sheet.col_values() returns an array, you'd want to change it to</p>
<pre><code>Email = sheet.col_values(1, 1)[0]
Last = sheet.col_values(2, 1)[0]
First = sheet.col_values(3, 1)[0]
</code></pre>
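<p>To see what the server was complaining about, here is a small stdlib-only illustration (the field values are made up):</p>

```python
import json

# col_values() returns a list, so each field serializes as a JSON array:
record = {'LastName': ['Smith'], 'FirstName': ['Jane'], 'Email': ['[email protected]']}
print(json.dumps(record, sort_keys=True))
# values like ["Smith"] are what trigger "Cannot deserialize instance of
# string from START_ARRAY" on the server

fixed = dict((k, v[0]) for k, v in record.items())
print(json.dumps(fixed, sort_keys=True))  # plain strings are accepted
```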
<p>Updated for the 2nd issue:
Indentation is significant in Python; your create call only happens once because it's outside the loop. You need to move it inside the loop, e.g.:</p>
<pre><code># The outer loop over columns was redundant (each row is one contact),
# and row 0 holds the headers, so start at row 1:
for r in range(1, sheet.nrows):
    Email = sheet.col_values(1,r)[0]
    print Email
    Last = sheet.col_values(2,r)[0]
    print Last
    First = sheet.col_values(3,r)[0]
    print First
    Phone = sheet.col_values(4,r)[0]
    print Phone
    Street = sheet.col_values(5,r)[0]
    print Street
    City = sheet.col_values(6,r)[0]
    print City
    Postal = sheet.col_values(7,r)[0]
    print Postal
    Product = sheet.col_values(8,r)[0]
    print Product
    Store = sheet.col_values(9,r)[0]
    print Store
    SN = sheet.col_values(10,r)[0]
    print SN
    Name = First + Last
    sf.Contact.create({'FirstName' : First, 'LastName' : Last, 'Email' : Email, 'Phone' : Phone, 'MailingStreet' : Street, 'MailingCity' : City, 'MailingPostalCode' : Postal})
</code></pre>
| 0 |
2016-09-16T01:16:36Z
|
[
"python",
"excel",
"python-2.7",
"salesforce",
"xlrd"
] |
How to disable uploading a package to PyPi unless --public is passed to the upload command
| 39,520,987 |
<p>I'm developing packages and uploading development/testing/etc versions of my packages to a local devpi server. </p>
<p>In order to prevent an accidental upload to PyPi, I've adopted the common practice of:</p>
<pre><code>setup(...,
classifiers=[
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Private :: Do not Upload"
],
...)
</code></pre>
<p>which works great, but what about when I'm finally ready to upload the package to PyPi?</p>
<p>I've come up with a totally ugly, but simple hack which requires that I define the classifiers as a global variable outside of the setup() call which looks like:</p>
<pre><code>CLASSIFIERS = [
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7"
]
if "--public" not in sys.argv:
CLASSIFIERS.append("Private :: Do Not Upload")
else:
sys.argv.remove("--public")
setup(...
classifiers=CLASSIFIERS,
...)
</code></pre>
<p>Another, and perhaps simpler option is to merely comment out the "Private :: Do not Upload", but that doesn't seem any more professional than my hack.</p>
<p>What I'd <em>like</em> to do is create a proper subclass of the upload command called <code>SafeUpload</code> and have it check for the <code>--public</code> cmd-line option. Perhaps, as a build may exist prior to uploading, <code>SafeBuild</code> might be a better option.</p>
<p>Unfortunately, I'm having trouble understanding the setuptools documentation on creating custom commands.</p>
<p>Does anyone have any idea how to implement this? It's not clear to me if a custom command has access to the parameters passed to <code>setup()</code>, i.e. could it directly manipulate the <code>classifiers</code> passed to <code>setup()</code>, or would it require that a user of the command follow the convention of defining CLASSIFIERS as a global variable (<em>yuck</em>)?</p>
<p>Any help would be appreciated,</p>
<p>Scott</p>
| 1 |
2016-09-15T22:01:05Z
| 39,522,563 |
<p>Going backwards through your questions: while the topic is really broad, it's still constrained enough.</p>
<p>I can tell you that the classifiers are not manipulated, but rather read from the distribution metadata and then written to the <code>PKG-INFO</code> file by the <code>egg_info</code> command, which in turn looks for all <code>egg_info.writers</code> entry_points; the <a href="https://github.com/pypa/setuptools/blob/master/setuptools/command/egg_info.py#L394" rel="nofollow"><code>setuptools.command.egg_info:write_pkg_info</code></a> function does the actual writing. As far as I can tell, trying to leverage that classifier from the outside will not be a great way to go; however, you can override <em>everything</em> and <em>anything</em> you want through <code>setuptools</code>, so you can make your own <code>write_pkg_info</code> function, figure out how to <a href="https://docs.python.org/3/distutils/examples.html#reading-the-metadata" rel="nofollow">read the metadata</a> (which you can see in the main <a href="https://github.com/python/cpython/blob/master/Lib/distutils/command/upload.py#L116" rel="nofollow"><code>distutils.command.upload:upload.upload_file</code></a> method) and manipulate it further before <code>upload_file</code> finally reads it. At this point you are probably thinking that manipulating and working with this system is going to be rather annoying.</p>
<p>As I mentioned though, everything can be overridden. You can make an upload command that take the public flag, like so:</p>
<pre><code>from distutils.log import warn
from distutils.command.upload import upload as orig
# alternatively, for later versions of setuptools:
# from setuptools.command.upload import upload as orig
class upload(orig):
description = "customized upload command"
user_options = orig.user_options + [
('public', None,
'make package public on pypi'),
]
def initialize_options(self):
orig.initialize_options(self)
self.public = False
def run(self):
if not self.public:
warn('not public, not uploading')
return
return orig.run(self)
</code></pre>
<p>The accompanying <code>setup.py</code> might look something like this.</p>
<pre><code>from setuptools import setup
setup(
name='my_pypi_uploader',
version='0.0',
description='"safer" pypi uploader',
    py_modules=['my_pypi_uploader'], # assuming above file is my_pypi_uploader.py
entry_points={
'distutils.commands': [
'upload = my_pypi_uploader:upload',
],
},
)
</code></pre>
<p>Install that as a package into your environment and the upload command will be replaced by your version. Example run:</p>
<pre><code>$ python setup.py upload
running upload
not public, not uploading
</code></pre>
<p>Try again with the public flag</p>
<pre><code>$ python setup.py upload --public
running upload
error: No dist file created in earlier command
</code></pre>
<p>Which is fine, since I didn't create any dist files at all. You could of course further extend that command by rewriting the <a href="https://github.com/python/cpython/blob/master/Lib/distutils/command/upload.py#L116" rel="nofollow"><code>upload_file</code></a> method (make a copy in your code) and change the parts to do what you want in your subclass (like injecting the private classifier there), up to you.</p>
<p>You might also be wondering why the class names are in lower case (a violation of PEP 8); this is due to legacy reasons and how the help for a given command is generated.</p>
<pre><code>$ python setup.py upload --help
...
Options for 'upload' command:
</code></pre>
<p>Using a "properly" named class (e.g. <code>SafeUpload</code>; remember to also update the <code>entry_point</code> in the <code>setup.py</code> to point to this new class name)</p>
<pre><code>$ python setup.py upload --help
...
Options for 'SafeUpload' command:
</code></pre>
<p>Of course, if this output is the intent, the standard class naming convention can be used instead.</p>
<p>Though to be perfectly honest, you should not run upload by hand at all in production, but rather do this on your build servers as part of a post-push hook, so that when the project is pushed (or tagged), the build is done and the file is loaded onto your private servers, and then only further manual intervention (or automation, if specific tags are pushed) will get the package up to PyPI. However, the above example should get you started on what you originally set out to do.</p>
<p>One last thing: you <em>can</em> just change <code>self.repository</code> to your private devpi location if the <code>--public</code> flag is not set. You could either override this before calling the <code>orig.upload_file</code> method (through your customized version), or do it in <code>run</code>; so rather than quitting, your code could just verify that the repository url is not the public PyPI instance. Or alternatively, manipulate the distribution metadata (i.e. the classifiers) via <code>self.distribution.metadata</code> (<code>self</code> being the <code>upload</code> instance). You can of course create a completely new command to play with this to your heart's content (by creating a new <code>Command</code> subclass and adding a new entry_point for it).</p>
| 1 |
2016-09-16T01:25:54Z
|
[
"python",
"setuptools",
"pypi",
"devpi"
] |
Python SQLITE substituted query returns nothing from database
| 39,521,186 |
<p>I am trying to execute the following query:</p>
<pre><code>sqlite> select * from history where timeStamp >= "2016-09-15 13:05:00" and timeStamp < "2016-09-15 13:06:00";
timeStamp isOpen
------------------- ----------
2016-09-15 13:05:04 0
2016-09-15 13:05:09 0
2016-09-15 13:05:14 1
2016-09-15 13:05:19 1
2016-09-15 13:05:24 1
2016-09-15 13:05:29 1
2016-09-15 13:05:34 1
2016-09-15 13:05:39 0
2016-09-15 13:05:44 1
2016-09-15 13:05:49 1
2016-09-15 13:05:54 1
2016-09-15 13:05:59 0
</code></pre>
<p>From Postman I run: <code>{{origin}}:{{port}}/logs?from=201609151305&to=201609151306</code></p>
<p>In Python, I translate those values to: <code>2016/09/15 13:05:00</code> and <code>
2016/09/15 13:06:00</code>, which are passed to my helper method:</p>
<pre><code>vals = (lowStr, upperStr)
query = 'select * from history where timeStamp >= ? and timeStamp < ?'
returnList = accessDB('SELECT',query,vals)
</code></pre>
<p>AccessDB then does the following:</p>
<pre><code>def accessDB(operation, query, vals):
con = None
try:
con = sqlite3.connect('logs.db')
cur = con.cursor()
cur.execute(query, vals)
if operation == 'SELECT':
return cur.fetchall()
if operation == 'INSERT':
con.commit()
except sqlite3.Error as e:
print("Error %s:" % e.args[0])
sys.exit(1)
finally:
if con:
con.close()
</code></pre>
<p>Nothing is being returned in the results, however. The return list is empty. What am I doing wrong?</p>
| 0 |
2016-09-15T22:21:30Z
| 39,523,175 |
<p>In your example SQL, you are using datetimes with the format</p>
<pre><code>YYYY-MM-DD HH:MM:SS
</code></pre>
<p>In your Python example, however, you are using datetimes with the format</p>
<pre><code>YYYY/MM/DD HH:MM:SS
</code></pre>
<p>SQLite isn't seeing this as a valid date format. Change your <code>/</code>s to <code>-</code>s (how to do that depends on how you're doing the formatting).</p>
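<p>For example, if the strings are built with <code>strftime</code>, swapping the separator in the format string is enough (the timestamp below is illustrative):</p>

```python
import datetime

dt = datetime.datetime(2016, 9, 15, 13, 5, 0)
print(dt.strftime('%Y/%m/%d %H:%M:%S'))  # 2016/09/15 13:05:00 -- slashes: not a valid SQLite datetime
print(dt.strftime('%Y-%m-%d %H:%M:%S'))  # 2016-09-15 13:05:00 -- matches the stored timeStamp format
```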
| 1 |
2016-09-16T03:03:39Z
|
[
"python",
"sqlite",
"select",
"flask",
"sqlite3"
] |
Importing C file with python bindings in python file
| 39,521,205 |
<p>There is a great program written in C, that also contains a conversion to python using python bindings. However I would like to extend the file with bindings to give it more functionality.</p>
<p>When I try to change 'import bounded_file' to 'import my_bounded_file' in the python file the bindings are used in, I get a file not found error. So my question is how do I import a .c file into a .py program if the .c file contains python bindings?</p>
<p>Thanks! </p>
| -1 |
2016-09-15T22:23:55Z
| 39,521,968 |
<p>The C file needs to be compiled to a shared library. Then, you import this library.</p>
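<p>As a sketch of that build step (using the <code>my_bounded_file</code> name from the question; the <code>.c</code> filename is an assumption), a minimal <code>setup.py</code> with distutils might look like this:</p>

```python
# setup.py -- compiles the C file with Python bindings into an importable
# extension module; adjust the source filename to match your binding file.
from distutils.core import setup, Extension

setup(name='my_bounded_file',
      ext_modules=[Extension('my_bounded_file',
                             sources=['my_bounded_file.c'])])
```

<p>Running <code>python setup.py build_ext --inplace</code> then produces a shared library (e.g. <code>my_bounded_file.so</code> on Linux) next to the sources, after which <code>import my_bounded_file</code> works.</p>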
| 0 |
2016-09-15T23:55:02Z
|
[
"python",
"c",
"import",
"binding"
] |
Python-Indexing- I do not understand how this bit of code is working
| 39,521,213 |
<p>What I do not understand is how indexing a string s = "azcbobobegghakl" would correctly give me the answer 'beggh' if I was looking for the longest substring in alphabetical order. How does this code tell the computer that 'beggh' is in alphabetical order? I thought indexing just gave each letter in the string a number that increased by one until the string ended? I tweaked the code a bit and it worked, but I do not actually understand how the code works in the first place. </p>
<pre><code>s = "azcbobobegghakl"
longest = s[0]
current = s[0]
for c in s[1:]:
if c >= current[-1]:
current += c
else:
if len(current) > len(longest):
longest = current
current = c
print "Longest substring in alphabetical order is:", longest
</code></pre>
| 0 |
2016-09-15T22:25:03Z
| 39,521,261 |
<p><code>>=</code> compares two characters by their lexicographical order; the expression</p>
<pre><code>if c >= current[-1]:
</code></pre>
<p>tests if <code>c</code> (a single character) is the same as or 'greater' than <code>current[-1]</code> (another single character). Strings are ordered lexicographically by their codepoint value; <code>a</code> is smaller than <code>b</code> because the character <code>a</code> comes before <code>b</code> in the Unicode standard:</p>
<pre><code>>>> 'a' > 'b'
False
>>> 'a' < 'b'
True
</code></pre>
<p>As such, characters are only added to <code>current</code> if they come 'later' in the alphabet than the last character of <code>current</code>. If they do not, then the sequence so far is tested for length, and a <em>new</em> substring is started from <code>c</code>.</p>
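<p>One subtlety worth noting: the code in the question never compares the final run against <code>longest</code> after the loop ends; it happens to work for this particular string only because the last run ("akl") is short. A version with that final check added:</p>

```python
s = "azcbobobegghakl"
longest = current = s[0]
for c in s[1:]:
    if c >= current[-1]:               # still non-decreasing: extend the run
        current += c
    else:                              # order broke: save if longest, restart
        if len(current) > len(longest):
            longest = current
        current = c
if len(current) > len(longest):        # also consider the final run
    longest = current
print("Longest substring in alphabetical order is:", longest)  # -> beggh
```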
| 1 |
2016-09-15T22:29:11Z
|
[
"python",
"python-3.x"
] |
Python-Indexing- I do not understand how this bit of code is working
| 39,521,213 |
<p>What I do not understand is how indexing a string s = "azcbobobegghakl" would correctly give me the answer 'beggh' if I was looking for the longest substring in alphabetical order. How does this code tell the computer that 'beggh' is in alphabetical order? I thought indexing just gave each letter in the string a number that increased by one until the string ended? I tweaked the code a bit and it worked, but I do not actually understand how the code works in the first place. </p>
<pre><code>s = "azcbobobegghakl"
longest = s[0]
current = s[0]
for c in s[1:]:
if c >= current[-1]:
current += c
else:
if len(current) > len(longest):
longest = current
current = c
print "Longest substring in alphabetical order is:", longest
</code></pre>
| 0 |
2016-09-15T22:25:03Z
| 39,521,274 |
<p>The expression <code>c >= current[-1]</code> lexicographically compares each character with its predecessor in the string. More simply:</p>
<pre><code>>>> 'a' >= 'a'
True
>>> 'b' >= 'a'
True
>>> 'b' >= 'c'
False
>>> 'z' >= 'a'
True
</code></pre>
<p>So all the code is doing is to accumulate runs of characters that have each character <code>>=</code> the last one, and to keep the longest one seen.</p>
| 1 |
2016-09-15T22:30:41Z
|
[
"python",
"python-3.x"
] |
Python-Indexing- I do not understand how this bit of code is working
| 39,521,213 |
<p>What I do not understand is how indexing a string s = "azcbobobegghakl" would correctly give me the answer 'beggh' if I was looking for the longest substring in alphabetical order. How does this code tell the computer that 'beggh' is in alphabetical order? I thought indexing just gave each letter in the string a number that increased by one until the string ended? I tweaked the code a bit and it worked, but I do not actually understand how the code works in the first place. </p>
<pre><code>s = "azcbobobegghakl"
longest = s[0]
current = s[0]
for c in s[1:]:
if c >= current[-1]:
current += c
else:
if len(current) > len(longest):
longest = current
current = c
print "Longest substring in alphabetical order is:", longest
</code></pre>
| 0 |
2016-09-15T22:25:03Z
| 39,521,320 |
<pre><code>s = "azcbobobegghakl"
# initialize vars to first letter in string
longest = s[0]
current = s[0]
# start iterating through rest of string
for c in s[1:]:
# c is the letter of current iteration and current[-1]
# is the last letter of the current string. Every character
# has an ASCII value so this is checking the character's
# numerical values to see if they are in alphabetical order (a<b<c etc.)
if c >= current[-1]:
# so if c is bigger than current[-1] (meaning it comes
# after in the alphabet, then we want to add it to current
current += c
else:
# if c was smaller then we want to check if the alphabetic
# string we just read is longer than the previous longest
if len(current) > len(longest):
longest = current
# start the current string over beginning with the letter in c
current = c
print "Longest substring in alphabetical order is:", longest
</code></pre>
| 0 |
2016-09-15T22:34:53Z
|
[
"python",
"python-3.x"
] |
pyspark error: AttributeError: 'SparkSession' object has no attribute 'parallelize'
| 39,521,341 |
<p>I am using pyspark on Jupyter notebook. Here is how Spark setup:</p>
<pre><code>import findspark
findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')
import pyspark
from pyspark.sql import *
sc = pyspark.sql.SparkSession.builder.master("yarn-client").config("spark.executor.memory", "2g").config('spark.driver.memory', '1g').config('spark.driver.cores', '4').enableHiveSupport().getOrCreate()
sqlContext = SQLContext(sc)
</code></pre>
<p>Then when I do:</p>
<pre><code>spark_df = sqlContext.createDataFrame(df_in)
</code></pre>
<p>where <code>df_in</code> is a pandas dataframe. I then got the following errors:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-1db231ce21c9> in <module>()
----> 1 spark_df = sqlContext.createDataFrame(df_in)
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
297 Py4JJavaError: ...
298 """
--> 299 return self.sparkSession.createDataFrame(data, schema, samplingRatio)
300
301 @since(1.3)
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in createDataFrame(self, data, schema, samplingRatio)
520 rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
521 else:
--> 522 rdd, schema = self._createFromLocal(map(prepare, data), schema)
523 jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
524 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in _createFromLocal(self, data, schema)
400 # convert python objects to sql data
401 data = [schema.toInternal(row) for row in data]
--> 402 return self._sc.parallelize(data), schema
403
404 @since(2.0)
AttributeError: 'SparkSession' object has no attribute 'parallelize'
</code></pre>
<p>Does anyone know what I did wrong? Thanks!</p>
| 1 |
2016-09-15T22:36:40Z
| 39,521,419 |
<p><code>SparkSession</code> is not a replacement for a <code>SparkContext</code> but an equivalent of the <code>SQLContext</code>. Just use it the same way you used to use <code>SQLContext</code>:</p>
<pre><code>spark.createDataFrame(...)
</code></pre>
<p>and if you ever have to access <code>SparkContext</code> use <code>sparkContext</code> attribute:</p>
<pre><code>spark.sparkContext
</code></pre>
<p>so if you need <code>SQLContext</code> for backwards compatibility you can:</p>
<pre><code>SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)
</code></pre>
| 1 |
2016-09-15T22:44:27Z
|
[
"python",
"hadoop",
"pandas",
"apache-spark",
"pyspark"
] |
Error installing python-docx
| 39,521,451 |
<p>Trying to install python-docx through pip for 'Learn to Automate the Boring Things with Python'. I am getting errors like <a href="http://imgur.com/a/el2oY" rel="nofollow">this</a>. </p>
<p>I have Googled up some solutions to this issue, but they don't seem to work for me, or I am not deploying the solution correctly. </p>
<p>One post on Stackoverflow said to download an lxml file made available by <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml" rel="nofollow">Christoph Golke</a>. </p>
<p>I downloaded, and then tried 'pip install lxml', and basically got the same error message as the screenshot, telling me 'Unable to find vcvarsall.bat'.</p>
<p>Am I supposed to put this file in a certain directory, before executing that command? Any help would be appreciated. </p>
| 1 |
2016-09-15T22:48:10Z
| 39,521,575 |
<p>This means that the C++ Common Tools are not installed.<br>
To install them for Python 2.7, go to <a href="https://www.microsoft.com/en-us/download/details.aspx?id=44266" rel="nofollow">Microsoft Visual C++ Compiler for Python 2.7</a>.<br>
For Python 3, install <code>Visual Studio Community 2015</code> and execute the following command: </p>
<pre><code>SET VS90COMNTOOLS=%VS140COMNTOOLS%
</code></pre>
| 1 |
2016-09-15T23:01:54Z
|
[
"python",
"python-2.7",
"python-3.x",
"pip"
] |
Error installing python-docx
| 39,521,451 |
<p>Trying to install python-docx through pip for 'Learn to Automate the Boring Things with Python'. I am getting errors like <a href="http://imgur.com/a/el2oY" rel="nofollow">this</a>. </p>
<p>I have Googled up some solutions to this issue, but they don't seem to work for me, or I am not deploying the solution correctly. </p>
<p>One post on Stackoverflow said to download an lxml file made available by <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml" rel="nofollow">Christoph Golke</a>. </p>
<p>I downloaded, and then tried 'pip install lxml', and basically got the same error message as the screenshot, telling me 'Unable to find vcvarsall.bat'.</p>
<p>Am I supposed to put this file in a certain directory, before executing that command? Any help would be appreciated. </p>
| 1 |
2016-09-15T22:48:10Z
| 39,555,322 |
<p>So I found the answer to this issue.</p>
<p>So I wasn't aware that to install the lxml file, you first need to change to the directory of that file, and type in the <strong>complete name</strong> of that file. Either that, or typing the path of the lxml file directly into the cmd prompt, like:</p>
<pre><code>pip install C:\Users\yourName\Downloads\lxml-3.6.4-cp35-cp35m-win32
</code></pre>
<p>or </p>
<pre><code>cd C:\Users\yourName\Downloads
pip install lxml-3.6.4-cp35-cp35m-win32
</code></pre>
<p>which successfully installed the lxml file, which then led to a successful installation of the python-docx file. </p>
<p>Essentially, a basic knowledge of the command prompt would've helped me avoid this problem...but hope this helps for anyone else who doesn't know what to do!</p>
| 0 |
2016-09-18T07:48:50Z
|
[
"python",
"python-2.7",
"python-3.x",
"pip"
] |
How can I average ACROSS groups in python-pandas?
| 39,521,461 |
<p>I have a dataset like this:</p>
<pre><code>Participant Type Rating
1 A 6
1 A 5
1 B 4
1 B 3
2 A 9
2 A 8
2 B 7
2 B 6
</code></pre>
<p>I want obtain this:</p>
<pre><code>Type MeanRating
A mean(6,9)
A mean(5,8)
B mean(4,7)
B mean(3,6)
</code></pre>
<p>So, for each type, I want the mean of the highest value in each group, then the mean of the second-highest value in each group, etc.</p>
<p>I can't think up a proper way to do this with python pandas, since the means seem to apply always within groups, but not across them.</p>
| 0 |
2016-09-15T22:49:15Z
| 39,521,666 |
<p>First use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.rank.html" rel="nofollow"><code>groupby.rank</code></a> to create a column that allows you to align the highest values, second highest values, etc. Then perform another <code>groupby</code> using the newly created column to compute the means:</p>
<pre><code># Get the grouping column.
df['Grouper'] = df.groupby(['Type', 'Participant']).rank(method='first', ascending=False)
# Perform the groupby and format the result.
result = df.groupby(['Type', 'Grouper'])['Rating'].mean().rename('MeanRating')
result = result.reset_index(level=1, drop=True).reset_index()
</code></pre>
<p>The resulting output:</p>
<pre><code> Type MeanRating
0 A 7.5
1 A 6.5
2 B 5.5
3 B 4.5
</code></pre>
<p>I used the <code>method='first'</code> parameter of <code>groupby.rank</code> to handle the case of duplicate ratings within a <code>['Type', 'Participant']</code> group. You can omit it if this is not a possibility within your dataset, but it won't change the output if you leave it and there are no duplicates.</p>
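<p>For completeness, a similar alignment can be built without <code>rank</code> by sorting and numbering rows with <code>cumcount</code> (a sketch assuming a reasonably recent pandas; the sample frame mirrors the question's data):</p>

```python
import pandas as pd

# Sample data mirroring the question
df = pd.DataFrame({
    "Participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "Type": list("AABBAABB"),
    "Rating": [6, 5, 4, 3, 9, 8, 7, 6],
})

# Sort so the highest rating comes first, then number rows within each
# (Type, Participant) group: 0 = highest, 1 = second highest, ...
df = df.sort_values("Rating", ascending=False)
df["Grouper"] = df.groupby(["Type", "Participant"]).cumcount()

# Average across participants for each (Type, rank) pair
result = df.groupby(["Type", "Grouper"])["Rating"].mean()
print(result)
```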
| 6 |
2016-09-15T23:12:19Z
|
[
"python",
"pandas"
] |
Limit program to send only one email
| 39,521,551 |
<p>I have a program to control lots of the things about my hen house including opening and closing the door at set times. Occasionally something happens and the door doesn't open or close and I have now got it to send an email when this happens. The problem is it will send 6 or more emails and I have been trying to work out how to limit it to send only one which is possible using while or if - but then I need to re-set it so that if happens again on another day it will send another email. This is the loop that I have</p>
<pre><code>def loop():
# retrieve current datetime
now = datetime.time(datetime.datetime.now().hour, datetime.datetime.now().minute)
# Open door on all days at the correct time
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute)):
if (gpio.digitalRead(17) == 0):
openplay()
# Close door on all days at the correct time
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute)):
if (gpio.digitalRead(22) == 1):
closeplay()
# check if door is open, 2 minutes after set time
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute + 120) and (now.second == 0) and (gpio.digitalRead(25) == 0)):
# send email
sendemail()
# check if door is closed, 2 minutes after set time
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute + 120) and (now.second == 0) and (gpio.digitalRead(25) == 1)):
# send email
sendemail()
# gives CPU some time before looping again
webiopi.sleep(1)
</code></pre>
<p>This is just a hobby and I put together things from mostly searching but can't crack this so would appreciate any help with it</p>
| 1 |
2016-09-15T22:58:23Z
| 39,521,779 |
<p>Assuming <code>sendemail()</code> is a function you defined, you could rewrite it to store the time it last sent an email, and if it hasn't been long enough, don't send it.</p>
<p>The <code>now.minute == HOUR_ON.minute + 120</code> doesn't make sense to me: the comment says 2 minutes, but you are adding 120, and since <code>now.minute</code> is at most 59 that condition can never be true.</p>
<p>Another possibility: do you have multiple instances of this python program running at the same time? <code>pgrep python3</code> to see how many instances of python are running.</p>
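<p>A sketch of that first suggestion: keep a module-level timestamp and refuse to resend within a chosen window. The interval and the commented-out <code>sendemail()</code> call are placeholders:</p>

```python
import time

_last_sent = 0.0  # epoch seconds of the last email; 0 means never sent


def send_alert(min_interval=23 * 60 * 60):
    """Send at most one alert per min_interval seconds (placeholder body)."""
    global _last_sent
    now = time.time()
    if now - _last_sent < min_interval:
        return False  # too soon, skip this alert
    _last_sent = now
    # sendemail()  # the real send would go here
    return True
```

<p>Calling <code>send_alert()</code> from the loop then sends at most one email per interval even though the loop fires every second, and it naturally re-arms for the next day.</p>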
| 0 |
2016-09-15T23:28:16Z
|
[
"python",
"raspberry-pi"
] |
Limit program to send only one email
| 39,521,551 |
<p>I have a program to control lots of the things about my hen house including opening and closing the door at set times. Occasionally something happens and the door doesn't open or close and I have now got it to send an email when this happens. The problem is it will send 6 or more emails and I have been trying to work out how to limit it to send only one which is possible using while or if - but then I need to re-set it so that if happens again on another day it will send another email. This is the loop that I have</p>
<pre><code>def loop():
# retrieve current datetime
now = datetime.time(datetime.datetime.now().hour, datetime.datetime.now().minute)
# Open door on all days at the correct time
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute)):
if (gpio.digitalRead(17) == 0):
openplay()
# Close door on all days at the correct time
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute)):
if (gpio.digitalRead(22) == 1):
closeplay()
# check if door is open, 2 minutes after set time
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute + 120) and (now.second == 0) and (gpio.digitalRead(25) == 0)):
# send email
sendemail()
# check if door is closed, 2 minutes after set time
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute + 120) and (now.second == 0) and (gpio.digitalRead(25) == 1)):
# send email
sendemail()
# gives CPU some time before looping again
webiopi.sleep(1)
</code></pre>
<p>This is just a hobby and I put together things from mostly searching but can't crack this so would appreciate any help with it</p>
| 1 |
2016-09-15T22:58:23Z
| 39,551,265 |
<p>This seems to do it. </p>
<pre><code>def loop():
global emailsent1
global emailsent2
# retrieve current datetime
now = datetime.time(datetime.datetime.now().hour, datetime.datetime.now().minute)
# Open door on all days at the correct time
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute)):
if (gpio.digitalRead(17) == 0):
openplay()
# Close door on all days at the correct time
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute)):
if (gpio.digitalRead(22) == 1):
closeplay()
# check if door is open, 10 minutes after set time and send email if not already sent
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute + 10) and (now.second == 0) and (gpio.digitalRead(25) == 1) and not emailsent1):
# send email
a = 'opened'
sendemail(a)
emailsent1 = True
if ((now.hour == HOUR_ON.hour) and (now.minute == HOUR_ON.minute + 11) and emailsent1): # reset if email has been sent
emailsent1 = False
# check if door is closed, 10 minutes after set time and send email if not already sent
if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute + 10) and (now.second == 0) and (gpio.digitalRead(25) == 0) and not emailsent2):
# send email
a = 'closed'
sendemail(a)
emailsent2 = True
    if ((now.hour == HOUR_OFF.hour) and (now.minute == HOUR_OFF.minute + 11) and emailsent2): # reset if email has been sent
emailsent2 = False
# gives CPU some time before looping again
webiopi.sleep(1)
</code></pre>
<p>10 minutes after the set time it checks whether the door moved; if not, <code>sendemail(a)</code> is called and the corresponding <code>emailsent</code> flag is set to True. A minute later the flag is set back to False.</p>
<p>Do I need emailsent1 and emailsent2 as global?</p>
| 0 |
2016-09-17T20:13:02Z
|
[
"python",
"raspberry-pi"
] |
Python - When trying to run the code nothing happens. I don't even get an error
| 39,521,608 |
<p>I am trying to create a small betting application with Python and when I try to run the program nothing happens whatsoever. I am using IDLE.
This is my code:</p>
<pre><code>def bet():
#balance = 100
x = 0
with open("bal.txt", "r") as f:
for l in f:
bal = (sum([int(a) for a in l.split()]))
while bal > 0:
print ("Your balance is: " + str(balance) + " credits.")
while x == 0:
print ("Enter the amount you would like to bet:")
bet = int(input())
if bet > bal:
x = 0
elif bet < 0:
x = 0
else:
x = 1
print ("Pick a number between 1 and 20")
num = int(input())
convbal = bal - bet
print ("Your bet is now locked in...")
print (" ")
print (" ")
import random
rannum = random.randint(1, 20)
print ("Your guess was: " + str(num))
print (" ")
print ("The random number was: " + str(rannum))
if rannum == num:
print ("WINNER")
bal = bal + (bet * 2)
else:
print ("LOSER")
print ("")
print ("")
</code></pre>
<p>This is the outcome I receive in shell:</p>
<p><a href="http://i.stack.imgur.com/z2gO7.png" rel="nofollow"><img src="http://i.stack.imgur.com/z2gO7.png" alt="output"></a></p>
<p>Any idea will be great, thanks :)</p>
| 0 |
2016-09-15T23:05:13Z
| 39,521,657 |
<p>The ONLY thing that code does is define a function named <code>bet</code>. You could type <code>bet()</code> in the IDLE shell to call it, or put <code>bet()</code> at the bottom of the file (not indented!) to call it automatically.</p>
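<p>A minimal illustration (the returned string is just a stand-in):</p>

```python
def bet():
    return "Your balance is: 100 credits."

# Nothing happens until the function is actually called:
print(bet())  # Your balance is: 100 credits.
```

<p>A common pattern is to guard the call with <code>if __name__ == '__main__':</code> so the file runs when executed directly but not when imported.</p>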
| 2 |
2016-09-15T23:11:24Z
|
[
"python",
"python-idle"
] |
Python - When trying to run the code nothing happens. I don't even get an error
| 39,521,608 |
<p>I am trying to create a small betting application with Python and when I try to run the program nothing happens whatsoever. I am using IDLE.
This is my code:</p>
<pre><code>def bet():
#balance = 100
x = 0
with open("bal.txt", "r") as f:
for l in f:
bal = (sum([int(a) for a in l.split()]))
while bal > 0:
print ("Your balance is: " + str(balance) + " credits.")
while x == 0:
print ("Enter the amount you would like to bet:")
bet = int(input())
if bet > bal:
x = 0
elif bet < 0:
x = 0
else:
x = 1
print ("Pick a number between 1 and 20")
num = int(input())
convbal = bal - bet
print ("Your bet is now locked in...")
print (" ")
print (" ")
import random
rannum = random.randint(1, 20)
print ("Your guess was: " + str(num))
print (" ")
print ("The random number was: " + str(rannum))
if rannum == num:
print ("WINNER")
bal = bal + (bet * 2)
else:
print ("LOSER")
print ("")
print ("")
</code></pre>
<p>This is the outcome I receive in shell:</p>
<p><a href="http://i.stack.imgur.com/z2gO7.png" rel="nofollow"><img src="http://i.stack.imgur.com/z2gO7.png" alt="output"></a></p>
<p>Any idea will be great, thanks :)</p>
| 0 |
2016-09-15T23:05:13Z
| 39,521,663 |
<p>Check that <code>bal.txt</code> can be opened and that it has at least one line whose sum is > 0. The point is that one of your conditions is failing, so the code after it never runs.</p>
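<p>The balance-summing condition can be checked in isolation; a sketch using <code>io.StringIO</code> to stand in for <code>bal.txt</code>:</p>

```python
import io

data = io.StringIO("40 60\n")  # stand-in for the contents of bal.txt
bal = 0
for l in data:
    # same summing logic as the question's file-reading loop
    bal = sum(int(a) for a in l.split())

print(bal)  # 100 -- the while loop in the question only runs if this is > 0
```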
| 0 |
2016-09-15T23:12:07Z
|
[
"python",
"python-idle"
] |
Django: How to do reverse foreign key lookup of another class without an instance of this class?
| 39,521,610 |
<p>I have the following two Django Classes <code>MyClassA</code> and <code>MyClassB</code>.</p>
<p><code>MyClassB</code> has a foreign key reference to an instance of <code>MyClassA</code>.</p>
<pre><code>from django.db import models
class MyClassA(models.Model):
name = models.CharField(max_length=50, null=False)
@classmethod
def my_method_a(cls):
# What do I put here to call MyClassB.my_method_b()??
class MyClassB(models.Model):
name = models.CharField(max_length=50, null=False)
my_class_a = models.ForeignKey(MyClassA, related_name="MyClassB_my_class_a")
@staticmethod
def my_method_b():
return "Hello"
</code></pre>
<p>From within <code>MyClassA</code>'s class method <code>my_method_a</code>, I would like to call <code>MyClassB</code>'s static method <code>my_method_b</code>. How can I do it? If <code>my_method_a</code> was an instance method, I would simply do <code>self.MyClassB_my_class_a.model.my_method_b()</code>. But since I don't have an instance of <code>MyClassA</code>, I don't know how to do it.</p>
<p>Can I use <code>cls</code> instead of <code>self</code>?</p>
| 2 |
2016-09-15T23:05:34Z
| 39,521,633 |
<p>Since it is a <code>staticmethod</code>, you don't need an instance. </p>
<p>You can call <code>MyClassB.my_method_b()</code> directly.</p>
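<p>A plain-Python sketch of the mechanics (no Django required). Note that <code>MyClassB</code> is looked up when <code>my_method_a</code> is <em>called</em>, not when it is defined, so the definition order from the question is not a problem:</p>

```python
class MyClassA:
    @classmethod
    def my_method_a(cls):
        # MyClassB is resolved at call time, so it may be defined below
        return MyClassB.my_method_b()


class MyClassB:
    @staticmethod
    def my_method_b():
        return "Hello"


print(MyClassA.my_method_a())  # Hello
```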
| 1 |
2016-09-15T23:08:11Z
|
[
"python",
"django",
"static-methods",
"class-method",
"instance-method"
] |
Raise ClientException(required_message.format(attribute)) praw.exceptions.ClientException: Required configuration setting 'client_id' missing
| 39,521,621 |
<p>I am not sure how to make this work. I also can't find client_id in my app. I just see the app secret there:</p>
<pre><code>>>> import praw
>>> r = praw.Reddit(user_agent='custom data mining framework',
... site_name='lamiastella')
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/local/lib/python2.7/dist-packages/praw/reddit.py", line 101, in __init__
raise ClientException(required_message.format(attribute))
praw.exceptions.ClientException: Required configuration setting 'client_id' missing.
This setting can be provided in a praw.ini file, as a keyword argument to the `Reddit` class constructor, or as an environment variable.
</code></pre>
<p>here is my <code>praw.ini</code> file which I am not sure if it's correct or has all the necessary fields:</p>
<pre><code>[lamiastella]
domain: www.monajalal.com
user: lamiastella
pswd: mypassword
</code></pre>
<p>any help is really appreciated. </p>
<p>**Can I retrieve images using praw as well from reddit or what do you suggest?</p>
| 1 |
2016-09-15T23:07:18Z
| 39,522,434 |
<p>The error is caused from a missing <code>client_id</code> (which is your unique API key and secret for the Reddit API) in your <code>praw.ini</code> file or in your Python script.</p>
<p>In your script you could have something like:</p>
<pre><code>r.set_oauth_app_info(client_id='stJlUSUbPQe5lQ',
                     client_secret='DoNotSHAREWithANYBODY',
                     redirect_uri='http://127.0.0.1:65010/'
                     'authorize_callback')
</code></pre>
<p><a href="https://praw.readthedocs.io/en/stable/pages/oauth.html?highlight=client_id#step-2-setting-up-praw" rel="nofollow">https://praw.readthedocs.io/en/stable/pages/oauth.html?highlight=client_id#step-2-setting-up-praw</a></p>
<p>Or set up in the <code>praw.ini</code> file as described in the link below:</p>
<p><a href="https://praw.readthedocs.io/en/stable/pages/configuration_files.html#configuration-variables" rel="nofollow">https://praw.readthedocs.io/en/stable/pages/configuration_files.html#configuration-variables</a></p>
<p>If you have already signed up for access to the reddit API, it says:</p>
<p><a href="https://www.reddit.com/wiki/api" rel="nofollow">https://www.reddit.com/wiki/api</a></p>
<blockquote>
<p>OAUTH Client ID(s) *</p>
<ul>
<li>if you don't have yet, please email api@reddit.com when received or when you add additional</li>
</ul>
</blockquote>
<p>You can get your <code>client_id</code> from your app in:
<a href="https://www.reddit.com/prefs/apps" rel="nofollow">https://www.reddit.com/prefs/apps</a></p>
<p><a href="http://i.stack.imgur.com/Sn3HM.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sn3HM.png" alt="enter image description here"></a></p>
<p>In this example from their documentation (under the API app title): the <code>client_id=p-jcoLKBynTLew</code></p>
| 1 |
2016-09-16T01:05:41Z
|
[
"python",
"reddit",
"praw"
] |
I'm struggling to write numbers (0-10) into a CSV file. How do I change into a list, convert to integers, and have the number on each line?
| 39,521,623 |
<h1>Writes numbers from 0 to 10 into a CSV file</h1>
<p>import csv</p>
<pre><code>fileW = open('numbers.csv', 'w')
fileW.write('0,')
fileW.write('\n1,')
fileW.write('\n2,')
fileW.write('\n3,')
fileW.write('\n4,')
fileW.write('\n5')
fileW.write('\n6,')
fileW.write('\n7,')
fileW.write('\n8,')
fileW.write('\n9,')
fileW.write('\n10')
fileW.close();
</code></pre>
<h1>Read the numbers into a list</h1>
<pre><code>fileR = open('numbers.csv')
contnts = fileR.read()
print(contents)
</code></pre>
<p>This is my code. How do I convert this into integers and change into a list so that when I print the contents I got one column of numbers: one number on each line like [0 on first line, 1 on second line, 2 on third line...]?</p>
| -2 |
2016-09-15T23:07:24Z
| 39,521,730 |
<p>Let's use a for loop, shall we?</p>
<pre><code># generate the list with commas
fileW = open('numbers.csv', 'w')
for i in xrange(11):
fileW.write('%s,\n'%i)
fileW.close()
# open the file and read line by line
with open('numbers.csv','r') as fileR:
for line in fileR:
        # strip the trailing comma and newline, then convert back to integer
        print int(line.rstrip(',\n')) + 100
</code></pre>
| 0 |
2016-09-15T23:21:40Z
|
[
"python"
] |
I'm struggling to write numbers (0-10) into a CSV file. How do I change into a list, convert to integers, and have the number on each line?
| 39,521,623 |
<h1>Writes numbers from 0 to 10 into a CSV file</h1>
<p>import csv</p>
<pre><code>fileW = open('numbers.csv', 'w')
fileW.write('0,')
fileW.write('\n1,')
fileW.write('\n2,')
fileW.write('\n3,')
fileW.write('\n4,')
fileW.write('\n5')
fileW.write('\n6,')
fileW.write('\n7,')
fileW.write('\n8,')
fileW.write('\n9,')
fileW.write('\n10')
fileW.close();
</code></pre>
<h1>Read the numbers into a list</h1>
<pre><code>fileR = open('numbers.csv')
contnts = fileR.read()
print(contents)
</code></pre>
<p>This is my code. How do I convert this into integers and change into a list so that when I print the contents I got one column of numbers: one number on each line like [0 on first line, 1 on second line, 2 on third line...]?</p>
| -2 |
2016-09-15T23:07:24Z
| 39,521,740 |
<p>This is a pythonic way to save, obtain the numbers and create a list named int_list with all those integers:</p>
<pre><code>with open('numbers.csv', 'w') as fileW:
for i in range(11):
fileW.write('%s\n'%i)
int_list = []
with open('numbers.csv','r') as fileR:
for line in fileR:
line = line.rstrip('\n') # read line and clean the EOL
int_list.append(int(line))
print(line)
print(int_list)
</code></pre>
<p>Output:</p>
<blockquote>
<p>0 <br>1<br> 2<br> 3<br> 4<br> 5<br> 6<br> 7<br> 8<br> 9<br> 10<br> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]<br></p>
</blockquote>
<p>Now you can operate the list.</p>
| 0 |
2016-09-15T23:23:11Z
|
[
"python"
] |
I'm struggling to write numbers (0-10) into a CSV file. How do I change into a list, convert to integers, and have the number on each line?
| 39,521,623 |
<h1>Writes numbers from 0 to 10 into a CSV file</h1>
<p>import csv</p>
<pre><code>fileW = open('numbers.csv', 'w')
fileW.write('0,')
fileW.write('\n1,')
fileW.write('\n2,')
fileW.write('\n3,')
fileW.write('\n4,')
fileW.write('\n5')
fileW.write('\n6,')
fileW.write('\n7,')
fileW.write('\n8,')
fileW.write('\n9,')
fileW.write('\n10')
fileW.close();
</code></pre>
<h1>Read the numbers into a list</h1>
<pre><code>fileR = open('numbers.csv')
contnts = fileR.read()
print(contents)
</code></pre>
<p>This is my code. How do I convert this into integers and change into a list so that when I print the contents I got one column of numbers: one number on each line like [0 on first line, 1 on second line, 2 on third line...]?</p>
| -2 |
2016-09-15T23:07:24Z
| 39,521,772 |
<p>What you want is the Python "csv" module <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a>. This makes it extremely easy to read and write CSV format. For example,</p>
<pre><code>import csv
import sys
writer = csv.writer(sys.stdout)
writer.writerow([1, 2, 3])
</code></pre>
<p>With the csv module you can do many things easily, reading as well as writing, and this includes reading a whole CSV file pretty much in one gulp.</p>
| 1 |
2016-09-15T23:26:41Z
|
[
"python"
] |
Trimming string from oracle
| 39,521,643 |
<p>I'm attempting, poorly, to compare two lists of filenames and provide a list of the differences. I have read and attempted the numerous examples available.
Before you close this as a Duplicate, or Already Answered, please read further.</p>
<p>In python, I'm making a call via ftp to obtain a list of files. This returns an array as expected. Next, I'm calling Oracle to retrieve a list of filenames.
This returns an array of filenames, but the cx_oracle module returns them as <code>('filename',)</code>. When I do a compare, it is comparing filename against <code>('filename',)</code> and fails. I need to remove the single quote, parentheses and comma prior to the compare. But I'm having zero luck.
I'm attempting to trim these extra characters out of the string using slicing, but this is failing.</p>
<pre><code>for file in resultset:
print file
print file[2:10]
</code></pre>
<p>This returns <code>()</code>, rather than filename as expected.
I have tried slicing the string, I have attempted to use awk, but the metacharacters are making this difficult.<br>
If anyone can provide guidance, I would greatly appreciate it.
I have spent hours on something that should have taken minutes.</p>
<p>Thank you, Allan</p>
| 0 |
2016-09-15T23:09:17Z
| 39,521,685 |
<p>So first of all, if you want to remove characters from either side of a string you can use <code>lstrip</code> and <code>rstrip</code>.</p>
<pre><code>>>> "('filename')".lstrip("('")
'filename')'
>>> "('filename')".rstrip("')")
"('filename"
</code></pre>
<p>But I suspect that the return data is actually a tuple and not a string</p>
<pre><code>>>> ('filename',)
('filename',)
>>> type(('filename',))
<class 'tuple'>
>>> ('filename',)[2:20]
()
</code></pre>
<p>Which will allow you to say</p>
<pre><code>>>> ('filename',)[0]
'filename'
</code></pre>
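<p>Applied to the comparison task, a sketch with a hypothetical result set (the filenames are made up for illustration):</p>

```python
# Hypothetical rows as cx_Oracle returns them: one-element tuples
resultset = [("file_a.txt",), ("file_b.txt",), ("file_c.txt",)]
ftp_files = ["file_a.txt", "file_c.txt", "file_d.txt"]

db_files = [row[0] for row in resultset]  # plain strings, not tuples

only_in_db = set(db_files) - set(ftp_files)
only_on_ftp = set(ftp_files) - set(db_files)
print(only_in_db, only_on_ftp)  # {'file_b.txt'} {'file_d.txt'}
```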
| 0 |
2016-09-15T23:15:05Z
|
[
"python",
"cx-oracle"
] |
Python regex search or match not working
| 39,521,711 |
<p>I wrote this regex:</p>
<pre><code> re.search(r'^SECTION.*?:', text, re.I | re.M)
re.match(r'^SECTION.*?:', text, re.I | re.M)
</code></pre>
<p>to run on this string:</p>
<pre><code>text = 'SECTION 5.01. Financial Statements and Other Information. The Parent\nwill furnish to the Administrative Agent:\n (a) within 95 days after the end of each fiscal year of the Parent,\n its audited consolidated balance sheet and related statements of income,\n cash flows and stockholders\' equity as of the end of and for such year,\n setting forth in each case in comparative form the figures for the previous\n fiscal year, all reported on by Arthur Andersen LLP or other independent\n public accountants of recognized national standing (without a "going\n concern" or like qualification or exception and without any qualification\n or exception as to the scope of such audit) to the effect that such\n consolidated financial statements present fairly in all material respects\n the financial condition and results of operations of the Parent and its\n consolidated Subsidiaries on a consolidated basis in accordance with GAAP\n consistently applied;\n (b) within 50 days after the end of each of the first three fiscal\n quarters of each fiscal year of the Parent, its consolidated balance sheet\n and related statements of income, cash flows and stockholders\' equity as of\n the end of and for such fiscal quarter and the then elapsed portion of the\n fiscal year, setting forth in each case in comparative form the figures for\n the corresponding period or periods of (or, in the case of the balance\n sheet, as of the end of) the previous fiscal year, all certified by one of\n its Financial Officers as presenting fairly in all material respects the\n financial condition and results of operations of the Parent and its\n consolidated Subsidiaries on a consolidated basis in accordance with GAAP\n consistently applied, subject to normal year-end audit adjustments and the\n absence of footnotes;\n '
</code></pre>
<p>and i was expecting the following output:</p>
<pre><code>SECTION 5.01. Financial Statements and Other Information. The Parent\nwill furnish to the Administrative Agent:
</code></pre>
<p>but I am getting <code>None</code> as the output.</p>
<p>Can anyone tell me what I am doing wrong here?</p>
| 1 |
2016-09-15T23:19:25Z
| 39,521,756 |
<p>By default <code>.</code> does not match a newline, so <code>.*?</code> cannot cross the <code>\n</code> that comes before the <code>:</code>, and the search returns <code>None</code>. You can use a negated character class instead (which does match newlines) to get the expected result:</p>
<pre><code>In [32]: m = re.search(r'^SECTION[^:]*?:', text, re.I | re.M)
In [33]: m.group(0)
Out[33]: 'SECTION 5.01. Financial Statements and Other Information. The Parent\nwill furnish to the Administrative Agent:'
</code></pre>
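<p>The failing pattern is not shown in the question, so the greedy version below is a plausible reconstruction (an assumption, not the asker's exact code). It makes the difference concrete: without <code>re.S</code>, <code>.</code> does not cross a newline, so <code>.*</code> is stuck on the first line and never reaches the colon, while the lazy <code>[^:]*?</code> expands across lines and stops at the first <code>:</code>:</p>

```python
import re

# A shortened version of the text from the question: the ':' sits on the
# second line, after "Administrative Agent".
text = ("SECTION 5.01. Financial Statements and Other Information. The Parent\n"
        "will furnish to the Administrative Agent:\n"
        "    (a) within 95 days after the end of each fiscal year ...\n")

# Without re.S, '.' does not match '\n', so '.*' stays on the first line,
# which contains no ':' -- re.search returns None.
greedy = re.search(r'^SECTION.*:', text, re.I | re.M)

# '[^:]' matches any character that is not ':', including '\n', so the
# lazy quantifier expands across lines and stops at the first colon.
lazy = re.search(r'^SECTION[^:]*?:', text, re.I | re.M)

print(greedy)         # None
print(lazy.group(0))  # ends with 'Administrative Agent:'
```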
| 1 |
2016-09-15T23:25:28Z
|
[
"python",
"regex"
] |
how to get a result without brackets in a list?
| 39,521,715 |
<p>I have a function which merges two sorted lists (a, b) into one sorted list (c). If the given lists have different lengths, I need to insert the rest of the longer one into the list c. But the way I did it gives a result with brackets inside the list c (for instance, if a = [1,3,5] and b = [2,4,6], then the function returns [1,2,3,4,5,[6]]). How can I get rid of those brackets?</p>
<p>Here's my code:</p>
<pre><code>def merge(a,b):
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i = i + 1
        elif b[j] < a[i]:
            c.append(b[j])
            j = j + 1
        elif a[i] == b[j]:
            c.append(a[i])
            c.append(b[j])
            i = i + 1
            j = j + 1
    if i < len(a):
        c.append(a[i:])
    if j < len(b):
        c.append(b[j:])
    return c
</code></pre>
| 1 |
2016-09-15T23:19:39Z
| 39,521,875 |
<p>You would have to add each value individually, like this:</p>
<pre><code>if i < len(a):
    for k in a[i:]:
        c.append(k)
</code></pre>
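<p>Putting that fix into the whole function gives a runnable sketch (same algorithm as the question's code, with the leftover slice appended element by element):</p>

```python
def merge(a, b):
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i += 1
        elif b[j] < a[i]:
            c.append(b[j])
            j += 1
        else:  # a[i] == b[j]: the original keeps both copies
            c.append(a[i])
            c.append(b[j])
            i += 1
            j += 1
    # Append each leftover value individually instead of the slice itself
    for k in a[i:]:
        c.append(k)
    for k in b[j:]:
        c.append(k)
    return c

print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```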
| 1 |
2016-09-15T23:41:12Z
|
[
"python",
"list"
] |
how to get a result without brackets in a list?
| 39,521,715 |
<p>I have a function which merges two sorted lists (a, b) into one sorted list (c). If the given lists have different lengths, I need to insert the rest of the longer one into the list c. But the way I did it gives a result with brackets inside the list c (for instance, if a = [1,3,5] and b = [2,4,6], then the function returns [1,2,3,4,5,[6]]). How can I get rid of those brackets?</p>
<p>Here's my code:</p>
<pre><code>def merge(a,b):
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i = i + 1
        elif b[j] < a[i]:
            c.append(b[j])
            j = j + 1
        elif a[i] == b[j]:
            c.append(a[i])
            c.append(b[j])
            i = i + 1
            j = j + 1
    if i < len(a):
        c.append(a[i:])
    if j < len(b):
        c.append(b[j:])
    return c
</code></pre>
| 1 |
2016-09-15T23:19:39Z
| 39,522,262 |
<p>You should use <a href="https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types" rel="nofollow"><code>.extend()</code></a> in your last lines:</p>
<pre><code>if i < len(a):
    c.extend(a[i:])
if j < len(b):
    c.extend(b[j:])
</code></pre>
<p>because <code>a[i:]</code> and <code>b[j:]</code> are going to be lists</p>
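<p>A minimal sketch of the difference between the two methods:</p>

```python
c = [1, 2, 3, 4, 5]

nested = list(c)
nested.append([6])   # append adds the list object itself as one element

flat = list(c)
flat.extend([6])     # extend splices the list's elements into the result

print(nested)  # [1, 2, 3, 4, 5, [6]]
print(flat)    # [1, 2, 3, 4, 5, 6]
```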
<p>But I don't understand why you won't use existing methods to get your end result:</p>
<pre><code>a = [1, 3, 5]
b = [2, 4, 6]
c = a + b
c.sort()
print(c)
</code></pre>
<p>Won't that work?</p>
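<p>As a side note (an alternative technique, not part of the question's own approach): for inputs that are already sorted, the standard library's <code>heapq.merge</code> produces the merged result in a single linear pass, without re-sorting:</p>

```python
import heapq

a = [1, 3, 5]
b = [2, 4, 6]

# heapq.merge returns a lazy iterator over the merged, sorted values
c = list(heapq.merge(a, b))
print(c)  # [1, 2, 3, 4, 5, 6]
```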
| 1 |
2016-09-16T00:40:23Z
|
[
"python",
"list"
] |