title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Multiprocessing: main program stops until process is finished
| 39,462,926 |
<p>I know a minimal working example is the gold standard, and I am working on it. However, maybe there is an obvious error. The function <code>run_worker</code> is executed upon a button press event. It creates a class instance and should start a method of that class. However, the function <code>run_worker</code> waits until the class method has finished. As a result, Kivy gets stuck and I cannot do other stuff. Any ideas how I should use multiprocessing in this case?</p>
<pre><code>from multiprocessing import Process
class SettingsApp(App):
""" Short not working version of the actual programm
"""
def build(self):
"""Some kivy specific settings"""
return Interface
"""This part does not work as expected. It is run by pushing a button. However, The function does hang until the Process has finished (or has been killed)."""
def run_worker(self):
"""The pHBot application is started as a second process. Otherwise kivy would be blocked until the function stops
(which is controlled by the close button)
"""
# get the arguments in appropriate form
args = self.get_stored_settings()
# Initiate the class which should be run by a separate process
bot = pHBot(*args)
# the control method runs some devices, listens to sensors etc.
phbot = Process(target=bot.ph_control(), args=(bot,))
# start the process
phbot.start()
# This statement is executed only after phbot has stopped.
print('Started')
</code></pre>
| 0 |
2016-09-13T05:44:25Z
| 39,467,181 |
<p>I suggest you study daemon processes here:
<a href="https://pypi.python.org/pypi/python-daemon/" rel="nofollow">https://pypi.python.org/pypi/python-daemon/</a></p>
<p><a href="http://askubuntu.com/questions/192058/what-is-technical-difference-between-daemon-service-and-process">http://askubuntu.com/questions/192058/what-is-technical-difference-between-daemon-service-and-process</a></p>
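<p>Apart from daemonizing, the immediate blocker in the posted snippet may be the parentheses in <code>target=bot.ph_control()</code>: that calls the method in the main process (blocking until it returns) and passes its return value to <code>Process</code>. A minimal sketch (the function name mirrors the question; the rest is illustrative):</p>

```python
from multiprocessing import Process

def ph_control():
    # stands in for bot.ph_control: runs devices, listens to sensors, etc.
    print('worker running')

if __name__ == '__main__':
    # pass the callable itself; target=ph_control() would run it here
    # in the main process and block until it finished
    p = Process(target=ph_control)
    p.start()
    print('Started')  # prints immediately, while the worker runs in parallel
    p.join()
```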
| 0 |
2016-09-13T09:57:51Z
|
[
"python",
"kivy",
"python-multiprocessing"
] |
Multiprocessing: main program stops until process is finished
| 39,462,926 |
<p>I know a minimal working example is the gold standard, and I am working on it. However, maybe there is an obvious error. The function <code>run_worker</code> is executed upon a button press event. It creates a class instance and should start a method of that class. However, the function <code>run_worker</code> waits until the class method has finished. As a result, Kivy gets stuck and I cannot do other stuff. Any ideas how I should use multiprocessing in this case?</p>
<pre><code>from multiprocessing import Process
class SettingsApp(App):
""" Short not working version of the actual programm
"""
def build(self):
"""Some kivy specific settings"""
return Interface
"""This part does not work as expected. It is run by pushing a button. However, The function does hang until the Process has finished (or has been killed)."""
def run_worker(self):
"""The pHBot application is started as a second process. Otherwise kivy would be blocked until the function stops
(which is controlled by the close button)
"""
# get the arguments in appropriate form
args = self.get_stored_settings()
# Initiate the class which should be run by a separate process
bot = pHBot(*args)
# the control method runs some devices, listens to sensors etc.
phbot = Process(target=bot.ph_control(), args=(bot,))
# start the process
phbot.start()
# This statement is executed only after phbot has stopped.
print('Started')
</code></pre>
| 0 |
2016-09-13T05:44:25Z
| 39,486,560 |
<p>I found a solution. I'm not sure why it works:</p>
<pre><code>def worker(self):
    """
    The pHBot application is started as a second process. Otherwise kivy would be blocked until the function stops
    (which is controlled by the close button)
    """
    # build the bot inside the worker so it runs entirely in the child process
    args = self.get_stored_settings()
    bot = pHBot(*args)
    bot.ph_control()

def run_worker(self):
    """The function is called upon button press
    """
    # note: target is the bound method itself, not a call to it
    self.phbot = Process(target=self.worker)
    # start the process
    self.phbot.start()
</code></pre>
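<p>A plausible explanation (not from the answer itself) for why this works: <code>target=self.worker</code> hands <code>Process</code> a reference to the bound method, whereas the original code's <code>target=bot.ph_control()</code> called the method immediately. The difference shows up in a quick timing check:</p>

```python
import time
from multiprocessing import Process

def slow():
    # stands in for a long-running control loop
    time.sleep(1.0)

if __name__ == '__main__':
    t0 = time.time()
    p = Process(target=slow)  # function object: runs in a child process
    p.start()
    launch = time.time() - t0
    p.join()
    # start() returned right away instead of waiting a full second
    assert launch < 1.0
```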
| 0 |
2016-09-14T09:08:21Z
|
[
"python",
"kivy",
"python-multiprocessing"
] |
pelican make serve error with broken pipe?
| 39,462,958 |
<p>I was trying to make a blog with Pelican, and at the <code>make serve</code> step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all), but I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks!
Python info:</p>
<blockquote>
<p>Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2</p>
</blockquote>
<p>Error info:</p>
<blockquote>
<p>127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 - WARNING:root:Unable to find / file. WARNING:root:Unable to find /.html
file.
127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 -
---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 51036) Traceback (most recent
call last): File "/usr/lib/python2.7/SocketServer.py", line 295, in
_handle_request_noblock
self.process_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self) File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
self.finish() File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
self.wfile.close() File "/usr/lib/python2.7/socket.py", line 279, in close
self.flush() File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size]) error: [Errno 32] Broken pipe</p>
</blockquote>
| 0 |
2016-09-13T05:48:03Z
| 39,462,999 |
<p>Well, I installed pip on Ubuntu and then it all worked.</p>
<p>Not sure if it is a version thing.</p>
| 0 |
2016-09-13T05:51:53Z
|
[
"python",
"python-2.7",
"ubuntu",
"makefile",
"server"
] |
How to read strange csv files in Pandas?
| 39,462,978 |
<p>I would like to read the sample csv file shown below</p>
<pre><code>--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------
</code></pre>
<p>I tried </p>
<pre><code>pd.read_csv("sample.csv",sep="|")
</code></pre>
<p>But it didn't work well.</p>
<p>How can I read this csv?</p>
| 5 |
2016-09-13T05:49:58Z
| 39,463,003 |
<p>You can add parameter <code>comment</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html"><code>read_csv</code></a> and then remove columns with <code>NaN</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html"><code>dropna</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u"""--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep="|", comment='-').dropna(axis=1, how='all')
print (df)
A B C
0 1 2 3
1 4 5 6
2 7 8 9
</code></pre>
<p>More general solution:</p>
<pre><code>import pandas as pd
import io
temp=u"""--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------"""
#after testing replace io.StringIO(temp) to filename
#separator is char which is NOT in csv
df = pd.read_csv(io.StringIO(temp), sep="^", comment='-')
#remove first and last | in data and in column names
df.iloc[:,0] = df.iloc[:,0].str.strip('|')
df.columns = df.columns.str.strip('|')
#split column names
cols = df.columns.str.split('|')[0]
#split data
df = df.iloc[:,0].str.split('|', expand=True)
df.columns = cols
print (df)
A B C
0 1 2 3
1 4 5 6
2 7 8 9
</code></pre>
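<p>Another option (a sketch, not from the answer) is to pre-filter the dashed separator lines yourself and hand the remainder to <code>read_csv</code>:</p>

```python
import io
import pandas as pd

text = """--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------"""

# keep only the data lines, stripping the leading/trailing pipes
lines = [ln.strip('|') for ln in text.splitlines() if not ln.startswith('-')]
df = pd.read_csv(io.StringIO('\n'.join(lines)), sep='|')
print(df)
```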
| 10 |
2016-09-13T05:52:07Z
|
[
"python",
"csv",
"pandas"
] |
How to read strange csv files in Pandas?
| 39,462,978 |
<p>I would like to read the sample csv file shown below</p>
<pre><code>--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------
</code></pre>
<p>I tried </p>
<pre><code>pd.read_csv("sample.csv",sep="|")
</code></pre>
<p>But it didn't work well.</p>
<p>How can I read this csv?</p>
| 5 |
2016-09-13T05:49:58Z
| 39,463,044 |
<p>Try <code>import csv</code> rather than using pandas directly.</p>
<pre><code>import csv

easy_csv = []
with open('sample.csv') as csvfile:
    reader = csv.reader(csvfile, delimiter='|')
    for row in reader:
        # skip the separator lines made of dashes
        if row and row[0].startswith('--'):
            continue
        # drop the empty fields created by the leading/trailing '|'
        easy_csv.append([field for field in row if field != ''])
</code></pre>
<p>After this preprocessing, you can save the rows to a comma-separated csv file that is easy to handle in pandas.</p>
| 1 |
2016-09-13T05:55:22Z
|
[
"python",
"csv",
"pandas"
] |
How to read strange csv files in Pandas?
| 39,462,978 |
<p>I would like to read the sample csv file shown below</p>
<pre><code>--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------
</code></pre>
<p>I tried </p>
<pre><code>pd.read_csv("sample.csv",sep="|")
</code></pre>
<p>But it didn't work well.</p>
<p>How can I read this csv?</p>
| 5 |
2016-09-13T05:49:58Z
| 39,463,090 |
<p>I tried this code and it works:</p>
<pre><code>import pandas as pd
import numpy as np
a = pd.read_csv("a.csv",sep="|")
print(a)
for i in a:
print(i)
</code></pre>
<p><a href="http://i.stack.imgur.com/87JF9.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/87JF9.jpg" alt="enter image description here"></a></p>
| 1 |
2016-09-13T05:59:09Z
|
[
"python",
"csv",
"pandas"
] |
how to copy numpy array value into higher dimensions
| 39,463,019 |
<p>I have a 2D NumPy array of shape (w, h). I want to add a third dimension of size greater than 1 and copy the array's values along it. I was hoping broadcasting would do it, but it can't. This is how I'm doing it:</p>
<pre><code>arr = np.expand_dims(arr, axis=2)
arr = np.concatenate((arr,arr,arr), axis=2)
</code></pre>
<p>Is there a faster way to do so?</p>
| 2 |
2016-09-13T05:53:17Z
| 39,463,055 |
<p>You can <em>push</em> all dims forward, introducing a singleton dim/new axis as the last dim to create a <code>3D</code> array and then repeat three times along that one with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow"><code>np.repeat</code></a>, like so -</p>
<pre><code>arr3D = np.repeat(arr[...,None],3,axis=2)
</code></pre>
<p>Here's another approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow"><code>np.tile</code></a> -</p>
<pre><code>arr3D = np.tile(arr[...,None],3)
</code></pre>
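<p>If a read-only view is acceptable, <code>np.broadcast_to</code> avoids copying the data entirely (a small sketch to complement the answer):</p>

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)
# read-only view: the three "copies" along the last axis share the original memory
view = np.broadcast_to(arr[..., None], arr.shape + (3,))
assert view.shape == (2, 3, 3)
# materialize a writable array only if you actually need one
arr3d = view.copy()
assert np.array_equal(arr3d[..., 0], arr)
```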
| 2 |
2016-09-13T05:56:18Z
|
[
"python",
"numpy"
] |
how to copy numpy array value into higher dimensions
| 39,463,019 |
<p>I have a 2D NumPy array of shape (w, h). I want to add a third dimension of size greater than 1 and copy the array's values along it. I was hoping broadcasting would do it, but it can't. This is how I'm doing it:</p>
<pre><code>arr = np.expand_dims(arr, axis=2)
arr = np.concatenate((arr,arr,arr), axis=2)
</code></pre>
<p>Is there a faster way to do so?</p>
| 2 |
2016-09-13T05:53:17Z
| 39,463,117 |
<p>Not sure if I understood correctly, but broadcasting seems to work for me in this case:</p>
<pre><code>>>> a = numpy.array([[1,2], [3,4]])
>>> c = numpy.zeros((4, 2, 2))
>>> c[0] = a
>>> c[1:] = a+1
>>> c
array([[[ 1., 2.],
[ 3., 4.]],
[[ 2., 3.],
[ 4., 5.]],
[[ 2., 3.],
[ 4., 5.]],
[[ 2., 3.],
[ 4., 5.]]])
</code></pre>
| 0 |
2016-09-13T06:01:12Z
|
[
"python",
"numpy"
] |
FormSet saves the data of only one form
| 39,463,265 |
<p>When I submit the forms (having filled in more than one form on the page), my FormSet saves the data of only one form; the rest of the data just disappears...</p>
<p>My template:</p>
<pre><code> <div id="data">
<form method="post" action="/lookup/" id="test_data">{% csrf_token %}
{{ formset.management_form }}
{% for form in formset %}
<section id="test_data_row">
{{ form }}
</section>
{% endfor %}
</form>
</div>
<div class="bt">
<button type="submit" class="btn btn-default" id="submit_form" form="test_data">Submit</button>
<button type="button" class="btn btn-default" id="add" value="Add row"/>Add row</button>
</div>
</code></pre>
<p>My forms.py:</p>
<pre><code>class LookupForm(forms.ModelForm):
class Meta:
model = look
exclude = ()
LookupFormSet = formset_factory(LookupForm, can_delete=True)
</code></pre>
<p>My model</p>
<pre><code>class look(models.Model):
class Meta():
db_table = 'lookup'
id_device = models.CharField(max_length=75)
phone_number = models.CharField(max_length=100)
phone_number_country = models.CharField(max_length=1000)
text = models.CharField(max_length=1000, default=None)
</code></pre>
<p>my views.py:</p>
<pre><code>def manage_articles(request):
LookupFormSet = modelformset_factory(model=look, exclude=())
if request.method == "POST":
formset = LookupFormSet(
request.POST, request.FILES,
queryset=look.objects.none(),
)
if formset.is_valid():
for form in formset:
form.save()
return HttpResponseRedirect('/')
else:
formset = LookupFormSet(queryset=look.objects.none())
return render(request, 'req/lookup.html', {'formset': formset})
</code></pre>
<p>my JS (js for add new form):</p>
<pre><code>document.getElementById('add').onclick = duplicate;
var i = 0;
var original = document.getElementById('test_data');
function duplicate() {
var clone = original.cloneNode(true); // "deep" clone
clone.id = "test_data" + ++i; // there can only be one element with an ID
original.parentNode.appendChild(clone);
}
</code></pre>
| 0 |
2016-09-13T06:11:49Z
| 39,464,240 |
<p>You cannot save a formset as it contains multiple forms. So I would suggest you change your code to:</p>
<pre><code> if formset.is_valid():
for form in formset:
form.save()
return HttpResponseRedirect('/')
</code></pre>
<p>See the <a href="https://docs.djangoproject.com/en/1.10/topics/forms/formsets/#module-django.forms.formsets" rel="nofollow">docs</a>.</p>
| 0 |
2016-09-13T07:18:04Z
|
[
"python",
"django",
"django-forms"
] |
redis.exceptions.ConnectionError: Error 97 connecting to localhost:6379. Address family not supported by protocol
| 39,463,403 |
<p>Whenever I try to run my program, the following error is raised:</p>
<p>redis.exceptions.ConnectionError: Error 97 connecting to localhost:6379. Address family not supported by protocol.</p>
<p>Previously the program ran normally; now this error is raised.</p>
<pre><code>Traceback (most recent call last):
File "securit.py", line 26, in <module>
bank = red.get('bank')
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 880, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 578, in execute_command
connection.send_command(*args)
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 442, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 97 connecting to localhost:6379. Address family not supported by protocol.
</code></pre>
| 0 |
2016-09-13T06:21:51Z
| 39,464,427 |
<p>Finally I found the answer to the question above. Step by step, this is what I did:</p>
<pre><code># Setup
# Before installing Redis, there are a couple of prerequisites to download
# to make the installation as easy as possible.
# Start off by updating all of the apt-get packages:
sudo apt-get update

# Once that finishes, download a compiler with build-essential,
# which will help us install Redis from source:
sudo apt-get install build-essential

# Finally, download tcl:
sudo apt-get install tcl8.5

# Installing Redis
# With the prerequisites and dependencies in place, install Redis from source.
# Download the latest stable release tarball from redis.io:
wget http://download.redis.io/releases/redis-stable.tar.gz

# Untar it and switch into that directory:
tar xzf redis-stable.tar.gz
cd redis-stable

# Build, run the recommended tests, and install system-wide:
make
make test
sudo make install

# Redis comes with a built-in script that sets it up to run as a background daemon.
# To access the script, move into the utils directory:
cd utils

# From there, run the Ubuntu/Debian install script:
sudo ./install_server.sh

# As the script runs, you can choose the default options by pressing enter.
# Once it completes, redis-server will be running in the background.
</code></pre>
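<p>For reference: errno 97, <em>Address family not supported by protocol</em>, typically means the client tried to connect over an address family (often IPv6) that the host does not support; pointing the client at <code>127.0.0.1</code> instead of <code>localhost</code> is a common workaround. A quick, Redis-independent way to inspect what <code>localhost</code> resolves to:</p>

```python
import socket

# collect the address families "localhost" resolves to; AF_INET6 entries
# on a host without working IPv6 can trigger errno 97 on connect
infos = socket.getaddrinfo('localhost', 6379, proto=socket.IPPROTO_TCP)
families = sorted({info[0].name for info in infos})
print(families)
```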
| 0 |
2016-09-13T07:28:53Z
|
[
"python",
"django",
"redis",
"socket.io"
] |
Converting a PNG image to 2D array
| 39,463,455 |
<p>I have a PNG file which, when I convert the image to a numpy array, has shape 184 x 184 x 4. The image is 184 by 184 and each pixel is in RGBA format, hence the 3D array.</p>
<p>This is a B&W image and the pixels are either [255, 255, 255, 255] or [0, 0, 0, 255].</p>
<p>I want to convert this to a 184 x 184 2D array where the pixels are now either 1 or 0, depending on whether they are [255, 255, 255, 255] or [0, 0, 0, 255].</p>
<p>Any ideas on how to do a straightforward conversion of this?</p>
| 0 |
2016-09-13T06:25:31Z
| 39,463,609 |
<p>There would be several ways to do the comparison to give us a <code>boolean array</code> and then, we just need to convert to <code>int array</code> with type conversion. So, for the comparison, one simple way would be to compare against <code>255</code> and check for <code>ALL</code> matches along the last axis. This would correspond to checking for <code>[255, 255, 255, 255]</code>. Thus, one approach would be like so -</p>
<pre><code>((arr == 255).all(-1)).astype(int)
</code></pre>
<p>Sample run -</p>
<pre><code>In [301]: arr
Out[301]:
array([[[255, 255, 255, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[255, 255, 255, 255],
[255, 255, 255, 255]]])
In [302]: ((arr == 255).all(-1)).astype(int)
Out[302]:
array([[1, 0, 0],
[0, 1, 1]])
</code></pre>
| 0 |
2016-09-13T06:35:59Z
|
[
"python",
"arrays",
"numpy"
] |
Converting a PNG image to 2D array
| 39,463,455 |
<p>I have a PNG file which, when I convert the image to a numpy array, has shape 184 x 184 x 4. The image is 184 by 184 and each pixel is in RGBA format, hence the 3D array.</p>
<p>This is a B&W image and the pixels are either [255, 255, 255, 255] or [0, 0, 0, 255].</p>
<p>I want to convert this to a 184 x 184 2D array where the pixels are now either 1 or 0, depending on whether they are [255, 255, 255, 255] or [0, 0, 0, 255].</p>
<p>Any ideas on how to do a straightforward conversion of this?</p>
| 0 |
2016-09-13T06:25:31Z
| 39,469,616 |
<p>If there are really only two values in the array as you say, simply scale and return one of the dimensions:</p>
<pre><code>(arr[:,:,0] / 255).astype(int)
</code></pre>
| 1 |
2016-09-13T12:03:08Z
|
[
"python",
"arrays",
"numpy"
] |
Error iterating through a Pandas series
| 39,463,692 |
<p>When I get the first and second elements of this series, it works OK, but from the third element onwards I get an error when I try to fetch it.</p>
<pre><code>type(X_test_raw)
Out[51]: pandas.core.series.Series
len(X_test_raw)
Out[52]: 1393
X_test_raw[0]
Out[45]: 'Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...'
X_test_raw[1]
Out[46]: 'Ok lar... Joking wif u oni...'
X_test_raw[2]
</code></pre>
<blockquote>
<p>KeyError: 2</p>
</blockquote>
| 3 |
2016-09-13T06:41:28Z
| 39,463,734 |
<p>There is no index with value <code>2</code>.</p>
<p>Sample:</p>
<pre><code>X_test_raw = pd.Series([4,8,9], index=[0,4,5])
print (X_test_raw)
0 4
4 8
5 9
dtype: int64
#print (X_test_raw[2])
#KeyError: 2
</code></pre>
<p>If need third value use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow"><code>iloc</code></a>:</p>
<pre><code>print (X_test_raw.iloc[2])
9
</code></pre>
<p>If need iterating only values:</p>
<pre><code>for x in X_test_raw:
print (x)
4
8
9
</code></pre>
<p>If need <code>indexes</code> and <code>values</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iteritems.html" rel="nofollow"><code>Series.iteritems</code></a>:</p>
<pre><code>for idx, x in X_test_raw.iteritems():
print (idx, x)
0 4
4 8
5 9
</code></pre>
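<p>A version note (not in the original answer): <code>Series.iteritems</code> was later deprecated and removed in pandas 2.0; <code>Series.items</code> does the same thing and works across versions:</p>

```python
import pandas as pd

s = pd.Series([4, 8, 9], index=[0, 4, 5])
# items() yields (index, value) pairs, like iteritems() used to
pairs = list(s.items())
assert pairs == [(0, 4), (4, 8), (5, 9)]
```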
| 3 |
2016-09-13T06:43:51Z
|
[
"python",
"pandas",
"for-loop",
"indexing",
"keyerror"
] |
Error iterating through a Pandas series
| 39,463,692 |
<p>When I get the first and second elements of this series, it works OK, but from the third element onwards I get an error when I try to fetch it.</p>
<pre><code>type(X_test_raw)
Out[51]: pandas.core.series.Series
len(X_test_raw)
Out[52]: 1393
X_test_raw[0]
Out[45]: 'Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...'
X_test_raw[1]
Out[46]: 'Ok lar... Joking wif u oni...'
X_test_raw[2]
</code></pre>
<blockquote>
<p>KeyError: 2</p>
</blockquote>
| 3 |
2016-09-13T06:41:28Z
| 39,463,764 |
<p>consider the series <code>X_test_raw</code></p>
<pre><code>X_test_raw = pd.Series(
['Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...',
'Ok lar... Joking wif u oni...',
'PLEASE DON\'T FAIL'
], [0, 1, 3])
</code></pre>
<p><code>X_test_raw</code> doesn't have an index of <code>2</code> which you are trying to reference with <code>X_test_raw[2]</code>.</p>
<p>Instead use <code>iloc</code></p>
<pre><code>X_test_raw.iloc[2]
"PLEASE DON'T FAIL"
</code></pre>
<p>You can iterate through the series with <code>iteritems</code></p>
<pre><code>for index_val, series_val in X_test_raw.iteritems():
print series_val
Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
Ok lar... Joking wif u oni...
PLEASE DON'T FAIL
</code></pre>
| 2 |
2016-09-13T06:45:50Z
|
[
"python",
"pandas",
"for-loop",
"indexing",
"keyerror"
] |
How to find duplicate names from table?
| 39,463,906 |
<p>I know we can use <code>GROUP BY</code> and <code>HAVING COUNT > 1</code>. But that works when the data is exactly duplicated; my data is a little different.</p>
<pre><code>Id Names
1 Rahul S
2 Rohit S
3 Rishu
4 Sinu
5 Rahul S
6 Rohit S
</code></pre>
<p>In the above table, ids 1 and 5 are the same, and 2 and 6 are also the same. But when I use group by, they count as different because of the spaces. So how can I write a query with some fuzzy logic that will return this kind of duplicate data?</p>
<p><strong>Update</strong></p>
<p>Can someone help me with a query which will remove spaces from a particular column and add an imaginary column on which we can group by having count > 1?</p>
<p><code>SELECT replace(ltrim(rtrim(name)),' ','') as no_space FROM table GROUP BY no_space HAVING count(*) > 1 ORDER BY no_space;</code></p>
| 0 |
2016-09-13T06:55:38Z
| 39,464,008 |
<p>Have you tried removing the spaces when selecting the data?</p>
<p>Doing it this way ought to cut the extra spaces in the string and make similar values compare equal: replace double spaces with a single space, then do a left and right trim on the data.</p>
<p>Something like this:</p>
<pre><code>-- first argument of REPLACE is two spaces, second is a single space
REPLACE(LTRIM(RTRIM(colName)), '  ', ' ')
</code></pre>
<p>Just remember to include the <code>HAVING COUNT > 1</code>.</p>
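<p>The idea can be checked end to end with an in-memory SQLite table (a sketch; the table name and the extra-space variants are invented for illustration):</p>

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [(1, 'Rahul S'), (2, 'Rohit S'), (3, 'Rishu'),
                 (4, 'Sinu'), (5, ' Rahul  S'), (6, 'Rohit S ')])
# normalize by trimming and removing all spaces, then group on the result
rows = con.execute("""
    SELECT replace(trim(name), ' ', '') AS no_space, count(*)
    FROM emp
    GROUP BY no_space
    HAVING count(*) > 1
""").fetchall()
print(rows)  # duplicates are found regardless of spacing
```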
| 0 |
2016-09-13T07:03:10Z
|
[
"python",
"psql",
"peewee"
] |
How to find duplicate names from table?
| 39,463,906 |
<p>I know we can use <code>GROUP BY</code> and <code>HAVING COUNT > 1</code>. But that works when the data is exactly duplicated; my data is a little different.</p>
<pre><code>Id Names
1 Rahul S
2 Rohit S
3 Rishu
4 Sinu
5 Rahul S
6 Rohit S
</code></pre>
<p>In the above table, ids 1 and 5 are the same, and 2 and 6 are also the same. But when I use group by, they count as different because of the spaces. So how can I write a query with some fuzzy logic that will return this kind of duplicate data?</p>
<p><strong>Update</strong></p>
<p>Can someone help me with a query which will remove spaces from a particular column and add an imaginary column on which we can group by having count > 1?</p>
<p><code>SELECT replace(ltrim(rtrim(name)),' ','') as no_space FROM table GROUP BY no_space HAVING count(*) > 1 ORDER BY no_space;</code></p>
| 0 |
2016-09-13T06:55:38Z
| 39,464,031 |
<p>You can use <code>trim</code>:</p>
<pre><code>SELECT trim(name), count(name) FROM tablename GROUP BY trim(name)
</code></pre>
| 0 |
2016-09-13T07:04:38Z
|
[
"python",
"psql",
"peewee"
] |
How to find duplicate names from table?
| 39,463,906 |
<p>I know we can use <code>GROUP BY</code> and <code>HAVING COUNT > 1</code>. But that works when the data is exactly duplicated; my data is a little different.</p>
<pre><code>Id Names
1 Rahul S
2 Rohit S
3 Rishu
4 Sinu
5 Rahul S
6 Rohit S
</code></pre>
<p>In the above table, ids 1 and 5 are the same, and 2 and 6 are also the same. But when I use group by, they count as different because of the spaces. So how can I write a query with some fuzzy logic that will return this kind of duplicate data?</p>
<p><strong>Update</strong></p>
<p>Can someone help me with a query which will remove spaces from a particular column and add an imaginary column on which we can group by having count > 1?</p>
<p><code>SELECT replace(ltrim(rtrim(name)),' ','') as no_space FROM table GROUP BY no_space HAVING count(*) > 1 ORDER BY no_space;</code></p>
| 0 |
2016-09-13T06:55:38Z
| 39,464,097 |
<p>You can use the REPLACE function to remove white spaces. </p>
<p>I would save the string without spaces into a new column and use the group by on that.
Like:</p>
<pre><code>Select <values you are looking for>, replace(Names, ' ', '') as d
from <Table name>
group by d
</code></pre>
| 1 |
2016-09-13T07:08:44Z
|
[
"python",
"psql",
"peewee"
] |
How to find duplicate names from table?
| 39,463,906 |
<p>I know we can use <code>GROUP BY</code> and <code>HAVING COUNT > 1</code>. But that works when the data is exactly duplicated; my data is a little different.</p>
<pre><code>Id Names
1 Rahul S
2 Rohit S
3 Rishu
4 Sinu
5 Rahul S
6 Rohit S
</code></pre>
<p>In the above table, ids 1 and 5 are the same, and 2 and 6 are also the same. But when I use group by, they count as different because of the spaces. So how can I write a query with some fuzzy logic that will return this kind of duplicate data?</p>
<p><strong>Update</strong></p>
<p>Can someone help me with a query which will remove spaces from a particular column and add an imaginary column on which we can group by having count > 1?</p>
<p><code>SELECT replace(ltrim(rtrim(name)),' ','') as no_space FROM table GROUP BY no_space HAVING count(*) > 1 ORDER BY no_space;</code></p>
| 0 |
2016-09-13T06:55:38Z
| 39,464,207 |
<p>I have created a table and inserted your data. Try this:</p>
<pre><code>DROP TABLE IF EXISTS Emp;
CREATE TABLE Emp (Id INT, Name VARCHAR(50));
INSERT INTO Emp (Id, Name) VALUES
(1, 'Rahul S'), (2, 'Rohit S'), (3, 'Rishu'), (4, 'Sinu'), (5, ' Rahul S'),(5, 'Rohit S');
select * from Emp Group by Name Having Name like '%Rohit%S%'
Id Name
1 5 Rohit S
2 2 Rohit S
</code></pre>
<p>Query with wildcard characters: </p>
<pre><code>select * from Emp Group by Name Having Name like '%_____%_%'
Id Name
1 5 Rahul S
2 1 Rahul S
3 5 Rohit S
4 2 Rohit S
</code></pre>
| 0 |
2016-09-13T07:16:12Z
|
[
"python",
"psql",
"peewee"
] |
Python: Accessing YAML values using "dot notation"
| 39,463,936 |
<p>I'm using a YAML configuration file. So this is the code to load my config in Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
</code></pre>
<p>This code actually creates a dictionary. Now the problem is that in order to access the values I need to use tons of brackets.</p>
<p>YAML:</p>
<pre><code>mysql:
user:
pass: secret
</code></pre>
<p>Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
print(config['mysql']['user']['pass']) # <--
</code></pre>
<p>I'd prefer something like that (dot notation):</p>
<pre><code>config('mysql.user.pass')
</code></pre>
<p>So, my idea is to utilize the PyStache render() interface.</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
import pystache
def get_config_value( yml_path, config ):
return pystache.render('{{' + yml_path + '}}', config)
get_config_value('mysql.user.pass', config)
</code></pre>
<p>Would that be a "good" solution? If not, what would be a better alternative?</p>
<p><strong>Additional question [Solved]</strong></p>
<p>I've decided to use Ilja Everilä's solution. But now I've got an additional question: How would you create a wrapper Config class around DotConf?</p>
<p>The following code doesn't work but I hope you get the idea what I'm trying to do:</p>
<pre><code>class Config( DotDict ):
def __init__( self ):
with open('./config.yml') as file:
DotDict.__init__(yaml.safe_load(file))
config = Config()
print(config.django.admin.user)
</code></pre>
<p>Error: </p>
<pre><code>AttributeError: 'super' object has no attribute '__getattr__'
</code></pre>
<p><strong>Solution</strong></p>
<p>You just need to pass <code>self</code> to the constructor of the super class.</p>
<pre><code>DotDict.__init__(self, yaml.safe_load(file))
</code></pre>
<p><strong>Even better soltution (Ilja Everilä)</strong></p>
<pre><code>super().__init__(yaml.safe_load(file))
</code></pre>
| 3 |
2016-09-13T06:57:11Z
| 39,464,072 |
<h1>The Simple</h1>
<p>You could use <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow"><code>reduce</code></a> to extract the value from the config:</p>
<pre><code>In [41]: config = {'asdf': {'asdf': {'qwer': 1}}}
In [42]: from functools import reduce
...:
...: def get_config_value(key, cfg):
...: return reduce(lambda c, k: c[k], key.split('.'), cfg)
...:
In [43]: get_config_value('asdf.asdf.qwer', config)
Out[43]: 1
</code></pre>
<p>This solution is easy to maintain and has very few new edge cases, if your YAML uses a very limited subset of the language.</p>
<h1>The Correct</h1>
<p>Use a proper YAML parser and tools, such as in <a href="http://stackoverflow.com/a/39485868/2681632">this answer</a>.</p>
<hr>
<h1>The Convoluted</h1>
<p>On a lighter note (not to be taken too seriously), you could create a wrapper that allows using attribute access:</p>
<pre><code>In [47]: class DotConfig:
...:
...: def __init__(self, cfg):
...: self._cfg = cfg
...: def __getattr__(self, k):
...: v = self._cfg[k]
...: if isinstance(v, dict):
...: return DotConfig(v)
...: return v
...:
In [48]: DotConfig(config).asdf.asdf.qwer
Out[48]: 1
</code></pre>
<p>Do note that this fails for keywords, such as "as", "pass", "if" and the like.</p>
<p>Finally, you could get really crazy (read: probably not a good idea) and customize <code>dict</code> to handle dotted string and tuple keys as a special case, with attribute access to items thrown in the mix (with its limitations):</p>
<pre><code>In [58]: class DotDict(dict):
...:
...: # update, __setitem__ etc. omitted, but required if
...: # one tries to set items using dot notation. Essentially
...: # this is a read-only view.
...:
...: def __getattr__(self, k):
...: try:
...: v = self[k]
...: except KeyError:
...: return super().__getattr__(k)
...: if isinstance(v, dict):
...: return DotDict(v)
...: return v
...:
...: def __getitem__(self, k):
...: if isinstance(k, str) and '.' in k:
...: k = k.split('.')
...: if isinstance(k, (list, tuple)):
...: return reduce(lambda d, kk: d[kk], k, self)
...: return super().__getitem__(k)
...:
...: def get(self, k, default=None):
...: if isinstance(k, str) and '.' in k:
...: try:
...: return self[k]
...: except KeyError:
...: return default
...: return super().get(k, default=default)
...:
In [59]: dotconf = DotDict(config)
In [60]: dotconf['asdf.asdf.qwer']
Out[60]: 1
In [61]: dotconf['asdf', 'asdf', 'qwer']
Out[61]: 1
In [62]: dotconf.asdf.asdf.qwer
Out[62]: 1
In [63]: dotconf.get('asdf.asdf.qwer')
Out[63]: 1
In [64]: dotconf.get('asdf.asdf.asdf')
In [65]: dotconf.get('asdf.asdf.asdf', 'Nope')
Out[65]: 'Nope'
</code></pre>
| 7 |
2016-09-13T07:06:45Z
|
[
"python",
"python-3.x",
"yaml"
] |
Python: Accessing YAML values using "dot notation"
| 39,463,936 |
<p>I'm using a YAML configuration file. So this is the code to load my config in Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
</code></pre>
<p>This code actually creates a dictionary. Now the problem is that in order to access the values I need to use tons of brackets.</p>
<p>YAML:</p>
<pre><code>mysql:
user:
pass: secret
</code></pre>
<p>Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
print(config['mysql']['user']['pass']) # <--
</code></pre>
<p>I'd prefer something like that (dot notation):</p>
<pre><code>config('mysql.user.pass')
</code></pre>
<p>So, my idea is to utilize the PyStache render() interface.</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
import pystache
def get_config_value( yml_path, config ):
return pystache.render('{{' + yml_path + '}}', config)
get_config_value('mysql.user.pass', config)
</code></pre>
<p>Would that be a "good" solution? If not, what would be a better alternative?</p>
<p><strong>Additional question [Solved]</strong></p>
<p>I've decided to use Ilja Everilä's solution. But now I've got an additional question: How would you create a wrapper Config class around DotConf?</p>
<p>The following code doesn't work but I hope you get the idea what I'm trying to do:</p>
<pre><code>class Config( DotDict ):
def __init__( self ):
with open('./config.yml') as file:
DotDict.__init__(yaml.safe_load(file))
config = Config()
print(config.django.admin.user)
</code></pre>
<p>Error: </p>
<pre><code>AttributeError: 'super' object has no attribute '__getattr__'
</code></pre>
<p><strong>Solution</strong></p>
<p>You just need to pass <code>self</code> to the constructor of the super class.</p>
<pre><code>DotDict.__init__(self, yaml.safe_load(file))
</code></pre>
<p><strong>Even better solution (Ilja Everilä)</strong></p>
<pre><code>super().__init__(yaml.safe_load(file))
</code></pre>
| 3 |
2016-09-13T06:57:11Z
| 39,464,793 |
<p>I had the same problem a while ago and built this getter:</p>
<pre><code> def get(self, key):
"""Tries to find the configuration value for a given key.
:param str key: Key in dot-notation (e.g. 'foo.lol').
:return: The configuration value. None if no value was found.
"""
try:
return self.__lookup(self.config, key)
except KeyError:
return None
def __lookup(self, dct, key):
"""Checks dct recursive to find the value for key.
        It is used by get() internally.
:param dict dct: The configuration dict.
:param str key: The key we are looking for.
:return: The configuration value.
:raise KeyError: If the given key is not in the configuration dict.
"""
if '.' in key:
key, node = key.split('.', 1)
return self.__lookup(dct[key], node)
else:
return dct[key]
</code></pre>
<p>The getter looks up a config value from <code>self.config</code> in a recursive manner (by using <code>__lookup</code>).
If you have trouble adjusting this for your case, feel free to ask for further help.</p>
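<p>For completeness, a minimal self-contained wrapper class using this getter might look like the following sketch (the class name and sample data are illustrative, not from the original answer):</p>

```python
# Hypothetical minimal wrapper around the recursive getter above.
class Config:
    def __init__(self, config):
        self.config = config

    def get(self, key):
        """Return the value for a dot-notation key, or None if missing."""
        try:
            return self.__lookup(self.config, key)
        except KeyError:
            return None

    def __lookup(self, dct, key):
        # Recurse on the part after the first dot until a plain key remains.
        if '.' in key:
            key, node = key.split('.', 1)
            return self.__lookup(dct[key], node)
        return dct[key]


cfg = Config({'mysql': {'user': {'pass': 'secret'}}})
print(cfg.get('mysql.user.pass'))  # secret
print(cfg.get('mysql.user.name'))  # None
```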
| 1 |
2016-09-13T07:51:12Z
|
[
"python",
"python-3.x",
"yaml"
] |
Python: Accessing YAML values using "dot notation"
| 39,463,936 |
<p>I'm using a YAML configuration file. So this is the code to load my config in Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
</code></pre>
<p>This code actually creates a dictionary. Now the problem is that in order to access the values I need to use tons of brackets.</p>
<p>YAML:</p>
<pre><code>mysql:
user:
pass: secret
</code></pre>
<p>Python:</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
print(config['mysql']['user']['pass']) # <--
</code></pre>
<p>I'd prefer something like that (dot notation):</p>
<pre><code>config('mysql.user.pass')
</code></pre>
<p>So, my idea is to utilize the PyStache render() interface.</p>
<pre><code>import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
import pystache
def get_config_value( yml_path, config ):
return pystache.render('{{' + yml_path + '}}', config)
get_config_value('mysql.user.pass', config)
</code></pre>
<p>Would that be a "good" solution? If not, what would be a better alternative?</p>
<p><strong>Additional question [Solved]</strong></p>
<p>I've decided to use Ilja Everilä's solution. But now I've got an additional question: How would you create a wrapper Config class around DotConf?</p>
<p>The following code doesn't work but I hope you get the idea what I'm trying to do:</p>
<pre><code>class Config( DotDict ):
def __init__( self ):
with open('./config.yml') as file:
DotDict.__init__(yaml.safe_load(file))
config = Config()
print(config.django.admin.user)
</code></pre>
<p>Error: </p>
<pre><code>AttributeError: 'super' object has no attribute '__getattr__'
</code></pre>
<p><strong>Solution</strong></p>
<p>You just need to pass <code>self</code> to the constructor of the super class.</p>
<pre><code>DotDict.__init__(self, yaml.safe_load(file))
</code></pre>
<p><strong>Even better solution (Ilja Everilä)</strong></p>
<pre><code>super().__init__(yaml.safe_load(file))
</code></pre>
| 3 |
2016-09-13T06:57:11Z
| 39,485,868 |
<p>On the one hand your example takes the right approach by using <code>get_config_value('mysql.user.pass', config)</code> instead of solving the dotted access with attributes. I am not sure whether you realised that on purpose, because you cannot make the more intuitive form work:</p>
<pre><code>print(config.mysql.user.pass)
</code></pre>
<p>This fails even when overloading <code>__getattr__</code>, as <code>pass</code> is a Python language element.</p>
<p>However, your example describes only a very restricted subset of YAML files, as it doesn't involve any sequence collections, nor any complex keys for mappings.</p>
<p>If you want to cover more than the tiny subset you can e.g. extend the powerful round-trip capable objects of <code>ruamel.yaml</code>:¹</p>
<pre><code>def mapping_string_access(self, s, delimiter=None, key_delim=None):
def p(v):
try:
v = int(v)
except:
pass
return v
# possible extend for primitives like float, datetime, booleans, etc.
if delimiter is None:
delimiter = '.'
if key_delim is None:
key_delim = ','
try:
key, rest = s.split(delimiter, 1)
except ValueError:
key, rest = s, None
if key_delim in key:
key = tuple((p(key) for key in key.split(key_delim)))
else:
key = p(key)
if rest is None:
return self[key]
return self[key].string_access(rest, delimiter, key_delim)
ruamel.yaml.comments.CommentedMap.string_access = mapping_string_access
def sequence_string_access(self, s, delimiter=None, key_delim=None):
if delimiter is None:
delimiter = '.'
try:
key, rest = s.split(delimiter, 1)
except ValueError:
key, rest = s, None
key = int(key)
if rest is None:
return self[key]
return self[key].string_access(rest, delimiter, key_delim)
ruamel.yaml.comments.CommentedSeq.string_access = sequence_string_access
</code></pre>
<p>Once that is set up you are can run the following:</p>
<pre><code>yaml_str = """\
mysql:
user:
pass: secret
list: [a: 1, b: 2, c: 3]
[2016, 9, 14]: some date
42: some answer
"""
config = ruamel.yaml.round_trip_load(yaml_str)
def get_config_value(path, data, **kw):
return data.string_access(path, **kw)
print(get_config_value('mysql.user.pass', config))
print(get_config_value('mysql:user:pass', config, delimiter=":"))
print(get_config_value('mysql.list.1.b', config))
print(get_config_value('mysql.2016,9,14', config))
print(config.string_access('mysql.42'))
</code></pre>
<p>giving:</p>
<pre><code>secret
secret
2
some date
some answer
</code></pre>
<p>showing that with a bit more forethought and very little extra work you can have flexible dotted access to a vast range of YAML files, and not just those consisting of recursive mappings with string scalars as keys.</p>
<ol>
<li>As shown you can directly call <code>config.string_access('mysql.user.pass')</code> instead of defining and using <code>get_config_value()</code></li>
<li>this works with strings and integers as mapping keys, but can be easily extended to support other key types (boolean, date, date-time).</li>
</ol>
<hr>
<p>¹ <sub>This was done using <a href="https://pypi.python.org/pypi/ruamel.yaml" rel="nofollow">ruamel.yaml</a> a YAML 1.2 parser, of which I am the author.</sub></p>
| 1 |
2016-09-14T08:32:12Z
|
[
"python",
"python-3.x",
"yaml"
] |
How to print the recursive stack in Python
| 39,464,057 |
<p>How do I print or show the recursive stack in Python when I'm running a recursive function?</p>
| 0 |
2016-09-13T07:06:01Z
| 39,464,491 |
<p>It's not clear what you want but as far as I get your question, you can print the stack of function callers in a recursive manner like the following, using the python <a href="https://docs.python.org/2/library/inspect.html" rel="nofollow">inspect module</a>.</p>
<pre><code>import inspect, sys
max_recursion_depth = 10
def rec_func(recursion_index):
if recursion_index == 0:
return
rec_func(recursion_index-1)
current_frame = inspect.currentframe()
calframe = inspect.getouterframes(current_frame, 2)
frame_object = calframe[0][0]
print("Recursion-%d: %s" % (max_recursion_depth - recursion_index, frame_object.f_locals))
print("Passed parameters: %s" % (sys._getframe(1).f_locals) )
rec_func(max_recursion_depth)
</code></pre>
<p>You can also use <code>sys._getframe()</code> to access the current frame, and by using its <code>f_locals</code> property you can access the parameters passed to that frame; in a recursive call chain, you can observe this parameter decreasing.</p>
<p>Almost all information you want about the stack is also accessible from the <em>frame</em> objects that you can get as shown above.</p>
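<p>As a lighter-weight alternative to <code>inspect</code>, the standard library's <code>traceback</code> module can also capture the current call stack from inside the recursion. A small sketch (the <code>countdown</code> function is just an example, not from the question):</p>

```python
import traceback

def countdown(n):
    if n == 0:
        # extract_stack returns one FrameSummary per frame on the stack.
        stack = traceback.extract_stack()
        # Keep only the recursive frames of this function.
        rec = [f for f in stack if f.name == 'countdown']
        for depth, frame in enumerate(rec):
            print('depth %d: line %d in %s' % (depth, frame.lineno, frame.name))
        return len(rec)
    return countdown(n - 1)

print(countdown(3))  # 4 -- one frame per recursive call
```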
| 2 |
2016-09-13T07:31:59Z
|
[
"python",
"recursion"
] |
How do I use Python Django variables in my JS code?
| 39,464,103 |
<p>I'm trying to make a vertical side navigation bar populated with categories (dynamic, fetched from django models) where each category has sub-categories (also dynamic and fetched from models). When I refer to classes in my JS code, the code works i.e., upon clicking of a category, the sub-menu consisting of its respective subcategory opens up. But the problem is, when I click on any of the categories, all of them expand to reveal all their sub-categories. I just want one, which is clicked on, to open and stay open.</p>
<p>My HTML:</p>
<pre><code>{% block body %}<div class="col-md-12 col-xs-12 body-container leftsidenavigator" style="margin-top:15px;">
<div class="col-md-12 col-xs-12 leftsidenavigator-inner" style="padding:0px;">
<h2><center>Categories</center></h2>
<ul class="catindexlist catlistcat nav-collapse89">
{% for category in catindexlisted %}
<li class="catindexlistitem category-name" style="font-weight:600;padding-right:20px;"><a href="" id="category-name{{category.name}}">{{category.name}}</a></li>
<ul style="padding:0px;" class="nav-collapse88">
{% for forum in category|forumindexlistbycat %}
<li class="catlistforum forum-name" id="{{category.name}}{{forum.name}}" style="padding-right:10px;"><a href="{{ forum.get_absolute_url }}">{{forum.name}}</a></li>
{% endfor %}</ul>
{% endfor %}
</ul></div></div>{% endblock %}
</code></pre>
<p>My non-ideal but working Javascript:</p>
<pre><code>$(function() {
$(".catlistforum").hide();
$(".category-name a").click(function(e) {
e.preventDefault();
$(".catlistforum").slideToggle();
if(!($(this).parent('li').parent('div').siblings('div').children('ul').children('div').is(":visible"))){
$(this).parent('li').parent('div').siblings('div').children('ul').children('div').is(":visible").slideToggle();
}});
})
</code></pre>
<p>This is what appears in the console of chrome dev tools:
<a href="http://i.stack.imgur.com/CIeWQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/CIeWQ.png" alt="enter image description here"></a></p>
<p>I need something like this to work:</p>
<pre><code>$(function() {
$(".catlistforum").hide();
$(".category-name a").click(function(e) {
e.preventDefault();
$(".catlistforum").slideToggle();
if(!($(this).parent('li').siblings('div').children('ul').children('div').is(":visible"))){
$(this).parent('li').siblings('div').children('ul').children('div').is(":visible").slideToggle();
}});
})
</code></pre>
<p>This code, when I click on any category, reloads the page and no sub-menu is shown. Please keep in mind that I've used for loops, so the final HTML will have several categories with each having several sub-categories. </p>
<p>How can I refer to IDs dynamically generated by Django? Is this a placement issue of where I put my script in the page? Through the chain of <code>% extends %</code> and <code>% includes %</code>, this script comes with other scripts at the bottom of the page, after my footer is rendered. What am I missing here?</p>
| 0 |
2016-09-13T07:09:10Z
| 39,466,469 |
<p>I'm not sure I have understood perfectly, as 3 things bother me in your code:</p>
<ol>
<li>In both your lists, you use a 'div' between a 'ul' and a 'li'. I'm not sure if it is correct / can cause issues. <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ul" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ul</a></li>
<li>In your first Jquery code, you use the function child(), did you mean children() ?</li>
<li>In your first Jquery code, I don't understand the 'if' statement. I may have read it too quickly but it seems to me that it says : "If no unrelated sub-menu is visible, then slideToggle all unrelated and visible sub-menus". Not sure if it's my mistake or not.</li>
</ol>
<p>But if I understand what you want to do (display the sub-menu of the clicked category, and hide all others), based on your html structure (regardless of my issue nº1, then) I think this would work (once again : based on your html structure) :</p>
<pre><code> $(function() {
$(".catlistforum").hide();
$(".category-name a").click(function(e) {
e.preventDefault();
$(".nav-collapse88").hide();
$(this).parent('li').parent('div').next('.nav-collapse88').show();
});
})
</code></pre>
<p>Personally I would change the HTML structure to remove the 'divs' mentioned above. You could maybe use a 'li' for each category, containing two 'divs': one with class 'catindexlistitem' for the displayed item, and one - hidden with class 'nav-collapse88' - for the submenu. Here again in Javascript you could use the same steps as above: hide all '.nav-collapse' and show only the one next to the clicked item. </p>
<p>Hope it helps, good luck</p>
| 0 |
2016-09-13T09:22:49Z
|
[
"javascript",
"jquery",
"python",
"html",
"django"
] |
Make a 'ref' field to auto increment when I press CONFIRM SALE button
| 39,464,458 |
<p><a href="http://i.stack.imgur.com/pjKv4.jpg" rel="nofollow">confrm_sale</a></p>
<p>I have the problem how to make a 'ref' field to be auto increment every time I press the <em>Confirm Sale Button</em>.</p>
<p>In my first case I made this field to be auto increment every time I create a new customer with the following code:</p>
<p><strong>Python code:</strong></p>
<pre><code>@api.model
def create(self, vals):
vals['ref'] = self.env['ir.sequence'].get('res.debt')
return super(Partner, self).create(vals)
</code></pre>
<p><strong>XML code:</strong></p>
<pre><code><record id="your_sequence_id" model="ir.sequence">
<field name="name">Reference</field>
<field name="padding">3</field>
<field name="code">res.debt</field>
</record>
</code></pre>
<p>Now my problem is that I want this field also to be auto incremented, but not when I create the customer, only when I press the <em>Confirm Sale</em> button.</p>
<p>For example, I create a new sale order and I create a new customer for that order and I click save. When I now press <em>Confirm Sale</em>, that action needs to trigger auto increment for the internal reference field (<code>ref</code>).</p>
<p>If I make an order for an existing customer then it should take the old sequence for that customer.</p>
<p>I have this code for the <code>action_confirm()</code>:</p>
<pre><code>@api.multi
def action_confirm(self):
for order in self:
order.state = 'sale'
if self.env.context.get('send_email'):
self.force_quotation_send()
order.order_line._action_procurement_create()
if not order.project_id:
for line in order.order_line:
if line.product_id.invoice_policy == 'cost':
order._create_analytic_account()
break
if self.env['ir.values'].get_default('sale.config.settings', 'auto_done_setting'):
self.action_done()
return True
</code></pre>
<p>Probably my first code for the auto increment I should add in this second code but I don't know how to do that.</p>
<p>Can anyone help? Thank you in advance.</p>
| 2 |
2016-09-13T07:30:39Z
| 39,483,967 |
<p>If I get your requirement right, I guess you should do something like the following inside your loop on orders:</p>
<pre><code>order.partner_id.ref = self.env['ir.sequence'].get('res.debt')
</code></pre>
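<p>In context, that line would go inside the <code>action_confirm</code> loop from the question. A non-runnable sketch (Odoo framework code; the guard is an addition I'd suggest, so that existing customers keep their old sequence):</p>

```python
@api.multi
def action_confirm(self):
    for order in self:
        order.state = 'sale'
        # Draw a new sequence number only for customers that have no
        # internal reference yet; existing customers keep their old one.
        if not order.partner_id.ref:
            order.partner_id.ref = self.env['ir.sequence'].get('res.debt')
        # ... rest of the original action_confirm body ...
    return True
```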
| 1 |
2016-09-14T06:40:58Z
|
[
"python",
"auto-increment",
"confirm",
"odoo-9"
] |
Transform values of a dictionary to new dictionary
| 39,464,626 |
<p>I have a dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc1': {
'manajemen': 1,
'transfer': 1,
'individu':1,
'tahu':1,
'transaksi': 1,
'logistik': 1
},
'doc2': {
'manajemen': 1,
'logistik': 1,
'transaksi': 1
}
}
</code></pre>
<p>I want use Python to transform it to the following in a new dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1,
'transfer':0
},
'doc1': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc2': {
'individu': 0,
'manajemen': 1,
'tahu': 0,
'logistik': 1,
'transaksi': 1,
'transfer':0
}
}
</code></pre>
| 0 |
2016-09-13T07:40:31Z
| 39,465,312 |
<p>You can take a look at the following code:</p>
<pre><code>>>> all = list(set([j for i in list(d.keys()) for j in list(d[i].keys())]))
>>> all
['transfer', 'tahu', 'transaksi', 'individu', 'manajemen', 'logistik']
>>> for k in all:
for j in list(d.keys()):
if not k in d[j].keys():
d[j][k]=0
>>> d
{'doc0': {'individu': 1,
'logistik': 1,
'manajemen': 1,
'tahu': 1,
'transaksi': 1,
'transfer': 0},
'doc1': {'individu': 1,
'logistik': 1,
'manajemen': 1,
'tahu': 1,
'transaksi': 1,
'transfer': 1},
'doc2': {'individu': 0,
'logistik': 1,
'manajemen': 1,
'tahu': 0,
'transaksi': 1,
'transfer': 0}}
</code></pre>
<p>I take all keys in the nested dictionary, and then create the values that don't exist (hence the zeroes in the last output).</p>
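<p>The same normalisation can be written more compactly with set and dict comprehensions — a sketch using the data from the question:</p>

```python
docs = {
    'doc0': {'individu': 1, 'manajemen': 1, 'tahu': 1,
             'logistik': 1, 'transaksi': 1},
    'doc1': {'manajemen': 1, 'transfer': 1, 'individu': 1,
             'tahu': 1, 'transaksi': 1, 'logistik': 1},
    'doc2': {'manajemen': 1, 'logistik': 1, 'transaksi': 1},
}

# Union of all keys appearing in any nested dict.
all_keys = {k for d in docs.values() for k in d}

# New dict in which every doc has every key; missing ones default to 0.
normalized = {name: {k: d.get(k, 0) for k in all_keys}
              for name, d in docs.items()}

print(normalized['doc2']['individu'])  # 0
print(normalized['doc1']['transfer'])  # 1
```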
| 0 |
2016-09-13T08:19:27Z
|
[
"python",
"dictionary"
] |
Transform values of a dictionary to new dictionary
| 39,464,626 |
<p>I have a dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc1': {
'manajemen': 1,
'transfer': 1,
'individu':1,
'tahu':1,
'transaksi': 1,
'logistik': 1
},
'doc2': {
'manajemen': 1,
'logistik': 1,
'transaksi': 1
}
}
</code></pre>
<p>I want use Python to transform it to the following in a new dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1,
'transfer':0
},
'doc1': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc2': {
'individu': 0,
'manajemen': 1,
'tahu': 0,
'logistik': 1,
'transaksi': 1,
'transfer':0
}
}
</code></pre>
| 0 |
2016-09-13T07:40:31Z
| 39,465,700 |
<p>You will face a problem with key ordering in a dictionary. The keys (or the key-value pairs) in a dictionary are ordered arbitrarily. The order is not fixed and can change between steps.</p>
<p>To mitigate this problem you can use the <code>OrderedDict</code> from the module <code>collections</code>.</p>
<p>If I understand your problem, you want to bring in line your dictionaries and make them have the same keys. If there is a key in <code>doc1</code>, but <code>doc2</code> is missing it, so <code>doc2</code> should be updated with this key and the value 0. If that's right, then you rather want to duplicate the keys, not the values.</p>
<p>This is my try:</p>
<pre><code>from collections import OrderedDict
# your initial data
my_dict = {
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc1': {
'manajemen': 1,
'transfer': 1,
'individu':1,
'tahu':1,
'transaksi': 1,
'logistik': 1
},
'doc2': {
'manajemen': 1,
'logistik': 1,
'transaksi': 1
}
}
# get all keys in a help list
list_of_keys = []
for key in my_dict:
for dockey in my_dict[key]:
if dockey not in list_of_keys:
list_of_keys.append(dockey)
# sort the list
list_of_keys.sort()
# the list looks like this
# ['individu', 'logistik', 'manajemen', 'tahu', 'transaksi', 'transfer']
# now we can iterate through our data
# and align the keys and values in an OrderedDict
my_ordered_dict = OrderedDict()
# get the doc keys and update the ordered dictionary
for key in my_dict:
my_ordered_dict.update([(key, OrderedDict())])
for i in list_of_keys:
for key in my_dict:
        if i in my_dict[key]:
my_ordered_dict[key].update([(i, 1)])
else:
my_ordered_dict[key].update([(i, 0)])
</code></pre>
<p>Now you have all keys in every dictionary, they are sorted and the order is preserved. The initial data stays unchanged.</p>
<p>I didn't use any advanced features like list comprehensions, ternary operators or similar. The approach is a little bit naive and simple because you are a beginner, and I think it would be easier for you to understand every step.</p>
<p>I hope it can help you.</p>
| 0 |
2016-09-13T08:42:28Z
|
[
"python",
"dictionary"
] |
Transform values of a dictionary to new dictionary
| 39,464,626 |
<p>I have a dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc1': {
'manajemen': 1,
'transfer': 1,
'individu':1,
'tahu':1,
'transaksi': 1,
'logistik': 1
},
'doc2': {
'manajemen': 1,
'logistik': 1,
'transaksi': 1
}
}
</code></pre>
<p>I want use Python to transform it to the following in a new dictionary:</p>
<pre><code>{
'doc0': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1,
'transfer':0
},
'doc1': {
'individu': 1,
'manajemen': 1,
'tahu': 1,
'logistik': 1,
'transaksi': 1
},
'doc2': {
'individu': 0,
'manajemen': 1,
'tahu': 0,
'logistik': 1,
'transaksi': 1,
'transfer':0
}
}
</code></pre>
| 0 |
2016-09-13T07:40:31Z
| 39,465,716 |
<p>I'm not entirely clear what you are trying to accomplish, but to cause all the key/value pairs in dict2 to be added to dict1 or updated in dict1, you do <code>dict1.update(dict2)</code>. Example:</p>
<pre><code>>>> dict1={"apples":14, "bananas":22}
>>> dict2={"apples":4, "pears":7}
>>> dict1.update(dict2)
>>> dict1
{'apples': 4, 'bananas': 22, 'pears': 7}
</code></pre>
<p>Alternatively if you want to copy keys and values from dict2 only if the key is not already in dict1, then</p>
<pre><code>>>> dict1={"apples":14, "bananas":22}
>>> dict2={"apples":4, "pears":7}
>>> for k in set(dict2.keys()) - set(dict1.keys()):
...     dict1[k]=dict2[k]
...
>>> dict1
{'apples': 14, 'bananas': 22, 'pears': 7}
</code></pre>
| 0 |
2016-09-13T08:43:23Z
|
[
"python",
"dictionary"
] |
How to find values from one dataframe in another using pandas?
| 39,464,636 |
<p>I have two dataframes:</p>
<pre><code>df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],'B': ['B7', 'B4', 'B0', 'B3'] })
df2 = pd.DataFrame({'A': ['A4', 'A3', 'A7', 'A8'],'B': ['B0', 'B1', 'B2', 'B3']})
</code></pre>
<p>and i need to get all the common values from the column <code>B</code>, so here it would be <code>B0</code> and <code>B3</code>.</p>
<p>Using <code>df1.B.isin(df2.B)</code> gives me <code>False False True True</code>, but not a list of values. </p>
| 3 |
2016-09-13T07:40:55Z
| 39,464,656 |
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>print (df1[df1.B.isin(df2.B)])
A B
2 A2 B0
3 A3 B3
print (df1.ix[df1.B.isin(df2.B), 'B'])
2 B0
3 B3
Name: B, dtype: object
print (df1.ix[df1.B.isin(df2.B), 'B'].tolist())
['B0', 'B3']
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>print (pd.merge(df1,df2, on='B'))
A_x B A_y
0 A2 B0 A4
1 A3 B3 A8
print (pd.merge(df1,df2, on='B')['B'])
0 B0
1 B3
Name: B, dtype: object
</code></pre>
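<p>If only the list of common values is needed, <code>numpy.intersect1d</code> is another option — it returns the sorted, unique common values directly:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B7', 'B4', 'B0', 'B3']})
df2 = pd.DataFrame({'A': ['A4', 'A3', 'A7', 'A8'],
                    'B': ['B0', 'B1', 'B2', 'B3']})

# Sorted unique values present in both columns.
common = np.intersect1d(df1.B, df2.B)
print(common.tolist())  # ['B0', 'B3']
```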
| 4 |
2016-09-13T07:42:12Z
|
[
"python",
"pandas",
"indexing",
"merge",
"condition"
] |
How to get Bundle ID of MAC application?
| 39,464,668 |
<p>I want to use Python and <code>atomac</code> module to trigger an application in MAC OS like following scripts:</p>
<pre><code>atomac.launchAppByBundleID()
app_win = atomac.getAppRefByBundleId(app_bundle_ID)
</code></pre>
<p>But I don't know how to get the Bundle ID (<code>app_bundle_ID</code>) of the application.</p>
| 2 |
2016-09-13T07:42:44Z
| 39,464,824 |
<p>I use two methods to get the bundle ID:</p>
<pre><code>osascript -e 'id of app "SomeApp"'
</code></pre>
<p>and</p>
<pre><code>mdls -name kMDItemCFBundleIdentifier -r SomeApp.app
</code></pre>
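<p>Since the goal here is to feed the ID into <code>atomac</code> from Python, either command can be wrapped with <code>subprocess</code>. A small helper sketch (macOS only; the function name is illustrative):</p>

```python
import subprocess

def bundle_id(app_name):
    """Return the bundle identifier of an installed app, e.g. 'Safari'."""
    script = 'id of app "%s"' % app_name
    out = subprocess.check_output(['osascript', '-e', script])
    return out.decode().strip()

# Example (on macOS):
# bundle_id('Safari')  ->  'com.apple.Safari'
```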
| 0 |
2016-09-13T07:52:30Z
|
[
"python",
"osx",
"ui-automation"
] |
How to get Bundle ID of MAC application?
| 39,464,668 |
<p>I want to use Python and <code>atomac</code> module to trigger an application in MAC OS like following scripts:</p>
<pre><code>atomac.launchAppByBundleID()
app_win = atomac.getAppRefByBundleId(app_bundle_ID)
</code></pre>
<p>But I don't know how to get the Bundle ID (<code>app_bundle_ID</code>) of the application.</p>
| 2 |
2016-09-13T07:42:44Z
| 39,938,914 |
<p>If you just need it to launch the app, look in the app's <code>Info.plist</code> file. The file is in the app bundle, in the <code>Contents</code> directory. This works for a lot of apps.</p>
| 0 |
2016-10-09T00:57:15Z
|
[
"python",
"osx",
"ui-automation"
] |
Don't make stats public when uploading to YouTube via API
| 39,464,705 |
<p>I'm uploading to YouTube using Python via <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow">an officially provided script</a>.</p>
<p>The default settings for my channel (defined on youtube.com/upload_defaults when logged in) have <strong>Make video statistics on the watch page publicly visible</strong> set to disabled.</p>
<p>The response contains the following</p>
<pre><code>'status': {
'publicStatsViewable': True,
</code></pre>
<p>and the <strong>edit</strong> page (<strong>advanced</strong> tab) of the video reveals that it is in fact <a href="http://i.stack.imgur.com/NyWTP.png" rel="nofollow">turned on</a>.</p>
<p>How to disable publicly visible statistics for a video when uploading via API with Python?</p>
<p>I assume something in this part of the upload script needs to be changed but it is unclear to me which:</p>
<pre><code>body=dict(
snippet=dict(
title=options.title,
description=options.description,
tags=tags,
categoryId=options.category
),
status=dict(
privacyStatus=options.privacyStatus
)
)
# Call the API's videos.insert method to create and upload the video.
insert_request = youtube.videos().insert(
part=",".join(body.keys()),
body=body,
media_body=MediaFileUpload(options.file, chunksize=-1, resumable=True)
)
resumable_upload(insert_request)
</code></pre>
| 3 |
2016-09-13T07:44:57Z
| 39,464,890 |
<p>Just type:</p>
<pre><code> status=dict(
privacyStatus="private"
)
</code></pre>
| 0 |
2016-09-13T07:55:54Z
|
[
"python",
"youtube-api",
"youtube-api-v3"
] |
Don't make stats public when uploading to YouTube via API
| 39,464,705 |
<p>I'm uploading to YouTube using Python via <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow">an officially provided script</a>.</p>
<p>The default settings for my channel (defined on youtube.com/upload_defaults when logged in) have <strong>Make video statistics on the watch page publicly visible</strong> set to disabled.</p>
<p>The response contains the following</p>
<pre><code>'status': {
'publicStatsViewable': True,
</code></pre>
<p>and the <strong>edit</strong> page (<strong>advanced</strong> tab) of the video reveals that it is in fact <a href="http://i.stack.imgur.com/NyWTP.png" rel="nofollow">turned on</a>.</p>
<p>How to disable publicly visible statistics for a video when uploading via API with Python?</p>
<p>I assume something in this part of the upload script needs to be changed but it is unclear to me which:</p>
<pre><code>body=dict(
snippet=dict(
title=options.title,
description=options.description,
tags=tags,
categoryId=options.category
),
status=dict(
privacyStatus=options.privacyStatus
)
)
# Call the API's videos.insert method to create and upload the video.
insert_request = youtube.videos().insert(
part=",".join(body.keys()),
body=body,
media_body=MediaFileUpload(options.file, chunksize=-1, resumable=True)
)
resumable_upload(insert_request)
</code></pre>
| 3 |
2016-09-13T07:44:57Z
| 39,484,635 |
<p>The solution was to modify the body to include <code>status.publicStatsViewable</code>, set to <code>False</code>. Just add the following line to the <code>body</code> construction block:</p>
<pre><code> publicStatsViewable=False,
</code></pre>
<p>so that it looks like:</p>
<pre><code>body=dict(
snippet=dict(
title=options.title,
description=options.description,
tags=tags,
categoryId=options.category
),
status=dict(
publicStatsViewable=False,
privacyStatus=options.privacyStatus
)
)
</code></pre>
<p>Note that I only tested with the value being a boolean <code>False</code>. I did not test it using a string.</p>
| 0 |
2016-09-14T07:20:11Z
|
[
"python",
"youtube-api",
"youtube-api-v3"
] |
Should a connection to Redis cluster be made on each Flask request?
| 39,464,748 |
<p>I have a Flask API, it connects to a Redis cluster for caching purposes. Should I be creating and tearing down a Redis connection on each flask api call? Or, should I try and maintain a connection across requests?</p>
<p>My argument against the second option is that I should really try to keep the API as stateless as possible, and I also don't know if keeping something persistent across requests might cause race conditions between threads or other side effects.</p>
<p>However, if I want to persist a connection, should it be saved on the session or on the application context?</p>
| 0 |
2016-09-13T07:47:58Z
| 39,465,104 |
<p>This is about performance and scale. To get those 2 buzzwords buzzing you'll in fact need persistent connections.</p>
<p>Any eventual race conditions will be no different than with a reconnect on every request, so that shouldn't be a problem. Any race conditions will depend on how you're using Redis, but if it's just caching there's not much room for error.</p>
<p>I understand the desired statelessness of an API from the client side's POV, but I'm not so sure what you mean about the server side.</p>
<p>I'd suggest you put the connections in the application context, not the sessions (those could become too numerous); the app context gives you the optimal 1 connection per process, created immediately at startup. Scaling this way becomes easy-peasy: you'll never have to worry about hitting the max connection count on the Redis box (and the less multiplexing the better).</p>
| 2 |
2016-09-13T08:07:44Z
|
[
"python",
"flask",
"redis"
] |
Should a connection to Redis cluster be made on each Flask request?
| 39,464,748 |
<p>I have a Flask API, it connects to a Redis cluster for caching purposes. Should I be creating and tearing down a Redis connection on each flask api call? Or, should I try and maintain a connection across requests?</p>
<p>My argument against the second option is that I should really try to keep the API as stateless as possible, and I also don't know whether keeping something persistent across requests might cause thread race conditions or other side effects.</p>
<p>However, if I want to persist a connection, should it be saved on the session or on the application context?</p>
| 0 |
2016-09-13T07:47:58Z
| 39,465,980 |
<p>It's a good idea from the performance standpoint to keep connections to a database open between requests. The reason is that opening and closing connections is not free and takes some time, which may become a problem when you have too many requests. Another issue is that a database can only handle up to a certain number of connections, and if you open more, database performance will degrade, so you need to control how many connections are opened at the same time.</p>
<p>To solve both of these issues you may use a connection pool. A connection pool contains a number of opened database connections and provides access to them. When a database operation is to be performed, a connection should be taken from the pool. When the operation is completed, the connection should be returned to the pool. If a connection is requested while all connections are taken, the caller will have to wait until some connections are returned to the pool. Since no new connections are opened in this process (they are all opened in advance) this ensures that the database will not be overloaded with too many parallel connections.</p>
<p>If the connection pool is used correctly, a single connection will be used by only one thread at any moment.</p>
<p>Despite the fact that a connection pool has state (it must track which connections are currently in use), your API will be stateless. This is because from the API perspective "stateless" means: it does not have state/side-effects visible to an API user. Your server can perform a number of operations that change its internal state, like writing to log files or writing to a cache, but since this does not influence what data is returned as a reply to API calls, it does not make the API "stateful".</p>
<p>You can see some examples of using Redis connection pool <a href="http://www.programcreek.com/python/example/22730/redis.ConnectionPool" rel="nofollow">here</a>.</p>
<p>Regarding where it should be stored I would use application context since it fits better to its <a href="http://flask.pocoo.org/docs/0.11/appcontext/#context-usage" rel="nofollow">purpose</a>.</p>
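The take/return discipline can be sketched without any Redis specifics. This is a toy illustration of the mechanism only, not the `redis` package's actual pool class; a real setup would pass a factory like `lambda: redis.StrictRedis(...)` instead of `object`.

```python
import queue

class ConnectionPool:
    """Toy pool illustrating the take/return discipline described above."""
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):          # open all connections up front
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks when every connection is in use, so the backend never
        # sees more than `size` parallel connections.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# `object` stands in for a real connection object (e.g. a redis.Redis client).
pool = ConnectionPool(factory=object, size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()   # reuses the connection that was just returned
print(c3 is c1)       # True
```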
| 0 |
2016-09-13T08:57:02Z
|
[
"python",
"flask",
"redis"
] |
How to make my chatbot learn permanently?
| 39,464,762 |
<p>I am using <code>pyAIML v1.0</code> to create an offline chatbot in Python.</p>
<p>This type of response is pretty easy with AIML:</p>
<blockquote>
<p>Human: Hi <br>
Bot: Hi what is your name? <br>
Human: My name is Dev <br>
Bot: Nice to meet you Dev <br>
Human: What is my name? <br>
Bot: Your name is Dev <br></p>
</blockquote>
<p>Hence the bot can learn my name, but it has no persistent storage, which is why I have to tell it my name again when I restart the bot.</p>
<p>So my question is: <strong>Is there anyway to store the data learned during chat permanently?</strong></p>
<h3>Edit:</h3>
<p>Here's my code to main python script</p>
<pre><code>import aiml
import sys

kern = aiml.Kernel()

# It builds cache Ultimates!
brainLoaded = False
forceReload = False
while not brainLoaded:
    if forceReload or (len(sys.argv) >= 2 and sys.argv[1] == "reload"):
        kern.bootstrap(learnFiles="Database.xml", commands="load aiml b")
        brainLoaded = True
        # Now that we've loaded the brain, save it to speed things up for
        # next time.
        kern.saveBrain("Ultima_Cache.brn")
    else:
        # Attempt to load the brain file. If it fails, fall back on the
        # reload method.
        try:
            # It is our cache file.
            kern.bootstrap(brainFile="Ultima_Cache.brn")
            brainLoaded = True
        except:
            forceReload = True

# Enter the main input/output loop.
print "\nUltima v0.1"
while(True):
    print kern.respond(raw_input("> "))
</code></pre>
| -4 |
2016-09-13T07:49:01Z
| 39,465,573 |
<p>I think you should call <code>saveBrain</code> (<a href="https://github.com/creatorrr/pyAIML/blob/master/Kernel.py#L162" rel="nofollow">https://github.com/creatorrr/pyAIML/blob/master/Kernel.py#L162</a>) at regular intervals to store your progress, e.g.:</p>
<pre><code># Enter the main input/output loop.
print "\nUltima v0.1"
while(True):
    print kern.respond(raw_input("> "))
    kern.saveBrain("Ultima_Cache.brn")
</code></pre>
| 0 |
2016-09-13T08:35:34Z
|
[
"python",
"aiml"
] |
Permutations in python (Hour,Minutes,Seconds)
| 39,464,959 |
<p>Please, help.<br>
I need to get a list of all permutations, where the first number is from 0 to 23, the second number from 0 to 59 and the third number from 0 to 59.<br><br>
For example:<br>
01,01,01<br>
...<br>
10,10,10<br>
...<br>
23,59,59<br>
...etc</p>
| -5 |
2016-09-13T08:00:40Z
| 39,467,127 |
<p>Something like this if I have understood your question:</p>
<pre><code>>>> [(h,m,s) for h in range(24) for m in range(60) for s in range(60)]
</code></pre>
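For reference, the same triple loop can also be written with itertools.product, which reads a little better if more fields are ever added:

```python
from itertools import product

# Cartesian product of hours x minutes x seconds, same result as the
# triple comprehension above.
times = list(product(range(24), range(60), range(60)))

print(len(times))           # 86400 = 24 * 60 * 60
print(times[0], times[-1])  # (0, 0, 0) (23, 59, 59)
```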
| 0 |
2016-09-13T09:54:39Z
|
[
"python",
"permutation"
] |
How can I raise an error if input is NaN?
| 39,465,094 |
<p>I am trying to implement an algorithm that converts a decimal number to its binary equivalent.</p>
<p>This is what I have. </p>
<pre><code>def binary_converter(n):
    if n < 0: raise ValueError, "Invalid input"
    if n > 255: raise ValueError, "Invalid input"
    if n > 1:
        binary_converter(n//2)
    print(n % 2, end='')

# Take decimal number from user
dec = int(input("Enter an integer: "))
binary_converter(dec)
</code></pre>
<p>My question is about these lines:</p>
<pre><code> if n < 0:raise ValueError, "Invalid input"
if n >255:raise ValueError, "Invalid input"
</code></pre>
<p>I am attempting to validate that the input contains only the digits 0-9.</p>
<p>How can I achieve that validation?</p>
| -1 |
2016-09-13T08:07:13Z
| 39,465,628 |
<p>A couple of methods. One of which is using the strings <code>.isdigit()</code>. The general problem with <code>.isdigit()</code> is that it doesn't work on negatives, however with your code, it really isn't a problem. Try replacing your input with the following custom function:</p>
<pre><code>def positive_int_input():
    s = input("Enter an integer: ")
    if not s.isdigit():
        raise ValueError("Input is not a positive integer")
    else:
        return int(s)
</code></pre>
<p>A few examples:</p>
<pre><code>positive_int_input()
# Enter an integer: 42
#-> 42
positive_int_input()
# Enter an integer: 3.14
# ValueError: Input is not a positive integer
positive_int_input()
# Enter an integer: Hello World!
# ValueError: Input is not a positive integer
</code></pre>
<p>That being said:</p>
<pre><code>int("Some String")
# ValueError: invalid literal for int() with base 10: 'Some String'
</code></pre>
<p>An Error is returned regardless, so I don't know exactly what you think is wrong with your code besides fixing up the indenting.</p>
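As a cross-check of the conversion itself (separate from the input validation above), Python's built-in base-2 formatting produces the same digit string that the recursive printer in the question emits. The helper name here is mine, for illustration only:

```python
def to_binary(n):
    # Same range check as the question's converter: only 0-255 accepted.
    if not 0 <= n <= 255:
        raise ValueError("Invalid input")
    return format(n, 'b')    # built-in base-2 conversion

print(to_binary(10))     # 1010
print(to_binary(255))    # 11111111
```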
| 0 |
2016-09-13T08:38:38Z
|
[
"python"
] |
MATLAB ind2sub and Numpy unravel_index inconsistency
| 39,465,157 |
<p>Based on the <a href="http://stackoverflow.com/a/33072609">following answer</a>:</p>
<p>Using Octave, I get:</p>
<pre><code>>> [x, y, z] = ind2sub([27, 5, 58], 3766)
x = 13
y = 5
z = 28
</code></pre>
<p>Using Numpy, I get:</p>
<pre><code>>>> import numpy as np
>>> np.unravel_index(3765, (27, 5, 58))
(12, 4, 53)
</code></pre>
<p>Why, in Numpy, the <code>z</code> component is 58, when it should be 27 according to octave?</p>
| 1 |
2016-09-13T08:10:52Z
| 39,465,469 |
<p>MATLAB follows column-major indexing: for a shape <code>(x,y,z)</code>, the first subscript <code>x</code> varies fastest as the linear index increases, then <code>y</code>, then <code>z</code>. NumPy uses row-major indexing, so it's the other way around - <code>z</code> varies fastest, then <code>y</code>, then <code>x</code>. So, to replicate the same behavior in NumPy, you need to flip the grid shape for use with <code>np.unravel_index</code> and finally flip the output indices, like so -</p>
<pre><code>np.unravel_index(3765, (58, 5, 27))[::-1]
</code></pre>
<p>Sample run -</p>
<pre><code>In [18]: np.unravel_index(3765, (58, 5, 27))[::-1]
Out[18]: (12, 4, 27)
</code></pre>
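To see the index juggling end to end, here is a small round-trip check using the numbers from the question (0-based, so one less than Octave's answer):

```python
import numpy as np

# NumPy is row-major, so feed it the reversed shape, then reverse the result.
idx = np.unravel_index(3765, (58, 5, 27))[::-1]
print(idx)    # (12, 4, 27), i.e. Octave's 1-based (13, 5, 28) minus one

# Round trip: packing the un-flipped indices back recovers the linear index.
flat = int(np.ravel_multi_index(idx[::-1], (58, 5, 27)))
print(flat)   # 3765
```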
| 1 |
2016-09-13T08:29:34Z
|
[
"python",
"matlab",
"numpy"
] |
AttributeError - module 'django.http.request' has no attribute 'META'
| 39,465,214 |
<p>I got this error, but I've done exactly the same:</p>
<blockquote>
<p>AttributeError at /courses/
module 'django.http.request' has no attribute 'META'</p>
</blockquote>
<p>The error occurs in:</p>
<pre><code>from django.shortcuts import render
from django.http import request
from django.http import HttpResponse
from .models import Course

# Create your views here.
def course_list(response):
    courses = Course.objects.all()
    return render(request, 'courses/course_list.html', {'courses': courses})
    # output = ', '.join([str(course) for course in courses])
    # return HttpResponse(output)
</code></pre>
<p>But the server shows no issues at all.</p>
<pre><code>Performing system checks...
System check identified no issues (0 silenced).
September 13, 2016 - 13:51:18
Django version 1.10.1, using settings 'learning_site.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
</code></pre>
| 0 |
2016-09-13T08:13:49Z
| 39,465,250 |
<p>Your function parameter is called <code>response</code>, but then you use <code>request</code>, which is a module you import. Either rename the parameter to <code>request</code>, or use <code>response</code> inside the function:</p>
<pre><code>def course_list(request):
    courses = Course.objects.all()
    return render(request, 'courses/course_list.html', {'courses': courses})

# or, keeping the original parameter name:
def course_list(response):
    courses = Course.objects.all()
    return render(response, 'courses/course_list.html', {'courses': courses})
</code></pre>
| 2 |
2016-09-13T08:15:45Z
|
[
"python",
"django"
] |
Encoding with 'idna' codec failed in RethinkDB
| 39,465,259 |
<p>I have a <code>flask</code> app that runs and connects to a remote <code>rethinkdb</code> database on <a href="https://www.compose.io" rel="nofollow">compose.io</a>. The app is also deployed to <a href="https://www.pythonanywhere.com" rel="nofollow">pythonanywhere.com</a>, but this deployment keeps throwing the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/.virtualenvs/venv/lib/python3.5/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
...
rethinkdb.errors.ReqlDriverError: Could not connect to rethinkdb://[user]:[password]@aws-us-east-1-portal.1.dblayer.com:23232. Error: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
</code></pre>
<p>The connection code looks exactly like this:</p>
<pre><code>conn = r.connect(host='aws-us-east-1-portal.1.dblayer.com',
                 port=23232,
                 auth_key='[auth_key]',
                 ssl={'ca_certs': './cacert'})
</code></pre>
<p>I'm not sure how to proceed from here.</p>
<p>Running Python 3.5.</p>
| 1 |
2016-09-13T08:16:19Z
| 39,469,868 |
<p>The idna codec is attempting to convert your rethinkdb URL into an ascii-compatible equivalent string.</p>
<p>This worked for me:</p>
<pre><code>"rethinkdb://user:password@aws-us-east-1-portal.1.dblayer.com:23232".encode("idna")
</code></pre>
<p>So my guess is that some character/sequence of characters in your username or password is causing the issue. Try the connection with a (possibly bogus) very simple password and see if you get the same issue.</p>
<p>Alternatively, you could do the encode in a Python shell with the connection string and gradually simplify it until you identify the problematic piece(s).</p>
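A quick way to see which part trips the codec (the failing hostnames below are made up purely for illustration):

```python
# A well-formed hostname-like string encodes fine:
ok = "aws-us-east-1-portal.1.dblayer.com".encode("idna")
print(ok)

# An empty label ("..") or a label longer than 63 characters reproduces
# the error from the traceback:
for bad in ("host..name", "a" * 64 + ".example.com"):
    try:
        bad.encode("idna")
    except UnicodeError as exc:
        print(type(exc).__name__, "-", exc)
```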
| 2 |
2016-09-13T12:15:56Z
|
[
"python",
"flask",
"rethinkdb",
"pythonanywhere",
"compose"
] |
python - django - using a flag on each model field
| 39,465,465 |
<p>I want to build a simple moderation system for my application. I have a class in my application models like this:</p>
<pre><code># models.py
class TableName(models.Model):
    is_qualified = False
    title = models.CharField(max_length=300, blank=False)
    description = models.TextField(max_length=500, default="DEFAULT VALUE")
    video = models.FileField(upload_to='somepath')
    picture_thumbnail = models.ImageField(upload_to='somepath')
</code></pre>
<p>I have 3 questions:</p>
<ol>
<li>How can I add <code>is_qualified</code> to every field in my model and set it to <code>False</code> by default?</li>
<li>How can I write a view method that checks whether the admin has ticked an object's field (for example title or description) and uses that checkbox to change the field's <code>is_qualified</code> value to <code>True</code>?</li>
<li>How can I add a checkbox for each object in the admin area that uses that view method?</li>
</ol>
<p>Thank you very much.</p>
| 0 |
2016-09-13T08:29:14Z
| 39,465,507 |
<p>You need to make <code>is_qualified</code> an actual field - a BooleanField would be appropriate - and have it default to False.</p>
<pre><code>is_qualified = models.BooleanField(default=False)
</code></pre>
| 1 |
2016-09-13T08:31:54Z
|
[
"python",
"django"
] |
python - django - using a flag on each model field
| 39,465,465 |
<p>I want to build a simple moderation system for my application. I have a class in my application models like this:</p>
<pre><code># models.py
class TableName(models.Model):
    is_qualified = False
    title = models.CharField(max_length=300, blank=False)
    description = models.TextField(max_length=500, default="DEFAULT VALUE")
    video = models.FileField(upload_to='somepath')
    picture_thumbnail = models.ImageField(upload_to='somepath')
</code></pre>
<p>I have 3 questions:</p>
<ol>
<li>How can I add <code>is_qualified</code> to every field in my model and set it to <code>False</code> by default?</li>
<li>How can I write a view method that checks whether the admin has ticked an object's field (for example title or description) and uses that checkbox to change the field's <code>is_qualified</code> value to <code>True</code>?</li>
<li>How can I add a checkbox for each object in the admin area that uses that view method?</li>
</ol>
<p>Thank you very much.</p>
| 0 |
2016-09-13T08:29:14Z
| 39,466,060 |
<p>Hmm, adding is_qualified for each field would be a bit too much.</p>
<p>If you are using PostgreSQL I would consider using <a href="https://github.com/djangonauts/django-hstore" rel="nofollow">django-hstore</a>, where you can dynamically add key-value fields. </p>
<p>Using this package, you can use each field name as a key and True/False as the value.</p>
<p>Then when trying to validate whether your object is "qualified" you just do something like this:</p>
<p><code>is_valid = all([value for key, value in your_hstore_field.items()])</code></p>
<hr>
<p>EDIT</p>
<pre><code>class TableName(models.Model):
    is_qualified = models.BooleanField(default=False)
    title = models.CharField(max_length=300, blank=False)
    description = models.TextField(max_length=500, default="DEFAULT VALUE")
    video = models.FileField(upload_to='somepath')
    picture_thumbnail = models.ImageField(upload_to='somepath')
    data = hstore.DictionaryField()
</code></pre>
<p>Then you can have some custom function like this:</p>
<pre><code>def update_qualified(obj_instance):
    if all(value for key, value in obj_instance.data.items()):
        obj_instance.is_qualified = True
    else:
        obj_instance.is_qualified = False
    obj_instance.save()
</code></pre>
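The qualification check itself is just `all()` over the per-field flags; stripped of the Django/hstore machinery it behaves like this (helper name is mine):

```python
def qualified(flags):
    # flags: mapping of field name -> moderation flag, as in the hstore field
    return all(flags.values())

print(qualified({"title": True, "description": True}))    # True
print(qualified({"title": True, "description": False}))   # False
print(qualified({}))    # True: an object with no flags is vacuously qualified
```

Note the empty-mapping case: if that edge matters, guard it explicitly before calling `all()`.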
| 1 |
2016-09-13T09:00:42Z
|
[
"python",
"django"
] |
Count the number of open browser tabs in Firefox and Chrome
| 39,465,747 |
<p>I want to make a function that counts the number of open tabs in the browser - Chrome or Firefox - using Java or Python. I know Firefox and Chrome tab counters exist, because there are <em>AddOns</em> and <em>Extensions</em> that achieve that, but I cannot export the value to another function like I want. </p>
<p>Does anyone know the API call?</p>
| -1 |
2016-09-13T08:44:46Z
| 39,468,746 |
<p>Unless Firefox exposes an API for other programs (not addons nor extensions) to control and query its internal state (a quick Google search suggests that it doesn't), I suspect you'll be out of luck. </p>
| 1 |
2016-09-13T11:19:47Z
|
[
"python",
"firefox"
] |
How to debug external .py functions run from Jupyter/IPython notebook
| 39,465,752 |
<p>My Jupyter/IPython notebook executes functions in an external .py.</p>
<p>I need to set breakpoints within these functions, inspect variables, single step, etc.</p>
<p>It just isn't practical to use a combination of <code>print</code> statements and throwing exceptions to early-exit a cell.</p>
<p>I need some kind of workflow.</p>
<p>Is it possible to hook up some third-party editor/IDE to view the .py and somehow connect it to the Python runtime Jupyter/IPython is using?</p>
<p>So that if I set a breakpoint in my external .py using my IDE and execute a cell in the notebook which encounters said breakpoint, I can continue to navigate manually from within the IDE.</p>
<p>EDIT: I've found <a href="https://pypi.python.org/pypi/ipdb" rel="nofollow">https://pypi.python.org/pypi/ipdb</a> <a href="https://www.quora.com/What-are-your-favorite-tricks-for-IPython-Notebook" rel="nofollow">https://www.quora.com/What-are-your-favorite-tricks-for-IPython-Notebook</a></p>
<p>EDIT <a href="https://www.youtube.com/watch?v=Jb2HHOahvcE" rel="nofollow">https://www.youtube.com/watch?v=Jb2HHOahvcE</a> <-- this video is getting close to what I'm after, I just can't quite see how to put it all together. That video demonstrates spyder which is an IDE with an IPython prompt... I wonder if maybe I can run my notebook through the prompt and debug it.</p>
<p>EDIT: It looks as though PyCharm does exactly what I'm after: <a href="https://www.jetbrains.com/help/pycharm/2016.1/tutorial-using-ipython-jupyter-notebook-with-pycharm.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/tutorial-using-ipython-jupyter-notebook-with-pycharm.html</a> </p>
<p>EDIT: I'm in the middle of trying to get PyCharm to behave. I will provide the details in an answer if I sort it out.</p>
| 7 |
2016-09-13T08:44:55Z
| 39,562,438 |
<blockquote>
<p>Is it possible to hook up some third-party editor/IDE to view the .py
and somehow connect it to the Python runtime Jupyter/IPython is using?</p>
</blockquote>
<p>Yes, <strong>it's possible</strong>.</p>
<h2>Using <a href="https://www.gnu.org/software/emacs/" rel="nofollow">Emacs</a> as your third party Python IDE.</h2>
<p>You can open a file side-by-side with the iPython runtime. It is possible to send definitions from the external file directly to the prompt using key shortcuts, and <strong>set breakpoints, single step and inspect variables</strong> from the iPython prompt using the standard python debugger pdb.</p>
<p>Auto-completion works fine. Simply writing <code>import XXX</code> makes Emacs aware of all functions within the <code>XXX</code> module. </p>
<p>Here is an example screenshot:
<a href="http://i.stack.imgur.com/17GSY.png" rel="nofollow"><img src="http://i.stack.imgur.com/17GSY.png" alt="Spacemacs"></a></p>
<p>And these are the steps I followed to get to the screenshot:</p>
<ol>
<li><strong>Open file</strong> playground.py on Emacs.</li>
<li>Press <code>C-c C-p</code> to <strong>fire up an iPython process and connect file to it</strong>. Note that the iPython process is fully functional. I can call all magic methods on it.</li>
<li>Press <code>M-m m d b</code> to <strong>include the breakpoint</strong> in the line I want (in form of a pdb import)</li>
<li>Press <code>C-M-x</code> to <strong>send the definition to the iPython prompt</strong>. Note in the screenshot that I am calling <code>unique_everseen</code> without ever having typed it into the prompt. The definition was sent directly from the file (that is why we have an empty cell <a href="http://i.stack.imgur.com/17GSY.png" rel="nofollow">2</a>, as a visual feedback that Emacs did something).</li>
<li><strong>Execute the function from iPython prompt so as to trigger the breakpoint.</strong> I get information telling me what line I am and enter the debugger automatically.</li>
<li>I <strong>inspect variables</strong> to see what is going on.</li>
<li>I press <code>n</code> to <strong>step to the next</strong> instruction.</li>
<li>I click on the filename (displayed in red on the prompt) and it takes my cursor directly to that line within the playground.py file. Note the black triangles next to the line numbers telling me where the stepper is in. Pretty nice.</li>
</ol>
<p>Now, <em>I haven't even scratched the surface</em> of what Emacs can offer.
If this interests you I'd suggest looking into it. It's really powerful and has a great, very helpful and active community.</p>
<p>To get all Python configuration running out of the box without having to configure anything yourself simply install <a href="https://github.com/syl20bnr/spacemacs" rel="nofollow">Spacemacs</a>, a popular distribution of Emacs (what anaconda is to Python, Spacemacs is to Emacs). The install instructions can be found under the link. </p>
<p><strong>Using Emacs to edit iPython notebooks.</strong></p>
<p>A package called EIN (Emacs iPython Notebooks) "provides a IPython Notebook client and integrated REPL in Emacs."
To get it working you open your iPython notebook server from any terminal with <code>ipython notebook</code>, and call the function <code>ein:notebooklist</code>.</p>
<p>More information can be found under the project's page <a href="https://tkf.github.io/emacs-ipython-notebook/" rel="nofollow">here</a>, and on the Spacemacs iPython-Notebook layer page <a href="https://github.com/syl20bnr/spacemacs/tree/master/layers/%2Blang/ipython-notebook" rel="nofollow">here</a>.</p>
<p>In particular, from EIN documentation:</p>
<ol>
<li>Copy/paste cells, even to/from different notebooks.</li>
<li>Console integration: You can easily connect to a kernel via the console application. This enables you to start debugging in the same kernel. It is even possible to connect to a console over ssh. </li>
<li>An IPython kernel can be "connected" to any Emacs buffer. This enables you to evaluate a buffer or buffer region using the same kernel as the notebook. Notebook goodies such as tooltip help, help browser and code completion are available in these buffers.</li>
<li>Jump to definition (go to the definition by hitting M-. over an object).</li>
</ol>
<p>All functionality described above for the simple iPython + file workflow is available for iPython notebooks (*.ipynb).</p>
<p>Screenshots <a href="https://github.com/millejoh/emacs-ipython-notebook/wiki/Screenshots" rel="nofollow">here.</a></p>
| 5 |
2016-09-18T20:35:39Z
|
[
"python",
"debugging",
"ipython",
"jupyter-notebook",
"spyder"
] |
How to debug external .py functions run from Jupyter/IPython notebook
| 39,465,752 |
<p>My Jupyter/IPython notebook executes functions in an external .py.</p>
<p>I need to set breakpoints within these functions, inspect variables, single step, etc.</p>
<p>It just isn't practical to use a combination of <code>print</code> statements and throwing exceptions to early-exit a cell.</p>
<p>I need some kind of workflow.</p>
<p>Is it possible to hook up some third-party editor/IDE to view the .py and somehow connect it to the Python runtime Jupyter/IPython is using?</p>
<p>So that if I set a breakpoint in my external .py using my IDE and execute a cell in the notebook which encounters said breakpoint, I can continue to navigate manually from within the IDE.</p>
<p>EDIT: I've found <a href="https://pypi.python.org/pypi/ipdb" rel="nofollow">https://pypi.python.org/pypi/ipdb</a> <a href="https://www.quora.com/What-are-your-favorite-tricks-for-IPython-Notebook" rel="nofollow">https://www.quora.com/What-are-your-favorite-tricks-for-IPython-Notebook</a></p>
<p>EDIT <a href="https://www.youtube.com/watch?v=Jb2HHOahvcE" rel="nofollow">https://www.youtube.com/watch?v=Jb2HHOahvcE</a> <-- this video is getting close to what I'm after, I just can't quite see how to put it all together. That video demonstrates spyder which is an IDE with an IPython prompt... I wonder if maybe I can run my notebook through the prompt and debug it.</p>
<p>EDIT: It looks as though PyCharm does exactly what I'm after: <a href="https://www.jetbrains.com/help/pycharm/2016.1/tutorial-using-ipython-jupyter-notebook-with-pycharm.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/tutorial-using-ipython-jupyter-notebook-with-pycharm.html</a> </p>
<p>EDIT: I'm in the middle of trying to get PyCharm to behave. I will provide the details in an answer if I sort it out.</p>
| 7 |
2016-09-13T08:44:55Z
| 39,594,357 |
<p>In Jupyter, you can use the Python debugger by adding the two lines below wherever you want a breakpoint:</p>
<pre><code>import pdb
pdb.set_trace()
</code></pre>
<p>Code execution will pause at this step and you will get a text box for debugging the Python code. I have attached a screenshot of this below.</p>
<p>You can refer to pdb <a href="https://docs.python.org/2/library/pdb.html#debugger-commands" rel="nofollow">documentation</a> for the operations you can perform apart from printing your variables</p>
<p><a href="http://i.stack.imgur.com/7MMia.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/7MMia.jpg" alt="PDB in Jupyter Notebook"></a></p>
| 2 |
2016-09-20T12:39:21Z
|
[
"python",
"debugging",
"ipython",
"jupyter-notebook",
"spyder"
] |
Machine Learning Cocktail Party Audio Application
| 39,465,776 |
<p>What's going on people,</p>
<p>I have a question with regards to this post:</p>
<p><a href="http://stackoverflow.com/questions/20414667/cocktail-party-algorithm-svd-implementation-in-one-line-of-code">cocktail party algorithm SVD implementation ... in one line of code?</a></p>
<p>I realize there are similar questions to this. However, please note that my particular question takes things in a new direction, inasmuch as I'm looking for a purely Python equivalent.</p>
<p>If someone is familiar with the concept, please advise as to whether this procedure is as elegant/simple when written in python 3.5 (as opposed to the original octave 'one line of code').</p>
<p>Also include any relevant Python libraries for this kind of application.</p>
<p>Of course, if it turns out that Python is not equipped for this kind of application at all, please explain why. </p>
<p>I'm just seeking some expert opinions about what it might look like and/or the feasibility in Python 3.5 only.</p>
<p>Thank you for reading,</p>
<p>I hope others who use Python will appreciate this question.</p>
| 0 |
2016-09-13T08:45:57Z
| 39,466,992 |
<p>How about using numpy?
Using <a href="http://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html" rel="nofollow">this</a> guide I translated the statement to</p>
<pre><code>from numpy import *
U, S, Vh = linalg.svd(dot((tile(sum(x*x,0),(x.shape[0],1))*x),x.T))
</code></pre>
<p>It runs but I do not have any data to actually test it.</p>
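For a smoke test without real audio, random placeholder data at least confirms the shapes. Namespaced imports are used instead of `from numpy import *`, and the 2×1000 shape is an arbitrary assumption standing in for two mixed recordings:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(2, 1000)    # placeholder: 2 "microphones", 1000 samples

# Same expression as above, just with explicit np. prefixes.
M = np.dot(np.tile(np.sum(x * x, 0), (x.shape[0], 1)) * x, x.T)
U, S, Vh = np.linalg.svd(M)

print(U.shape, S.shape, Vh.shape)    # (2, 2) (2,) (2, 2)
```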
| 1 |
2016-09-13T09:48:38Z
|
[
"python",
"octave"
] |
How to crop zero edges of a numpy array?
| 39,465,812 |
<p>I have this ugly, un-pythonic beast:</p>
<pre><code>def crop(dat, clp=True):
    '''Crops zero-edges of an array and (optionally) clips it to [0,1].

    Example:
    >>> crop( np.array(
    ...     [[0,0,0,0,0,0],
    ...      [0,0,0,0,0,0],
    ...      [0,1,0,2,9,0],
    ...      [0,0,0,0,0,0],
    ...      [0,7,4,1,0,0],
    ...      [0,0,0,0,0,0]]
    ...     ))
    array([[1, 0, 1, 1],
           [0, 0, 0, 0],
           [1, 1, 1, 0]])
    '''
    if clp: np.clip( dat, 0, 1, out=dat )
    while np.all( dat[0,:]==0 ):
        dat = dat[1:,:]
    while np.all( dat[:,0]==0 ):
        dat = dat[:,1:]
    while np.all( dat[-1,:]==0 ):
        dat = dat[:-1,:]
    while np.all( dat[:,-1]==0 ):
        dat = dat[:,:-1]
    return dat

    # Below gets rid of zero-lines/columns in the middle
    # so not usable.
    # dat = dat[~np.all(dat==0, axis=1)]
    # dat = dat[:, ~np.all(dat == 0, axis=0)]
</code></pre>
<p>How do I tame it, and make it beautiful?</p>
| 3 |
2016-09-13T08:47:53Z
| 39,466,129 |
<p>Try incorporating something like this:</p>
<pre><code># argwhere will give you the coordinates of every non-zero point
true_points = np.argwhere(dat)
# take the smallest points and use them as the top left of your crop
top_left = true_points.min(axis=0)
# take the largest points and use them as the bottom right of your crop
bottom_right = true_points.max(axis=0)
out = dat[top_left[0]:bottom_right[0]+1, # plus 1 because slice isn't
top_left[1]:bottom_right[1]+1] # inclusive
</code></pre>
<p>This can be extended to the general <code>n-d</code> case without much difficulty.</p>
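A sketch of that n-d generalization (the helper name is mine; clipping is left out, so non-zero values survive the crop):

```python
import numpy as np

def crop_nd(dat):
    """Crop all-zero borders along every axis."""
    pts = np.argwhere(dat)
    if pts.size == 0:           # all-zero input: nothing to crop
        return dat
    lo = pts.min(axis=0)
    hi = pts.max(axis=0) + 1    # +1 because slices are exclusive
    return dat[tuple(slice(a, b) for a, b in zip(lo, hi))]

a = np.zeros((6, 6), dtype=int)
a[2, [1, 3, 4]] = [1, 2, 9]
a[4, [1, 2, 3]] = [7, 4, 1]
print(crop_nd(a))
# [[1 0 2 9]
#  [0 0 0 0]
#  [7 4 1 0]]
```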
| 3 |
2016-09-13T09:04:36Z
|
[
"python",
"numpy",
"crop"
] |
How to crop zero edges of a numpy array?
| 39,465,812 |
<p>I have this ugly, un-pythonic beast:</p>
<pre><code>def crop(dat, clp=True):
    '''Crops zero-edges of an array and (optionally) clips it to [0,1].

    Example:
    >>> crop( np.array(
    ...     [[0,0,0,0,0,0],
    ...      [0,0,0,0,0,0],
    ...      [0,1,0,2,9,0],
    ...      [0,0,0,0,0,0],
    ...      [0,7,4,1,0,0],
    ...      [0,0,0,0,0,0]]
    ...     ))
    array([[1, 0, 1, 1],
           [0, 0, 0, 0],
           [1, 1, 1, 0]])
    '''
    if clp: np.clip( dat, 0, 1, out=dat )
    while np.all( dat[0,:]==0 ):
        dat = dat[1:,:]
    while np.all( dat[:,0]==0 ):
        dat = dat[:,1:]
    while np.all( dat[-1,:]==0 ):
        dat = dat[:-1,:]
    while np.all( dat[:,-1]==0 ):
        dat = dat[:,:-1]
    return dat

    # Below gets rid of zero-lines/columns in the middle
    # so not usable.
    # dat = dat[~np.all(dat==0, axis=1)]
    # dat = dat[:, ~np.all(dat == 0, axis=0)]
</code></pre>
<p>How do I tame it, and make it beautiful?</p>
| 3 |
2016-09-13T08:47:53Z
| 39,467,080 |
<p>This should work in any number of dimensions. I believe it is also quite efficient, because swapping axes and slicing create only views on the array, not copies or temporaries (this rules out functions such as <code>take()</code> or <code>compress()</code>, which one might be tempted to use). However it is not significantly 'nicer' than your own solution.</p>
<pre><code>def crop2(dat, clp=True):
    if clp: np.clip( dat, 0, 1, out=dat )
    for i in range(dat.ndim):
        dat = np.swapaxes(dat, 0, i)  # send i-th axis to front
        while np.all( dat[0]==0 ):
            dat = dat[1:]
        while np.all( dat[-1]==0 ):
            dat = dat[:-1]
        dat = np.swapaxes(dat, 0, i)  # send i-th axis to its original position
    return dat
</code></pre>
| 1 |
2016-09-13T09:52:38Z
|
[
"python",
"numpy",
"crop"
] |
How to crop zero edges of a numpy array?
| 39,465,812 |
<p>I have this ugly, un-pythonic beast:</p>
<pre><code>def crop(dat, clp=True):
    '''Crops zero-edges of an array and (optionally) clips it to [0,1].

    Example:
    >>> crop( np.array(
    ...     [[0,0,0,0,0,0],
    ...      [0,0,0,0,0,0],
    ...      [0,1,0,2,9,0],
    ...      [0,0,0,0,0,0],
    ...      [0,7,4,1,0,0],
    ...      [0,0,0,0,0,0]]
    ...     ))
    array([[1, 0, 1, 1],
           [0, 0, 0, 0],
           [1, 1, 1, 0]])
    '''
    if clp: np.clip( dat, 0, 1, out=dat )
    while np.all( dat[0,:]==0 ):
        dat = dat[1:,:]
    while np.all( dat[:,0]==0 ):
        dat = dat[:,1:]
    while np.all( dat[-1,:]==0 ):
        dat = dat[:-1,:]
    while np.all( dat[:,-1]==0 ):
        dat = dat[:,:-1]
    return dat

    # Below gets rid of zero-lines/columns in the middle
    # so not usable.
    # dat = dat[~np.all(dat==0, axis=1)]
    # dat = dat[:, ~np.all(dat == 0, axis=0)]
</code></pre>
<p>How do I tame it, and make it beautiful?</p>
| 3 |
2016-09-13T08:47:53Z
| 39,469,078 |
<p>Definitely not the prettiest approach but wanted to try something else.</p>
<pre><code>def _fill_gap(a):
    """
    a = 1D array of `True`s and `False`s.
    Fill the gap between first and last `True` with `True`s.
    Doesn't do a copy of `a` but in this case it isn't really needed.
    """
    a[slice(*a.nonzero()[0].take([0,-1]))] = True
    return a

def crop3(d, clip=True):
    dat = np.array(d)
    if clip: np.clip(dat, 0, 1, out=dat)
    dat = np.compress(_fill_gap(dat.any(axis=0)), dat, axis=1)
    dat = np.compress(_fill_gap(dat.any(axis=1)), dat, axis=0)
    return dat
</code></pre>
<p>But it works.</p>
<pre><code>In [639]: crop3(np.array(
     ...:     [[0,0,0,0,0,0],
     ...:      [0,0,0,0,0,0],
     ...:      [0,1,0,2,9,0],
     ...:      [0,0,0,0,0,0],
     ...:      [0,7,4,1,0,0],
     ...:      [0,0,0,0,0,0]]))
Out[639]:
array([[1, 0, 1, 1],
       [0, 0, 0, 0],
       [1, 1, 1, 0]])
</code></pre>
| 1 |
2016-09-13T11:37:59Z
|
[
"python",
"numpy",
"crop"
] |
Convex Optimization in Python
| 39,465,864 |
<p>I recently got interested in soccer statistics. Right now I want to implement the famous Dixon-Coles Model in Python 3.5 (<a href="http://www.math.ku.dk/~rolf/teaching/thesis/DixonColes.pdf" rel="nofollow">paper-link</a>).</p>
<p>The basic problem is, that from the model described in the paper a Likelihood function with numerous parameters results, which needs to be maximized.</p>
<p>For example: The likelihood function for one <em>Bundesliga</em> season would result in 37 parameters. Of course I do the minimization of the corresponding negative log-likelihood function. I know that this <code>log</code> function is strictly <em>convex</em> so the optimization should not be too difficult. I also included the analytic gradient, but as the number of parameters exceeds ~10 the optimization methods from the <em>SciPy-Package</em> fail (<code>scipy.optimize.minimize()</code>).</p>
<p><strong>My question:</strong>
Which other optimization techniques are out there and are mostly suited for optimization problems involving ~40 independent parameters?</p>
<p>Some hints to other methods would be great!</p>
| 0 |
2016-09-13T08:50:39Z
| 39,466,169 |
<p>You can make use of Metaheuristic algorithms which work both on convex and non-convex spaces. Probably the most famous one of them is <a href="https://en.wikipedia.org/wiki/Genetic_algorithm" rel="nofollow">Genetic algorithm</a>. It is also easy to implement and the concept is straightforward. The beautiful thing about Genetic algorithm is that you can adapt it to solve most of the optimization problems.</p>
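<p>To make the idea concrete, here is a minimal, hedged sketch of a genetic algorithm (elitist selection, one-point crossover, Gaussian mutation) minimizing a convex quadratic; every constant in it is an illustrative assumption, not a tuned recommendation:</p>

```python
import random

# Minimize f(x) = sum(x_i^2) over a handful of parameters with a tiny GA.
def fitness(ind):
    return sum(v * v for v in ind)  # lower is better

def evolve(pop_size=40, n_params=5, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-10, 10) for _ in range(n_params)] for _ in range(pop_size)]
    history = []  # best fitness per generation
    for _ in range(generations):
        pop.sort(key=fitness)                # elitist selection: keep the fitter half
        history.append(fitness(pop[0]))
        parents, children = pop[:pop_size // 2], []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)               # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_params)] += rng.gauss(0, 0.5)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness), history

best, history = evolve()
print("best fitness:", fitness(best))
```

<p>Because the fitter half of each generation is carried over unchanged, the best fitness never gets worse from one generation to the next.</p>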
| 0 |
2016-09-13T09:07:18Z
|
[
"python",
"python-3.x",
"optimization",
"scipy",
"convex"
] |
How to specify a firefox identity when calling bokeh.show
| 39,465,963 |
<p>I'm using bokeh to do some interactive data analysis. I'm using a separate firefox profile for this work than I do for other browsing, and I would like to be able to have bokeh open a tab with this other identity when I run the script. The general form is </p>
<pre><code>from bokeh.client import push_session
from bokeh.io import curdoc
from bokeh.plotting import show
[analysis setup]
session = push_session(curdoc())
session.show(*args, **kwargs)
</code></pre>
<p>At the moment, <code>args</code> and <code>kwargs</code> only have the grid layout information. Running this script opens a tab in the default firefox instance. I can then open it up with the firefox profile I want by running</p>
<pre><code>$ firefox -P --no-remote ipython --new-tab http://localhost:5006/?bokeh-session-id=xIjdv4HI8MR1xTkWf8iR5fauYKHvp3wDc3Zre5fv444o
</code></pre>
<p>from the command line. From there on out everything works fine, but I'd like to have bokeh open a tab with the new profile without the extra step. The documentation for session.show only tells me that I can specify a tab or a window, but nothing further.</p>
| 0 |
2016-09-13T08:55:57Z
| 39,475,218 |
<p>This was a bug, fixed in Bokeh <strong>0.12.3</strong>. You can now set the browser to use like this:</p>
<pre><code>from bokeh.client import push_session
from bokeh.io import curdoc
from bokeh.plotting import figure, show
# prepare some data
x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]
# create a new plot with a title and axis labels
p = figure(title='simple line example', x_axis_label='x', y_axis_label='y')
# add a line renderer with legend and line thickness
p.line(x, y, legend='Temp.', line_width=2)
# HERE you define the custom browser
# custom_firefox_bg = '/usr/bin/firefox -P ipython --new-tab %s &'
custom_firefox = '/usr/bin/firefox -P ipython --new-tab %s'
session = push_session(curdoc())
session.show(obj=p, browser=custom_firefox)
</code></pre>
<p><code>%s</code> will be replaced by the URL. If the command ends with <code>&</code>, then the browser will be opened in the background so it does not block your Python script.</p>
| 1 |
2016-09-13T16:50:21Z
|
[
"python",
"firefox",
"bokeh"
] |
What does x ^ 2 mean? in python
| 39,466,009 |
<p>I'm sure it's very basic and I should understand it, but I don't!</p>
<p>I'm given this to do:</p>
<pre><code>> a=int(input("Enter the value for the co-efficient of x^2. "))
> b=int(input("Enter the value for the co-efficient of x. "))
> c=int(input("Enter the value for the constant term. ")) s=b**2-4*a*c
> x1=(-b+(s**(1/2)))/2*a x2=(-b-(s**(1/2)))/2*a if a==0 and b==0:
> print("The equation has no roots.") elif a==0:
> print("The equation has only one root and it is", -c/b) elif s<0:
> print("The roots are not real!") else:
> print("The roots are", x1, "and", x2)
</code></pre>
<p>I get the right solution and everything right but I have no idea what the ^ is there for. The only reason I used it is because it was used in the other example during the lesson before the practice!</p>
| -6 |
2016-09-13T08:58:12Z
| 39,466,030 |
<p>It doesn't 'mean' anything, not to Python; it is just another character in a string literal:</p>
<pre><code>"Enter the value for the co-efficient of x^2. "
</code></pre>
<p>You could have written something else:</p>
<pre><code>"Enter the value for the co-efficient of x to the power 2. "
</code></pre>
<p>and nothing but the output shown when asking for <code>input()</code> would have changed.</p>
<p><code>^</code> is a common notation for exponents. In Python, <code>**</code> is used for exponents, and that is what the rest of the code uses.</p>
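<p>Outside of a string literal, <code>^</code> does have a meaning in Python: it is the bitwise XOR operator, which regularly trips up people expecting exponentiation:</p>

```python
# ** is exponentiation; ^ is bitwise XOR on integers.
print(2 ** 3)  # 8
print(2 ^ 3)   # 1, because 0b10 XOR 0b11 == 0b01
```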
| 3 |
2016-09-13T08:59:24Z
|
[
"python",
"python-3.x"
] |
StanfordCoreNLP openIE issue
| 39,466,086 |
<p>I am facing the same issue as
<a href="http://stackoverflow.com/questions/37375137/stanford-corenlp-openie-annotator">Stanford CoreNLP OpenIE annotator</a>.
I try:</p>
<pre><code>output = nlp.annotate(s, properties={"annotators": "tokenize,ssplit,pos,depparse,natlog,openie",
                                     "outputFormat": "json",
                                     "openie.triple.strict": "true",
                                     "openie.max_entailments_per_clause": "1",
                                     "openie.splitter.disable": "true"})
</code></pre>
<p>But I still get 4 clauses:</p>
<pre><code>(u'are pulled from', u'Twenty percent electric motors', u'assembly line')
(u'are pulled from', u'percent electric motors', u'assembly line')
(u'are', u'Twenty percent electric motors', u'pulled')
(u'are', u'percent electric motors', u'pulled')
</code></pre>
<p>Am I doing anything wrong? How do I get the precise triple
('are pulled from', 'Twenty percent electric motors', 'assembly line')?</p>
| -2 |
2016-09-13T09:02:02Z
| 39,520,850 |
<p>This is actually expected behavior. It was a design decision in the OpenIE system to produce all triples which are logically entailed by the original sentence, even if they are redundant. The idea being that these triples are usually used for something akin to IR-ish lookup, and in these cases it's convenient to not have to do fuzzy matching for whether any of the triples are "similar enough" to the query.</p>
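<p>If only the most specific triple is wanted, one option is to post-filter the output yourself. A hedged sketch, assuming the <code>(relation, subject, object)</code> tuples exactly as printed in the question (the length heuristic is my assumption, not part of the OpenIE API):</p>

```python
triples = [
    (u'are pulled from', u'Twenty percent electric motors', u'assembly line'),
    (u'are pulled from', u'percent electric motors', u'assembly line'),
    (u'are', u'Twenty percent electric motors', u'pulled'),
    (u'are', u'percent electric motors', u'pulled'),
]

def most_specific(triples):
    # heuristic: the triple whose parts are longest carries the most information
    return max(triples, key=lambda t: sum(len(part) for part in t))

print(most_specific(triples))
```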
| 0 |
2016-09-15T21:47:55Z
|
[
"python",
"stanford-nlp",
"stanford-nlp-server"
] |
Import Only Work Inside Python Function
| 39,466,228 |
<p><strong>Background Info:</strong> I'm developing a model with scikit-learn. I'm splitting the data into separate training and testing sets using the sklearn.cross_validation module, as shown below:</p>
<pre><code>def train_test_split(input_data):
from sklearn.cross_validation import train_test_split
### STEP 1: Separate y variable and remove from X
y = input_data['price']
X = input_data.copy()
X.drop('price', axis=1, inplace=True)
### STEP 2: Split into training & test sets
X_train, X_test, y_train, y_test =\
train_test_split(X, y, test_size=0.2, random_state=0)
return X_train, X_test, y_train, y_test
</code></pre>
<p><strong>My Question:</strong> When I try to import the sklearn.cross_validation module outside of my function, like so, I get the following error:</p>
<pre><code>from sklearn.cross_validation import train_test_split
def train_test_split(input_data):
### STEP 1: Separate y variable and remove from X
y = input_data['price']
X = input_data.copy()
X.drop('price', axis=1, inplace=True)
### STEP 2: Split into training & test sets
X_train, X_test, y_train, y_test =\
train_test_split(X, y, test_size=0.2, random_state=0)
return X_train, X_test, y_train, y_test
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>TypeError: train_test_split() got an unexpected keyword argument 'test_size'
</code></pre>
<p>Any idea why?</p>
| 1 |
2016-09-13T09:10:28Z
| 39,466,293 |
<p>You are importing the function <code>train_test_split</code> from <code>sklearn.cross_validation</code> and then overriding the name with your local function <code>train_test_split</code>.</p>
<p>Try:</p>
<pre><code>from sklearn.cross_validation import train_test_split as sk_train_test_split
def train_test_split(input_data):
### STEP 1: Separate y variable and remove from X
y = input_data['price']
X = input_data.copy()
X.drop('price', axis=1, inplace=True)
### STEP 2: Split into training & test sets
X_train, X_test, y_train, y_test =\
sk_train_test_split(X, y, test_size=0.2, random_state=0) # use the imported function instead of local one
return X_train, X_test, y_train, y_test
</code></pre>
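<p>The same failure mode can be reproduced with any imported name -- the later <code>def</code> simply rebinds it in the module namespace:</p>

```python
from math import sqrt

def sqrt(x):  # rebinds the name; math.sqrt is no longer reachable as `sqrt`
    return "shadowed"

print(sqrt(4))  # prints 'shadowed', not 2.0
```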
| 4 |
2016-09-13T09:13:01Z
|
[
"python",
"scikit-learn",
"python-import"
] |
How to make a file transfer program in python
| 39,466,458 |
<p>The title might not be relevant for my question because I don't actually want a wireless file transferring script; I need a file manager type.</p>
<p>I want something with which I can connect my phone to my PC (e.g. hotspot and wifi) and then I would like to show a text file browser (I have the code for that) by sending lists of all files and folders using <code>os.listdir()</code>; whenever the selected option is a file (<code>os.path.isdir() == False</code>), I would like to transfer the file and run it (like: picture, video, etc).</p>
<p>The file Browser code which I wrote runs on windows and also Android (after making a few changes) using <code>qpython</code>. My code is</p>
<pre><code>import os
def FileBrowser(cwd = os.getcwd()):
while True:
if cwd[-1:] != "\\":
cwd = cwd + "\\"
files = os.listdir(cwd)
count = 1
tmpp = ""
print("\n\n" + "_"*50 +"\n\n")
print(cwd + "\n")
for f in files:
if os.path.isdir(cwd + f) == True:
s1 = str(count) + ". " + f
tmps1 = 40 - (len(s1)+5)
t2 = int(tmps1/3)
s1 = s1 + " " * t2 + "-" * (tmps1 - t2)
print(s1 + "<dir>")
else:
print(str(count) + ". " + f + tmpp)
count = count + 1
s = raw_input("Enter the file/Directory: ")
if s == "...":
tmp1 = cwd.count("\\")
tmp2 = cwd.rfind("\\")
if tmp1 > 1:
cwd = cwd[0:tmp2]
tmp2 = cwd.rfind("\\")
cwd = cwd[0:tmp2+1]
continue
else:
continue
else:
s = int(s) - 1
if os.path.isdir(cwd + files[s]) == True:
cwd = cwd + files[s] + "\\"
continue
else:
f1 = files[s]
break
return f1
def main():
fb = FileBrowser()
main()
</code></pre>
| 0 |
2016-09-13T09:22:06Z
| 39,468,470 |
<p>A very naive approach using Python is to go to the root of the directory you want to be served and use:</p>
<pre><code>python -m SimpleHTTPServer
</code></pre>
<p>Then connect to it on port 8000.</p>
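<p>On Python 3 the module was renamed, so the equivalent one-liner is <code>python3 -m http.server</code>. The same server can also be started programmatically (a sketch; port 0 asks the OS for any free port):</p>

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# bind to any free localhost port; serve_forever() would start handling requests
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
print("would serve the current directory on port", port)
server.server_close()
```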
| 2 |
2016-09-13T11:04:32Z
|
[
"python"
] |
How to make a file transfer program in python
| 39,466,458 |
<p>The title might not be relevant for my question because I don't actually want a wireless file transferring script; I need a file manager type.</p>
<p>I want something with which I can connect my phone to my PC (e.g. hotspot and wifi) and then I would like to show a text file browser (I have the code for that) by sending lists of all files and folders using <code>os.listdir()</code>; whenever the selected option is a file (<code>os.path.isdir() == False</code>), I would like to transfer the file and run it (like: picture, video, etc).</p>
<p>The file Browser code which I wrote runs on windows and also Android (after making a few changes) using <code>qpython</code>. My code is</p>
<pre><code>import os
def FileBrowser(cwd = os.getcwd()):
while True:
if cwd[-1:] != "\\":
cwd = cwd + "\\"
files = os.listdir(cwd)
count = 1
tmpp = ""
print("\n\n" + "_"*50 +"\n\n")
print(cwd + "\n")
for f in files:
if os.path.isdir(cwd + f) == True:
s1 = str(count) + ". " + f
tmps1 = 40 - (len(s1)+5)
t2 = int(tmps1/3)
s1 = s1 + " " * t2 + "-" * (tmps1 - t2)
print(s1 + "<dir>")
else:
print(str(count) + ". " + f + tmpp)
count = count + 1
s = raw_input("Enter the file/Directory: ")
if s == "...":
tmp1 = cwd.count("\\")
tmp2 = cwd.rfind("\\")
if tmp1 > 1:
cwd = cwd[0:tmp2]
tmp2 = cwd.rfind("\\")
cwd = cwd[0:tmp2+1]
continue
else:
continue
else:
s = int(s) - 1
if os.path.isdir(cwd + files[s]) == True:
cwd = cwd + files[s] + "\\"
continue
else:
f1 = files[s]
break
return f1
def main():
fb = FileBrowser()
main()
</code></pre>
| 0 |
2016-09-13T09:22:06Z
| 39,468,592 |
<p>You may need <a href="https://docs.python.org/3/howto/sockets.html" rel="nofollow">socket programming</a>: create a link (connection) between your PC and your smart phone and then try to transfer files.</p>
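<p>A hedged Python 3 sketch of the transfer itself, using a local <code>socket.socketpair()</code> to stand in for the phone-to-PC connection (the real addresses and ports are up to your setup) and a simple length-prefixed frame:</p>

```python
import socket

sender, receiver = socket.socketpair()  # stand-in for a connected phone/PC pair

payload = b"contents of the selected file"
sender.sendall(len(payload).to_bytes(4, "big"))  # 4-byte length prefix
sender.sendall(payload)
sender.close()

size = int.from_bytes(receiver.recv(4), "big")
data = b""
while len(data) < size:
    chunk = receiver.recv(4096)
    if not chunk:
        break
    data += chunk
receiver.close()
print(len(data), "bytes received")
```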
| 1 |
2016-09-13T11:11:43Z
|
[
"python"
] |
Use of Scaler with LassoCV, RidgeCV
| 39,466,671 |
<p>I would like to use scikit-learn LassoCV/RidgeCV while applying a 'StandardScaler' on each fold training set. I do not want to apply the scaler before the cross-validation to avoid leakage but I cannot figure out how I am supposed to do that with LassoCV/RidgeCV. </p>
<p>Is there a way to do this ? Or should I create a pipeline with Lasso/Ridge and 'manually' search for the hyperparameters (using GridSearchCV for instance) ?</p>
<p>Many thanks.</p>
| 0 |
2016-09-13T09:34:31Z
| 39,481,787 |
<p>I got the answer through the scikit-learn mailing list so here it is: </p>
<p>'There is no way to use the "efficient" EstimatorCV objects with pipelines.
This is an API bug and there's an open issue and maybe even a PR for that.'</p>
<p>Many thanks to Andreas Mueller for the answer.</p>
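<p>For completeness, the workaround the question itself suggests does keep the scaling inside each fold -- a hedged sketch with a <code>Pipeline</code> and <code>GridSearchCV</code>, written against the newer <code>sklearn.pipeline</code>/<code>sklearn.model_selection</code> modules (the data and alpha grid are made up for illustration):</p>

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
y = X @ np.array([1.0, 0.0, -2.0, 0.5]) + 0.1 * rng.randn(50)

# the scaler is refit on each training fold, so nothing leaks from the held-out fold
pipe = Pipeline([("scale", StandardScaler()), ("lasso", Lasso(max_iter=10000))])
search = GridSearchCV(pipe, {"lasso__alpha": [0.01, 0.1, 1.0]}, cv=5)
search.fit(X, y)
print("best alpha:", search.best_params_["lasso__alpha"])
```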
| 0 |
2016-09-14T02:52:59Z
|
[
"python",
"machine-learning",
"scikit-learn"
] |
change datetime format in python
| 39,466,677 |
<p>I am getting this datetime in this format from my database.</p>
<pre><code>2016-09-13T08:46:59.953948+00:00
</code></pre>
<p>I want to change this date into format like </p>
<pre><code>13 sep 2016 08:46:59
</code></pre>
<p>I have used datetime module like</p>
<pre><code>import datetime
datetime.datetime.strptime(2016-09-13T08:46:59.953948+00:00, 'changing format')
</code></pre>
<p>But it is giving error </p>
<pre><code>TypeError at /admin/help
must be string, not datetime.datetime
</code></pre>
| -1 |
2016-09-13T09:35:08Z
| 39,466,764 |
<p>You can use</p>
<pre><code>import datetime
# dt is your datetime value from the database
formatted = dt.strftime("%d %b %Y %H:%M:%S")  # e.g. '13 Sep 2016 08:46:59'
</code></pre>
<p>When you want a human-readable format I recommend <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/humanize/" rel="nofollow">django humanize</a></p>
| -1 |
2016-09-13T09:38:13Z
|
[
"python",
"django"
] |
change datetime format in python
| 39,466,677 |
<p>I am getting this datetime in this format from my database.</p>
<pre><code>2016-09-13T08:46:59.953948+00:00
</code></pre>
<p>I want to change this date into format like </p>
<pre><code>13 sep 2016 08:46:59
</code></pre>
<p>I have used datetime module like</p>
<pre><code>import datetime
datetime.datetime.strptime(2016-09-13T08:46:59.953948+00:00, 'changing format')
</code></pre>
<p>But it is giving error </p>
<pre><code>TypeError at /admin/help
must be string, not datetime.datetime
</code></pre>
| -1 |
2016-09-13T09:35:08Z
| 39,467,293 |
<p>If you don't care about the timezone and milliseconds you can try the following function:</p>
<pre><code>from datetime import datetime
def parse_db_time_string(time_string):
date = datetime.strptime(time_string.split('.')[0], '%Y-%m-%dT%H:%M:%S')
return datetime.strftime(date, '%d %b %Y %H:%M:%S')
</code></pre>
<p>You can call the function with your db strings then simply like this:</p>
<pre><code>old_time_string = '2016-09-13T08:46:59.953948+00:00'
new_time_string = parse_db_time_string(old_time_string)
</code></pre>
| 0 |
2016-09-13T10:03:28Z
|
[
"python",
"django"
] |
change datetime format in python
| 39,466,677 |
<p>I am getting this datetime in this format from my database.</p>
<pre><code>2016-09-13T08:46:59.953948+00:00
</code></pre>
<p>I want to change this date into format like </p>
<pre><code>13 sep 2016 08:46:59
</code></pre>
<p>I have used datetime module like</p>
<pre><code>import datetime
datetime.datetime.strptime(2016-09-13T08:46:59.953948+00:00, 'changing format')
</code></pre>
<p>But it is giving error </p>
<pre><code>TypeError at /admin/help
must be string, not datetime.datetime
</code></pre>
| -1 |
2016-09-13T09:35:08Z
| 39,467,931 |
<p>By importing datetime and Arrow you can convert to your required format. Arrow is a Python library to format date & time.</p>
<pre><code>import arrow
import datetime
today = arrow.utcnow().to('Asia/Calcutta').format('YYYY-MM-DD HH:mm:ss')
'2016-09-13 15:57:38'
current_date = datetime.datetime.strptime(today, "%Y-%m-%d %H:%M:%S").strftime('%d-%B-%Y %H:%M:%S')
'13-September-2016 15:57:38'
</code></pre>
<p>To know more about Arrow <a href="https://micropyramid.com/blog/python-arrow-to-show-human-friendly-time/" rel="nofollow">https://micropyramid.com/blog/python-arrow-to-show-human-friendly-time/</a></p>
| 0 |
2016-09-13T10:36:13Z
|
[
"python",
"django"
] |
How do I write lists into CSV in python?
| 39,466,695 |
<p>I have the following list in Python. </p>
<pre><code>[('a1',
[('b', 1),
('c', 2),
('d', 3),
('e', 4),
('f', 5),
('g', 6)]),
('a2',
[('c', 7),
('f', 8),
('g', 9),
('b', 1),
('e', 2),
('d', 3)])]
</code></pre>
<p>I would like to save the list as the following format in csv: </p>
<pre><code>a1 a2
b 1 c 7
c 2 f 8
d 3 g 9
e 4 b 1
f 5 e 2
g 6 d 3
</code></pre>
| -1 |
2016-09-13T09:35:41Z
| 39,466,944 |
<p>This should get you started:</p>
<pre><code>>>> a = [('b', 1), ('c', 2)]
>>> b = [('c', 7), ('f', 8)]
>>>
>>> for x,y in zip(a,b):
... k1, v1 = x
... k2, v2 = y
... print("{k1} {v1} {k2} {v2}".format(k1=k1, v1=v1, k2=k2, v2=v2))
...
b 1 c 7
c 2 f 8
</code></pre>
| -1 |
2016-09-13T09:46:42Z
|
[
"python",
"csv"
] |
How do I write lists into CSV in python?
| 39,466,695 |
<p>I have the following list in Python. </p>
<pre><code>[('a1',
[('b', 1),
('c', 2),
('d', 3),
('e', 4),
('f', 5),
('g', 6)]),
('a2',
[('c', 7),
('f', 8),
('g', 9),
('b', 1),
('e', 2),
('d', 3)])]
</code></pre>
<p>I would like to save the list as the following format in csv: </p>
<pre><code>a1 a2
b 1 c 7
c 2 f 8
d 3 g 9
e 4 b 1
f 5 e 2
g 6 d 3
</code></pre>
| -1 |
2016-09-13T09:35:41Z
| 39,467,367 |
<p>The csv format is quite simple.</p>
<p>To start to know how to that, just create a csv file with the output you want, and open it with any text editor, you will obtain:</p>
<pre><code>a1,,a2,
b,1,c,7
c,2,f,8
d,3,g,9
e,4,b,1
f,5,e,2
g,6,d,3
</code></pre>
<p>So here is the code you need, though you should at least have tried to obtain it on your own.</p>
<pre><code>input_list = [('a1', [('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6)]), ('a2', [('c', 7), ('f', 8), ('g', 9), ('b', 1), ('e', 2), ('d', 3)])]
with open("my_file.csv", 'w') as f:
first_line = [x[0] + ',' for x in input_list]
f.write(",".join(first_line) + "\n")
for x,y in zip(input_list[0][1], input_list[1][1]):
k1, v1 = x
k2, v2 = y
f.write("{k1},{v1},{k2},{v2}\n".format(k1=k1, v1=v1, k2=k2, v2=v2))
</code></pre>
<p>Another solution is to use the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">csv module</a>. There are some examples in the doc.</p>
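<p>A sketch of that csv-module variant, writing to an in-memory buffer so the rows are easy to inspect (a shortened <code>input_list</code> is used for brevity):</p>

```python
import csv
import io

input_list = [('a1', [('b', 1), ('c', 2)]), ('a2', [('c', 7), ('f', 8)])]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([input_list[0][0], '', input_list[1][0], ''])  # header row: a1,,a2,
for (k1, v1), (k2, v2) in zip(input_list[0][1], input_list[1][1]):
    writer.writerow([k1, v1, k2, v2])

print(buf.getvalue())
```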
| 0 |
2016-09-13T10:07:25Z
|
[
"python",
"csv"
] |
Unique permutations of fixed length integer partitions where each element has a maximum
| 39,466,720 |
<p>This question is similar to a question I had several months ago: <a href="http://stackoverflow.com/questions/36435754/generating-a-numpy-array-with-all-combinations-of-numbers-that-sum-to-less-than/36563744#36563744">Generating a numpy array with all combinations of numbers that sum to less than a given number</a>.
In that question, I wanted to generate all numbers that summed to at most a constant, given that each element had a certain maximum.</p>
<p>This time I want to calculate all permutations that sum up to exactly that constant. This can be regarded as calculating the unique permutations of integer partitions where each element has a certain maximum. The end result should be stored in a numpy array.</p>
<p>Using a generator, a one liner achieves what we want:</p>
<pre><code>import numpy as np
from itertools import product
K = 3
maxRange = np.array([1,3,2])
states = np.array([i for i in product(*(range(i+1) for i in maxRange)) if sum(i)==K])
</code></pre>
<p>giving</p>
<pre><code>array([[0, 1, 2],
[0, 2, 1],
[0, 3, 0],
[1, 0, 2],
[1, 1, 1],
[1, 2, 0]])
</code></pre>
<p>I'm seeing quite slow performance when <code>K=20</code> and <code>maxRange = [20]*6</code>. The number of permutations is limited to 53130, but it already takes 20 seconds. My gut feeling tells me this should take much less than a second.</p>
<p>Does anyone have a faster solution available? I'm having trouble modifying the solution to my earlier question to account for this, because I don't know how to cut off the permutations for which it is no longer possible to add up to exactly <code>K</code>.</p>
<p>I don't mind solutions that use the <code>@jit</code> operator from numba... as long as they are faster than what I have now!</p>
<p>Thanks in advance.</p>
| 1 |
2016-09-13T09:36:48Z
| 39,488,039 |
<p>I've had to think about this pretty long, but I've managed to modify the solution to <a href="http://stackoverflow.com/questions/36435754/generating-a-numpy-array-with-all-combinations-of-numbers-that-sum-to-less-than/36563744#36563744">Generating a numpy array with all combinations of numbers that sum to less than a given number</a> for this problem:</p>
<p>For the number of partitions, the idea is to calculate the array <code>feasible_range</code> that specifies how much we need at least in total at a certain stage to still reach <code>max_sum</code>. For example, if we want to reach a total of 3 and <code>max_range[0] == 1</code>, then we need to have at least 2 before starting with the final element. This array follows from a cumulative sum: </p>
<pre><code>feasible_range = np.maximum(max_sum - np.append(np.array([0]),np.cumsum(max_range)[:-1]),0)
</code></pre>
<p>Now we can calculate the number of partitions as before, by setting the elements that can never lead to a feasible partition to 0.</p>
<pre><code>def number_of_partitions(max_range, max_sum):
M = max_sum + 1
N = len(max_range)
arr = np.zeros(shape=(M,N), dtype = int)
feasible_range = max_sum - np.append(np.array([0]),np.cumsum(max_range)[:-1])
feasible_range = np.maximum(feasible_range,0)
arr[:,-1] = np.where(np.arange(M) <= min(max_range[-1], max_sum), 1, 0)
arr[:feasible_range[-1],-1] = 0
for i in range(N-2,-1,-1):
for j in range(max_range[i]+1):
arr[j:,i] += arr[:M-j,i+1]
arr[:feasible_range[i],i]=0 #Set options that will never add up to max_sum at 0.
return arr.sum(axis = 0),feasible_range
</code></pre>
<p>The partition function also has a similar interpretation as before. </p>
<pre><code>def partition(max_range, max_sum, out = None, n_part = None,feasible_range=None):
#Gives all possible partitions of the sets 0,...,max_range[i] that sum up to max_sum.
if out is None:
max_range = np.asarray(max_range, dtype = int).ravel()
n_part,feasible_range = number_of_partitions(max_range, max_sum)
out = np.zeros(shape = (n_part[0], max_range.size), dtype = int)
if(max_range.size == 1):
out[:] = np.arange(feasible_range[0],min(max_range[0],max_sum) + 1, dtype = int).reshape(-1,1)
return out
#Copy is needed since otherwise we overwrite some values of P.
P = partition(max_range[1:], max_sum, out=out[:n_part[1],1:], n_part = n_part[1:],feasible_range=feasible_range[1:]).copy()
S = max_sum - P.sum(axis = 1) #The remaining space in the partition
offset, sz = 0, 0
for i in range(max_range[0]+1):
#select indices for which there is remaining space
#do this only if adding i brings us within the feasible_range.
ind, = np.where(np.logical_and(S-i>=0,S-i <= max_sum-feasible_range[0]))
offset, sz = offset + sz, ind.size
out[offset:offset+sz, 0] = i
out[offset:offset+sz, 1:] = P[ind]
return out
</code></pre>
<p>For <code>K=20</code> and <code>maxRange = [20]*6</code>, <code>partition(maxRange,K)</code> takes 13ms compared to 18.5 seconds first. </p>
<p>I'm not really fond of the part where I have to copy; that can probably be avoided by reversing the ordering. The speed is good enough now, though.</p>
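<p>A quick self-contained sanity check against the question's brute force on the small example (<code>maxRange = [1,3,2]</code>, <code>K = 3</code> should give the 6 rows shown in the question):</p>

```python
from itertools import product

def brute(max_range, K):
    # the question's generator-based brute force
    return [p for p in product(*(range(m + 1) for m in max_range)) if sum(p) == K]

small = brute([1, 3, 2], 3)
print(len(small))  # 6 feasible partitions
```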
| 0 |
2016-09-14T10:22:07Z
|
[
"python",
"performance",
"numpy",
"permutation",
"integer-partition"
] |
Most frequently occurring n words in a string
| 39,466,725 |
<p>I have a problem with the following problem:</p>
<p><strong>Problem</strong>:</p>
<p>Implement a function count_words() in Python that takes as input a string s and a number n, and returns the n most frequently-occurring words in s. The return value should be a list of tuples - the top n words paired with their respective counts [(&lt;word&gt;, &lt;count&gt;), (&lt;word&gt;, &lt;count&gt;), ...], sorted in descending count order.</p>
<p>You can assume that all input will be in lowercase and that there will be no punctuations or other characters (only letters and single separating spaces). In case of a tie (equal count), order the tied words alphabetically.</p>
<p>E.g.:</p>
<pre><code>print count_words("betty bought a bit of butter but the butter was bitter", 3)
</code></pre>
<p>Output:</p>
<pre><code>[('butter', 2), ('a', 1), ('betty', 1)]
</code></pre>
<p><strong>This is my solution:</strong></p>
<pre><code> """Count words."""
from operator import itemgetter
from collections import Counter
def count_words(s, n):
"""Return the n most frequently occuring words in s."""
# TODO: Count the number of occurences of each word in s
words = s.split(" ");
words = Counter(words)
# TODO: Sort the occurences in descending order (alphabetically in case of ties)
print(words)
# TODO: Return the top n words as a list of tuples (<word>, <count>)
top_n = words.most_common(n)
return top_n
def test_run():
"""Test count_words() with some inputs."""
print(count_words("cat bat mat cat bat cat", 3))
print(count_words("betty bought a bit of butter but the butter was bitter", 3))
if __name__ == '__main__':
test_run()
</code></pre>
<p>The problem is that elements with equal counts are ordered arbitrarily. How can I order those elements alphabetically?</p>
| 0 |
2016-09-13T09:36:53Z
| 39,466,849 |
<p>You can sort them using the <em>number of occurrence</em> (in reverse order) and then the <em>lexicographical order</em>:</p>
<pre><code>>>> lst = [('meat', 2), ('butter', 2), ('a', 1), ('betty', 1)]
>>>
>>> sorted(lst, key=lambda x: (-x[1], x[0]))
# ^ reverse order
[('butter', 2), ('meat', 2), ('a', 1), ('betty', 1)]
</code></pre>
<p>The number of occurrence takes precedence over the lex. order.</p>
<p>In your case, use <code>words.items()</code> in place of the list of the list I have used. You will no longer need to use <code>most_common</code> as <code>sorted</code> already does the same.</p>
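<p>Putting that together, a complete sketch of <code>count_words()</code> -- <code>sorted()</code> with this key replaces <code>most_common()</code> entirely:</p>

```python
from collections import Counter

def count_words(s, n):
    counts = Counter(s.split(" "))
    # count descending takes precedence; ties break alphabetically
    return sorted(counts.items(), key=lambda x: (-x[1], x[0]))[:n]

print(count_words("betty bought a bit of butter but the butter was bitter", 3))
# [('butter', 2), ('a', 1), ('betty', 1)]
```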
| 3 |
2016-09-13T09:42:15Z
|
[
"python"
] |
Most frequently occurring n words in a string
| 39,466,725 |
<p>I have a problem with the following problem:</p>
<p><strong>Problem</strong>:</p>
<p>Implement a function count_words() in Python that takes as input a string s and a number n, and returns the n most frequently-occurring words in s. The return value should be a list of tuples - the top n words paired with their respective counts [(&lt;word&gt;, &lt;count&gt;), (&lt;word&gt;, &lt;count&gt;), ...], sorted in descending count order.</p>
<p>You can assume that all input will be in lowercase and that there will be no punctuations or other characters (only letters and single separating spaces). In case of a tie (equal count), order the tied words alphabetically.</p>
<p>E.g.:</p>
<pre><code>print count_words("betty bought a bit of butter but the butter was bitter", 3)
</code></pre>
<p>Output:</p>
<pre><code>[('butter', 2), ('a', 1), ('betty', 1)]
</code></pre>
<p><strong>This is my solution:</strong></p>
<pre><code> """Count words."""
from operator import itemgetter
from collections import Counter
def count_words(s, n):
"""Return the n most frequently occuring words in s."""
# TODO: Count the number of occurences of each word in s
words = s.split(" ");
words = Counter(words)
# TODO: Sort the occurences in descending order (alphabetically in case of ties)
print(words)
# TODO: Return the top n words as a list of tuples (<word>, <count>)
top_n = words.most_common(n)
return top_n
def test_run():
"""Test count_words() with some inputs."""
print(count_words("cat bat mat cat bat cat", 3))
print(count_words("betty bought a bit of butter but the butter was bitter", 3))
if __name__ == '__main__':
test_run()
</code></pre>
<p>The problem is that elements with equal counts are ordered arbitrarily. How can I order those elements alphabetically?</p>
| 0 |
2016-09-13T09:36:53Z
| 39,467,092 |
<p>The Python function <code>sorted</code> is <a href="http://stackoverflow.com/a/1915418/1112586">stable</a>, which means that in case of a tie the tied items keep their original order. Because of this, you can sort first on the strings to get them in order:</p>
<pre><code>alphabetical_sort = sorted(words.items(), key=lambda x: x[0])
</code></pre>
<p>and then on the counts:</p>
<pre><code>final_sort = sorted(alphabetical_sort, key=lambda x: x[1], reverse=True)
</code></pre>
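<p>The two-pass trick in action (a small sketch) -- the second, stable sort preserves the alphabetical order among equal counts:</p>

```python
pairs = [('meat', 2), ('a', 1), ('butter', 2), ('betty', 1)]
by_word = sorted(pairs, key=lambda x: x[0])                    # alphabetical first
by_count = sorted(by_word, key=lambda x: x[1], reverse=True)   # stable: ties keep order
print(by_count)  # [('butter', 2), ('meat', 2), ('a', 1), ('betty', 1)]
```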
<p><strong>Edit:</strong> Didn't see Moses' better answer. Of course, the fewer sorts the better.</p>
| 0 |
2016-09-13T09:53:01Z
|
[
"python"
] |
What does "local to a blueprint" mean?
| 39,466,753 |
<p>I'm having some trouble understanding the difference between <a href="http://flask.pocoo.org/docs/0.10/api/#flask.Blueprint.errorhandler" rel="nofollow">Blueprint.errorhandler</a> and <a href="http://flask.pocoo.org/docs/0.10/api/#flask.Blueprint.app_errorhandler" rel="nofollow">Blueprint.app_errorhandler</a>. According the API document(emphasis mine):</p>
<blockquote>
<p><code>errorhandler(code_or_exception)</code></p>
<p>Registers an error handler that becomes active for this blueprint
only. Please be aware that routing does not happen <strong>local to a
blueprint</strong> so an error handler for 404 usually is not handled by a
blueprint unless it is caused inside a view function. Another special
case is the 500 internal server error which is always looked up from
the application.</p>
</blockquote>
<p>AFAIK, a blueprint object in Flask is "a set of operations which can be registered on an application, even multiple times".</p>
<p>My questions are:</p>
<ol>
<li>How can an error be local to a set of operations?</li>
<li>How can a view function cause an error?</li>
</ol>
| 0 |
2016-09-13T09:38:02Z
| 39,466,847 |
<p>'local' here means in relation to the routes a blueprint registers. Blueprint routes are always prefixed by the name you registered your blueprint with, so they are naturally grouped and, in a URL-path topology sense, they have locality. 'Nonlocal' then is any view not associated with the blueprint; those will have a different prefix or no prefix at all.</p>
<p>View functions can raise <a href="https://docs.python.org/2/library/exceptions.html" rel="nofollow"><em>exceptions</em></a>, and specific exceptions (anything derived from <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#werkzeug.exceptions.HTTPException" rel="nofollow"><code>HTTPException</code></a>) have a HTTP error code associated with them. <code>@blueprint.errorhandler()</code> registers a handler for such exceptions or associated HTTP error codes.</p>
<p>What the documentation is stating is that errors raised <em>during routing</em> (such as the 404 <code>NotFound</code> error) do not have a view yet and therefor you can't route that error to a specific blueprint either.</p>
<p>Under the hood, when routing succeeds to find a view to handle the request, <code>request.blueprint</code> is set to the associated registered blueprint that corresponds to that view. If then an error occurs and an error handler needs to be found, the <code>request.blueprint</code> value lets Flask find 'local' error handlers.</p>
| 1 |
2016-09-13T09:42:14Z
|
[
"python",
"flask",
"error-handling",
"decorator"
] |
Fill MISSING values only in a dataframe (pandas)
| 39,466,757 |
<p>What I have in a dataframe:</p>
<pre><code>email user_name sessions ymo
a@a.com JD 1 2015-03-01
a@a.com JD 2 2015-05-01
</code></pre>
<p>What I need:</p>
<pre><code>email user_name sessions ymo
a@a.com JD 0 2015-01-01
a@a.com JD 0 2015-02-01
a@a.com JD 1 2015-03-01
a@a.com JD 0 2015-04-01
a@a.com JD 2 2015-05-01
a@a.com JD 0 2015-06-01
a@a.com JD 0 2015-07-01
a@a.com JD 0 2015-08-01
a@a.com JD 0 2015-09-01
a@a.com JD 0 2015-10-01
a@a.com JD 0 2015-11-01
a@a.com JD 0 2015-12-01
</code></pre>
<p><code>ymo</code> column are <code>pd.Timestamp</code>s:</p>
<pre><code>all_ymo
[Timestamp('2015-01-01 00:00:00'),
Timestamp('2015-02-01 00:00:00'),
Timestamp('2015-03-01 00:00:00'),
Timestamp('2015-04-01 00:00:00'),
Timestamp('2015-05-01 00:00:00'),
Timestamp('2015-06-01 00:00:00'),
Timestamp('2015-07-01 00:00:00'),
Timestamp('2015-08-01 00:00:00'),
Timestamp('2015-09-01 00:00:00'),
Timestamp('2015-10-01 00:00:00'),
Timestamp('2015-11-01 00:00:00'),
Timestamp('2015-12-01 00:00:00')]
</code></pre>
<p>Unfortunately, this answer: <a href="http://stackoverflow.com/questions/31786881/adding-values-for-missing-data-combinations-in-pandas">Adding values for missing data combinations in Pandas</a> is not good as it creates duplicates for existing <code>ymo</code> values.</p>
<p>I tried something like this, but it is <em>extremely</em> slow:</p>
<pre><code>for em in all_emails:
existent_ymo = fill_ymo[fill_ymo['email'] == em]['ymo']
existent_ymo = set([pd.Timestamp(datetime.date(t.year, t.month, t.day)) for t in existent_ymo])
missing_ymo = list(existent_ymo - all_ymo)
multi_ind = pd.MultiIndex.from_product([[em], missing_ymo], names=col_names)
fill_ymo = sessions.set_index(col_names).reindex(multi_ind, fill_value=0).reset_index()
</code></pre>
| 1 |
2016-09-13T09:38:05Z
| 39,467,108 |
<ul>
<li>generate month beginning dates and <code>reindex</code> </li>
<li><code>ffill</code> and <code>bfill</code> columns <code>['email', 'user_name']</code></li>
<li><code>fillna(0)</code> for column <code>'sessions'</code></li>
</ul>
<hr>
<pre><code>mbeg = pd.date_range('2015-01-31', periods=12, freq='M') - pd.offsets.MonthBegin()
df1 = df.set_index('ymo').reindex(mbeg)
df1[['email', 'user_name']] = df1[['email', 'user_name']].ffill().bfill()
df1['sessions'] = df1['sessions'].fillna(0).astype(int)
df1
</code></pre>
<p><a href="http://i.stack.imgur.com/FHOKR.png" rel="nofollow"><img src="http://i.stack.imgur.com/FHOKR.png" alt="enter image description here"></a></p>
| 2 |
2016-09-13T09:53:39Z
|
[
"python",
"pandas"
] |
Fill MISSING values only in a dataframe (pandas)
| 39,466,757 |
<p>What I have in a dataframe:</p>
<pre><code>email user_name sessions ymo
a@a.com JD 1 2015-03-01
a@a.com JD 2 2015-05-01
</code></pre>
<p>What I need:</p>
<pre><code>email user_name sessions ymo
a@a.com JD 0 2015-01-01
a@a.com JD 0 2015-02-01
a@a.com JD 1 2015-03-01
a@a.com JD 0 2015-04-01
a@a.com JD 2 2015-05-01
a@a.com JD 0 2015-06-01
a@a.com JD 0 2015-07-01
a@a.com JD 0 2015-08-01
a@a.com JD 0 2015-09-01
a@a.com JD 0 2015-10-01
a@a.com JD 0 2015-11-01
a@a.com JD 0 2015-12-01
</code></pre>
<p><code>ymo</code> column are <code>pd.Timestamp</code>s:</p>
<pre><code>all_ymo
[Timestamp('2015-01-01 00:00:00'),
Timestamp('2015-02-01 00:00:00'),
Timestamp('2015-03-01 00:00:00'),
Timestamp('2015-04-01 00:00:00'),
Timestamp('2015-05-01 00:00:00'),
Timestamp('2015-06-01 00:00:00'),
Timestamp('2015-07-01 00:00:00'),
Timestamp('2015-08-01 00:00:00'),
Timestamp('2015-09-01 00:00:00'),
Timestamp('2015-10-01 00:00:00'),
Timestamp('2015-11-01 00:00:00'),
Timestamp('2015-12-01 00:00:00')]
</code></pre>
<p>Unfortunately, this answer: <a href="http://stackoverflow.com/questions/31786881/adding-values-for-missing-data-combinations-in-pandas">Adding values for missing data combinations in Pandas</a> is not good as it creates duplicates for existing <code>ymo</code> values.</p>
<p>I tried something like this, but it is <em>extremely</em> slow:</p>
<pre><code>for em in all_emails:
existent_ymo = fill_ymo[fill_ymo['email'] == em]['ymo']
existent_ymo = set([pd.Timestamp(datetime.date(t.year, t.month, t.day)) for t in existent_ymo])
missing_ymo = list(existent_ymo - all_ymo)
multi_ind = pd.MultiIndex.from_product([[em], missing_ymo], names=col_names)
fill_ymo = sessions.set_index(col_names).reindex(multi_ind, fill_value=0).reset_index()
</code></pre>
| 1 |
2016-09-13T09:38:05Z
| 39,467,725 |
<p>I tried to create a more general solution with <code>periods</code>:</p>
<pre><code>print (df)
email user_name sessions ymo
0 a@a.com JD 1 2015-03-01
1 a@a.com JD 2 2015-05-01
2 b@b.com AB 1 2015-03-01
3 b@b.com AB 2 2015-05-01
mbeg = pd.period_range('2015-01', periods=12, freq='M')
print (mbeg)
PeriodIndex(['2015-01', '2015-02', '2015-03', '2015-04', '2015-05', '2015-06',
'2015-07', '2015-08', '2015-09', '2015-10', '2015-11', '2015-12'],
dtype='int64', freq='M')
#convert column ymo to period
df.ymo = df.ymo.dt.to_period('m')
#groupby and reindex with filling 0
df = (df.groupby(['email','user_name'])
        .apply(lambda x: x.set_index('ymo')
                          .reindex(mbeg, fill_value=0)
                          .drop(['email','user_name'], axis=1))
        .rename_axis(('email','user_name','ymo'))
        .reset_index())
</code></pre>
<pre><code>print (df)
email user_name ymo sessions
0 a@a.com JD 2015-01 0
1 a@a.com JD 2015-02 0
2 a@a.com JD 2015-03 1
3 a@a.com JD 2015-04 0
4 a@a.com JD 2015-05 2
5 a@a.com JD 2015-06 0
6 a@a.com JD 2015-07 0
7 a@a.com JD 2015-08 0
8 a@a.com JD 2015-09 0
9 a@a.com JD 2015-10 0
10 a@a.com JD 2015-11 0
11 a@a.com JD 2015-12 0
12 b@b.com AB 2015-01 0
13 b@b.com AB 2015-02 0
14 b@b.com AB 2015-03 1
15 b@b.com AB 2015-04 0
16 b@b.com AB 2015-05 2
17 b@b.com AB 2015-06 0
18 b@b.com AB 2015-07 0
19 b@b.com AB 2015-08 0
20 b@b.com AB 2015-09 0
21 b@b.com AB 2015-10 0
22 b@b.com AB 2015-11 0
23 b@b.com AB 2015-12 0
</code></pre>
<p>Then, if you need <code>datetimes</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_timestamp.html" rel="nofollow"><code>to_timestamp</code></a>:</p>
<pre><code>df.ymo = df.ymo.dt.to_timestamp()
print (df)
email user_name ymo sessions
0 a@a.com JD 2015-01-01 0
1 a@a.com JD 2015-02-01 0
2 a@a.com JD 2015-03-01 1
3 a@a.com JD 2015-04-01 0
4 a@a.com JD 2015-05-01 2
5 a@a.com JD 2015-06-01 0
6 a@a.com JD 2015-07-01 0
7 a@a.com JD 2015-08-01 0
8 a@a.com JD 2015-09-01 0
9 a@a.com JD 2015-10-01 0
10 a@a.com JD 2015-11-01 0
11 a@a.com JD 2015-12-01 0
12 b@b.com AB 2015-01-01 0
13 b@b.com AB 2015-02-01 0
14 b@b.com AB 2015-03-01 1
15 b@b.com AB 2015-04-01 0
16 b@b.com AB 2015-05-01 2
17 b@b.com AB 2015-06-01 0
18 b@b.com AB 2015-07-01 0
19 b@b.com AB 2015-08-01 0
20 b@b.com AB 2015-09-01 0
21 b@b.com AB 2015-10-01 0
22 b@b.com AB 2015-11-01 0
23 b@b.com AB 2015-12-01 0
</code></pre>
<p>Solution with datetimes:</p>
<pre><code>print (df)
email user_name sessions ymo
0 a@a.com JD 1 2015-03-01
1 a@a.com JD 2 2015-05-01
2 b@b.com AB 1 2015-03-01
3 b@b.com AB 2 2015-05-01
mbeg = pd.date_range('2015-01-31', periods=12, freq='M') - pd.offsets.MonthBegin()
df = (df.groupby(['email','user_name'])
        .apply(lambda x: x.set_index('ymo')
                          .reindex(mbeg, fill_value=0)
                          .drop(['email','user_name'], axis=1))
        .rename_axis(('email','user_name','ymo'))
        .reset_index())
</code></pre>
<pre><code>print (df)
email user_name ymo sessions
0 a@a.com JD 2015-01-01 0
1 a@a.com JD 2015-02-01 0
2 a@a.com JD 2015-03-01 1
3 a@a.com JD 2015-04-01 0
4 a@a.com JD 2015-05-01 2
5 a@a.com JD 2015-06-01 0
6 a@a.com JD 2015-07-01 0
7 a@a.com JD 2015-08-01 0
8 a@a.com JD 2015-09-01 0
9 a@a.com JD 2015-10-01 0
10 a@a.com JD 2015-11-01 0
11 a@a.com JD 2015-12-01 0
12 b@b.com AB 2015-01-01 0
13 b@b.com AB 2015-02-01 0
14 b@b.com AB 2015-03-01 1
15 b@b.com AB 2015-04-01 0
16 b@b.com AB 2015-05-01 2
17 b@b.com AB 2015-06-01 0
18 b@b.com AB 2015-07-01 0
19 b@b.com AB 2015-08-01 0
20 b@b.com AB 2015-09-01 0
21 b@b.com AB 2015-10-01 0
22 b@b.com AB 2015-11-01 0
23 b@b.com AB 2015-12-01 0
</code></pre>
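<p>A groupby-free variant of the same idea builds the full (email, ymo) grid up front with <code>MultiIndex.from_product</code> and reindexes once (a sketch, not from the original answer; it assumes each email maps to a single user_name, as in the sample data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'email': ['a@a.com', 'a@a.com', 'b@b.com', 'b@b.com'],
    'user_name': ['JD', 'JD', 'AB', 'AB'],
    'sessions': [1, 2, 1, 2],
    'ymo': pd.to_datetime(['2015-03-01', '2015-05-01',
                           '2015-03-01', '2015-05-01']),
})

mbeg = pd.date_range('2015-01-01', periods=12, freq='MS')  # month starts
pairs = df[['email', 'user_name']].drop_duplicates()       # one row per user

# Full grid of every email paired with every month beginning.
full = pd.MultiIndex.from_product([pairs['email'], mbeg],
                                  names=['email', 'ymo'])

out = (df.set_index(['email', 'ymo'])['sessions']
         .reindex(full, fill_value=0)    # missing months become 0
         .reset_index()
         .merge(pairs, on='email'))      # restore user_name

print(len(out))  # 24 rows: 12 months x 2 users
```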
| 2 |
2016-09-13T10:25:06Z
|
[
"python",
"pandas"
] |
simple SNTP python script
| 39,466,780 |
<p>I need help to complete following script:</p>
<pre class="lang-py prettyprint-override"><code>import socket
import struct
import sys
import time
NTP_SERVER = '0.uk.pool.ntp.org'
TIME1970 = 2208988800L
def sntp_client():
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
data = str.encode('\xlb' + 47 * '\0')
client.sendto(data, (NTP_SERVER, 123))
data, addr = client.recvfrom(1024)
if data:
print('Response received from:', addr)
t = struct.unpack('!12I', data)[10]
t -= TIME1970
print('\tTime: %s' % time.ctime(t))
if __name__ == '__main__':
sntp_client()
</code></pre>
<p>Expected output:</p>
<pre class="lang-py prettyprint-override"><code>Response received from: ('80.82.244.120', 123)
Time: Tue Sep 13 14:49:38 2016
</code></pre>
<p>Problem is that program is not giving any output. It looks like it stucks at:</p>
<pre class="lang-py prettyprint-override"><code>data, addr = client.recvfrom(1024)
</code></pre>
<p>I hope someone can help me with this.</p>
| 2 |
2016-09-13T09:39:02Z
| 40,121,066 |
<p>There is nothing wrong with your script as written; you need to look for another reason why the server might not be responding to you, such as firewall settings.
My own Python SNTP script is almost exactly the same:</p>
<pre><code>#!/bin/env python
import socket
import struct
import sys
import time
TIME1970 = 2208988800L # Thanks to F.Lundh
pow2_31 = pow(2,31)
pow2_32 = pow(2,32)
pow2_16 = pow(2,16)
if len(sys.argv) < 2:
sys.stderr.write("Usage : " + sys.argv[0] + " <SNTP server>")
exit(1)
server = sys.argv[1]
client = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
data = '\x1b' + 47 * '\0'
time_start = time.time()
try:
client.sendto( data, ( server, 123 ))
client.settimeout(2)
except:
print "server <%s> not recognized" % (server)
exit(2)
try:
data, address = client.recvfrom( 1024 )
except socket.timeout:
print "timed out"
exit(3)
if data:
time_reply = (time.time() - time_start) * 1000
print 'received %d bytes from %s in %d ms :' % (len(data), address, time_reply)
upacket = struct.unpack( '!48B', data )
print upacket
</code></pre>
<p>Usage:</p>

<pre><code>$ ./sntp_client.py 0.uk.pool.ntp.org
received 48 bytes from ('83.170.75.28', 123) in 154 ms :
(28, 3, 3, 236, 0, 0, 1, 171, 0, 0, 3, 0, 20, 139, 208, 232, 219, 177, 86, 148, 230, 192, 1, 15, 0, 0, 0, 0, 0, 0, 0, 0, 219, 177, 88, 27, 60, 214, 85, 212, 219, 177, 88, 27, 60, 238, 157, 39)
</code></pre>
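<p>The timestamp arithmetic itself can be sanity-checked offline, without hitting any server — a Python 3 sketch where the 48-byte reply is fabricated for illustration (the transmit timestamp's integer seconds live in 32-bit word index 10, as in the scripts above):</p>

```python
import struct
import time

TIME1970 = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

# Fabricated reply: twelve 32-bit big-endian words, with an NTP-format
# transmit timestamp placed at word index 10.
unix_seconds = 1473778178  # 2016-09-13 in Unix time
packet = struct.pack('!12I', *([0] * 10 + [TIME1970 + unix_seconds, 0]))

# Same extraction the question's script performs:
t = struct.unpack('!12I', packet)[10] - TIME1970
print(t == unix_seconds)  # True
print(time.ctime(t))      # local-time rendering of the recovered timestamp
```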
| 0 |
2016-10-19T02:26:22Z
|
[
"python"
] |
Maya python incomplete autocompletion
| 39,466,822 |
<p>I'm really new to Python programming in Maya and I'm trying to find a comfortable way to write code.
I would like to have an IDE where, if I write "cmds.ls", the autocompletion gives me the list of all the arguments.
What I have now is completion with some pointers and a function with "pass" inside.
I know that until some versions ago it was possible to have the list of all the arguments.
Am I wrong?
If I open the file "maya.cmds.pypredef" I have a list of functions just declared with "pass" in the scope.</p>
| 0 |
2016-09-13T09:41:03Z
| 39,573,381 |
<p>I never used auto-completion, but here is an easy way to get it in Sublime Text. I was not able to get the arguments though, only the function names. You should be able to do the same thing for any other IDE in a similar way.</p>
<ul>
<li>Go into your Maya install <em>folder/MayaXXXX/devkit/other/pymel/extras/completion/py</em>. </li>
<li>If that hierarchy doesn't exists and stops at devkit, there is a README with a link to download the devkit file for your Maya version.</li>
<li><p>Download the Maya Developer Kit and replace the folders (devkit, include, etc.) in your Maya installation.</p></li>
<li><p>Install the Jedi Package in Sublime Text (or any other)</p></li>
<li><p>Add that to the Jedi user settings file:</p>
<pre><code> {
"python_package_paths": ["folder/MayaXXXX/devkit/other/pymel/extras/completion/py"]
}
</code></pre></li>
</ul>
<p>You are all set!</p>
<p>To use the auto completion, make sure it is activated in Sublime Text and that you work on a Python file, import the maya.cmds module and do a Ctrl + Space. The auto-completion window should appear with all the cmds commands.</p>
<pre><code>import maya.cmds as cmds
cmds.
</code></pre>
<p>The arguments do not seem to be implemented in the Maya devkit, they seem to be set as <code>*args, **keywords</code> for all functions. I don't remember seeing any auto-completion with arguments in Maya, and if it exists I am interested!</p>
<p>(<a href="http://www.janpijpers.com/mel-autocompletion-for-sublime-and-mayasublime/" rel="nofollow">Source</a>)</p>
<hr>
<p>After fiddling with the arguments of the functions in the file <strong>MayaXXXX\devkit\other\pymel\extras\completion\py\maya\cmds\__init__.py</strong>, I can say that it is not possible to have the auto-completion with the arguments unless Autodesk correctly builds this file with them.</p>
<p>Maybe someone took the time to do it and shared the file, but I doubt it.</p>
| 1 |
2016-09-19T12:36:51Z
|
[
"python",
"maya",
"code-completion"
] |
how to print json data
| 39,466,890 |
<p>I have the following JSON file and Python code, and I need an output example...</p>
<p><strong>json file</strong></p>
<pre><code>{"b": [{"1": "add"},{"2": "act"}],
"p": [{"add": "added"},{"act": "acted"}],
"pp": [{"add": "added"},{"act": "acted"}],
"s": [{"add": "adds"},{"act": "acts"}],
"ing": [{"add": "adding"},{"act": "acting"}]}
</code></pre>
<p><strong>python</strong></p>
<pre><code>import json
data = json.load(open('jsonfile.json'))
#print data
</code></pre>
<p><strong>output example</strong></p>
<pre><code>>> b
>> p
>> pp
>> s
>> ing
</code></pre>
<p>any ideas how to do that?</p>
| 0 |
2016-09-13T09:44:14Z
| 39,466,925 |
<p>This doesn't have anything to do with JSON. You have a dictionary, and you want to print the keys, which you can do with <code>data.keys()</code>.</p>
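<p>A minimal sketch with the data from the question (in Python 3.7+ the keys come back in insertion order):</p>

```python
import json

raw = '''{"b": [{"1": "add"}], "p": [{"add": "added"}],
"pp": [{"add": "added"}], "s": [{"add": "adds"}], "ing": [{"add": "adding"}]}'''
data = json.loads(raw)  # json.load(open('jsonfile.json')) works the same way

for key in data.keys():
    print(key)  # b, p, pp, s, ing
```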
| 4 |
2016-09-13T09:45:48Z
|
[
"python",
"json",
"python-3.x"
] |
how to print json data
| 39,466,890 |
<p>I have the following JSON file and Python code, and I need an output example...</p>
<p><strong>json file</strong></p>
<pre><code>{"b": [{"1": "add"},{"2": "act"}],
"p": [{"add": "added"},{"act": "acted"}],
"pp": [{"add": "added"},{"act": "acted"}],
"s": [{"add": "adds"},{"act": "acts"}],
"ing": [{"add": "adding"},{"act": "acting"}]}
</code></pre>
<p><strong>python</strong></p>
<pre><code>import json
data = json.load(open('jsonfile.json'))
#print data
</code></pre>
<p><strong>output example</strong></p>
<pre><code>>> b
>> p
>> pp
>> s
>> ing
</code></pre>
<p>any ideas how to do that?</p>
| 0 |
2016-09-13T09:44:14Z
| 39,467,000 |
<p>Simply unpack the <code>keys</code> with <code>*</code> in a print call, this provides the keys as positional arguments to <code>print</code>; use <code>sep = '\n'</code> if you want each key on a different line:</p>
<pre><code>print(*data.keys(), sep='\n')
</code></pre>
<p>This will print out:</p>
<pre><code>b
pp
p
ing
s
</code></pre>
<p>As noted by @WayneWerner <code>print(*data, sep='\n')</code> is in effect like calling <code>data.keys()</code> and achieves the same result.</p>
| 2 |
2016-09-13T09:48:59Z
|
[
"python",
"json",
"python-3.x"
] |
how to print json data
| 39,466,890 |
<p>I have the following JSON file and Python code, and I need an output example...</p>
<p><strong>json file</strong></p>
<pre><code>{"b": [{"1": "add"},{"2": "act"}],
"p": [{"add": "added"},{"act": "acted"}],
"pp": [{"add": "added"},{"act": "acted"}],
"s": [{"add": "adds"},{"act": "acts"}],
"ing": [{"add": "adding"},{"act": "acting"}]}
</code></pre>
<p><strong>python</strong></p>
<pre><code>import json
data = json.load(open('jsonfile.json'))
#print data
</code></pre>
<p><strong>output example</strong></p>
<pre><code>>> b
>> p
>> pp
>> s
>> ing
</code></pre>
<p>any ideas how to do that?</p>
| 0 |
2016-09-13T09:44:14Z
| 39,467,118 |
<p>Here's a working example (it's emulating your file using <a href="https://docs.python.org/3/library/io.html#io.StringIO" rel="nofollow">io.StringIO</a>):</p>
<pre><code>import json
import io
jsonfile_json = io.StringIO("""
{
"b": [{"1": "add"}, {"2": "act"}],
"p": [{"add": "added"}, {"act": "acted"}],
"pp": [{"add": "added"}, {"act": "acted"}],
"s": [{"add": "adds"}, {"act": "acts"}],
"ing": [{"add": "adding"}, {"act": "acting"}]
}
""")
data = json.load(jsonfile_json)
for k in data.keys():
print(k)
</code></pre>
<p>As you can see, the answer to your question is to use the <code>keys()</code> method.</p>
| 2 |
2016-09-13T09:54:14Z
|
[
"python",
"json",
"python-3.x"
] |
how to print json data
| 39,466,890 |
<p>I have the following JSON file and Python code, and I need an output example...</p>
<p><strong>json file</strong></p>
<pre><code>{"b": [{"1": "add"},{"2": "act"}],
"p": [{"add": "added"},{"act": "acted"}],
"pp": [{"add": "added"},{"act": "acted"}],
"s": [{"add": "adds"},{"act": "acts"}],
"ing": [{"add": "adding"},{"act": "acting"}]}
</code></pre>
<p><strong>python</strong></p>
<pre><code>import json
data = json.load(open('jsonfile.json'))
#print data
</code></pre>
<p><strong>output example</strong></p>
<pre><code>>> b
>> p
>> pp
>> s
>> ing
</code></pre>
<p>any ideas how to do that?</p>
| 0 |
2016-09-13T09:44:14Z
| 39,467,331 |
<p>For the sake of completeness:</p>
<pre><code>d = {'p': 'pstuff', 'pp': 'ppstuff', 'b': 'bstuff', 's': 'sstuff'}
print('\n'.join(d))
</code></pre>
<p>Works in any version of Python. If you care about order:</p>
<pre><code>print('\n'.join(sorted(d)))
</code></pre>
<p>Though in all honesty, I'd probably do Jim's approach:</p>
<pre><code>print(*d, sep='\n')
</code></pre>
| 2 |
2016-09-13T10:05:16Z
|
[
"python",
"json",
"python-3.x"
] |
Django: using context variable in a script
| 39,467,023 |
<p>I have a class view that inherits from <code>TemplateView</code> and sets a context variable to a serialized list of items:</p>
<pre><code>class MyView(TemplateView):
def get_context_data(self, **kwargs):
context = super(MyView, self).get_context_data(**kwargs)
context['items'] = serializers.serialize("json", items) # assume items is an existing list
return context
</code></pre>
<p>As is noted in <a href="http://stackoverflow.com/questions/298772/django-template-variables-and-javascript">this post</a> you're supposed to be able to access Django context variables in your Django templates in the following manner:</p>
<pre><code><script>var items = {{ items }};</script>
</code></pre>
<p>However I am getting a JavaScript error, which I assume is caused because of automatic escaping:</p>
<pre><code>Uncaught SyntaxError: Unexpected token &
</code></pre>
<p>I also tried using the filter:</p>
<pre><code><script>var items = {{ items | escapejs }};</script>
</code></pre>
<p>Only to find another error, this time a Django one (<code>TemplateSyntaxError</code>):</p>
<pre><code>Could not parse the remainder: ' | escapejs' from 'items | escapejs'
</code></pre>
<p>How can I solve this issue?</p>
<p>PS: I am using Django 1.4. (and no, I cannot upgrade it to the most recent version).</p>
| 0 |
2016-09-13T09:49:46Z
| 39,467,144 |
<p>You can't use spaces between a template variable, the filter character, and the filter itself. So it should be <code>{{ items|escapejs }}</code>.</p>
<p>Although as Sebastian points out, you probably want <code>{{ items|safe }}</code> instead.</p>
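<p>Putting the two points together, the template line would be (no spaces around the pipe, and <code>safe</code> because the context value is already serialized JSON):</p>

<pre><code>&lt;script&gt;var items = {{ items|safe }};&lt;/script&gt;
</code></pre>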
| 3 |
2016-09-13T09:55:53Z
|
[
"javascript",
"python",
"django"
] |
Django: using context variable in a script
| 39,467,023 |
<p>I have a class view that inherits from <code>TemplateView</code> and sets a context variable to a serialized list of items:</p>
<pre><code>class MyView(TemplateView):
def get_context_data(self, **kwargs):
context = super(MyView, self).get_context_data(**kwargs)
context['items'] = serializers.serialize("json", items) # assume items is an existing list
return context
</code></pre>
<p>As is noted in <a href="http://stackoverflow.com/questions/298772/django-template-variables-and-javascript">this post</a> you're supposed to be able to access Django context variables in your Django templates in the following manner:</p>
<pre><code><script>var items = {{ items }};</script>
</code></pre>
<p>However I am getting a JavaScript error, which I assume is caused because of automatic escaping:</p>
<pre><code>Uncaught SyntaxError: Unexpected token &
</code></pre>
<p>I also tried using the filter:</p>
<pre><code><script>var items = {{ items | escapejs }};</script>
</code></pre>
<p>Only to find another error, this time a Django one (<code>TemplateSyntaxError</code>):</p>
<pre><code>Could not parse the remainder: ' | escapejs' from 'items | escapejs'
</code></pre>
<p>How can I solve this issue?</p>
<p>PS: I am using Django 1.4. (and no, I cannot upgrade it to the most recent version).</p>
| 0 |
2016-09-13T09:49:46Z
| 39,467,524 |
<pre><code><script>
var items = "{{items}}";
</script>
</code></pre>
| 0 |
2016-09-13T10:15:23Z
|
[
"javascript",
"python",
"django"
] |
How to keep every nth item in a list and make the rest zeros
| 39,467,082 |
<p>I am trying to model and fit to noisy data over a long time series and I want to see what happens to my fit if I remove a substantial amount of my data.</p>
<p>I have a long time-series of data and I am only interested in every nth item. However I still want to plot this list over time but with every other unwanted element removed.</p>
<p>For example, for n=4, the list</p>
<pre><code>a = [1,2,3,4,5,6,7,8,9,10...]
</code></pre>
<p>Should become</p>
<pre><code>a_new = [1,0,0,0,5,0,0,0,9,0...]
</code></pre>
<p>I don't mind if the <em>position</em> of the nth item is at the start or end of the sequence, my series is effectively arbitrary and so long that it won't matter what I delete. For example 'a_new' could also be:</p>
<pre><code>a_new = [0,0,0,4,0,0,0,8,0,0...]
</code></pre>
<p>Ideally the solution wouldn't depend on the length of the list, but I can have that length as a variable.</p>
<p>Edit 1:</p>
<p>I actually wanted empty elements, not zeros (if that's possible?), so:</p>
<pre><code>a_new = [1,,,,5,,,,9...]
</code></pre>
<p>Edit 2:</p>
<p>I needed to remove the corresponding elements from my time series too so that when everything is plotted, each data element has the same index as the time series element.</p>
<p>Thanks!</p>
| 1 |
2016-09-13T09:52:41Z
| 39,467,128 |
<p>Use a <em>list comprehension</em> with a <a href="http://stackoverflow.com/questions/394809/does-python-have-a-ternary-conditional-operator"><em>ternary conditional</em></a> that takes the <code>mod</code> of each element on the number <code>n</code>:</p>
<pre><code>>>> a = [1,2,3,4,5,6,7,8,9,10]
>>> n = 4
>>> [i if i % n == 0 else 0 for i in a]
[0, 0, 0, 4, 0, 0, 0, 8, 0, 0]
</code></pre>
<hr>
<p>In case the data does not proceed incrementally, which is most likely, use <code>enumerate</code> so the <code>mod</code> is taken on the index and not on the element:</p>
<pre><code>>>> [v if i % n == 0 else 0 for i, v in enumerate(a)]
[1, 0, 0, 0, 5, 0, 0, 0, 9, 0]
</code></pre>
<p>The starting point can also be easily changed when using <code>enumerate</code>:</p>
<pre><code>>>> [v if i % n == 0 else 0 for i, v in enumerate(a, 1)] # start indexing from 1
[0, 0, 0, 4, 0, 0, 0, 8, 0, 0]
</code></pre>
<hr>
<p>If you intend to <em>remove</em> your unwanted data rather than replace them, then a <em>filter</em> using <code>if</code> (instead of the ternary operator) in the list comprehension can handle this:</p>
<pre><code>>>> [v for i, v in enumerate(a, 1) if i % n == 0]
[4, 8]
</code></pre>
| 5 |
2016-09-13T09:54:44Z
|
[
"python",
"list"
] |
How to keep every nth item in a list and make the rest zeros
| 39,467,082 |
<p>I am trying to model and fit to noisy data over a long time series and I want to see what happens to my fit if I remove a substantial amount of my data.</p>
<p>I have a long time-series of data and I am only interested in every nth item. However I still want to plot this list over time but with every other unwanted element removed.</p>
<p>For example, for n=4, the list</p>
<pre><code>a = [1,2,3,4,5,6,7,8,9,10...]
</code></pre>
<p>Should become</p>
<pre><code>a_new = [1,0,0,0,5,0,0,0,9,0...]
</code></pre>
<p>I don't mind if the <em>position</em> of the nth item is at the start or end of the sequence, my series is effectively arbitrary and so long that it won't matter what I delete. For example 'a_new' could also be:</p>
<pre><code>a_new = [0,0,0,4,0,0,0,8,0,0...]
</code></pre>
<p>Ideally the solution wouldn't depend on the length of the list, but I can have that length as a variable.</p>
<p>Edit 1:</p>
<p>I actually wanted empty elements, not zeros (if that's possible?), so:</p>
<pre><code>a_new = [1,,,,5,,,,9...]
</code></pre>
<p>Edit 2:</p>
<p>I needed to remove the corresponding elements from my time series too so that when everything is plotted, each data element has the same index as the time series element.</p>
<p>Thanks!</p>
| 1 |
2016-09-13T09:52:41Z
| 39,467,239 |
<pre><code>[0 if i % 4 else num for i, num in enumerate(a)]  # keeps indices 0, 4, 8, ...; zeroes the rest
</code></pre>
| 0 |
2016-09-13T10:00:43Z
|
[
"python",
"list"
] |
How to keep every nth item in a list and make the rest zeros
| 39,467,082 |
<p>I am trying to model and fit to noisy data over a long time series and I want to see what happens to my fit if I remove a substantial amount of my data.</p>
<p>I have a long time-series of data and I am only interested in every nth item. However I still want to plot this list over time but with every other unwanted element removed.</p>
<p>For example, for n=4, the list</p>
<pre><code>a = [1,2,3,4,5,6,7,8,9,10...]
</code></pre>
<p>Should become</p>
<pre><code>a_new = [1,0,0,0,5,0,0,0,9,0...]
</code></pre>
<p>I don't mind if the <em>position</em> of the nth item is at the start or end of the sequence, my series is effectively arbitrary and so long that it won't matter what I delete. For example 'a_new' could also be:</p>
<pre><code>a_new = [0,0,0,4,0,0,0,8,0,0...]
</code></pre>
<p>Ideally the solution wouldn't depend on the length of the list, but I can have that length as a variable.</p>
<p>Edit 1:</p>
<p>I actually wanted empty elements, not zeros (if that's possible?), so:</p>
<pre><code>a_new = [1,,,,5,,,,9...]
</code></pre>
<p>Edit 2:</p>
<p>I needed to remove the corresponding elements from my time series too so that when everything is plotted, each data element has the same index as the time series element.</p>
<p>Thanks!</p>
| 1 |
2016-09-13T09:52:41Z
| 39,467,273 |
<p>Here's a working example to filter functions given a certain step K:</p>
<pre><code>def filter_f(data, K=4):
if K <= 0:
return data
N = len(data)
f_filter = [0 if i % K else 1 for i in range(N)]
return [a * b for a, b in zip(data, f_filter)]
f_input = range(10)
for K in range(10):
print("Original function: {0}".format(f_input))
print("Filtered function (step={0}): {1}".format(
K, filter_f(f_input, K)))
print("-" * 80)
</code></pre>
<p>Output:</p>
<pre><code>Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=0): [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=1): [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=2): [0, 0, 2, 0, 4, 0, 6, 0, 8, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=3): [0, 0, 0, 3, 0, 0, 6, 0, 0, 9]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=4): [0, 0, 0, 0, 4, 0, 0, 0, 8, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=5): [0, 0, 0, 0, 0, 5, 0, 0, 0, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=6): [0, 0, 0, 0, 0, 0, 6, 0, 0, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=7): [0, 0, 0, 0, 0, 0, 0, 7, 0, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=8): [0, 0, 0, 0, 0, 0, 0, 0, 8, 0]
--------------------------------------------------------------------------------
Original function: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Filtered function (step=9): [0, 0, 0, 0, 0, 0, 0, 0, 0, 9]
--------------------------------------------------------------------------------
</code></pre>
| 0 |
2016-09-13T10:02:15Z
|
[
"python",
"list"
] |
Can't get this custom logging adapter example to work
| 39,467,271 |
<p>I have been looking at examples related to context logging here:
<a href="https://docs.python.org/2/howto/logging-cookbook.html#using-loggeradapters-to-impart-contextual-information" rel="nofollow" title="Logging Cookbook">Logging Cookbook</a></p>
<p>However, I cannot get the following example to work. The example demonstrates the use of custom adapters and goes as follows:</p>
<pre><code># Here is a simple example:
class CustomAdapter(logging.LoggerAdapter):
"""
This example adapter expects the passed in dict-like object to have a
'connid' key, whose value in brackets is prepended to the log message.
"""
def process(self, msg, kwargs):
return '[%s] %s' % (self.extra['connid'], msg), kwargs
# which you can use like this:
logger = logging.getLogger(__name__)
adapter = CustomAdapter(logger, {'connid': some_conn_id})
# Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages.
</code></pre>
<p>However, no matter what I have tried, I always get a key error:</p>
<pre><code>logger = logging.getLogger(__name__)
syslog = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s <<<CONTEXT: %(my_context)s>>> : %(message)s')
syslog.setFormatter(formatter)
adapter = CustomAdapter(logger, {'my_context': '1956'})
logger.setLevel(logging.INFO)
logger.addHandler(syslog)
logger.info('The sky is so blue', {'my_context': '6642'})
</code></pre>

<pre><code>Traceback (most recent call last):
File "/Users/me/apps/Darwin64/python2.7/lib/python2.7/logging/__init__.py", line 859, in emit
msg = self.format(record)
File "/Users/me/apps/Darwin64/python2.7/lib/python2.7/logging/__init__.py", line 732, in format
return fmt.format(record)
File "/Users/me/apps/Darwin64/python2.7/lib/python2.7/logging/__init__.py", line 474, in format
s = self._fmt % record.__dict__
KeyError: 'my_context'
Logged from file myApp.py, line 62
</code></pre>
<p>What is it that I am doing wrong?</p>
<p>--- Solution: EDIT_01 ---</p>
<p>I have changed the code so that it uses <code>adapter.info('The sky is so blue', {'my_context': '6642'})</code>, and it worked; however, I had to remove <code>my_context</code> from the formatter. With the code below, though, the <code>my_context</code> bit is hardcoded, and no matter what I pass in through the logger it always displays the initial value. Is there a way to pass in some values to the adapter?</p>
<pre><code>logger = logging.getLogger(__name__)
syslog = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(message)s')
syslog.setFormatter(formatter)
logger.addHandler(syslog)
adapter = CustomAdapter(logger, {'my_context': '1956'})
logger.setLevel(logging.INFO)
adapter.info('The sky is so blue', {'my_context': '6642'})
</code></pre>
<p>This would always generate:</p>
<pre><code>2016-09-13 11:33:18,404 [1956] The sky is so blue
</code></pre>
<p>even though we are passing <code>6642</code> through the logger.</p>
| 0 |
2016-09-13T10:02:12Z
| 39,467,475 |
<p>You have to use the adapter for logging, not the logger. Try this:</p>
<pre><code>import logging
class CustomAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
# use my_context from kwargs or the default given on instantiation
my_context = kwargs.pop('my_context', self.extra['my_context'])
return '[%s] %s' % (my_context, msg), kwargs
logger = logging.getLogger(__name__)
syslog = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(message)s')
syslog.setFormatter(formatter)
logger.addHandler(syslog)
adapter = CustomAdapter(logger, {'my_context': '1956'})
logger.setLevel(logging.INFO)
adapter.info('The sky is so blue', my_context='6642')
adapter.info('The sky is so blue')
</code></pre>
<p>Output:</p>
<pre><code>2016-09-13 14:49:28,539 [6642] The sky is so blue
2016-09-13 14:49:28,540 [1956] The sky is so blue
</code></pre>
| 1 |
2016-09-13T10:12:32Z
|
[
"python",
"logging"
] |
Install python package zipline on cloud 9 environment Support workspace python
| 39,467,287 |
<p>I am trying to install the Python package zipline in the Cloud9 environment.</p>
<p>I simply ran the following, from the <a href="http://www.zipline.io/install.html" rel="nofollow">installation tutorial</a>:</p>
<pre><code>pip install zipline
</code></pre>
<p>However, I get:</p>
<pre><code>Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ubuntu/zipline
Storing debug log for failure in /home/ubuntu/.pip/pip.log
</code></pre>
<p>My full log looks like the following:</p>
<pre><code>------------------------------------------------------------
/usr/bin/pip run on Tue Sep 13 06:28:52 2016
Downloading/unpacking zipline
Getting page https://pypi.python.org/simple/zipline/
URLs to search for versions for zipline:
* https://pypi.python.org/simple/zipline/
Analyzing links from page https://pypi.python.org/simple/zipline/
Found link https://pypi.python.org/packages/06/8e/8355df80f313706418ee9db3521c6f0578426d92b6dcddf396d58c4de2e6/zipline-1.0.2.tar.gz#md5=4c7958ad131ebbeeec7d4399bdeff12b (from https://pypi.python.org/simple/zipline/), version: 1.0.2
Found link https://pypi.python.org/packages/17/d3/8c58193ec8052d86ff67c00a285a01f646d3d53106844716090c22bba15e/zipline-0.6.0.tar.gz#md5=75d818c291df133946bb15a1b08ae0d8 (from https://pypi.python.org/simple/zipline/), version: 0.6.0
Found link https://pypi.python.org/packages/1f/8c/55ea3687c717bc532753860196222f12717201719625d043448ea94f4ff6/zipline-0.5.10.tar.gz#md5=48f394a5ea83848642d879cbeaba8342 (from https://pypi.python.org/simple/zipline/), version: 0.5.10
Found link https://pypi.python.org/packages/22/4a/affbfd183fa4133d13ecebc069bf9887c7755e531175384b57879daecfe2/zipline-0.8.3.tar.gz#md5=042ffcee614d2279add9a1bfd27a33cf (from https://pypi.python.org/simple/zipline/), version: 0.8.3
Found link https://pypi.python.org/packages/32/29/27e8b5963f1a366c95999ae420c0221f94c3622b69af02f51f7c8b57f086/zipline-0.7.0.tar.gz#md5=62d45c3c0d9a624e787b7e413937f7b4 (from https://pypi.python.org/simple/zipline/), version: 0.7.0
Found link https://pypi.python.org/packages/46/84/4e00850f8cd2809b2630348d19d1ccb18959be0d82907cef5413fef6436b/zipline-0.5.9.tar.gz#md5=b8a983ba23b0c7d4ab18a59af9055ab6 (from https://pypi.python.org/simple/zipline/), version: 0.5.9
Found link https://pypi.python.org/packages/48/fa/d7dc2e7aca9574b3e06c2d105bb268b85b54633ae8580afe4dd8d607e4be/zipline-0.5.3.tar.gz#md5=e0809b60dc3775868f35ca56d6801f62 (from https://pypi.python.org/simple/zipline/), version: 0.5.3
Found link https://pypi.python.org/packages/51/12/9e062af374e826d13e01a705b8ce9532e954d24694cf807fc699fc0978b2/zipline-0.8.1.tar.gz#md5=39a01cc0b79927122abf9f0a9aa9361d (from https://pypi.python.org/simple/zipline/), version: 0.8.1
Skipping https://pypi.python.org/packages/60/87/7d6d6bd43af11482551eb17cde392b034d7f9d5f43be6f805be51b7bef27/zipline-0.8.3-cp34-cp34m-macosx_10_10_x86_64.whl#md5=164b2ecae2d6debba9755f6cfa69633f (from https://pypi.python.org/simple/zipline/) because it is not compatible with this Python
Found link https://pypi.python.org/packages/76/b7/c420885e7f7b7a664797674c85886a608ae441ae9a45631f9d997ad6392c/zipline-0.5.0.tar.gz#md5=a7cc9112ef0d028768bb80e05f4ce2b4 (from https://pypi.python.org/simple/zipline/), version: 0.5.0
Skipping https://pypi.python.org/packages/7c/c5/ad47e2ec20b9fd9e907d5e01d448c9e7dbf82b3068301566baf0c628acf9/zipline-0.8.0-cp34-cp34m-macosx_10_10_x86_64.whl#md5=c455dc96d1b409b7592ed4f7db193429 (from https://pypi.python.org/simple/zipline/) because it is not compatible with this Python
Found link https://pypi.python.org/packages/7e/51/cb0336444aeb679cda1aabb6d7c264c1ae8562488503728ee0b23a5101ce/zipline-0.5.5.tar.gz#md5=dca0bb54cfe6ec7c92340e30d8f07469 (from https://pypi.python.org/simple/zipline/), version: 0.5.5
Skipping https://pypi.python.org/packages/82/2e/024769455739b6da65ccaa99fdb668acebf614e1db4e241bb4a1117efdda/zipline-0.8.0-cp27-none-macosx_10_10_x86_64.whl#md5=69bc977edb5b51e5411d04b816c00c56 (from https://pypi.python.org/simple/zipline/) because it is not compatible with this Python
Found link https://pypi.python.org/packages/89/4b/c954275b6a6582c10b6e0ef80d7fe9486e269a26338b42c99795639a46d8/zipline-0.5.6.tar.gz#md5=ab0c71f227e63c61499237b0023964df (from https://pypi.python.org/simple/zipline/), version: 0.5.6
Found link https://pypi.python.org/packages/95/9e/98e2f815f3dd4e8f49b48431f4fc9ff8336270a9c77f5da147cf05b346e1/zipline-0.7.0rc1.tar.gz#md5=5963340584c1bddd446c09bfbbedd38c (from https://pypi.python.org/simple/zipline/), version: 0.7.0rc1
Found link https://pypi.python.org/packages/97/cf/75e5a64093facaf990baee15eaa2cec4bb204b73cc9aa16d2dbc020591f6/zipline-0.8.4.tar.gz#md5=b385dfee59a0b0408a8dcaae7c4c1d0e (from https://pypi.python.org/simple/zipline/), version: 0.8.4
Found link https://pypi.python.org/packages/9a/aa/5c71f400aba2586619f43b577c99c571f45cc06db3674abc063d109db018/zipline-0.5.7.tar.gz#md5=1fd26989fe73eeb4ad0bb3f997311717 (from https://pypi.python.org/simple/zipline/), version: 0.5.7
Found link https://pypi.python.org/packages/a9/5a/5b716619d4bf9b0424ff9f975f8e4bc28641b0f4ffaa57a455722bf60cff/zipline-0.5.4.tar.gz#md5=98e873cb49bbf397b4ca17f48432a9e2 (from https://pypi.python.org/simple/zipline/), version: 0.5.4
Found link https://pypi.python.org/packages/b5/52/708c93e51bb3e9b49dcdd9641e62fa5f1c60cf4172c0cf45567a2dece996/zipline-0.8.2.tar.gz#md5=2ddca99691409ebb8c85012ddbaecb19 (from https://pypi.python.org/simple/zipline/), version: 0.8.2
Found link https://pypi.python.org/packages/c2/5a/0471cad4b5b392074c78622a782c2614b5c3bab68418ae3c4315031d12be/zipline-1.0.1.tar.gz#md5=645d7df6286b16466df5ed8225fd2dc7 (from https://pypi.python.org/simple/zipline/), version: 1.0.1
Skipping https://pypi.python.org/packages/d5/34/f211a756945bc765dfeddb86d31b99b2992c123279a24c26cfd7d942575d/zipline-0.8.3-cp27-none-macosx_10_10_x86_64.whl#md5=1c87ce91a7b559b14be657c164e78c87 (from https://pypi.python.org/simple/zipline/) because it is not compatible with this Python
Found link https://pypi.python.org/packages/d9/90/05ad3dc4e6fee0b890e5316113a2eea457a85928382ae7eacec346937f6c/zipline-0.6.1.tar.gz#md5=e07499447eccdfc97d57478daef4d114 (from https://pypi.python.org/simple/zipline/), version: 0.6.1
Found link https://pypi.python.org/packages/dc/e0/341e3c5775201b998bd69a81c2b47cc2b40d09c56730163f4b6a9ef4336d/zipline-0.5.2.tar.gz#md5=a99e177d7aeb780b00884119069b2f36 (from https://pypi.python.org/simple/zipline/), version: 0.5.2
Found link https://pypi.python.org/packages/df/ef/ffab7fc9bc6d4e833446a012421b470ff8c67f118a5efdbe0a593690aaa0/zipline-0.5.8.tar.gz#md5=512bb1ca2a13861c6128a852e627193f (from https://pypi.python.org/simple/zipline/), version: 0.5.8
Found link https://pypi.python.org/packages/ec/98/e46201a8c0041112c67e7563bdd0d2c2f930f037f27cf157d1210406e0d2/zipline-0.9.0.tar.gz#md5=020494f647c8f5adab3a06ddb7e42dcc (from https://pypi.python.org/simple/zipline/), version: 0.9.0
Found link https://pypi.python.org/packages/ee/b4/c445d8b4821e8a170dc46de3d058ea07b0fdc3dedce4f9692ebca408ebb6/zipline-0.5.1.tar.gz#md5=2f91b1c1081a401cee28b6f8583da150 (from https://pypi.python.org/simple/zipline/), version: 0.5.1
Found link https://pypi.python.org/packages/f3/42/449f570dca8b46edc0a65e35ce5543b6b4d1ee7b8bfebd06277b054442f7/zipline-1.0.0.tar.gz#md5=cc04ef77b46b631ea05b9953b9c6d587 (from https://pypi.python.org/simple/zipline/), version: 1.0.0
Ignoring link https://pypi.python.org/packages/95/9e/98e2f815f3dd4e8f49b48431f4fc9ff8336270a9c77f5da147cf05b346e1/zipline-0.7.0rc1.tar.gz#md5=5963340584c1bddd446c09bfbbedd38c (from https://pypi.python.org/simple/zipline/), version 0.7.0rc1 is a pre-release (use --pre to allow).
Using version 1.0.2 (newest of versions: 1.0.2, 1.0.1, 1.0.0, 0.9.0, 0.8.4, 0.8.3, 0.8.2, 0.8.1, 0.7.0, 0.6.1, 0.6.0, 0.5.10, 0.5.9, 0.5.8, 0.5.7, 0.5.6, 0.5.5, 0.5.4, 0.5.3, 0.5.2, 0.5.1, 0.5.0)
Downloading from URL https://pypi.python.org/packages/06/8e/8355df80f313706418ee9db3521c6f0578426d92b6dcddf396d58c4de2e6/zipline-1.0.2.tar.gz#md5=4c7958ad131ebbeeec7d4399bdeff12b (from https://pypi.python.org/simple/zipline/)
Running setup.py (path:/tmp/pip_build_ubuntu/zipline/setup.py) egg_info for package zipline
warning: no files found matching '*.pyx' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.h' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Utility'
Unable to find pgen, not compiling formal grammar.
Installed /tmp/pip_build_ubuntu/zipline/Cython-0.24.1-py2.7-linux-x86_64.egg
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip_build_ubuntu/zipline/setup.py", line 300, in <module>
**conditional_arguments
File "/usr/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 239, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 264, in fetch_build_eggs
replace_conflicting=True
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 620, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 852, in best_match
dist = working_set.find(req)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 503, in find
raise VersionConflict(dist,req) # XXX add more info
pkg_resources.VersionConflict: (numpy 1.8.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('numpy>=1.9.2'))
Complete output from command python setup.py egg_info:
warning: no files found matching '*.pyx' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.h' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Utility'
Unable to find pgen, not compiling formal grammar.
Installed /tmp/pip_build_ubuntu/zipline/Cython-0.24.1-py2.7-linux-x86_64.egg
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip_build_ubuntu/zipline/setup.py", line 300, in <module>
**conditional_arguments
File "/usr/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 239, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 264, in fetch_build_eggs
replace_conflicting=True
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 620, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 852, in best_match
dist = working_set.find(req)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 503, in find
raise VersionConflict(dist,req) # XXX add more info
pkg_resources.VersionConflict: (numpy 1.8.2 (/usr/lib/python2.7/dist-packages), Requirement.parse('numpy>=1.9.2'))
----------------------------------------
Cleaning up...
Removing temporary dir /tmp/pip_build_ubuntu...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ubuntu/zipline
Exception information:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1230, in prepare_files
req_to_install.run_egg_info()
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 326, in run_egg_info
command_desc='python setup.py egg_info')
File "/usr/lib/python2.7/dist-packages/pip/util.py", line 715, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ubuntu/zipline
</code></pre>
<p>Any suggestions why it is not working?</p>
<p>I appreciate your replies!</p>
| 4 |
2016-09-13T10:03:10Z
| 39,542,507 |
<p>As recommended by <code>zipline</code>'s <a href="http://www.zipline.io/install.html#installing-with-pip" rel="nofollow">documentation</a>, this answer uses a virtualenv where we use <code>pip</code> to install and manage <code>zipline</code> and its dependencies.</p>
<p>First, go to the terminal in your Cloud9 IDE to update the package lists and install the necessary <a href="http://www.zipline.io/install.html#gnu-linux" rel="nofollow">dependencies</a> and <code>python-virtualenv</code>:</p>
<pre><code>sudo apt-get update
sudo apt-get install libatlas-base-dev python-dev gfortran pkg-config libfreetype6-dev python-virtualenv
</code></pre>
<p>Note that <code>numpy</code> is required but unfortunately, the <code>python-numpy</code> package in the Ubuntu 14.04 repositories is too old for <code>zipline</code>, so we use <code>pip</code> instead to install <code>numpy</code>.</p>
<p>Now, create a virtualenv at your project directory, activate the virtualenv, upgrade <code>pip</code> and <code>setuptools</code>, and then install <code>numpy</code> and <code>zipline</code>:</p>
<pre><code>virtualenv venv # create a virtualenv called venv
source venv/bin/activate # activate virtualenv
pip install -U pip setuptools # upgrade pip and setuptools
pip install numpy # install numpy
pip install zipline # install zipline
</code></pre>
<p>The <code>pip</code> installation for <code>zipline</code> will take a while; please be patient. Once you are done working in the virtualenv, deactivate it by running:</p>
<pre><code>deactivate
</code></pre>
| 2 |
2016-09-17T03:12:51Z
|
[
"python",
"ubuntu",
"c9.io"
] |
How to count concurrent events in a dataframe in one line?
| 39,467,341 |
<p>I have a dataset with phone calls. I want to count how many active calls there are for each record.
I found this <a href="https://stackoverflow.com/questions/24745882/pandas-cumulative-sum-using-current-row-as-condition">question</a> but I'd like to avoid loops and functions.</p>
<p>Each call has a <code>date</code>, a <code>start time</code> and a <code>end time</code>.</p>
<p>The dataframe:</p>
<pre><code> start end date
0 09:17:12 09:18:20 2016-08-10
1 09:15:58 09:17:42 2016-08-11
2 09:16:40 09:17:49 2016-08-11
3 09:17:05 09:18:03 2016-08-11
4 09:18:22 09:18:30 2016-08-11
</code></pre>
<p>What I want:</p>
<pre><code> start end date activecalls
0 09:17:12 09:18:20 2016-08-10 1
1 09:15:58 09:17:42 2016-08-11 1
2 09:16:40 09:17:49 2016-08-11 2
3 09:17:05 09:18:03 2016-08-11 3
4 09:18:22 09:18:30 2016-08-11 1
</code></pre>
<p>My code:</p>
<pre><code>import pandas as pd
df = pd.read_clipboard(sep='\s\s+')
df['activecalls'] = df[(df['start'] <= df.loc[df.index]['start']) & \
(df['end'] > df.loc[df.index]['start']) & \
(df['date'] == df.loc[df.index]['date'])].count()
print(df)
</code></pre>
<p>What I get: </p>
<pre><code> start end date activecalls
0 09:17:12 09:18:20 2016-08-10 NaN
1 09:15:58 09:17:42 2016-08-11 NaN
2 09:16:40 09:17:49 2016-08-11 NaN
3 09:17:05 09:18:03 2016-08-11 NaN
4 09:18:22 09:18:30 2016-08-11 NaN
</code></pre>
| 1 |
2016-09-13T10:05:48Z
| 39,484,971 |
<p>You can use:</p>
<pre><code>#convert time and date to datetime
df['date_start'] = pd.to_datetime(df.start + ' ' + df.date)
df['date_end'] = pd.to_datetime(df.end + ' ' + df.date)
#remove columns
df = df.drop(['start','end','date'], axis=1)
</code></pre>
<p>Solution with loop:</p>
<pre><code>active_events= []
for i in df.index:
active_events.append(len(df[(df["date_start"]<=df.loc[i,"date_start"]) &
(df["date_end"]> df.loc[i,"date_start"])]))
df['activecalls'] = pd.Series(active_events)
print (df)
date_start date_end activecalls
0 2016-08-10 09:17:12 2016-08-10 09:18:20 1
1 2016-08-11 09:15:58 2016-08-11 09:17:42 1
2 2016-08-11 09:16:40 2016-08-11 09:17:49 2
3 2016-08-11 09:17:05 2016-08-11 09:18:03 3
4 2016-08-11 09:18:22 2016-08-11 09:18:30 1
</code></pre>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a></p>
<pre><code>#cross join
df['tmp'] = 1
df1 = pd.merge(df,df.reset_index(),on=['tmp'])
df = df.drop('tmp', axis=1)
#print (df1)
#filtering by conditions
df1 = df1[(df1["date_start_x"]<=df1["date_start_y"]) &
          (df1["date_end_x"]> df1["date_start_y"])]
print (df1)
date_start_x date_end_x activecalls_x tmp index \
0 2016-08-10 09:17:12 2016-08-10 09:18:20 1 1 0
6 2016-08-11 09:15:58 2016-08-11 09:17:42 1 1 1
7 2016-08-11 09:15:58 2016-08-11 09:17:42 1 1 2
8 2016-08-11 09:15:58 2016-08-11 09:17:42 1 1 3
12 2016-08-11 09:16:40 2016-08-11 09:17:49 2 1 2
13 2016-08-11 09:16:40 2016-08-11 09:17:49 2 1 3
18 2016-08-11 09:17:05 2016-08-11 09:18:03 3 1 3
24 2016-08-11 09:18:22 2016-08-11 09:18:30 1 1 4
date_start_y date_end_y activecalls_y
0 2016-08-10 09:17:12 2016-08-10 09:18:20 1
6 2016-08-11 09:15:58 2016-08-11 09:17:42 1
7 2016-08-11 09:16:40 2016-08-11 09:17:49 2
8 2016-08-11 09:17:05 2016-08-11 09:18:03 3
12 2016-08-11 09:16:40 2016-08-11 09:17:49 2
13 2016-08-11 09:17:05 2016-08-11 09:18:03 3
18 2016-08-11 09:17:05 2016-08-11 09:18:03 3
24 2016-08-11 09:18:22 2016-08-11 09:18:30 1
</code></pre>
<pre><code>#get size - active calls
print (df1.groupby(['index'], sort=False).size())
index
0 1
1 1
2 2
3 3
4 1
dtype: int64
df['activecalls'] = df1.groupby('index').size()
print (df)
date_start date_end activecalls
0 2016-08-10 09:17:12 2016-08-10 09:18:20 1
1 2016-08-11 09:15:58 2016-08-11 09:17:42 1
2 2016-08-11 09:16:40 2016-08-11 09:17:49 2
3 2016-08-11 09:17:05 2016-08-11 09:18:03 3
4 2016-08-11 09:18:22 2016-08-11 09:18:30 1
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>def a(df):
active_events= []
for i in df.index:
active_events.append(len(df[(df["date_start"]<=df.loc[i,"date_start"]) & (df["date_end"]> df.loc[i,"date_start"])]))
df['activecalls'] = pd.Series(active_events)
return (df)
def b(df):
df['tmp'] = 1
df1 = pd.merge(df,df.reset_index(),on=['tmp'])
df = df.drop('tmp', axis=1)
df1 = df1[(df1["date_start_x"]<=df1["date_start_y"]) & (df1["date_end_x"]> df1["date_start_y"])]
df['activecalls'] = df1.groupby('index').size()
return (df)
print (a(df))
print (b(df))
In [160]: %timeit (a(df))
100 loops, best of 3: 6.76 ms per loop
In [161]: %timeit (b(df))
The slowest run took 4.42 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 4.61 ms per loop
</code></pre>
| 1 |
2016-09-14T07:39:48Z
|
[
"python",
"python-3.x",
"datetime",
"pandas",
"condition"
] |
Python Site Scrape Help Needed
| 39,467,369 |
<p>I'm new to python (and a lot of coding outside of SQL, SAS, and a little R) and I am trying to use it to build a dataset based on data from a number of different web pages. Thanks in advance for your help.</p>
<p>I am using Python 3.4.4 and have successfully pulled the code of the sites, but I'm having trouble writing the regex code to isolate the specific data elements/metrics I want. Below is a sample of the webpage's code; I want to isolate the whole numbers by themselves inside the <code>td</code> elements with class attributes.</p>
<pre><code><tr class="Company"><td class="Company"> <a href="http://www.theacsi.org/index.php?option=com_content&view=article&id=149&catid=&Itemid=214&amp;amp;c=Liz+Claiborne&amp;amp;i=Apparel" id="L">Liz Claiborne</a> </td><td class="Baseline"> 84 </td><td class="Y1995"> 81 </td><td class="Y1996"> 81 </td><td class="Y1997"> 77 </td><td class="Y1998"> 78 </td><td class="Y1999"> 76 </td><td class="Y2000"> 79 </td><td class="Y2001"> 79 </td><td class="Y2002"> 80 </td><td class="Y2003"> 78 </td><td class="Y2004"> 79 </td><td class="Y2005"> 78 </td><td class="Y2006"> 81 </td><td class="Y2007"> 79 </td><td class="Y2008"> 79 </td><td class="Y2009"> 82 </td><td class="Y2010"> 79 </td><td class="Y2011"> 79 </td><td clas
</code></pre>
| -3 |
2016-09-13T10:07:28Z
| 39,467,677 |
<p>I think you might want to look at lxml and XPath, not to mention other scraping tools. There are thousands of posts in this area. Check the link below:</p>
<p><a href="http://docs.python-guide.org/en/latest/scenarios/scrape/" rel="nofollow">http://docs.python-guide.org/en/latest/scenarios/scrape/</a></p>
<p>If you don't want to use other modules, use the built-in <code>re</code> (regular expressions) module, which provides useful tools for extracting specific text from a string.</p>
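<p>For example, here is a minimal sketch using only the standard-library <code>re</code> module. The pattern is tailored to the sample row posted in the question and assumes each number sits between a class-labelled <code>td</code> open tag and its close tag; adjust it to the real page as needed:</p>

```python
import re

# Fragment taken from the sample row in the question
snippet = ('<td class="Baseline"> 84 </td><td class="Y1995"> 81 </td>'
           '<td class="Y1996"> 81 </td><td class="Y1997"> 77 </td>')

# Capture each td's class attribute and the whole number inside the cell
pairs = re.findall(r'<td class="(\w+)">\s*(\d+)\s*</td>', snippet)
scores = {name: int(value) for name, value in pairs}
print(scores)  # {'Baseline': 84, 'Y1995': 81, 'Y1996': 81, 'Y1997': 77}
```

<p>Note that regexes are brittle for real-world HTML, which is why an HTML parser such as lxml is usually the safer choice.</p>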
| 0 |
2016-09-13T10:23:06Z
|
[
"python",
"html",
"regex"
] |
error when using keras' sk-learn API
| 39,467,496 |
<p>I'm learning Keras these days, and I ran into an error when using its scikit-learn API. Here is some information that may be useful:</p>
<p><strong>ENVIRONMENT</strong>: </p>
<pre><code>python:3.5.2
keras:1.0.5
scikit-learn:0.17.1
</code></pre>
<p><strong>CODE</strong></p>
<pre><code>import pandas as pd
from keras.layers import Input, Dense
from keras.models import Model
from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sqlalchemy import create_engine
from sklearn.cross_validation import KFold
def read_db():
"get prepared data from mysql."
con_str = "mysql+mysqldb://root:0000@localhost/nbse?charset=utf8"
engine = create_engine(con_str)
data = pd.read_sql_table('data_ml', engine)
return data
def nn_model():
"create a model."
model = Sequential()
model.add(Dense(output_dim=100, input_dim=105, activation='softplus'))
model.add(Dense(output_dim=1, input_dim=100, activation='softplus'))
model.compile(loss='mean_squared_error', optimizer='adam')
return model
data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()
model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
model.fit(x,y) #something wrong here!
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/forecast/gridsearch.py", line 43, in <module>
model.fit(x,y)
File "D:\Program Files\Python35\lib\site-packages\keras\wrappers\scikit_learn.py", line 135, in fit
**self.filter_sk_params(self.build_fn.__call__))
TypeError: __call__() missing 1 required positional argument: 'x'
Process finished with exit code 1
</code></pre>
<p>The model works fine without being wrapped in KerasRegressor, but I want to use scikit-learn's grid search afterwards, so I'm here for help. I have tried but still have no idea.</p>
<p><strong>Something that may be helpful:</strong></p>
<pre><code>keras.warappers.scikit_learn.py
class BaseWrapper(object):
def __init__(self, build_fn=None, **sk_params):
self.build_fn = build_fn
self.sk_params = sk_params
self.check_params(sk_params)
def fit(self, X, y, **kwargs):
'''Construct a new model with build_fn and fit the model according
to the given training data.
# Arguments
X : array-like, shape `(n_samples, n_features)`
Training samples where n_samples in the number of samples
and n_features is the number of features.
y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)`
True labels for X.
kwargs: dictionary arguments
Legal arguments are the arguments of `Sequential.fit`
# Returns
history : object
details about the training history at each epoch.
'''
if self.build_fn is None:
self.model = self.__call__(**self.filter_sk_params(self.__call__))
elif not isinstance(self.build_fn, types.FunctionType):
self.model = self.build_fn(
**self.filter_sk_params(self.build_fn.__call__))
else:
self.model = self.build_fn(**self.filter_sk_params(self.build_fn))
loss_name = self.model.loss
if hasattr(loss_name, '__name__'):
loss_name = loss_name.__name__
if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:
y = to_categorical(y)
fit_args = copy.deepcopy(self.filter_sk_params(Sequential.fit))
fit_args.update(kwargs)
history = self.model.fit(X, y, **fit_args)
return history
</code></pre>
<p>The error occurred in this line:</p>
<pre><code> self.model = self.build_fn(
**self.filter_sk_params(self.build_fn.__call__))
</code></pre>
<p>self.build_fn here is keras.models.Sequential </p>
<pre><code>models.py
class Sequential(Model):
def call(self, x, mask=None):
if not self.built:
self.build()
return self.model.call(x, mask)
</code></pre>
<p>So, what does that x mean, and how do I fix this error?<br>
Thanks!</p>
| 2 |
2016-09-13T10:13:56Z
| 39,877,785 |
<p><a href="http://stackoverflow.com/users/6825808/xiao">xiao</a>, I ran into the same issue! Hopefully this helps:</p>
<h1>Background and The Issue</h1>
<p>The <a href="https://keras.io/scikit-learn-api/#wrappers-for-the-scikit-learn-api" rel="nofollow">documentation for Keras</a> states that, when implementing Wrappers for scikit-learn, there are two arguments. The first is the build function, which is a "callable function or class instance". Specifically, it states that:</p>
<blockquote>
<p><code>build_fn</code> should construct, compile and return a Keras model, which will then be used to fit/predict. One of the following three values could be passed to build_fn:</p>
<ol>
<li>A function</li>
<li>An instance of a class that implements the <strong>call</strong> method</li>
<li>None. This means you implement a class that inherits from either <code>KerasClassifier</code> or <code>KerasRegressor</code>. The <strong>call</strong> method of the present class will then be treated as the default build_fn.</li>
</ol>
</blockquote>
<p>In your code, you create the model, and then pass the model as the value for the argument <code>build_fn</code> when creating the <code>KerasRegressor</code> wrapper:</p>
<pre><code>model = nn_model()
model = KerasRegressor(build_fn=model, nb_epoch=2)
</code></pre>
<p>Herein lies the issue. Rather than passing your <code>nn_model</code> <strong>function</strong> as the <code>build_fn</code>, you pass an actual instance of the Keras <code>Sequential</code> model. For this reason, when <code>fit()</code> is called, it cannot find the <code>call</code> method, because it is not implemented in the class you returned.</p>
<h1>Proposed Solution</h1>
<p>What I did to make things work is pass the function as <code>build_fn</code>, rather than an actual model:</p>
<pre><code>data = read_db()
y = data.pop('PRICE').as_matrix()
x = data.as_matrix()
# model = nn_model() # Don't do this!
# set build_fn equal to the nn_model function
model = KerasRegressor(build_fn=nn_model, nb_epoch=2) # note that you do not call the function!
model.fit(x,y) # fixed!
</code></pre>
<p>This is not the only solution (you could set <code>build_fn</code> to a class that implements the <code>call</code> method appropriately), but the one that worked for me. I hope it helps you!</p>
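<p>To see why the distinction matters without needing Keras installed, here is a library-free sketch of the deferred construction the wrapper performs. <code>TinyWrapper</code> is a made-up stand-in for <code>KerasRegressor</code>, not real Keras code: it stores the callable and only invokes it inside <code>fit()</code>, so handing it an already-built model leaves it with nothing valid to call:</p>

```python
def nn_model():
    # Stand-in for a function that builds and compiles a Keras model
    return {"layers": ["Dense(100)", "Dense(1)"], "loss": "mse"}

class TinyWrapper:
    """Mimics how KerasRegressor defers model construction to fit()."""
    def __init__(self, build_fn):
        self.build_fn = build_fn  # must be a callable, not a finished model

    def fit(self):
        self.model = self.build_fn()  # called here, not at wrapper creation
        return self.model

wrapper = TinyWrapper(build_fn=nn_model)  # pass the function itself, uncalled
print(wrapper.fit()["loss"])  # mse
```

<p>Passing <code>nn_model()</code> (the returned model) instead of <code>nn_model</code> (the function) is exactly the mistake that produces the <code>__call__</code> error in the question.</p>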
| 0 |
2016-10-05T15:13:04Z
|
[
"python",
"machine-learning",
"scikit-learn",
"keras"
] |
save numpy arrays to txt
| 39,467,517 |
<p>I have two arrays (q, I) with different numbers of columns, and I want to save them in a txt file preserving the order of the columns, meaning in the txt file the arrays should look like:</p>
<pre><code>q, I0, I1, I2, ...
</code></pre>
<p>The shape of my arrays are:</p>
<pre><code>q.shape = (300, )
I.shape = (300, 12)
</code></pre>
<p>I tried this:</p>
<pre><code>save_arrays = np.array(zip(q, I))
np.savetxt('dummy.txt', save_arrays, delimiter='\t', newline='\r\n',
fmt='%.5f', header='q [A-1]/I [a.u.]')
</code></pre>
<p>The shape of save_arrays is:</p>
<pre><code>save_arrays.shape = (300, 2)
</code></pre>
<p>It has two columns instead of 13. Those columns are the single array q and the multi-column array I.</p>
<p>Anyway, I'm getting this TypeError as well:</p>
<pre><code>TypeError: Mismatch between array dtype ('object') and format specifier ('%.5f %.5f')
</code></pre>
<p>Any help is appreciated.</p>
| 0 |
2016-09-13T10:15:08Z
| 39,468,219 |
<p>Try <code>save_arrays = np.hstack((q[:,np.newaxis],I))</code></p>
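<p>A minimal sketch with dummy data of the shapes from the question (300 rows, 12 intensity columns), showing that the stacked array has the expected 13 columns before being written out:</p>

```python
import io
import numpy as np

# Dummy data matching the shapes in the question: q is (300,), I is (300, 12)
q = np.linspace(0.0, 1.0, 300)
I = np.random.rand(300, 12)

# q[:, np.newaxis] reshapes q into a (300, 1) column so hstack can place it
# in front of the 12 columns of I, giving the 13-column layout q, I0..I11
save_arrays = np.hstack((q[:, np.newaxis], I))
print(save_arrays.shape)  # (300, 13)

# savetxt then writes one row per line, 13 tab-separated values each
buf = io.StringIO()
np.savetxt(buf, save_arrays, delimiter='\t', fmt='%.5f',
           header='q [A-1]/I [a.u.]')
```

<p>Replace <code>buf</code> with the target filename ('dummy.txt') to write to disk.</p>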
| 1 |
2016-09-13T10:50:07Z
|
[
"python",
"arrays",
"numpy"
] |
Python Newbie - Rock, Paper, Scissors
| 39,467,619 |
<p>I'm very new to Python and decided to set myself a challenge of programming a Rock, Paper, Scissors game without copying someone else's code. However, I need help from a Pythonista grown-up!</p>
<p>I've seen many other variations on Rock, Paper, Scissors on here, but nothing that explains why my version isn't working. My program basically follows this format: set empty variables at the start, then define 4 functions that print the intro text, receive player input, randomly pick the computer's choice, and assess whether it's a win or a loss for the player. </p>
<p>This is all then stuck in a while loop that breaks once the player selects that they don't want to play anymore. (This bit is working fine)</p>
<p>However, whenever I run the code, it just always gives a draw and doesn't seem to store any data for the computer's choice function call. Does anybody know what I'm doing wrong?</p>
<p>Many thanks!</p>
<pre><code>import random
playerAnswer = ''
computerAnswer = ''
winsTotal = 0
timesPlayed = 0
def showIntroText():
print('Time to play Rock, Paper, Scissors.')
print('Type in your choice below:')
def playerChoose():
playerInput = input()
return
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return
def assessResult():
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
winsTotal += 1
return
while True:
timesPlayed += 1
showIntroText()
playerAnswer = playerChoose()
computerAnswer = computerChoose()
assessResult()
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games.')
</code></pre>
| 2 |
2016-09-13T10:20:16Z
| 39,467,910 |
<p>You have missed returning values in several places.</p>
<p>** Add '<strong>return playerInput</strong> ' in <strong>playerChoose()</strong> instead of only return.</p>
<p>** Add ' <strong>return computerPick</strong> ' in <strong>computerChoose()</strong> instead of return.</p>
<p>** Initialize <strong>winsTotal</strong> variable before using it as <strong>'winsTotal = 0</strong>' in <strong>assessResult()</strong>.</p>
<p>** The variables you have initialized at the start of the program are out of scope for the functions.</p>
<p>Please check this <a href="http://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules">StackOverFlow link</a> for <strong>understanding scope of variables in python</strong>.</p>
<p>** Add '<strong>return winsTotal</strong>' in <strong>assessResult()</strong> instead of return.</p>
<pre><code>import random
def showIntroText():
print('Time to play Rock, Paper, Scissors.')
print('Type in your choice below:')
def playerChoose():
playerInput = input()
return playerInput
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return computerPick
def assessResult(winsTotal):
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
winsTotal += 1
return winsTotal
total_win = 0
timesPlayed = 0
while True:
timesPlayed += 1
showIntroText()
playerAnswer = playerChoose()
computerAnswer = computerChoose()
total_win = assessResult(total_win)
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games. Out of which you won ' + str(total_win))
</code></pre>
<p>Output:</p>
<pre><code> C:\Users\dinesh_pundkar\Desktop>python c.py
Time to play Rock, Paper, Scissors.
Type in your choice below:
"Rock"
You win!
Do you want to play again? (y/n)
"y"
Time to play Rock, Paper, Scissors.
Type in your choice below:
"Rock"
Draw!
Do you want to play again? (y/n)
"y"
Time to play Rock, Paper, Scissors.
Type in your choice below:
"Rock"
Paper beats Rock. You lose!
Do you want to play again? (y/n)
"y"
Time to play Rock, Paper, Scissors.
Type in your choice below:
"Rock"
Paper beats Rock. You lose!
Do you want to play again? (y/n)
"n"
Thank you for playing! You played 4 games. Out of which you won 1
</code></pre>
| 1 |
2016-09-13T10:35:32Z
|
[
"python"
] |
Python Newbie - Rock, Paper, Scissors
| 39,467,619 |
<p>I'm very new to Python and decided to set myself a challenge of programming a Rock, Paper, Scissors game without copying someone else's code. However, I need help from a Pythonista grown-up!</p>
<p>I've seen many other variations on Rock, Paper, Scissors, on here but nothing to explain why my version isn't working. My program basically follows this format: set empty variables at start, define 4 functions that prints intro text, receives player input, randomly picks the computer's choice, then assesses whether its a win or a loss for the player. </p>
<p>This is all then stuck in a while loop that breaks once the player selects that they don't want to play anymore. (This bit is working fine)</p>
<p>However, whenever I run the code, it just always gives a draw and doesn't seem to store any data for the computer's choice function call. Does anybody know what I'm doing wrong?</p>
<p>Many thanks!</p>
<pre><code>import random
playerAnswer = ''
computerAnswer = ''
winsTotal = 0
timesPlayed = 0
def showIntroText():
print('Time to play Rock, Paper, Scissors.')
print('Type in your choice below:')
def playerChoose():
playerInput = input()
return
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return
def assessResult():
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
winsTotal += 1
return
while True:
timesPlayed += 1
showIntroText()
playerAnswer = playerChoose()
computerAnswer = computerChoose()
assessResult()
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games.')
</code></pre>
| 2 |
2016-09-13T10:20:16Z
| 39,467,974 |
<p>It is always a draw because you aren't returning the answers from your functions; both <code>playerAnswer</code> and <code>computerAnswer</code> end up as <code>None</code>.</p>
| 1 |
2016-09-13T10:37:55Z
|
[
"python"
] |
Python Newbie - Rock, Paper, Scissors
| 39,467,619 |
<p>I'm very new to Python and decided to set myself a challenge of programming a Rock, Paper, Scissors game without copying someone else's code. However, I need help from a Pythonista grown-up!</p>
<p>I've seen many other variations on Rock, Paper, Scissors, on here but nothing to explain why my version isn't working. My program basically follows this format: set empty variables at start, define 4 functions that prints intro text, receives player input, randomly picks the computer's choice, then assesses whether its a win or a loss for the player. </p>
<p>This is all then stuck in a while loop that breaks once the player selects that they don't want to play anymore. (This bit is working fine)</p>
<p>However, whenever I run the code, it just always gives a draw and doesn't seem to store any data for the computer's choice function call. Does anybody know what I'm doing wrong?</p>
<p>Many thanks!</p>
<pre><code>import random
playerAnswer = ''
computerAnswer = ''
winsTotal = 0
timesPlayed = 0
def showIntroText():
print('Time to play Rock, Paper, Scissors.')
print('Type in your choice below:')
def playerChoose():
playerInput = input()
return
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return
def assessResult():
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
winsTotal += 1
return
while True:
timesPlayed += 1
showIntroText()
playerAnswer = playerChoose()
computerAnswer = computerChoose()
assessResult()
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games.')
</code></pre>
| 2 |
2016-09-13T10:20:16Z
| 39,468,035 |
<p>As some people have said, playerChoose() and computerChoose() return <code>None</code>.</p>
<p>Modify these statements: playerChoose() -> return playerInput,
and computerChoose() -> return computerPick.</p>
<p>You also have to use a global variable. Insert this line </p>
<pre><code>global winsTotal
</code></pre>
<p>in the assessResult().</p>
| 0 |
2016-09-13T10:40:24Z
|
[
"python"
] |
Python Newbie - Rock, Paper, Scissors
| 39,467,619 |
<p>I'm very new to Python and decided to set myself a challenge of programming a Rock, Paper, Scissors game without copying someone else's code. However, I need help from a Pythonista grown-up!</p>
<p>I've seen many other variations on Rock, Paper, Scissors, on here but nothing to explain why my version isn't working. My program basically follows this format: set empty variables at start, define 4 functions that prints intro text, receives player input, randomly picks the computer's choice, then assesses whether its a win or a loss for the player. </p>
<p>This is all then stuck in a while loop that breaks once the player selects that they don't want to play anymore. (This bit is working fine)</p>
<p>However, whenever I run the code, it just always gives a draw and doesn't seem to store any data for the computer's choice function call. Does anybody know what I'm doing wrong?</p>
<p>Many thanks!</p>
<pre><code>import random
playerAnswer = ''
computerAnswer = ''
winsTotal = 0
timesPlayed = 0
def showIntroText():
print('Time to play Rock, Paper, Scissors.')
print('Type in your choice below:')
def playerChoose():
playerInput = input()
return
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return
def assessResult():
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
winsTotal += 1
return
while True:
timesPlayed += 1
showIntroText()
playerAnswer = playerChoose()
computerAnswer = computerChoose()
assessResult()
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games.')
</code></pre>
| 2 |
2016-09-13T10:20:16Z
| 39,468,088 |
<p>Add input prompts and <code>return</code> statements to your functions.</p>
<p>As written, <code>playerChoose</code>, <code>computerChoose</code> and <code>assessResult</code> all return <code>None</code>.</p>
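<p>The underlying rule (a tiny sketch): any Python function that falls off the end without an explicit <code>return value</code> returns <code>None</code>, so comparing two such results is always equal, hence the permanent "Draw!":</p>

```python
def pick():
    choice = 'Rock'  # a value is computed, but never returned

result = pick()
print(result)            # None
print(result == pick())  # True -> two such calls always compare equal
```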
<p>For example, with this code you can play the game:</p>
<pre><code>import random
playerAnswer = ''
computerAnswer = ''
winsTotal = 0
timesPlayed = 0
def playerChoose():
playerInput = input("insert:")
return playerInput
def computerChoose():
randomNumber = random.randint(1, 3)
if randomNumber == 1:
computerPick = 'Paper'
elif randomNumber == 2:
computerPick = 'Scissors'
else:
computerPick = 'Rock'
return computerPick
def assessResult(playerAnswer, computerAnswer):
if playerAnswer == computerAnswer:
print('Draw!')
elif playerAnswer == 'Rock' and computerAnswer == 'Paper':
print('Paper beats Rock. You lose!')
elif playerAnswer == 'Paper' and computerAnswer == 'Scissors':
print('Scissors cuts Paper. You lose!')
elif playerAnswer == 'Scissors' and computerAnswer == 'Rock':
print('Rock blunts Scissors. You lose!')
else:
print('You win!')
return
while True:
timesPlayed += 1
playerAnswer = playerChoose()
computerAnswer = computerChoose()
assessResult(playerAnswer,computerAnswer)
print('Do you want to play again? (y/n)')
playAgain = input()
if playAgain == 'n':
break
print('Thank you for playing! You played ' + str(timesPlayed) + ' games.')
</code></pre>
| 1 |
2016-09-13T10:43:03Z
|
[
"python"
] |
tkinter "Class" has no attribute "button" even though is set in an instance
| 39,467,809 |
<p>I'm trying to run the example from this <a href="http://zetcode.com/articles/tkinterlongruntask/" rel="nofollow">tutorial</a> and getting an error:</p>
<pre><code>self.startBtn.config(state=tk.DISABLED) AttributeError: 'Example' object has no attribute 'startBtn'
</code></pre>
<p>I rewrite my code for simplicity (and get still the error):</p>
<pre><code> #!/usr/bin/env python3
import tkinter as tk
class Example(tk.Frame):
def __init__(self, parent):
tk.Frame.__init__(self, parent)
self.grid()
startBtn = tk.Button(self, text="Start", command=self.disableButton)
startBtn.grid(row=0, column=0, padx=10, sticky=tk.E)
def disableButton(self):
self.startBtn.config(state=tk.DISABLED)
def main():
root = tk.Tk()
root.geometry("400x300")
app = Example(root)
app.mainloop()
if __name__ == '__main__':
main()
</code></pre>
<p>What I missed? </p>
<hr>
<p>Update:
I forgot to mention that the program starts without errors; the error occurs when the button is pressed.</p>
| 2 |
2016-09-13T10:29:23Z
| 39,468,019 |
<p>In the <em>function</em>, you use:</p>
<pre><code>def disableButton(self):
self.startBtn.config(state=tk.DISABLED)
</code></pre>
<p>but the <em>button is created</em> without <code>self</code>:</p>
<pre><code>startBtn = tk.Button(self, text="Start", command=self.disableButton)
startBtn.grid(row=0, column=0, padx=10, sticky=tk.E)
</code></pre>
<p>Simply make it: </p>
<pre><code> self.startBtn = tk.Button(self, text="Start", command=self.disableButton)
self.startBtn.grid(row=0, column=0, padx=10, sticky=tk.E)
def disableButton(self):
self.startBtn.config(state=tk.DISABLED)
</code></pre>
<p>...to be able to configure the button's state in the function.</p>
| 2 |
2016-09-13T10:39:53Z
|
[
"python",
"class",
"tkinter"
] |
How to convert a list of strings into a list of integers
| 39,467,821 |
<p>In the below part of code <strong>v</strong> is a list of characters. </p>
<pre><code>import collections
import csv
import sys
with open("prom output.csv","r") as f:
cr = csv.reader(f,delimiter=",")
d=collections.defaultdict(lambda : list())
header=next(cr)
for r in cr:
d[r[0]].append(r[1])
with open("sorted output.csv","w") as f:
cr = csv.writer(f,sys.stdout, lineterminator='\n')
od = collections.OrderedDict(sorted(d.items()))
for k,v in od.items():
cr.writerow(v)
</code></pre>
<p>My output looks like </p>
<p><a href="http://i.stack.imgur.com/tbveK.png" rel="nofollow"><img src="http://i.stack.imgur.com/tbveK.png" alt="enter image description here"></a></p>
<p>I want to map all the characters of my input into an integer, so that instead of a table with characters i get a table with numbers. I tried to use the built in function <strong>ord()</strong> but it doesnt work, since it only accepts single characters as input and not lists. Can you help?</p>
| 0 |
2016-09-13T10:29:57Z
| 39,467,911 |
<p>You can use <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow"><code>map()</code></a> to apply an operation to each item in a list:</p>
<pre><code>a = ['a', 'b', 'c']
b = list(map(ord, a))  # list() is needed on Python 3, where map() is lazy
print(b)
>>> [97, 98, 99]
</code></pre>
| 0 |
2016-09-13T10:35:33Z
|
[
"python",
"excel"
] |
How to convert a list of strings into a list of integers
| 39,467,821 |
<p>In the below part of code <strong>v</strong> is a list of characters. </p>
<pre><code>import collections
import csv
import sys
with open("prom output.csv","r") as f:
cr = csv.reader(f,delimiter=",")
d=collections.defaultdict(lambda : list())
header=next(cr)
for r in cr:
d[r[0]].append(r[1])
with open("sorted output.csv","w") as f:
cr = csv.writer(f,sys.stdout, lineterminator='\n')
od = collections.OrderedDict(sorted(d.items()))
for k,v in od.items():
cr.writerow(v)
</code></pre>
<p>My output looks like </p>
<p><a href="http://i.stack.imgur.com/tbveK.png" rel="nofollow"><img src="http://i.stack.imgur.com/tbveK.png" alt="enter image description here"></a></p>
<p>I want to map all the characters of my input into an integer, so that instead of a table with characters i get a table with numbers. I tried to use the built in function <strong>ord()</strong> but it doesnt work, since it only accepts single characters as input and not lists. Can you help?</p>
| 0 |
2016-09-13T10:29:57Z
| 39,467,938 |
<p>If you have a list of letters that you want to convert into numbers, try:</p>
<pre><code>>>> [ord(l) for l in letters]
[97, 98, 99, 100, 101, 102, 103]
</code></pre>
<p>or </p>
<pre><code>>>> list(map(ord, letters))
[97, 98, 99, 100, 101, 102, 103]
</code></pre>
<p>Or if you're dealing with capitalized column headings and want the corresponding index</p>
<pre><code>>>> letters = ['A', 'B', 'C', 'D', 'E']
>>> [ord(l.lower()) -96 for l in letters]
[1, 2, 3, 4, 5]
</code></pre>
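<p>Going the other way - from numbers back to letters - <code>chr</code> is the inverse of <code>ord</code>:</p>

```python
numbers = [1, 2, 3, 4, 5]
# shift back into the ASCII range and uppercase, mirroring the ord(l.lower()) - 96 trick
letters = [chr(n + 96).upper() for n in numbers]
print(letters)  # ['A', 'B', 'C', 'D', 'E']
```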
| 3 |
2016-09-13T10:36:32Z
|
[
"python",
"excel"
] |
How to change variable of main class from inherited class?
| 39,467,861 |
<p>I need to change an object variable directly from an inherited class.
Here is my code example: </p>
<pre><code>class A(object):
def __init__(self,initVal=0):
self.myVal = initVal
def worker(self):
self.incrementor = B()
self.incrementor.incMyVal(5)
class B(A):
def incMyVal(self,incVal):
super().myVal += incVal
obj = A(5)
print(obj.myVal)
obj.worker()
print(obj.myVal)
</code></pre>
<p>But it doesn't work:</p>
<pre><code>AttributeError: 'super' object has no attribute 'myVal'
</code></pre>
<p>I also tried to use global/nonlocal keywords with my variable in B class, but no luck.</p>
<p>In my main case, the <code>B</code> class is an event handler. And it should change the attribute of an object when an event fires. So I'm not able to use <code>return</code> in the <code>incMyVal</code> method.</p>
| 0 |
2016-09-13T10:32:21Z
| 39,467,923 |
<p><code>super()</code> can only search for <em>class attributes</em> in the class MRO, not instance attributes. <code>myVal</code> is set on an <em>instance</em> of the class, not on a class itself.</p>
<p>There is only ever one instance; it doesn't matter if code from class <code>A</code> or a derived class is altering attributes on an instance, it is just one namespace.</p>
<p>However, in your case, you shouldn't even be using inheritance. You are trying to use an <em>independent, second instance</em> to alter the attributes of an instance of <code>A</code>. Class inheritance doesn't give you access to <em>instances</em> of the base class like this.</p>
<p>Refactor <code>B</code> to take an instance of <code>A</code>, then act on that instance:</p>
<pre><code>class B:
def __init__(self, ainstance):
self.ainstance = ainstance
def incMyVal(self, incVal):
self.ainstance.myVal += incVal
</code></pre>
<p>Note that <code>B</code> is not a subclass of <code>A</code> here; it is not a (specialised) object of the same type at all; it is a different kind of thing, something that increments attributes of another object.</p>
<p>Pass in the instance when you create an instance of <code>B</code>:</p>
<pre><code>def worker(self):
self.incrementor = B(self)
self.incrementor.incMyVal(5)
</code></pre>
<p>This does create a circular reference, which can keep objects alive for longer than perhaps needed. You may want to use a <a href="https://docs.python.org/3/library/weakref.html" rel="nofollow">weak reference</a> instead:</p>
<pre><code>import weakref
class B:
def __init__(self, ainstance):
self.ainstance_ref = weakref.ref(ainstance)
def incMyVal(self, incVal):
ainstance = self.ainstance_ref()
if ainstance is not None:
ainstance.myVal += incVal
</code></pre>
<p>Now <code>B</code> instances only hold a weak reference to their <code>A</code> instance, and will do nothing if that instance no longer exists.</p>
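<p>A quick usage sketch of this weak-reference version (the <code>A</code> class below is a minimal stand-in for the question's class, not the full original):</p>

```python
import weakref

class A:
    def __init__(self, init_val=0):
        self.myVal = init_val

class B:
    def __init__(self, ainstance):
        self.ainstance_ref = weakref.ref(ainstance)

    def incMyVal(self, inc_val):
        ainstance = self.ainstance_ref()
        if ainstance is not None:
            ainstance.myVal += inc_val

obj = A(5)
incrementor = B(obj)
incrementor.incMyVal(5)
print(obj.myVal)  # 10

del obj                  # the only strong reference is gone
incrementor.incMyVal(5)  # silently does nothing now
```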
| 3 |
2016-09-13T10:35:58Z
|
[
"python",
"python-3.x"
] |
Passing Series with dtype= 'category' as categories for Pandas Categorical function
| 39,467,885 |
<p>when I run this code I get the following error :</p>
<pre><code>import pandas as pd
car_colors = pd.Series(['Blue', 'Red', 'Green'],
dtype='category')
car_data = pd.Categorical(['Yellow', 'Green', 'Red', 'Blue','Purple'],
categories= car_colors, ordered=False)
print car_colors
s = pd.Series(car_data)
s
</code></pre>
<blockquote>
<p>ValueError: object <strong>array</strong> method not producing an array</p>
</blockquote>
<p>But the funny thing is, when I remove the <code>dtype = 'category'</code>, the code works fine.</p>
<p>So in short, the categorical function is accepting series but not with <code>dtype = 'category'</code></p>
<p>Is it a bug or am I doing something wrong ?</p>
| 1 |
2016-09-13T10:33:44Z
| 39,467,943 |
<p>It looks like you need to add <code>tolist</code> to <code>categories</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html" rel="nofollow"><code>Categorical</code></a>:</p>
<pre><code>car_colors = pd.Series(['Blue', 'Red', 'Green'],
dtype='category')
car_data = pd.Categorical(['Yellow', 'Green', 'Red', 'Blue','Purple'],
categories = car_colors.tolist(), ordered=False)
s = pd.Series(car_data)
print (s)
0 NaN
1 Green
2 Red
3 Blue
4 NaN
dtype: category
Categories (3, object): [Blue, Red, Green]
</code></pre>
<p>Another solution from <a href="http://stackoverflow.com/questions/39467885/passing-series-with-dtype-category-as-categories-for-pandas-categorical-funct/39467943#comment66255855_39467885">EdChum's comment</a> is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cat.categories.html" rel="nofollow"><code>cat.categories</code></a>:</p>
<pre><code>car_data = pd.Categorical(['Yellow', 'Green', 'Red', 'Blue','Purple'],
categories = car_colors.cat.categories, ordered=False)
s = pd.Series(car_data)
print (s)
0 NaN
1 Green
2 Red
3 Blue
4 NaN
dtype: category
Categories (3, object): [Blue, Green, Red]
</code></pre>
| 0 |
2016-09-13T10:36:48Z
|
[
"python",
"pandas",
"categorical-data"
] |
How can I redirect the output to be put into a file in python (version 3.3.3)?
| 39,467,939 |
<pre><code>sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY"
s = sentence.split()
another = [0]
print(sentence)
for count, i in enumerate(s):
if s.count(i) < 2:
another.append(max(another) + 1)
else:
another.append(s.index(i) +1)
another.remove(0)
print(another)
</code></pre>
| -4 |
2016-09-13T10:36:34Z
| 39,468,049 |
<p>I am guessing you want the sentence put into a text file? If so, here is the code:</p>
<pre><code>text_file = open("textfile.txt", "w")
text_file.write(sentence)
text_file.close()
</code></pre>
<p>The file textfile.txt will be created in the same folder as your program (write mode creates it if it does not exist).</p>
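<p>A slightly more idiomatic variant of the same write uses a <code>with</code> statement, which closes the file automatically even if an error occurs:</p>

```python
sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU"

# the file is closed automatically when the with-block ends
with open("textfile.txt", "w") as text_file:
    text_file.write(sentence)
```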
| 0 |
2016-09-13T10:41:02Z
|
[
"python"
] |
Is it possible to use pango markup text on a Gtk+ 3 toolbutton label using Glade and PyDev?
| 39,468,012 |
<p>I have put together a Gtk+ interface in Glade and part of the UI is a tool palette with several toolbuttons using utf-8 characters as labels. They work fine in the default font, but I would like to change font details using pango markup. This is straightforward when dealing with a label as such, as one can apply</p>
<pre><code>label.set_markup(pangoMarkupString)
</code></pre>
<p>but the label in a toolbutton can not, as far as I can tell, be addressed directly in this way. A naive</p>
<pre><code>button.label.set_markup(pangoMarkupString)
</code></pre>
<p>naturally doesn't work and returns an error saying that <em>toolbuttons</em> do not have the <em>label</em> property. Is there any way to use pango parsed text in a toolbutton, and what depth of python trickery would be required at the application end?</p>
| 1 |
2016-09-13T10:39:31Z
| 39,477,395 |
<p>To save any others from hours of fruitless head-scratching and searching, and to open the eyes of other newbies to the powers of Gtk+ 3 and Glade, I present the solution I found.</p>
<ol>
<li>Right click on your tool palette in the <em>outliner</em> and select edit</li>
<li>Choose the hierarchy tab in the editor</li>
<li>Select your button in the hierarchy <em>outliner</em></li>
<li>Under <strong>Label</strong> properties choose <em>widget</em> rather than <em>text</em></li>
<li>Click on the selector icon at the right of the widget entry box</li>
<li>Add a new widget</li>
<li>Leave the tool palette editor</li>
<li>Select <em>label1</em>, the new widget you created in the <em>outliner</em></li>
<li>You can now edit its id, label text and attributes</li>
</ol>
<p>I don't need it yet, but I wonder how to do this with a regular button...</p>
| 1 |
2016-09-13T19:13:10Z
|
[
"python",
"pygtk",
"gtk3",
"glade",
"pango"
] |
How to compare two different pandas dataframe whose lengths are not equal?
| 39,468,033 |
<p>I'm having two dataframes i.e. <strong>df1 and df2</strong> </p>
<pre><code>df1: df2:
Column1 Column2 ColumnA ColumnB
0 abc a 0 stu aaa
1 pqr b 1 mno bbb
2 stu c 2 pqr ccc
3 mno d 3 abc ddd
4 xyz e 4 xyz eee
5 uiq fff
6 mls ggg
7 qww hhh
8 dfg iii
</code></pre>
<p>Now I want to take the first value of <code>column1</code>, i.e. <code>abc</code>, and search whether this value is available in <code>columnA</code> of <code>df2</code>. If a match is found, I want that matching row's value of <code>columnB</code> in a separate column of <code>df1</code>. (Note - every entry in <code>column1</code> may appear once or not at all in <code>columnA</code> of <code>df2</code>.)</p>
<p><strong>Output format :</strong></p>
<pre><code>df1 :
Column1 Column2 Column3
0 abc a ddd
1 pqr b ccc
2 stu c aaa
3 mno d bbb
4 xyz e eee
</code></pre>
<p>I tried with different scenarios. when I try to compare this dataframes like:</p>
<pre><code>df['column1'] == df['columnA']
</code></pre>
<p>I'm getting error because length of both dataframes is not same. How I can perform this type of operation in pandas dataframe?</p>
| 0 |
2016-09-13T10:40:20Z
| 39,468,152 |
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>, but first rename the column <code>ColumnA</code> to <code>Column1</code>, and afterwards rename <code>ColumnB</code> to <code>Column3</code>:</p>
<pre><code>print (pd.merge(df1,df2.rename(columns={'ColumnA':'Column1'}))
.rename(columns={'ColumnB': 'Column3'}))
Column1 Column2 Column3
0 abc a ddd
1 pqr b ccc
2 stu c aaa
3 mno d bbb
4 xyz e eee
</code></pre>
<p>Another solution uses the parameters <code>left_on</code> and <code>right_on</code>, but then it is necessary to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> the column <code>ColumnA</code>:</p>
<pre><code>print (pd.merge(df1,df2, left_on='Column1', right_on='ColumnA')
.drop('ColumnA', axis=1)
.rename(columns={'ColumnB': 'Column3'}))
Column1 Column2 Column3
0 abc a ddd
1 pqr b ccc
2 stu c aaa
3 mno d bbb
4 xyz e eee
</code></pre>
<hr>
<p>EDIT by comment:</p>
<p>If the joined values are duplicated, rows are multiplied:</p>
<pre><code>import pandas as pd
data = [['abc','a'], ['pqr','b'], ['pqr','b'], ['pqr','b']]
df1 = pd.DataFrame(data, columns=['Column1','Column2'])
df2 = pd.DataFrame({'ColumnA': {0: 'stu', 1: 'pqr', 2: 'pqr'},
'ColumnB': {0: 'aaa', 1: 'ccc', 2: 'ccc'}})
print (df1)
Column1 Column2
0 abc a
1 pqr b
2 pqr b
3 pqr b
print (df2)
ColumnA ColumnB
0 stu aaa
1 pqr ccc
2 pqr ccc
print (pd.merge(df1,df2.rename(columns={'ColumnA':'Column1'}))
.rename(columns={'ColumnB': 'Column3'}))
Column1 Column2 Column3
0 pqr b ccc
1 pqr b ccc
2 pqr b ccc
3 pqr b ccc
4 pqr b ccc
5 pqr b ccc
</code></pre>
<p>Then is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a>:</p>
<pre><code>print (df1)
Column1 Column2
0 abc a
1 pqr b
2 pqr b
3 pqr b
df1 = df1.drop_duplicates()
print (df1)
Column1 Column2
0 abc a
1 pqr b
print (df2)
ColumnA ColumnB
0 stu aaa
1 pqr ccc
2 pqr ccc
df2 = df2.drop_duplicates()
print (df2)
ColumnA ColumnB
0 stu aaa
1 pqr ccc
print (pd.merge(df1,df2.rename(columns={'ColumnA':'Column1'}))
.rename(columns={'ColumnB': 'Column3'}))
Column1 Column2 Column3
0 pqr b ccc
</code></pre>
<p>EDIT1:</p>
<p>If the <code>DataFrames</code> have more columns, it is important to specify the join columns with the parameter <code>on</code>:</p>
<pre><code>print (pd.merge(df1,df2.rename(columns={'ColumnA':'Column1'}), on='Column1')
.rename(columns={'ColumnB': 'Column3'}))
</code></pre>
<p>EDIT2:</p>
<p>If you need to remove rows with <code>NaN</code> in selected columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna</code></a>:</p>
<pre><code>df = pd.DataFrame({'A':[1,2,3],
'B':[4,5,np.nan],
'C':[7,8,np.nan],
'D':[np.nan,3,5],
'E':[5,3,6],
'F':[7,4,3]})
print (df)
A B C D E F
0 1 4.0 7.0 NaN 5 7
1 2 5.0 8.0 3.0 3 4
2 3 NaN NaN 5.0 6 3
print (df.dropna(subset=['C','B']))
A B C D E F
0 1 4.0 7.0 NaN 5 7
1 2 5.0 8.0 3.0 3 4
</code></pre>
| 1 |
2016-09-13T10:46:08Z
|
[
"python",
"pandas",
"dataframe",
"merge",
"comparison"
] |
Crop overlapping images with pillow
| 39,468,216 |
<p>I need help, please.
I'm trying to select and crop the overlapping area of two images with the Python Pillow library.</p>
<p>I have the upper-left pixel coordinate of the two pictures. With these, I can find out which one is located above the other.</p>
<p>I wrote a function, taking two images as arguments:</p>
<pre><code>def function(img1, img2):
x1 = 223 #x coordinate of the first image
y1 = 197 #y coordinate of the first image
x2 = 255 #x coordinate of the second image
y2 = 197 #y coordinate of the second image
dX = x1 - x2
dY = y1 - y2
if y1 <= y2: #if the first image is above the other
upper = img1
lower = img2
flag = False
else:
upper = img2
lower = img1
flag = True
if dX <= 0: #if the lower image is on the left
box = (abs(dX), abs(dY), upper.size[0], upper.size[1])
a = upper.crop(box)
box = (0, 0, upper.size[0] - abs(dX), upper.size[1] - abs(dY))
b = lower.crop(box)
else:
box = (0, abs(dY), lower.size[0] - abs(dX), upper.size[1])
a = upper.crop(box)
box = (abs(dX), 0, lower.size[0], upper.size[1] - abs(dY))
b = lower.crop(box)
if flag:
return b,a #switch the two images again
else:
return a,b
</code></pre>
<p>I know for sure that the result is wrong (it's a school assignment).
Thanks for your help.</p>
| 0 |
2016-09-13T10:49:50Z
| 39,469,880 |
<p>First of all, I don't quite get what you mean by one picture being "above" the other (shouldn't that be a z-position?), but take a look at this: <a href="http://stackoverflow.com/questions/20484942/how-to-make-rect-from-the-intersection-of-two">How to make rect from the intersection of two?</a> , the first answer might be a good lead. :)</p>
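<p>The intersection idea from that link can be sketched in plain Python; boxes use Pillow's <code>(left, upper, right, lower)</code> order, and the sample call below just reuses the question's pixel coordinates with assumed 100x100 image sizes:</p>

```python
def intersect(box1, box2):
    """Return the overlapping box of two (left, upper, right, lower) boxes, or None."""
    left = max(box1[0], box2[0])
    upper = max(box1[1], box2[1])
    right = min(box1[2], box2[2])
    lower = min(box1[3], box2[3])
    if left < right and upper < lower:
        return (left, upper, right, lower)
    return None  # the boxes do not overlap

# two 100x100 boxes whose upper-left corners are (223, 197) and (255, 197)
print(intersect((223, 197, 323, 297), (255, 197, 355, 297)))  # (255, 197, 323, 297)
```

<p>The returned box can then be fed straight to <code>Image.crop</code> on each image (after shifting it into that image's local coordinates).</p>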
| 0 |
2016-09-13T12:16:30Z
|
[
"python",
"pillow"
] |
Spark performance issue (likely caused by "basic" mistakes)
| 39,468,225 |
<p>I'm relatively new to Apache Spark (version 1.6), and I feel I hit a wall: I looked through most of the Spark-related questions on SE, but I found nothing that helped me so far. I believe I am doing something fundamentally wrong at the basic level, but I cannot point out what exactly it is, especially since other pieces of code I've written are running just fine.</p>
<p>I'll try to be as specific as possible in explaining my situation, although I'll simplify my task for better understanding. Keep in mind that, as I am still learning it, I'm running this code using Spark's <strong>local mode</strong>; also worthy of note is that I've been using DataFrames (and not RDDs). Lastly, do note that the following code is written in Python using Pyspark, but I do welcome possible solutions using Scala or Java, as I believe the issue is a very basic one.</p>
<p>I have a generic JSON file, with its structure resembling the following:</p>
<pre><code>{"events":[
{"Person":"Alex","Shop":"Burger King","Timestamp":"100"},
{"Person":"Alex","Shop":"McDonalds","Timestamp":"101"},
{"Person":"Alex","Shop":"McDonalds","Timestamp":"104"},
{"Person":"Nathan","Shop":"KFC","Timestamp":"100"},
{"Person":"Nathan","Shop":"KFC","Timestamp":"120"},
{"Person":"Nathan","Shop":"Burger King","Timestamp":"170"}]}
</code></pre>
<p>What I need to do, is count how much time has passed between two visits by the same person to the same shop. The output should be the list of shops which have had at least one customer visit them at least once every 5 seconds, alongside the number of customers that meet this requirement. In the case above, the output should look something like this:</p>
<pre><code>{"Shop":"McDonalds","PeopleCount":1}
</code></pre>
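<p>A plain-Python sketch of this requirement (assuming all events fit in memory; names are illustrative) reproduces the expected output and is handy for cross-checking the Spark job:</p>

```python
from collections import defaultdict

events = [
    {"Person": "Alex", "Shop": "Burger King", "Timestamp": 100},
    {"Person": "Alex", "Shop": "McDonalds", "Timestamp": 101},
    {"Person": "Alex", "Shop": "McDonalds", "Timestamp": 104},
    {"Person": "Nathan", "Shop": "KFC", "Timestamp": 100},
    {"Person": "Nathan", "Shop": "KFC", "Timestamp": 120},
    {"Person": "Nathan", "Shop": "Burger King", "Timestamp": 170},
]

# group timestamps per (Person, Shop) pair
visits = defaultdict(list)
for e in events:
    visits[(e["Person"], e["Shop"])].append(e["Timestamp"])

# a pair qualifies if it has >1 visit and every gap between
# consecutive visits is at most 5 seconds
shop_counts = defaultdict(int)
for (person, shop), ts in visits.items():
    ts.sort()
    if len(ts) > 1 and all(b - a <= 5 for a, b in zip(ts, ts[1:])):
        shop_counts[shop] += 1

print(dict(shop_counts))  # {'McDonalds': 1}
```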
<p>My idea was to assign to each <em>pair</em> (Person, Shop) the same identifier, and then proceed to verify if that pair met the requirement. The identifier can be assigned by using the <em>window function</em> <strong>ROW_NUMBER()</strong>, which requires the use of <em>hiveContext</em> in Spark. This is how the file above should look like after the identifier has been assigned:</p>
<pre><code>{"events":[
{"Person":"Alex","Shop":"Burger King","Timestamp":"100","ID":1},
{"Person":"Alex","Shop":"McDonalds","Timestamp":"101", "ID":2},
{"Person":"Alex","Shop":"McDonalds","Timestamp":"104", "ID":2},
{"Person":"Nathan","Shop":"KFC","Timestamp":"100","ID":3},
{"Person":"Nathan","Shop":"KFC","Timestamp":"120","ID":3},
{"Person":"Nathan","Shop":"Burger King","Timestamp":"170","ID":4}]}
</code></pre>
<p>As I need to perform several steps (some of these requiring the use of <em>self joins</em>) for each pair before coming to a conclusion, I made use of temporary tables.</p>
<p>The code I've written is something like this (of course, I have included only the relevant parts - "df" stands for "dataframe"):</p>
<pre><code>t1_df = hiveContext.read.json(inputFileName)
t1_df.registerTempTable("events")
t2_df = hiveContext.sql("SELECT Person, Shop, ROW_NUMBER() OVER (order by Person asc, Shop asc) as ID FROM events group by Person, Shop HAVING count(*)>1") #if there are less than 2 entries for the same pair, then we can discard this pair
t2_df.write.mode("overwrite").saveAsTable("orderedIDs")
n_pairs = t2_df.count() #used to determine how many pairs I need to inspect
i=1
while i<=n_pairs:
    #now I perform several operations, each one displaying this structure
    #first operation...
    query="SELECT ... FROM orderedIDs WHERE ID=%d" %i
    t3_df = hiveContext.sql(query)
    t3_df.write.mode("overwrite").saveAsTable("table1")
    #...second operation...
    query2="SELECT ... FROM table1 WHERE ..."
    t4_df = hiveContext.sql(query2)
    t4_df.write.mode("overwrite").saveAsTable("table2")
    #...and so on. Let us skip to the last operation in this loop, which consists of the "saving" of the shop if it met the requirements:
    t8_df = hiveContext.sql("SELECT Shop from table7")
    t8_df.write.mode("append").saveAsTable("goodShops")
    i=i+1
#then we only need to write the table to a proper file
output_df = hiveContext.sql("SELECT Shop, count(*) as PeopleCount from goodShops group by Shop")
output_df.write.json('output')
</code></pre>
<p>Now, here come the issues: the output is correct. I've tried with several inputs, and the program works fine in that regard. However, it is tremendously slow: it takes around 15-20 seconds to analyze each pair, regardless of how many entries each pair has. So, for example, if there are 10 pairs it takes around 3 minutes, if there are 100 it takes 30 minutes, and so on. I ran this code on several machines with relatively decent hardware, but nothing changed.
I also tried caching some (or even all) of the tables I used, but the problem still persisted (the time required even increased in certain circumstances). More specifically, the last operation of the loop (the one which uses the "append") takes several seconds to complete (from 5 to 10), whereas the first 6 only take 1-2 seconds (which is still a lot, considering the scope of the task, but definitely more manageable).</p>
<p>I believe the issue may lie in one (or more) of the following:</p>
<ol>
<li>use of a loop, which <em>might</em> cause problems of parallelism;</li>
<li>use of the "saveAsTable" method, which forces writing to I/O </li>
<li>bad or poor use of caching</li>
</ol>
<p>These 3 are the only things that come to my mind, as the other pieces of software I've written using Spark (for which I did not encounter any performance issues) do not make use of the abovementioned techniques, since I basically performed simple JOIN operations and used the <em>registerTempTable</em> method for using temporary tables (which, to my understanding, cannot be used in a <em>loop</em>) instead of the <em>saveAsTable</em> method.</p>
<p>I tried to be as clear as possible, but if you do require more details I am up for providing additional information.</p>
<p>EDIT: I managed to solve my issue thanks to zero323's answer. Indeed, the use of the LAG function was all I really needed to do my stuff. On the other hand, I've learnt that using the "saveAsTable" method should be discouraged - especially in loops - as it causes a major decrease in performance every time it is called. I'll avoid using it from now on unless it is absolutely necessary.</p>
| 0 |
2016-09-13T10:50:40Z
| 39,470,910 |
<blockquote>
<p>how much time has passed between two visits by the same person to the same shop. The output should be the list of shops which have had at least one customer visit them at least once every 5 seconds, alongside the number of customers that meet this requirement.</p>
</blockquote>
<p>How about simple <code>lag</code> with aggregation:</p>
<pre><code>from pyspark.sql.window import Window
from pyspark.sql.functions import col, lag, sum
df = (sc
    .parallelize([
        ("Alex", "Burger King", "100"), ("Alex", "McDonalds", "101"),
        ("Alex", "McDonalds", "104"), ("Nathan", "KFC", "100"),
        ("Nathan", "KFC", "120"), ("Nathan", "Burger King", "170")
    ]).toDF(["Person", "Shop", "Timestamp"])
    .withColumn("Timestamp", col("Timestamp").cast("long")))

w = (Window()
    .partitionBy("Person", "Shop")
    .orderBy("Timestamp"))

ind = ((
    # Difference between current and previous timestamp le 5
    col("Timestamp") - lag("Timestamp", 1).over(w)) <= 5
).cast("long")  # Cast so we can sum

(df
    .withColumn("ind", ind)
    .groupBy("Shop")
    .agg(sum("ind").alias("events"))
    .where(col("events") > 0)
    .show())
## +---------+------+
## | Shop|events|
## +---------+------+
## |McDonalds| 1|
## +---------+------+
</code></pre>
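<p>For readers without a Spark session at hand, the same lag-and-aggregate logic can be sanity-checked in plain Python (the helper names here are mine, not part of the answer):</p>

```python
from collections import defaultdict

# Group timestamps per (person, shop), then count consecutive visits
# at most 5 seconds apart -- the pure-Python analogue of lag() <= 5.
events = [
    ("Alex", "Burger King", 100), ("Alex", "McDonalds", 101),
    ("Alex", "McDonalds", 104), ("Nathan", "KFC", 100),
    ("Nathan", "KFC", 120), ("Nathan", "Burger King", 170),
]

visits = defaultdict(list)  # (person, shop) -> timestamps in order
for person, shop, ts in sorted(events, key=lambda e: e[2]):
    visits[(person, shop)].append(ts)

shop_events = defaultdict(int)
for (person, shop), stamps in visits.items():
    # one "event" per pair of consecutive visits no more than 5 apart
    shop_events[shop] += sum(1 for a, b in zip(stamps, stamps[1:]) if b - a <= 5)

result = {shop: n for shop, n in shop_events.items() if n > 0}
# result == {"McDonalds": 1}, matching the Spark output above
```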
| 1 |
2016-09-13T13:07:01Z
|
[
"python",
"sql",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
rolling mean with increasing window
| 39,468,228 |
<p>I have a range</p>
<pre><code>np.arange(1,11) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>and for each element, <em>i</em>, in my range I want to compute the average from element <em>i=0</em> to my current element. the result would be something like:</p>
<pre><code>array([ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5])
# got this result via np.cumsum(np.arange(1,11,dtype=np.float32))/(np.arange(1, 11))
</code></pre>
<p>I was wondering if there isn't an out of the box function in numpy / pandas that gives me this result?</p>
| 2 |
2016-09-13T10:50:52Z
| 39,468,295 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.expanding.html" rel="nofollow"><code>expanding()</code></a> (requires pandas 0.18.0):</p>
<pre><code>ser = pd.Series(np.arange(1, 11))
ser.expanding().mean()
Out:
0 1.0
1 1.5
2 2.0
3 2.5
4 3.0
5 3.5
6 4.0
7 4.5
8 5.0
9 5.5
dtype: float64
</code></pre>
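<p>For comparison (not part of the answer), the questioner's cumulative-sum approach can be reproduced with the standard library alone; it yields the same values as <code>ser.expanding().mean()</code> here:</p>

```python
from itertools import accumulate

values = list(range(1, 11))
# Expanding mean: cumulative sum divided by the expanding count.
expanding_mean = [s / n for n, s in enumerate(accumulate(values), start=1)]
# [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5]
```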
| 4 |
2016-09-13T10:55:06Z
|
[
"python",
"pandas",
"numpy"
] |