title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Iterate only one dict list from multi list dict
| 39,624,778 |
<p>I have a dict of lists like so:</p>
<pre><code>edu_options = { 'Completed Graduate School' : ['medical','litigation','specialist'...],
                'Completed College' : ['linguistic','lpn','liberal','chicano'... ],
                'Attended College' : ['general','inprogress','courseworktowards','continu'...],
              }
</code></pre>
<p>My original code without an attempt at hierarchical matching:</p>
<pre><code>for edu_level in edu_options:
    for option in edu_options[edu_level]:
        if option in cleaned_string:
            user = edu_level
            return user
        else:
            continue
</code></pre>
<p>I'm comparing a string to these lists and returning the key. I want to do it in a hierarchical way.</p>
<pre><code>for edu_level in edu_options:
    for option in edu_options[edu_level]:
        if cleaned_string in edu_options["Completed Graduate School"]:
            user = "Completed Graduate School"
            return user
        elif cleaned_string in edu_options["Completed College"]:
            user = "Completed College"
            return user
        elif option in cleaned_string:
            user = edu_level
            return user
</code></pre>
<p>These if statements work for the majority of comparison str but don't pick up a few cases. For the first and second if statement, I only want to compare it to the respective list such as "Completed Graduate School". Is there a way to iterate through only that list without using another for loop? Something like </p>
<pre><code>Ex: string = Bachelor of Arts: Communication and Civil Service
cleaned_string = bachelorofartscommunicationandcivilservice
option = iterating through each item(str) of lists in edu_option
</code></pre>
<p>I want the graduate and college lists to be run through first because they are smaller and more specific. The error I'm trying to correct is that another, larger list in edu_options contains substrings that match cleaned_string incorrectly.</p>
| 0 |
2016-09-21T19:13:58Z
| 39,625,463 |
<pre><code>for edu_level in edu_options:
    for option in edu_options[edu_level]:
        if cleaned_string in edu_options["Completed Graduate School"]:
            user = "Completed Graduate School"
            return user
        elif cleaned_string in edu_options["Completed College"]:
            user = "Completed College"
            return user
        elif option == cleaned_string:
            user = edu_level
            return user
</code></pre>
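<p>A minimal sketch of the hierarchical idea (the key names and sample substrings are taken from the question; the explicit priority list is an assumption about the intended ordering). Iterating the keys in a fixed order checks the smaller, more specific lists first, and <code>any()</code> scans one list without a second explicit for-loop in the caller:</p>

```python
edu_options = {
    'Completed Graduate School': ['medical', 'litigation', 'specialist'],
    'Completed College': ['linguistic', 'lpn', 'liberal', 'chicano'],
    'Attended College': ['general', 'inprogress', 'courseworktowards'],
}

def match_education(cleaned_string):
    # Fixed priority order: smaller, more specific lists come first
    priority = ['Completed Graduate School', 'Completed College', 'Attended College']
    for edu_level in priority:
        # any() short-circuits on the first matching substring
        if any(option in cleaned_string for option in edu_options[edu_level]):
            return edu_level
    return None

print(match_education('generalstudies'))  # -> Attended College
```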
| 0 |
2016-09-21T19:54:53Z
|
[
"python",
"list",
"dictionary"
] |
Putting the return of my Python Script into a CSV file
| 39,624,781 |
<p>I have been recently working on a python script to look for missing reverse entries in our DNS servers on BIND and am having trouble figuring out a good way to able to return the output in preferably a CSV file.</p>
<p>Here's the code I am working with and I know it is rough around the edges right now. Any help / ideas would be very appreciated thank you.</p>
<pre><code>import dns.zone
import dns.ipv4
import os.path
import sys

reverse_map = {}

for filename in sys.argv[1:]:
    zone = dns.zone.from_file(filename, os.path.basename(filename), relativize=False)
    for (name, ttl, rdata) in zone.iterate_rdatas('A'):
        l = reverse_map.get(rdata.address)
        if l is None:
            l = []
            reverse_map[rdata.address] = l
        l.append(name)

keys = reverse_map.keys()
keys.sort(lambda a1, a2: cmp(dns.ipv4.inet_aton(a1), dns.ipv4.inet_aton(a2)))
for k in keys:
    v = reverse_map[k]
    v.sort()
    l = map(str, v)
    print [k, l]

import csv
with open('csv.csv', 'rb') as f:
    data = list(csv.reader(f))

import collections
counter = collections.defaultdict(int)
for row in data:
    counter[row[0]] += 1

writer = csv.writer(open("/path/to/my/csv/file", 'w'))
for row in data:
    if counter[row[0]] >= 4:
        writer.writerow(row)
</code></pre>
| 0 |
2016-09-21T19:14:02Z
| 39,625,860 |
<p>Try <code>openpyxl</code> instead, that one seems to work better for me.</p>
<p><a href="https://openpyxl.readthedocs.io/en/default/" rel="nofollow" title="Documentation">Documentation</a></p>
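<p>Alternatively, the standard library <code>csv</code> module is enough for this kind of output. A hedged Python 3 sketch (the <code>reverse_map</code> contents and output file name are illustrative, not taken from the question):</p>

```python
import csv

# Illustrative reverse map, shaped like the one built in the question:
# address -> list of names
reverse_map = {'10.0.0.2': ['b.example.com'], '10.0.0.1': ['a.example.com']}

with open('reverse_map.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['address', 'names'])  # header row
    for addr in sorted(reverse_map):
        # join multiple names into a single cell
        writer.writerow([addr, ';'.join(reverse_map[addr])])
```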
| 0 |
2016-09-21T20:18:47Z
|
[
"python"
] |
How do I add headers for the output csv for apache beam dataflow?
| 39,624,809 |
<p>I noticed in the java sdk, there is a function that allows you to write the headers of a csv file.
<a href="https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/TextIO.Write.html#withHeader-java.lang.String-" rel="nofollow">https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/TextIO.Write.html#withHeader-java.lang.String-</a></p>
<p>Is this feature mirrored in the Python SDK? </p>
| 1 |
2016-09-21T19:15:07Z
| 39,625,500 |
<p>This feature does not yet exist in the Python SDK.</p>
| 0 |
2016-09-21T19:57:26Z
|
[
"python",
"google-cloud-dataflow",
"apache-beam"
] |
How do I add headers for the output csv for apache beam dataflow?
| 39,624,809 |
<p>I noticed in the java sdk, there is a function that allows you to write the headers of a csv file.
<a href="https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/TextIO.Write.html#withHeader-java.lang.String-" rel="nofollow">https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/io/TextIO.Write.html#withHeader-java.lang.String-</a></p>
<p>Is this feature mirrored in the Python SDK? </p>
| 1 |
2016-09-21T19:15:07Z
| 39,625,521 |
<p>This is not implemented at this moment. However you can implement/extend it yourself (see <a href="https://gist.github.com/Fematich/97703910d867f972e9d01b21d8f41221" rel="nofollow">attached notebook</a> for an example+test with my version of apache_beam).</p>
<p>This is based on a <a href="https://github.com/apache/incubator-beam/blob/python-sdk/sdks/python/apache_beam/io/fileio.py" rel="nofollow">note in the docstring</a> of the superclass <code>FileSink</code>, mentioning that you should overwrite the <code>open</code> function:</p>
<p>The new class that works for my version of apache_beam ('0.3.0-incubating.dev'):</p>
<pre><code>import apache_beam as beam
from apache_beam.io import TextFileSink
from apache_beam.io.fileio import ChannelFactory, CompressionTypes
from apache_beam import coders

class TextFileSinkWithHeader(TextFileSink):
    def __init__(self,
                 file_path_prefix,
                 file_name_suffix='',
                 append_trailing_newlines=True,
                 num_shards=0,
                 shard_name_template=None,
                 coder=coders.ToStringCoder(),
                 compression_type=CompressionTypes.NO_COMPRESSION,
                 header=None):
        super(TextFileSinkWithHeader, self).__init__(
            file_path_prefix,
            file_name_suffix=file_name_suffix,
            num_shards=num_shards,
            shard_name_template=shard_name_template,
            coder=coder,
            compression_type=compression_type,
            append_trailing_newlines=append_trailing_newlines)
        self.header = header

    def open(self, temp_path):
        channel_factory = ChannelFactory.open(
            temp_path,
            'wb',
            mime_type=self.mime_type)
        channel_factory.write(self.header + "\n")
        return channel_factory
</code></pre>
<p>You can subsequently use it as follows:</p>
<pre><code>beam.io.Write(TextFileSinkWithHeader('./names_w_headers',header="names"))
</code></pre>
<p>See <a href="https://gist.github.com/Fematich/97703910d867f972e9d01b21d8f41221" rel="nofollow">the notebook</a> for the complete overview.</p>
| 0 |
2016-09-21T19:58:53Z
|
[
"python",
"google-cloud-dataflow",
"apache-beam"
] |
am getting a typeerror while executing python program
| 39,624,813 |
<p>I have typed </p>
<pre><code>x = input("enter name: ")
Print ("hey") + x
</code></pre>
<p>But when I run it, I am getting</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
</code></pre>
<p>I'm using python 3.6.0b1.</p>
| -1 |
2016-09-21T19:15:18Z
| 39,624,868 |
<p>You are trying to add the return value of <code>print</code> (which is <code>None</code>) to a string. Setting parentheses correctly is important.</p>
<pre><code>x = input("enter name: ")
print("hey" + x)
</code></pre>
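<p>To see why the original code fails, note that <code>print()</code> returns <code>None</code>, so <code>Print ("hey") + x</code> effectively evaluates <code>None + x</code>:</p>

```python
# print() returns None; adding that None to a string raises the TypeError
result = print("hey")
print(result is None)  # -> True
```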
| 0 |
2016-09-21T19:18:33Z
|
[
"python",
"typeerror"
] |
am getting a typeerror while executing python program
| 39,624,813 |
<p>I have typed </p>
<pre><code>x = input("enter name: ")
Print ("hey") + x
</code></pre>
<p>But when I run it, I am getting</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
</code></pre>
<p>I'm using python 3.6.0b1.</p>
| -1 |
2016-09-21T19:15:18Z
| 39,624,904 |
<p>print() is a function call which takes a string, so you need to pass the string inside the brackets to the function call, like so:</p>
<pre><code>x = input("enter name: ")
print ("hey " + x)
</code></pre>
<p>Further reading on print() is available here: <a href="https://docs.python.org/3/tutorial/inputoutput.html" rel="nofollow">https://docs.python.org/3/tutorial/inputoutput.html</a></p>
| 0 |
2016-09-21T19:20:38Z
|
[
"python",
"typeerror"
] |
Anaconda allensdk NEURON model
| 39,624,891 |
<p>I've download Allen neuron model:
Nr5a1-Cre VISp layer 2/3 473862496</p>
<p>Installed Anaconda with all the required packages, have the NEURON:
<a href="https://alleninstitute.github.io/AllenSDK/install.html" rel="nofollow">https://alleninstitute.github.io/AllenSDK/install.html</a></p>
<p>now how do I use allensdk package to run their model through the NEURON,</p>
<p>they have a sort of explanation:
<a href="http://alleninstitute.github.io/AllenSDK/biophysical_models.html" rel="nofollow">http://alleninstitute.github.io/AllenSDK/biophysical_models.html</a></p>
<p>but where exactly do I write this code? Python? Anaconda promt? Spider?</p>
<p>Neither Python nor Anaconda accepts the code as is, so I guess I need to access the allensdk package first. How do I do that?</p>
<p>Thank you.</p>
| -1 |
2016-09-21T19:19:50Z
| 40,004,532 |
<p>Thanks for the question. The first example in your documentation link shows how to download a model, as you've probably done. I do this by writing a python script and running it from the command prompt.</p>
<p>The script looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from allensdk.api.queries.biophysical_api import BiophysicalApi
bp = BiophysicalApi()
bp.cache_stimulus = True # change to False to not download the large stimulus NWB file
neuronal_model_id = 473862496 # here's your model
bp.cache_data(neuronal_model_id, working_directory='neuronal_model')
</code></pre>
<p>You can run this from the command prompt (Anaconda command prompt is fine) as follows:</p>
<pre><code>$ python <your_script_name.py>
</code></pre>
<p>Moving down the documentation, the next step to running the model is to run the following on the command prompt:</p>
<pre><code>$ cd neuronal_model
$ nrnivmodl ./modfiles # compile the model (only needs to be done once)
$ python -m allensdk.model.biophysical.runner manifest.json
</code></pre>
<p>First you step into the working directory you specified in the first script. </p>
<p>Next you run a NEURON binary (nrnivmodl), which compiles your modfiles. You'll need to have NEURON with python bindings installed and on your PATH to run this. I'm not sure about this, but I think compiling modfiles in Windows requires a different command/workflow. If that's your operating system, I'll have to refer you here since I'm not too familiar with NEURON on Windows:</p>
<p><a href="https://www.neuron.yale.edu/neuron/static/docs/nmodl/mswin.html" rel="nofollow">https://www.neuron.yale.edu/neuron/static/docs/nmodl/mswin.html</a></p>
<p>Next you are calling a script packaged with the allensdk for running models, based on one of the files we downloaded in the first script (manifest.json).</p>
| 0 |
2016-10-12T17:12:38Z
|
[
"python",
"neuron-simulator"
] |
TypeError during assignment to attribute reference?
| 39,624,929 |
<p>Reading about <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow">Assignment statements</a> in Python's docs I found this:</p>
<blockquote>
<p>If the target is an attribute reference: The primary expression in the reference is evaluated. It should yield an object with assignable attributes; if this is not the case, <code>TypeError</code> is raised. That object is then asked to assign the assigned object to the given attribute; if it cannot perform the assignment, it raises an exception (usually but not necessarily <code>AttributeError</code>).</p>
</blockquote>
<p>I'm wondering how to get this <code>TypeError</code>?</p>
<p><strong>Which of Python's types don't have a routine for setting attributes?</strong></p>
| 1 |
2016-09-21T19:21:59Z
| 39,624,994 |
<p>If you want to raise TypeError in your code:</p>
<pre><code>raise TypeError
</code></pre>
<p>I suggest you read up on exceptions and exception handling in Python for more information. <a href="https://docs.python.org/3/tutorial/errors.html" rel="nofollow">https://docs.python.org/3/tutorial/errors.html</a> </p>
| -2 |
2016-09-21T19:25:51Z
|
[
"python",
"python-3.x",
"typeerror",
"attr"
] |
TypeError during assignment to attribute reference?
| 39,624,929 |
<p>Reading about <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow">Assignment statements</a> in Python's docs I found this:</p>
<blockquote>
<p>If the target is an attribute reference: The primary expression in the reference is evaluated. It should yield an object with assignable attributes; if this is not the case, <code>TypeError</code> is raised. That object is then asked to assign the assigned object to the given attribute; if it cannot perform the assignment, it raises an exception (usually but not necessarily <code>AttributeError</code>).</p>
</blockquote>
<p>I'm wondering how to get this <code>TypeError</code>?</p>
<p><strong>Which of Python's types don't have a routine for setting attributes?</strong></p>
| 1 |
2016-09-21T19:21:59Z
| 39,625,088 |
<p>This documentation line is just really out of date. It dates back to at least <a href="https://docs.python.org/release/1.4/ref/ref6.html#HDR2" rel="nofollow">Python 1.4</a>, long before type/class unification. I believe back then, trying to do something like</p>
<pre><code>x = 1
x.foo = 3
</code></pre>
<p>would have produced a TypeError, but I wasn't writing Python back then, and I don't have a sufficiently ancient interpreter version to test it.</p>
<p>If you look at the <a href="https://hg.python.org/cpython/file/3.5/Objects/object.c#l914" rel="nofollow">source code</a> for attribute assignment dispatch, you can see that the documented check still exists:</p>
<pre><code>if (tp->tp_setattro != NULL) {
    ...
    return ...;
}
if (tp->tp_setattr != NULL) {
    ...
    return ...;
}
Py_DECREF(name);
assert(name->ob_refcnt >= 1);
if (tp->tp_getattr == NULL && tp->tp_getattro == NULL)
    PyErr_Format(PyExc_TypeError,
                 "'%.100s' object has no attributes "
                 "(%s .%U)",
                 tp->tp_name,
                 value==NULL ? "del" : "assign to",
                 name);
else
    PyErr_Format(PyExc_TypeError,
                 "'%.100s' object has only read-only attributes "
                 "(%s .%U)",
                 tp->tp_name,
                 value==NULL ? "del" : "assign to",
                 name);
return -1;
</code></pre>
<p>If an object's type has no routine for setting attributes, Python raises an error, complaining about "no attributes" or "only read-only attributes" depending on whether the type has a routine for getting attributes. I believe in the early days, types like <code>int</code> would have gone down this code path. However, all types now inherit such routines from <code>object</code>, so I don't think this code path is ever taken.</p>
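<p>This is easy to check in a modern interpreter: instances of built-in types fail with <code>AttributeError</code>, not the documented <code>TypeError</code>, precisely because they inherit the attribute-setting routines from <code>object</code>. A small sketch:</p>

```python
# An int instance inherits its attribute-setting routine from object, so
# the failure is an AttributeError about the unknown attribute, not the
# TypeError the old documentation describes.
x = 1
try:
    x.foo = 3
except AttributeError as e:
    print(type(e).__name__)  # -> AttributeError
```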
<p>There's a related code path in <a href="https://hg.python.org/cpython/file/3.5/Objects/typeobject.c#l3005" rel="nofollow"><code>type.__setattr__</code></a> that raises a <code>TypeError</code> for setting attributes on types written in C. This code path is still taken, but it's not as general as what the documentation describes:</p>
<pre><code>if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) {
    PyErr_Format(
        PyExc_TypeError,
        "can't set attributes of built-in/extension type '%s'",
        type->tp_name);
    return -1;
}
</code></pre>
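<p>That path is easy to trigger: assigning an attribute on a built-in type object itself still raises <code>TypeError</code> (the exact message varies between CPython versions):</p>

```python
# type.__setattr__ rejects attribute assignment on C-level (non-heap) types
try:
    int.foo = 3
    raised = False
except TypeError:
    raised = True
print(raised)  # -> True
```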
| 4 |
2016-09-21T19:31:48Z
|
[
"python",
"python-3.x",
"typeerror",
"attr"
] |
TypeError during assignment to attribute reference?
| 39,624,929 |
<p>Reading about <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow">Assignment statements</a> in Python's docs I found this:</p>
<blockquote>
<p>If the target is an attribute reference: The primary expression in the reference is evaluated. It should yield an object with assignable attributes; if this is not the case, <code>TypeError</code> is raised. That object is then asked to assign the assigned object to the given attribute; if it cannot perform the assignment, it raises an exception (usually but not necessarily <code>AttributeError</code>).</p>
</blockquote>
<p>I'm wondering how to get this <code>TypeError</code>?</p>
<p><strong>Which of Python's types don't have a routine for setting attributes?</strong></p>
| 1 |
2016-09-21T19:21:59Z
| 39,625,177 |
<p>This code produces a <code>TypeError</code> and it seems like it is what the documentation describes:</p>
<pre><code>>>> def f(): pass
...
>>> f.func_globals = 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
</code></pre>
<p>But is this <code>TypeError</code> really raised for the reason the documentation describes? I sincerely doubt it. I guess the <code>func_globals</code> implementation simply raises <code>TypeError</code> if you try to assign something to it.</p>
<p><strong>BTW...</strong></p>
<p>I would actually expect the same in the next example, but it is an <code>AttributeError</code> instead:</p>
<pre><code>>>> class A(object):
... __slots__ = 'a',
...
>>> A().b = 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'b'
</code></pre>
<p><strong>Update (Python 3)</strong></p>
<p>The above was in Python 2.7. In Python 3, there is no <code>func_globals</code>, so this is not applicable (you can assign anything to it).</p>
<p>The attributes a function has in Python 3 seem to raise an <code>AttributeError</code> when they are read-only.</p>
<pre><code>>>> f.__globals__ = 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: readonly attribute
</code></pre>
<p>This makes perfect sense to me. Perhaps this part of the documentation is just a relic as far as Python 3 is concerned.</p>
| 0 |
2016-09-21T19:38:06Z
|
[
"python",
"python-3.x",
"typeerror",
"attr"
] |
Need a bidirectional Map which allows duplicate values, and returns list of values given a key
| 39,624,938 |
<p>I have a need for a data structure which I think should be a common need.<br>
I need a Map which allows duplicate values (most maps commonly do, but wait.. getting to the point).. see this sample:<br></p>
<pre><code>k1 -> v1
k2 -> v1
k3 -> v1
k4 -> v2
k5 -> v2
</code></pre>
<p>Now if I do map.getByValue(v1), I should get Set(k1, k2, k3) . Otherwise it should behave like a 'normal' map. It should have high performance, so no for-loop kind of suggestions please.<br>
Also, note that the following will not do for me<br></p>
<pre><code>v1 -> (k1, k2, k3)
v2 -> (k4, k5)
</code></pre>
<p>.. since I dont want this situation ever (note, k1 is in both lists)..</p>
<pre><code>v1 -> (k1, k2, k3)
v2 -> (k4, k5, k1)
</code></pre>
<p>I don't want to use two map solution. I am thinking of some way to keep values sorted so that getByValue(value) is still performant.</p>
| 0 |
2016-09-21T19:23:02Z
| 39,625,050 |
<p>High performance means that you need some sort of index to search for a key. There are no indexes that work in both directions, so actually you need two of them. So you have to use two multimaps, one for each direction, and a wrapper that will maintain them in consistent state.</p>
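<p>A rough Python sketch of such a wrapper (the class and method names are illustrative, not a library API): the two indexes are updated together, so a key never appears under two values at once.</p>

```python
from collections import defaultdict

class BiMultiMap:
    """Each key maps to exactly one value; a value maps back to many keys."""

    def __init__(self):
        self._fwd = {}                # key -> value
        self._rev = defaultdict(set)  # value -> set of keys

    def put(self, key, value):
        old = self._fwd.get(key)
        if old is not None:
            # keep both indexes consistent: a key never appears
            # under two different values at once
            self._rev[old].discard(key)
        self._fwd[key] = value
        self._rev[value].add(key)

    def get(self, key):
        return self._fwd[key]

    def get_by_value(self, value):
        return set(self._rev.get(value, ()))
```

<p>With the sample data from the question, <code>get_by_value('v1')</code> returns <code>{'k1', 'k2', 'k3'}</code>; re-inserting <code>k1</code> with <code>v2</code> moves it out of <code>v1</code>'s key set, which avoids exactly the duplicated-key situation the question rules out.</p>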
| 0 |
2016-09-21T19:29:18Z
|
[
"java",
"c#",
"python",
"data-structures"
] |
pymongo: no such cmd: update
| 39,624,983 |
<p>I am using TxMongo 16.1.0 (this one uses pymongo under the hood), Mongodb 2.4.14 in my program.<br>
I don't understand why I receive this pymongo.errors.OperationFailure:
(It cannot recognize <code>update</code> cmd???) </p>
<pre><code>TxMongo: command SON([('update', u'units'), ('updates', [SON([('q', {'baseIP': u'10.12.59.119'}), ('u', {'$set': {'status': 'busy'}}), ('upsert', False), ('multi', False)])]), ('writeConcern', {})]) on namespace db_test.$cmd failed with 'no such cmd: update'
</code></pre>
<p>I used Son in python to make ordered dict, but still the error.</p>
| 0 |
2016-09-21T19:25:18Z
| 39,626,444 |
<p>It turns out that this might be a bug in <code>update_one()</code> in TxMongo.
Switching to <code>update()</code> works fine.</p>
| 0 |
2016-09-21T20:56:13Z
|
[
"python",
"pymongo"
] |
AttributeError: 'module' object has no attribute 'urls'
| 39,625,054 |
<p>Python 2.7 & Django 1.10
ERROR:</p>
<pre><code>AttributeError: 'module' object has no attribute 'urls'
</code></pre>
<p><strong>main/urls.py</strong></p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin

import article

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^', include(article.urls))
]
</code></pre>
<p><strong>article/urls.py</strong></p>
<pre><code>from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.basic_one, name='basic_one')
]
</code></pre>
<p>Structure:</p>
<p>APP/main/urls.py</p>
<p>APP/article/urls.py</p>
| 0 |
2016-09-21T19:29:35Z
| 39,625,409 |
<p>It's easier to just put quotes around the include and not import article. Like so:</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^', include('article.urls'))
]
</code></pre>
| 0 |
2016-09-21T19:52:02Z
|
[
"python",
"django",
"python-2.7",
"web"
] |
AttributeError: 'module' object has no attribute 'urls'
| 39,625,054 |
<p>Python 2.7 & Django 1.10
ERROR:</p>
<pre><code>AttributeError: 'module' object has no attribute 'urls'
</code></pre>
<p><strong>main/urls.py</strong></p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin

import article

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^', include(article.urls))
]
</code></pre>
<p><strong>article/urls.py</strong></p>
<pre><code>from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.basic_one, name='basic_one')
]
</code></pre>
<p>Structure:</p>
<p>APP/main/urls.py</p>
<p>APP/article/urls.py</p>
| 0 |
2016-09-21T19:29:35Z
| 39,630,954 |
<p>In main/urls.py</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
from article import urls

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^', include(urls))
]
</code></pre>
<p>Or</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^', include('article.urls'))
]
</code></pre>
<p>You can learn more from <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/#including-other-urlconfs" rel="nofollow">Django Documentation</a> </p>
| 0 |
2016-09-22T05:31:25Z
|
[
"python",
"django",
"python-2.7",
"web"
] |
Improving code for similar code found
| 39,625,092 |
<p>I passed codeclimate to my code, and I obtained the following:</p>
<blockquote>
<p>Similar code found in 1 other location</p>
</blockquote>
<p>This is my code:</p>
<pre><code>stradd = 'iterable_item_added'
if stradd in ddiff:
    added = ddiff[stradd]
    npos_added = parseRoots(added)
    dics_added = makeAddDicts(localTable, pk, npos_added)
else:
    dics_added = []

strchanged = 'values_changed'
if strchanged in ddiff:
    updated = ddiff[strchanged]
    npos_updated = parseRoots(updated)
    dics_updated = makeUpdatedDicts(localTable, pk, npos_updated)
else:
    dics_updated = []
</code></pre>
<p>The blocks handling <code>iterable_item_added</code> and <code>values_changed</code> are nearly identical. How can I change this?</p>
| 0 |
2016-09-21T19:32:05Z
| 39,625,366 |
<p>Just abstract the parameters and create a helper method:</p>
<pre><code>def testmethod(name, localTable, m, ddiff, pk):
    if name in ddiff:
        npos = parseRoots(ddiff[name])
        rval = m(localTable, pk, npos)
    else:
        rval = []
    return rval
</code></pre>
<p>then call it:</p>
<pre><code>dics_added = testmethod('iterable_item_added',localTable,makeAddDicts,ddiff,pk)
dics_updated = testmethod('values_changed',localTable,makeUpdatedDicts,ddiff,pk)
</code></pre>
<p>Note: be careful when factorizing code, you can introduce bugs (and make code more readable :)).</p>
<p>Also: that helper method forces you to pass a lot of local variables. Maybe creating an object with member variables would simplify even more.</p>
<p>In that case, it appears to be a bit "overkill" to do that in order to make your review tool shut up.</p>
| 1 |
2016-09-21T19:49:38Z
|
[
"python",
"optimization",
"code-climate"
] |
Fitting only 2 parameters of a function with many parameters in python
| 39,625,122 |
<p>I know there is a question <br>
<a href="http://stackoverflow.com/questions/12208634/fitting-only-one-paramter-of-a-function-with-many-parameters-in-python">Fitting only one paramter of a function with many parameters in python</a> <br>
but I have a slightly different situation: a problem with the parameters of a lambda function.</p>
<p>I am trying to fit Lorentz func</p>
<pre><code>def lorentz(ph, A, xc, w, f0):
    return f0 + A * w**2 / (w**2 + (ph - xc)**2)
</code></pre>
<p>If I am fitting only one parameter (xc) it works fine.</p>
<pre><code>p1, p2, p3, p4 = params
popt, pcov = curve_fit(lambda x, xc: lorentz(x, p1, xc, p3, p4), abjd, adata, bounds=param_bounds)
</code></pre>
<p>But if I try to fit only 2 parameters (a, xc) it fails</p>
<pre><code>p1, p2, p3, p4 = params
popt, pcov = curve_fit(lambda x, a, xc: lorentz(x, a, xc, p3, p4), abjd, adata, bounds=param_bounds)
</code></pre>
<p>Error message is</p>
<pre><code>Traceback (most recent call last):
  File "template.py", line 294, in <module>
    popt, pcov = curve_fit(lambda x, a, xc: lorentz(x, a, xc, p3, p4), abjd, adata, bounds=param_bounds)
  File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 683, in curve_fit
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/_lsq/least_squares.py", line 769, in least_squares
    f0 = fun_wrapped(x0)
  File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/_lsq/least_squares.py", line 764, in fun_wrapped
    return np.atleast_1d(fun(x, *args, **kwargs))
  File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 455, in func_wrapped
    return func(xdata, *params) - ydata
TypeError: <lambda>() takes exactly 3 arguments (2 given)
</code></pre>
| 0 |
2016-09-21T19:34:22Z
| 39,625,729 |
<p>Here is the solution for all 4 parameters of Lorentz function</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def lorentz(ph, A, xc, w, f0):
    return f0 + A * w**2 / (w**2 + (ph - xc)**2)

A, xc, w, f0 = 2, 2, 2, 2  # true values
ph = np.linspace(-5, 10, 100)
y = lorentz(ph, A, xc, w, f0)
ydata = y + 0.15 * np.random.normal(size=len(ph))  # sample data

popt, pcov = curve_fit(lambda x, _A, _xc: lorentz(x, _A, _xc, w, f0), ph, ydata, bounds=([1, 1], [3, 3]))
A, xc = popt  # fitted values (only two)
</code></pre>
<p>You can easily add or remove fitted parameters by moving them between the <code>lorentz()</code> signature and the fixed values captured by the lambda.</p>
<p>The result looks like this
<a href="http://i.stack.imgur.com/TNhV0.png" rel="nofollow"><img src="http://i.stack.imgur.com/TNhV0.png" alt="enter image description here"></a></p>
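<p>As an aside, <code>functools.partial</code> is another way to fix some parameters without writing a lambda; a small sketch (the fixed values 2.0 are the ones used in the example above):</p>

```python
from functools import partial

def lorentz(ph, A, xc, w, f0):
    return f0 + A * w**2 / (w**2 + (ph - xc)**2)

# Fix w and f0; the resulting callable takes only (ph, A, xc), which is
# the shape curve_fit expects when fitting two parameters (pass p0
# explicitly, since curve_fit cannot introspect a partial's signature).
fixed = partial(lorentz, w=2.0, f0=2.0)
print(fixed(2.0, 2.0, 2.0))  # -> 4.0  (at ph == xc the peak value is f0 + A)
```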
| 1 |
2016-09-21T20:10:52Z
|
[
"python",
"lambda",
"scipy",
"curve-fitting"
] |
Selecting the right else statement depending on user input
| 39,625,125 |
<p>I'm working on a little program where the user inputs a price and then the program would output the shipping cost based on the input.</p>
<pre><code># Initialize variables
lowUS = 6.00
medUS = 9.00
highUS = 12.00
lowCan = 8.00
medCan = 12.00
highCan = 15.00

# Greet user and ask for input
print("Welcome to Ben's shipping calculator!")
print("We will calculate your shipping cost for you!")
orderTotal = float(input("Please input your total amount."))
country = input("In which country do you reside? Please type C for Canada or U or USA. ")

# Validate input
while country not in {'u', 'c', 'C', 'U'}:
    print("Invalid input. Please try again. ")
    country = input("In which country do you reside? Please type C for Canada or U or USA. ")

# Determine how much the shipping fee is
if country == 'U' or country == 'u':
    if orderTotal <= 50.00:
        if orderTotal > 50.00 and orderTotal <= 100.00:
            if orderTotal > 100.00 and orderTotal <= 150.00:
                if orderTotal > 150.00:
                    print("Your shipping is free and your grand total is", orderTotal)
                else:
                    print("Your shipping fee is: ", highUS)
                    orderTotal = (orderTotal + highUS)
            else:
                print("Your shipping fee is: ", medUS)
                orderTotal = (orderTotal + medUS)
        else:
            print("Your shipping fee is: ", lowUS)
            orderTotal = (orderTotal + lowUS)
elif country == 'c' or country == 'C':
    if orderTotal <= 50.00:
        if orderTotal > 50.00 and orderTotal <= 100.00:
            if orderTotal > 100.00 and orderTotal <= 150.00:
                if orderTotal > 150.00:
                    print("Your shipping is free and your grand total is", orderTotal)
                else:
                    print("Your shipping fee is: ", highCan)
                    orderTotal = (orderTotal + highCan)
            else:
                print("Your shipping fee is: ", medCan)
                orderTotal = (orderTotal + medCan)
        else:
            print("Your shipping fee is: ", lowCan)
            orderTotal = (orderTotal + lowCan)

print("Your grand total is: $", orderTotal)
</code></pre>
<p>I am very new to python and programming and I'm not sure if this is a good way to go at it, but I'm learning if-else statements so I thought I would give it a try. So far it only works if you input "50" for the amount. It will calculate based off of the country only for "50". I'm not sure where I went wrong, if someone could help and explain, that would be great.</p>
| 0 |
2016-09-21T19:34:31Z
| 39,625,227 |
<p>Your first <code>if</code> only enters if it's less than or equal to 50... the next check you make is if it's greater than 50 which it can't be - so that block will never execute... So your <code>else</code> clause is the only one that can execute... Basically, your nested <code>if</code> statements won't execute because the criteria for doing so is already excluded from the enclosing <code>if</code>.</p>
<p>You're best off restructuring your logic as:</p>
<pre><code>if orderTotal > 150:
    # do something
elif orderTotal > 100:
    # do something
elif orderTotal > 50:
    # do something
else:  # 50 or under...
    # do something else
</code></pre>
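<p>Applied to the US fees from the question, that structure might look like the sketch below (it assumes the intended tiers are: up to 50 low, up to 100 medium, up to 150 high, free above 150):</p>

```python
lowUS, medUS, highUS = 6.00, 9.00, 12.00  # fees from the question

def us_shipping_fee(order_total):
    if order_total > 150.00:
        return 0.00        # free shipping
    elif order_total > 100.00:
        return highUS
    elif order_total > 50.00:
        return medUS
    else:                  # 50 or under
        return lowUS

print(us_shipping_fee(75.00))  # -> 9.0
```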
| 1 |
2016-09-21T19:41:34Z
|
[
"python",
"variables",
"if-statement"
] |
Selecting the right else statement depending on user input
| 39,625,125 |
<p>I'm working on a little program where the user inputs a price and then the program would output the shipping cost based on the input.</p>
<pre><code># Initialize variables
lowUS = 6.00
medUS = 9.00
highUS = 12.00
lowCan = 8.00
medCan = 12.00
highCan = 15.00

# Greet user and ask for input
print("Welcome to Ben's shipping calculator!")
print("We will calculate your shipping cost for you!")
orderTotal = float(input("Please input your total amount."))
country = input("In which country do you reside? Please type C for Canada or U or USA. ")

# Validate input
while country not in {'u', 'c', 'C', 'U'}:
    print("Invalid input. Please try again. ")
    country = input("In which country do you reside? Please type C for Canada or U or USA. ")

# Determine how much the shipping fee is
if country == 'U' or country == 'u':
    if orderTotal <= 50.00:
        if orderTotal > 50.00 and orderTotal <= 100.00:
            if orderTotal > 100.00 and orderTotal <= 150.00:
                if orderTotal > 150.00:
                    print("Your shipping is free and your grand total is", orderTotal)
                else:
                    print("Your shipping fee is: ", highUS)
                    orderTotal = (orderTotal + highUS)
            else:
                print("Your shipping fee is: ", medUS)
                orderTotal = (orderTotal + medUS)
        else:
            print("Your shipping fee is: ", lowUS)
            orderTotal = (orderTotal + lowUS)
elif country == 'c' or country == 'C':
    if orderTotal <= 50.00:
        if orderTotal > 50.00 and orderTotal <= 100.00:
            if orderTotal > 100.00 and orderTotal <= 150.00:
                if orderTotal > 150.00:
                    print("Your shipping is free and your grand total is", orderTotal)
                else:
                    print("Your shipping fee is: ", highCan)
                    orderTotal = (orderTotal + highCan)
            else:
                print("Your shipping fee is: ", medCan)
                orderTotal = (orderTotal + medCan)
        else:
            print("Your shipping fee is: ", lowCan)
            orderTotal = (orderTotal + lowCan)

print("Your grand total is: $", orderTotal)
</code></pre>
</code></pre>
<p>I am very new to python and programming and I'm not sure if this is a good way to go at it, but I'm learning if-else statements so I thought I would give it a try. So far it only works if you input "50" for the amount. It will calculate based off of the country only for "50". I'm not sure where I went wrong, if someone could help and explain, that would be great.</p>
| 0 |
2016-09-21T19:34:31Z
| 39,625,322 |
<p>Your logic is a bit screwy. If the amount is <= $50, then it cannot also be > $50, so anything inside that nested "if" will never execute:</p>
<pre><code>if orderTotal <= 50.00:
if orderTotal > 50.00 and orderTotal <= 100.00:
# ** this will never run **
else:
# This will run, if orderTotal <= 50.00
</code></pre>
<p>Nothing will happen if <code>orderTotal > 50.00</code>, because there's no <code>else</code> for the <code>if orderTotal <= 50.00</code> test. @Jon Clements' answer shows the correct way to structure your code.</p>
| 0 |
2016-09-21T19:47:15Z
|
[
"python",
"variables",
"if-statement"
] |
Algorithm Complexity Analysis for Variable Length Queue BFS
| 39,625,159 |
<p>I have developed an algorithm that is kind of a variation of a BFS on a tree, but it includes a probabilistic factor. To check whether a node is the one I am looking for, a statistical test is performed (I won't get into too much detail about this). If the test result is positive, the node is added to another queue (called <code>tested</code>). But when a node fails the test, the nodes in the <code>tested</code> need to be tested again, so this queue is appended to the one with the nodes yet to be tested.</p>
<p>In Python, considering that the queue <code>q</code> starts with the root node:</p>
<pre><code>...
tested = []
while q:
curr = q.pop(0)
p = statistical_test(curr)
if p:
tested.append(curr)
else:
q.extend(curr.children())
q.extend(tested)
tested = []
return tested
</code></pre>
<p>As the algorithm is probabilistic, more than one node might be in <code>tested</code> after the search, but that is expected. The problem I am facing is trying to estimate this algorithm's complexity because I can't simply use BFS's complexity as <code>q</code> and <code>tested</code> will have a variable length.</p>
<p>I don't need a closed and definitive answer for this. What I need are some insights on how to deal with this situation.</p>
| 1 |
2016-09-21T19:36:58Z
| 39,625,684 |
<p>The worst case scenario is the following process:</p>
<blockquote>
<ol>
<li>All elements <em>1 : n-1</em> pass the test and are appended to the <code>tested</code> queue.</li>
<li>Element <em>n</em> fails the test, is removed from <code>q</code>, and <em>n-1</em> elements from <code>tested</code> are pushed back into <code>q</code>.</li>
<li>Go back to step 1 with <em>n = n-1</em></li>
</ol>
</blockquote>
<p>This is a classic O(n<sup>2</sup>) process.</p>
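<p>The worst case above can be simulated directly; a small sketch (names are mine) that counts the statistical tests and confirms the triangular n(n+1)/2 total:</p>

```python
def worst_case_test_count(n):
    # Simulate the queue process where, in each round, every node passes
    # except the largest remaining one, which fails on its first test
    # (and has no children), flushing `tested` back into the queue.
    q = list(range(n))
    tested = []
    failed = set()
    tests = 0
    while q:
        curr = q.pop(0)
        tests += 1
        is_largest_remaining = all(curr >= other for other in q + tested)
        if is_largest_remaining and curr not in failed:
            failed.add(curr)          # this node fails the test
            q.extend(tested)          # everything tested so far goes back
            tested = []
        else:
            tested.append(curr)       # this node passes
    return tests

print(worst_case_test_count(10))      # 55 == 10 * 11 // 2
```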
| 0 |
2016-09-21T20:08:01Z
|
[
"python",
"algorithm",
"time-complexity",
"analysis"
] |
Cannot convert {'price_total__sum': Decimal('258.00')} to Decimal
| 39,625,175 |
<p>I get this error when I do a Sum of an entire column:</p>
<pre><code>Cannot convert {'price_total__sum': Decimal('258.00')} to Decimal
</code></pre>
<p>here the entire project: <a href="https://github.com/pierangelo1982/djangocommerce/tree/berge" rel="nofollow">https://github.com/pierangelo1982/djangocommerce/tree/berge</a></p>
<pre><code>def add_to_order(request):
if request.method == "POST":
form = AddOrderForm(request.POST)
if form.is_valid():
post = form.save(commit=False)
post.user = request.user
post.published_date = timezone.now()
post.tot_price = CartItem.objects.filter(user_id=request.user.id).aggregate(Sum('price_total'))
post.save()
cart_list = CartItem.objects.filter(user_id=request.user.id)
for cart in cart_list:
formOrder = AddOrderItemForm(request.POST)
post_cart = formOrder.save(commit=False)
post_cart.order = post
post_cart.product = cart.product
post_cart.composition = cart.composition
post_cart.price = cart.price
post_cart.quantity = cart.quantity
post_cart.total = cart.price_total
post_cart.price_discount = cart.price_discount
post_cart.price_reserved = cart.price_reserved
post_cart.save()
#cart_list.delete() #cancello carrello dopo ordine
return redirect('/order', pk=post.pk)
else:
form = AddOrderForm()
return render(request, 'order-form.html', {'form': form})
</code></pre>
<p>My Models Order, that receive the sum value, and the form:</p>
<pre><code>class Order(models.Model):
user = models.ForeignKey(User, null=True, blank=True, verbose_name="Utente")
code = models.CharField('Codice', max_length=250, null=True, blank=True)
tot_price = models.DecimalField('Prezzo', max_digits=10, decimal_places=2, blank=True, null=True)
tot_discount = models.DecimalField('Totale Scontato', max_digits=10, decimal_places=2, blank=True, null=True)
tot_price_reserved = models.DecimalField('Prezzo Scontato Riservato', max_digits=10, decimal_places=2, blank=True, null=True)
pub_date = models.DateTimeField('date published', editable=False)
inlavorazione = models.BooleanField('in lavorazione', default=False)
pagato = models.BooleanField('pagato', default=False)
spedito = models.BooleanField('spedito', default=False)
chiuso = models.BooleanField('chiuso', default=False)
def save(self, *args, **kwargs):
self.pub_date = datetime.now()
super(Order, self).save(*args, **kwargs) # Call the "real" save() method.
def __unicode__(self):
return self.pub_date.strftime('%Y-%m-%d')
class Meta:
verbose_name_plural = "Ordine"
ordering = ['id']
</code></pre>
<p>Form:</p>
<pre><code>class AddOrderForm(ModelForm):
class Meta:
model = Order
fields = ['user', 'tot_price', 'tot_discount', 'tot_price_reserved']
</code></pre>
| -1 |
2016-09-21T19:38:01Z
| 39,625,236 |
<p>As the error shows, your <code>aggregate()</code> call returns a dictionary: <code>{'price_total__sum': Decimal('258.00')}</code>. You can't set that dict as a DecimalField in the target model; you need to extract the actual value first.</p>
<pre><code>price_total = CartItem.objects.filter(user_id=request.user.id).aggregate(Sum('price_total'))
post.tot_price = price_total['price_total__sum']
</code></pre>
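<p>Note that for an empty queryset <code>aggregate()</code> returns <code>None</code> for that key, which would fail the same way when saved into a <code>DecimalField</code>. A sketch of a defensive extraction (a plain dict stands in for the aggregate result here):</p>

```python
from decimal import Decimal

# Stand-in for what .aggregate(Sum('price_total')) returns:
agg = {'price_total__sum': Decimal('258.00')}

# Pull out the scalar, guarding against an empty cart (value None):
tot_price = agg['price_total__sum'] or Decimal('0')
print(tot_price)  # 258.00
```

<p>Django also lets you choose the key yourself, e.g. <code>aggregate(total=Sum('price_total'))</code> returns <code>{'total': ...}</code>, which avoids the auto-generated <code>price_total__sum</code> name.</p>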
| 1 |
2016-09-21T19:42:08Z
|
[
"python",
"django",
"virtualenv"
] |
Difference between a+b and a.__add__(b)
| 39,625,229 |
<p>I am currently trying to understand where the difference between using <code>a+b</code> and <code>a.__add__(b)</code> is when it comes to custom classes. There are numerous websites that say that using the '+'-operator results in using the special method <code>__add__</code> - which is fine so far.</p>
<p>But when i run the following example I get two different results.</p>
<pre><code>class C:
def __add__(self, other):
print("C.__add__", self, other)
return "result"
def __radd__(self, other):
print("C.__radd__", self, other)
return "reversed result"
c = C()
print(1+c)
print()
print(1 .__add__(c))
print(int.__add__(1,c))
</code></pre>
<p>Result:</p>
<pre><code>C.__radd__ <C object at 0x7f60b92e9550> 1
reversed result
NotImplemented
NotImplemented
</code></pre>
<p>Now from what I understood, when executing <code>1+c</code> Python checks/executes the int <code>__add__</code> method - finds that there is no implementation for adding int and C objects - returns NotImplemented - which lets Python know to check object C for <code>__radd__</code> and executes the code within.</p>
<p>Why does <code>1+c</code> result in executing the <code>__radd__</code> code but the other two version are just returning <code>NotImplemented</code> without checking <code>__radd__</code> ??</p>
| 1 |
2016-09-21T19:41:46Z
| 39,625,287 |
<p><code>a+b</code> is equivalent to <code>operator.add(a, b)</code> (from the <code>operator</code> module). It starts by calling <code>a.__add__(b)</code> and then, if necessary, <code>b.__radd__(a)</code>. But if <code>issubclass(type(b), type(a))</code> and <code>b</code> overrides the reflected method, then <code>b.__radd__(a)</code> is tried first.</p>
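<p>A quick sketch of the subclass rule (class names are mine):</p>

```python
class A:
    def __add__(self, other):
        return "A.__add__"
    def __radd__(self, other):
        return "A.__radd__"

class B(A):
    # B overrides the reflected method, so for A() + B() Python tries
    # B.__radd__ before A.__add__.
    def __radd__(self, other):
        return "B.__radd__"

print(A() + B())  # B.__radd__
print(A() + A())  # A.__add__
```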
<p>Based on the <a href="https://docs.python.org/3/reference/datamodel.html#special-method-names" rel="nofollow">docs on "special" methods</a>:</p>
<ul>
<li><p>Regarding <a href="https://docs.python.org/3/reference/datamodel.html#object.__add__" rel="nofollow"><code>__add__()</code></a>:</p>
<blockquote>
<p><code>__add__()</code> is called to implement the binary arithmetic "+" operation. For instance, to evaluate the expression x + y, where x is an instance of a class that has an <code>__add__()</code> method, <code>x.__add__(y)</code> is called. </p>
<p>If one of those methods does not support the operation with the supplied arguments, it should return <strong>NotImplemented</strong>.</p>
</blockquote></li>
<li><p>Regarding <a href="https://docs.python.org/3/reference/datamodel.html#object.__radd__" rel="nofollow"><code>__radd__()</code></a>:</p>
<blockquote>
<p>These functions are only called if the left operand does not support the corresponding operation and the operands are of different types. For instance, to evaluate the expression x + y, where y is an instance of a class that has an <code>__radd__()</code> method, <code>y.__radd__(x)</code> is called if <code>x.__add__(y)</code> returns NotImplemented.</p>
<p>If the right operand's type is a subclass of the left operand's type and that subclass provides the reflected method for the operation, this method will be called before the left operand's non-reflected method. This behavior allows subclasses to override their ancestors' operations.</p>
</blockquote></li>
</ul>
<p><strong>Explanation with the examples based on the behaviour:</strong></p>
<p><strong>Case 1:</strong></p>
<pre><code>>>> print 1+c
('C.__radd__', <__main__.C instance at 0x7ff5631397a0>, 1)
reversed result
</code></pre>
<p>The reflected method <code>__radd__</code> is only called if the left operand does not support the corresponding operation and the operands are of different types. Here, <code>int.__add__(1, c)</code> returns <code>NotImplemented</code> because <code>int</code> does not know how to add a <code>C</code> instance, so Python falls back to <code>c.__radd__(1)</code>. If <code>__radd__</code> were not implemented in the <code>C</code> class, a <code>TypeError</code> would be raised instead.</p>
<p><strong>Case2:</strong></p>
<pre><code>>>> 1 .__add__(c)
NotImplemented
>>> c .__add__(1)
('C.__add__', <__main__.C instance at 0x7ff563139830>, 1)
'result'
</code></pre>
<p><code>1 .__add__(c)</code> returns <code>NotImplemented</code> because <code>int.__add__</code> does not support adding a <code>C</code> instance, and calling the dunder method directly like this skips the fallback to <code>__radd__</code>. <code>c.__add__(1)</code> works because the <code>C</code> class supports that itself.</p>
<p><strong>Case 3:</strong></p>
<pre><code>>>> int.__add__(1, c)
NotImplemented
>>> C.__add__(c, 1)
('C.__add__', <__main__.C instance at 0x7ff5610add40>, 1)
'result'
</code></pre>
<p>Similar to <code>case 2</code>, but here the call is made via the class, with an object of that class as the first argument. The behaviour is the same.</p>
<p><strong>Case 4:</strong> </p>
<pre><code>>>> int.__add__(c, 1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: descriptor '__add__' requires a 'int' object but received a 'instance'
>>> C.__add__(1, c)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method __add__() must be called with C instance as first argument (got int instance instead)
</code></pre>
<p>The reverse of <code>case 3</code>. As is clear from the stack trace, <code>__add__</code> expects an object of the calling class as its first argument; passing anything else raises an exception.</p>
| 2 |
2016-09-21T19:44:47Z
|
[
"python"
] |
find first instance of a row in pandas dataframe matching a criterion
| 39,625,302 |
<p>I have a dataframe giving event time (in days) and a value associated with each event. </p>
<p>sorry for placing this in as code snippet, not sure of any other way to show format as a table in this question.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>+-----------+----------+
| EventTime | Value |
+-----------+----------+
| 389.9067 | 0.076014 |
| 670.9632 | 0.190521 |
| 1012.2839 | 0.266599 |
| 1025.2452 | 0.355095 |
| 1347.1064 | 0.45189 |
| 3554.909 | 0.64213 |
| 3932.491 | 0.688693 |
| 4450.6369 | 0.730536 |
| 4819.5832 | 0.746905 |
| 6252.0017 | 0.880531 |
| 6951.3345 | 0.898307 |
| 7607.0877 | 0.945048 |
| 9044.0014 | 1.002455 |
| 9433.6679 | 1.083201 |
+-----------+----------+</code></pre>
</div>
</div>
</p>
<p>I am interested in obtaining the event time associated with the first value >= a given input, e.g. if input =0.40, I want to report 1347.1064</p>
<p>Ideally, I would like a general solution where I supply a list of value thresholds and the dataframe, e.g. (.4, .7, .9) and obtain back a list (or any other data structure) with the corresponding event times.</p>
<p>Looked around, did not see anything obvious in terms of a solution, but probably just missing something or my ignorance of pandas, trying to learn.</p>
<p>Thanks in advance</p>
| 0 |
2016-09-21T19:45:50Z
| 39,625,407 |
<p>Here's one approach using <code>searchsorted</code> -</p>
<pre><code>df.EventTime[df.Value.searchsorted([.4,.7,.9])]
</code></pre>
<p>Sample run -</p>
<pre><code>In [281]: df
Out[281]:
EventTime Value
0 333.690569 0.097736
1 942.624952 0.136822
2 211.588088 0.246093
3 514.476542 0.483235
4 650.769771 0.643968
5 457.457053 0.687587
6 10.519801 0.730046
7 692.091846 0.833983
8 210.612897 0.922743
9 512.066182 0.964927
In [282]: df.EventTime[df.Value.searchsorted([.4,.7,.9])]
Out[282]:
3 514.476542
6 10.519801
8 210.612897
Name: EventTime, dtype: float64
</code></pre>
<p>If you need the <code>EventTime</code> values as an array, use <code>df.EventTime.values</code> instead.</p>
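<p>For reference, <code>searchsorted</code> is essentially a vectorized binary search; a sketch of the same idea with the stdlib <code>bisect</code> module (assuming, as in the question, that <code>Value</code> is sorted ascending):</p>

```python
import bisect

values = [0.076014, 0.190521, 0.266599, 0.355095, 0.45189, 0.64213]
times  = [389.9067, 670.9632, 1012.2839, 1025.2452, 1347.1064, 3554.909]

# bisect_left returns the first index i with values[i] >= threshold
result = [times[bisect.bisect_left(values, t)] for t in (0.40, 0.60)]
print(result)  # [1347.1064, 3554.909]
```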
| 2 |
2016-09-21T19:51:51Z
|
[
"python",
"pandas"
] |
Pandas Grouping By Datetime
| 39,625,328 |
<p>I'm attempting to count the number of users that login to a system on an hourly basis on a given date. The date I have resembles: </p>
<pre><code>df=
Name Date
name_1 2012-07-12 22:20:00
name_1 2012-07-16 22:19:00
name_1 2013-12-16 17:50:00
...
name_2 2010-01-11 19:54:00
name_2 2010-02-06 12:10:00
...
name_2 2012-07-18 22:12:00
...
name_5423 2013-11-23 10:21:00
</code></pre>
<p>since I'm not interested in the users name I've deleted that column. I manage to create a grouped data structure and a new dataframe <code>df2</code> using the following command </p>
<pre><code>grp = df.groupby(by=[df.Date.map(lambda x: (x.year, x.month, x.day, x.hour))])
df2 = pd.DataFrame({'Count' : grp.size()}).reset_index()
</code></pre>
<p><code>grp</code> converts the <code>datetime</code> type into a tuple of <code>(year, month, day, hour)</code>.</p>
<p>I'm able to convert it back to a <code>datetime</code> type using a <code>for</code> loop </p>
<pre><code>for i in range(len(df2)):
proper_date = datetime.datetime(*df2['Date'][i])
df2.set_value(i, 'Date', proper_date)
</code></pre>
<p>What I'm wondering is if there is a better/more efficient way of going about this?</p>
| 1 |
2016-09-21T19:47:25Z
| 39,625,417 |
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by column <code>Date</code> converted to <code>h</code> and aggregate <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p>
<pre><code>print (df.Date.values.astype('datetime64[h]'))
['2012-07-12T22+0200' '2012-07-16T22+0200' '2013-12-16T17+0100'
'2010-01-11T19+0100' '2010-02-06T12+0100' '2012-07-18T22+0200'
'2013-11-23T10+0100']
print (df.Name.groupby([df.Date.values.astype('datetime64[h]')]).size())
2010-01-11 19:00:00 1
2010-02-06 12:00:00 1
2012-07-12 22:00:00 1
2012-07-16 22:00:00 1
2012-07-18 22:00:00 1
2013-11-23 10:00:00 1
2013-12-16 17:00:00 1
dtype: int64
</code></pre>
<p>Another solution:</p>
<pre><code>print (df.Date.values.astype('<M8[h]'))
['2012-07-12T22+0200' '2012-07-16T22+0200' '2013-12-16T17+0100'
'2010-01-11T19+0100' '2010-02-06T12+0100' '2012-07-18T22+0200'
'2013-11-23T10+0100']
print (df.Name.groupby([df.Date.values.astype('<M8[h]')]).size())
2010-01-11 19:00:00 1
2010-02-06 12:00:00 1
2012-07-12 22:00:00 1
2012-07-16 22:00:00 1
2012-07-18 22:00:00 1
2013-11-23 10:00:00 1
2013-12-16 17:00:00 1
dtype: int64
</code></pre>
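<p>The same hour-truncation idea can be sketched without pandas, using only the standard library (the sample timestamps are mine):</p>

```python
from datetime import datetime
from collections import Counter

logins = [
    datetime(2012, 7, 12, 22, 20),
    datetime(2012, 7, 12, 22, 19),
    datetime(2013, 12, 16, 17, 50),
]

# Truncate each timestamp to the hour, then count occurrences per hour
per_hour = Counter(d.replace(minute=0, second=0, microsecond=0) for d in logins)
print(per_hour[datetime(2012, 7, 12, 22, 0)])  # 2
```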
| 3 |
2016-09-21T19:52:39Z
|
[
"python",
"datetime",
"pandas",
"time-series",
"hour"
] |
Pandas Grouping By Datetime
| 39,625,328 |
<p>I'm attempting to count the number of users that login to a system on an hourly basis on a given date. The date I have resembles: </p>
<pre><code>df=
Name Date
name_1 2012-07-12 22:20:00
name_1 2012-07-16 22:19:00
name_1 2013-12-16 17:50:00
...
name_2 2010-01-11 19:54:00
name_2 2010-02-06 12:10:00
...
name_2 2012-07-18 22:12:00
...
name_5423 2013-11-23 10:21:00
</code></pre>
<p>since I'm not interested in the users name I've deleted that column. I manage to create a grouped data structure and a new dataframe <code>df2</code> using the following command </p>
<pre><code>grp = df.groupby(by=[df.Date.map(lambda x: (x.year, x.month, x.day, x.hour))])
df2 = pd.DataFrame({'Count' : grp.size()}).reset_index()
</code></pre>
<p><code>grp</code> converts the <code>datetime</code> type into a tuple of <code>(year, month, day, hour)</code>.</p>
<p>I'm able to convert it back to a <code>datetime</code> type using a <code>for</code> loop </p>
<pre><code>for i in range(len(df2)):
proper_date = datetime.datetime(*df2['Date'][i])
df2.set_value(i, 'Date', proper_date)
</code></pre>
<p>What I'm wondering is if there is a better/more efficient way of going about this?</p>
| 1 |
2016-09-21T19:47:25Z
| 39,625,651 |
<p>Another answer using resampling. Not very efficient, I think, but interesting.</p>
<pre><code># Test data
d = {'Date': ['2012-07-12 22:20:00', '2012-07-12 22:19:00', '2013-12-16 17:50:00', '2010-01-11 19:54:00', '2010-02-06 12:10:00', '2012-07-18 22:12:00'],
'Name': ['name_1', 'name_1', 'name_1', 'name_2', 'name_2', 'name_5']}
df = pd.DataFrame(d)
df['Date'] = pd.to_datetime(df['Date'])
result = df.set_index('Date')
# Resampling data for each hour
result = result.resample('H').count()
# Filtering to keep only hours with at least one row
result[result['Name'] > 0]
Name
Date
2010-01-11 19:00:00 1
2010-02-06 12:00:00 1
2012-07-12 22:00:00 2
2012-07-18 22:00:00 1
2013-12-16 17:00:00 1
</code></pre>
| 1 |
2016-09-21T20:06:14Z
|
[
"python",
"datetime",
"pandas",
"time-series",
"hour"
] |
How do I call a secured Google Apps Script web app endpoint?
| 39,625,336 |
<p>I want to make a POST request to my Google Apps Script Web App, but I don't know what format google wants me to send my credentials in. I have restricted access to the API to anyone in my gmail domain. The script that is making the POST request is written in Python and running automatically on a server, no user input.</p>
<p>Even a link to a page of documentation that addresses this issue would be great. I searched but couldn't find anything.</p>
| -1 |
2016-09-21T19:47:59Z
| 39,665,062 |
<p>Up until a few months ago there was no authenticated way to make server-to-webapp calls; the webapp had to run as the author with anonymous access.
Recently a new service called the Execution API was released, which allows you to execute a script via a simple REST call. </p>
<p>Overview of the service:</p>
<blockquote>
<p><a href="https://developers.google.com/apps-script/guides/rest/" rel="nofollow">https://developers.google.com/apps-script/guides/rest/</a> </p>
</blockquote>
<p>Python quickstart:</p>
<blockquote>
<p><a href="https://developers.google.com/apps-script/guides/rest/quickstart/target-script" rel="nofollow">https://developers.google.com/apps-script/guides/rest/quickstart/target-script</a></p>
</blockquote>
| 0 |
2016-09-23T16:00:56Z
|
[
"python",
"google-apps-script"
] |
How to separate and graph lines in array by a value in each line
| 39,625,442 |
<p>I'm new to Python and am trying to pull data from a growing CSV, and create a live updating plot. I want to create two different x,y arrays depending on which antenna the data is coming through (one of the values in each line of data separated by commas). The data file looks like the following:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>TimeStamp, ReadCount, Antenna, Protocol, RSSI, EPC, Sensor
09/21/2016 15:24:40.560, 5499, 1, GEN2, -21, E036112D912508B3, 23.78,47.00,0.00,2.21, (Infinity%)
09/21/2016 15:24:41.138, 5506, 1, GEN2, -9, E036112D912508B3, 23.99,46.00,0.00,2.26, (Infinity%)
09/21/2016 15:24:41.623, 5513, 1, GEN2, -25, E036112D912508B3, 23.99,46.00,0.00,2.26, (Infinity%)
09/21/2016 15:24:42.120, 5520, 1, GEN2, -18, E036112D912508B3, 23.78,46.00,0.00,2.26, (Infinity%)
09/21/2016 15:24:42.633, 5527, 1, GEN2, -12, E036112D912508B3, 23.99,45.00,0.00,2.23, (Infinity%)
09/21/2016 15:24:43.211, 5534, 1, GEN2, -9, E036112D912508B3, 23.99,46.00,0.00,2.26, (Infinity%)
09/21/2016 15:24:43.744, 5541, 1, GEN2, -16, E036112D912508B3, 23.99,46.00,0.00,2.26, (Infinity%)</code></pre>
</div>
</div>
</p>
<p>Code I have which successfully shows the graph, but just takes in all lines of data into one x,y set of arrays looks like the following: </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from datetime import datetime
offset = -7.4954
slope = 0.9548
fig = plt.figure(facecolor='#07000d')
ax1 = fig.add_subplot(111, axisbg='#07000d')
ax1.spines['bottom'].set_color("#5998ff")
ax1.spines['top'].set_color("#5998ff")
ax1.spines['left'].set_color("#5998ff")
ax1.spines['right'].set_color("#5998ff")
def animate(i):
graph_data = open('SensorLog.csv','r').read()
dataArray = graph_data.split('\n')
xar=[]
yar=[]
for eachLine in dataArray:
if 'TimeStamp' not in eachLine:
if len(eachLine)>1:
t,rc,ant,prot,rssi,epc,temp,ten,powr,unpowr,inf=(eachLine.split(','))
time = datetime.strptime(t, '%m/%d/%Y %H:%M:%S.%f')
clock = time.strftime('%I:%M')
xs = matplotlib.dates.datestr2num(clock)
hfmt = matplotlib.dates.DateFormatter('%m/%d/%Y\n%I:%M:%S %p')
# Convert tension
tension = int(float(ten)*float(slope)+float(offset))
xar.append(xs)
yar.append(tension)
ax1.clear()
ax1.grid(True, color='w')
plt.ylabel('Tension (lb)',color='w', fontsize=20)
plt.title('Spiral 1 Tension',color='w', fontsize=26)
ax1.tick_params(axis='y', colors='w')
ax1.tick_params(axis='x', colors='w')
ax1.xaxis.set_major_formatter(hfmt)
fig.autofmt_xdate()
ax1.plot (xar,yar, 'c', linewidth=2)
ani = animation.FuncAnimation(fig, animate, interval=10000)
plt.show()</code></pre>
</div>
</div>
</p>
<p>I am trying to separate data pulled in on antenna 1 and 2 and plot each on the same graph (shared x axis) with different colored line plots...my attempt is here but it is not working:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from datetime import datetime
offset = -7.4954
slope = 0.9548
fig = plt.figure(facecolor='#07000d')
ax1 = fig.add_subplot(111, axisbg='#07000d')
ax2 = fig.add_subplot(111, axisbg='#07000d')
ax1.spines['bottom'].set_color("#5998ff")
ax1.spines['top'].set_color("#5998ff")
ax1.spines['left'].set_color("#5998ff")
ax1.spines['right'].set_color("#5998ff")
ax2.spines['bottom'].set_color("#5998ff")
ax2.spines['top'].set_color("#5998ff")
ax2.spines['left'].set_color("#5998ff")
ax2.spines['right'].set_color("#5998ff")
def animate(i):
graph_data = open('SensorLog.csv','r').read()
dataArray = graph_data.split('\n')
xar=[]
yar=[]
xar2=[]
yar2=[]
for eachLine in dataArray:
if 'TimeStamp' not in eachLine:
if len(eachLine)>1:
t,rc,ant,prot,rssi,epc,temp,ten,powr,unpowr,inf=(eachLine.split(','))
time = datetime.strptime(t, '%m/%d/%Y %H:%M:%S.%f')
clock = time.strftime('%I:%M')
xs = matplotlib.dates.datestr2num(clock)
hfmt = matplotlib.dates.DateFormatter('%m/%d/%Y\n%I:%M:%S %p')
# Convert tension
tension = int(float(ten)*float(slope)+float(offset))
if ant == '1':
xar.append(xs)
yar.append(tension)
if ant == '2':
xar2.append(xs)
yar2.append(tension)
ax1.clear()
ax2.clear()
ax1.grid(True, color='w')
ax2.grid(True, color='w')
plt.ylabel('Tension (lb)',color='w', fontsize=20)
plt.title('Spiral 1 Tension',color='w', fontsize=26)
ax1.tick_params(axis='y', colors='w')
ax1.tick_params(axis='x', colors='w')
ax1.xaxis.set_major_formatter(hfmt)
ax2.tick_params(axis='y', colors='w')
ax2.tick_params(axis='x', colors='w')
ax2.xaxis.set_major_formatter(hfmt)
fig.autofmt_xdate()
ax1.plot (xar,yar, 'c', linewidth=2)
ax2.plot (xar2,yar2,'r', linewidth=3)
ani = animation.FuncAnimation(fig, animate, interval=10000)
plt.show()</code></pre>
</div>
</div>
</p>
<p>Do you guys have any input on how I could successfully separate ant 1 and ant 2 data and plot it on the same figure in different colors?</p>
| 1 |
2016-09-21T19:53:51Z
| 39,636,557 |
<p>You can simply plot each data set using the same axis. </p>
<p>The following approach uses the Python <code>csv.DictReader</code> to help with reading in the data, along with a <code>defaultdict(list)</code> to automatically split the data into lists based on the antenna for each row.</p>
<p>This also adds code to addresses your comment regarding grouping data points not more than 60 seconds apart from each other, and only displaying the last 5 minutes worth of entries:</p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from datetime import datetime, timedelta
import collections
import csv
offset = -7.4954
slope = 0.9548
def plot(ax, data, colour, width):
if data:
last_dt = data[0][0]
sixty = timedelta(seconds=60)
x = []
y = []
# Plot groups of data not more than 60 seconds apart
for dt, ten in data:
if dt <= last_dt + sixty:
x.append(dt)
y.append(ten)
else:
ax.plot(matplotlib.dates.date2num(x), y, colour, linewidth=width)
x = [dt]
y = [ten]
last_dt = dt
ax.plot(matplotlib.dates.date2num(x), y, colour, linewidth=width)
def animate(i, fig, ax):
# Read in the CSV file
data = collections.defaultdict(list)
fields = ["TimeStamp", "ReadCount", "Antenna", "Protocol", "RSSI", "EPC", "Temp", "Ten", "Powr", "Unpowr", "Inf"]
with open('SensorLog.csv') as f_input:
csv_input = csv.DictReader(f_input, skipinitialspace=True, fieldnames=fields)
header = next(csv_input)
# Separate the rows based on the Antenna field
for row in csv_input:
try:
data[row['Antenna']].append(
[datetime.strptime(row['TimeStamp'], '%m/%d/%Y %H:%M:%S.%f'),
int(float(row['Ten']) * float(slope) + float(offset))])
except TypeError as e:
pass
# Drop any data points more than 5 mins older than the last entry
latest_dt = data[row['Antenna']][-1][0] # Last entry
not_before = latest_dt - timedelta(minutes=5)
for antenna, entries in data.items():
data[antenna] = [[dt, count] for dt, count in entries if dt >= not_before]
# Redraw existing axis
ax.clear()
ax.spines['bottom'].set_color("#5998ff")
ax.spines['top'].set_color("#5998ff")
ax.spines['left'].set_color("#5998ff")
ax.spines['right'].set_color("#5998ff")
hfmt = matplotlib.dates.DateFormatter('%m/%d/%Y\n%I:%M:%S %p')
ax.xaxis.set_major_formatter(hfmt)
fig.autofmt_xdate()
plot(ax, data['1'], 'c', 2) # Antenna 1
plot(ax, data['2'], 'r', 3) # Antenna 2
ax.grid(True, color='w')
plt.ylabel('Tension (lb)', color='w', fontsize=20)
plt.title('Spiral 1 Tension', color='w', fontsize=26)
ax.tick_params(axis='y', colors='w')
ax.tick_params(axis='x', colors='w')
fig = plt.figure(facecolor='#07000d')
ax = fig.add_subplot(111, axisbg='#07000d')
ani = animation.FuncAnimation(fig, animate, fargs=(fig, ax), interval=1000)
plt.show()
</code></pre>
<p>This would give you the following kind of output:</p>
<p><a href="http://i.stack.imgur.com/RsCrD.png" rel="nofollow"><img src="http://i.stack.imgur.com/RsCrD.png" alt="demo plot"></a></p>
| 0 |
2016-09-22T10:26:27Z
|
[
"python"
] |
generate a modified copy of a tuple
| 39,625,459 |
<p>I know I can't modify a tuple and I've seen ways to create a tuple from another one concatenating parts of the original manually like <a href="http://stackoverflow.com/questions/11458239/python-changing-value-in-a-tuple">here</a>.</p>
<p>But wonder whether there has emerged some pythonic way to 'modify' a tuple by implicitly creating a new one like</p>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> new_tuple = source_tuple.replace(3, 'new')
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>A possible implementation could look like this but I'm looking for a <em>built in</em> solution:</p>
<pre><code>def replace_at(source, index, value):
if isinstance(source, tuple):
return source[:index] + (value,) + source[index + 1:]
elif isinstance(source, list):
return source[:index] + [value,] + source[index + 1:]
else:
explode()
</code></pre>
<p>it's not much work to implement such a functionality but like the <code>Enum</code> has demonstrated it's sometimes better to have an implementation everyone uses..</p>
<p><strong>Edit</strong>: my goal is <em>not</em> to replace the source tuple. I know I could use lists but even in this case I would make a copy first. So I'm really just looking for a way to create a modified copy.</p>
| 1 |
2016-09-21T19:54:43Z
| 39,625,515 |
<p>You could use some kind of comprehension:</p>
<pre><code>source_tuple = ('this', 'is', 'the', 'old', 'tuple')
new_tuple = tuple((value if x != 3 else 'new'
for x, value in enumerate(source_tuple)))
# ('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>This is rather idiotic in this case, but it gives you an idea of the general concept. Better to use a <strong>list</strong>, though; after all, there you can change values by index.</p>
| 0 |
2016-09-21T19:58:27Z
|
[
"python",
"tuples",
"immutability"
] |
generate a modified copy of a tuple
| 39,625,459 |
<p>I know I can't modify a tuple and I've seen ways to create a tuple from another one concatenating parts of the original manually like <a href="http://stackoverflow.com/questions/11458239/python-changing-value-in-a-tuple">here</a>.</p>
<p>But wonder whether there has emerged some pythonic way to 'modify' a tuple by implicitly creating a new one like</p>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> new_tuple = source_tuple.replace(3, 'new')
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>A possible implementation could look like this but I'm looking for a <em>built in</em> solution:</p>
<pre><code>def replace_at(source, index, value):
if isinstance(source, tuple):
return source[:index] + (value,) + source[index + 1:]
elif isinstance(source, list):
return source[:index] + [value,] + source[index + 1:]
else:
explode()
</code></pre>
<p>it's not much work to implement such a functionality but like the <code>Enum</code> has demonstrated it's sometimes better to have an implementation everyone uses..</p>
<p><strong>Edit</strong>: my goal is <em>not</em> to replace the source tuple. I know I could use lists but even in this case I would make a copy first. So I'm really just looking for a way to create a modified copy.</p>
| 1 |
2016-09-21T19:54:43Z
| 39,626,364 |
<p>If you need to create a new tuple with a replaced element, you may use something like this:</p>
<pre><code>def replace_value_in_tuple(t, ind, value):
return tuple(
map(lambda i: value if i == ind else t[i], range(len(t)))
)
</code></pre>
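<p>A quick usage check of this helper against the tuple from the question:</p>

```python
def replace_value_in_tuple(t, ind, value):
    # build a new tuple, swapping in `value` at position `ind`
    return tuple(
        map(lambda i: value if i == ind else t[i], range(len(t)))
    )

source_tuple = ('this', 'is', 'the', 'old', 'tuple')
print(replace_value_in_tuple(source_tuple, 3, 'new'))
# ('this', 'is', 'the', 'new', 'tuple')
print(source_tuple)  # the original tuple is untouched
```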
| 0 |
2016-09-21T20:50:35Z
|
[
"python",
"tuples",
"immutability"
] |
generate a modified copy of a tuple
| 39,625,459 |
<p>I know I can't modify a tuple and I've seen ways to create a tuple from another one concatenating parts of the original manually like <a href="http://stackoverflow.com/questions/11458239/python-changing-value-in-a-tuple">here</a>.</p>
<p>But wonder whether there has emerged some pythonic way to 'modify' a tuple by implicitly creating a new one like</p>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> new_tuple = source_tuple.replace(3, 'new')
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>A possible implementation could look like this but I'm looking for a <em>built in</em> solution:</p>
<pre><code>def replace_at(source, index, value):
if isinstance(source, tuple):
return source[:index] + (value,) + source[index + 1:]
elif isinstance(source, list):
return source[:index] + [value,] + source[index + 1:]
else:
explode()
</code></pre>
<p>it's not much work to implement such functionality, but as <code>Enum</code> has demonstrated, it's sometimes better to have an implementation everyone uses.</p>
<p><strong>Edit</strong>: my goal is <em>not</em> to replace the source tuple. I know I could use lists but even in this case I would make a copy first. So I'm really just looking for a way to create a modified copy.</p>
| 1 |
2016-09-21T19:54:43Z
| 39,626,421 |
<p>You can use a slice on the tuple (which yields a new tuple) and concatenate:</p>
<pre><code>>>> x=3
>>> new_tuple=source_tuple[0:x]+('new',)+source_tuple[x+1:]
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>Which you can then support either a list or tuple like so:</p>
<pre><code>>>> def replace_at(source, index, value):
... return source[0:index]+type(source)((value,))+source[index+1:]
...
>>> replace_at([1,2,3],1,'new')
[1, 'new', 3]
>>> replace_at((1,2,3),1,'new')
(1, 'new', 3)
</code></pre>
<p>Or, just do it directly on a list:</p>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> li=list(source_tuple)
>>> li[3]='new'
>>> new_tuple=tuple(li)
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>As stated in the comments -- that is what lists are for...</p>
| 2 |
2016-09-21T20:54:43Z
|
[
"python",
"tuples",
"immutability"
] |
generate a modified copy of a tuple
| 39,625,459 |
<p>I know I can't modify a tuple and I've seen ways to create a tuple from another one concatenating parts of the original manually like <a href="http://stackoverflow.com/questions/11458239/python-changing-value-in-a-tuple">here</a>.</p>
<p>But wonder whether there has emerged some pythonic way to 'modify' a tuple by implicitly creating a new one like</p>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> new_tuple = source_tuple.replace(3, 'new')
>>> new_tuple
('this', 'is', 'the', 'new', 'tuple')
</code></pre>
<p>A possible implementation could look like this but I'm looking for a <em>built in</em> solution:</p>
<pre><code>def replace_at(source, index, value):
if isinstance(source, tuple):
return source[:index] + (value,) + source[index + 1:]
elif isinstance(source, list):
return source[:index] + [value,] + source[index + 1:]
else:
explode()
</code></pre>
<p>it's not much work to implement such functionality, but as <code>Enum</code> has demonstrated, it's sometimes better to have an implementation everyone uses.</p>
<p><strong>Edit</strong>: my goal is <em>not</em> to replace the source tuple. I know I could use lists but even in this case I would make a copy first. So I'm really just looking for a way to create a modified copy.</p>
| 1 |
2016-09-21T19:54:43Z
| 39,626,638 |
<p>If you're thinking of swapping values on the fly, then a <code>list</code> is the more appropriate data structure; as we already know tuples are <em>immutable</em>.</p>
<p>On another note, if you're looking for a <em>swap-value</em> logic in a <code>tuple</code>, you can have a look at <a href="https://docs.python.org/3.4/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields" rel="nofollow"><code>collections.namedtuple</code></a> which has a <code>_replace</code> method.</p>
<blockquote>
<p>They can be used wherever regular tuples are used</p>
</blockquote>
<pre><code>>>> source_tuple = ('this', 'is', 'the', 'old', 'tuple')
>>> Factory = namedtuple('Factory', range(5), rename=True)
>>> source_tuple = Factory(*source_tuple)
>>> source_tuple
Factory(_0='this', _1='is', _2='the', _3='old', _4='tuple')
>>> new_tuple = source_tuple._replace(_3='new')
>>> new_tuple
Factory(_0='this', _1='is', _2='the', _3='new', _4='tuple')
</code></pre>
<p>Well that doesn't look too elegant. I still suggest you use a <code>list</code> instead.</p>
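<p>With real field names (instead of <code>rename=True</code>) the <code>_replace</code> approach reads much better; here is a sketch with a hypothetical <code>Sentence</code> type:</p>

```python
from collections import namedtuple

# Hypothetical named fields for the five words of the example tuple.
Sentence = namedtuple('Sentence', ['a', 'b', 'c', 'adj', 'noun'])

source = Sentence('this', 'is', 'the', 'old', 'tuple')
new = source._replace(adj='new')  # returns a modified copy

print(tuple(new))  # ('this', 'is', 'the', 'new', 'tuple')
print(source.adj)  # 'old' -- the original is untouched
```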
| 1 |
2016-09-21T21:08:57Z
|
[
"python",
"tuples",
"immutability"
] |
How do I retain source lines in tracebacks when running dynamically-compiled code objects?
| 39,625,465 |
<p>Say I use <a href="https://docs.python.org/2/library/functions.html#compile" rel="nofollow">compile</a> to create a <code>code</code> object from a string and a name:</p>
<pre><code>>>> a = compile('raise ValueError\n', '<at runtime>', 'exec')
</code></pre>
<p>I would like the lines within that string to appear within the traceback (note - the following is run in IDLE):</p>
<pre><code>>>> exec(a)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
exec(c)
File "<at runtime>", line 1, in <module>
raise ValueError <-- This line is what I want
ValueError
</code></pre>
<p>Alas, they do not:</p>
<pre><code>>>> exec(a)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
exec(c)
File "<at runtime>", line 1, in <module>
ValueError
</code></pre>
<p>Without creating a temporary file, how do I make that <code>raise ValueError</code> line appear in the traceback?</p>
| 1 |
2016-09-21T19:55:01Z
| 39,625,821 |
<p>Using the undocumented <code>cache</code> member of the builtin <a href="https://docs.python.org/2/library/linecache.html" rel="nofollow"><code>linecache</code></a>, this seems to work:</p>
<pre><code>def better_compile(src, name, mode):
# there is an example of this being set at
# https://hg.python.org/cpython/file/2.7/Lib/linecache.py#l104
from linecache import cache
cache[name] = (
len(src), None,
[line+'\n' for line in src.splitlines()], name
)
return compile(src, name, mode)
</code></pre>
<pre><code>>>> c = better_compile('raise ValueError\n', '<a name>', 'exec')
>>> exec(c)
Traceback (most recent call last):
File "<pyshell#50>", line 1, in <module>
exec(c)
File "<a name>", line 1, in <module>
raise ValueError
ValueError
</code></pre>
<hr>
<p>It turns out this is <a href="https://github.com/python/cpython/blob/4b2d53def9de6034956efe64afff80df360ff1ac/Lib/idlelib/pyshell.py#L681" rel="nofollow">pretty much how IDLE does it</a></p>
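<p>A self-contained way (outside IDLE) to check that the cached source really shows up in a formatted traceback; this just re-uses the <code>better_compile</code> helper above:</p>

```python
import linecache
import traceback

def better_compile(src, name, mode):
    # register the source lines with linecache under the pseudo-filename
    linecache.cache[name] = (
        len(src), None,
        [line + '\n' for line in src.splitlines()], name
    )
    return compile(src, name, mode)

code = better_compile('raise ValueError\n', '<a name>', 'exec')
try:
    exec(code)
except ValueError:
    tb_text = traceback.format_exc()

# the traceback now contains both the pseudo-filename and the source line
print('raise ValueError' in tb_text)
```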
| 2 |
2016-09-21T20:15:50Z
|
[
"python"
] |
How do I retain source lines in tracebacks when running dynamically-compiled code objects?
| 39,625,465 |
<p>Say I use <a href="https://docs.python.org/2/library/functions.html#compile" rel="nofollow">compile</a> to create a <code>code</code> object from a string and a name:</p>
<pre><code>>>> a = compile('raise ValueError\n', '<at runtime>', 'exec')
</code></pre>
<p>I would like the lines within that string to appear within the traceback (note - the following is run in IDLE):</p>
<pre><code>>>> exec(a)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
exec(c)
File "<at runtime>", line 1, in <module>
raise ValueError <-- This line is what I want
ValueError
</code></pre>
<p>Alas, they do not:</p>
<pre><code>>>> exec(a)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
exec(c)
File "<at runtime>", line 1, in <module>
ValueError
</code></pre>
<p>Without creating a temporary file, how do I make that <code>raise ValueError</code> line appear in the traceback?</p>
| 1 |
2016-09-21T19:55:01Z
| 39,626,362 |
<p>Well, you could write your own exception handler which fills the data:</p>
<pre><code>code = """
def f1():
f2()
def f2():
1 / 0
f1()
"""
a = compile(code, '<at runtime>', 'exec')
import sys
import traceback
try:
exec(a)
except:
etype, exc, tb = sys.exc_info()
exttb = traceback.extract_tb(tb)
## Fill the missing data:
exttb2 = [(fn, lnnr, funcname,
(code.splitlines()[lnnr-1] if fn=='<at runtime>'
else line))
for fn, lnnr, funcname, line in exttb]
# Print:
sys.stderr.write('Traceback (most recent call last):\n')
for line in traceback.format_list(exttb2):
sys.stderr.write(line)
for line in traceback.format_exception_only(etype, exc):
sys.stderr.write(line)
</code></pre>
<p>Result:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "<at runtime>", line 8, in <module>
f1()
File "<at runtime>", line 3, in f1
f2()
File "<at runtime>", line 6, in f2
1 / 0
ZeroDivisionError: integer division or modulo by zero
</code></pre>
<p>Now just wrap that compile & exec into a <code>smart_exec</code> function to call every time...</p>
| 0 |
2016-09-21T20:50:30Z
|
[
"python"
] |
Django Form Failure
| 39,625,487 |
<p>I have the following form:</p>
<pre><code># coding=utf-8
class SelectTwoTeams(BootstrapForm):
def __init__(self, *args, **kwargs):
user = kwargs.pop('user', None)
self.currentSelectedTeam1 = kwargs.pop('currentSelectedTeam1', None)
self.currentSelectedTeam2 = kwargs.pop('currentSelectedTeam2', None)
self.currentfixturematchday = kwargs.pop('currentfixturematchday', None)
self.currentCampaignNo = kwargs.pop('currentCampaignNo', None)
super(SelectTwoTeams, self).__init__(*args, **kwargs)
cantSelectTeams = UserSelection.objects.select_related().filter(~Q(fixtureid__fixturematchday=self.currentfixturematchday),campaignno=self.currentCampaignNo)
if not cantSelectTeams:
queryset = StraightredTeam.objects.filter(currentteam = 1)
else:
queryset = StraightredTeam.objects.filter(currentteam = 1).exclude(teamid__in=cantSelectTeams.values_list('teamselectionid', flat=True))
self.fields['team1'].queryset = queryset
self.fields['team2'].queryset = queryset
self.fields['team1'].initial = self.currentSelectedTeam1
self.fields['team2'].initial = self.currentSelectedTeam2
team1 = forms.ModelChoiceField(queryset=StraightredTeam.objects.none(), empty_label=None,
widget=forms.Select(attrs={"class":"select-format",'onchange': 'this.form.submit();'}))
team2 = forms.ModelChoiceField(queryset=StraightredTeam.objects.none(), empty_label=None,
widget=forms.Select(attrs={"class":"select-format",'onchange': 'this.form.submit();'}))
def clean(self):
cleaned_data = self.cleaned_data # individual field's clean methods have already been called
team1 = cleaned_data.get("team1")
team2 = cleaned_data.get("team2")
if team1 == team2:
raise forms.ValidationError("You picked the same team!")
return cleaned_data
</code></pre>
<p>If I use the following in my HTML file and choose the same two teams it correctly says "You Picked the same team!":</p>
<pre><code> <form action="" method="post">
{% csrf_token %}
{{ form }}
</form>
</code></pre>
<p>However, if I use the following:</p>
<pre><code> <form action="" method="post">
{% csrf_token %}
{{ form.team1 }}{{ form.team2 }}
</form>
</code></pre>
<p>I get no feedback. Nothing occurs when I choose the same two teams. Any idea as to why separating the fields stops it from working?</p>
<p>Many thanks, Alan.</p>
| 0 |
2016-09-21T19:56:24Z
| 39,625,921 |
<p>The difference between those is not "separating the fields". It is that you have switched from the full form representation - including form labels, layout, and most importantly errors - to only displaying the two input fields themselves. </p>
<p>That's fine to do of course, as for most purposes you will want the extra layout control that it gives you, but you do need to remember to put in all the other things that the basic <code>{{ form }}</code> version does.</p>
<pre><code>{{ form.non_field_errors }}
{{ form.team1.label_tag }}{{ form.team1 }}{{ form.team1.errors }}
{{ form.team2.label_tag }}{{ form.team2 }}{{ form.team2.errors }}
</code></pre>
| 1 |
2016-09-21T20:22:17Z
|
[
"python",
"django"
] |
How do you optimize this code for nn prediction?
| 39,625,552 |
<p>How do you optimize this code?
At the moment it is running too slow for the amount of data that goes through this loop. This code runs 1-nearest neighbor. It will predict the label of the training_element based on the p_data_set</p>
<pre><code># [x] , [[x1],[x2],[x3]], [l1, l2, l3]
def prediction(training_element, p_data_set, p_label_set):
temp = np.array([], dtype=float)
for p in p_data_set:
temp = np.append(temp, distance.euclidean(training_element, p))
minIndex = np.argmin(temp)
return p_label_set[minIndex]
</code></pre>
| 1 |
2016-09-21T20:00:38Z
| 39,625,784 |
<p>You could use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html#scipy.spatial.distance.cdist" rel="nofollow"><code>distance.cdist</code></a> to directly get the distances <code>temp</code> and then use <code>.argmin()</code> to get min-index, like so -</p>
<pre><code>minIndex = distance.cdist(training_element[None],p_data_set).argmin()
</code></pre>
<p>Here's an alternative approach using <a href="http://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> -</p>
<pre><code>subs = p_data_set - training_element
minIndex = np.einsum('ij,ij->i',subs,subs).argmin()
</code></pre>
<p><strong>Runtime test</strong></p>
<p>Well I was thinking <code>cKDTree</code> would easily beat <code>cdist</code>, but I guess <code>training_element</code> being a <code>1D</code> array isn't too heavy for <code>cdist</code>, and I am seeing it beat out <code>cKDTree</code> instead by a good <strong><code>10x+</code></strong> margin! </p>
<p>Here's the timing results -</p>
<pre><code>In [422]: # Setup arrays
...: p_data_set = np.random.randint(0,9,(40000,100))
...: training_element = np.random.randint(0,9,(100,))
...:
In [423]: def tree_based(p_data_set,training_element): #@ali_m's soln
...: tree = cKDTree(p_data_set)
...: dist, idx = tree.query(training_element, k=1)
...: return idx
...:
...: def einsum_based(p_data_set,training_element):
...: subs = p_data_set - training_element
...: return np.einsum('ij,ij->i',subs,subs).argmin()
...:
In [424]: %timeit tree_based(p_data_set,training_element)
1 loops, best of 3: 210 ms per loop
In [425]: %timeit einsum_based(p_data_set,training_element)
100 loops, best of 3: 17.3 ms per loop
In [426]: %timeit distance.cdist(training_element[None],p_data_set).argmin()
100 loops, best of 3: 14.8 ms per loop
</code></pre>
| 1 |
2016-09-21T20:13:42Z
|
[
"python",
"performance",
"numpy",
"scipy",
"nearest-neighbor"
] |
How do you optimize this code for nn prediction?
| 39,625,552 |
<p>How do you optimize this code?
At the moment it is running too slow for the amount of data that goes through this loop. This code runs 1-nearest neighbor. It will predict the label of the training_element based on the p_data_set</p>
<pre><code># [x] , [[x1],[x2],[x3]], [l1, l2, l3]
def prediction(training_element, p_data_set, p_label_set):
temp = np.array([], dtype=float)
for p in p_data_set:
temp = np.append(temp, distance.euclidean(training_element, p))
minIndex = np.argmin(temp)
return p_label_set[minIndex]
</code></pre>
| 1 |
2016-09-21T20:00:38Z
| 39,625,848 |
<p>Use a <a href="https://en.wikipedia.org/wiki/K-d_tree" rel="nofollow"><em>k</em>-D tree</a> for fast nearest-neighbour lookups, e.g. <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html" rel="nofollow"><code>scipy.spatial.cKDTree</code></a>:</p>
<pre><code>from scipy.spatial import cKDTree
# I assume that p_data_set is (nsamples, ndims)
tree = cKDTree(p_data_set)
# training_elements is also assumed to be (nsamples, ndims)
dist, idx = tree.query(training_elements, k=1)
predicted_labels = p_label_set[idx]
</code></pre>
| 2 |
2016-09-21T20:17:35Z
|
[
"python",
"performance",
"numpy",
"scipy",
"nearest-neighbor"
] |
How do you optimize this code for nn prediction?
| 39,625,552 |
<p>How do you optimize this code?
At the moment it is running too slow for the amount of data that goes through this loop. This code runs 1-nearest neighbor. It will predict the label of the training_element based on the p_data_set</p>
<pre><code># [x] , [[x1],[x2],[x3]], [l1, l2, l3]
def prediction(training_element, p_data_set, p_label_set):
temp = np.array([], dtype=float)
for p in p_data_set:
temp = np.append(temp, distance.euclidean(training_element, p))
minIndex = np.argmin(temp)
return p_label_set[minIndex]
</code></pre>
| 1 |
2016-09-21T20:00:38Z
| 39,626,372 |
<p>Python can be a quite fast programming language if used properly.
This is my suggestion (faster_prediction):</p>
<pre><code>import numpy as np
import time
def euclidean(a,b):
return np.linalg.norm(a-b)
def prediction(training_element, p_data_set, p_label_set):
temp = np.array([], dtype=float)
for p in p_data_set:
temp = np.append(temp, euclidean(training_element, p))
minIndex = np.argmin(temp)
return p_label_set[minIndex]
def faster_prediction(training_element, p_data_set, p_label_set):
temp = np.tile(training_element, (p_data_set.shape[0],1))
temp = np.sqrt(np.sum( (temp - p_data_set)**2 , 1))
minIndex = np.argmin(temp)
return p_label_set[minIndex]
training_element = [1,2,3]
p_data_set = np.random.rand(100000, 3)*10
p_label_set = np.r_[0:p_data_set.shape[0]]
t1 = time.time()
result_1 = prediction(training_element, p_data_set, p_label_set)
t2 = time.time()
t3 = time.time()
result_2 = faster_prediction(training_element, p_data_set, p_label_set)
t4 = time.time()
print "Execution time 1:", t2-t1, "value: ", result_1
print "Execution time 2:", t4-t3, "value: ", result_2
print "Speed up: ", (t4-t3) / (t2-t1)
</code></pre>
<p>I get the following result on a pretty old laptop:</p>
<pre><code>Execution time 1: 21.6033108234 value: 9819
Execution time 2: 0.0176379680634 value: 9819
Speed up: 1224.81857013
</code></pre>
<p>which makes me think I must have made some stupid mistake :)</p>
<p>In case of very huge data, where memory might be an issue, I suggest using Cython or implementing function in C++ and wrapping it in python.</p>
| 0 |
2016-09-21T20:51:26Z
|
[
"python",
"performance",
"numpy",
"scipy",
"nearest-neighbor"
] |
When do system variables update in IPython kernel?
| 39,625,721 |
<p>I started a notebook by doing <code>jupyter notebook</code>, and then creating a new notebook. </p>
<p>Then, I went to the terminal, and I set the PATH:</p>
<pre><code>export PATH=$PATH:<absolute path>
</code></pre>
<p>But, then when I go back to the IPython notebook, I try to print this new system variable: </p>
<pre><code>import os
print(os.environ["PATH"].split(os.pathsep))
</code></pre>
<p>But, I don't see my <code><absolute path></code> that I just added??</p>
<p>I even tried restarting the kernel, but this doesn't help at all. What can I do? Thank you.</p>
<p><strong>EDIT</strong>:</p>
<p>I tried to refresh my environmental variables via the terminal by doing: <code>bash --login</code>, but this did not help at all. </p>
<p>Also, another thing that is peculiar about this is that when I add it to the system path manually in ipython: </p>
<pre><code>os.environ['PATH'] = os.environ['PATH'] + os.pathsep + <absolute path>
</code></pre>
<p>...it works fine in the notebook and kernel where I added it. But, when I spin up another IPython kernel, it isn't on the PATH anymore.</p>
<p>I'm on <code>osx</code>.</p>
| 0 |
2016-09-21T20:10:17Z
| 39,625,875 |
<p>When you do</p>
<pre><code>export PATH=$PATH:<absolute path>
</code></pre>
<p>in a terminal, it is only effective in this terminal session. That is to say, this <code>export</code> command has no effect on other terminal sessions.</p>
<p>If you want your PATH environment to be effective all the way, you need to edit your .bashrc file, and</p>
<pre><code>source ~/.bashrc
</code></pre>
<p>to activate it. </p>
| 1 |
2016-09-21T20:19:46Z
|
[
"python",
"osx",
"environment-variables",
"ipython",
"jupyter-notebook"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,625,900 |
<p>Not sure why you need the <code>for</code> in the <code>sum</code>. It appears that <code>x</code> isn't used anywhere. This could be simplified to:</p>
<pre><code>def sum_array(arr):
return 0 if not arr else sum(sorted(arr)[1:-1])
</code></pre>
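<p>A quick sanity check of this simplified version, using the requirement cases from the question:</p>

```python
def sum_array(arr):
    # 0 for None / empty / single-element inputs; otherwise the sum of
    # everything except one smallest and one largest value.
    return 0 if not arr else sum(sorted(arr)[1:-1])

print(sum_array([6, 2, 1, 8, 10]))  # 16  (drops 1 and 10)
print(sum_array(None))              # 0
print(sum_array([5]))               # 0
print(sum_array([]))                # 0
```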
| 1 |
2016-09-21T20:21:20Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,625,920 |
<p>The easiest way to know where both the max and min are is to sort the list.</p>
<pre><code>arr = sorted(arr)
</code></pre>
<p>arr[0] is the smallest, arr[-1] is the largest</p>
<p>So</p>
<pre><code>if arr is None:
return 0
else:
return sum(sorted(arr)[1:-1])
</code></pre>
| 0 |
2016-09-21T20:22:17Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,625,927 |
<p>Your code is almost correct, except that the trailing generator-expression logic isn't needed.</p>
<pre><code>def sum_array(arr):
return sum(sorted(arr)[1:-1]) if arr else 0
</code></pre>
| 0 |
2016-09-21T20:22:34Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,625,936 |
<blockquote>
<p>Beautiful is better than ugly.</p>
</blockquote>
<pre><code>def sum_array(arr):
if arr is None or len(arr) <= 1:
return 0
else:
return sum(sorted(arr)[1:-1])
</code></pre>
| 2 |
2016-09-21T20:23:01Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,626,090 |
<p>Here's a version which meets your requirements:</p>
<pre><code>def sum_array(arr):
    return 0 if not arr else sum(sorted(arr)[1:-1])
cases = [[j + 1 for j in range(i)] for i in range(5)]
for c in cases:
    print(c, sum_array(c))
# Requirements: Objective is to calculate sum of integers minus the min and
# max. If array is empty, None, or if only 1 element exists, the function
# should return 0.
# [] 0
# [1] 0
# [1, 2] 0
# [1, 2, 3] 2
# [1, 2, 3, 4] 5
</code></pre>
| 0 |
2016-09-21T20:33:28Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,626,538 |
<p>You get <code>TypeError</code> because you try to <code>sum</code> lists. Let me change your code a little bit:</p>
<pre><code> In[1]: arr = [1, 2, 3, 4, 5]
[sorted(arr)[1:-1] for x in range(len(arr))]
Out[1]: [[2, 3, 4], [2, 3, 4], [2, 3, 4], [2, 3, 4], [2, 3, 4]]
</code></pre>
<p>I changed your generator expression to a list comprehension to show the idea. Really you generate a list on each implicit <code>next</code> call. A further consequence of this error is:</p>
<pre><code> In[2]: sum([[1],[2],[3]]) # will raise TypeError
Out[2]: TypeError: unsupported operand type(s) for +: 'int' and 'list'
In[3]: sum([[1],[2],[3]], []) # if you specify start, it will work
Out[3]: [1, 2, 3]
</code></pre>
<p>The signature of the <code>sum</code> function is <code>sum(iterable[, start])</code>. It sums <em>start</em> and the <em>items of an iterable</em> from left to right and returns the total. <code>start</code> defaults to 0. The iterable's items are normally numbers, and the start value is not allowed to be a string. In the first case you really try to sum <code>[1] + [2] + [3] + 0</code>, and because <code>+</code> between a <code>list</code> and an <code>int</code> is not defined, you get the TypeError. In the second case, <code>[1] + [2] + [3] + []</code> concatenates lists and is perfectly valid.</p>
<p>And if you want review of your code and some feedback look at Stackexchange's <a href="http://codereview.stackexchange.com/">CodeReview</a> site.</p>
<p>You can try this, if you like <em>one-liners</em>. This also assumes that a valid array is always an instance of <code><class list></code>:</p>
<pre><code> In[4]: sum_array = lambda arr: sum(sorted(arr)[1:-1]) if isinstance(arr, list) else 0
sum_array([4, 3, 8, 1, 7, 12, 5, 9])
Out[4]: 36
</code></pre>
<p><strong>But it's not good practice!</strong></p>
| 0 |
2016-09-21T21:02:08Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Sum of list without lowest and highest list integers (python)
| 39,625,834 |
<p>Trying to practice my list comprehensions, but at this point my one-line comprehension is looking a little (too) long: </p>
<pre><code>def sum_array(arr):
return 0 if arr == None else sum(sorted(arr)[1:-1] for x in range(len(arr or [])-2))
</code></pre>
<p>Objective is to calculate sum of integers minus the min and max. If array is empty, <code>None</code>, or if only 1 element exists, the function should return 0.</p>
<p>I am receiving the following</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'list'</p>
</blockquote>
<p>Please advise! </p>
| 0 |
2016-09-21T20:16:32Z
| 39,627,125 |
<p>Just an alternative way using <code>lambda</code> and <code>reduce</code>:</p>
<pre><code>s = 0 if arr == None else reduce(lambda x, y: x+y, sorted(arr)[1:-1])
</code></pre>
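<p>A runnable sketch of the same idea. Note that on Python 3 <code>reduce</code> lives in <code>functools</code>, and that <code>reduce</code> raises a <code>TypeError</code> on an empty sequence (which happens here for lists of length 2 or less), hence the extra guard:</p>

```python
from functools import reduce  # built-in on Python 2, functools on Python 3

arr = [4, 1, 9, 7]
middle = sorted(arr)[1:-1]  # [4, 7] -- min and max dropped
s = 0 if not middle else reduce(lambda x, y: x + y, middle)
print(s)  # 11
```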
| 0 |
2016-09-21T21:50:46Z
|
[
"python",
"if-statement",
"list-comprehension"
] |
Crash when sending email in Python 2.7.10
| 39,625,905 |
<p>I'm trying to send an email within Python, but the program is crashing when I run it either as a function in a larger program or on its own in the interpreter.</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
fromaddr = "exampleg@gmail.com"
toaddr = "recipient@address.com"
msg = MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = toaddr
msg['Subject'] = "Hi there"
body = "example"
msg.attach(MIMEText(body, 'plain'))
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(fromaddr, "Password")
text = msg.as_string()
server.sendmail(fromaddr, toaddr, text)
server.quit()
</code></pre>
<p>In the interpreter, it seems to fail with <code>server = smtplib.SMTP('smtp.gmail.com', 587)</code></p>
<p>Any ideas?</p>
| -1 |
2016-09-21T20:21:36Z
| 39,654,563 |
<p>My standard suggestion (as I'm the developer of it) is <a href="https://github.com/kootenpv/yagmail" rel="nofollow">yagmail</a>.</p>
<p>Install: <code>pip install yagmail</code></p>
<p>Then:</p>
<pre><code>import yagmail
yag = yagmail.SMTP(fromaddr, "pasword")
yag.send(toaddr, "Hi there", "example")
</code></pre>
<p>A lot of things can be made easier using the package, such as HTML email, adding attachments and avoiding having to write passwords in your script.</p>
<p>For the instructions to all of this (and more, sorry for the cliches), have a look at the <a href="https://github.com/kootenpv/yagmail" rel="nofollow">readme on github</a>.</p>
| 0 |
2016-09-23T07:00:39Z
|
[
"python",
"email",
"smtp"
] |
Crash when sending email in Python 2.7.10
| 39,625,905 |
<p>I'm trying to send an email within Python, but the program is crashing when I run it either as a function in a larger program or on its own in the interpreter.</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
fromaddr = "exampleg@gmail.com"
toaddr = "recipient@address.com"
msg = MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = toaddr
msg['Subject'] = "Hi there"
body = "example"
msg.attach(MIMEText(body, 'plain'))
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(fromaddr, "Password")
text = msg.as_string()
server.sendmail(fromaddr, toaddr, text)
server.quit()
</code></pre>
<p>In the interpreter, it seems to fail with <code>server = smtplib.SMTP('smtp.gmail.com', 587)</code></p>
<p>Any ideas?</p>
| -1 |
2016-09-21T20:21:36Z
| 39,654,848 |
<p>That's because you are getting an error when trying to connect to the Google SMTP server.
Note that if you are using Google SMTP you should use:</p>
<p>Username: Your gmail address<br>
Password: Your gmail password</p>
<p>And you should be already logged in. If you still get an error, you should check what is the problem in this list: <a href="http://stackoverflow.com/a/25238515/1600523">http://stackoverflow.com/a/25238515/1600523</a></p>
<p>Note: You can also use your own SMTP server.</p>
| 0 |
2016-09-23T07:16:22Z
|
[
"python",
"email",
"smtp"
] |
Parsing CSV using Python
| 39,625,962 |
<p>I have the following csv file that has three fields: Vulnerability Title,
Vulnerability Severity Level, and Asset IP Address,
which give the vulnerability name, the severity level, and the IP address that has that vulnerability.
I am trying to print a report that lists
the vulnerability in one column,
the severity next to it,
and in the last column the list of IP addresses having that vulnerability.</p>
<pre><code>Vulnerability Title Vulnerability Severity Level Asset IP Address
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.65.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.65.164
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.30.81
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.30.81
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.50.82
TLS/SSL Server Supports Weak Cipher Algorithms 6 10.103.65.164
Weak Cryptographic Key 3 10.103.64.10
Unencrypted Telnet Service Available 4 10.10.30.81
Unencrypted Telnet Service Available 4 10.10.50.82
TLS/SSL Server Supports Anonymous Cipher Suites with no Key Authentication 6 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.64.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.100
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.64.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.10.30.81
</code></pre>
<p>and I would like to recreate a csv file that uses Vulnerability Title tab as the key and creates a second tab called Vulnerability Severity Level and last tab would contain all the ip addresses that has the vulnerabilities</p>
<pre><code>import csv
from pprint import pprint
from collections import defaultdict
import glob
x= glob.glob("/root/*.csv")
d = defaultdict()
n = defaultdict()
for items in x:
with open(items, 'rb') as f:
reader = csv.DictReader(f, delimiter=',')
for row in reader:
a = row["Vulnerability Title"]
b = row["Vulnerability Severity Level"], row["Asset IP Address"]
c = row["Asset IP Address"]
# d = row["Vulnerability Proof"]
d.setdefault(a, []).append(b)
f.close()
pprint(d)
with open('results/ipaddress.csv', 'wb') as csv_file:
writer = csv.writer(csv_file)
for key, value in d.items():
for x,y in value:
n.setdefault(y, []).append(x)
# print x
writer.writerow([key,n])
with open('results/ipaddress2.csv', 'wb') as csv2_file:
writer = csv.writer(csv2_file)
for key, value in d.items():
n.setdefault(value, []).append(key)
writer.writerow([key,n])
</code></pre>
<p>Since I can't explain it very well, let me try to simplify.</p>
<p>lets say I have the following csv</p>
<pre><code>Car model owner
Honda Blue James
Toyota Blue Tom
Chevy Green James
Chevy Green Tom
</code></pre>
<p>I am trying to create this csv as the following:</p>
<pre><code>Car model owner
Honda Blue James
Toyota Blue Tom
Chevy Green James,Tom
</code></pre>
<p>Both of the solutions are correct;
here is my final script as well:</p>
<pre><code>import csv
import pandas as pd
df = pd.read_csv('test.csv', names=['Vulnerability Title', 'Vulnerability Severity Level','Asset IP Address'])
#print df
grouped = df.groupby(['Vulnerability Title','Vulnerability Severity Level'])
groups = grouped.groups
#print groups
new_data = [k + (v['Asset IP Address'].tolist(),) for k, v in grouped]
new_df = pd.DataFrame(new_data, columns=['Vulnerability Title' ,'Vulnerability Severity Level', 'Asset IP Address'])
print new_df
new_df.to_csv('final.csv')
</code></pre>
<p>thank you</p>
| 1 |
2016-09-21T20:24:37Z
| 39,626,554 |
<p>An answer based on your car example. Essentially, I am creating a dictionary that has the car brand as the key and a two-element tuple as the value: the first element of the tuple is the color and the second a list of owners:</p>
<pre><code>import csv
car_dict = {}
with open('<file_to_read>', 'rb') as fi:  # 'rb': the Python 2 csv module expects binary mode
    reader = csv.reader(fi)
    for f in reader:
        if f[0] in car_dict:
            car_dict[f[0]][1].append(f[2])    # known brand: append this owner
        else:
            car_dict[f[0]] = (f[1], [f[2]])   # new brand: store color and first owner
with open('<file_to_write>', 'wb') as ou:
    for k in car_dict:
        out_string = '{}\t{}\t{}\n'.format(k, car_dict[k][0], ','.join(car_dict[k][1]))
        ou.write(out_string)
</code></pre>
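<p>As a quick check of the grouping logic, here is the same idea run on in-memory rows instead of files; the row values are the ones from the car example in the question:</p>

```python
# Group rows by car brand: brand -> (color, [owners]).
rows = [
    ["Honda", "Blue", "James"],
    ["Toyota", "Blue", "Tom"],
    ["Chevy", "Green", "James"],
    ["Chevy", "Green", "Tom"],
]

car_dict = {}
for brand, color, owner in rows:
    if brand in car_dict:
        car_dict[brand][1].append(owner)    # known brand: just add the owner
    else:
        car_dict[brand] = (color, [owner])  # new brand: record color + first owner

for brand, (color, owners) in car_dict.items():
    print("{}\t{}\t{}".format(brand, color, ",".join(owners)))
```

<p>This produces a single merged <code>Chevy Green James,Tom</code> row, matching the desired output.</p>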
| 1 |
2016-09-21T21:03:22Z
|
[
"python",
"csv",
"dictionary",
"setdefault"
] |
Parsing CSV using Python
| 39,625,962 |
<p>I have the following csv file that has three fields: Vulnerability Title,
Vulnerability Severity Level, and Asset IP Address,
which give the vulnerability name, the severity level, and the IP address that has that vulnerability.
I am trying to print a report that lists
the vulnerability in one column,
the severity next to it,
and in the last column the list of IP addresses having that vulnerability.</p>
<pre><code>Vulnerability Title Vulnerability Severity Level Asset IP Address
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.65.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.65.164
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.103.64.10
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.30.81
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.30.81
TLS/SSL Server Supports RC4 Cipher Algorithms (CVE-2013-2566) 4 10.10.50.82
TLS/SSL Server Supports Weak Cipher Algorithms 6 10.103.65.164
Weak Cryptographic Key 3 10.103.64.10
Unencrypted Telnet Service Available 4 10.10.30.81
Unencrypted Telnet Service Available 4 10.10.50.82
TLS/SSL Server Supports Anonymous Cipher Suites with no Key Authentication 6 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.64.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.100
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.65.164
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.103.64.10
TLS/SSL Server Supports The Use of Static Key Ciphers 3 10.10.30.81
</code></pre>
<p>and I would like to recreate a csv file that uses Vulnerability Title tab as the key and creates a second tab called Vulnerability Severity Level and last tab would contain all the ip addresses that has the vulnerabilities</p>
<pre><code>import csv
from pprint import pprint
from collections import defaultdict
import glob
x= glob.glob("/root/*.csv")
d = defaultdict()
n = defaultdict()
for items in x:
with open(items, 'rb') as f:
reader = csv.DictReader(f, delimiter=',')
for row in reader:
a = row["Vulnerability Title"]
b = row["Vulnerability Severity Level"], row["Asset IP Address"]
c = row["Asset IP Address"]
# d = row["Vulnerability Proof"]
d.setdefault(a, []).append(b)
f.close()
pprint(d)
with open('results/ipaddress.csv', 'wb') as csv_file:
writer = csv.writer(csv_file)
for key, value in d.items():
for x,y in value:
n.setdefault(y, []).append(x)
# print x
writer.writerow([key,n])
with open('results/ipaddress2.csv', 'wb') as csv2_file:
writer = csv.writer(csv2_file)
for key, value in d.items():
n.setdefault(value, []).append(key)
writer.writerow([key,n])
</code></pre>
<p>Since I can't explain it very well, let me try to simplify.</p>
<p>lets say I have the following csv</p>
<pre><code>Car model owner
Honda Blue James
Toyota Blue Tom
Chevy Green James
Chevy Green Tom
</code></pre>
<p>I am trying to create this csv as the following:</p>
<pre><code>Car model owner
Honda Blue James
Toyota Blue Tom
Chevy Green James,Tom
</code></pre>
<p>Both of the solutions are correct;
here is my final script as well:</p>
<pre><code>import csv
import pandas as pd
df = pd.read_csv('test.csv', names=['Vulnerability Title', 'Vulnerability Severity Level','Asset IP Address'])
#print df
grouped = df.groupby(['Vulnerability Title','Vulnerability Severity Level'])
groups = grouped.groups
#print groups
new_data = [k + (v['Asset IP Address'].tolist(),) for k, v in grouped]
new_df = pd.DataFrame(new_data, columns=['Vulnerability Title' ,'Vulnerability Severity Level', 'Asset IP Address'])
print new_df
new_df.to_csv('final.csv')
</code></pre>
<p>thank you</p>
| 1 |
2016-09-21T20:24:37Z
| 39,626,676 |
<p>When manipulating structured data, especially large data sets, I would suggest using <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>.</p>
<p>For your problem, I will give an example using the pandas groupby feature as a solution. Suppose you have the data:</p>
<pre><code>data = [['vt1', 3, '10.0.0.1'], ['vt1', 3, '10.0.0.2'],
['vt2', 4, '10.0.10.10']]
</code></pre>
<p>Operating on the data with pandas is quite concise:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=data, columns=['title', 'level', 'ip'])
grouped = df.groupby(['title', 'level'])
</code></pre>
<p>Then</p>
<pre><code>groups = grouped.groups
</code></pre>
<p>will be a dict that is almost what you need.</p>
<pre><code>print(groups)
{('vt1', 3): [0, 1], ('vt2', 4): [2]}
</code></pre>
<p><code>[0, 1]</code> represents the row labels. You can iterate over these groups to apply any operation you want. For example, if you want to save them to a csv file:</p>
<pre><code>new_data = [k + (v['ip'].tolist(),) for k, v in grouped]
new_df = pd.DataFrame(new_data, columns=['title', 'level', 'ips'])
</code></pre>
<p>Let's see what is new_df now:</p>
<pre><code> title level ips
0 vt1 3 [10.0.0.1, 10.0.0.2]
1 vt2 4 [10.0.10.10]
</code></pre>
<p>That's what you need. And finally, save to file:</p>
<pre><code>new_df.to_csv(filename)
</code></pre>
<p>I strongly suggest learning pandas data manipulation. You may find it much easier and cleaner.</p>
| 1 |
2016-09-21T21:12:22Z
|
[
"python",
"csv",
"dictionary",
"setdefault"
] |
Getting position of user click in pygame
| 39,626,018 |
<p>I have a pygame window embedded in a frame in tkinter. In another frame I have a button which calls the following function when clicked:</p>
<pre><code>def setStart():
global start
# set start position
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP:
start = event.pos
print("start",start)
break
</code></pre>
<p>I am intending for the program to print the position of the place where the user clicked on the pygame surface after the button is clicked. However on the first click of the button and the following click of the pygame surface there is no output. It is on the second click of the button <em>before the corresponding second click on the pygame surface</em> that python prints out an output like : </p>
<p>('start', (166, 115))</p>
<p>How can I get it to give me a result right after the click on the pygame surface? I had the same problem when I had two separate tkinter and pygame windows, so the embedding of pygame into tkinter is unlikely to be the cause of the problem.</p>
<p>EDIT: after further testing it appears that if the button is pressed and then the pygame surface is clicked on multiple times, upon a second click of the button the coordinates of all of these clicks are printed out as a batch. </p>
| 0 |
2016-09-21T20:28:59Z
| 39,648,328 |
<p>First I have to say that I don't know anything about pygame. However, since you are already using Tkinter, maybe I can help: I would define a new function (let's call it <code>mouse_click</code>). In the button-click function I would bind the new function to the game's surface, and in the new function print out the current mouse position:</p>
<pre><code>def button_click(self):
game_surface.bind("<Button-1>", self.mouse_click)
def mouse_click(self, event):
print "X:", event.x
print "Y:", event.y
</code></pre>
<p>I hope this is helpful. Please note that you should modify this code to make it work in your program (using the correct widget names and so on).
By the way, "Button-1" is the event identifier for the left mouse button.</p>
| 0 |
2016-09-22T20:28:59Z
|
[
"python",
"python-2.7",
"tkinter",
"pygame",
"pygame-surface"
] |
Getting position of user click in pygame
| 39,626,018 |
<p>I have a pygame window embedded in a frame in tkinter. In another frame I have a button which calls the following function when clicked:</p>
<pre><code>def setStart():
global start
# set start position
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP:
start = event.pos
print("start",start)
break
</code></pre>
<p>I am intending for the program to print the position of the place where the user clicked on the pygame surface after the button is clicked. However on the first click of the button and the following click of the pygame surface there is no output. It is on the second click of the button <em>before the corresponding second click on the pygame surface</em> that python prints out an output like : </p>
<p>('start', (166, 115))</p>
<p>How can I get it to give me a result right after the click on the pygame surface? I had the same problem when I had two separate tkinter and pygame windows, so the embedding of pygame into tkinter is unlikely to be the cause of the problem.</p>
<p>EDIT: after further testing it appears that if the button is pressed and then the pygame surface is clicked on multiple times, upon a second click of the button the coordinates of all of these clicks are printed out as a batch. </p>
| 0 |
2016-09-21T20:28:59Z
| 39,660,950 |
<p>In the most basic form here's how you do it in pygame:</p>
<pre><code>import pygame
pygame.init()
screen = pygame.display.set_mode((100, 100))
clock = pygame.time.Clock()
while True:
clock.tick(60)
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP: # or MOUSEBUTTONDOWN depending on what you want.
print(event.pos)
elif event.type == pygame.QUIT:
quit()
pygame.display.update()
</code></pre>
<p>More information on how to handle pygame events can be found in the <a href="http://www.pygame.org/docs/ref/event.html" rel="nofollow">docs</a> or in the <a class='doc-link' href="http://stackoverflow.com/documentation/pygame/5110/event-handling/18046/event-loop#t=201609231214073595012">Stackoverflow documentation</a>.</p>
<p>And here's how you do it in tkinter:</p>
<pre><code>try:
import tkinter as tk # Python 3
except ImportError:
import Tkinter as tk # Python 2
root = tk.Tk()
def print_pos(event):
print(event.x, event.y)
root.bind("<Button-1>", print_pos)
root.mainloop()
</code></pre>
<p>More information on tkinter events can be found in <a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">effbots documentation</a>.</p>
<p>I would suggest <strong>not</strong> putting <code>break</code> in an event loop in pygame. Doing so leaves all other events unhandled, meaning the program may stop responding to certain events.</p>
<p>Your <em>"EDIT: [...]"</em> is unfortunately wrong, given the code you've given us. I had no problem with the code; it printed the position of where I released the mouse button and always printed just one position. So there have to be a logical error somewhere else in your code.</p>
| 0 |
2016-09-23T12:37:14Z
|
[
"python",
"python-2.7",
"tkinter",
"pygame",
"pygame-surface"
] |
Make a list of ranges in numpy
| 39,626,041 |
<p>I want to make a list of integer sequences with random start points. The way I would do this with a plain Python loop is</p>
<pre>
x = np.zeros((1000, 10)) # 1000 sequences of 10 elements each
starts = np.random.randint(1, 1000, 1000)
for i in range(len(x)):
x[i] = np.arange(starts[i], starts[i] + 10)
</pre>
<p>I wonder if there is a more elegant way of doing this using Numpy functionality.</p>
| 0 |
2016-09-21T20:30:17Z
| 39,626,089 |
<p>You can use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> after extending <code>starts</code> to a <code>2D</code> version and adding in the <code>1D</code> range array, like so -</p>
<pre><code>x = starts[:,None] + np.arange(10)
</code></pre>
<p><strong>Explanation</strong></p>
<p>Let's take a small example for <code>starts</code> to see what that <code>broadcasting</code> does in this case.</p>
<pre><code>In [382]: starts
Out[382]: array([3, 1, 3, 2])
In [383]: starts.shape
Out[383]: (4,)
In [384]: starts[:,None]
Out[384]:
array([[3],
[1],
[3],
[2]])
In [385]: starts[:,None].shape
Out[385]: (4, 1)
In [386]: np.arange(10).shape
Out[386]: (10,)
</code></pre>
<p>Thus, looking at the shapes and putting those together, a schematic diagram of the same would look something like this -</p>
<pre><code>starts : 4
np.arange(10) : 10
</code></pre>
<p>After extending <code>starts</code> :</p>
<pre><code>starts[:,None] : 4 x 1
np.arange(10) : 10
</code></pre>
<p>Thus, when we add <code>starts[:,None]</code> to <code>np.arange(10)</code>, the elements of <code>starts[:,None]</code> are broadcast along its second axis <code>10</code> times, corresponding to the length of the other array along that axis. <code>np.arange(10)</code> is treated as <code>2D</code> with a singleton first dim, and its elements are broadcast along it <code>4</code> times, corresponding to the length <code>4</code> of <code>starts[:,None]</code> along that axis. Please note that there are no explicit replications; under the hood the elements are broadcast and added on the fly.</p>
<p>Thus, functionally we would have the replications, like so -</p>
<pre><code>In [391]: np.repeat(starts[:,None],10,axis=1)
Out[391]:
array([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]])
In [392]: np.repeat(np.arange(10)[None],4,axis=0)
Out[392]:
array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
</code></pre>
<p>These broadcast elements are then added to give us the desired output <code>x</code>.</p>
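<p>As a cross-check, the broadcast sum is elementwise identical to this pure-Python construction (written without NumPy so the correspondence is explicit; the small <code>starts</code> values are the ones from the example above):</p>

```python
# Pure-Python equivalent of starts[:, None] + np.arange(10):
# row i is the consecutive run starts[i], starts[i]+1, ..., starts[i]+9.
starts = [3, 1, 3, 2]
x = [[s + j for j in range(10)] for s in starts]

print(x[0])  # [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```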
| 0 |
2016-09-21T20:33:24Z
|
[
"python",
"numpy"
] |
delete space in one line prints
| 39,626,058 |
<p>How do I remove the spaces in this print?</p>
<p>For example</p>
<pre><code>for i in range(5):
print i,
</code></pre>
<p>It will print: 0 1 2 3 4</p>
<p>But I would like to get: 01234</p>
<p>Someone can help ?</p>
| -4 |
2016-09-21T20:31:51Z
| 39,626,273 |
<p>In Python 3, you can use:</p>
<pre><code>for i in range(5):
print(i, end="")
</code></pre>
<p>In Python 2, however, you cannot achieve this with a plain <code>print</code> statement (short of <code>from __future__ import print_function</code>). There are two ways to do it:</p>
<pre><code># Way 1: Using "sys.stdout". But this will write to stdout
>>> import sys
>>> for i in range(5):
... sys.stdout.write(str(i))
...
01234>>>
# Way 2: Convert it to list of string and then join them
>>> print ''.join(map(str, range(5)))
01234
</code></pre>
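<p>For completeness: on Python 2.6+ the Python 3 form also works after a <code>__future__</code> import, which replaces the <code>print</code> statement with the function:</p>

```python
from __future__ import print_function  # required on Python 2.6/2.7, a no-op on Python 3

for i in range(5):
    print(i, end="")  # writes 01234 with no separators
print()               # trailing newline
```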
| 0 |
2016-09-21T20:44:06Z
|
[
"python",
"printing",
"spaces"
] |
pyFFTW doesn't find libfftw3l.so while import
| 39,626,070 |
<p>In my Raspbian system I have succesfully installed pyFFTW, but there is a problem while import package.</p>
<pre><code> import pyfftw
File "/usr/local/lib/python3.4/dist-packages/pyfftw/__init__.py", line 16, in <module>
from .pyfftw import (
ImportError: libfftw3l.so.3: cannot open shared object file: No such file or directory
</code></pre>
<p>Actually, I have FFTW installed from source.</p>
<hr>
<p>I've dig into __init__.py and there is an <strong>relative import</strong> line:</p>
<pre><code>from .pyfftw import (
FFTW
blah blah )
</code></pre>
<p>there is no module pyfftw in the . folder but I suppose this line indicates to ./<strong>pyfftw.cpython-34m.so</strong> file which probably wraps C code of FFTW.</p>
<p>How to tell to this pyfftw.cpython-34m.so file where it should look for correct path?</p>
| 0 |
2016-09-21T20:32:28Z
| 39,626,071 |
<p>The problem was with PYTHONPATH.</p>
<p>To check whether the file is somewhere on the disk:</p>
<pre><code>$ sudo find / -name 'libfftw3*.so.3'
/home/pi/bin/fftw-3.3.5/.libs/libfftw3.so.3
/usr/lib/arm-linux-gnueabihf/libfftw3.so.3
/usr/local/lib/libfftw3.so.3
</code></pre>
<p>And add these lines before <code>import pyfftw</code> (see <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow" title="here">here</a>):</p>
<pre><code>import sys
sys.path.append('/usr/local/lib/libfftw3.so.3')
</code></pre>
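<p>A caveat worth noting (my assumption, not part of the fix above): an <code>ImportError</code> for a <code>.so</code> file is raised by the dynamic linker, not by Python's module import machinery, and <code>sys.path</code> normally only affects Python modules. If the approach above does not work for you, the conventional fix is to point the loader at the library's directory before starting Python:</p>

```shell
# Make the dynamic linker search /usr/local/lib (where libfftw3l.so.3 was installed)
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```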
| 0 |
2016-09-21T20:32:28Z
|
[
"python",
"path",
"pyfftw"
] |
Python (pip) throwing [SSL: CERTIFICATE_VERIFY_FAILED] even if certificate chain updated
| 39,626,142 |
<p>This is a followup to a <a href="http://stackoverflow.com/q/39356413/827480">previous SO post</a>.</p>
<p>I am using Windows/cygwin and I need python to understand a custom CA certificate, as the network infrastructure re-signs all SSL requests with its own certificate.</p>
<p>If I try to run <code>pip search SimpleHTTPServer</code>, I get the following error message:</p>
<pre><code>...
File "c:\users\erbe\appdata\local\programs\python\python35-32\lib\ssl.py", line 633, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
</code></pre>
<p>I have tried to add the certificates to my list of trusted certificates by doing the following:</p>
<ol>
<li>Copy my .pem file to /etc/pki/ca-trust/source/anchors</li>
<li><code>update-ca-trust extract</code></li>
</ol>
<p>I have verified that this works as I can now point to the generated PEM file and run pip successfully: <code>pip --cert /usr/local/ssl/cert.pem search SimpleHTTPServer</code>:</p>
<pre><code>$ pip --cert tls-ca-bundle.pem search SimpleHTTPServer
ComplexHTTPServer (0.1) - A Multithreaded Python SimpleHTTPServer
SimpleTornadoServer (1.0) - better SimpleHTTPServer using tornado
rangehttpserver (1.2.0) - SimpleHTTPServer with support for Range requests
</code></pre>
<p>However, I want this to work without having to specify the certificate manually every time. I am hoping to update the certificate chain that python uses:</p>
<pre><code>$ python -c "import ssl; print(ssl.get_default_verify_paths())"
DefaultVerifyPaths(cafile=None, capath=None, openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/local/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/local/ssl/certs')
</code></pre>
<p>I have verified that through a series of symlinks, that /usr/local/ssl/cert.pem points to the same file. However, if I execute <code>pip</code>, I still get the <code>[SSL: CERTIFICATE_VERIFY_FAILED]</code> error message.</p>
<p>I uninstalled the Windows version of python, and reinstalled the Cygwin version of python. With it, I ran <code>easy_install-2.7 pip</code>. Now at least I am able to execute pip with the full certificate path without an error message:</p>
<pre><code>$ pip --cert /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem search simpleHttpServer
LittleHTTPServer (0.5.0) - Little bit extended SimpleHTTPServer
SimpleHTTP404Server (0.2.0) - A Python SimpleHTTPServer, but serves 404.html if a page is not found.
django-localsrv (0.1.2) - Django app for serving static content from different sources (files, strings, urls, etc.) at custom paths,
</code></pre>
<p>Just to be safe, I also tried updating the SSL_CERT_DIR variable to point to /etc/pki/ca-trust/extracted/pem and setting SSL_CERT_FILE to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem, but these do not work:</p>
<pre><code>$ set | grep SSL
SSL_CERT_DIR=/etc/pki/ca-trust/extracted/pem
SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
$ python -c "import ssl; print(ssl.get_default_verify_paths())"
DefaultVerifyPaths(cafile='/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem', capath='/etc/pki/ca-trust/extracted/pem', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/ssl/certs')
$ pip search simpleHttpServer
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
...
...
File "/usr/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg/pip/_vendor/requests/adapters.py", line 477, in send
raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
</code></pre>
<p>What am I doing wrong? Is this a cygwin vs Windows problem? Which PEM files do I need to update? </p>
| 0 |
2016-09-21T20:36:40Z
| 39,626,495 |
<p>You can add defaults for pip's command-line options to its configuration file. On Windows, it is located at %APPDATA%\pip\pip.ini.</p>
<p>To add a certificate, put the following lines in the file:</p>
<pre><code>[global]
cert = windows path to your certificate
</code></pre>
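<p>Alternatively, pip reads defaults from environment variables formed from <code>PIP_</code> plus the upper-cased option name — here <code>PIP_CERT</code> — so the same certificate can be configured as a shell-wide setting (shown with the bundle path from the question):</p>

```shell
# Environment-variable equivalent of "cert = ..." in pip.ini;
# pip applies it as if --cert were passed on every invocation.
export PIP_CERT=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
```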
| 1 |
2016-09-21T21:00:01Z
|
[
"python",
"windows",
"ssl",
"https",
"cygwin"
] |
Why i can't install autopy?
| 39,626,215 |
<p>I'm using a MacBook and the operating system is macOS Sierra.</p>
<p>I use this command to install autopy:</p>
<pre><code>sudo pip install autopy
</code></pre>
<p>But i get this error:</p>
<pre><code>Collecting autopy
Downloading autopy-0.51.tar.gz (74kB)
    100% |████████████████████████████████| 81kB 256kB/s
Installing collected packages: autopy
Running setup.py install for autopy ... error
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-jSobWR/autopy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-lxk785-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.macosx-10.12-intel-2.7
creating build/lib.macosx-10.12-intel-2.7/autopy
copying autopy/__init__.py -> build/lib.macosx-10.12-intel-2.7/autopy
running build_ext
building 'color' extension
creating build/temp.macosx-10.12-intel-2.7
creating build/temp.macosx-10.12-intel-2.7/src
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DNDEBUG=1 -DMM_LITTLE_ENDIAN -DIS_MACOSX -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/autopy-color-module.c -o build/temp.macosx-10.12-intel-2.7/src/autopy-color-module.o -Wall -Wparentheses -Winline -Wbad-function-cast -Wdisabled-optimization -Wshadow
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DNDEBUG=1 -DMM_LITTLE_ENDIAN -DIS_MACOSX -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/MMBitmap.c -o build/temp.macosx-10.12-intel-2.7/src/MMBitmap.o -Wall -Wparentheses -Winline -Wbad-function-cast -Wdisabled-optimization -Wshadow
cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. build/temp.macosx-10.12-intel-2.7/src/autopy-color-module.o build/temp.macosx-10.12-intel-2.7/src/MMBitmap.o -o build/lib.macosx-10.12-intel-2.7/autopy/color.so
building 'screen' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DNDEBUG=1 -DMM_LITTLE_ENDIAN -DIS_MACOSX -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/autopy-screen-module.c -o build/temp.macosx-10.12-intel-2.7/src/autopy-screen-module.o -Wall -Wparentheses -Winline -Wbad-function-cast -Wdisabled-optimization -Wshadow
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DNDEBUG=1 -DMM_LITTLE_ENDIAN -DIS_MACOSX -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/screen.c -o build/temp.macosx-10.12-intel-2.7/src/screen.o -Wall -Wparentheses -Winline -Wbad-function-cast -Wdisabled-optimization -Wshadow
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DNDEBUG=1 -DMM_LITTLE_ENDIAN -DIS_MACOSX -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/screengrab.c -o build/temp.macosx-10.12-intel-2.7/src/screengrab.o -Wall -Wparentheses -Winline -Wbad-function-cast -Wdisabled-optimization -Wshadow
src/screengrab.c:48:26: warning: implicit declaration of function 'CGDisplayBitsPerPixel' is invalid in C99 [-Wimplicit-function-declaration]
bitsPerPixel = (uint8_t)CGDisplayBitsPerPixel(displayID);
^
src/screengrab.c:174:15: warning: 'kCGLPFAFullScreen' is deprecated: first deprecated in macOS 10.6 [-Wdeprecated-declarations]
attribs[0] = kCGLPFAFullScreen;
^
/System/Library/Frameworks/OpenGL.framework/Headers/CGLTypes.h:98:2: note: 'kCGLPFAFullScreen' has been explicitly marked deprecated here
kCGLPFAFullScreen OPENGL_ENUM_DEPRECATED(10_0, 10_6) = 54,
^
src/screengrab.c:191:2: warning: 'CGLSetFullScreen' is deprecated: first deprecated in macOS 10.6 [-Wdeprecated-declarations]
CGLSetFullScreen(glContext);
^
/System/Library/Frameworks/OpenGL.framework/Headers/OpenGL.h:73:17: note: 'CGLSetFullScreen' has been explicitly marked deprecated here
extern CGLError CGLSetFullScreen(CGLContextObj ctx) OPENGL_DEPRECATED(10_0, 10_6);
^
src/screengrab.c:194:2: warning: implicit declaration of function 'glReadBuffer' is invalid in C99 [-Wimplicit-function-declaration]
glReadBuffer(GL_FRONT);
^
src/screengrab.c:194:15: error: use of undeclared identifier 'GL_FRONT'
glReadBuffer(GL_FRONT);
^
src/screengrab.c:197:2: warning: implicit declaration of function 'glFinish' is invalid in C99 [-Wimplicit-function-declaration]
glFinish();
^
src/screengrab.c:199:6: warning: implicit declaration of function 'glGetError' is invalid in C99 [-Wimplicit-function-declaration]
if (glGetError() != GL_NO_ERROR) return NULL;
^
src/screengrab.c:199:22: error: use of undeclared identifier 'GL_NO_ERROR'
if (glGetError() != GL_NO_ERROR) return NULL;
^
src/screengrab.c:207:2: warning: implicit declaration of function 'glPopClientAttrib' is invalid in C99 [-Wimplicit-function-declaration]
glPopClientAttrib(); /* Clear attributes previously set. */
^
src/screengrab.c:223:2: warning: implicit declaration of function 'glPushClientAttrib' is invalid in C99 [-Wimplicit-function-declaration]
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
^
src/screengrab.c:223:21: error: use of undeclared identifier 'GL_CLIENT_PIXEL_STORE_BIT'
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
^
src/screengrab.c:225:2: warning: implicit declaration of function 'glPixelStorei' is invalid in C99 [-Wimplicit-function-declaration]
glPixelStorei(GL_PACK_ALIGNMENT, BYTE_ALIGN); /* Force alignment. */
^
src/screengrab.c:225:16: error: use of undeclared identifier 'GL_PACK_ALIGNMENT'
glPixelStorei(GL_PACK_ALIGNMENT, BYTE_ALIGN); /* Force alignment. */
^
src/screengrab.c:226:16: error: use of undeclared identifier 'GL_PACK_ROW_LENGTH'
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
^
src/screengrab.c:227:16: error: use of undeclared identifier 'GL_PACK_SKIP_ROWS'
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
^
src/screengrab.c:228:16: error: use of undeclared identifier 'GL_PACK_SKIP_PIXELS'
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
^
src/screengrab.c:235:2: warning: implicit declaration of function 'glReadPixels' is invalid in C99 [-Wimplicit-function-declaration]
glReadPixels(x, y, width, height,
^
src/screengrab.c:236:30: error: use of undeclared identifier 'GL_BGRA'
MMRGB_IS_BGR ? GL_BGRA : GL_RGBA,
^
src/screengrab.c:240:15: error: use of undeclared identifier 'GL_UNSIGNED_INT_8_8_8_8_REV'
GL_UNSIGNED_INT_8_8_8_8_REV, /* Native format */
^
10 warnings and 9 errors generated.
error: command 'cc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-jSobWR/autopy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-lxk785-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-jSobWR/autopy/
</code></pre>
<p>Why am i getting this error?</p>
| 0 |
2016-09-21T20:40:54Z
| 39,626,280 |
<p>There is a known issue that for some reason wasn't fixed.</p>
<p>Issue: <a href="https://github.com/msanders/autopy/issues/75" rel="nofollow">https://github.com/msanders/autopy/issues/75</a></p>
<p>It contains the following workaround (commands to type in the console):</p>
<pre><code>brew install libpng
CFLAGS="-Wno-return-type" pip install git+https://github.com/potpath/autopy.git
</code></pre>
| 1 |
2016-09-21T20:45:02Z
|
[
"python",
"osx",
"macos-sierra",
"autopy"
] |
How did numpy implement multi-dimensional broadcasting?
| 39,626,233 |
<p>Memory (row major order):</p>
<pre><code>[[A(0,0), A(0,1)]
[A(1,0), A(1,1)]]
has this memory layout:
[A(0,0), A(0,1), A(1,0), A(1,1)]
</code></pre>
<p>I guess the algorithm works like this in the following cases.</p>
<p>Broadcasting Dimension is last dimension:</p>
<pre><code>[[0, 1, 2, 3] [[1]
x
[4, 5, 6, 7]] [10]]
A (2 by 4) B (2 by 1)
Iterate 0th dimensions of A and B simultaneously {
Iterate last dimension of A{
multiply;
}
}
</code></pre>
<p>Broadcasting dimension is 0th dimension:</p>
<pre><code>[[0, 1, 2, 3]
x [[1,10,100,1000]]
[4, 5, 6, 7]]
A (2 by 4) B (1 by 4)
Iterate 0th dimension of A{
Iterate 1st dimensions of A and B simultaneously{
multiply;
}
}
</code></pre>
<p>Question:</p>
<ol>
<li><p>How does numpy know which order of multiplication is best?
(Reading memory in order is better than reading memory all over the place, but how does numpy figure that out?)</p></li>
<li><p>What would numpy do if the arrays have more than two dimensions?</p></li>
<li>What would numpy do if the broadcasting dimension is not the last dimension?</li>
</ol>
<p>2nd guess of what is going on:</p>
<pre><code>#include <iostream>
int main(void){
const int nA = 12;
const int nB = 3;
int A[nA];
int B[nB];
for(int i = 0; i != nA; ++i) A[i] = i+1;
for(int i = 0; i != nB; ++i) B[i] = i+1;
//dimension
int dA[] = {2,3,2};
int dB[] = {1,3,1};
int* pA = A;
int* pB = B;
int* pA_end = A + nA;
//is it possible to make the compiler
//generate the iA and sA?
int iB = 0;
int iB_max = 2;
int sB[] = {1,0};
while(pA != pA_end){
std::cout << "*pA, *pB: " << *pA << ", " << *pB <<std::endl;
std::cout << "iB: " << iB <<std::endl;
*(pA) *= *(pB);
++pA;
pB += sB[iB];
++iB;
if (iB == iB_max) {iB = 0; pB = B;}
}
for(pA = A; pA != pA_end; ++pA){
std::cout << *(pA) << ", ";
}
std::cout << std::endl;
}
</code></pre>
| 3 |
2016-09-21T20:42:04Z
| 39,626,641 |
<p>To really get into broadcasting details you need to understand array shape and strides. But a lot of the work is now implemented in <code>c</code> code using <code>nditer</code>. You can read about it at <a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a>. <code>np.nditer</code> gives you access to the tool at the Python level, but its real value comes when used with <code>cython</code> or your own <code>c</code> code.</p>
<p><code>np.lib.stride_tricks</code> has functions that let you play with strides. One of its functions helps visualize how arrays are broadcasted together. In practice the work is done with <code>nditer</code>, but this function may help understand the action:</p>
<pre><code>In [629]: np.lib.stride_tricks.broadcast_arrays(np.arange(6).reshape(2,3),
np.array([[1],[2]]))
Out[629]:
[array([[0, 1, 2],
[3, 4, 5]]),
array([[1, 1, 1],
[2, 2, 2]])]
</code></pre>
<p>Note that, effectively the 2nd array has been replicated to match the 1st's shape. But the replication is done with stride tricks, not with actual copies.</p>
<pre><code>In [631]: A,B=np.lib.stride_tricks.broadcast_arrays(np.arange(6).reshape(2,3),
np.array([[1],[2]]))
In [632]: A.shape
Out[632]: (2, 3)
In [633]: A.strides
Out[633]: (12, 4)
In [634]: B.shape
Out[634]: (2, 3)
In [635]: B.strides
Out[635]: (4, 0)
</code></pre>
<p>It's this <code>(4,0)</code> strides that does the replication without copy.</p>
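<p>The zero-stride trick can also be seen in isolation with <code>np.broadcast_to</code> (available in modern NumPy), which returns the same kind of read-only, no-copy view:</p>

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.array([[1], [2]])

# A read-only view of B with A's shape; no data is copied
BB = np.broadcast_to(B, A.shape)

print(BB)          # [[1 1 1]
                   #  [2 2 2]]
print(BB.strides)  # the broadcast axis has stride 0, e.g. (8, 0) for int64
```

<p>The element size (8 here, 4 above) just depends on the dtype; the important part is the 0, which makes every step along that axis re-read the same memory.</p>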
<p>=================</p>
<p>Using the python level <code>nditer</code>, here's what it does during broadcasting.</p>
<pre><code>In [1]: A=np.arange(6).reshape(2,3)
In [2]: B=np.array([[1],[2]])
</code></pre>
<p>The plain nditer feeds elements one set at a time
<a href="http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html#using-an-external-loop" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html#using-an-external-loop</a></p>
<pre><code>In [5]: it =np.nditer((A,B))
In [6]: for a,b in it:
...: print(a,b)
0 1
1 1
2 1
3 2
4 2
5 2
</code></pre>
<p>But when I turn <code>external_loop</code> on, it iterates in chunks, here the respective rows of the broadcasted arrays:</p>
<pre><code>In [7]: it =np.nditer((A,B), flags=['external_loop'])
In [8]: for a,b in it:
...: print(a,b)
[0 1 2] [1 1 1]
[3 4 5] [2 2 2]
</code></pre>
<p>With a more complex broadcasting the <code>external_loop</code> still produces 1d arrays that allow simple <code>c</code> style iteration:</p>
<pre><code>In [13]: A1=np.arange(24).reshape(3,2,4)
In [18]: it =np.nditer((A1,np.arange(3)[:,None,None]), flags=['external_loop'])
In [19]: while not it.finished:
...: print(it[:])
...: it.iternext()
...:
(array([0, 1, 2, 3, 4, 5, 6, 7]), array([0, 0, 0, 0, 0, 0, 0, 0]))
(array([ 8, 9, 10, 11, 12, 13, 14, 15]), array([1, 1, 1, 1, 1, 1, 1, 1]))
(array([16, 17, 18, 19, 20, 21, 22, 23]), array([2, 2, 2, 2, 2, 2, 2, 2]))
</code></pre>
<p>Note that while <code>A1</code> is (3,2,4), the nditer loop yields 3 steps (1st axis) with 2*4 length elements. </p>
<p>I found in another <code>cython/nditer</code> SO question that the first approach did not produce much of a speed improvement, but the 2nd helped a lot. In <code>c</code> or <code>cython</code> the <code>external_loop</code> case would do simple low level iteration.</p>
<p>===============</p>
<p>If I broadcast on the 1st and 3rd axes, the iterator takes 2*3 steps (effectively flattening the first 2 axes, and feeding the 3rd):</p>
<pre><code>In [20]: it =np.nditer((A1,np.arange(2)[None,:,None]), flags=['external_loop'])
In [21]: while not it.finished:
...: print(it[:])
...: it.iternext()
...:
(array([0, 1, 2, 3]), array([0, 0, 0, 0]))
(array([4, 5, 6, 7]), array([1, 1, 1, 1]))
(array([ 8, 9, 10, 11]), array([0, 0, 0, 0]))
(array([12, 13, 14, 15]), array([1, 1, 1, 1]))
(array([16, 17, 18, 19]), array([0, 0, 0, 0]))
(array([20, 21, 22, 23]), array([1, 1, 1, 1]))
</code></pre>
<p>But with <code>buffered</code>, it iterates once, feeding me 2 1d arrays:</p>
<pre><code>In [22]: it =np.nditer((A1,np.arange(2)[None,:,None]), flags=['external_loop','buffered'])
In [23]: while not it.finished:
...: print(it[:])
...: it.iternext()
...:
(array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]),
array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]))
</code></pre>
<p><a href="http://stackoverflow.com/questions/39058641/does-cython-offer-any-reasonably-easy-and-efficient-way-to-iterate-numpy-arrays">Does Cython offer any reasonably easy and efficient way to iterate Numpy arrays as if they were flat?</a>
has some speed tests, showing that buffered external loop is fastest</p>
<p><code>cython</code> translates this into fast simple <code>c</code> iteration:</p>
<pre><code>for xarr in it:
x = xarr
size = x.shape[0]
for i in range(size):
x[i] = x[i]+1.0
</code></pre>
| 3 |
2016-09-21T21:09:18Z
|
[
"python",
"c",
"numpy"
] |
How do I draw a plot using pyqtgraph on PyQt4 widget created in QT Designer?
| 39,626,294 |
<p>I'm just starting with pyqtgraph. I have a graphicsView widget that I promoted with QT designer per the documentation. I would like to try a plot to see if it works. When I tried <code>pg.plot(x,y)</code> the program created a plot in a separate window rather than in the graphicsView widget. I'm using Windows 10, PyQt4, and Python 2.7. What am I doing wrong?</p>
<pre><code>from PyQt4 import QtGui
from PyQt4 import QtCore
import ui_test #Gui File
import sys
import pyqtgraph as pg
class Gui(QtGui.QMainWindow, ui_test.Ui_MainWindow):
def __init__(self):
super(self.__class__, self).__init__()
self.setupUi(self) # This is defined in ui_pumptest.py file automatically
self.plot()
def plot(self):
vb = pg.ViewBox()
self.graphicsView.setCentralItem(vb)
def main():
app = QtGui.QApplication(sys.argv) # A new instance of QApplication
form = Gui() # We set the form to be our ExampleApp (design)
form.show() # Show the form
app.exec_() # and execute the. app
if __name__ == '__main__': # if we're running file directly and not importing it
main() # run the main function
</code></pre>
| 0 |
2016-09-21T20:46:03Z
| 40,052,282 |
<p>Can you share the ui_pumptest.py file source? Otherwise it's hard to tell what your intentions are. If not, at least detail what construct you used to place in the QGraphicsView promotion process in QtDesigner (assuming you followed <a href="http://www.pyqtgraph.org/documentation/how_to_use.html#embedding-widgets-inside-pyqt-applications" rel="nofollow">http://www.pyqtgraph.org/documentation/how_to_use.html#embedding-widgets-inside-pyqt-applications</a>).</p>
<p>pg.plot creates its own PlotWindow->PlotWidget structure, which is why you're getting a separate window when calling it. The item you call plot on should instead be the container object that was promoted in your Qt UI file. </p>
| 0 |
2016-10-14T21:29:18Z
|
[
"python",
"pyqt4",
"pyqtgraph"
] |
Return from root to user using python os.system('kill') not working
| 39,626,329 |
<p>I need to run a command from my Python script as the root user (just sudo won't work), so I use os.system('sudo su') and am able to get root access. But then I need to return to the normal user. I tried os.system('exit'), but it still doesn't drop from the root login back to the user login; I have to manually enter exit in the terminal. Can someone help me with how to do this in Python?</p>
<pre><code>import os
import time
os.system('clear') #clear the terminal
os.system('sudo eject /dev/sr0')
time.sleep(2)
os.system('sudo modprobe option')
time.sleep(2)
os.system('sudo su')
time.sleep(5)
os.system('echo 2001 7d0e > /sys/bus/usb-serial/drivers/option1/new_id')
time.sleep(2)
os.system('exit')
</code></pre>
| 0 |
2016-09-21T20:48:17Z
| 39,640,415 |
<p>Your <code>echo</code> command doesn't work with <code>sudo</code> because the redirection to <code>> /sys/...</code> is performed by your own shell, as your own user, before <code>sudo echo</code> even runs, and root is required to open that file in the first place.</p>
<p>Try using <code>sudo bash -c 'command > redirect'</code> instead, which will acquire root permissions <em>before</em> attempting the redirect:</p>
<pre><code>def sudo(command):
return os.system("sudo bash -c '{}'".format(command))
sudo('echo foo > /sys/restricted/file')
</code></pre>
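<p>One caveat with the helper above: a <code>command</code> that itself contains a single quote would break out of the hand-written quoting. A hedged variant using <code>shlex.quote</code> from the standard library (Python 3.3+), shown here only building the command string; pass the result to <code>os.system</code> when you actually want to run it:</p>

```python
import shlex

def sudo_cmd(command):
    # shlex.quote escapes the command so it survives being wrapped
    # in bash -c '...' even if it contains quotes itself
    return "sudo bash -c {0}".format(shlex.quote(command))

cmd = sudo_cmd('echo 2001 7d0e > /sys/bus/usb-serial/drivers/option1/new_id')
# os.system(cmd)  # run it for real
```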
| 0 |
2016-09-22T13:27:43Z
|
[
"python",
"root",
"exit",
"os.system"
] |
n-dataframes inner joined to final dataframe
| 39,626,332 |
<p>I'm trying to figure out how to inner merge n-dataframes to a single final dataframe.</p>
<p>I need to be able to specify a list of dataframes in which the inner join of <strong>all</strong> is output as another dataframe. Again, the exact number will not be known in advance, but the integer count can be.</p>
<p>See the below code:</p>
<pre><code>import pandas as pd
result = pd.merge(df_1, df_2, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
result_2 = pd.merge(df_3, df_4, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
result_final = pd.merge(result, result_2, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
</code></pre>
| 1 |
2016-09-21T20:48:26Z
| 39,627,468 |
<p>How about this:</p>
<pre><code>from functools import reduce  # needed on Python 3; built-in on Python 2

dflist = [df1, df2, df3, df4]
result_final = reduce(lambda x,y: x.merge(y,
on=['Col1', 'Col2', 'Col3', 'Col4'],
how='inner'),
dflist)
</code></pre>
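<p>A small self-contained check of the idea (toy frames joining on a single key for brevity; on Python 3, <code>reduce</code> has to be imported from <code>functools</code>):</p>

```python
from functools import reduce  # built-in on Python 2, in functools on Python 3
import pandas as pd

df1 = pd.DataFrame({'Col1': [1, 2, 3], 'a': [10, 20, 30]})
df2 = pd.DataFrame({'Col1': [2, 3, 4], 'b': [5, 6, 7]})
df3 = pd.DataFrame({'Col1': [3, 2, 9], 'c': [1, 2, 3]})

dflist = [df1, df2, df3]
result = reduce(lambda x, y: x.merge(y, on=['Col1'], how='inner'), dflist)
# only Col1 values present in every frame (2 and 3) survive the chained inner join
```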
| 1 |
2016-09-21T22:23:23Z
|
[
"python",
"pandas"
] |
n-dataframes inner joined to final dataframe
| 39,626,332 |
<p>I'm trying to figure out how to inner merge n-dataframes to a single final dataframe.</p>
<p>I need to be able to specify a list of dataframes in which the inner join of <strong>all</strong> is output as another dataframe. Again, the exact number will not be known in advance, but the integer count can be.</p>
<p>See the below code:</p>
<pre><code>import pandas as pd
result = pd.merge(df_1, df_2, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
result_2 = pd.merge(df_3, df_4, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
result_final = pd.merge(result, result_2, on=['Col1', 'Col2', 'Col3', 'Col4'], how='inner')
</code></pre>
| 1 |
2016-09-21T20:48:26Z
| 39,631,467 |
<pre><code>cols = ['Col1', 'Col2', 'Col3', 'Col4']
# join='inner' matches the inner merge; concat's default join is 'outer'
pd.concat([d.set_index(cols) for d in [df_1, df_2, df_3, df_4]], axis=1, join='inner')
</code></pre>
| 1 |
2016-09-22T06:09:04Z
|
[
"python",
"pandas"
] |
Python data will not input into CSV file
| 39,626,359 |
<p>I am working on a project for my shop that would allow me to track dimensions for my statistical process analysis. I have a part with 2 dimensions that I will measure 5 samples for. The dimensions are OAL (Over All Length) and a barb diameter. I got Python and Tkinter to create the window and put all the data into the correct place, but it will not export to the CSV file. It keeps telling me that the name is not defined, but the variable does exist, and if I use a print command, the correct value comes up in the shell. So I know the variable exists; I'm just not sure if the problem comes from using Tkinter or not. Any help would be appreciated.</p>
<pre><code>import time
from tkinter import *
import threading
import csv
import datetime
def gui():
root = Tk()
root.title("Troy Screw Products")
titleLabel = Label(root,text="Inspection Worksheet")
partNumLabel = Label(root,text="Part #68800")
now = datetime.datetime.now()
typeLabel = ["Barb Dia","Barb Dia","OAL","Knurl","Threads","Chamfer","OD","OD","OD"]
dimLabel = [".356",".333",".437",".376","n/a",".258",".337",".321",".305"]
tolLabel = [".354/.358",".331/.335",".433/.441",".374/.378","1/4-20",".252/.263",".335/.339",".319/.323",".303/.307"]
observations = ["Obs 1","Obs 2","Obs 3","Obs 4","Obs 5"]
cd1Obs = []
Label(text="Inspection Worksheet").grid(row=0,column=0)
Label(text="Part #68800").grid(row=1,column=0)
r=0
for c in typeLabel:
Label(text=c,relief=RIDGE,width=15).grid(row=2,column=r)
r=r+1
r=0
for c in dimLabel:
Label(text=c,relief=RIDGE,width=15).grid(row=3,column=r)
r=r+1
r=0
for c in tolLabel:
Label(text=c,relief=RIDGE,width=15).grid(row=4,column=r)
r=r+1
r=0
for c in tolLabel:
Checkbutton(width=15).grid(row=5,column=r)
r=r+1
Label(text="").grid(row=6,column=1)
Label(text="").grid(row=7,column=1)
Label(text="OAL").grid(row=8,column=2)
Label(text="Barb Dia").grid(row=8,column=6)
r=9
for c in observations:
Label(text=c,width=15).grid(row=r,column=1)
Label(text=c,width=15).grid(row=r,column=5)
r=r+1
dimOneOb1=StringVar()
dimOneOb2=StringVar()
dimOneOb3=StringVar()
dimOneOb4=StringVar()
dimOneOb5=StringVar()
dimTwoOb1=StringVar()
dimTwoOb2=StringVar()
dimTwoOb3=StringVar()
dimTwoOb4=StringVar()
dimTwoOb5=StringVar()
Entry(textvariable=dimOneOb1).grid(row=9,column=2)
Entry(textvariable=dimOneOb2).grid(row=10,column=2)
Entry(textvariable=dimOneOb3).grid(row=11,column=2)
Entry(textvariable=dimOneOb4).grid(row=12,column=2)
Entry(textvariable=dimOneOb5).grid(row=13,column=2)
Entry(textvariable=dimTwoOb1).grid(row=9,column=6)
Entry(textvariable=dimTwoOb2).grid(row=10,column=6)
Entry(textvariable=dimTwoOb3).grid(row=11,column=6)
Entry(textvariable=dimTwoOb4).grid(row=12,column=6)
Entry(textvariable=dimTwoOb5).grid(row=13,column=6)
def submitEntry():
groupOal1=dimOneOb1.get()
groupOal2=dimOneOb2.get()
groupOal3=dimOneOb3.get()
groupOal4=dimOneOb4.get()
groupOal5=dimOneOb5.get()
groupBarb1=dimTwoOb1.get()
groupBarb2=dimTwoOb2.get()
groupBarb3=dimTwoOb3.get()
groupBarb4=dimTwoOb4.get()
groupBarb5=dimTwoOb5.get()
writeCsv()
Button(text="Submit",command=submitEntry).grid(row=14,column=7)
def writeCsv():
with open("CD 68800 OAL.csv", "a") as cdOal: #open file and give file variable name; r=read, w=write, a=append
cdOalWriter = csv.writer(cdOal) #Give writer a variable name
cdOalWriter.writerow([now.strftime("%Y-%m-%d %H:%M"),groupOal1,groupOal2,groupOal3,groupOal4,groupOal5])
csOal.close()
root.mainloop()
op1 = threading.Thread(target = gui)
op1.start()
</code></pre>
| 0 |
2016-09-21T20:50:18Z
| 39,627,644 |
<p>Simply pass those variables to the function, <code>writeCsv()</code>. Because the <em>groupOal</em> variables are local to the <code>submitEntry()</code> function, the function it calls, <code>writeCsv</code>, cannot see them. The code below also uses the <a href="http://stackoverflow.com/questions/36901/what-does-double-star-and-star-do-for-python-parameters">argument unpacking idiom</a>, <code>*</code> applied to a list of objects (the more verbose call is left commented out):</p>
<pre><code>def submitEntry():
groupOal1=dimOneOb1.get()
groupOal2=dimOneOb2.get()
groupOal3=dimOneOb3.get()
groupOal4=dimOneOb4.get()
groupOal5=dimOneOb5.get()
group0as = [groupOal1,groupOal2,groupOal3,groupOal4,groupOal5]
groupBarb1=dimTwoOb1.get()
groupBarb2=dimTwoOb2.get()
groupBarb3=dimTwoOb3.get()
groupBarb4=dimTwoOb4.get()
groupBarb5=dimTwoOb5.get()
writeCsv(*group0as)
# writeCsv(groupOal1,groupOal2,groupOal3,groupOal4,groupOal5)
def writeCsv(a, b, c, d, e):
with open("CD 68800 OAL.csv", "a") as cdOal:
cdOalWriter = csv.writer(cdOal)
cdOalWriter.writerow([now.strftime("%Y-%m-%d %H:%M"), a, b, c, d, e])
    # no close() needed here: the with-statement closes the file automatically
</code></pre>
| 0 |
2016-09-21T22:44:56Z
|
[
"python",
"csv",
"tkinter"
] |
How to get odds-ratios and other related features with scikit-learn
| 39,626,401 |
<p>I'm going through this <a href="http://www.ats.ucla.edu/stat/mult_pkg/faq/general/odds_ratio.htm" rel="nofollow">odds ratios in logistic regression tutorial</a>, and trying to get exactly the same results with the logistic regression module of scikit-learn. With the code below, I am able to get the coefficient and intercept, but I could not find a way to get other properties of the model listed in the tutorial, such as the <em>log-likelihood, Odds Ratio, Std. Err., z, P>|z|, [95% Conf. Interval]</em>. If someone could show me how to have them calculated with the <code>sklearn</code> package, I would appreciate it.</p>
<pre><code>import pandas as pd
from sklearn import linear_model
url = 'http://www.ats.ucla.edu/stat/mult_pkg/faq/general/sample.csv'
df = pd.read_csv(url, na_values=[''])
y = df.hon.values
X = df.math.values
y = y.reshape(200,1)
X = X.reshape(200,1)
clf = linear_model.LogisticRegression(C=1e5)
clf.fit(X,y)
clf.coef_
clf.intercept_
</code></pre>
| 0 |
2016-09-21T20:53:29Z
| 39,630,166 |
<p>You can get the odds ratios by taking the exponent of the coeffecients:</p>
<pre><code>import numpy as np
X = df.female.values.reshape(200,1)
clf.fit(X,y)
np.exp(clf.coef_)
# array([[ 1.80891307]])
</code></pre>
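<p>For intuition on why exponentiating the coefficient gives the odds ratio: the model is linear in log-odds, so a one-unit increase in a predictor multiplies the odds by exp(coef), regardless of the baseline value. A quick numeric check with made-up coefficients (no scikit-learn needed):</p>

```python
import math

b0, b1 = -1.5, 0.8  # hypothetical intercept and slope of a logistic model

def odds(x):
    p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted probability
    return p / (1.0 - p)                        # probability -> odds

ratio = odds(3.0) / odds(2.0)
# ratio equals math.exp(b1) (up to floating point), for any pair x+1 and x
```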
<p>As for the other statistics, these are not easy to get from scikit-learn (where model evaluation is mostly done using cross-validation), if you need them you're better off using a different library such as <code>statsmodels</code>.</p>
| 0 |
2016-09-22T04:16:42Z
|
[
"python",
"scikit-learn"
] |
How to get odds-ratios and other related features with scikit-learn
| 39,626,401 |
<p>I'm going through this <a href="http://www.ats.ucla.edu/stat/mult_pkg/faq/general/odds_ratio.htm" rel="nofollow">odds ratios in logistic regression tutorial</a>, and trying to get exactly the same results with the logistic regression module of scikit-learn. With the code below, I am able to get the coefficient and intercept, but I could not find a way to get other properties of the model listed in the tutorial, such as the <em>log-likelihood, Odds Ratio, Std. Err., z, P>|z|, [95% Conf. Interval]</em>. If someone could show me how to have them calculated with the <code>sklearn</code> package, I would appreciate it.</p>
<pre><code>import pandas as pd
from sklearn import linear_model
url = 'http://www.ats.ucla.edu/stat/mult_pkg/faq/general/sample.csv'
df = pd.read_csv(url, na_values=[''])
y = df.hon.values
X = df.math.values
y = y.reshape(200,1)
X = X.reshape(200,1)
clf = linear_model.LogisticRegression(C=1e5)
clf.fit(X,y)
clf.coef_
clf.intercept_
</code></pre>
| 0 |
2016-09-21T20:53:29Z
| 39,711,837 |
<p>In addition to @maxymoo's answer, to get other statistics, <code>statsmodels</code> can be used. Assuming that you have your data in a <code>DataFrame</code> called <code>df</code>, the code below should show a good summary:</p>
<pre><code>import pandas as pd
from patsy import dmatrices
import statsmodels.api as sm
y, X = dmatrices( 'label ~ age + gender', data=df, return_type='dataframe')
mod = sm.Logit(y, X)
res = mod.fit()
print res.summary()
</code></pre>
| 0 |
2016-09-26T20:28:57Z
|
[
"python",
"scikit-learn"
] |
add dimension to an xarray DataArray
| 39,626,402 |
<p>I need to add a dimension to a <code>DataArray</code>, filling the values across the new dimension. Here's the original array.</p>
<pre><code>a_size = 10
a_coords = np.linspace(0, 1, a_size)
b_size = 5
b_coords = np.linspace(0, 1, b_size)
# original 1-dimensional array
x = xr.DataArray(
np.random.random(a_size),
    coords=[('a', a_coords)])
</code></pre>
<p>I guess I could create an empty DataArray with the new dimension and copy the existing data in.</p>
<pre><code>y = xr.DataArray(
    np.empty((b_size, a_size)),
    coords=[('b', b_coords), ('a', a_coords)])
y[:] = x
</code></pre>
<p>A better idea might be to be to use <code>concat</code>. It took me a while to figure out how to specify both the dims and the coords for the concat dimension, and none of these options seem great. Is there something I'm missing that can makes this version cleaner?</p>
<pre><code># specify the dimension name, then set the coordinates
y = xr.concat([x for _ in b_coords], 'b')
y['b'] = b_coords
# specify the coordinates, then rename the dimension
y = xr.concat([x for _ in b_coords], b_coords)
y = y.rename({'concat_dim': 'b'})  # rename returns a new object
# use a DataArray as the concat dimension
y = xr.concat(
[x for _ in b_coords],
xr.DataArray(b_coords, name='b', dims=['b']))
</code></pre>
<p>Still, is there a better way to do this than either of the two above options?</p>
| 2 |
2016-09-21T20:53:30Z
| 39,627,264 |
<p>You've done a pretty thorough analysis of the current options, and indeed none of these are very clean.</p>
<p>This would certainly be useful functionality to write for xarray, but nobody has gotten around to implementing it yet. Maybe you would be interested in helping out?</p>
<p>See this GitHub issue for some API proposals: <a href="https://github.com/pydata/xarray/issues/170" rel="nofollow">https://github.com/pydata/xarray/issues/170</a></p>
| 1 |
2016-09-21T22:03:56Z
|
[
"python",
"python-xarray"
] |
how to calculate entropy from np histogram
| 39,626,432 |
<p>I have an example of a histogram with:</p>
<pre><code>mu1 = 10, sigma1 = 10
s1 = np.random.normal(mu1, sigma1, 100000)
</code></pre>
<p>and calculated </p>
<pre><code>hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True)
for i in hist1[0]:
ent = -sum(i * log(abs(i)))
print (ent)
</code></pre>
<p>Now I want to find the entropy from the given histogram array, but since np.histogram returns two arrays, I'm having trouble calculating the entropy. How can I use just the first array returned by np.histogram to calculate the entropy? I also get a math domain error even when my code above looks correct. :( </p>
<p><strong>Edit:</strong> How do I find the entropy when mu = 0, given that log(0) yields a math domain error?</p>
<hr>
<p>So the actual code I'm trying to write is: </p>
<pre><code>mu1, sigma1 = 0, 1
mu2, sigma2 = 10, 1
s1 = np.random.normal(mu1, sigma1, 100000)
s2 = np.random.normal(mu2, sigma2, 100000)
hist1 = np.histogram(s1, bins=100, range=(-20,20), density=True)
data1 = hist1[0]
ent1 = -(data1*np.log(np.abs(data1))).sum()
hist2 = np.histogram(s2, bins=100, range=(-20,20), density=True)
data2 = hist2[0]
ent2 = -(data2*np.log(np.abs(data2))).sum()
</code></pre>
<p>So far, the first example ent1 would yield nan, and the second, ent2, yields math domain error :(</p>
| 1 |
2016-09-21T20:55:30Z
| 39,626,753 |
<p>You can calculate the entropy using vectorized code:</p>
<pre><code>import numpy as np
mu1 = 10
sigma1 = 10
s1 = np.random.normal(mu1, sigma1, 100000)
hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True)
data = hist1[0]
ent = -(data*np.log(np.abs(data))).sum()
# output: 7.1802159512213191
</code></pre>
<p>But if you like to use a for loop, you may write:</p>
<pre><code>import numpy as np
import math
mu1 = 10
sigma1 = 10
s1 = np.random.normal(mu1, sigma1, 100000)
hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True)
ent = 0
for i in hist1[0]:
ent -= i * math.log(abs(i))
print (ent)
# output: 7.1802159512213191
</code></pre>
| 1 |
2016-09-21T21:19:22Z
|
[
"python",
"numpy",
"histogram",
"entropy"
] |
Appropriate Time to Dynamically Set Variable Names?
| 39,626,439 |
<p>Edit: Turns out the answer is an emphatic "no". However, I'm still struggling to populate the lists with the right number of entries. </p>
<hr>
<p>I've been searching StackOverflow all over for this, and I keep seeing that dynamically setting variable names is not a good solution. However, I can't think of another way to to this. </p>
<p>I have a <code>DataFrame</code> created from <code>pandas</code> (read in from Excel) that has columns with string headers and integer entries, and one column (let's call it Week) with the numbers 1 through 52 increasing sequentially. What I want to do is create separate lists, one named for each column header, in which each week number appears as many times as the integer listed in that column.</p>
<p>This is simple for a few columns, just manually create lists names, but as the number of columns grows, this could get a little out of hand.</p>
<p>Atrocious explanation, it was the best I could come up with. Hopefully a simplified example will clarify. </p>
<pre><code>week str1 str2 str3
1 8 2 5
2 1 0 3
3 2 1 1
</code></pre>
<p>Desired output:</p>
<pre><code>str1_count = [1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 3] # eight 1's, one 2, and two 3's
str2_count = [1, 1, 3] # two 1's, one 3
str3_count = [1, 1, 1, 1, 1, 2, 2, 2, 3] # five 1's, three 2's, one 3
</code></pre>
<p>What I have so far: </p>
<pre><code>results = {}
df = pd.DataFrame(from_csv(...., sep = ","))
for key in df:
    for i in df[key]:
results[key] = i # this only creates a list with the int value of the most recent i
</code></pre>
| 0 |
2016-09-21T20:55:46Z
| 39,626,943 |
<p>So, like this?</p>
<pre><code>import collections
import csv
import io
reader = csv.DictReader(io.StringIO('''
week,str1,str2,str3
1,8,2,5
2,1,0,3
3,2,1,1
'''.strip()))
data = collections.defaultdict(list)
for row in reader:
for key in ('str1', 'str2', 'str3'):
data[key].extend([row['week']]*int(row[key]))
from pprint import pprint
pprint(dict(data))
# Output:
{'str1': ['1', '1', '1', '1', '1', '1', '1', '1', '2', '3', '3'],
'str2': ['1', '1', '3'],
'str3': ['1', '1', '1', '1', '1', '2', '2', '2', '3']}
</code></pre>
<p>Note: Pandas is good for crunching data and doing some interesting operations on it, but if you just need something simple you don't need it. This is one of those cases.</p>
| 1 |
2016-09-21T21:35:30Z
|
[
"python",
"pandas",
"dataframe"
] |
distance from root search of tree fails
| 39,626,562 |
<p>The tree is as follows:</p>
<pre class="lang-py prettyprint-override"><code> (1,1)
/ \
(2,1) (1,2)
/ \ / \
(3,1)(2,3) (3,2)(1,3)
and onward
</code></pre>
<p>The root is (1,1), all values in the tree are tuples.</p>
<pre><code>Where (x,y) is an element of the tree:
The leftChild will be (x+y,y)
The rightChild will be (x,x+y)
</code></pre>
<p>I am building a function that finds the distance from the root (1,1). I cannot build the tree from scratch as it will be too time consuming.</p>
<p>I have found that to move 1 step of distance from the tuple being searched for, we must subtract the min from the max. We can work backward.</p>
<pre><code> 1 2
(3,2)->(1,2)->(1,1)
(3,2)->(3-2,2) = (1,2)->(1,2-1) = (1,1)
given this is always true:
if x > y:
newtuple = (x-y,y)
distance += 1
else if y > x:
newtuple = (x,y-x)
distance += 1
</code></pre>
<p>Yet because possible test cases can go up to even x = 10^50, this is even too slow.</p>
<p>So I have found a formally to find the amount of subtractions of x with y or vice versa to make a x > y change to y < x and vice versa until (x,y) = (1,1).</p>
<p>So X - Y(a certain amount of times, say z) will make x less than y...
X - Y*z = y
find z via algebra... z = (Y-X)/(-Y)</p>
<p>This is my code so far:</p>
<pre><code>from decimal import Decimal
import math
def answer(M,F):
M = int(M)
F = int(F)
i = 0
while True:
if M == 1 and F == 1:
return str(i)
if M > F:
x = math.ceil((F-M)/(-F))
M -= F*x
elif F > M:
x = math.ceil((M-F)/(-M))
F -= M*x
else:
if F == M and F != 1:
return "impossible"
i += x
if M < 1 or F < 1:
return "impossible"
return str(i)
</code></pre>
<p>And it's not passing some unknown test cases, yet passes all the test cases I can think of. What test cases could I be failing? Where is my code wrong?</p>
<p>P.S. I used the Decimal module; it's just removed from the code above to make it more readable.</p>
| 1 |
2016-09-21T21:04:06Z
| 39,652,148 |
<p>Floor division loses no precision, but the quotient can be off by one, which the code below accounts for.</p>
<pre class="lang-py prettyprint-override"><code>def answer(M,F):
M = int(M)
F = int(F)
i = 0
while True:
if M == 1 and F == 1:
return str(i)
if M > F:
x = F-M
x = x//(-F)
if F < M-(F*x):
x += 1
M -= F*x
elif F > M:
x = M-F
x = x//(-M)
if M < F-(M*x):
x += 1
F -= M*x
else:
if F == M and F != 1:
return "impossible"
i += x
if M < 1 or F < 1:
return "impossible"
return str(i)
</code></pre>
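<p>For very large inputs, the runs of repeated subtractions can be collapsed further: the quotients are exactly those of the Euclidean algorithm, so each loop iteration can handle a whole run of steps at once with floor division and modulo. A hypothetical alternative sketch (not a drop-in replacement for the function above):</p>

```python
def distance(m, f):
    """Steps from (m, f) back up to the root (1, 1), or None if unreachable."""
    steps = 0
    while f != 0:
        steps += m // f   # a whole run of subtractions in one go
        m, f = f, m % f   # same swap the Euclidean algorithm makes
    if m != 1:            # gcd(m, f) != 1 means (1, 1) is never reached
        return None
    return steps - 1      # the last run overshoots (1, 1) by exactly one step
```

<p>Sanity checks: distance(1, 1) == 0, distance(3, 2) == 2 (via (1, 2)), distance(4, 4) is None, and distance(10**50, 1) returns instantly where one-step subtraction would not.</p>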
| 0 |
2016-09-23T03:34:09Z
|
[
"python",
"algorithm",
"data-structures",
"tree",
"binary-tree"
] |
Confusion on how to do add basic indexing in sqlalchemy after table creation
| 39,626,659 |
<p>I am trying to get a simple example of indexing working with a database that has 100,000 entries and see how it improves speed. The table looks something like this:</p>
<pre><code>user = Table('user', metadata,
Column('id', Integer, primary_key=True),
Column('first_name', String(16), nullable=False),
Column('last_name', String(16), nullable=False),
Column('age', Integer, nullable=False),
Column('joined_at', DateTime(), nullable=False, index=True),
)
</code></pre>
<p>I am given a user key/value dict with keys 'first_name', 'last_name', 'age', and 'joined_at', and the query looks like this:</p>
<pre><code>q = session.query(UserTable).filter(and_(
UserTable.first_name == user['first_name'],
UserTable.last_name == user['last_name'],
UserTable.age == user['age'],
UserTable.joined_at == user['joined_at']))
</code></pre>
<p>I was wondering what syntax would properly use <code>create()</code> on the new index for <code>joined_at</code>, so that the query only looks at rows whose <code>joined_at</code> matches <code>user['joined_at']</code>, since the index was added after the table was created.</p>
| 1 |
2016-09-21T21:10:36Z
| 39,626,777 |
<p>Indexing in SQL, and thus in SQLAlchemy, happens behind the scenes.</p>
<p>Indexing is a feature of the underlying SQL engine and doesn't need any special query in order to utilize the performance gain. The mere definition is sufficient.</p>
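<p>To illustrate that point, here is a stdlib <code>sqlite3</code> sketch (an analogy, not SQLAlchemy itself): it adds an index after the table already exists and runs the <em>same</em> SELECT before and after; only the query plan changes. In SQLAlchemy, the equivalent after-the-fact creation would be along the lines of <code>Index('ix_user_joined_at', user.c.joined_at).create(engine)</code>.</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, joined_at TEXT NOT NULL)")
conn.execute("INSERT INTO user (joined_at) VALUES ('2016-09-21')")

query = "SELECT * FROM user WHERE joined_at = ?"

# Before the index: the planner falls back to a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query, ("2016-09-21",)).fetchall()

# Create the index after the table already exists -- the query is untouched.
conn.execute("CREATE INDEX ix_user_joined_at ON user (joined_at)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, ("2016-09-21",)).fetchall()

print(before[0][-1])  # e.g. a SCAN of the user table
print(after[0][-1])   # e.g. a SEARCH using index ix_user_joined_at
```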
| 0 |
2016-09-21T21:21:38Z
|
[
"python",
"mysql",
"indexing",
"sqlalchemy"
] |
Solving a first order BVP with two boundary conditions with scipy's solve_bvp
| 39,626,681 |
<p>I am using scipy's BVP solver: </p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html</a></p>
<p>The problem I am running into is that you can only have as many boundary conditions as you have equations. I only have one equation but I have two boundary conditions. How can this be fixed? </p>
<p><strong>MWE</strong></p>
<pre><code>>>> import numpy as np
>>> from scipy.integrate import solve_bvp
>>>
>>> x = np.linspace(0, 1, 100)
>>> dydx = lambda x,y: y*np.sin(x)
>>>
>>> result = solve_bvp(dydx,
... lambda ya,yb: np.array([ (ya[0]-1)**2 + (yb[0]-1)**2 ]),
... x, [np.ones(len(x))], max_nodes=100000, tol=1e-9)
>>>
>>> result
message: 'The algorithm converged to the desired accuracy.'
niter: 2
p: None
rms_residuals: array([ 3.48054730e-10, 3.47134800e-10, 3.46220750e-10,
3.45304147e-10, 3.44446495e-10, 3.43708535e-10,
3.42834209e-10, 3.41730399e-10, 3.40902853e-10,
3.40116511e-10, 3.39286663e-10, 3.38873550e-10,
3.37853506e-10, 3.36632825e-10, 3.35880059e-10,
3.35385717e-10, 3.35453551e-10, 3.34784891e-10,
3.32401725e-10, 3.34486867e-10, 3.35674629e-10,
3.37743169e-10, 3.34329677e-10, 3.29311236e-10,
3.27606354e-10, 3.28578369e-10, 3.27772742e-10,
3.26447666e-10, 3.24908674e-10, 3.24192402e-10,
3.25862692e-10, 3.28872815e-10, 3.22757465e-10,
3.21914926e-10, 3.20227078e-10, 3.23579897e-10,
3.28140843e-10, 3.18151515e-10, 3.21177949e-10,
3.16611117e-10, 3.45372059e-10, 3.18345626e-10,
3.24069081e-10, 3.32570305e-10, 3.19141250e-10,
3.14376144e-10, 3.18278959e-10, 3.11802424e-10,
3.15597596e-10, 3.22818017e-10, 3.15384028e-10,
3.17673241e-10, 3.08099021e-10, 3.11743210e-10,
3.28763320e-10, 3.24475197e-10, 3.28343741e-10,
3.25892534e-10, 3.12411478e-10, 3.37194926e-10,
3.20060651e-10, 3.03517565e-10, 3.00795182e-10,
3.06846379e-10, 3.00064770e-10, 3.05765788e-10,
2.99543196e-10, 2.98157661e-10, 2.97863071e-10,
2.96467397e-10, 3.74567928e-10, 3.24304178e-10,
3.16165056e-10, 3.02449962e-10, 2.93348900e-10,
3.08601600e-10, 2.93492038e-10, 3.11756310e-10,
2.97438508e-10, 3.17903029e-10, 3.05491804e-10,
3.02623385e-10, 3.06340149e-10, 2.94595579e-10,
2.87571373e-10, 3.03866639e-10, 3.42985927e-10,
3.21829601e-10, 3.70164964e-10, 3.53563487e-10,
3.00178404e-10, 2.83888849e-10, 2.82310753e-10,
2.85661232e-10, 3.11405296e-10, 2.80954237e-10,
2.79523163e-10, 2.80819968e-10, 2.94406497e-10,
3.19548071e-10, 2.95355340e-10, 2.77522541e-10,
2.76703591e-10, 2.88121141e-10, 2.75290617e-10,
2.84220379e-10, 2.89876300e-10, 3.14510031e-10,
3.11057911e-10, 2.72303350e-10, 2.79168046e-10,
2.90700062e-10, 2.78438999e-10, 2.68897634e-10,
2.69286657e-10, 2.90472537e-10, 2.78378707e-10,
2.97980086e-10, 2.97008148e-10, 2.65028623e-10,
2.64744165e-10, 2.69437313e-10, 2.63909411e-10,
2.62339786e-10, 2.71045386e-10, 2.65850861e-10,
2.78162780e-10, 2.61231989e-10, 2.70109868e-10,
2.61595375e-10, 2.59299272e-10, 2.65106316e-10,
2.74283076e-10, 2.86861196e-10, 3.03175803e-10,
2.58290170e-10, 3.61324845e-10, 3.39239278e-10,
2.91296094e-10, 2.83918017e-10, 4.52002829e-10,
2.52915179e-10, 3.13709791e-10, 3.72555078e-10,
2.48903834e-10, 2.58089690e-10, 2.86634265e-10,
2.60879823e-10, 2.64643448e-10, 3.03583577e-10,
5.12385186e-10, 2.42415186e-10, 3.47677749e-10,
2.41037177e-10, 2.91624837e-10, 2.88486833e-10,
2.97731066e-10, 3.46537042e-10, 2.44416103e-10,
4.29099468e-10, 4.71320607e-10, 2.97672164e-10,
3.26787171e-10, 2.34920240e-10, 2.64792458e-10,
2.91952218e-10, 2.47064463e-10, 2.34000456e-10,
4.10948830e-10, 2.36520479e-10, 3.42444147e-10,
2.76749245e-10, 2.51379106e-10, 2.40093828e-10,
2.72602754e-10, 3.94004751e-10, 2.84796018e-10,
3.72431030e-10, 2.23313796e-10, 3.32252341e-10,
3.34369044e-10, 2.63230702e-10, 2.17694780e-10,
3.25346854e-10, 2.64869219e-10, 3.51158895e-10,
3.60872478e-10, 3.09047143e-10, 2.22308395e-10,
2.43344334e-10, 2.16527726e-10, 2.98642975e-10,
2.77152047e-10, 2.66161092e-10, 2.91230604e-10,
2.37973344e-10, 2.95802884e-10, 2.78890213e-10,
2.19485810e-10, 3.53536609e-10, 2.16716319e-10,
2.51682560e-10, 2.04749227e-10, 4.31531575e-10,
3.47595602e-10, 2.38112586e-10, 1.92156254e-10,
2.46451083e-10, 2.99903096e-10, 1.90926751e-10,
2.05652721e-10, 2.33415220e-10, 2.43209873e-10,
1.85670073e-10, 2.02780645e-10, 1.89290313e-10,
1.81291292e-10, 1.77940599e-10, 3.60470288e-10,
3.28978503e-10, 1.74204497e-10, 1.95779041e-10,
2.50524362e-10, 2.49249184e-10, 1.67522152e-10,
1.68202192e-10, 1.82172067e-10, 1.77510490e-10,
1.62468247e-10, 1.75426885e-10, 3.24084379e-10,
2.21087707e-10, 1.88843987e-10, 2.57800867e-10,
1.53483353e-10, 1.80491618e-10, 2.28820880e-10,
2.32095332e-10, 1.90031952e-10, 1.46493968e-10,
2.00403717e-10, 3.23811210e-10, 1.90421082e-10,
1.45237509e-10, 1.67970046e-10, 1.49189288e-10,
1.39748871e-10, 1.40621758e-10, 1.33316350e-10,
2.22781676e-10, 1.31021647e-10, 2.12758988e-10,
1.38894682e-10, 1.75219768e-10, 1.78296709e-10,
3.67044064e-10, 2.04279379e-10, 2.11899286e-10,
1.59322174e-10, 1.21129350e-10, 1.18003803e-10,
1.42850831e-10, 1.33020880e-10, 1.27620814e-10,
1.48379719e-10, 3.35008994e-10, 3.31675208e-10,
2.49871984e-10, 1.06526186e-10, 1.57190187e-10,
9.38688508e-11, 2.16167913e-10, 1.12548066e-10,
1.98572296e-10, 2.12773340e-10, 3.09554965e-10,
2.32665662e-10, 8.05365861e-11, 2.71090303e-10,
1.60686511e-10, 1.20088934e-10, 3.23772391e-10,
2.01129249e-10, 3.04370308e-10, 6.75862037e-11,
7.60074235e-11, 1.55486106e-10, 2.24650749e-10,
2.10826836e-10, 3.75354523e-10, 1.48504437e-10,
1.65019969e-10, 7.52309342e-11, 3.59188285e-10,
1.55801401e-10, 1.52568581e-10, 5.38230045e-11])
sol: <scipy.interpolate.interpolate.PPoly object at 0x2ad860930d58>
status: 0
success: True
x: array([ 0. , 0.003367 , 0.00673401, 0.01010101, 0.01346801,
0.01683502, 0.02020202, 0.02356902, 0.02693603, 0.03030303,
0.03367003, 0.03703704, 0.04040404, 0.04377104, 0.04713805,
0.05050505, 0.05387205, 0.05723906, 0.06060606, 0.06397306,
0.06734007, 0.07070707, 0.07407407, 0.07744108, 0.08080808,
0.08417508, 0.08754209, 0.09090909, 0.09427609, 0.0976431 ,
0.1010101 , 0.1043771 , 0.10774411, 0.11111111, 0.11447811,
0.11784512, 0.12121212, 0.12457912, 0.12794613, 0.13131313,
0.13468013, 0.13804714, 0.14141414, 0.14478114, 0.14814815,
0.15151515, 0.15488215, 0.15824916, 0.16161616, 0.16498316,
0.16835017, 0.17171717, 0.17508418, 0.17845118, 0.18181818,
0.18518519, 0.18855219, 0.19191919, 0.1952862 , 0.1986532 ,
0.2020202 , 0.20538721, 0.20875421, 0.21212121, 0.21548822,
0.21885522, 0.22222222, 0.22558923, 0.22895623, 0.23232323,
0.23569024, 0.23905724, 0.24242424, 0.24579125, 0.24915825,
0.25252525, 0.25589226, 0.25925926, 0.26262626, 0.26599327,
0.26936027, 0.27272727, 0.27609428, 0.27946128, 0.28282828,
0.28619529, 0.28956229, 0.29292929, 0.2962963 , 0.2996633 ,
0.3030303 , 0.30639731, 0.30976431, 0.31313131, 0.31649832,
0.31986532, 0.32323232, 0.32659933, 0.32996633, 0.33333333,
0.33670034, 0.34006734, 0.34343434, 0.34680135, 0.35016835,
0.35353535, 0.35690236, 0.36026936, 0.36363636, 0.36700337,
0.37037037, 0.37373737, 0.37710438, 0.38047138, 0.38383838,
0.38720539, 0.39057239, 0.39393939, 0.3973064 , 0.4006734 ,
0.4040404 , 0.40740741, 0.41077441, 0.41414141, 0.41750842,
0.42087542, 0.42424242, 0.42760943, 0.43097643, 0.43434343,
0.43771044, 0.44107744, 0.44444444, 0.44781145, 0.45117845,
0.45454545, 0.45791246, 0.46127946, 0.46464646, 0.46801347,
0.47138047, 0.47474747, 0.47811448, 0.48148148, 0.48484848,
0.48821549, 0.49158249, 0.49494949, 0.4983165 , 0.5016835 ,
0.50505051, 0.50841751, 0.51178451, 0.51515152, 0.51851852,
0.52188552, 0.52525253, 0.52861953, 0.53198653, 0.53535354,
0.53872054, 0.54208754, 0.54545455, 0.54882155, 0.55218855,
0.55555556, 0.55892256, 0.56228956, 0.56565657, 0.56902357,
0.57239057, 0.57575758, 0.57912458, 0.58249158, 0.58585859,
0.58922559, 0.59259259, 0.5959596 , 0.5993266 , 0.6026936 ,
0.60606061, 0.60942761, 0.61279461, 0.61616162, 0.61952862,
0.62289562, 0.62626263, 0.62962963, 0.63299663, 0.63636364,
0.63973064, 0.64309764, 0.64646465, 0.64983165, 0.65319865,
0.65656566, 0.65993266, 0.66329966, 0.66666667, 0.67003367,
0.67340067, 0.67676768, 0.68013468, 0.68350168, 0.68686869,
0.69023569, 0.69360269, 0.6969697 , 0.7003367 , 0.7037037 ,
0.70707071, 0.71043771, 0.71380471, 0.71717172, 0.72053872,
0.72390572, 0.72727273, 0.73063973, 0.73400673, 0.73737374,
0.74074074, 0.74410774, 0.74747475, 0.75084175, 0.75420875,
0.75757576, 0.76094276, 0.76430976, 0.76767677, 0.77104377,
0.77441077, 0.77777778, 0.78114478, 0.78451178, 0.78787879,
0.79124579, 0.79461279, 0.7979798 , 0.8013468 , 0.8047138 ,
0.80808081, 0.81144781, 0.81481481, 0.81818182, 0.82154882,
0.82491582, 0.82828283, 0.83164983, 0.83501684, 0.83838384,
0.84175084, 0.84511785, 0.84848485, 0.85185185, 0.85521886,
0.85858586, 0.86195286, 0.86531987, 0.86868687, 0.87205387,
0.87542088, 0.87878788, 0.88215488, 0.88552189, 0.88888889,
0.89225589, 0.8956229 , 0.8989899 , 0.9023569 , 0.90572391,
0.90909091, 0.91245791, 0.91582492, 0.91919192, 0.92255892,
0.92592593, 0.92929293, 0.93265993, 0.93602694, 0.93939394,
0.94276094, 0.94612795, 0.94949495, 0.95286195, 0.95622896,
0.95959596, 0.96296296, 0.96632997, 0.96969697, 0.97306397,
0.97643098, 0.97979798, 0.98316498, 0.98653199, 0.98989899,
0.99326599, 0.996633 , 1. ])
y: array([[ 0.79388397, 0.79388847, 0.79390197, 0.79392447, 0.79395597,
0.79399647, 0.79404598, 0.79410449, 0.794172 , 0.79424853,
0.79433406, 0.7944286 , 0.79453215, 0.79464471, 0.7947663 ,
0.7948969 , 0.79503653, 0.79518518, 0.79534287, 0.79550958,
0.79568534, 0.79587013, 0.79606397, 0.79626686, 0.7964788 ,
0.7966998 , 0.79692987, 0.797169 , 0.79741721, 0.7976745 ,
0.79794087, 0.79821634, 0.7985009 , 0.79879457, 0.79909735,
0.79940925, 0.79973028, 0.80006043, 0.80039973, 0.80074817,
0.80110577, 0.80147253, 0.80184846, 0.80223358, 0.80262788,
0.80303138, 0.80344409, 0.80386601, 0.80429716, 0.80473755,
0.80518718, 0.80564606, 0.80611421, 0.80659164, 0.80707835,
0.80757437, 0.80807969, 0.80859433, 0.8091183 , 0.80965162,
0.8101943 , 0.81074634, 0.81130776, 0.81187857, 0.81245879,
0.81304843, 0.8136475 , 0.81425602, 0.814874 , 0.81550144,
0.81613838, 0.81678482, 0.81744077, 0.81810625, 0.81878128,
0.81946586, 0.82016002, 0.82086378, 0.82157714, 0.82230012,
0.82303274, 0.82377501, 0.82452696, 0.8252886 , 0.82605994,
0.826841 , 0.8276318 , 0.82843236, 0.82924269, 0.83006282,
0.83089275, 0.83173252, 0.83258213, 0.83344161, 0.83431098,
0.83519025, 0.83607944, 0.83697858, 0.83788768, 0.83880677,
0.83973586, 0.84067497, 0.84162413, 0.84258336, 0.84355267,
0.84453209, 0.84552164, 0.84652134, 0.84753122, 0.84855129,
0.84958158, 0.85062211, 0.8516729 , 0.85273397, 0.85380536,
0.85488708, 0.85597915, 0.85708161, 0.85819447, 0.85931775,
0.86045149, 0.86159571, 0.86275043, 0.86391567, 0.86509147,
0.86627784, 0.86747482, 0.86868242, 0.86990068, 0.87112962,
0.87236927, 0.87361965, 0.8748808 , 0.87615273, 0.87743548,
0.87872907, 0.88003353, 0.88134889, 0.88267518, 0.88401242,
0.88536065, 0.88671989, 0.88809017, 0.88947152, 0.89086397,
0.89226754, 0.89368228, 0.89510821, 0.89654536, 0.89799375,
0.89945343, 0.90092442, 0.90240675, 0.90390045, 0.90540555,
0.9069221 , 0.90845011, 0.90998962, 0.91154066, 0.91310327,
0.91467748, 0.91626331, 0.91786081, 0.91947001, 0.92109093,
0.92272362, 0.92436811, 0.92602442, 0.9276926 , 0.92937269,
0.9310647 , 0.93276869, 0.93448468, 0.93621271, 0.93795282,
0.93970504, 0.9414694 , 0.94324595, 0.94503471, 0.94683573,
0.94864904, 0.95047469, 0.95231269, 0.9541631 , 0.95602595,
0.95790128, 0.95978913, 0.96168953, 0.96360252, 0.96552814,
0.96746643, 0.96941743, 0.97138117, 0.9733577 , 0.97534706,
0.97734928, 0.97936441, 0.98139248, 0.98343353, 0.98548761,
0.98755476, 0.98963501, 0.99172841, 0.993835 , 0.99595481,
0.9980879 , 1.0002343 , 1.00239405, 1.0045672 , 1.00675379,
1.00895385, 1.01116744, 1.0133946 , 1.01563536, 1.01788978,
1.02015789, 1.02243974, 1.02473537, 1.02704483, 1.02936815,
1.03170539, 1.03405659, 1.03642179, 1.03880103, 1.04119437,
1.04360185, 1.0460235 , 1.04845939, 1.05090954, 1.05337402,
1.05585286, 1.05834611, 1.06085381, 1.06337602, 1.06591277,
1.06846412, 1.07103012, 1.0736108 , 1.07620622, 1.07881642,
1.08144145, 1.08408136, 1.0867362 , 1.08940601, 1.09209084,
1.09479074, 1.09750576, 1.10023595, 1.10298135, 1.10574202,
1.108518 , 1.11130934, 1.11411609, 1.1169383 , 1.11977602,
1.1226293 , 1.12549819, 1.12838274, 1.13128299, 1.13419901,
1.13713083, 1.14007851, 1.14304211, 1.14602166, 1.14901722,
1.15202884, 1.15505658, 1.15810048, 1.1611606 , 1.16423698,
1.16732967, 1.17043874, 1.17356423, 1.17670619, 1.17986467,
1.18303973, 1.18623141, 1.18943978, 1.19266488, 1.19590676,
1.19916548, 1.20244108, 1.20573363, 1.20904318, 1.21236977,
1.21571346, 1.2190743 , 1.22245235, 1.22584765, 1.22926027,
1.23269025, 1.23613766, 1.23960253, 1.24308492, 1.2465849 ,
1.25010251, 1.2536378 , 1.25719083]])
yp: array([[ 0. , 0.00267302, 0.0053461 , 0.0080193 , 0.01069269,
0.01336631, 0.01604024, 0.01871453, 0.02138925, 0.02406445,
0.0267402 , 0.02941655, 0.03209358, 0.03477132, 0.03744986,
0.04012924, 0.04280954, 0.0454908 , 0.04817309, 0.05085648,
0.05354102, 0.05622677, 0.05891379, 0.06160215, 0.0642919 ,
0.06698311, 0.06967583, 0.07237013, 0.07506607, 0.0777637 ,
0.0804631 , 0.08316431, 0.08586741, 0.08857244, 0.09127948,
0.09398858, 0.0966998 , 0.09941321, 0.10212887, 0.10484683,
0.10756715, 0.11028991, 0.11301515, 0.11574295, 0.11847335,
0.12120642, 0.12394223, 0.12668083, 0.12942228, 0.13216665,
0.134914 , 0.13766438, 0.14041786, 0.1431745 , 0.14593436,
0.1486975 , 0.15146398, 0.15423387, 0.15700722, 0.1597841 ,
0.16256456, 0.16534867, 0.16813649, 0.17092808, 0.1737235 ,
0.17652281, 0.17932607, 0.18213335, 0.18494471, 0.1877602 ,
0.1905799 , 0.19340385, 0.19623212, 0.19906478, 0.20190187,
0.20474348, 0.20758965, 0.21044044, 0.21329593, 0.21615617,
0.21902122, 0.22189114, 0.22476599, 0.22764585, 0.23053076,
0.23342079, 0.236316 , 0.23921645, 0.2421222 , 0.24503332,
0.24794987, 0.2508719 , 0.25379948, 0.25673268, 0.25967155,
0.26261615, 0.26556655, 0.2685228 , 0.27148497, 0.27445313,
0.27742732, 0.28040762, 0.28339409, 0.28638678, 0.28938576,
0.29239109, 0.29540283, 0.29842105, 0.3014458 , 0.30447715,
0.30751515, 0.31055988, 0.31361139, 0.31666974, 0.31973499,
0.32280722, 0.32588647, 0.32897281, 0.3320663 , 0.33516701,
0.33827498, 0.3413903 , 0.34451301, 0.34764319, 0.35078088,
0.35392616, 0.35707908, 0.3602397 , 0.3634081 , 0.36658432,
0.36976843, 0.37296049, 0.37616057, 0.37936872, 0.382585 ,
0.38580948, 0.38904223, 0.39228329, 0.39553273, 0.39879061,
0.402057 , 0.40533195, 0.40861553, 0.4119078 , 0.41520881,
0.41851863, 0.42183733, 0.42516495, 0.42850157, 0.43184723,
0.43520202, 0.43856597, 0.44193917, 0.44532166, 0.4487135 ,
0.45211476, 0.45552551, 0.45894578, 0.46237566, 0.4658152 ,
0.46926446, 0.47272349, 0.47619237, 0.47967114, 0.48315988,
0.48665863, 0.49016747, 0.49368644, 0.49721562, 0.50075505,
0.5043048 , 0.50786493, 0.5114355 , 0.51501656, 0.51860818,
0.52221041, 0.52582331, 0.52944695, 0.53308138, 0.53672666,
0.54038285, 0.54405001, 0.54772819, 0.55141745, 0.55511786,
0.55882946, 0.56255232, 0.5662865 , 0.57003205, 0.57378903,
0.5775575 , 0.58133751, 0.58512912, 0.58893239, 0.59274738,
0.59657414, 0.60041272, 0.60426319, 0.6081256 , 0.61200001,
0.61588646, 0.61978503, 0.62369576, 0.6276187 , 0.63155392,
0.63550147, 0.6394614 , 0.64343376, 0.64741862, 0.65141602,
0.65542602, 0.65944867, 0.66348403, 0.66753215, 0.67159308,
0.67566687, 0.67975358, 0.68385327, 0.68796597, 0.69209174,
0.69623064, 0.70038272, 0.70454802, 0.7087266 , 0.7129185 ,
0.71712379, 0.7213425 , 0.72557469, 0.72982041, 0.7340797 ,
0.73835262, 0.74263921, 0.74693953, 0.75125361, 0.75558151,
0.75992327, 0.76427895, 0.76864858, 0.77303222, 0.7774299 ,
0.78184168, 0.7862676 , 0.79070771, 0.79516204, 0.79963065,
0.80411358, 0.80861086, 0.81312256, 0.81764869, 0.82218932,
0.82674447, 0.8313142 , 0.83589854, 0.84049753, 0.84511122,
0.84973964, 0.85438283, 0.85904083, 0.86371368, 0.86840142,
0.87310408, 0.8778217 , 0.88255432, 0.88730198, 0.89206471,
0.89684254, 0.90163551, 0.90644365, 0.911267 , 0.91610559,
0.92095945, 0.92582862, 0.93071312, 0.93561298, 0.94052825,
0.94545894, 0.95040508, 0.95536671, 0.96034386, 0.96533654,
0.97034479, 0.97536863, 0.98040809, 0.98546319, 0.99053396,
0.99562042, 1.0007226 , 1.00584051, 1.01097418, 1.01612363,
1.02128888, 1.02646995, 1.03166686, 1.03687962, 1.04210827,
1.0473528 , 1.05261324, 1.0578896 ]])
</code></pre>
<p>As you can see, y is very far from the boundary conditions of <code>y(x=0) = y(x=1) = 1</code>.</p>
| 2 |
2016-09-21T21:12:47Z
| 39,643,988 |
<p>If you specify two boundary conditions y(0)=1 and y(1)=1 for a first order ODE, then in general the problem is overdetermined and <em>there is no solution</em>. If you specify just the initial condition y(0)=y0, you have a first order initial value problem. In fact, in this case, you can derive the exact solution by separating variables: y(x) = y0*exp(1 - cos(x)).</p>
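<p>Integrating the separable equation dy/y = sin(x) dx with y(0) = y0 gives y(x) = y0*exp(1 - cos(x)). A quick stdlib-only sanity check of that closed form with forward Euler (a sketch, using y0 = 1):</p>

```python
import math

def euler(f, y0, x_end, n=100000):
    """Forward-Euler integration of y' = f(x, y) from x = 0 to x_end."""
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

y_num = euler(lambda x, y: y * math.sin(x), y0=1.0, x_end=1.0)
y_exact = math.exp(1.0 - math.cos(1.0))
print(abs(y_num - y_exact))  # shrinks toward 0 as n grows
```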
| 1 |
2016-09-22T16:11:07Z
|
[
"python",
"numpy",
"scipy",
"numerical-methods",
"differential-equations"
] |
Python Jinja2 call to macro results in (undesirable) newline
| 39,626,767 |
<p>My Jinja2 template is below.</p>
<pre><code>{% macro print_if_john(name) -%}
{% if name == 'John' -%}
Hi John
{%- endif %}
{%- endmacro %}
Hello World!
{{print_if_john('Foo')}}
{{print_if_john('Foo2')}}
{{print_if_john('John')}}
</code></pre>
<p>The resulting output is</p>
<pre><code>Hello World!


Hi John
</code></pre>
<p>I don't want the 2 newlines between 'Hello World!' and 'Hi John'. It looks like when a call to a macro results in no output, Jinja inserts a newline anyway. Is there any way to avoid this? I've put a minus in the macro call itself, but that didn't help.</p>
<p>Note that I tested this template and resulting code at <a href="http://jinja2test.tk/" rel="nofollow">http://jinja2test.tk/</a></p>
| 1 |
2016-09-21T21:20:27Z
| 39,626,907 |
<p>The newlines come from the <code>{{print_if_john(...)}}</code> lines themselves, <em>not</em> the macro. </p>
<p>If you were to join those up or use <code>-</code> within those blocks, the newlines disappear:</p>
<pre><code>>>> from jinja2 import Template
>>> t = Template('''\
... {% macro print_if_john(name) -%}
... {% if name == 'John' -%}
... Hi John
... {% endif %}
... {%- endmacro -%}
... Hello World!
... {{print_if_john('Foo')-}}
... {{print_if_john('Foo2')-}}
... {{print_if_john('John')-}}
... ''')
>>> t.render()
u'Hello World!\nHi John\n'
>>> print t.render()
Hello World!
Hi John
</code></pre>
<p>Note that you can still use newlines and other whitespace within the <code>{{...}}</code> blocks.</p>
<p>I removed the initial <code>-</code> from the <code>{% endif %}</code> block because when you remove the newlines from the <code>{{..}}</code> blocks, you want to <em>add one back in</em> when you actually do print the <code>Hi John</code> line. That way multiple <code>print_if_john('John')</code> calls will still get their line separators.</p>
<p>The full template, freed from the demo session:</p>
<pre class="lang-none prettyprint-override"><code>{% macro print_if_john(name) -%}
{% if name == 'John' -%}
Hi John
{% endif %}
{%- endmacro -%}
Hello World!
{{print_if_john('Foo')-}}
{{print_if_john('Foo2')-}}
{{print_if_john('John')-}}
</code></pre>
| 1 |
2016-09-21T21:32:24Z
|
[
"python",
"macros",
"jinja2"
] |
Getting new objects in the image
| 39,626,806 |
<p>I am trying to build a card recognition machine. The thing is, the cards will be put on top of a background scene.</p>
<p>That being said, I'm asking for help on how to compare two images: one is the empty background scene, and the other is the same scene with a card on top of it.</p>
<pre><code>import numpy as np
from PIL import Image
import cv2
image1 = cv2.imread("gray_bk.png")
image2 = cv2.imread("gray_novo.png")
cv2.imwrite('LutGrey.png',gray_image)
novo = cv2.subtract(image1,image2)
cv2.imwrite(file, novo)
</code></pre>
<p>This is my code so far. The problem with it is that the result is a black image with the card in it (OK), but the colours in the card are all messed up. How do I execute the same operation without messing up the colours? And what would be the best way to "cut" the card out into a new (smaller) image?</p>
| 1 |
2016-09-21T21:23:52Z
| 39,627,314 |
<p>If you are working with real images, I suggest the following:</p>
<ul>
<li>read colour images</li>
<li>do some initial presmoothing</li>
<li>subtract grayscale images and threshold the result</li>
<li>optionally do some morphological closing</li>
<li>find the biggest region on the thresholded image with labelling</li>
<li>get the bounding rectangle of the card area (for example cv2's boundingRect) and extract this area from the original colour image</li>
</ul>
<p>An alternative, and in general more robust, approach is based on feature detection. Check this out as a starting point:
<a href="http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html" rel="nofollow">http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html</a>. In this case, all possible card candidates must be known in advance.</p>
<p>If the card was just superimposed on the background image then this might work:</p>
<pre><code># read directly to float
image1 = cv2.imread("gray_bk.png").astype(np.float) / 255
image2 = cv2.imread("gray_novo.png").astype(np.float) / 255
# substract and threshold
threshold = 0.1
binary_image = np.abs(np.mean(image2,2) - np.mean(image1,2)) > threshold
# get card area
y,x = np.where(binary_image)
# extract card from the image
card = image2[ y.min():y.max(), x.min():x.max(),:]
</code></pre>
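<p>For the last step, the bounding-box extraction can also be written without NumPy; a pure-Python sketch on a toy mask (the nested lists stand in for the thresholded image):</p>

```python
def bounding_box(mask):
    """Return (y_min, y_max, x_min, x_max) of the truthy cells in a 2D mask."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not ys:
        return None  # nothing detected
    return min(ys), max(ys), min(xs), max(xs)

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(bounding_box(mask))  # (1, 2, 1, 3)
```

<p>With the box in hand, <code>card = image2[y_min:y_max + 1, x_min:x_max + 1]</code> cuts the card out of the colour image.</p>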
| 0 |
2016-09-21T22:08:32Z
|
[
"python",
"opencv"
] |
Python Pandas: Write certain rows in file
| 39,626,818 |
<p>The csv file is way too big, so I am reading it chunk by chunk.
Therefore, I use read_csv with chunksize.</p>
<p>I want to store all rows, where the last entry has the value 1 in one file and all the other rows where the last entry is 0 in another file.</p>
<p>Suppose it looks like this:</p>
<pre><code>ID A B C
0 0.0 0.1 1
1 0.1 0.2 0
2 0.1 0.0 1
</code></pre>
<p>So, I want to store row with ID 0 and 2 in one file and the row with ID 1 in another file.</p>
<p>How do I do that with pandas?</p>
| 0 |
2016-09-21T21:25:42Z
| 39,626,851 |
<p>From <a href="http://stackoverflow.com/questions/19674212/pandas-data-frame-select-rows-and-clear-memory">this</a> post:</p>
<blockquote>
<pre><code>reader = pd.read_csv('big_table.txt', sep='\t', header=0,
index_col=0, usecols=the_columns_i_want_to_use,
chunksize=10000)
df = pd.concat([ chunk.ix[rows_that_I_want_] for chunk in reader ])
</code></pre>
</blockquote>
<p>But instead make 2 data frames:</p>
<pre><code>chunks0, chunks1 = [], []
for chunk in reader:                      # a chunked reader can only be iterated once
    chunks0.append(chunk[chunk["C"] == 0])
    chunks1.append(chunk[chunk["C"] == 1])
df0 = pd.concat(chunks0)
df1 = pd.concat(chunks1)
</code></pre>
<p>Then save each data frame independently</p>
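<p>If pandas is not strictly required, the same one-pass split can be sketched with the stdlib <code>csv</code> module (here <code>io.StringIO</code> stands in for the real files on disk):</p>

```python
import csv
import io

# Toy input standing in for the big file.
src = io.StringIO("ID,A,B,C\n0,0.0,0.1,1\n1,0.1,0.2,0\n2,0.1,0.0,1\n")

ones, zeros = io.StringIO(), io.StringIO()
reader = csv.reader(src)
header = next(reader)
w1, w0 = csv.writer(ones), csv.writer(zeros)
w1.writerow(header)
w0.writerow(header)
for row in reader:
    # Route each row by the value in its last column.
    (w1 if row[-1] == "1" else w0).writerow(row)

print(ones.getvalue())
print(zeros.getvalue())
```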
| 1 |
2016-09-21T21:28:24Z
|
[
"python",
"pandas"
] |
Python Pandas: Write certain rows in file
| 39,626,818 |
<p>The csv file is way too big, so I am reading it chunk by chunk.
Therefore, I use read_csv with chunksize.</p>
<p>I want to store all rows, where the last entry has the value 1 in one file and all the other rows where the last entry is 0 in another file.</p>
<p>Suppose it looks like this:</p>
<pre><code>ID A B C
0 0.0 0.1 1
1 0.1 0.2 0
2 0.1 0.0 1
</code></pre>
<p>So, I want to store row with ID 0 and 2 in one file and the row with ID 1 in another file.</p>
<p>How do I do that with pandas?</p>
| 0 |
2016-09-21T21:25:42Z
| 39,630,613 |
<p>I would simply do it like this:</p>
<pre><code>first = True
for chunk in pd.read_csv('file.csv', chunksize=100000):
    mode = 'w' if first else 'a'          # overwrite on the first chunk, append after
    chunk[chunk['C'] == 1].to_csv('ones.csv', mode=mode, header=first)
    chunk[chunk['C'] == 0].to_csv('zero.csv', mode=mode, header=first)
    first = False
</code></pre>
| 0 |
2016-09-22T04:59:59Z
|
[
"python",
"pandas"
] |
Python SSH and comparing output to an imported list
| 39,626,819 |
<p>I am looking to SSH to multiple servers one at a time, compare the output of an SSH command to a list, and then run a command on the items that appear in both the output and a separate list.</p>
<p>I'd like to ssh to each server in a loop "ssh " but I'm unsure how to insert the next server from the list into the middle of the ssh command.</p>
<p>The other issue I'm having is obtaining the results of a command run which I could then compare.</p>
<p>Any guidance or direction to a helpful post would be fantastic.<br>
Thank you!</p>
| -3 |
2016-09-21T21:25:45Z
| 39,627,891 |
<p>The answer to your question, if I understood it correctly, lies in operating-system principles.</p>
<p>You iterate over the SSH connection list, and with <code>Popen</code> you can run <code>ssh</code> via the shell.</p>
<p>After that, keep a pipe open to the <code>ssh</code> process (which blocks on I/O), read and verify its output, and send new commands back to it through its stdin pipe.</p>
<p>In short, check <code>Popen</code> in Python's <code>subprocess</code> module.</p>
<p><a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">https://docs.python.org/2/library/subprocess.html</a></p>
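<p>A minimal sketch of such a loop (the host names and service list are hypothetical, and <code>echo</code> stands in for the real <code>ssh host command</code> call so the snippet runs anywhere):</p>

```python
import subprocess

servers = ["web1", "web2"]          # hypothetical host list
wanted = {"nginx", "postgres"}      # hypothetical list to intersect with

for host in servers:
    # Real use: ["ssh", host, "your-remote-command"]
    out = subprocess.run(["echo", "nginx cron postgres"],
                         capture_output=True, text=True).stdout
    found = set(out.split()) & wanted
    for service in sorted(found):
        # Real use: run the follow-up command for this service here.
        print(host, service)
```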
| 0 |
2016-09-21T23:14:47Z
|
[
"python",
"list",
"loops",
"ssh"
] |
else essentially superfluous
| 39,626,857 |
<p>I don't have a background in programming, so this is probably really dumb, but I've never considered this before: it seems that the <code>else</code> statement is essentially superfluous because when the condition is False, Python just moves to the next unindented line.</p>
<p>For example, normally you would write: (if not using <code>elif</code>)</p>
<pre><code>x=2
if x == 1:
value = "one"
else:
if x == 2:
value = "two"
print value
</code></pre>
<p>But this works too:</p>
<pre><code>x=2
if x == 1:
value = "one"
if x == 2:
value = "two"
print value
</code></pre>
<p>Could someone give an example that shows how and when the <code>else:</code> statement is essential?</p>
| 0 |
2016-09-21T21:28:40Z
| 39,626,893 |
<p>If the conditions are mutually exclusive then <code>else</code> is superfluous. It's not superfluous if the conditions overlap, though.</p>
<pre><code>x = 2
if x > 0:
print 'foo'
else: # better -- elif x > 1:
if x > 1:
print 'bar'
</code></pre>
<p>This program prints <code>foo</code>.</p>
<pre><code>x = 2
if x > 0:
print 'foo'
if x > 1:
print 'bar'
</code></pre>
<p>This program prints <code>foo</code> <code>bar</code>.</p>
| 2 |
2016-09-21T21:31:13Z
|
[
"python",
"if-statement",
"syntax"
] |
else essentially superfluous
| 39,626,857 |
<p>I don't have a background in programming, so this is probably really dumb, but I've never considered this before: it seems that the <code>else</code> statement is essentially superfluous because when the condition is False, Python just moves to the next unindented line.</p>
<p>For example, normally you would write: (if not using <code>elif</code>)</p>
<pre><code>x=2
if x == 1:
value = "one"
else:
if x == 2:
value = "two"
print value
</code></pre>
<p>But this works too:</p>
<pre><code>x=2
if x == 1:
value = "one"
if x == 2:
value = "two"
print value
</code></pre>
<p>Could someone give an example that shows how and when the <code>else:</code> statement is essential?</p>
| 0 |
2016-09-21T21:28:40Z
| 39,626,896 |
<p>Else is never a must but is rather a convenience. Here are some:</p>
<pre><code>if x == 1:
value = "one"
else:
value = "not_one"
</code></pre>
<p>or</p>
<pre><code>if x < 1:
value = "less_than_one"
elif x < 2:
value = "between_one_and_two"
else:
value = "more_than_two"
</code></pre>
<hr>
<p>They can both be rewritten as:</p>
<pre><code>if x == 1:
value = "one"
if x != 1:
value = "not_one"
</code></pre>
<p>or</p>
<pre><code>if x < 1:
value = "less_than_one"
if 1 <= x < 2:
value = "between_one_and_two"
if 2 <= x:
value = "more_than_two"
</code></pre>
| 0 |
2016-09-21T21:31:26Z
|
[
"python",
"if-statement",
"syntax"
] |
else essentially superfluous
| 39,626,857 |
<p>I don't have a background in programming, so this is probably really dumb, but I've never considered this before: it seems that the <code>else</code> statement is essentially superfluous because when the condition is False, Python just moves to the next unindented line.</p>
<p>For example, normally you would write: (if not using <code>elif</code>)</p>
<pre><code>x=2
if x == 1:
value = "one"
else:
if x == 2:
value = "two"
print value
</code></pre>
<p>But this works too:</p>
<pre><code>x=2
if x == 1:
value = "one"
if x == 2:
value = "two"
print value
</code></pre>
<p>Could someone give an example that shows how and when the <code>else:</code> statement is essential?</p>
| 0 |
2016-09-21T21:28:40Z
| 39,627,132 |
<p>The <code>else</code> clause is useful if you are looking for a specific case (using <code>if</code>) and then everything else that is not covered by the <code>if</code> (using <code>else</code>). </p>
<p>Often a <a href="https://docs.python.org/2.5/whatsnew/pep-308.html" rel="nofollow">conditional expression</a> is used:</p>
<pre><code>>>> x=3
>>> value='one' if x==1 else 'not one'
>>> value
'not one'
</code></pre>
<p>If you did not use an <code>else</code> here (either as part of the conditional or as part of an <code>if/else</code>) then <code>value</code> would not be defined:</p>
<pre><code>>>> del value # remove the name `value`
>>> x
3
>>> if x==1: value='one'
...
>>> value
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'value' is not defined
</code></pre>
<p>So you do need the <code>else</code> clause to make sure that <code>value</code> receives an assignment if <code>x</code> is not 1.</p>
<hr>
<p>BTW, in Python, rather than a long cascade of <code>if/elif/elif/else</code>, you can use a dictionary with a default:</p>
<pre><code>>>> conditions={1:'one', 2:'two', 3:'three'}
>>> conditions.get(2,'not one, two, or three...')
'two'
>>> conditions.get(19,'not one, two, or three...')
'not one, two, or three...'
</code></pre>
| 0 |
2016-09-21T21:51:11Z
|
[
"python",
"if-statement",
"syntax"
] |
else essentially superfluous
| 39,626,857 |
<p>I don't have a background in programming, so this is probably really dumb, but I've never considered this before: it seems that the <code>else</code> statement is essentially superfluous because when the condition is False, Python just moves to the next unindented line.</p>
<p>For example, normally you would write: (if not using <code>elif</code>)</p>
<pre><code>x=2
if x == 1:
value = "one"
else:
if x == 2:
value = "two"
print value
</code></pre>
<p>But this works too:</p>
<pre><code>x=2
if x == 1:
value = "one"
if x == 2:
value = "two"
print value
</code></pre>
<p>Could someone give an example that shows how and when the <code>else:</code> statement is essential?</p>
| 0 |
2016-09-21T21:28:40Z
| 39,627,407 |
<p><code>else</code> <em>remembers</em> that the condition was <code>False</code>. For example, consider this code:</p>
<pre><code>if x == 1:
x = 2
print("bar")
else:
if x == 2:
print("foo")
</code></pre>
<p><em>With</em> the <code>else</code>, only one of "bar" or "foo" will be printed.</p>
<p><em>Without</em> the <code>else</code>, both "bar" and "foo" can be printed since the value of <code>x</code> can change.</p>
<p>This also can help with 'overlapping' conditions:</p>
<pre><code>if x > 10:
print("big")
elif x > 5:
print("medium")
elif x > 1:
print("sizable")
else:
print("small")
</code></pre>
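<p>To make the difference concrete, here is a minimal, self-contained sketch of the two versions. Without the <code>else</code>, the second test is re-evaluated after the first branch has changed <code>x</code>, so both branches can run:</p>

```python
def without_else(x):
    out = []
    if x == 1:
        x = 2
        out.append("bar")
    if x == 2:          # re-tested: x may have just been changed above
        out.append("foo")
    return out

def with_else(x):
    out = []
    if x == 1:
        x = 2
        out.append("bar")
    else:
        if x == 2:
            out.append("foo")
    return out
```

<p>Calling <code>without_else(1)</code> appends both <code>"bar"</code> and <code>"foo"</code>, while <code>with_else(1)</code> appends only <code>"bar"</code>.</p>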
| 1 |
2016-09-21T22:16:17Z
|
[
"python",
"if-statement",
"syntax"
] |
Scrapy Pipeline not starting
| 39,626,870 |
<p>I'm having problems with Scrapy pipelines.
EnricherPipeline is never starting. I put a debugger in the first line of process_item and it never gets control.
JsonPipeline does start, but the first argument it receives is of type <code>generator object process_item</code> and not the MatchItem instance it should receive (when I disable the EnricherPipeline, JsonPipeline works as expected).</p>
<pre><code>class MatchSpider(CrawlSpider):
def parse(self, response):
browser = Browser(browser='Chrome')
browser.get(response.url)
browser.find_element_by_xpath('//a[contains(text(), "{l}") and @title="{c}"]'.format(l=self.league, c=self.country)).click()
browser.find_element_by_xpath('//select[@id="seasons"]/option[text()="{s}"]'.format(s=self.season.replace('-', '/'))).click()
browser.find_element_by_xpath('//a[contains(text(), "Fixture")]').click()
page_matches = browser.find_elements_by_xpath('//*[contains(@class, "result-1 rc")]')
        matches.extend([m.get_attribute('href') for m in page_matches])
for m in matches[:1]:
yield Request(m, callback=self.process_match, dont_filter=True)
def process_match(self, response):
match_item = MatchItem()
match_item['url'] = response.url
match_item['project'] = self.settings.get('BOT_NAME')
match_item['spider'] = self.name
match_item['server'] = socket.gethostname()
match_item['date'] = datetime.datetime.now()
return match_item
class EnricherPipeline:
def process_item(self, item, spider):
self.match = defaultdict(dict)
self.match['date'] = item['match']['startTime']
self.match['referee'] = item['match']['refereeName']
self.match['stadium'] = item['match']['venueName']
self.match['exp_mins'] = item['match']['expandedMinutes']
yield self.match
class JsonPipeline:
def process_item(self, item, scraper):
output_dir = 'data/matches/{league}/{season}'.format(league=scraper.league, season=scraper.season)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
file_name = "-".join([str(datetime.strptime(item['date'], '%Y-%m-%dT%H:%M:%S').date()),
item['home']['name'], item['away']['name']]) + '.json'
item_path = os.sep.join((output_dir, file_name))
with open(item_path, 'w') as f:
f.write(json.dumps(item))
ITEM_PIPELINES = {
'scrapers.whoscored.whoscored.pipelines.EnricherPipeline': 300,
'scrapers.whoscored.whoscored.pipelines.JsonPipeline': 800,
}
</code></pre>
| 0 |
2016-09-21T21:29:47Z
| 39,640,297 |
<p>Ok, so the problem was that EnricherPipeline was yielding and not returning a result. After that it worked as expected, although I still don't understand why a debugger is not working in that first pipeline.</p>
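<p>For reference, a minimal sketch of the corrected pipeline (plain classes here, no Scrapy dependency, and the item fields are illustrative). A <code>yield</code> inside <code>process_item</code> turns it into a generator function, so the next pipeline receives the generator object instead of the item; returning fixes it:</p>

```python
class EnricherPipeline:
    def process_item(self, item, spider):
        match = {}
        match['date'] = item['match']['startTime']
        match['referee'] = item['match']['refereeName']
        # return the enriched item; a `yield` here would make this whole
        # function a generator, and the generator object itself would be
        # handed to the next pipeline stage
        return match
```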
| 0 |
2016-09-22T13:23:03Z
|
[
"python",
"scrapy"
] |
boto3 query using KeyConditionExpression
| 39,626,894 |
<p>I'm having trouble understanding why below query on a DynamoDB table doesn't work:</p>
<pre><code>dict_table.query(KeyConditionExpression='norm = :cihan', ExpressionAttributeValues={':cihan': {'S': 'cihan'}})
</code></pre>
<p>and throws this error: </p>
<p><code>ClientError: An error occurred (ValidationException) when calling the Query operation: One or more parameter values were invalid: Condition parameter type does not match schema type</code></p>
<p>while the following works:</p>
<pre><code>dict_table.query(KeyConditionExpression=Key('norm').eq('cihan'))
</code></pre>
<p><code>norm</code> is a field with type string. I'm using boto3 v 1.4.0 and <a href="http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html#DynamoDB.Client.query" rel="nofollow">following the docs</a>:</p>
<pre><code>In [43]: boto3.__version__
Out[43]: '1.4.0'
</code></pre>
<p>Can anyone see what's the error in the first query?</p>
<p>Bonus question: What's with all the tokens and the need to replace them all the time? Why can't I just say <code>dict_table.query(KeyConditionExpression='norm = cihan')</code></p>
| 1 |
2016-09-21T21:31:18Z
| 39,633,493 |
<p>Please change the ExpressionAttributeValues as mentioned below. </p>
<pre><code>ExpressionAttributeValues={':cihan': 'cihan'}
</code></pre>
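<p>The reason for the mismatch: <code>dict_table</code> is a <em>Table resource</em> (from <code>boto3.resource('dynamodb')</code>), which serializes plain Python values for you. The typed <code>{'S': ...}</code> format belongs to the low-level client (<code>boto3.client('dynamodb')</code>). A sketch of the two argument styles (the table name is illustrative; neither call is executed here):</p>

```python
# resource style: Table.query serializes plain Python values itself
resource_kwargs = {
    'KeyConditionExpression': 'norm = :cihan',
    'ExpressionAttributeValues': {':cihan': 'cihan'},
}

# low-level client style: values must carry explicit DynamoDB types
client_kwargs = {
    'TableName': 'dict_table',           # illustrative name
    'KeyConditionExpression': 'norm = :cihan',
    'ExpressionAttributeValues': {':cihan': {'S': 'cihan'}},
}
```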
| 0 |
2016-09-22T08:02:41Z
|
[
"python",
"amazon-web-services",
"amazon-dynamodb",
"boto3"
] |
Python iterating over matrix class
| 39,626,898 |
<pre><code>from collections.abc import Sequence
class Map(Sequence):
""" Represents a map for a floor as a matrix """
def __init__(self, matrix):
""" Takes a map as a matrix """
self.matrix = matrix
self.height = len(matrix)
self.width = len(matrix[0])
super().__init__()
def __getitem__(self, item):
""" Needed by Sequence """
return self.matrix[item]
def __len__(self):
""" Needed by Sequence """
return len(self.matrix)
def search(self, entity):
""" Returns a generator of tuples that contain the x and y for every element in the map that matches 'entity' """
for row in range(self.height):
for column in range(self.width):
            if self.matrix[row][column] == entity:
yield (row, column)
# Examples
gmap = Map([[0, 0, 0],
[0, 1, 0],
[0, 0, 0]])
for entity in gmap:
print(entity)
</code></pre>
<p>How can I implement<code>__iter__</code> so that </p>
<pre><code>for entity in gmap:
print(entity)
</code></pre>
<p>yields <code>0 0 0 0 1 0 0 0 0</code> and not </p>
<pre><code>[0, 0, 0]
[0, 1, 0]
[0, 0, 0]
</code></pre>
<p>This would save me from needing to subclass <code>Sequence</code> and would make the code for <code>search()</code> neater</p>
<p>Additionally, are their any other magic methods I should be using? (aside from <code>__str__</code>, im doing that after I get iterating working)</p>
| 0 |
2016-09-21T21:31:33Z
| 39,626,948 |
<p>You may implement <code>__iter__()</code> like so:</p>
<pre><code>from itertools import chain
def __iter__(self):
return chain.from_iterable(self.matrix)
</code></pre>
<p><code>itertools.chain.from_iterable()</code> takes an iterable of iterables and combines them all together. It creates a generator thus does not use extra memory.</p>
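<p>For example, with a stripped-down version of your class (just enough of <code>Map</code> to show the iteration):</p>

```python
from itertools import chain

class Map:
    def __init__(self, matrix):
        self.matrix = matrix

    def __iter__(self):
        # walk each row in turn, yielding individual cells
        return chain.from_iterable(self.matrix)

gmap = Map([[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]])
flat = list(gmap)   # the cells, one at a time, row by row
```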
| 0 |
2016-09-21T21:35:38Z
|
[
"python",
"class",
"iterable"
] |
Assigning a class variable and then reassigning in a definition without using global in Python
| 39,626,913 |
<p>So my question is in the title and the following two snippets of code are my attempts around this. I am trying to assign a variable as soon as the script is started and then just run the loop definition at certain time intervals and update that same variable. I do not want to use a global.</p>
<pre><code>from twisted.internet import task, reactor
class DudeWheresMyCar(object):
counter = 20
stringInit = 'initialized string'
def loop():
stringNew = 'this will be updated with a changing string'
if (stringInit == stringNew): #Error line
print(stringNew)
elif (stringInit != stringNew ):
stringInit = stringNew
pass
task.LoopingCall(loop).start(counter)
reactor.run()
</code></pre>
<p>This leads to an error undefined stringInit. I know why I am getting this error so I made an attempt to fix this using the .self variable and the code is below.</p>
<pre><code>from twisted.internet import task, reactor
class DudeWheresMyCar(object):
counter = 20
def __init__(self):
self.stringInit = 'Initialized string'
def loop(self):
stringNew = 'this will be updated with a changing string'
if (self.stringInit == stringNew):
print(stringNew)
elif (self.stringInit != stringNew ):
self.stringInit = stringNew
pass
task.LoopingCall(self.loop).start(counter) #Error line
reactor.run()
</code></pre>
<p>I get an error that says that self is undefined. I understand what is causing both scenario's to throw errors but I am not sure how to change my approach to accomplish my goal. I also ran into using a singleton but that still does not fix the problem in scenario 2.</p>
| 0 |
2016-09-21T21:33:04Z
| 39,628,234 |
<p>I think you want a <code>classmethod</code>, and you need to start the task outside of the class definition. I would expect something like the following code to work</p>
<pre><code>from twisted.internet import task, reactor
class DudeWheresMyCar(object):
counter = 20
stringInit = 'Initialized string'
@classmethod
def loop(cls):
stringNew = 'this will be updated with a changing string'
if (cls.stringInit == stringNew):
print(stringNew)
elif (cls.stringInit != stringNew ):
cls.stringInit = stringNew
task.LoopingCall(DudeWheresMyCar.loop).start(DudeWheresMyCar.counter)
reactor.run()
</code></pre>
| 1 |
2016-09-22T00:01:34Z
|
[
"python",
"python-3.x"
] |
find the CSS path (ancestor tags) in HTML using python
| 39,626,940 |
<p>I want to get all the ancestor div tags where I match a text. So for example if the html looks like <a href="http://i.stack.imgur.com/aePNZ.png" rel="nofollow">HTML snippet</a></p>
<p>And i'm searching for "Earl E. Byrd". I wanna get a list which contains {"buyer-info","buyer-name"}</p>
<p>This is what i did </p>
<pre><code>r=requests.get(self.url,verify='/path/to/certfile')
soup = BeautifulSoup(r.text,"lxml")
divTags = soup.find_all('div')
</code></pre>
<p>How should I proceed ?</p>
| 0 |
2016-09-21T21:35:13Z
| 39,627,053 |
<p>A solution using an <a href="/questions/tagged/xpath" class="post-tag" title="show questions tagged 'xpath'" rel="tag">xpath</a> expression :</p>
<pre><code>//div[@title="buyer-info"]/div[text() = "Carlson Busses"]/ancestor::div
</code></pre>
| 0 |
2016-09-21T21:44:44Z
|
[
"python",
"web-scraping",
"bs4"
] |
find the CSS path (ancestor tags) in HTML using python
| 39,626,940 |
<p>I want to get all the ancestor div tags where I match a text. So for example if the html looks like <a href="http://i.stack.imgur.com/aePNZ.png" rel="nofollow">HTML snippet</a></p>
<p>And i'm searching for "Earl E. Byrd". I wanna get a list which contains {"buyer-info","buyer-name"}</p>
<p>This is what i did </p>
<pre><code>r=requests.get(self.url,verify='/path/to/certfile')
soup = BeautifulSoup(r.text,"lxml")
divTags = soup.find_all('div')
</code></pre>
<p>How should I proceed ?</p>
| 0 |
2016-09-21T21:35:13Z
| 39,627,506 |
<p>If you want to search for the div by text and get all the previous divs that have <em>title</em> attributes, first find the div using the text, then use <code>find_all_previous</code> setting <code>title=True</code></p>
<pre><code>soup = BeautifulSoup(r.text,"lxml")
div = soup.find('div', text="Earl E. Byrd")
print([div["title"]] + [d["title"] for d in div.find_all_previous("div", title=True)])
</code></pre>
| 0 |
2016-09-21T22:27:54Z
|
[
"python",
"web-scraping",
"bs4"
] |
Python downloading PDF into a .zip
| 39,627,036 |
<p>What I am trying to do is loop through a list of URL to download a series of .pdfs, and save them to a .zip. At the moment I am just trying to test code using just one URL. The ERROR I am getting is:</p>
<pre><code>Traceback (most recent call last):
File "I:\test_pdf_download_zip.py", line 36, in <module>
zip_file(zipfile_name, url)
File "I:\test_pdf_download_zip.py", line 30, in zip_file
myzip.write(dowload_pdf(url))
TypeError: expected a string or other character buffer object
</code></pre>
<p>Would someone know how to pass .pdf request to the .zip correctly (avoiding the error above) in order for me to append it, or know if it is possible to do this?</p>
<pre><code>import os
import zipfile
import requests
output = r"I:"
# File name of the zipfile
zipfile_name = os.path.join(output, "test.zip")
# Random test pdf
url = r"http://www.pdf995.com/samples/pdf.pdf"
def create_zipfile(zipfile_name):
zipfile.ZipFile(zipfile_name, "w")
def dowload_pdf(url):
response = requests.get(url, stream=True)
with open('test.pdf', 'wb') as f:
f.write(response.content)
def zip_file(zip_name, url):
with open(zip_name,'a') as myzip:
myzip.write(dowload_pdf(url))
if __name__ == "__main__":
create_zipfile(zipfile_name)
zip_file(zipfile_name, url)
print("Done")
</code></pre>
| 0 |
2016-09-21T21:43:07Z
| 39,627,509 |
<p>Your <code>download_pdf()</code> function is saving a file but it doesn't return anything. You need to modify it so it actually returns the file path to <code>myzip.write()</code>. You don't want to hardcode test.pdf but pass unique paths to your download function so you don't end up with multiple <code>test.pdf</code> in your archive.</p>
<pre><code>def dowload_pdf(url, path):
response = requests.get(url, stream=True)
with open(path, 'wb') as f:
f.write(response.content)
return path
</code></pre>
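<p>Note also that the original <code>zip_file</code> opens the archive with a plain <code>open()</code>; to append members you want <code>zipfile.ZipFile</code> in append mode. A sketch (paths here are illustrative):</p>

```python
import os
import zipfile

def zip_file(zip_name, pdf_path):
    # open the archive itself with ZipFile in append mode,
    # not with a plain file handle
    with zipfile.ZipFile(zip_name, 'a') as myzip:
        # arcname keeps just the file name inside the archive
        myzip.write(pdf_path, arcname=os.path.basename(pdf_path))
```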
| 0 |
2016-09-21T22:28:03Z
|
[
"python",
"pdf",
"zip",
"python-requests"
] |
Eliminating stop words from a text, while NOT deleting duplicate regular words
| 39,627,066 |
<p>I'm trying to create a list with the most common 50 words within a specific text file, however I want to eliminate the stop words from that list. I have done that using this code.</p>
<pre><code>from nltk.corpus import gutenberg
carroll = nltk.Text(nltk.corpus.gutenberg.words('carroll-alice.txt'))
carroll_list = FreqDist(carroll)
stops = set(stopwords.words("english"))
filtered_words = [word for word in carroll_list if word not in stops]
</code></pre>
<p>However, this is deleting the duplicates of the words I want. Like when I do this:</p>
<pre><code>fdist = FreqDist(filtered_words)
fdist.most_common(50)
</code></pre>
<p>I get the output:</p>
<pre><code> [('right', 1), ('certain', 1), ('delighted', 1), ('adding', 1),
('work', 1), ('young', 1), ('Up', 1), ('soon', 1), ('use', 1),
('submitted', 1), ('remedies', 1), ('tis', 1), ('uncomfortable', 1)....]
</code></pre>
<p>It is saying that there is one instance of each word, clearly it eliminated the duplicates. I want to keep the duplicates so I can see what word is most common. Any help would be greatly appreciated.</p>
| 0 |
2016-09-21T21:45:25Z
| 39,627,368 |
<p>As you have it written now, <code>list</code> is already a distribution containing the words as keys and the occurrence count as the value:</p>
<pre><code>>>> list
FreqDist({u',': 1993, u"'": 1731, u'the': 1527, u'and': 802, u'.': 764, u'to': 725, u'a': 615, u'I': 543, u'it': 527, u'she': 509, ...})
</code></pre>
<p>You then iterate over the keys meaning each word is only there once. I believe you actually want to create <code>filtered_words</code> like this:</p>
<pre><code>filtered_words = [word for word in carroll if word not in stops]
</code></pre>
<p>Also, you should try to avoid using variable names that match Python builtin functions (<code>list</code> is a Python builtin function).</p>
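<p>You can see the effect with plain <code>collections.Counter</code>, which <code>FreqDist</code> behaves like (toy word list here, not the Gutenberg text): filtering the raw token list keeps duplicates, while filtering the distribution's keys loses them:</p>

```python
from collections import Counter

words = ['alice', 'rabbit', 'alice', 'the', 'the', 'queen']
stops = {'the'}

# filtering the raw token list keeps duplicates ...
filtered_words = [w for w in words if w not in stops]

# ... whereas iterating over a Counter/FreqDist yields each key once
deduped = [w for w in Counter(words) if w not in stops]

fdist = Counter(filtered_words)   # counts survive the filtering
```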
| 1 |
2016-09-21T22:13:31Z
|
[
"python",
"nltk"
] |
Python BeautifulSoup Mac Installation Error
| 39,627,108 |
<p>I am completely new to all things programming. As I am working to learn the basics of Python, I have run into a problem that I've been unable to work through by reading and Googling.</p>
<p>I am trying to install BeautifulSoup, and I thought I had done so successfully, but when I try to test whether or not it's installed correctly I get an error.</p>
<p>I am using PyCharm and typed the following into the Python Console:</p>
<pre><code>>>> from bs4 import BeautifulSoup
</code></pre>
<p>And I receive the following error:</p>
<pre><code>Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
ImportError: No module named 'bs4'
</code></pre>
<p>I read through a previous thread about a BeautifulSoup install error and one of the things that was mentioned was to check the preferences settings in PyCharm to ensure that it's using the right version of Python . . .</p>
<p><img src="http://i.stack.imgur.com/SBcmp.png" alt="PyCharm Project Interpreter Screenshot"></p>
<p>Anyway, I can't seem to figure out what's wrong, so any insight and help in resolving this issue would be tremendously appreciated.</p>
| 0 |
2016-09-21T21:48:58Z
| 39,627,606 |
<p>You can use pip to install beautifulsoup on mac, by typing in the following command in Terminal:</p>
<pre><code>pip install beautifulsoup4
</code></pre>
<p>You might be facing some permission problems if you are running the OS X preinstalled python as interpreter. I would suggest installing python with Homebrew if that's the case. </p>
| -2 |
2016-09-21T22:40:57Z
|
[
"python",
"beautifulsoup"
] |
Sort dict keys into list
| 39,627,112 |
<p>I have a dictionary containing IP addresses and hd space for each IP. </p>
<p>{'192.168.100.102': '7.3G', '192.168.100.103': '3.5G', '192.168.100.101': '7.4G', '192.168.100.107': '17G'} </p>
<p>I want to take three IPs with the greatest space and put them into a list. Is this possible?</p>
| -2 |
2016-09-21T21:49:08Z
| 39,627,153 |
<p>This should work:</p>
<pre><code>ips = {'192.168.100.102': '7.3G', '192.168.100.103': '3.5G', '192.168.100.101': '7.4G', '192.168.100.107': '17G'}
sorted(ips, key=ips.get)[:3]
</code></pre>
<p>Actually this doesn't work because of the <code>G</code> in each value; use @Moses Koledoye's answer below. </p>
| 0 |
2016-09-21T21:52:43Z
|
[
"python",
"dictionary"
] |
Sort dict keys into list
| 39,627,112 |
<p>I have a dictionary containing IP addresses and hd space for each IP. </p>
<p>{'192.168.100.102': '7.3G', '192.168.100.103': '3.5G', '192.168.100.101': '7.4G', '192.168.100.107': '17G'} </p>
<p>I want to take three IPs with the greatest space and put them into a list. Is this possible?</p>
| -2 |
2016-09-21T21:49:08Z
| 39,627,175 |
<p>Slice off the trailing <code>'G'</code> part from the values and convert them to <code>float</code> in the <em>sort key</em>:</p>
<pre><code>ips = {'192.168.100.102': '7.3G', '192.168.100.103': '3.5G', '192.168.100.101': '7.4G', '192.168.100.107': '17G'}
sorted_ips = sorted(ips, key=lambda x: float(ips[x][:-1]), reverse=True)[:3]
print(sorted_ips)
# ['192.168.100.107', '192.168.100.101', '192.168.100.102']
</code></pre>
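<p>If you only ever need the top three, <code>heapq.nlargest</code> avoids sorting the whole dict (this assumes every value uses the same <code>G</code> unit, as in your sample data):</p>

```python
import heapq

ips = {'192.168.100.102': '7.3G', '192.168.100.103': '3.5G',
       '192.168.100.101': '7.4G', '192.168.100.107': '17G'}

# strip the trailing unit and compare numerically
top3 = heapq.nlargest(3, ips, key=lambda ip: float(ips[ip].rstrip('G')))
```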
| 1 |
2016-09-21T21:54:26Z
|
[
"python",
"dictionary"
] |
Why isn't my RNN learning?
| 39,627,187 |
<p>I'm trying to implement a simple RNN using numpy (based on <a href="https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/" rel="nofollow">this article</a>), and I'm training it to do binary addition where it adds two 8-bit unsigned integers one bit at a time (starting from the end) with the purpose of having it learn to "carry the one" during addition when necessary. However, it doesn't seem to be learning. For training, I'm choosing two random numbers, forward propagating 8 steps with one bit from a and b as input and storing the outputs and hidden layer values at each timestep, and backpropagating 8 steps where I calculate the hidden layer error (<code>(output_error.dot(weights_hidden_to_output.T) * sigmoid_derivative(hidden)) + future_hidden_error.dot(weights_hidden_to_hidden.T)</code>) and the updates for each of the weight matrices by matrix multiplying the parent layer by the error of the child layer. Is this the correct method?</p>
<p>Here's my code if it will make it more clear. I noticed that for some reason, the weights start increasing like crazy all of a sudden every time I train it, and they cause an overflow in the sigmoid function which causes the training to fail. Any idea what could cause this?</p>
<pre><code>import numpy as np
np.random.seed(0)
def sigmoid(x):
return np.atleast_2d(1/(1+np.exp(-x)))
#return np.atleast_2d(np.max(x, 0.01))
def sig_deriv(x):
return x*(1-x)
def add_bias(x):
return np.hstack([np.ones((len(x), 1)), x])
def dec_to_bin(dec):
return np.array(map(int, list(format(dec, '#010b'))[2:]))
def bin_to_dec(b):
out = 0
for bit in b:
out = (out << 1) | bit
return out
batch_size = 8
learning_rate = .1
input_size = 2
hidden_size = 16
output_size = 1
weights_xh = 2 * np.random.random((input_size+1, hidden_size)) - 1
weights_hh = 2 * np.random.random((hidden_size+1, hidden_size)) - 1
weights_hy = 2 * np.random.random((hidden_size+1, output_size)) - 1
xh_update = np.zeros_like(weights_xh)
hh_update = np.zeros_like(weights_hh)
hy_update = np.zeros_like(weights_hy)
for i in xrange(10000):
a = np.random.randint(0, 2**batch_size/2)
b = np.random.randint(0, 2**batch_size/2)
sum_ = a+b
X = add_bias(np.hstack([np.atleast_2d(dec_to_bin(a)).T, np.atleast_2d(dec_to_bin(b)).T]))
y = np.atleast_2d(dec_to_bin(sum_)).T
error = 0
output_errors = []
outputs = []
hiddens = [add_bias(np.zeros((1, hidden_size)))]
#forward propagation through time
for j in xrange(batch_size):
hidden = sigmoid(X[-j-1].dot(weights_xh) + hiddens[-1].dot(weights_hh))
hidden = add_bias(hidden)
hiddens.append(hidden)
output = sigmoid(hidden.dot(weights_hy))
outputs.append(output[0][0])
output_error = (y[-j-1] - output)
error += np.abs(output_error[0])
output_errors.append((output_error * sig_deriv(output)))
future_hidden_error = np.zeros((1,hidden_size))
#backward ppropagation through time
for j in xrange(batch_size):
output_error = output_errors[-j-1]
hidden = hiddens[-j-1]
prev_hidden = hiddens[-j-2]
hidden_error = (output_error.dot(weights_hy.T) * sig_deriv(hidden)) + future_hidden_error.dot(weights_hh.T)
hidden_error = np.delete(hidden_error, 0, 1) #delete bias error
xh_update += np.atleast_2d(X[j]).T.dot(hidden_error)
hh_update += prev_hidden.T.dot(hidden_error)
hy_update += hidden.T.dot(output_error)
future_hidden_error = hidden_error
weights_xh += (xh_update * learning_rate)/batch_size
weights_hh += (hh_update * learning_rate)/batch_size
weights_hy += (hy_update * learning_rate)/batch_size
xh_update *= 0
hh_update *= 0
hy_update *= 0
if i%1000==0:
guess = map(int, map(round, outputs[::-1]))
print "Iteration {}".format(i)
print "Error: {}".format(error)
print "Problem: {} + {} = {}".format(a, b, sum_)
print "a: {}".format(list(dec_to_bin(a)))
print "+ b: {}".format(list(dec_to_bin(b)))
print "Solution: {}".format(map(int, y))
print "Guess: {} ({})".format(guess, bin_to_dec(guess))
print
</code></pre>
| 1 |
2016-09-21T21:55:36Z
| 39,628,442 |
<p>I figured it out. In case anyone wants to know why it wasn't working, it was because I was only multiplying one part of the hidden error (the part that came from the output error) by the derivative of the hidden layer activation. Now it's easily learning the addition problem within a few thousand iterations.</p>
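<p>In other words (a scalar sketch with made-up numbers, not the actual network values): the sigmoid derivative has to scale the <em>entire</em> backpropagated hidden error, not just the part coming from the output layer:</p>

```python
def sig_deriv(h):
    return h * (1 - h)

h = 0.8           # hidden activation (illustrative)
from_output = 0.5 # error arriving via the hidden-to-output weights
from_future = 0.2 # error arriving via the hidden-to-hidden weights

# buggy: only the output-error term is scaled by the derivative
buggy = from_output * sig_deriv(h) + from_future

# fixed: the derivative applies to the full hidden error
fixed = (from_output + from_future) * sig_deriv(h)
```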
| 0 |
2016-09-22T00:27:23Z
|
[
"python",
"numpy",
"machine-learning",
"neural-network",
"recurrent-neural-network"
] |
Concurrent download and processing of large files in python
| 39,627,188 |
<p>I have a list of URLs for large files to <strong>download</strong> (e.g. compressed archives), which I want to <strong>process</strong> (e.g. decompress the archives). </p>
<p>Both download and processing take a long time and processing is heavy on disk IO, so I want to have <strong>just one of each to run at a time</strong>. Since the two tasks take about the same time and do not compete for the same resources, I want to download the next file(s) while the last is being processed.</p>
<p>This is a variation of the <strong><a href="https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem" rel="nofollow">producer-consumer problem</a></strong>.</p>
<p>The situation is similar to <a href="http://stackoverflow.com/q/12474182/512111">reading and processing images</a> or <a href="http://stackoverflow.com/questions/37825218/fastest-way-to-read-and-process-100-000-urls-in-python">downloading loads of files</a>, but my downloader calls are not (yet) picklable, so I have not been able to use multiprocessing, and both tasks take about the same time.</p>
<p>Here is a dummy example, where both download and processing are blocking:</p>
<pre><code>import time
import posixpath
def download(urls):
for url in urls:
time.sleep(3) # this is the download (more like 1000s)
yield posixpath.basename(url)
def process(fname):
time.sleep(2) # this is the processing part (more like 600s)
urls = ['a', 'b', 'c']
for fname in download(urls):
process(fname)
print(fname)
</code></pre>
<p>How could I make the two tasks concurrent? Can I use <code>yield</code> or <code>yield from</code> <a href="http://stackoverflow.com/questions/9708902/in-practice-what-are-the-main-uses-for-the-new-yield-from-syntax-in-python-3">in a smart way</a>, perhaps in combination with <a href="https://docs.python.org/2/library/collections.html#collections.deque" rel="nofollow"><code>deque</code></a>? Or must it be <a href="https://docs.python.org/3.4/library/asyncio.html" rel="nofollow"><code>asyncio</code></a> with <code>Future</code>?</p>
| 3 |
2016-09-21T21:55:36Z
| 39,627,242 |
<p>I'd simply use <code>threading.Thread(target=process, args=(fname,))</code> and start a new thread for processing.</p>
<p>But before that, end last processing thread :</p>
<pre><code>t = None
for fname in download(urls):
if t is not None: # wait for last processing thread to end
t.join()
t = threading.Thread(target=process, args=(fname,))
t.start()
print('[i] thread started for %s' % fname)
</code></pre>
<p>See <a href="https://docs.python.org/3/library/threading.html" rel="nofollow">https://docs.python.org/3/library/threading.html</a></p>
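<p>If you would rather not manage the join-before-start dance by hand, the standard producer–consumer shape with a bounded <code>queue.Queue</code> does the same thing. A sketch (Python 3 names; <code>download</code> and <code>process</code> stand in for your real blocking calls):</p>

```python
import queue
import threading

def pipeline(urls, download, process, prefetch=1):
    q = queue.Queue(maxsize=prefetch)   # bound how far downloads run ahead

    def consumer():
        while True:
            fname = q.get()
            if fname is None:           # sentinel: producer is done
                break
            process(fname)

    t = threading.Thread(target=consumer)
    t.start()
    for fname in download(urls):        # producer: downloads block here
        q.put(fname)                    # blocks if the consumer falls behind
    q.put(None)
    t.join()
```

<p>With <code>maxsize=1</code> at most one finished download waits while the previous file is still being processed, which matches the "one of each at a time" requirement.</p>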
| 0 |
2016-09-21T22:01:21Z
|
[
"python",
"concurrency",
"yield",
"coroutine",
"yield-from"
] |
Error: 'float' object does not support item assignment
| 39,627,259 |
<p>I'm programming in python and I don't understand what i'm doing wrong:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from math import exp
x=np.linspace(0.0,4.0,100)
y1=x
for i in range(100):
y2[i]=1.5*(1-exp(-x[i]))
</code></pre>
<p>This last line gives me an error that says: 'float' object does not support item assignment. I don't understand how y2 can be considered a float object since it is a list in which every element is calculated with 1.5*(1-exp(-x[i])).</p>
| 0 |
2016-09-21T22:03:16Z
| 39,627,290 |
<p>As <em>Jean-François Fabre</em> and <em>Barmar</em> have noted, you get this message only if you have y2 already assigned to a float. In any case, you'll need to build the list one way or another.</p>
<p>Using the <strong>numpy</strong> array facilities (credit to <em>John1024</em>):</p>
<pre><code>y2 = 1.5*(1-np.exp(-x))
</code></pre>
<p>Using a <em>list comprehension</em>:</p>
<pre><code>y2 = [ 1.5*(1-exp(-x[i])) for i in range(100) ]
</code></pre>
<p>If these are more advanced than you want to use, you can initialize y2 and build it in your loop:</p>
<pre><code>y2 = []
for i in range(100):
y2.append(1.5*(1-exp(-x[i])))
</code></pre>
| 1 |
2016-09-21T22:06:15Z
|
[
"python"
] |
Process hangs if web browser crashes in selenium
| 39,627,390 |
<p>I am using selenium + python, been using implicit waits and try/except code on python to catch errors. However I have been noticing that if the browser crashes (let's say the user closes the browser during the program's executing), my python program will hang, and the timeouts of implicit wait seems to not work when this happens. The below process will just stay there forever.</p>
<pre><code>from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium import webdriver
import datetime
import time
import sys
import os
def open_browser():
print "Opening web page..."
driver = webdriver.Chrome()
driver.implicitly_wait(1)
#driver.set_page_load_timeout(30)
return driver
driver = open_browser() # Opens web browser
# LET'S SAY I CLOSE THE BROWSER RIGHT HERE!
# IF I CLOSE THE PROCESS HERE, THE PROGRAM WILL HANG FOREVER
time.sleep(5)
while True:
try:
driver.get('http://www.google.com')
break
except:
driver.quit()
driver = open_browser()
</code></pre>
| 2 |
2016-09-21T22:14:46Z
| 39,628,352 |
<p>The code you have provided will always hang in the event that there is an exception getting the google home page.
What is probably happening is that attempting to get the google home page is resulting in an exception which would normally halt the program, but you are masking that out with the except clause.</p>
<p>Attempt with the following amendment to your loop.</p>
<pre><code>max_attempts = 10
attempts = 0
while attempts <= max_attempts:
try:
print "Retrieving google"
driver.get('http://www.google.com')
break
except:
print "Retrieving google failed"
attempts += 1
</code></pre>
| 0 |
2016-09-22T00:15:36Z
|
[
"python",
"selenium"
] |
How to add `colorbar` to `networkx` using a `seaborn` color palette? (Python 3)
| 39,627,490 |
<p>I'm trying to add a <code>colorbar</code> to my <code>networkx</code> drawn <code>matplotlib ax</code> from the range of <code>1</code> (being the lightest) and <code>3</code> (being the darkest) [check out the line w/ <code>cmap</code> below]. I'm trying to combine a lot of <code>PyData</code> functionalities. </p>
<p><strong>How can I add a color bar type feature on a networkx plot using a seaborn color palette?</strong> </p>
<p><a href="http://i.stack.imgur.com/Yt5ud.png" rel="nofollow"><img src="http://i.stack.imgur.com/Yt5ud.png" alt="enter image description here"></a></p>
<pre><code># Set up Graph
DF_adj = pd.DataFrame(np.array(
[[1, 0, 1, 1],
[0, 1, 1, 0],
[1, 1, 1, 1],
[1, 0, 1, 1] ]), columns=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'], index=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])
G = nx.Graph(DF_adj.as_matrix())
G = nx.relabel_nodes(G, dict(zip(range(4), ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])))
# Color mapping
color_palette = sns.cubehelix_palette(3)
cmap = {k:color_palette[v-1] for k,v in zip(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],[2, 1, 3, 2])}
# Draw
nx.draw(G, node_color=[cmap[node] for node in G.nodes()], with_labels=True)
</code></pre>
<p>In this, they are all using <code>matplotlib</code> color palettes: <a href="http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut3.html" rel="nofollow">http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut3.html</a> I even tried converting them to a <code>ListedColormap</code> object but it didn't work. </p>
<p>This doesn't work for my situation either b/c matplotlib colormap: <a href="http://stackoverflow.com/questions/30353363/seaborn-regplot-with-colorbar">Seaborn regplot with colorbar?</a></p>
<p>Same for <a href="http://matplotlib.org/examples/pylab_examples/colorbar_tick_labelling_demo.html" rel="nofollow">http://matplotlib.org/examples/pylab_examples/colorbar_tick_labelling_demo.html</a></p>
<p>This was the closest I got but it didn't work I got a autoscale Nonetype: <a href="http://stackoverflow.com/questions/37902459/how-do-i-use-seaborns-color-palette-as-a-colormap-in-matplotlib">How do I use seaborns color_palette as a colormap in matplotlib?</a></p>
| 3 |
2016-09-21T22:26:00Z
| 39,628,280 |
<p>I think the best thing to do here is to fake it following <a href="http://stackoverflow.com/a/11558629/5285918">this answer</a> since you don't have a "ScalarMappable" to work with.</p>
<p>For a discrete colormap</p>
<pre><code>from matplotlib.colors import ListedColormap
sm = plt.cm.ScalarMappable(cmap=ListedColormap(color_palette),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
plt.colorbar(sm)
</code></pre>
<p>If you want a linear (continuous) colormap and to only show integer ticks</p>
<pre><code>sm = plt.cm.ScalarMappable(cmap=sns.cubehelix_palette(3, as_cmap=True),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
plt.colorbar(sm, ticks=range(4))
</code></pre>
<p><a href="http://i.stack.imgur.com/pswBu.png" rel="nofollow"><img src="http://i.stack.imgur.com/pswBu.png" alt="enter image description here"></a></p>
| 3 |
2016-09-22T00:06:15Z
|
[
"python",
"matplotlib",
"colors",
"networkx",
"seaborn"
] |
Forward WSGI cookies to Requests
| 39,627,548 |
<p>I am developing a WSGI middleware application (Python 2.7) using Werkzeug. This app works within a SAML SSO environment and needs a SAML token to be accessed. </p>
<p>The middleware also performs requests to other applications in the same SAML environment, acting on behalf of the logged in user. In order to do that without the need of user feedback, I need to forward the SAML session cookie that I can get from the WSGI environment to requests that I am performing using the Requests library. </p>
<p>My issue is that the cookies that I get from WSGI/Werkzeug can only be parsed as <code>http.cookies.SimpleCookie</code>, while Requests accepts <code>cookielib.CookieJar</code> instances. </p>
<p>I have not found a way to cleanly forward these session cookies without resorting to shameful hacks such as parsing the raw content of the set-cookie headers. </p>
<p>Any suggestions? </p>
<p>Thanks,</p>
<p>gm</p>
| 0 |
2016-09-21T22:33:44Z
| 39,627,858 |
<p>Cookies are just <a href="https://tools.ietf.org/html/rfc6265#section-4.1" rel="nofollow">HTTP headers</a>. Just pull the cookie value from <code>http.cookies.SimpleCookie</code> and add it to your requests session's cookie <a href="http://docs.python-requests.org/en/master/_modules/requests/cookies/" rel="nofollow">jar</a>.</p>
<p>Not a hack. :)</p>
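<p>A minimal sketch of that flattening, using only the standard library (the cookie names and values here are hypothetical stand-ins for the real SAML session cookie):</p>

```python
from http.cookies import SimpleCookie

# The raw Cookie header as it would appear in the WSGI environ
# (environ["HTTP_COOKIE"]); this value is a stand-in.
raw_header = "SAMLSessionID=abc123; other=xyz"
jar = SimpleCookie(raw_header)

# Flatten to a plain name -> value dict; requests accepts such a
# dict directly as the `cookies=` argument, no CookieJar needed.
cookies = {name: morsel.value for name, morsel in jar.items()}
print(cookies["SAMLSessionID"])  # abc123

# Forwarding the session cookie then looks like:
# requests.get("https://backend.example/api", cookies=cookies)
```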
| 0 |
2016-09-21T23:10:40Z
|
[
"python",
"python-requests",
"session-cookies",
"saml",
"wsgi"
] |
how can i fix entropy yielding nan?
| 39,627,634 |
<p>I am trying to calculate entropy from the array returned by <code>np.histogram</code>:</p>
<pre><code>import numpy as np

mu1, sigma1 = 0, 1
s1 = np.random.normal(mu1, sigma1, 100000)
hist1 = np.histogram(s1, bins=100, range=(-20,20), density=True)
data1 = hist1[0]
ent1 = -(data1*np.log(np.abs(data1))).sum()
</code></pre>
<p>However, this ent1 would return nan. What is the problem here? </p>
| 1 |
2016-09-21T22:43:42Z
| 39,627,718 |
<p>The problem is that your histogram contains zero probabilities: for those bins <code>0 * np.log(0)</code> evaluates to <code>nan</code> in floating point, which poisons the sum in Shannon's entropy formula. A solution is to ignore the zero probabilities.</p>
<pre><code>import numpy as np

mu1, sigma1 = 0, 1
s1 = np.random.normal(mu1, sigma1, 100000)
hist1 = np.histogram(s1, bins=100, range=(-20,20), density=True)
data1 = hist1[0]
non_zero_data = data1[data1 != 0]
ent1 = -(non_zero_data*np.log(np.abs(non_zero_data))).sum()
</code></pre>
| 2 |
2016-09-21T22:53:36Z
|
[
"python",
"numpy",
"entropy"
] |
how can i fix entropy yielding nan?
| 39,627,634 |
<p>I am trying to calculate entropy from the array returned by <code>np.histogram</code>:</p>
<pre><code>import numpy as np

mu1, sigma1 = 0, 1
s1 = np.random.normal(mu1, sigma1, 100000)
hist1 = np.histogram(s1, bins=100, range=(-20,20), density=True)
data1 = hist1[0]
ent1 = -(data1*np.log(np.abs(data1))).sum()
</code></pre>
<p>However, this ent1 would return nan. What is the problem here? </p>
| 1 |
2016-09-21T22:43:42Z
| 39,627,816 |
<p>To compute the entropy, you could use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.entr.html" rel="nofollow"><code>scipy.special.entr</code></a>. For example,</p>
<pre><code>In [147]: from scipy.special import entr
In [148]: x = np.array([3, 2, 1, 0, 0.5, 2.5, 5])
In [149]: entr(x).sum()
Out[149]: -14.673474028700136
</code></pre>
<p>To check that result, we can also compute the entropy using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.xlogy.html" rel="nofollow"><code>scipy.special.xlogy</code></a>:</p>
<pre><code>In [150]: from scipy.special import xlogy
In [151]: -xlogy(x, x).sum()
Out[151]: -14.673474028700136
</code></pre>
<p>Finally, we can verify that this is the same result that you expect:</p>
<pre><code>In [152]: xnz = x[x != 0]
In [153]: -(xnz*np.log(xnz)).sum()
Out[153]: -14.673474028700136
</code></pre>
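<p>The reason <code>entr</code> and <code>xlogy</code> sidestep the <code>nan</code> problem is that both define the zero case the way entropy needs: <code>entr(0)</code> and <code>xlogy(0, 0)</code> are <code>0</code>, not <code>nan</code>. A small check (a sketch, assuming SciPy is available):</p>

```python
import numpy as np
from scipy.special import entr, xlogy

x = np.array([0.0, 0.5, 1.0])

# entr(0) is 0, so zero-probability bins contribute nothing
zero_term = float(entr(0.0))
print(zero_term)  # 0.0

# xlogy(0, 0) is likewise 0, so -xlogy(x, x) matches entr(x) elementwise
same = np.allclose(entr(x), -xlogy(x, x))
print(same)  # True
```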
| 2 |
2016-09-21T23:05:32Z
|
[
"python",
"numpy",
"entropy"
] |
Sum in Spark gone bad
| 39,627,773 |
<p>Based on <a href="http://stackoverflow.com/questions/39235576/unbalanced-factor-of-kmeans">Unbalanced factor of KMeans?</a>, I am trying to compute the Unbalanced Factor, but I fail.</p>
<p>Every element of the RDD <code>r2_10</code> is a pair, where the key is the cluster and the value is a tuple of points (all of these are IDs). Below I present what happens:</p>
<pre><code>In [1]: r2_10.collect()
Out[1]:
[(0, ('438728517', '28138008')),
(13824, ('4647699097', '6553505321')),
(9216, ('2575712582', '1776542427')),
(1, ('8133836578', '4073591194')),
(9217, ('3112663913', '59443972', '8715330944', '56063461')),
(4609, ('6812455719',)),
(13825, ('5245073744', '3361024394')),
(4610, ('324470279',)),
(2, ('2412402108',)),
(3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846'))]
In [2]: pdd = r2_10.map(lambda x: (x[0], 1)).reduceByKey(lambda a, b: a + b)
In [3]: pdd.collect()
Out[3]:
[(13824, 1),
(9216, 1),
(0, 1),
(13825, 1),
(1, 1),
(4609, 1),
(9217, 1),
(2, 1),
(4610, 1),
(3, 1)]
In [4]: n = pdd.count()
In [5]: n
Out[5]: 10
In [6]: total = pdd.map(lambda x: x[1]).sum()
In [7]: total
Out[7]: 10
</code></pre>
<p>and <code>total</code> is supposed to hold the total number of points. However, it's 10, while it should be 22!</p>
<p>What am I missing here? </p>
| 2 |
2016-09-21T22:59:52Z
| 39,627,819 |
<p>The problem is that you never counted the number of points grouped in each cluster: each pair was mapped to <code>1</code> regardless of how many points its value tuple holds. Change how <code>pdd</code> is created:</p>
<pre><code>pdd = r2_10.map(lambda x: (x[0], len(x[1]))).reduceByKey(lambda a, b: a + b)
</code></pre>
<p>However, you could obtain the same result in a single pass (without computing <code>pdd</code>) by mapping each value of the <code>RDD</code> to its length and then summing:</p>
<pre><code>total = r2_10.map(lambda x: len(x[1])).sum()
</code></pre>
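<p>Outside Spark, the same fix can be checked with plain Python on the collected pairs from the question (a sketch of the logic, not a Spark job):</p>

```python
# Collected (cluster_id, points_tuple) pairs from the question
pairs = [
    (0, ('438728517', '28138008')),
    (13824, ('4647699097', '6553505321')),
    (9216, ('2575712582', '1776542427')),
    (1, ('8133836578', '4073591194')),
    (9217, ('3112663913', '59443972', '8715330944', '56063461')),
    (4609, ('6812455719',)),
    (13825, ('5245073744', '3361024394')),
    (4610, ('324470279',)),
    (2, ('2412402108',)),
    (3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846')),
]

# The buggy version counts one per cluster, not one per point
buggy_total = sum(1 for _cluster, _points in pairs)
print(buggy_total)  # 10

# Counting len(points) per pair gives the real number of points
total = sum(len(points) for _cluster, points in pairs)
print(total)  # 22
```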
| 1 |
2016-09-21T23:05:39Z
|
[
"python",
"function",
"apache-spark",
"machine-learning",
"distributed-computing"
] |
Create Folder with Numpy Savetxt
| 39,627,787 |
<p>I'm trying loop over many arrays and create files stored in different folders.</p>
<p>Is there a way to have np.savetxt creating the folders I need as well?</p>
<p>Thanks</p>
| 0 |
2016-09-21T23:01:38Z
| 39,628,096 |
<p><code>savetxt</code> just does a <code>open(filename, 'w')</code>. <code>filename</code> can include a directory as part of the path name, but you'll have to first create the directory with something like <code>os.mkdir</code>. In other words, use the standard Python directory and file functions.</p>
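<p>A minimal sketch combining the two steps (the directory and file names are placeholders):</p>

```python
import os
import tempfile

import numpy as np

arr = np.arange(6).reshape(3, 2)

# Create the (possibly nested) folder first, then let savetxt open the file.
base = tempfile.mkdtemp()                      # stand-in for your output root
out_dir = os.path.join(base, "run1", "arrays")
os.makedirs(out_dir, exist_ok=True)            # no error if it already exists

out_path = os.path.join(out_dir, "data.txt")
np.savetxt(out_path, arr, fmt="%d")
```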
| 1 |
2016-09-21T23:40:22Z
|
[
"python",
"numpy"
] |
Create Folder with Numpy Savetxt
| 39,627,787 |
<p>I'm trying loop over many arrays and create files stored in different folders.</p>
<p>Is there a way to have np.savetxt creating the folders I need as well?</p>
<p>Thanks</p>
| 0 |
2016-09-21T23:01:38Z
| 39,640,835 |
<p>To make all intermediate directories as needed, use <code>os.makedirs(path, exist_ok=True)</code>. With <code>exist_ok=True</code>, the call will not raise an error if the directory already exists.</p>
| 0 |
2016-09-22T13:45:42Z
|
[
"python",
"numpy"
] |