title (string, 10 to 172 chars) | question_id (int64, 469 to 40.1M) | question_body (string, 22 to 48.2k chars) | question_score (int64, -44 to 5.52k) | question_date (string, 20 chars) | answer_id (int64, 497 to 40.1M) | answer_body (string, 18 to 33.9k chars) | answer_score (int64, -38 to 8.38k) | answer_date (string, 20 chars) | tags (list) |
---|---|---|---|---|---|---|---|---|---|
Json to CSV Conversion | 39,714,035 | <p>I have a very large JSON file with multiple individual JSON objects in the format shown below. I am trying to convert it to a CSV so that each row is a combination of the outer id/name/alphabet in a JSON object and 1 set of conversion: id/name/alphabet. This is repeated for all the sets of id/name/alphabet within an individual JSON object. So from the object below, 2 rows should be created where the first row is (outer) id/name/alphabet and 1st id/name/alphabet of conversion. The second row is again (outer) id/name/alphabet and now the 2nd id/name/alphabet of conversion.</p>
<p>An important note is that certain objects in the file can have upwards of 50-60 conversion id/name/alphabet pairs.</p>
<p>What I tried so far was to flatten the JSON objects first, which resulted in keys like conversion_id_0 and conversion_id_1, etc. I can map the outer fields since they are always constant, but I am unsure how to map each corresponding numbered set to a separate row.</p>
<p>Any help or insight would be greatly appreciated!</p>
<pre><code>[
{
"alphabet": "ABCDEFGHIJKL",
"conversion": [
{
"alphabet": "BCDEFGHIJKL",
"id": 18589260,
"name": [
"yy"
]
},
{
"alphabet": "EFGHIJEFGHIJ",
"id": 18056632,
"name": [
"zx",
"cd"
]
}
],
"id": 23929934,
"name": [
"x",
"y"
]
}
]
</code></pre>
| 0 | 2016-09-26T23:50:51Z | 39,725,278 | <p>Your question is unclear about exactly how the input JSON data should map to rows of the CSV file, so I had to guess at what should happen when there's more than one "name" associated with an inner or outer object.</p>
<p>Regardless, hopefully the following will give you a general idea of how to solve such problems.</p>
<pre><code>import csv
objects = [
{
"alphabet": "ABCDEFGHIJKL",
"id": 23929934,
"name": [
"x",
"y"
],
"conversion": [
{
"alphabet": "BCDEFGHIJKL",
"id": 18589260,
"name": [
"yy"
]
},
{
"alphabet": "EFGHIJEFGHIJ",
"id": 18056632,
"name": [
"zx",
"cd"
]
}
],
}
]
with open('converted_json.csv', 'wb') as outfile:
def group(item):
return [item["id"], item["alphabet"], ' '.join(item["name"])]
writer = csv.writer(outfile, quoting=csv.QUOTE_NONNUMERIC)
for obj in objects:
outer = group(obj)
for conversion in obj["conversion"]:
inner = group(conversion)
writer.writerow(outer + inner)
</code></pre>
<p>Contents of the CSV file generated:</p>
<pre class="lang-none prettyprint-override"><code>23929934,"ABCDEFGHIJKL","x y",18589260,"BCDEFGHIJKL","yy"
23929934,"ABCDEFGHIJKL","x y",18056632,"EFGHIJEFGHIJ","zx cd"
</code></pre>
| 0 | 2016-09-27T12:55:31Z | [
"python",
"json",
"csv"
]
|
Did I write this recursively? | 39,714,040 | <p>I need to write a recursive method to reverse a list, using no loops and no built-in functions such as <strong>reverse, reversed</strong>, or <strong>::</strong> (the list-slicing operator).</p>
<p>Did I do this properly?</p>
<pre><code>def reverseList(alist):
if len(alist) == 1:
return alist
else:
return reverseList(alist[1:]) + [alist[0]]
print (reverseList([1,2,3,4,5]))
</code></pre>
| 0 | 2016-09-26T23:51:12Z | 39,714,063 | <p>Looks like it works, except for the empty list []. You should add it as a base case. For example, correct code would look like:</p>
<pre><code>def reverseList(alist):
if len(alist) <= 1:
return alist
else:
return reverseList(alist[1:]) + [alist[0]]
print (reverseList([1,2,3,4,5]))
</code></pre>
| 0 | 2016-09-26T23:53:47Z | [
"python",
"recursion"
]
|
Did I write this recursively? | 39,714,040 | <p>I need to write a recursive method to reverse a list, using no loops and no built-in functions such as <strong>reverse, reversed</strong>, or <strong>::</strong> (the list-slicing operator).</p>
<p>Did I do this properly?</p>
<pre><code>def reverseList(alist):
if len(alist) == 1:
return alist
else:
return reverseList(alist[1:]) + [alist[0]]
print (reverseList([1,2,3,4,5]))
</code></pre>
| 0 | 2016-09-26T23:51:12Z | 39,729,844 | <p>Yes, you did great. The code is short, clear, readable, and calls itself appropriately. You can also recode this to handle the empty list:</p>
<pre><code>if len(alist) <= 1:
return alist
</code></pre>
<p>Also, try a few more test cases:</p>
<pre><code>print (reverseList([1,2,3,4,5]))
print (reverseList([1, [False, 2.71828], ["hello", "world", "I'm", "done"], 4, 5]))
print (reverseList([]))
print (reverseList([7]))
</code></pre>
| 1 | 2016-09-27T16:28:02Z | [
"python",
"recursion"
]
|
What is first value that is passed into StatsModels predict function? | 39,714,057 | <p>I have the following OLS model from StatsModels:</p>
<pre><code>X = df['Grade']
y = df['Results']
X = statsmodels.tools.tools.add_constant(X)
mod = sm.OLS(y,X)
results = mod.fit()
</code></pre>
<p>When trying to predict a new Y value for an X value of 4, I have to pass the following:</p>
<pre><code>results.predict([1,4])
</code></pre>
<p>I don't understand why an array with the first value being '1' needs to be passed in order for the predict function to work correctly. Why do I need to include a 1 instead of just saying:</p>
<pre><code>results.predict([4])
</code></pre>
<p>I'm not clear on the concept at work here. Does anybody know what's going on?</p>
| 0 | 2016-09-26T23:53:05Z | 39,714,418 | <p>You are adding a constant to the regression equation with <code>X = statsmodels.tools.tools.add_constant(X)</code>. So your regressor X has two columns where the first column is an array of ones.</p>
<p>You need to do the same with the regressor that is used in prediction. So, the <code>1</code> means include the constant in the prediction. If you use zero instead, then the contribution of the constant (<code>0 * params[0]</code>) is zero and the prediction is only the slope effect.</p>
<p>The formula interface adds the constant automatically both for the regressor in the model and for the regressor in the prediction. However, with the pandas DataFrame or numpy ndarray interface, the constant needs to be added by the user both for the model and for predict.</p>
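<p>A minimal sketch of the array/DataFrame workflow (the data here is made up purely for illustration) shows the constant being added on both sides:</p>
<pre><code>import pandas as pd
import statsmodels.api as sm

# hypothetical data
df = pd.DataFrame({'Grade':   [1, 2, 3, 4, 5],
                   'Results': [2.1, 3.9, 6.2, 8.1, 9.8]})

X = sm.add_constant(df['Grade'])            # adds a column of ones
results = sm.OLS(df['Results'], X).fit()

# the new observation also needs the leading 1 for the constant term
print(results.predict([1, 4]))

# the formula interface handles the constant for you on both sides
results_f = sm.OLS.from_formula('Results ~ Grade', data=df).fit()
print(results_f.predict(pd.DataFrame({'Grade': [4]})))
</code></pre>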
| 1 | 2016-09-27T00:43:05Z | [
"python",
"statsmodels"
]
|
How to write a function with a list as parameters | 39,714,150 | <p>Here is the question: I'm trying to define a function <code>sample_mean</code> that takes in a list of numbers as a parameter and returns the sample mean of the numbers in that list. Here is what I have so far, but I'm not sure it is totally right. </p>
<pre><code>def sample_mean(list):
""" (list) -> number
takes in a list of numbers as a parameter and returns the sample mean of the the numbers in that list
sample_mean =
sample_mean =
"""
mean = 0
values = [list]
for list in values:
print('The sample mean of', values, 'is', mean(list))
</code></pre>
| 0 | 2016-09-27T00:06:33Z | 39,714,346 | <p>As stated above by @idjaw, don't use <code>list</code> as a parameter; instead use <code>listr</code> (for example). Your <code>values = [list]</code> is erroneous (as also stated by @idjaw) and should be removed.</p>
<p>Also, according to <a href="https://www.python.org/dev/peps/pep-0257/" rel="nofollow">PEP257</a>, you should not use <code>"(list) -> number"</code> in your docstrings as that should only be used for builtins.</p>
<p>Finally, your loop should look like <code>for l in listr:</code>, and in it you add the values to your mean variable. Divide the total by the number of values in the list and print the result, as sketched below.</p>
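<p>Putting that advice together, a minimal sketch (the parameter name <code>samples</code> is just an example) could be:</p>
<pre><code>def sample_mean(samples):
    """Return the sample mean of the numbers in samples."""
    total = 0
    for value in samples:
        total += value
    return total / float(len(samples))

print('The sample mean of', [1, 2, 3, 4, 5], 'is', sample_mean([1, 2, 3, 4, 5]))
</code></pre>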
| 0 | 2016-09-27T00:31:58Z | [
"python"
]
|
How to write a function with a list as parameters | 39,714,150 | <p>Here is the question: I'm trying to define a function <code>sample_mean</code> that takes in a list of numbers as a parameter and returns the sample mean of the numbers in that list. Here is what I have so far, but I'm not sure it is totally right. </p>
<pre><code>def sample_mean(list):
""" (list) -> number
takes in a list of numbers as a parameter and returns the sample mean of the the numbers in that list
sample_mean =
sample_mean =
"""
mean = 0
values = [list]
for list in values:
print('The sample mean of', values, 'is', mean(list))
</code></pre>
| 0 | 2016-09-27T00:06:33Z | 39,714,435 | <p>Firstly, don't use <code>list</code> as a name because it shadows/hides the builtin <a href="https://docs.python.org/3/library/functions.html#func-list" rel="nofollow"><code>list</code></a> class for the scope in which it is declared. Use a name that describes the values in the list, in this case <code>samples</code> might be a good name. The function could be implemented with something like this:</p>
<pre><code>def sample_mean(samples):
total = 0
for value in samples:
total = total + value
return total / float(len(samples))
</code></pre>
<p>Or a shorter version which avoids writing your own loop by making use of Python's <code>sum()</code> function :</p>
<pre><code>def sample_mean(samples):
return sum(samples) / float(len(samples))
</code></pre>
<p>Call the function like this:</p>
<pre><code>>>> print(sample_mean([1,2,3,4,5]))
3.0
</code></pre>
<p>Note the use of <code>float()</code> to ensure that the division operation does not lose the fractional part. This is only an issue in Python 2 which uses integer division by default. Alternatively you could add this to the top of your script:</p>
<pre><code>from __future__ import division
</code></pre>
<p>If you are sure that you only need to support Python 3 you can remove the <code>float()</code> and ignore the above.</p>
| 1 | 2016-09-27T00:45:12Z | [
"python"
]
|
Generating data from meshgrid data (Numpy) | 39,714,176 | <p>I'd like to ask how to generate corresponding values from a meshgrid. I have a function "foo" that takes one 1D array of length 2, and returns some real number. </p>
<pre><code>import numpy as np
def foo(X):
#this function takes a vector, e.g., np.array([2,3]), and returns a real number.
return sum(X)**np.sin( sum(X) );
x = np.arange(-2, 1, 1) # points in the x axis
y = np.arange( 3, 8, 1) # points in the y axis
X, Y = np.meshgrid(x, y) # X, Y : grid
</code></pre>
<p>I generate X and Y grids using meshgrid.</p>
<p>Then, how can I generate corresponding Z values using the "foo" function, in order to plot them in 3D, e.g., with the plot_surface function and the X, Y, Z values?</p>
<p>Here the question is how to generate Z values, which have the same shape as X and Y, using the "foo" function. Since my "foo" function only takes a 1D array, I do not know how I can use this function with X and Y to generate the corresponding Z values.</p>
| 2 | 2016-09-27T00:09:55Z | 39,714,550 | <p>Stack your two numpy arrays in "depth" using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html" rel="nofollow"><code>np.dstack</code></a>, and then modify your <code>foo</code> function, so that it operates on only the last axis of your stacked array. This is easily done using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html" rel="nofollow"><code>np.sum</code></a> with parameter <code>axis=-1</code>, instead of using the builtin <code>sum</code>:</p>
<pre><code>import numpy as np
def foo(xy):
return np.sum(xy, axis=-1) ** np.sin(np.sum(xy, axis=-1))
x = np.arange(-2, 1, 1) # points in the x axis
y = np.arange( 3, 8, 1) # points in the y axis
X, Y = np.meshgrid(x, y) # X, Y : grid
XY = np.dstack((X, Y))
</code></pre>
<p>And now, you should get:</p>
<pre><code>>>> XY.shape
(5, 3, 2)
>>> foo(XY)
array([[ 1. , 1.87813065, 1.1677002 ],
[ 1.87813065, 1.1677002 , 0.35023496],
[ 1.1677002 , 0.35023496, 0.2136686 ],
[ 0.35023496, 0.2136686 , 0.60613935],
[ 0.2136686 , 0.60613935, 3.59102217]])
</code></pre>
<hr>
<p>If you want to achieve the same effect, but <em>without</em> modifying <code>foo</code>, then you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow"><code>np.apply_along_axis</code></a>, which should do exactly what you need:</p>
<pre><code>>>> np.apply_along_axis(foo, -1, XY)
array([[ 1. , 1.87813065, 1.1677002 ],
[ 1.87813065, 1.1677002 , 0.35023496],
[ 1.1677002 , 0.35023496, 0.2136686 ],
[ 0.35023496, 0.2136686 , 0.60613935],
[ 0.2136686 , 0.60613935, 3.59102217]])
</code></pre>
| 1 | 2016-09-27T00:59:38Z | [
"python",
"numpy",
"matplotlib"
]
|
how to rotate xticks on one axis of figure in matplotlib without "getting" the labels as a list | 39,714,183 | <p>suppose i have the following code which creates one matplotlib figure with two axes, the second of which has x-axis labels as dates:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
x1 = np.arange(0,30)
x2 = pd.date_range('1/1/2016', periods=30, freq='D')
y1 = np.random.randn(30)
y2 = np.random.randn(30)
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize=(18,5))
ax[0].scatter(x1,y1)
ax[1].scatter(x2,y2)
</code></pre>
<p>displaying this in an ipython notebook will show the x axis labels of the graph on the right as running into one another. i would like to rotate the labels to improve visibility. all of the documentation and online searching seems to suggest one of the following 2 options (both after the last line above): </p>
<p><strong>#1</strong></p>
<pre><code>plt.setp(ax[1].xaxis.get_majorticklabels(),rotation=90,horizontalalignment='right')
</code></pre>
<p><strong>or #2</strong></p>
<pre><code>plt.xticks(rotation=90)
</code></pre>
<p>either of these will work but will also print a list of labels (which for some reason is different in the first example than in the second)</p>
<p>how do i accomplish the rotation/display without also outputting some array? </p>
| 0 | 2016-09-27T00:10:46Z | 39,736,364 | <p>I was able to use this approach. Not sure if it is the most elegant way, but it works without outputting an array:</p>
<pre><code>for tick in ax[1].get_xticklabels():
tick.set_rotation(90)
</code></pre>
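<p>Another option that should also avoid the printed list in a notebook is to suppress the expression's return value, either by ending the line with a semicolon or by assigning the result to a throwaway name, e.g.:</p>
<pre><code>plt.setp(ax[1].xaxis.get_majorticklabels(), rotation=90, horizontalalignment='right');

# or
_ = plt.xticks(rotation=90)
</code></pre>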
| 0 | 2016-09-28T00:46:53Z | [
"python",
"matplotlib"
]
|
scrapy shell not opening long link | 39,714,205 | <p>I'm dealing with
scrapy shell. URL that I'm trying to crawl is: <a href="http://allegro.pl/sportowe-uzywane-251188?a_enum[127779][15]=15&a_text_i[1][0]=2004&a_text_i[1][1]=2009&a_text_i[5][0]=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-5-0913" rel="nofollow">http://allegro.pl/sportowe-uzywane-251188?a_enum[127779][15]=15&a_text_i[1][0]=2004&a_text_i[1][1]=2009&a_text_i[5][0]=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-5-0913</a></p>
<p>But when I do "view(response)" I'm getting a blank page.
The page looks like it has not loaded. </p>
<pre><code>>>> response.css("title")
[]
</code></pre>
<p>Now the fun part is that sometimes it loads properly with the same set of commands.</p>
| 0 | 2016-09-27T00:13:51Z | 39,714,274 | <p>It's working for me. I suggest you start with a very basic tutorial:</p>
<pre><code>import scrapy
class BlogSpider(scrapy.Spider):
name = 'blogspider'
start_urls = ['http://allegro.pl/sportowe-uzywane-251188?a_enum%5B127779%5D%5B15%5D=15&a_text_i%5B1%5D%5B0%5D=2004&a_text_i%5B1%5D%5B1%5D=2009&a_text_i%5B5%5D%5B0%5D=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-1-0913']
def parse(self, response):
print "----------------------------------------------------------------"
print response.body
print "----------------------------------------------------------------"
</code></pre>
<p>I'm able to see the body of the page. <code>view(response)</code> is wrong, undefined function.</p>
<p>Save this code as <code>myspider.py</code> and run it with <code>scrapy runspider myspider.py</code>. You will see a big string printed to your terminal; that is the body between the <code>-------------</code> lines.</p>
<p><strong>For Scrapy Shell:</strong></p>
<p>Start in shell mode: <code>scrapy shell</code></p>
<p>Just run:</p>
<pre><code>>>> fetch("http://allegro.pl/sportowe-uzywane-251188?a_enum%5B127779%5D%5B15%5D=15&a_text_i%5B1%5D%5B0%5D=2004&a_text_i%5B1%5D%5B1%5D=2009&a_text_i%5B5%5D%5B0%5D=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-1-0913")
>>> view(response)
</code></pre>
<p>It will open the scraped page in your default browser. Your URL is working for me.</p>
<p>And for title tag it shows:</p>
<pre><code>>>> response.css("title")
[<Selector xpath=u'descendant-or-self::title' data=u'<title>Gsxr w Sportowe U\u017cywane - Motocyk'>]
</code></pre>
<p>Crawled/scraped web page will be saved under <code>/tmp</code> directory like <code>/tmp/tmpn8wziQ.html</code></p>
| 0 | 2016-09-27T00:22:10Z | [
"python",
"scrapy"
]
|
scrapy shell not opening long link | 39,714,205 | <p>I'm dealing with
scrapy shell. URL that I'm trying to crawl is: <a href="http://allegro.pl/sportowe-uzywane-251188?a_enum[127779][15]=15&a_text_i[1][0]=2004&a_text_i[1][1]=2009&a_text_i[5][0]=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-5-0913" rel="nofollow">http://allegro.pl/sportowe-uzywane-251188?a_enum[127779][15]=15&a_text_i[1][0]=2004&a_text_i[1][1]=2009&a_text_i[5][0]=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-5-0913</a></p>
<p>But when I do "view(response)" I'm getting a blank page.
The page looks like it has not loaded. </p>
<pre><code>>>> response.css("title")
[]
</code></pre>
<p>Now the fun part is that sometimes it loads properly with the same set of commands.</p>
| 0 | 2016-09-27T00:13:51Z | 39,736,462 | <p>Thank you very much mertyildiran for the help.</p>
<p>scrapy shell is not working for me. Sometimes it fetches the page, but most of the time it does not. I don't know why.</p>
<p>Anyway, I ended up with code that works great every single time.</p>
<pre><code>import scrapy

class QuotesSpider(scrapy.Spider):
    name = "allegro"
    start_urls = ['http://allegro.pl/sportowe-uzywane-251188?a_enum%5B127779%5D%5B15%5D=15&a_text_i%5B1%5D%5B0%5D=2004&a_text_i%5B1%5D%5B1%5D=2009&a_text_i%5B5%5D%5B0%5D=950&id=251188&offerTypeBuyNow=1&order=p&string=gsxr&bmatch=base-relevance-aut-1-1-0913']

    def parse(self, response):
        for lista in response.css("article.offer"):
            yield {
                'link': lista.css('a.offer-title::attr(href)').extract(),
            }
</code></pre>
| 0 | 2016-09-28T01:00:07Z | [
"python",
"scrapy"
]
|
missing required Charfield in django is saved as empty string and do not raise an error | 39,714,214 | <p>If I try to save incomplete model instance in Django 1.10, I would expect Django to raise an error. It does not seem to be the case.</p>
<p>models.py:</p>
<pre><code>from django.db import models
class Essai(models.Model):
ch1 = models.CharField(max_length=100, blank=False)
ch2 = models.CharField(max_length=100, blank=False)
</code></pre>
<p>So I have two fields not allowed to be empty (default behavior; the <code>NOT NULL</code> restriction is applied by Django at MySQL table creation). I expect Django to raise an error if one of the fields is not set before storing.</p>
<p>However, when I create an incomplete instance, the data is stored just fine:</p>
<pre><code>>>> from test.models import Essai
>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.save()
>>> bouh.id
9
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
''
>>>
</code></pre>
<p>I would have expected Django to raise an error. If I force <code>ch2</code> to <code>None</code>, however, it raises an error:</p>
<pre><code>>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.ch2 = None
>>> bouh.save()
Traceback (most recent call last):
(...)
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: test_essai.ch2
>>> bouh.id
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
>>>
</code></pre>
<p>Is this normal behavior? Did I miss something? Why is Django not raising an error as default behavior in this simple case?</p>
<p>Thanks for your lights!</p>
<p><code>EDIT</code></p>
<p>I got a few replies pointing out the fact that in SQL empty string <code>""</code> is not equivalent to NULL, as stated in <a href="http://stackoverflow.com/questions/17816229/django-model-blank-false-does-not-work">Django model blank=False does not work?</a>
However, if we look at ModelForm behavior, there seems to be an inconsistency in the Django docs.</p>
<p>see question
<a href="http://stackoverflow.com/questions/39739029/followup-missing-required-charfield-in-django-modelform-is-saved-as-empty-stri">Followup : missing required Charfield in django Modelform is saved as empty string and do not raise an error</a></p>
| 0 | 2016-09-27T00:15:11Z | 39,714,325 | <p>It seems like you have a slight misunderstanding of what <code>blank</code> stands for </p>
<blockquote>
<p>Note that this is different than null. null is purely
database-related, whereas blank is validation-related. If a field has
blank=True, form validation will allow entry of an empty value. If a
field has blank=False, the field will be required.</p>
</blockquote>
<p>And also</p>
<blockquote>
<p>Avoid using null on string-based fields such as CharField and
TextField because empty string values will always be stored as empty
strings, not as NULL. If a string-based field has null=True, that
means it has two possible values for "no data": NULL, and the empty
string. In most cases, it's redundant to have two possible values for
"no data"; the Django convention is to use the empty string, not NULL.</p>
</blockquote>
<p>Thus in your case, <code>blank=False</code> does not mean Django will refuse to write an empty string for <code>ch1</code> or <code>ch2</code>, but it will refuse to let you leave that field blank if you use a ModelForm with it.</p>
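<p>If you want the same check at the model level without a ModelForm, one option (a sketch, not something <code>save()</code> does by itself) is to call <code>full_clean()</code> before saving; it runs the blank/required validation and raises a <code>ValidationError</code>:</p>
<pre><code>from django.core.exceptions import ValidationError

bouh = Essai(ch1="some content for ch1")   # ch2 left as the empty string
try:
    bouh.full_clean()   # validates blank=False fields, unlike save()
    bouh.save()
except ValidationError as e:
    print(e.message_dict)   # {'ch2': ['This field cannot be blank.']}
</code></pre>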
| 0 | 2016-09-27T00:29:14Z | [
"python",
"mysql",
"django"
]
|
missing required Charfield in django is saved as empty string and do not raise an error | 39,714,214 | <p>If I try to save incomplete model instance in Django 1.10, I would expect Django to raise an error. It does not seem to be the case.</p>
<p>models.py:</p>
<pre><code>from django.db import models
class Essai(models.Model):
ch1 = models.CharField(max_length=100, blank=False)
ch2 = models.CharField(max_length=100, blank=False)
</code></pre>
<p>So I have two fields not allowed to be empty (default behavior; the <code>NOT NULL</code> restriction is applied by Django at MySQL table creation). I expect Django to raise an error if one of the fields is not set before storing.</p>
<p>However, when I create an incomplete instance, the data is stored just fine:</p>
<pre><code>>>> from test.models import Essai
>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.save()
>>> bouh.id
9
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
''
>>>
</code></pre>
<p>I would have expected Django to raise an error. If I force <code>ch2</code> to <code>None</code>, however, it raises an error:</p>
<pre><code>>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.ch2 = None
>>> bouh.save()
Traceback (most recent call last):
(...)
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: test_essai.ch2
>>> bouh.id
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
>>>
</code></pre>
<p>Is this normal behavior? Did I miss something? Why is Django not raising an error as default behavior in this simple case?</p>
<p>Thanks for your lights!</p>
<p><code>EDIT</code></p>
<p>I got a few replies pointing out the fact that in SQL empty string <code>""</code> is not equivalent to NULL, as stated in <a href="http://stackoverflow.com/questions/17816229/django-model-blank-false-does-not-work">Django model blank=False does not work?</a>
However, if we look at ModelForm behavior, there seems to be an inconsistency in the Django docs.</p>
<p>see question
<a href="http://stackoverflow.com/questions/39739029/followup-missing-required-charfield-in-django-modelform-is-saved-as-empty-stri">Followup : missing required Charfield in django Modelform is saved as empty string and do not raise an error</a></p>
| 0 | 2016-09-27T00:15:11Z | 39,716,793 | <blockquote>
<p>Use this,</p>
</blockquote>
<pre><code>from django.db import models
class Essai(models.Model):
ch1 = models.CharField(max_length=100)
ch2 = models.CharField(max_length=100)
</code></pre>
| 0 | 2016-09-27T05:42:01Z | [
"python",
"mysql",
"django"
]
|
Why do large values cause an infinite loop in this program? | 39,714,236 | <p>This is a program I made to calculate a credit card balance. It works for most inputs, but when <code>balance</code> gets too large, the program runs into an infinite loop. What can be done to improve the code so that it handles even the larger values?</p>
<pre><code>monthlyPayment = 0
monthlyInterestRate = annualInterestRate /12
newbalance = balance
month = 0
while newbalance > 0:
monthlyPayment += .1
newbalance = balance
for month in range(1,13):
newbalance -= monthlyPayment
newbalance += monthlyInterestRate * newbalance
month += 1
print("Lowest Payment:" + str(round(monthlyPayment,2)))
</code></pre>
| -3 | 2016-09-27T00:16:52Z | 39,714,292 | <p>You must make the testing condition of <code>while</code> become <code>False</code>. Since it is <code>newbalance > 0</code>, it should eventually emerge from the <code>for</code> loop with a positive value.</p>
| 0 | 2016-09-27T00:23:47Z | [
"python",
"python-3.x",
"infinite-loop"
]
|
Why do large values cause an infinite loop in this program? | 39,714,236 | <p>This is a program I made to calculate a credit card balance. It works for most inputs, but when <code>balance</code> gets too large, the program runs into an infinite loop. What can be done to improve the code so that it handles even the larger values?</p>
<pre><code>monthlyPayment = 0
monthlyInterestRate = annualInterestRate /12
newbalance = balance
month = 0
while newbalance > 0:
monthlyPayment += .1
newbalance = balance
for month in range(1,13):
newbalance -= monthlyPayment
newbalance += monthlyInterestRate * newbalance
month += 1
print("Lowest Payment:" + str(round(monthlyPayment,2)))
</code></pre>
| -3 | 2016-09-27T00:16:52Z | 39,714,298 | <pre><code>while newbalance > 0:
monthlyPayment += .1
newbalance = balance
</code></pre>
<p>Here is your problem. As long as <code>balance</code> is greater than 0, <code>newbalance</code> will always be reset to <code>balance</code> and the while loop will evaluate as true and will cause an infinite loop. </p>
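<p>A side note on the original question about large balances: with a fixed 0.1 increment the search itself also becomes extremely slow once balances get big. One common speed-up (a sketch only, assuming the same <code>balance</code> and <code>annualInterestRate</code> variables are already defined) is a bisection search over the payment:</p>
<pre><code>monthlyInterestRate = annualInterestRate / 12
lower = balance / 12
upper = (balance * (1 + monthlyInterestRate) ** 12) / 12

while upper - lower > 0.01:
    monthlyPayment = (lower + upper) / 2
    newbalance = balance
    for month in range(12):
        newbalance -= monthlyPayment
        newbalance += monthlyInterestRate * newbalance
    if newbalance > 0:
        lower = monthlyPayment   # payment too small
    else:
        upper = monthlyPayment   # payment large enough
print("Lowest Payment: " + str(round(monthlyPayment, 2)))
</code></pre>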
| 4 | 2016-09-27T00:24:10Z | [
"python",
"python-3.x",
"infinite-loop"
]
|
How to use pd.melt for two rows as headers | 39,714,267 | <p>I have a dataframe that looks like this:</p>
<pre><code> DATETIME | TAGNAME1 | TAGNAME2
0 DESCRIPTION | TAG_DESCRIPTION | TAG2_DESCRIPTION
1 01/01/2015 00:00:00 | 100 | 200
</code></pre>
<p>I need to have following result</p>
<pre><code> DATETIME | TAGNAME | DESCRIPTION | VALUE
0 01/01/2015 00:00:00 | TAGNAME1 | TAG1_DESCRIPTION | 100
1 01/01/2015 00:00:00 | TAGNAME2 | TAG2_DESCRIPTION | 200
</code></pre>
<p>I saw some examples using pd.melt so I ran following command</p>
<pre><code>pd.melt(df, id_vars=['DATETIME'], var_name=['TagName'], value_name='Value')
</code></pre>
<p>But I am missing the DESCRIPTION as a new column</p>
<p>Is there any way to achieve what I need?</p>
<p>Thanks in advance</p>
| 0 | 2016-09-27T00:20:49Z | 39,716,140 | <p>Consider slicing dataframe by rows and running two melts with final merge:</p>
<pre><code>from io import StringIO
import pandas as pd
data = '''DATETIME|TAGNAME1|TAGNAME2
DESCRIPTION|TAG_DESCRIPTION|TAG2_DESCRIPTION
1/01/2015 00:00:00|100|200'''
df = pd.read_table(StringIO(data), sep="|")
# DATETIME TAGNAME1 TAGNAME2
# 0 DESCRIPTION TAG_DESCRIPTION TAG2_DESCRIPTION
# 1 1/01/2015 00:00:00 100 200
df1 = df[0:1] # FIRST ROW
df2 = df[1:len(df)] # SECOND TO LAST ROW
mdf = pd.merge(pd.melt(df1, id_vars=['DATETIME'], var_name='TAGNAME',
value_name='DESCRIPTION')[['TAGNAME', 'DESCRIPTION']],
pd.melt(df2, id_vars=['DATETIME'], var_name='TAGNAME',
value_name='VALUE'),
on=['TAGNAME'])
mdf = mdf[['DATETIME', 'TAGNAME', 'DESCRIPTION', 'VALUE']]
# DATETIME TAGNAME DESCRIPTION VALUE
# 0 1/01/2015 00:00:00 TAGNAME1 TAG_DESCRIPTION 100
# 1 1/01/2015 00:00:00 TAGNAME2 TAG2_DESCRIPTION 200
</code></pre>
| 1 | 2016-09-27T04:46:06Z | [
"python",
"pandas"
]
|
Python XML Parsing - Aligning Indexed Elements | 39,714,273 | <p>I am in dire need of some help on parsing an xml file. I have been spinning my wheels for a couple of weeks and have not made much progress. I have the below XML snippet and am trying to create a column with the names and values. The problem is that some names are indexed and causing alignment issues when I print them side by side.</p>
<p>Can you please help or direct me to parsing this XML and get the desired print? Ideally I would like to do it without using lxml as the system does not have this module but do not mind suggestions using lxml as well.</p>
<p>XML File Example:</p>
<pre><code><root>
<elments>
<mgmtid>
<date>20160926</date>
<gp>3600</gp>
<name p="">watermelons</name>
<name p="">bananas</name>
<name p="">oranges</name>
<valuegroup>
<objid>None</objid>
<value p="">10</value>
<value p="">15</value>
<value p="">20</value>
</valuegroup>
</mgmtid>
<mgmtid>
<date>20160926</date>
<gp>3600</gp>
<name p="">apples</name>
<valuegroup>
<objid>red</objid>
<value p="">100</value>
</valuegroup>
<valuegroup>
<objid>blue</objid>
<value p="">200</value>
</valuegroup>
<valuegroup>
<objid>yellow</objid>
<value p="">300</value>
</valuegroup>
<valuegroup>
<objid>white</objid>
<value p="">400</value>
</valuegroup>
<valuegroup>
<objid>green</objid>
<value p="">500</value>
</valuegroup>
</mgmtid>
<mgmtid>
<date>20160926</date>
<gp>3600</gp>
<name p="">strawberry</name>
<name p="">guava</name>
<valuegroup>
<objid>None</objid>
<value p="">650</value>
<value p="">750</value>
</valuegroup>
</mgmtid>
</elments>
</root>
</code></pre>
<p>My attempt (which miserably failed) at getting names and values. As you can see, the values are not aligned with their names.</p>
<pre><code>import xml.etree.ElementTree as ET
import itertools
import collections
tree = ET.parse('test_xml_file.xml')
root = tree.getroot()
names = []
values = []
for i in (tree.findall('.//')):
if i.tag == 'name':
n = (i.tag, i.text)
names.append(n[0] + ' ' + str(n[1]))
for i in (tree.findall('.//')):
if i.tag == 'value' or i.tag == 'objid':
v = (i.tag, i.text)
values.append(v[0] + ' ' + str(v[1]))
print('=' * 45)
for n, v in itertools.zip_longest(names, values):
print(str(n).ljust(20, ' ') + str(v))
</code></pre>
<p>Current Output:</p>
<pre><code>name watermelons objid None
name bananas value 10
name oranges value 15
name apples value 20
name strawberry objid red
name guava value 100
None objid blue
None value 200
None objid yellow
None value 300
None objid white
None value 400
None objid green
None value 500
None objid None
None value 650
None value 750
</code></pre>
<p>Desired Output:</p>
<pre><code>=============================
name Index value
=============================
watermelons None 10
bananas None 15
oranges None 20
apples red 100
apples blue 200
apples yellow 300
apples white 400
apples green 500
strawberry None 650
guava None 750
</code></pre>
| 0 | 2016-09-27T00:21:33Z | 39,714,765 | <p>I don't see a quick and fancy way of solving it, but this code works:</p>
<pre><code>import xml.etree.ElementTree as ET
tree = ET.parse('test_xml_file.xml')
elements = []
mgmtids = tree.getroot().findall(".//mgmtid")
for mgmtid in mgmtids:
names = mgmtid.findall(".//name")
objids = mgmtid.findall(".//valuegroup/objid")
values = mgmtid.findall(".//valuegroup/value")
if len(names) == len(values):
for i in range(len(names)):
elements.append([names[i].text,objids[0].text,values[i].text])
elif len(objids)==len(values):
for i in range(len(values)):
elements.append([names[0].text,objids[i].text,values[i].text])
elif len(names) == len(objids):
for i in range(len(names)):
elements.append([names[i].text,objids[i].text,values[0].text])
#elements.append([names[i].text,objids[i].text,values[i].text for i in len(names)])
print "\n".join([" - ".join([text for text in el]) for el in elements])
</code></pre>
<p>Hope it helps!</p>
| 0 | 2016-09-27T01:36:45Z | [
"python",
"xml",
"parsing",
"xml-parsing",
"lxml"
]
|
Python Creating a dictionary of dictionaries structure, nested values are the same | 39,714,344 | <p>I'm attempting to build a data structure that can change in size and be posted to Firebase. The issue I am seeing is during the construction of the data structure. I have the following code written: </p>
<pre><code>for i in range(len(results)):
designData = {"Design Flag" : results[i][5],
"performance" : results[i][6]}
for j in range(len(objectiveNameArray)):
objectives[objectiveNameArray[j]] = results[i][columnHeaders.index(objectiveNameArray[j])]
designData["objectives"] = copy.copy(objectives)
for k in range(len(variableNameArray)):
variables[variableNameArray[k]] = results[i][columnHeaders.index(variableNameArray[k])]
designData["variables"] = copy.copy(variables)
for l in range(len(responseNameArray)):
responses[responseNameArray[l]] = results[i][columnHeaders.index(responseNameArray[l])]
designData["responses"] = copy.copy(responses)
for m in range(len(constraintNameArray)):
constraintViolated = False
if constraintNameArray[m][1] == "More than":
if results[i][columnHeaders.index(constraintNameArray[m][0])] > constraintNameArray[m][2]:
constraintViolated = True
else:
constraintViolated = False
elif constraintNameArray[m][1] == "Less than":
if results[i][columnHeaders.index(constraintNameArray[m][0])] < constraintNameArray[m][2]:
constraintViolated = True
else:
constraintViolated = False
if constraintNameArray[m][0] in constraints:
if constraints[constraintNameArray[m][0]]["violated"] == True:
constraintViolated = True
constraints[constraintNameArray[m][0]] = {"value" : results[i][columnHeaders.index(constraintNameArray[m][0])], "violated" : constraintViolated}
designData["constraints"] = copy.copy(constraints)
data[studyName][results[i][4]] = designData
</code></pre>
<p>When I include print(designData) inside of the for loop, I see that my results are changing as expected for each loop iteration.</p>
<p>However, if I include print(data) outside of the for loop, I get a data structure where the values added by the results array are all the same values for each iteration of the loop even though the key is different. </p>
<p><a href="http://i.stack.imgur.com/XAtis.png" rel="nofollow">Comparing print(data) and print(designData)</a></p>
<p>I apologize in advance if this isn't enough information this is my first post on Stack so please be patient with me. </p>
| 0 | 2016-09-27T00:31:52Z | 39,714,477 | <p>It is probably because you put the variables like <code>objectives</code>, <code>variables</code>, <code>responses</code> directly into <code>designData</code>. Try the following:</p>
<pre><code>import copy
....
designData['objectives'] = copy.copy(objectives)
....
designData['variables'] = copy.copy(variables)
....
designData['responses'] = copy.copy(responses)
</code></pre>
<p>For similar questions, see <a href="http://stackoverflow.com/questions/2612802/how-to-clone-or-copy-a-list-in-python">copy a list</a>.</p>
| 0 | 2016-09-27T00:50:04Z | [
"python",
"dictionary"
]
|
Search for element in list of list Python | 39,714,372 | <p>New to programming.
I'm trying to create a program that has a list of lists (as a database): the first element being an invoice, the second the value of that invoice, and the third the margin of income. Given an invoice (input), it will search within the list of lists (database).</p>
<p>I currently have:</p>
<pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = input("Enter invoice: ")
if invoice in data_base :
print ("Invoice is valid")
else :
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>When I run the program and I enter a valid element within the list, it does not recognize it as an element within the list.</p>
<p>What am I doing wrong?</p>
<p>What is the code to be given the index if the element is within the list of lists?</p>
| 0 | 2016-09-27T00:36:40Z | 39,714,442 | <p><code>in</code> keyword in Python is used to check the membership of an element in a <code>iterator</code>. That is each element of the <code>iterator</code> is checked with the operand and returns True if the operand exists in the <code>iterator</code>.</p>
<p>The <code>in</code> keyword is specifically looking for invoice in data_base which is a list. Now, it is a different story that each element of this list is another list.</p>
<p>The line <code>if invoice in data_base :</code> does the following,
Say the user input was "f1"
now "f1" is compared to the list <code>["f1",2000,.24]</code>
that is </p>
<p><code>"f1" == ["f1",2000,.24]</code> which is False, you are assuming this should return <code>True</code> but it does not, since string "f1" is not equal to the list <code>["f1",2000,.24]</code>.</p>
<p>then </p>
<p><code>"f1" == ["f2",150000,.32]</code> which is False</p>
<p>and finally </p>
<p><code>"f1" == ["f3",345000,.32]</code> which is also False.</p>
<p>You are essentially comparing the invoice(user input) to the whole element instead of just the first element of the element.</p>
<p>This program should give you the correct output. </p>
<pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoices_db = [ele[0] for ele in Data_base]
invoice = input("Enter invoice: ")
if invoice in invoices_db :
print ("Invoice is valid")
else :
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>For more details refer the answer on <a href="http://stackoverflow.com/questions/19775692/use-and-meaning-of-in-in-an-if-statement-in-python">Use and meaning of "in" in an if statement in python</a></p>
| 1 | 2016-09-27T00:46:07Z | [
"python",
"list",
"search",
"element"
]
|
Search for element in list of list Python | 39,714,372 | <p>New to programming.
I'm trying to create a program that has a list of lists (as a database): the first element being an invoice, the second the value of that invoice, and the third the margin of income. Given an invoice (input), it will search within the list of lists (database).</p>
<p>I currently have:</p>
<pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = input("Enter invoice: ")
if invoice in data_base :
print ("Invoice is valid")
else :
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>When I run the program and I enter a valid element within the list, it does not recognize it as an element within the list.</p>
<p>What am I doing wrong?</p>
<p>What is the code to be given the index if the element is within the list of lists?</p>
| 0 | 2016-09-27T00:36:40Z | 39,714,529 | <p>If you wish to search by invoice number, a better data structure to use is the dictionary. Use the invoice number as the key, and the invoice value and margin as values. Example:</p>
<pre><code>invoices = {'f1': [2000, .24],
'f2': [150000, .32],
'f3': [345000, .32]}
invoice = input("Enter invoice: ")
if invoice in invoices:
value, margin = invoices[invoice]
print('Invoice {} is valid. Value {}, margin {}'.format(invoice, value, margin))
else:
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>Now when you say <code>invoice in invoices</code> Python will search for the invoice number in the dictionary's keys. Compared to searching in a list this is a very efficient operation (O(1) vs. O(n)).</p>
| 1 | 2016-09-27T00:57:14Z | [
"python",
"list",
"search",
"element"
]
|
Search for element in list of list Python | 39,714,372 | <p>New to programming.
I'm trying to create a program that has a list of lists (as a database): the first element being an invoice, the second the value of that invoice, and the third the margin of income. Given an invoice (input), it will search within the list of lists (database).</p>
<p>I currently have:</p>
<pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = input("Enter invoice: ")
if invoice in data_base :
print ("Invoice is valid")
else :
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>When I run the program and I enter a valid element within the list, it does not recognize it as an element within the list.</p>
<p>What am I doing wrong?</p>
<p>What is the code to be given the index if the element is within the list of lists?</p>
| 0 | 2016-09-27T00:36:40Z | 39,714,543 | <p>Here's the solution I found:</p>
<pre><code>data_base = [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = input("Enter invoice: ")
found = False
for data in data_base:
if invoice == data[0]:
found = True
break
result = "Invoice is valid" if found else "Invoice does not exist, enter a valid document: "
print(result)
</code></pre>
<p>First you should iterate over all <code>data_base</code> elements and then check if the first element is the invoice you're looking for.</p>
| 0 | 2016-09-27T00:58:32Z | [
"python",
"list",
"search",
"element"
]
|
Search for element in list of list Python | 39,714,372 | <p>New to programming.
I'm trying to create a program that has a list of lists (as a database): the first element being an invoice, the second the value of that invoice, and the third the margin of income. Given an invoice (input), it will search within the list of lists (database).</p>
<p>I currently have:</p>
<pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = input("Enter invoice: ")
if invoice in data_base :
print ("Invoice is valid")
else :
print("Invoice does not exist, enter a valid document: ")
</code></pre>
<p>When I run the program and I enter a valid element within the list, it does not recognize it as an element within the list.</p>
<p>What am I doing wrong?</p>
<p>What is the code to be given the index if the element is within the list of lists?</p>
| 0 | 2016-09-27T00:36:40Z | 39,714,606 | <pre><code>Data_base= [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]
invoice = 2000
result = [i for i in Data_base if i[1] == invoice]
for item in result:
print item
</code></pre>
| 0 | 2016-09-27T01:11:11Z | [
"python",
"list",
"search",
"element"
]
|
NaN results in tensorflow Neural Network | 39,714,374 | <p>I have this problem that after one iteration nearly all my parameters (cost function, weights, hypothesis function, etc.) output 'NaN'. My code is similar to the tensorflow tutorial MNIST-Expert (<a href="https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html</a>). I looked for solutions already and so far I tried: reducing the learning rate to nearly zero and setting it to zero, using AdamOptimizer instead of gradient descent, using the sigmoid function for the hypothesis function in the last layer, and using only numpy functions. I have some negative and zero values in my input data, so I can't use the logarithmic cross entropy instead of the quadratic cost function. The result is the same. My input data consists of stresses and strains of soils.</p>
<pre><code>import tensorflow as tf
import Datafiles3_pv_complete as soil
import numpy as np
m_training = int(18.0)
m_cv = int(5.0)
m_test = int(5.0)
total_examples = 28
" range for running "
range_training = xrange(0,m_training)
range_cv = xrange(m_training,(m_training+m_cv))
range_test = xrange((m_training+m_cv),total_examples)
""" Using interactive Sessions"""
sess = tf.InteractiveSession()
""" creating input and output vectors """
x = tf.placeholder(tf.float32, shape=[None, 11])
y_true = tf.placeholder(tf.float32, shape=[None, 3])
""" Standard Deviation Calculation"""
stdev = np.divide(2.0,np.sqrt(np.prod(x.get_shape().as_list()[1:])))
""" Weights and Biases """
def weights(shape):
initial = tf.truncated_normal(shape, stddev=stdev)
return tf.Variable(initial)
def bias(shape):
initial = tf.truncated_normal(shape, stddev=1.0)
return tf.Variable(initial)
""" Creating weights and biases for all layers """
theta1 = weights([11,7])
bias1 = bias([1,7])
theta2 = weights([7,7])
bias2 = bias([1,7])
"Last layer"
theta3 = weights([7,3])
bias3 = bias([1,3])
""" Hidden layer input (Sum of weights, activation functions and bias)
z = theta^T * activation + bias
"""
def Z_Layer(activation,theta,bias):
return tf.add(tf.matmul(activation,theta),bias)
""" Creating the sigmoid function
sigmoid = 1 / (1 + exp(-z))
"""
def Sigmoid(z):
return tf.div(tf.constant(1.0),tf.add(tf.constant(1.0), tf.exp(tf.neg(z))))
""" hypothesis functions - predicted output """
' layer 1 - input layer '
hyp1 = x
' layer 2 '
z2 = Z_Layer(hyp1, theta1, bias1)
hyp2 = Sigmoid(z2)
' layer 3 '
z3 = Z_Layer(hyp2, theta2, bias2)
hyp3 = Sigmoid(z3)
' layer 4 - output layer '
zL = Z_Layer(hyp3, theta3, bias3)
hypL = tf.add( tf.add(tf.pow(zL,3), tf.pow(zL,2) ), zL)
""" Cost function """
cost_function = tf.mul( tf.div(0.5, m_training), tf.pow( tf.sub(hypL, y_true), 2))
#cross_entropy = -tf.reduce_sum(y_true*tf.log(hypL) + (1-y_true)*tf.log(1-hypL))
""" Gradient Descent """
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.003).minimize(cost_function)
""" Training and Evaluation """
correct_prediction = tf.equal(tf.arg_max(hypL, 1), tf.arg_max(y_true, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
keep_prob = tf.placeholder(tf.float32)
""" Testing - Initialise lists """
hyp1_test = []
z2_test = []
hyp2_test = []
z3_test = []
hyp3_test = []
zL_test = []
hypL_test = []
cost_function_test =[]
complete_error_test = []
theta1_test = []
theta2_test = []
theta3_test = []
bias1_test = []
bias2_test = []
bias3_test = []
""" ------------------------- """
complete_error_init = tf.abs(tf.reduce_mean(tf.sub(hypL,y_true),1))
training_error=[]
for j in range_training:
feedj = {x: soil.input_scale[j], y_true: soil.output_scale[j] , keep_prob: 1.0}
""" ------------------------- """
'Testing - adding to list'
z2_init = z2.eval(feed_dict=feedj)
z2_test.append(z2_init)
hyp2_init = hyp2.eval(feed_dict=feedj)
hyp2_test.append(hyp2_init)
z3_init = z3.eval(feed_dict=feedj)
z3_test.append(z3_init)
hyp3_init = hyp3.eval(feed_dict=feedj)
hyp3_test.append(hyp3_init)
zL_init = zL.eval(feed_dict=feedj)
zL_test.append(zL_init)
hypL_init = hypL.eval(feed_dict=feedj)
hypL_test.append(hypL_init)
cost_function_init = cost_function.eval(feed_dict=feedj)
cost_function_test.append(cost_function_init)
complete_error = complete_error_init.eval(feed_dict=feedj)
complete_error_test.append(complete_error)
print 'number iterations: %g, error (S1, S2, S3): %g, %g, %g' % (j, complete_error[0], complete_error[1], complete_error[2])
theta1_init = theta1.eval()
theta1_test.append(theta1_init)
theta2_init = theta2.eval()
theta2_test.append(theta2_init)
theta3_init = theta3.eval()
theta3_test.append(theta3_init)
bias1_init = bias1.eval()
bias1_test.append(bias1_init)
bias2_init = bias2.eval()
bias2_test.append(bias2_init)
bias3_init = bias3.eval()
bias3_test.append(bias3_init)
""" ------------------------- """
train_accuracy = accuracy.eval(feed_dict=feedj)
print("step %d, training accuracy %g" % (j, train_accuracy))
train_step.run(feed_dict=feedj)
training_error.append(1 - train_accuracy)
cv_error=[]
for k in range_cv:
feedk = {x: soil.input_scale[k], y_true: soil.output_scale[k] , keep_prob: 1.0}
cv_accuracy = accuracy.eval(feed_dict=feedk)
print("cross-validation accuracy %g" % cv_accuracy)
cv_error.append(1-cv_accuracy)
for l in range_test:
print("test accuracy %g" % accuracy.eval(feed_dict={x: soil.input_matrixs[l], y_true: soil.output_matrixs[l], keep_prob: 1.0}))
</code></pre>
<p>The last weeks I was working on a Unit-model for this problem, but the same output occurred. I have no idea what to try next. Hope someone can help me.</p>
<h1>Edit:</h1>
<p>I checked some parameters in detail again. The hypothesis function (hyp) and activation function (z) for layer 3 and 4 (last layer) have the same entries for each data point, i.e. the same value in each line for one column.</p>
| 4 | 2016-09-27T00:36:47Z | 39,716,676 | <p>1e-3 is still fairly high for the classifier you've described. NaN actually means that the weights have tended to infinity, so I would suggest exploring even lower learning rates, around 1e-7 specifically. If it continues to diverge, multiply your learning rate by 0.1, and repeat until the weights are finite-valued.</p>
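<p>One way to experiment with this without rebuilding the graph each time (a sketch, reusing the variables already defined in the question) is to feed the learning rate through a placeholder:</p>
<pre><code>learning_rate = tf.placeholder(tf.float32, shape=[])
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

# inside the training loop, start low and adjust between runs:
train_step.run(feed_dict={x: soil.input_scale[j], y_true: soil.output_scale[j],
                          keep_prob: 1.0, learning_rate: 1e-7})
</code></pre>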
| 0 | 2016-09-27T05:33:00Z | [
"python",
"neural-network",
"tensorflow",
null
]
|
NaN results in tensorflow Neural Network | 39,714,374 | <p>I have this problem that after one iteration nearly all my parameters (cost function, weights, hypothesis function, etc.) output 'NaN'. My code is similar to the tensorflow tutorial MNIST-Expert (<a href="https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html</a>). I looked for solutions already and so far I tried: reducing the learning rate to nearly zero and setting it to zero, using AdamOptimizer instead of gradient descent, using the sigmoid function for the hypothesis function in the last layer, and using only numpy functions. I have some negative and zero values in my input data, so I can't use the logarithmic cross entropy instead of the quadratic cost function. The result is the same. My input data consists of stresses and strains of soils.</p>
<pre><code>import tensorflow as tf
import Datafiles3_pv_complete as soil
import numpy as np
m_training = int(18.0)
m_cv = int(5.0)
m_test = int(5.0)
total_examples = 28
" range for running "
range_training = xrange(0,m_training)
range_cv = xrange(m_training,(m_training+m_cv))
range_test = xrange((m_training+m_cv),total_examples)
""" Using interactive Sessions"""
sess = tf.InteractiveSession()
""" creating input and output vectors """
x = tf.placeholder(tf.float32, shape=[None, 11])
y_true = tf.placeholder(tf.float32, shape=[None, 3])
""" Standard Deviation Calculation"""
stdev = np.divide(2.0,np.sqrt(np.prod(x.get_shape().as_list()[1:])))
""" Weights and Biases """
def weights(shape):
initial = tf.truncated_normal(shape, stddev=stdev)
return tf.Variable(initial)
def bias(shape):
initial = tf.truncated_normal(shape, stddev=1.0)
return tf.Variable(initial)
""" Creating weights and biases for all layers """
theta1 = weights([11,7])
bias1 = bias([1,7])
theta2 = weights([7,7])
bias2 = bias([1,7])
"Last layer"
theta3 = weights([7,3])
bias3 = bias([1,3])
""" Hidden layer input (Sum of weights, activation functions and bias)
z = theta^T * activation + bias
"""
def Z_Layer(activation,theta,bias):
return tf.add(tf.matmul(activation,theta),bias)
""" Creating the sigmoid function
sigmoid = 1 / (1 + exp(-z))
"""
def Sigmoid(z):
return tf.div(tf.constant(1.0),tf.add(tf.constant(1.0), tf.exp(tf.neg(z))))
""" hypothesis functions - predicted output """
' layer 1 - input layer '
hyp1 = x
' layer 2 '
z2 = Z_Layer(hyp1, theta1, bias1)
hyp2 = Sigmoid(z2)
' layer 3 '
z3 = Z_Layer(hyp2, theta2, bias2)
hyp3 = Sigmoid(z3)
' layer 4 - output layer '
zL = Z_Layer(hyp3, theta3, bias3)
hypL = tf.add( tf.add(tf.pow(zL,3), tf.pow(zL,2) ), zL)
""" Cost function """
cost_function = tf.mul( tf.div(0.5, m_training), tf.pow( tf.sub(hypL, y_true), 2))
#cross_entropy = -tf.reduce_sum(y_true*tf.log(hypL) + (1-y_true)*tf.log(1-hypL))
""" Gradient Descent """
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.003).minimize(cost_function)
""" Training and Evaluation """
correct_prediction = tf.equal(tf.arg_max(hypL, 1), tf.arg_max(y_true, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
keep_prob = tf.placeholder(tf.float32)
""" Testing - Initialise lists """
hyp1_test = []
z2_test = []
hyp2_test = []
z3_test = []
hyp3_test = []
zL_test = []
hypL_test = []
cost_function_test =[]
complete_error_test = []
theta1_test = []
theta2_test = []
theta3_test = []
bias1_test = []
bias2_test = []
bias3_test = []
""" ------------------------- """
complete_error_init = tf.abs(tf.reduce_mean(tf.sub(hypL,y_true),1))
training_error=[]
for j in range_training:
feedj = {x: soil.input_scale[j], y_true: soil.output_scale[j] , keep_prob: 1.0}
""" ------------------------- """
'Testing - adding to list'
z2_init = z2.eval(feed_dict=feedj)
z2_test.append(z2_init)
hyp2_init = hyp2.eval(feed_dict=feedj)
hyp2_test.append(hyp2_init)
z3_init = z3.eval(feed_dict=feedj)
z3_test.append(z3_init)
hyp3_init = hyp3.eval(feed_dict=feedj)
hyp3_test.append(hyp3_init)
zL_init = zL.eval(feed_dict=feedj)
zL_test.append(zL_init)
hypL_init = hypL.eval(feed_dict=feedj)
hypL_test.append(hypL_init)
cost_function_init = cost_function.eval(feed_dict=feedj)
cost_function_test.append(cost_function_init)
complete_error = complete_error_init.eval(feed_dict=feedj)
complete_error_test.append(complete_error)
print 'number iterations: %g, error (S1, S2, S3): %g, %g, %g' % (j, complete_error[0], complete_error[1], complete_error[2])
theta1_init = theta1.eval()
theta1_test.append(theta1_init)
theta2_init = theta2.eval()
theta2_test.append(theta2_init)
theta3_init = theta3.eval()
theta3_test.append(theta3_init)
bias1_init = bias1.eval()
bias1_test.append(bias1_init)
bias2_init = bias2.eval()
bias2_test.append(bias2_init)
bias3_init = bias3.eval()
bias3_test.append(bias3_init)
""" ------------------------- """
train_accuracy = accuracy.eval(feed_dict=feedj)
print("step %d, training accuracy %g" % (j, train_accuracy))
train_step.run(feed_dict=feedj)
training_error.append(1 - train_accuracy)
cv_error=[]
for k in range_cv:
feedk = {x: soil.input_scale[k], y_true: soil.output_scale[k] , keep_prob: 1.0}
cv_accuracy = accuracy.eval(feed_dict=feedk)
print("cross-validation accuracy %g" % cv_accuracy)
cv_error.append(1-cv_accuracy)
for l in range_test:
print("test accuracy %g" % accuracy.eval(feed_dict={x: soil.input_matrixs[l], y_true: soil.output_matrixs[l], keep_prob: 1.0}))
</code></pre>
<p>The last weeks I was working on a Unit-model for this problem, but the same output occurred. I have no idea what to try next. Hope someone can help me.</p>
<h1>Edit:</h1>
<p>I checked some parameters in detail again. The hypothesis function (hyp) and activation function (z) for layer 3 and 4 (last layer) have the same entries for each data point, i.e. the same value in each line for one column.</p>
| 4 | 2016-09-27T00:36:47Z | 39,738,059 | <p>Finally, no more NaN values. The solution is to <strong>scale</strong> my input and output data. The result (accuracy) is still not good, but at least I get some real values for the parameters. I tried feature scaling before in other attempts (where I probably had some other mistakes as well) and assumed it wouldn't help with my problem either.</p>
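<p>For reference, a minimal sketch of the kind of min-max scaling meant here (the array names are placeholders for the raw stress/strain matrices):</p>
<pre><code>import numpy as np

def min_max_scale(data):
    """Scale each column of a 2D array to the [0, 1] range."""
    data = np.asarray(data, dtype=np.float64)
    d_min = data.min(axis=0)
    d_max = data.max(axis=0)
    return (data - d_min) / (d_max - d_min)

# e.g. input_scale = [min_max_scale(m) for m in raw_input_matrices]
</code></pre>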
| 0 | 2016-09-28T04:27:56Z | [
"python",
"neural-network",
"tensorflow",
null
]
|
Group unique 0th elements of CSV for unique ith elements in python or hive | 39,714,574 | <p>Please see the image at link to best see the input and required output formats and read description below</p>
<p><a href="http://i.stack.imgur.com/vWCpc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/vWCpc.jpg" alt="enter image description here"></a></p>
<p>I'm seeking to take a 3 (or 2) column csv and create a new csv where, for each unique 1st element (i.e. 2nd column), all the unique 0th elements are grouped so that the structure of the output csv rows is as follows:
unique 1st element, unique 0th element #1, unique 0th element #2,...</p>
<p>Using Python 3.x or Python 2.x or Hive or SQL. Very much appreciate any suggestions. Thank you!</p>
| 1 | 2016-09-27T01:03:44Z | 39,717,658 | <p>You can do it this way:</p>
<pre><code>In [34]: df
Out[34]:
c1 c2
0 1 p1
1 1 p1
2 1 p2
3 2 p2
4 2 p3
5 3 p3
6 3 p3
7 3 p3
8 3 p4
9 3 p4
10 3 p5
In [36]: (df.groupby('c2')['c1']
....: .apply(lambda x: ','.join(x.unique().astype(str)))
....: .to_frame('unique').to_csv(r'D:/temp/output.csv')
....: )
</code></pre>
<p>output.csv:</p>
<pre><code>c2,unique
p1,1
p2,"1,2"
p3,"2,3"
p4,3
p5,3
</code></pre>
| 1 | 2016-09-27T06:37:31Z | [
"python",
"csv",
"pandas",
"hive",
"itertools"
]
|
json file is not populating my comboboxes correctly | 39,714,588 | <p>I am trying to iterate over a json file such that if my UI finds geos in the scene, it appends the info into the first column and, while doing so, appends the color options for each of the geos it found into the second column (the color options come from a json file).</p>
<p>While I am able to add the geos into the first column, I am having issues getting the color options added into the second column, which is filled with comboboxes.</p>
<p>E.g. there is a pCube and a pPlane in my scene; instead of having my comboboxes populated with the options, it seems to grab the last geo object it found and populate just one of the color options for pPlane.</p>
<pre><code>def get_all_mesh():
all_mesh = cmds.listRelatives(cmds.ls(type = 'mesh'), parent=True)
# Result: [u'pCube1', u'pSphere1', u'pPlane1'] #
return all_mesh
def get_color():
with open('/Desktop/colors.json') as data_file:
data = json.load(data_file)
for index, name in enumerate(data):
geo_names = get_all_mesh()
for geo in geo_names:
# Remove all the digits
geo_exclude_num = ''.join(i for i in geo if not i.isdigit())
if geo_exclude_num in name:
for item in (data[name]):
print "{0} - {1}".format(name, item)
return item
class testTableView(QtGui.QDialog):
def __init__(self, parent=None):
QtGui.QDialog.__init__(self, parent)
self.setWindowTitle('Color Test')
self.setModal(False)
self.all_mesh = get_all_mesh()
# Build the GUI
self.init_ui()
self.populate_data()
def init_ui(self):
# Table setup
self.mesh_table = QtGui.QTableWidget()
self.mesh_table.setRowCount(len(self.all_mesh))
self.mesh_table.setColumnCount(3)
self.mesh_table.setHorizontalHeaderLabels(['Mesh Found', 'Color for Mesh'])
self.md_insert_color_btn = QtGui.QPushButton('Apply color')
# Layout
self.layout = QtGui.QVBoxLayout()
self.layout.addWidget(self.mesh_table)
self.layout.addWidget(self.md_insert_color_btn)
self.setLayout(self.layout)
def populate_data(self):
geo_name = self.all_mesh
for row_index, geo_item in enumerate(geo_name):
new_item = QtGui.QTableWidgetItem(geo_item)
# Add in each and every mesh found in scene and append them into rows
self.mesh_table.setItem(row_index, 0, new_item)
# Insert in the color
combobox = QtGui.QComboBox()
color_list = get_color()
combobox.addItems(color_list)
self.mesh_table.setCellWidget(row_index, 1, combobox)
# To opent the dialog window
dialog = testTableView()
dialog.show()
</code></pre>
<p>This is the contents in my json file:</p>
<pre><code>{
"pCube": [
"blue",
"purple",
"yellow",
"green",
"white",
"silver",
"red"
],
"pCone": [
"black",
"yellow"
],
"pSphere": [
"silver"
],
"pPlane": [
"red",
"yellow"
],
"pPrism": [
"white"
]
}
</code></pre>
<p>Additionally, instead of each field of my combobox being filled with a color name, I am getting a single character per field.</p>
<p>Can someone kindly provide me any insights?</p>
| 0 | 2016-09-27T01:07:06Z | 39,714,632 | <p>This bit of <code>get_color()</code>:</p>
<pre><code>for item in (data[name]):
print "{0} - {1}".format(name, item)
return item
</code></pre>
<p>will return from your function (as soon as it hits the return statement) before going through all your colors.</p>
<p>You probably want to accumulate all your colors before returning. Something like: </p>
<pre><code>def get_color():
with open('/Desktop/colors.json') as data_file:
data = json.load(data_file)
items = set()
for index, name in enumerate(data):
geo_names = get_all_mesh()
for geo in geo_names:
# Remove all the digits
geo_exclude_num = ''.join(i for i in geo if not i.isdigit())
if geo_exclude_num in name:
for item in (data[name]):
print "{0} - {1}".format(name, item)
items.add(item)
return items
</code></pre>
<p>The reason it's showing you a list of characters for the first color is because this statement:</p>
<pre><code>combobox.addItems(color_list)
</code></pre>
<p>is treating that single color as a list, and iterating over it to populate the options. Fixing the first part will fix this too.</p>
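<p>A quick standalone illustration of why that happens (example values only):</p>
<pre><code>>>> list("red")              # a single string iterates character by character
['r', 'e', 'd']
>>> list(["red", "yellow"])  # a list of names iterates name by name
['red', 'yellow']
</code></pre>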
| 2 | 2016-09-27T01:14:55Z | [
"python",
"json",
"combobox",
"maya"
]
|
Django tests slows after MacOS Sierra | 39,714,611 | <p>I'm working on a Django project using Python 3 and Django 1.10 on Mac.</p>
<p>Before update I was running 40 tests in 0.441s.</p>
<p>Now after MacOS Sierra: Ran 40 tests in 5.487s</p>
<p>I did some investigations and found this line to be the problem:</p>
<pre><code>response = self.client.post(r('subscriptions:new'), data)
</code></pre>
<p>If I pass an empty dict instead of data, the tests run faster. Anyone have a clue why this is happening?</p>
| 2 | 2016-09-27T01:11:48Z | 39,802,359 | <p>I found that resolving the local DNS hostname was taking forever.</p>
<p>If anyone has the same problem, run these commands:</p>
<pre><code>sudo scutil --get LocalHostName
sudo scutil --get HostName
</code></pre>
<p>If the results are not the same, use these commands to make them match:</p>
<pre><code>sudo scutil --set LocalHostName My-MacBook
sudo scutil --set HostName My-MacBook
</code></pre>
<p>That solved the problem for me.</p>
| 2 | 2016-10-01T02:12:57Z | [
"python",
"django",
"macos-sierra"
]
|
How to add new column with handling nan value | 39,714,682 | <p>I have a dataframe like this</p>
<pre><code> A B
0 a 1
1 b 2
2 c 3
3 d nan
4 e nan
</code></pre>
<p>I would like to add column C like below</p>
<pre><code> A B C
0 a 1 a1
1 b 2 b2
2 c 3 c3
3 d nan d
4 e nan e
</code></pre>
<p>So I tried </p>
<pre><code>df["C"]=df.A+df.B
</code></pre>
<p>but It returns </p>
<pre><code> C
a1
b2
c3
nan
nan
</code></pre>
<p>How can get correct result?</p>
| 4 | 2016-09-27T01:24:02Z | 39,714,714 | <p>In your code, I think the data type of the element in the dataframe is str, so, try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html?highlight=fillna#pandas.DataFrame.fillna" rel="nofollow">fillna</a>.</p>
<pre><code>In [10]: import pandas as pd
In [11]: import numpy as np
In [12]: df = pd.DataFrame({'A': ['a', 'b', 'c', 'd', 'e'],
'B': ['1', '2', '3', np.nan, np.nan]})
In [13]: df.B.fillna('')
Out[13]:
0 1
1 2
2 3
3
4
Name: B, dtype: object
In [14]: df
Out[14]:
A B
0 a 1
1 b 2
2 c 3
3 d NaN
4 e NaN
[5 rows x 2 columns]
In [15]: df.B = df.B.fillna('')
In [16]: df["C"]=df.A+df.B
In [17]: df
Out[17]:
A B C
0 a 1 a1
1 b 2 b2
2 c 3 c3
3 d d
4 e e
[5 rows x 3 columns]
</code></pre>
| 2 | 2016-09-27T01:29:47Z | [
"python",
"pandas"
]
|
How to add new column with handling nan value | 39,714,682 | <p>I have a dataframe like this</p>
<pre><code> A B
0 a 1
1 b 2
2 c 3
3 d nan
4 e nan
</code></pre>
<p>I would like to add column C like below</p>
<pre><code> A B C
0 a 1 a1
1 b 2 b2
2 c 3 c3
3 d nan d
4 e nan e
</code></pre>
<p>So I tried </p>
<pre><code>df["C"]=df.A+df.B
</code></pre>
<p>but It returns </p>
<pre><code> C
a1
b2
c3
nan
nan
</code></pre>
<p>How can get correct result?</p>
| 4 | 2016-09-27T01:24:02Z | 39,714,744 | <pre><code>df['C'] = pd.Series(df.fillna('').values.tolist()).str.join(' ')
</code></pre>
| 0 | 2016-09-27T01:33:34Z | [
"python",
"pandas"
]
|
How to add new column with handling nan value | 39,714,682 | <p>I have a dataframe like this</p>
<pre><code> A B
0 a 1
1 b 2
2 c 3
3 d nan
4 e nan
</code></pre>
<p>I would like to add column C like below</p>
<pre><code> A B C
0 a 1 a1
1 b 2 b2
2 c 3 c3
3 d nan d
4 e nan e
</code></pre>
<p>So I tried </p>
<pre><code>df["C"]=df.A+df.B
</code></pre>
<p>but It returns </p>
<pre><code> C
a1
b2
c3
nan
nan
</code></pre>
<p>How can get correct result?</p>
| 4 | 2016-09-27T01:24:02Z | 39,719,187 | <p>You can use <code>add</code> method with the <code>fill_value</code> parameter</p>
<pre><code>df['C'] = df.A.add(df.B, fill_value='')
df
</code></pre>
<p><a href="http://i.stack.imgur.com/JLttG.png" rel="nofollow"><img src="http://i.stack.imgur.com/JLttG.png" alt="enter image description here"></a></p>
| 1 | 2016-09-27T07:57:29Z | [
"python",
"pandas"
]
|
Text Parser in Python | 39,714,692 | <p>I have to write code to read data from a text file. This text file has a specific format. It is like a comma-separated values (CSV) file that stores tabular data, and I must be able to perform calculations on the data in that file.</p>
<p>Here's the format instruction of that file:</p>
<p>A dataset has to start with a declaration of its name:</p>
<p>@relation name</p>
<p>followed by a list of all the attributes in the dataset</p>
<p>@attribute attribute_name specification</p>
<p>If an attribute is nominal, specification contains a list of the possible attribute values in curly brackets:</p>
<p>@attribute nominal_attribute {first_value, second_value, third_value}</p>
<p>If an attribute is numeric, specification is replaced by the keyword </p>
<p>@attribute numeric_attribute numeric</p>
<p>After the attribute declarations, the actual data is introduced by a </p>
<p>@data</p>
<p>tag, which is followed by a list of all the instances. The instances are listed in comma-separated format, with a question mark representing a missing value. </p>
<p>Comments are lines starting with % and are ignored.</p>
<p>I must be able to make calculations on this data separated by comma, and must know which data is associated to which attribute.</p>
<p>Example dataset file:
1: <a href="https://drive.google.com/open?id=0By6GDPYLwp2cSkd5M0J0ZjczVW8" rel="nofollow">https://drive.google.com/open?id=0By6GDPYLwp2cSkd5M0J0ZjczVW8</a>
2: <a href="https://drive.google.com/open?id=0By6GDPYLwp2cejB5SVlhTFdubnM" rel="nofollow">https://drive.google.com/open?id=0By6GDPYLwp2cejB5SVlhTFdubnM</a></p>
<p>I have no experience with parsing and very little experience with Python, so I thought I would ask the experts for an easy way to do it.</p>
<p>Thanks</p>
| 0 | 2016-09-27T01:25:21Z | 39,714,925 | <p>Here is a simple solution that I came up with:</p>
<p>The idea is to read the file line by line and apply rules depending on the type of line encountered.</p>
<p>As you see in the sample input, there could be broadly 5 types of input you may encounter.</p>
<ol>
<li><p>A comment which could start with '%' -> no action is needed here.</p></li>
<li><p>A blank line i.e. '\n' -> no action needed here.</p></li>
<li><p>A line that starts with @, which indicates it could be an attribute or name of the relation.</p></li>
<li><p>If not any of these, then it is the data itself.</p></li>
</ol>
<p>The code follows simple if-else logic, taking an action at each step based on the above 4 rules.</p>
<pre><code>with open("../Downloads/Reading_Data_Files.txt","r") as dataFl:
lines = [line for line in dataFl]
attribute = []
data = []
for line in lines:
if line.startswith("%") or 'data' in line or line=='\n': # this is a comment or the data line
pass
elif line.startswith("@"):
if "relation" in line:
relationName = line.split(" ")[1]
elif "attribute" in line:
attribute.append(line.split(" ")[1])
else:
data.append(list(map(lambda x : x.strip(),line.split(","))))
print("Relation Name is : %s" %relationName)
print("Attributes are " + ','.join(attribute))
print(data)
</code></pre>
<p>If you want to see which attribute is which, here is essentially the same solution as above with a minor tweak. The issue with the solution above is that the output is a list of lists, so it is hard to tell which value belongs to which attribute. A better approach is to annotate each data element with the corresponding attribute name. The output will be of the form:
<code>{'distance': '45', 'temperature': '75', 'BusArrival': 'on_time', 'Students': '25'}</code></p>
<pre><code>with open("/Users/sreejithmenon/Downloads/Reading_Data_Files.txt","r") as dataFl:
lines = [line for line in dataFl]
attribute = []
data = []
for line in lines:
if line.startswith("%") or 'data' in line or line=='\n': # this is a comment or the data line
pass
elif line.startswith("@"):
if "relation" in line:
relationName = line.split(" ")[1]
elif "attribute" in line:
attribute.append(line.split(" ")[1])
else:
dataLine = list(map(lambda x : x.strip(),line.split(",")))
dataDict = {attribute[i] : dataLine[i] for i in range(len(attribute))} # each line of data is now a dictionary.
data.append(dataDict)
print("Relation Name is : %s" %relationName)
print("Attributes are " + ','.join(attribute))
print(data)
</code></pre>
<p>You could use pandas Data frames to do more analysis, slicing, querying etc. Here is a link that should help you get started with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html</a></p>
<p>Edit: Explanation to comments
Meaning of the line: <code>dataLine = list(map(lambda x : x.strip(),line.split(",")))</code>
The <code>split(delimiter)</code> function splits a string into pieces wherever the delimiter occurs and returns a list. </p>
<p>For instance,
<code>"hello, world".split(",")</code> will return <code>['hello',' world']</code> <em>Notice the space in front of "world".</em></p>
<p><code>map</code> is a function that applies a function (first argument) to each element of an iterable (second argument). It is generally used as a short-hand to apply a transformation to each element. <code>strip()</code> removes any leading or trailing whitespace. A <code>lambda expression</code> is an anonymous function, and here it simply applies <code>strip</code>. <code>map()</code> takes each element from the iterable, passes it to the lambda function, and collects the returned value into the final result. Please read more about the <code>map</code> function online. Pre-req: <code>lambda expressions</code>.</p>
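<p>A quick illustration of those pieces together (the sample string is just for demonstration):</p>
<pre><code>>>> line = "on_time, 25, 45, 75"
>>> line.split(",")
['on_time', ' 25', ' 45', ' 75']
>>> list(map(lambda x: x.strip(), line.split(",")))
['on_time', '25', '45', '75']
</code></pre>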
<p>Part II in the comment: <em>And when I am typing 'print(data[0])' all the data along with their attribute is printed. What if I want to print only no. of students of 5th row? What is I want to multiple all no. of students with corresponding temperature and store it in a new column with corresponding index?</em></p>
<p>When you <code>print(data[0])</code> it should give you the first row as is, with the related attributes and should look something like this.</p>
<pre><code>data[0]
Out[63]:
{'BusArrival': 'on_time',
'Students': '25',
'distance': '45',
'temperature': '75'}
</code></pre>
<p>I suggest you use pandas dataframe for quick manipulations of the data. </p>
<pre><code>import pandas as pd
df = pd.DataFrame(data)
df
Out[69]:
BusArrival Students distance temperature
0 on_time 25 45 75
1 before 12 40 70
2 after 49 50 80
3 on_time 24 44 74
4 before 15 38 75
# and so on
</code></pre>
<p>Now you want to extract the 5th row only,</p>
<pre><code>df.iloc[5]
Out[73]:
BusArrival after
Students 45
distance 49
temperature 85
Name: 5, dtype: object
</code></pre>
<p>Product of students and temperature is now simply,</p>
<pre><code>df['Students'] = df['Students'].astype('int') # making sure they are not strings
df['temperature'] = df['temperature'].astype('int')
df['studentTempProd'] = df['Students'] * df['temperature']
df
Out[82]:
BusArrival Students distance temperature studentTempProd
0 on_time 25 45 75 1875
1 before 12 40 70 840
2 after 49 50 80 3920
3 on_time 24 44 74 1776
4 before 15 38 75 1125
</code></pre>
<p>There is a lot more you can do with pandas. Like only extracting the 'on_time' bus arrivals etc. </p>
| 3 | 2016-09-27T02:02:46Z | [
"python",
"parsing",
"text"
]
|
Tkinter formatting multiple frames next to each other | 39,714,710 | <p>so I am creating a GUI and I essentially want to have two different "toolbars" at the top of the GUI, similar to <img src="http://i.imgur.com/NZByh97.png" alt="this">.<br>
Currently, I have the respective buttons for each toolbar placed into two different respective frames called Toolbar and Selectbar. On each button I call .pack() to format them within the toolbars, and then on each toolbar I call<br>
<code>toolbar.grid(row=0, column=0, sticky='NW')</code><br>
<code>selectbar.grid(row=0, column=1, sticky='NE')</code><br>
However, I don't believe this is right because they're two different "grids" that it is trying to place in the respective columns. It still gives me something close to the desired product in this: <img src="http://i.imgur.com/Kviqrl8.png" alt="this">.<br>
However, I was wondering how I would "combine" these two frames into a larger respective frame so that I could possibly use <code>.configurecolumn(0, weight=1)</code> to get the first toolbar to stretch out farther along the screen. </p>
<p>In essence, I was wondering how I would be able to have these two "toolbars" next to each other, but have the first one extend with the blank space. </p>
<p>Edit: Here is the code with some parts omitted. </p>
<pre><code> from tkinter import *
from MenuBar import *
from ToolBar import *
import tkinter.ttk
class App(Tk):
def __init__(self):
Tk.__init__(self)
#Creates the MenuBar
menubar = MenuBar(self)
self.config(menu=menubar)
#Creates the ToolBar
toolbar = Frame(bg="#f2f2f2", bd=1, relief=RAISED, width=1000)
newUndIcon = itk.PhotoImage(file="Icons/newUndirected.png")
newDirIcon = itk.PhotoImage(file="Icons/newDirected.png")
b0 = Button(toolbar, text="Create a new undirected graph", image=newUndIcon, relief=FLAT)
b1 = Button(toolbar, text="Create a new directed graph", image=newDirIcon, relief=FLAT)
b2 = Button(toolbar, text="Open an existing graph", image=openIcon, relief=FLAT)
b3 = Button(toolbar, text="Save graph", image=saveIcon, relief=FLAT)
b0.img = newUndIcon
b1.img = newDirIcon
b2.img = openIcon
b3.img = saveIcon
b0.pack(side=LEFT, padx=2, pady=2)
b1.pack(side=LEFT, padx=2, pady=2)
b2.pack(side=LEFT, padx=2, pady=2)
b3.pack(side=LEFT, padx=2, pady=2)
#toolbar.pack(side=TOP, fill=X)
toolbar.grid(row=0, sticky='NW')
toolbar.columnconfigure(1, weight=1)
selectBar = Frame(bg="#f2f2f2", bd=1, relief=FLAT)
c0 = Button(selectBar, image=newUndIcon, relief=FLAT)
c1 = Button(selectBar, image=newDirIcon, relief=FLAT)
c2 = Button(selectBar, image=vertexIcon, relief=FLAT)
c0.img = newUndIcon
c1.img = newDirIcon
c2.img = vertexIcon
c0.pack(side=LEFT, padx=2, pady=2)
c1.pack(side=LEFT, padx=2, pady=2)
c2.pack(side=LEFT, padx=2, pady=2)
selectBar.grid(row=0, column=1, sticky='NW')
selectBar.columnconfigure(1, weight=1)
app=App()
app.iconbitmap('Icons/titleIcon.ico')
app.title("GMPX")
app.geometry('900x600')
app.config(bg="#FFF")
app.mainloop()
app.destroy()
</code></pre>
| -1 | 2016-09-27T01:29:29Z | 39,714,911 | <p>You can try putting the <code>toolbar</code> and <code>selectBar</code> inside a <code>Frame</code>, and use <code>pack()</code> instead of <code>grid()</code>:</p>
<pre><code>topbar = Frame(self)
....
toolbar = Frame(topbar, ...)
toolbar.pack(side=LEFT, fill=X, expand=True)
...
selectBar = Frame(topbar, ...)
selectBar.pack(side=RIGHT)
...
topbar.pack(fill=X)
</code></pre>
| 1 | 2016-09-27T02:00:54Z | [
"python",
"python-3.x",
"user-interface",
"tkinter"
]
|
Python - legend values duplicate | 39,714,758 | <p>I'm plotting a matrix, as shown below, and the legend repeats over and over again. I've tried using numpoints = 1 and this didn't seem to have any effect. Any hints?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10, 8) # set default figure size, 8in by 6inimport numpy as np
data = pd.read_csv('data/assg-03-data.csv', names=['exam1', 'exam2', 'admitted'])
x = data[['exam1', 'exam2']].as_matrix()
y = data.admitted.as_matrix()
# plot the visualization of the exam scores here
no_admit = np.where(y == 0)
admit = np.where(y == 1)
from pylab import *
# plot the example figure
plt.figure()
# plot the points in our two categories, y=0 and y=1, using markers to indicated
# the category or output
plt.plot(x[no_admit,0], x[no_admit,1],'yo', label = 'Not admitted', markersize=8, markeredgewidth=1)
plt.plot(x[admit,0], x[admit,1], 'r^', label = 'Admitted', markersize=8, markeredgewidth=1)
# add some labels and titles
plt.xlabel('$Exam 1 score$')
plt.ylabel('$Exam 2 score$')
plt.title('Admit/No Admit as a function of Exam Scores')
plt.legend()
</code></pre>
| 0 | 2016-09-27T01:35:33Z | 39,715,215 | <p>It's nearly impossible to understand the problem if you don't put an example of data format especially if one is not familiar with pandas.
However, assuming your input has this format:</p>
<pre><code>x=pd.DataFrame(np.array([np.arange(10),np.arange(10)**2]).T,columns=['exam1','exam2']).as_matrix()
y=pd.DataFrame(np.arange(10)%2).as_matrix()
>>x
array([[ 0, 0],
[ 1, 1],
[ 2, 4],
[ 3, 9],
[ 4, 16],
[ 5, 25],
[ 6, 36],
[ 7, 49],
[ 8, 64],
[ 9, 81]])
>> y
array([[0],
[1],
[0],
[1],
[0],
[1],
[0],
[1],
[0],
[1]])
</code></pre>
<p>the reason is the strange transformation from DataFrame to matrix, I guess it wouldn't happen if you have vectors (1D arrays).
For my example this works (not sure if it is the cleanest form, I don't know where the 2D matrix for <code>x</code> and <code>y</code> comes from):</p>
<pre><code>plt.plot(x[no_admit,0][0], x[no_admit,1][0],'yo', label = 'Not admitted', markersize=8, markeredgewidth=1)
plt.plot(x[admit,0][0], x[admit,1][0], 'r^', label = 'Admitted', markersize=8, markeredgewidth=1)
</code></pre>
| 0 | 2016-09-27T02:44:39Z | [
"python",
"matplotlib",
"legend"
]
|
Scrapy Post Data | 39,714,810 | <p>Im moving from python requests to scrapy, I'd like to make a post request that clicks a button at the bottom of an instagram hashtag page.</p>
<p>The cURL is this</p>
<pre><code>curl "https://www.instagram.com/query/" -H "cookie: mid=VwBJIwAEAAGiVNY3epWm9pRgD9Ge; fbm_124024574287414=base_domain=.instagram.com; ig_pr=1; ig_vw=956; s_network=; fbsr_124024574287414=5HQEzU7XMqOLO4KeQMmSvyBcKsH2svemV1-nWIE4_iM.eyJhbGdvcml0aG0iOiJITUFDLVNIQTI1NiIsImNvZGUiOiJBUUQ0TnNLMjVCZmdvUFN4TjdfODNQaW81Z3U4MTNaZmZWVlNCcEdJNUdRWlczdmdfNGVXNXJyck5Sc3NXRFlSWjZiZEpWMU95V3hNUUcwSE9qMHItYlRiYk40VXpNZG5aLUJ5Zzk0VWZNSW1RZTd4R1JzTS1yaXRabmc0Z3FYNkpwbnF4b0VXajRPNEVGSDVoTXBCUFNHUGNHN0RHQ01uSjFLeXh1dllOc2cyaFpnSDFheVI0RUhMbE1nZGM4emVrNm9DXzdLa2s1TUoyYzhyYmEwWXo1VkI1bVVmS3NvLS11dXVxdjJlRmxFUHpYczVNQ3E1bW5BRk5IeWxxMG9veENQcXcwWUVLSnpsNnZSUzFReGUzQWpsQzFPU0cySU1QM0wwMGhUcnRraFF4ZEFhZElVMUtNNUw5VTRab2dlbjltdUFadkJjV0U3UUMxeTdibDRyTzhwWCIsImlzc3VlZF9hdCI6MTQ3NDkzODQ3MywidXNlcl9pZCI6IjEzNzc3ODgzNjkifQ; csrftoken=th33gPnvrsNS74reomY69ETfojX2avQ7" -H "origin: https://www.instagram.com" -H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.8" -H "user-agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36" -H "x-requested-with: XMLHttpRequest" -H "x-csrftoken: th33gPnvrsNS74reomY69ETfojX2avQ7" -H "x-instagram-ajax: 1" -H "content-type: application/x-www-form-urlencoded" -H "accept: */*" -H "referer: https://www.instagram.com/explore/tags/love/" -H "authority: www.instagram.com" --data "q=ig_hashtag(love)+"%"7B+media.after(J0HV-nGYwAAAF0HV-nGXAAAAFjgA"%"2C+10)+"%"7B"%"0A++count"%"2C"%"0A++nodes+"%"7B"%"0A++++caption"%"2C"%"0A++++code"%"2C"%"0A++++comments+"%"7B"%"0A++++++count"%"0A++++"%"7D"%"2C"%"0A++++comments_disabled"%"2C"%"0A++++date"%"2C"%"0A++++dimensions+"%"7B"%"0A++++++height"%"2C"%"0A++++++width"%"0A++++"%"7D"%"2C"%"0A++++display_src"%"2C"%"0A++++id"%"2C"%"0A++++is_video"%"2C"%"0A++++likes+"%"7B"%"0A++++++count"%"0A++++"%"7D"%"2C"%"0A++++owner+"%"7B"%"0A++++++id"%"0A++++"%"7D"%"2C"%"0A++++thumbnail_src"%"2C"%"0A++++video_views"%"0A++"%"7D"%"2C"%"0A++page_info"%"0A"%"7D"%"0A+"%"7D&ref=tags"%"3A"%"3Ashow" --compressed
</code></pre>
<p>So for the form data I have tried two things:</p>
<pre><code>body = response.xpath("//body")
html = str(body.extract())
end_cursor = re.search(r"\"end\_cursor\"\: \"(.+?)\"", html).group(1)
data = "q=ig_hashtag({})+%7B+media.after({}+10)+%7B%0A++count%2C%0A++nodes+%7B%0A++++caption%2C%0A++++code%2C%0A++++comments+%7B%0A++++++count%0A++++%7D%2C%0A++++comments_disabled%2C%0A++++date%2C%0A++++dimensions+%7B%0A++++++height%2C%0A++++++width%0A++++%7D%2C%0A++++display_src%2C%0A++++id%2C%0A++++is_video%2C%0A++++likes+%7B%0A++++++count%0A++++%7D%2C%0A++++owner+%7B%0A++++++id%0A++++%7D%2C%0A++++thumbnail_src%2C%0A++++video_views%0A++%7D%2C%0A++page_info%0A%7D%0A+%7D&ref=tags%3A%3Ashow".format(tag, end_cursor)
url = 'https://www.instagram.com/query/'
yield Request(url, body=data, method="POST", callback=self.parseHashtag)
</code></pre>
<p>and this</p>
<pre><code>data = {"q" :"ig_hashtag({})+%7B+media.after({}+10)+%7B%0A++count%2C%0A++nodes+%7B%0A++++caption%2C%0A++++code%2C%0A++++comments+%7B%0A++++++count%0A++++%7D%2C%0A++++comments_disabled%2C%0A++++date%2C%0A++++dimensions+%7B%0A++++++height%2C%0A++++++width%0A++++%7D%2C%0A++++display_src%2C%0A++++id%2C%0A++++is_video%2C%0A++++likes+%7B%0A++++++count%0A++++%7D%2C%0A++++owner+%7B%0A++++++id%0A++++%7D%2C%0A++++thumbnail_src%2C%0A++++video_views%0A++%7D%2C%0A++page_info%0A%7D%0A+%7D&ref=tags%3A%3Ashow".format(tag, end_cursor)}
yield FormRequest(url, formdata=data, callback=self.parseHashtag)
</code></pre>
<p>I am getting a 403 error so I am obviously sending the data incorrectly, am I formatting the data incorrectly or calling the post incorrectly? Those are my two thoughts but I'm quite unsure. Any help would be very appreciated, thank you.</p>
<p>The url is this - <a href="https://www.instagram.com/explore/tags/love/" rel="nofollow">https://www.instagram.com/explore/tags/love/</a></p>
<p>This is my git, <a href="https://github.com/Fuledbyramen/instagram_crawler/blob/master/instagram/spiders/instagram_spider.py" rel="nofollow">https://github.com/Fuledbyramen/instagram_crawler/blob/master/instagram/spiders/instagram_spider.py</a></p>
| 0 | 2016-09-27T01:45:36Z | 39,718,694 | <p>You seem to be missing correct headers or any headers for that matter.</p>
<p>You should provide every header that you see in the network inspector, aside from cookies, which scrapy manages and populates by itself.</p>
<p>You can easily extract the headers from the curl string that the network inspector gives you:</p>
<pre><code>foo = '''-H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.8" -H "user-agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36" -H "x-requested-with: XMLHttpRequest" -H "x-csrftoken: th33gPnvrsNS74reomY69ETfojX2avQ7" -H "x-instagram-ajax: 1" -H "content-type: application/x-www-form-urlencoded" -H "accept: */*" -H "referer: https://www.instagram.com/explore/tags/love/" -H "authority: www.instagram.com"'''
headers = [s.strip(' "').split(': ') for s in foo.split('-H')]
headers = [h for h in headers if any(h)]
headers = {k: v for k,v in headers}
</code></pre>
<p>And you'll get:</p>
<pre><code> {'accept': '*/*',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.8',
'authority': 'www.instagram.com',
'content-type': 'application/x-www-form-urlencoded',
'referer': 'https://www.instagram.com/explore/tags/love/',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36',
'x-csrftoken': 'th33gPnvrsNS74reomY69ETfojX2avQ7',
'x-instagram-ajax': '1',
'x-requested-with': 'XMLHttpRequest'}
</code></pre>
<p>Some of these are totally unnecessary: referer is mostly used for analytics, and accept-language, accept and accept-encoding can most likely be ignored. User-agent is managed by scrapy too.</p>
<p>So what you are left with is <code>x-csrftoken</code>, which might do nothing, but tokens like it are usually hidden somewhere in the html source; <code>x-instagram-ajax</code> seems like a static header to indicate an ajax request; <code>x-requested-with</code> shows the request type and is mainly there to prevent man-in-the-middle attacks, so you should keep it as it is to indicate the request type and avoid being blocked.</p>
<p>Edit:
I've tried the website and you can actually just do a GET request with the body as URL parameters. Just right click the request in the network inspector and click <code>copy location with parameters</code>; this will automatically convert the dict-like data from the body into URL parameters.</p>
<p>i.e. <a href="https://www.instagram.com/query/?q=ig_hashtag(scrapy)%20%7B%20media.after(J0HV-vvswAAAF0HV-Qp7AAAAFiYA%2C%2016)%20%7B%0A%20%20count%2C%0A%20%20nodes%20%7B%0A%20%20%20%20caption%2C%0A%20%20%20%20code%2C%0A%20%20%20%20comments%20%7B%0A%20%20%20%20%20%20count%0A%20%20%20%20%7D%2C%0A%20%20%20%20comments_disabled%2C%0A%20%20%20%20date%2C%0A%20%20%20%20dimensions%20%7B%0A%20%20%20%20%20%20height%2C%0A%20%20%20%20%20%20width%0A%20%20%20%20%7D%2C%0A%20%20%20%20display_src%2C%0A%20%20%20%20id%2C%0A%20%20%20%20is_video%2C%0A%20%20%20%20likes%20%7B%0A%20%20%20%20%20%20count%0A%20%20%20%20%7D%2C%0A%20%20%20%20owner%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%7D%2C%0A%20%20%20%20thumbnail_src%2C%0A%20%20%20%20video_views%0A%20%20%7D%2C%0A%20%20page_info%0A%7D%0A%20%7D&ref=tags%3A%3Ashow" rel="nofollow">https://www.instagram.com/query/?q=ig_hashtag(scrapy)%20%7B%20media.after(J0HV-vvswAAAF0HV-Qp7AAAAFiYA%2C%2016)%20%7B%0A%20%20count%2C%0A%20%20nodes%20%7B%0A%20%20%20%20caption%2C%0A%20%20%20%20code%2C%0A%20%20%20%20comments%20%7B%0A%20%20%20%20%20%20count%0A%20%20%20%20%7D%2C%0A%20%20%20%20comments_disabled%2C%0A%20%20%20%20date%2C%0A%20%20%20%20dimensions%20%7B%0A%20%20%20%20%20%20height%2C%0A%20%20%20%20%20%20width%0A%20%20%20%20%7D%2C%0A%20%20%20%20display_src%2C%0A%20%20%20%20id%2C%0A%20%20%20%20is_video%2C%0A%20%20%20%20likes%20%7B%0A%20%20%20%20%20%20count%0A%20%20%20%20%7D%2C%0A%20%20%20%20owner%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%7D%2C%0A%20%20%20%20thumbnail_src%2C%0A%20%20%20%20video_views%0A%20%20%7D%2C%0A%20%20page_info%0A%7D%0A%20%7D&ref=tags%3A%3Ashow</a></p>
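<p>A minimal sketch of what that looks like inside a spider (the truncated query string and the callback name <code>parseHashtag</code> are illustrative placeholders, not the exact values to use):</p>
<pre><code>from scrapy import Request

def parse(self, response):
    # paste the full query string copied from the network inspector here
    url = ("https://www.instagram.com/query/"
           "?q=ig_hashtag(love)%20%7B%20media.after(END_CURSOR%2C%2010)%20...%7D"
           "&ref=tags%3A%3Ashow")
    yield Request(url,
                  headers={"x-requested-with": "XMLHttpRequest"},
                  callback=self.parseHashtag)
</code></pre>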
| 1 | 2016-09-27T07:31:09Z | [
"python",
"curl",
"scrapy"
]
|
How to pass values to a generator in a for loop? | 39,714,852 | <p>I know that you can use <code>.send(value)</code> to send values to an generator. I also know that you can iterate over a generator in a for loop. Is it possible to pass values to a generator while iterating over it in a for loop?</p>
<p>What I'm trying to do is</p>
<pre><code>def example():
previous = yield
for i range(0,10):
previous = yield previous*i
t = example()
for value in example"...pass in a value?...":
"...do something with the result..."
</code></pre>
| 0 | 2016-09-27T01:53:01Z | 39,714,970 | <p>Ok, so I figured it out. The trick is to create an additional generator that wraps <code>t.send(value)</code> in a for loop <code>(t.send(value) for value in [...])</code>.</p>
<pre><code>def example():
previous = yield
for i in range(0,10):
previous = yield previous * i
t = example()
t.send(None)
for i in (t.send(i) for i in ["list of objects to pass in"]):
print i
</code></pre>
| 0 | 2016-09-27T02:10:37Z | [
"python",
"generator"
]
|
How to pass values to a generator in a for loop? | 39,714,852 | <p>I know that you can use <code>.send(value)</code> to send values to an generator. I also know that you can iterate over a generator in a for loop. Is it possible to pass values to a generator while iterating over it in a for loop?</p>
<p>What I'm trying to do is</p>
<pre><code>def example():
previous = yield
for i range(0,10):
previous = yield previous*i
t = example()
for value in example"...pass in a value?...":
"...do something with the result..."
</code></pre>
| 0 | 2016-09-27T01:53:01Z | 39,715,014 | <p>You technically could, but the results would be confusing. eg:</p>
<pre><code>def example():
previous = (yield)
for i in range(1,10):
received = (yield previous)
if received is not None:
previous = received*i
t = example()
for i, value in enumerate(t):
t.send(i)
print value
</code></pre>
<p>Outputs:</p>
<pre><code>None
0
2
8
18
</code></pre>
<p>Dave Beazley wrote an <a href="http://www.dabeaz.com/coroutines/Coroutines.pdf" rel="nofollow">amazing article</a> on coroutines (<strong>tldr; don't mix generators and coroutines in the same function</strong>)</p>
| 1 | 2016-09-27T02:16:27Z | [
"python",
"generator"
]
|
404 Response when running FlaskClient test method | 39,714,995 | <p>I'm baffled by this. I'm using an application factory in a Flask application and under the test configuration my routes always return 404s.</p>
<p>However when I use Flask-Script and load the app from the interpreter everything works as expected, the response comes back as 200. </p>
<p>Navigating to the URL with the browser works fine </p>
<p><strong>app/__init__.py</strong></p>
<pre><code>def create_app():
app = Flask(__name__)
return app
</code></pre>
<p><strong>sever1.py</strong></p>
<pre><code>from flask import Flask
from flask_script import Manager
from app import create_app
app = create_app()
app_context = app.app_context()
app_context.push()
manager = Manager(app)
@app.route('/')
def index():
return '<h1>Hello World!</h1>'
@app.route('/user/<name>')
def user(name):
return '<h1>Hello, %s!</h1>' % name
@manager.command
def test():
"""Run the unit tests"""
import unittest
tests = unittest.TestLoader().discover('tests')
unittest.TextTestRunner(verbosity=2).run(tests)
if __name__ == '__main__':
manager.run()
</code></pre>
<p><strong>tests/test.py</strong></p>
<pre><code>#imports committed
def setUp(self):
self.app = create_app('testing')
self.app_context = self.app.app_context()
self.app_context.push()
self.client = self.app.test_client()
def test_app_exists(self):
response = self.client.get('/', follow_redirects=True)
print(response) #404 :(
self.assertTrue("Hello World!" in response.get_data()) #this is just an example of how it fails
</code></pre>
| 0 | 2016-09-27T02:13:46Z | 39,727,804 | <p>You're not using the factory pattern correctly. You should use blueprints to collect routes and register them with the app in the factory. (Or use <code>app.add_url_rule</code> in the factory.) Nothing outside the factory should affect the app.</p>
<p>Right now you create an instance of the app and then use that instance to register routes. Then you create a different instance in your tests, which doesn't have the routes registered. Since that instance doesn't have any registered routes, it returns 404 for requests to those urls.</p>
<p>Instead, register your routes with a blueprint, then register the blueprint with the app in the factory. Use the factory to create an app during tests. Pass the factory to the Flask-Script manager. You should not need to push the app context manually.</p>
<pre><code>bp = Blueprint('myapp', __name__)
@bp.route('/')
def index():
return 'Hello, World!'
def create_app(config='dev'):
app = Flask(__name__)
# config goes here
app.register_blueprint(bp)
return app
class SomeTest(TestCase):
def setUp(self):
self.app = create_app(config='test')
self.client = self.app.test_client()
def test_index(self):
rv = self.client.get('/')
self.assertEqual(rv.data, b'Hello, World!')
manager = Manager(create_app)
manager.add_option('-c', '--config', dest='config', required=False)
if __name__ == '__main__':
manager.run()
</code></pre>
| 1 | 2016-09-27T14:50:28Z | [
"python",
"flask"
]
|
Pyspark ml can't fit the model and always "AttributeError: 'PipelinedRDD' object has no attribute '_jdf' | 39,715,051 | <pre><code>data = sqlContext.sql("select a.churn,b.pay_amount,c.all_balance from db_bi.t_cust_churn a left join db_bi.t_cust_pay b on a.cust_id=b.cust_id left join db_bi.t_cust_balance c on a.cust_id=c.cust_id limit 5000").cache()
def labelData(df):
return df.map(lambda row: LabeledPoint(row[0], row[1:]))
traindata = labelData(data) --this step works well.
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
lrModel = lr.fit(lrdata)
lrModel = lr.fit(lrdata)
</code></pre>
<hr>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-40-b84a106121e6> in <module>()
----> 1 lrModel = lr.fit(lrdata)
/home/hadoop/spark/python/pyspark/ml/pipeline.pyc in fit(self, dataset, params)
67 return self.copy(params)._fit(dataset)
68 else:
---> 69 return self._fit(dataset)
70 else:
71 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
/home/hadoop/spark/python/pyspark/ml/wrapper.pyc in _fit(self, dataset)
131
132 def _fit(self, dataset):
--> 133 java_model = self._fit_java(dataset)
134 return self._create_model(java_model)
135
/home/hadoop/spark/python/pyspark/ml/wrapper.pyc in _fit_java(self, dataset)
128 """
129 self._transfer_params_to_java()
--> 130 return self._java_obj.fit(dataset._jdf)
131
132 def _fit(self, dataset):
AttributeError: 'PipelinedRDD' object has no attribute '_jdf'
</code></pre>
| -1 | 2016-09-27T02:21:11Z | 40,127,006 | <p>I guess you are using the tutorial for the latest spark version <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html" rel="nofollow">(2.0.1)</a> with
<code>pyspark.ml.classification import LogisticRegression</code> whereas you need some other version, e.g. <a href="https://spark.apache.org/docs/1.6.2/mllib-linear-methods.html" rel="nofollow">1.6.2</a> with <code>pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel</code>. Note the different libraries.</p>
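<p>A minimal sketch of the RDD-based (1.6.x mllib) route, assuming <code>traindata</code> is the RDD of <code>LabeledPoint</code>s built by <code>labelData(data)</code> in the question (the parameter values are illustrative):</p>
<pre><code>from pyspark.mllib.classification import LogisticRegressionWithLBFGS

# traindata: RDD[LabeledPoint]
model = LogisticRegressionWithLBFGS.train(traindata, iterations=10)

# predict on the features of one point
first = traindata.first()
print(model.predict(first.features))
</code></pre>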
| 0 | 2016-10-19T09:12:21Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-mllib"
]
|
Selenium click a button if the parameter is true | 39,715,120 | <p>I wanted to create a function which takes a parameter, and if the parameter is True then the button is clicked, otherwise not. Can I use this?</p>
<pre><code>def buttonClick(self, Button):
if Button == True:
self.driver.find_element_by_id('button').click
</code></pre>
| -2 | 2016-09-27T02:30:18Z | 39,715,276 | <p>Two main things to fix from the top of my head:</p>
<ul>
<li>you can avoid having <code>== True</code> part</li>
<li>you are not calling the <code>click</code> method - add the <code>()</code></li>
</ul>
<p>Fixed version:</p>
<pre><code>def buttonClick(self, should_click_button):
if should_click_button:
self.driver.find_element_by_id('button').click()
</code></pre>
<p>Sample usage:</p>
<pre><code>instance = MyClass()
instance.buttonClick(True)
instance.buttonClick(False)
</code></pre>
<p>You can also set the default value for the argument:</p>
<pre><code>def buttonClick(self, should_click_button=False):
if should_click_button:
self.driver.find_element_by_id('button').click()
</code></pre>
<p>Now, if you don't need to click the button, simply don't pass the argument:</p>
<pre><code>instance = MyClass()
instance.buttonClick(True)
instance.buttonClick()
</code></pre>
| 1 | 2016-09-27T02:51:40Z | [
"python",
"selenium"
]
|
How do I access each element in list in for-loop operations? | 39,715,121 | <p>I am looping through this list and don't understand why <code>print(d)</code> returns each number in <code>seed</code> but assigning <code>i["seed"] = d</code> assigns the last element of <code>seed</code>.</p>
<p>How do I access each element in <code>seed</code> for operations other than <code>print()</code>?</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
i = {}
for d in seed:
print(d)
i["seed"] = d
res.append(i)
print(res)
</code></pre>
<p>Thanks!</p>
| 2 | 2016-09-27T02:30:19Z | 39,715,163 | <p>The issue is there is only one object <code>i</code>, and you're replacing its value every time in the loop. By the last iteration your (one instance) of i has its value set to the last item you assigned it to. Each time you append to <code>res</code>, you're basically appending the a pointer to the same object (so you end up with 9 pointers pointing to the same object when your loop is finished).</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
i = {}
for d in seed:
print(d)
i["seed"] = d # this gets replaced every time through your loop!
res.append(i.copy()) # need to copy `i`, or else we're updating the same instance
print(res)
</code></pre>
| 0 | 2016-09-27T02:36:29Z | [
"python",
"for-loop"
]
|
How do I access each element in list in for-loop operations? | 39,715,121 | <p>I am looping through this list and don't understand why <code>print(d)</code> returns each number in <code>seed</code> but assigning <code>i["seed"] = d</code> assigns the last element of <code>seed</code>.</p>
<p>How do I access each element in <code>seed</code> for operations other than <code>print()</code>?</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
i = {}
for d in seed:
print(d)
i["seed"] = d
res.append(i)
print(res)
</code></pre>
<p>Thanks!</p>
| 2 | 2016-09-27T02:30:19Z | 39,715,165 | <p>To answer your first question, it is because thew value of seed is being overwritten, as shown here:</p>
<pre><code>>>> p = {'t':'H'}
>>> p['t'] = 'k'
>>> p
{'t': 'k'}
</code></pre>
<p>I am confused on your second part of the question. Elaborate more?</p>
| 0 | 2016-09-27T02:36:35Z | [
"python",
"for-loop"
]
|
How do I access each element in list in for-loop operations? | 39,715,121 | <p>I am looping through this list and don't understand why <code>print(d)</code> returns each number in <code>seed</code> but assigning <code>i["seed"] = d</code> assigns the last element of <code>seed</code>.</p>
<p>How do I access each element in <code>seed</code> for operations other than <code>print()</code>?</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
i = {}
for d in seed:
print(d)
i["seed"] = d
res.append(i)
print(res)
</code></pre>
<p>Thanks!</p>
| 2 | 2016-09-27T02:30:19Z | 39,715,172 | <p>The problem is where you are defining i. I <em>think</em> you intended to have a dictionary object named i that has single attribute named "seed". and you want to add those dictionaries to res.</p>
<p>In actual fact you only have one dictionary called i and you just update the value in it every time through the loop. </p>
<p>try this:</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for d in seed:
i = {}
print(d)
i["seed"] = d
res.append(i)
print(res)
</code></pre>
<p>That will create a new instance of i for each loop, and you should get the result you are looking for.</p>
| 1 | 2016-09-27T02:37:05Z | [
"python",
"for-loop"
]
|
How do I access each element in list in for-loop operations? | 39,715,121 | <p>I am looping through this list and don't understand why <code>print(d)</code> returns each number in <code>seed</code> but assigning <code>i["seed"] = d</code> assigns the last element of <code>seed</code>.</p>
<p>How do I access each element in <code>seed</code> for operations other than <code>print()</code>?</p>
<pre><code>res = []
seed = [1, 2, 3, 4, 5, 6, 7, 8, 9]
i = {}
for d in seed:
print(d)
i["seed"] = d
res.append(i)
print(res)
</code></pre>
<p>Thanks!</p>
| 2 | 2016-09-27T02:30:19Z | 39,715,176 | <p>Curly braces <code>{}</code> in python represent <a href="http://www.tutorialspoint.com/python/python_dictionary.htm" rel="nofollow">dictionaries</a> which works based on a key and value. Each time you iterate through your loop you overwrite the key: 'seed' with the current value of the list seed. So by the time the loop ends the last value of the list seed is the current value of seed in the dictionary i.</p>
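<p>A tiny illustration of that overwrite (standalone example values):</p>
<pre><code>>>> i = {}
>>> for d in [1, 2, 3]:
...     i["seed"] = d
...
>>> i
{'seed': 3}
</code></pre>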
| 0 | 2016-09-27T02:37:23Z | [
"python",
"for-loop"
]
|
Turning Python Object into a JSON string | 39,715,136 | <p>I have been working on this bot that takes the twitter api and brings it into my database. Before I was grabbing 1 tweet at a time which wasn't efficient considering I was using 1 request out of the limit they had. So instead I decided to grab 150 results. I get these results back:</p>
<p><code>[Status(ID=780587171757625344, ScreenName=Ampsx, Created=Tue Sep 27 01:57:39 +0000 2016, Text='You know who you are #memes').</code></p>
<p>I get about 150 of these. Is there a library where I can turn this into JSON?</p>
| 0 | 2016-09-27T02:32:44Z | 39,715,187 | <p>Yes, a quick Google search would have revealed the <a href="https://docs.python.org/2.7/library/json.html" rel="nofollow">json</a> module.</p>
<pre><code>import json
# instead of an empty list, create a list of dict objects
# representing the statues as you'd like to see them in JSON.
statuses = { 'statuses': [] }
with open('file.json', 'w') as f:
f.write(json.dumps(statuses).encode('utf-8'))
</code></pre>
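<p>If you want the actual tweet fields in the output rather than an empty list, here is a hedged sketch (the attribute names <code>ID</code>, <code>ScreenName</code>, <code>Created</code> and <code>Text</code> are taken from the printout in the question; adjust them to whatever your twitter library actually exposes):</p>
<pre><code>statuses = {
    'statuses': [
        {'id': s.ID, 'screen_name': s.ScreenName,
         'created': s.Created, 'text': s.Text}
        for s in results  # results = the list of Status objects from the API
    ]
}
with open('file.json', 'w') as f:
    f.write(json.dumps(statuses))
</code></pre>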
| 0 | 2016-09-27T02:39:13Z | [
"python",
"json"
]
|
Turning Python Object into a JSON string | 39,715,136 | <p>I have been working on this bot that takes the twitter api and brings it into my database. Before I was grabbing 1 tweet at a time which wasn't efficient considering I was using 1 request out of the limit they had. So instead I decided to grab 150 results. I get these results back:</p>
<p><code>[Status(ID=780587171757625344, ScreenName=Ampsx, Created=Tue Sep 27 01:57:39 +0000 2016, Text='You know who you are #memes').</code></p>
<p>I get about 150 of these. Is there a library where I can turn this into JSON?</p>
| 0 | 2016-09-27T02:32:44Z | 39,715,191 | <pre><code>import json
jsondata = json.dumps(TwitterStatusObject.__dict__)
</code></pre>
| 0 | 2016-09-27T02:40:03Z | [
"python",
"json"
]
|
Turning Python Object into a JSON string | 39,715,136 | <p>I have been working on this bot that takes the twitter api and brings it into my database. Before I was grabbing 1 tweet at a time which wasn't efficient considering I was using 1 request out of the limit they had. So instead I decided to grab 150 results. I get these results back:</p>
<p><code>[Status(ID=780587171757625344, ScreenName=Ampsx, Created=Tue Sep 27 01:57:39 +0000 2016, Text='You know who you are #memes').</code></p>
<p>I get about 150 of these. Is there a library where I can turn this into JSON?</p>
| 0 | 2016-09-27T02:32:44Z | 39,715,201 | <p>If you're using 2.6+ there's a bundled library you can use (<a href="https://docs.python.org/2.7/library/json.html" rel="nofollow">docs</a>), just:</p>
<pre><code>import json
json_string = json.dumps(object)
</code></pre>
<p>We use this a lot for quick API endpoints, you just need to be careful about having functions or complex nesting in the objects you're trying to serialize, it's quite configurable (so you can skip fields, customize output of some, etc.) but can get messy pretty quickly.</p>
| 2 | 2016-09-27T02:42:22Z | [
"python",
"json"
]
|
tensorflow python 2.7 interactive shell: function call fails | 39,715,183 | <p><a href="http://i.stack.imgur.com/PieQ0.png" rel="nofollow"><img src="http://i.stack.imgur.com/PieQ0.png" alt="enter image description here"></a></p>
<p>I am studying the Udacity course Deep Learning, which teaches Google's TensorFlow. In the Python interactive shell, I define the function softmax, and when I invoke it, it reports a syntax error. Why? Any hint?</p>
<p>I have import numpy as np</p>
<pre><code>>>> import numpy as np
>>> import tensorflow as tf
>>> def softmax(x):
... return np.exp(x);
... softmax([1])
SyntaxError: invalid syntax
</code></pre>
<p>Is there a requirement for an exact number of indentation spaces? </p>
| 0 | 2016-09-27T02:38:30Z | 39,715,262 | <p>When using the interactive shell, make sure that the <code>...</code> prompt is gone before calling the function. The interpreter includes anything typed at the <code>...</code> prompt in the current function, method, while, or for block. To escape from the <code>...</code> prompt, just hit enter on an empty line and then call the function from <code>>>></code>. </p>
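<p>For comparison, this is what the working session looks like (the printed value assumes plain numpy defaults):</p>
<pre><code>>>> def softmax(x):
...     return np.exp(x)
...
>>> softmax([1])
array([ 2.71828183])
</code></pre>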
| 1 | 2016-09-27T02:50:28Z | [
"python",
"tensorflow"
]
|
Perfect numbers program python | 39,715,186 | <p>I'm taking an intro course for python, and in one excercise we are to write a function where we type in a number and return bool True or False, if the number is a perfect number. Then we are to create another function that takes an upperlimit and will check every number up until that limit and if it is a perfect number, to print the perfect number. As of now my problem is with the 2nd half of this excersie, and instead of printing out the perfect number, it will print out how many there are with "True". Again the first function IS suppouse to return True or False, so I'm not sure how we can get the 2nd function to print out the actual number!</p>
<pre><code>def perfect(num):
x=1
adding=0
while x<num:
if num % x == 0:
adding=adding+x
x=x+1
if adding==num:
#print(num)
return (adding==num)
else:
return False
def perfectList(upperlimit):
x=1
while x<upperlimit:
if perfect(x)==True:
print(perfect(x))
x=x+1
</code></pre>
| 0 | 2016-09-27T02:39:03Z | 39,715,209 | <p>Your second function is very close.</p>
<pre><code>def perfectList(upperlimit):
x=1
while x < upperlimit:
if perfect(x)==True:
print(x) # changed from print(perfect(x))
x=x+1
</code></pre>
<p>you just needed to change to <code>print(x)</code> (the number) not <code>print(perfect(x))</code> , which returns whether the number is a perfect number.</p>
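<p>For reference, a quick run of the fixed version (illustrative output):</p>
<pre><code>>>> perfectList(1000)
6
28
496
</code></pre>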
| 0 | 2016-09-27T02:43:43Z | [
"python",
"function",
"perfect-numbers"
]
|
making a function that can take arguments in various shapes | 39,715,227 | <p>Q1)
Numpy functions can take arguments in different shapes. For instance, np.sum(V) can take either of two below and return outputs with different shapes.</p>
<pre><code>x1= np.array( [1,3] ) #(1)
x2= np.array([[[1,2],[3,4]], [[5,6],[7,8]]]) #(2)
</code></pre>
<p>I am making my own function something like below, which adds two values in an 1D vector with the length of two and return the real number.</p>
<pre><code>def foo(V):
return V[0]+V[1];
</code></pre>
<p>However, this foo function can only take one 1D vector and cannot take any other shapes. It can only take x1 above as an argument but not x2. If I want to make my function work with either of two variables above(x1 and x2), or with any other shapes that has arrays with the length of 2 in their last dimension, how should I revise my foo function?</p>
<hr>
<p>---------------------------update------------------------------</p>
<p>My original function was a hardcoded negative gaussian pdf function.</p>
<pre><code>def nGauss(X, mu, cov):
# multivariate negative gaussian.
# mu is a vector and cov is a covariance matrix.
k = X.shape[0];
dev = X-mu
p1 = np.power( np.power(np.pi * 2, k) , -0.5);
p2 = np.power( np.linalg.det(cov) , -0.5)
p3 = np.exp( -0.5 * np.dot( np.dot(dev.transpose(), np.linalg.inv(cov)), dev));
return -1.0 * p1 * p2 * p3;
</code></pre>
<p>Now his function can return only one pdf value. For example, it can only take arguments like np.array([1,2]), but cannot take arguments X like np.array([[[1,2], [5,6]], [[7,8],[9,0]]]). Here my question was how to make my gaussian function takes arguments of arbitrary shapes and return the pdf value of each point maintaining the same structure except the last dimension, such as
<code>nGauss(np.array( [1,2] ), mu, cov)</code> returns [ 0.000023 ], and
<code>nGauss(np.array([[[1,2], [5,6]], [[7,8],[9,0]]]), mu, cov)</code> returns [[ 0.000023, 0000014], [0.000012, 0.000042]].</p>
<p>I notice that scipy function 'multivariate_normal.pdf' can do this.</p>
<hr>
<p>Q2)
I am also having a difficulty in understanding np's basic array.</p>
<pre><code>t1=np.array([[1,2,3], [4,5,6]])
t2=np.array([1,2,3])
t3=np.array([[1,2,3], [4,5,6],5])
</code></pre>
<p>The shape of t1 is (2,3), and it seems legitimate in terms of matrix perspective; 2 rows and 3 columns. However, the shape of t2 is (3,), which I think has to be (3). What's the meaning of the empty space after "3,"? Also, the shape of t3 is (3,). In this case, is the meaning of the empty space that dimensions vary? </p>
<p>In advance, thanks for your help.</p>
| 3 | 2016-09-27T02:45:46Z | 39,715,281 | <p>for Q1 you can pack and unpack arguments:</p>
<pre><code>def foo(*args):
result = []
for v in args:
result.append(v[0] + v[1])
return result
</code></pre>
<p>This will allow you to pass in as many vector arguments as you want, then iterate over them, returning a list with one result per argument. You can also pack and unpack kwargs with **. More info here:</p>
<p><a href="https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists</a></p>
| 1 | 2016-09-27T02:52:54Z | [
"python",
"numpy"
]
|
making a function that can take arguments in various shapes | 39,715,227 | <p>Q1)
Numpy functions can take arguments in different shapes. For instance, np.sum(V) can take either of two below and return outputs with different shapes.</p>
<pre><code>x1= np.array( [1,3] ) #(1)
x2= np.array([[[1,2],[3,4]], [[5,6],[7,8]]]) #(2)
</code></pre>
<p>I am making my own function something like below, which adds two values in an 1D vector with the length of two and return the real number.</p>
<pre><code>def foo(V):
return V[0]+V[1];
</code></pre>
<p>However, this foo function can only take one 1D vector and cannot take any other shapes. It can only take x1 above as an argument but not x2. If I want to make my function work with either of two variables above(x1 and x2), or with any other shapes that has arrays with the length of 2 in their last dimension, how should I revise my foo function?</p>
<hr>
<p>---------------------------update------------------------------</p>
<p>My original function was a hardcoded negative gaussian pdf function.</p>
<pre><code>def nGauss(X, mu, cov):
# multivariate negative gaussian.
# mu is a vector and cov is a covariance matrix.
k = X.shape[0];
dev = X-mu
p1 = np.power( np.power(np.pi * 2, k) , -0.5);
p2 = np.power( np.linalg.det(cov) , -0.5)
p3 = np.exp( -0.5 * np.dot( np.dot(dev.transpose(), np.linalg.inv(cov)), dev));
return -1.0 * p1 * p2 * p3;
</code></pre>
<p>Now his function can return only one pdf value. For example, it can only take arguments like np.array([1,2]), but cannot take arguments X like np.array([[[1,2], [5,6]], [[7,8],[9,0]]]). Here my question was how to make my gaussian function takes arguments of arbitrary shapes and return the pdf value of each point maintaining the same structure except the last dimension, such as
<code>nGauss(np.array( [1,2] ), mu, cov)</code> returns [ 0.000023 ], and
<code>nGauss(np.array([[[1,2], [5,6]], [[7,8],[9,0]]]), mu, cov)</code> returns [[ 0.000023, 0000014], [0.000012, 0.000042]].</p>
<p>I notice that scipy function 'multivariate_normal.pdf' can do this.</p>
<hr>
<p>Q2)
I am also having a difficulty in understanding np's basic array.</p>
<pre><code>t1=np.array([[1,2,3], [4,5,6]])
t2=np.array([1,2,3])
t3=np.array([[1,2,3], [4,5,6],5])
</code></pre>
<p>The shape of t1 is (2,3), and it seems legitimate in terms of matrix perspective; 2 rows and 3 columns. However, the shape of t2 is (3,), which I think has to be (3). What's the meaning of the empty space after "3,"? Also, the shape of t3 is (3,). In this case, is the meaning of the empty space that dimensions vary? </p>
<p>In advance, thanks for your help.</p>
| 3 | 2016-09-27T02:45:46Z | 39,715,470 | <p>For Q1, I'm guessing you want to add the innermost dimensions of your arrays, regardless of how many dimensions the arrays have. The simplest way to do this is to use <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing" rel="nofollow">ellipsis indexing</a>. Here's a detailed example:</p>
<pre><code>>>> a = np.arange(24).reshape((3, 4, 2))
>>> a
array([[[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11],
[12, 13],
[14, 15]],
[[16, 17],
[18, 19],
[20, 21],
[22, 23]]])
>>> a[..., 0]
array([[ 0, 2, 4, 6],
[ 8, 10, 12, 14],
[16, 18, 20, 22]])
>>> a[..., 1]
array([[ 1, 3, 5, 7],
[ 9, 11, 13, 15],
[17, 19, 21, 23]])
>>> a[..., 0] + a[..., 1]
array([[ 1, 5, 9, 13],
[17, 21, 25, 29],
[33, 37, 41, 45]])
</code></pre>
<p>This works equally well for a 1D array:</p>
<pre><code>>>> a = np.array([1, 2])
>>> a[..., 0] + a[..., 1]
3
</code></pre>
<p>So just define <code>foo</code> as:</p>
<pre><code>def foo(V):
return V[..., 0] + V[..., 1]
</code></pre>
<hr>
<p>For your <code>nGauss</code> function, the simplest solution is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html" rel="nofollow"><code>np.apply_along_axis</code></a>. For example, you would call it like this:</p>
<pre><code>>>> np.apply_along_axis(nGauss, -1, x1, mu, cov)
</code></pre>
| 1 | 2016-09-27T03:22:29Z | [
"python",
"numpy"
]
|
making a function that can take arguments in various shapes | 39,715,227 | <p>Q1)
Numpy functions can take arguments in different shapes. For instance, np.sum(V) can take either of two below and return outputs with different shapes.</p>
<pre><code>x1= np.array( [1,3] ) #(1)
x2= np.array([[[1,2],[3,4]], [[5,6],[7,8]]]) #(2)
</code></pre>
<p>I am making my own function something like below, which adds two values in an 1D vector with the length of two and return the real number.</p>
<pre><code>def foo(V):
return V[0]+V[1];
</code></pre>
<p>However, this foo function can only take one 1D vector and cannot take any other shapes. It can only take x1 above as an argument but not x2. If I want to make my function work with either of two variables above(x1 and x2), or with any other shapes that has arrays with the length of 2 in their last dimension, how should I revise my foo function?</p>
<hr>
<p>---------------------------update------------------------------</p>
<p>My original function was a hardcoded negative gaussian pdf function.</p>
<pre><code>def nGauss(X, mu, cov):
# multivariate negative gaussian.
# mu is a vector and cov is a covariance matrix.
k = X.shape[0];
dev = X-mu
p1 = np.power( np.power(np.pi * 2, k) , -0.5);
p2 = np.power( np.linalg.det(cov) , -0.5)
p3 = np.exp( -0.5 * np.dot( np.dot(dev.transpose(), np.linalg.inv(cov)), dev));
return -1.0 * p1 * p2 * p3;
</code></pre>
<p>Now his function can return only one pdf value. For example, it can only take arguments like np.array([1,2]), but cannot take arguments X like np.array([[[1,2], [5,6]], [[7,8],[9,0]]]). Here my question was how to make my gaussian function takes arguments of arbitrary shapes and return the pdf value of each point maintaining the same structure except the last dimension, such as
<code>nGauss(np.array( [1,2] ), mu, cov)</code> returns [ 0.000023 ], and
<code>nGauss(np.array([[[1,2], [5,6]], [[7,8],[9,0]]]), mu, cov)</code> returns [[ 0.000023, 0000014], [0.000012, 0.000042]].</p>
<p>I notice that scipy function 'multivariate_normal.pdf' can do this.</p>
<hr>
<p>Q2)
I am also having a difficulty in understanding np's basic array.</p>
<pre><code>t1=np.array([[1,2,3], [4,5,6]])
t2=np.array([1,2,3])
t3=np.array([[1,2,3], [4,5,6],5])
</code></pre>
<p>The shape of t1 is (2,3), and it seems legitimate in terms of matrix perspective; 2 rows and 3 columns. However, the shape of t2 is (3,), which I think has to be (3). What's the meaning of the empty space after "3,"? Also, the shape of t3 is (3,). In this case, is the meaning of the empty space that dimensions vary? </p>
<p>In advance, thanks for your help.</p>
| 3 | 2016-09-27T02:45:46Z | 39,715,487 | <p>Your function works with both arrays:</p>
<pre><code>In [1]: def foo(V):
...: return V[0]+V[1]
...:
In [2]: foo(np.array([1,3]))
Out[2]: 4
In [3]: foo(np.array([[[1,2],[3,4]], [[5,6],[7,8]]]))
Out[3]:
array([[ 6, 8],
[10, 12]])
</code></pre>
<p>This answer is just the sum of these two arrays:</p>
<pre><code>In [4]: np.array([[[1,2],[3,4]], [[5,6],[7,8]]])[0]
Out[4]:
array([[1, 2],
[3, 4]])
In [5]: np.array([[[1,2],[3,4]], [[5,6],[7,8]]])[1]
Out[5]:
array([[5, 6],
[7, 8]])
</code></pre>
<p>If you expected something else, you'll have to show us.</p>
<p>As for your second question:</p>
<pre><code>In [6]: t1=np.array([[1,2,3], [4,5,6]])
...: t2=np.array([1,2,3])
...: t3=np.array([[1,2,3], [4,5,6],5])
...:
In [7]: t1.shape
Out[7]: (2, 3)
In [8]: t2.shape
Out[8]: (3,)
In [9]: t3.shape
Out[9]: (3,)
</code></pre>
<p><code>(3,)</code> is a 1 element tuple. Compare these expressions.</p>
<pre><code>In [11]: (3)
Out[11]: 3
In [12]: (3,)
Out[12]: (3,)
</code></pre>
<p>There have been several recent questions about (3,) v (3,1) shape arrays, and <code>np.array([[1,2,3]])</code> v. <code>np.array([1,2,3])</code>.</p>
<p><code>t3</code> is an object dtype array, with 3 elements. The 3 inputs are different length, so it can't create a 2d array. Stay away from this type of array for now. Focus on the simpler arrays.</p>
<pre><code>In [10]: t3
Out[10]: array([[1, 2, 3], [4, 5, 6], 5], dtype=object)
In [13]: t3[0]
Out[13]: [1, 2, 3]
In [14]: t3[2]
Out[14]: 5
</code></pre>
<p><a href="http://stackoverflow.com/questions/39706277/numpy-why-is-difference-of-a-2-1-array-and-a-vertical-matrix-slice-not-a-2-1">Numpy: Why is difference of a (2,1) array and a vertical matrix slice not a (2,1) array</a></p>
<p><a href="http://stackoverflow.com/questions/39694318/difference-between-single-and-double-bracket-numpy-array">Difference between single and double bracket Numpy array?</a></p>
<p>=====================</p>
<p>With the <code>nGauss</code>:</p>
<pre><code>In [53]: mu=np.array([0,0])
In [54]: cov=np.eye(2)
In [55]: xx=np.array([[[1,2], [5,6]], [[7,8],[9,0]]])
In [56]: np.apply_along_axis(nGauss, -1, xx, mu, cov)
Out[56]:
array([[ -1.30642333e-02, -9.03313360e-15],
[ -4.61510838e-26, -4.10103631e-19]])
</code></pre>
<p><code>apply_along_axis</code> iterates on the 1st 2 dim, passing each <code>xx[i,j,:]</code> to <code>nGauss</code>. It's not fast, but is relatively easy to apply.</p>
<pre><code>k = X.shape[0]; # I assume you want
k = X.shape[-1] # the last dimension
dev = X-mu # works as long as mu has k terms
</code></pre>
<p>this is a scalar:</p>
<pre><code>p1 = np.power( np.power(np.pi * 2, k) , -0.5);
</code></pre>
<p>so is</p>
<pre><code>p2 = np.power( np.linalg.det(cov) , -0.5)
</code></pre>
<p>So it comes down to generalizing this expression:</p>
<pre><code>p3 = np.exp( -0.5 * np.dot( np.dot(dev.transpose(), np.linalg.inv(cov)), dev));
</code></pre>
<p>In the simple (2,) <code>x</code> case, <code>dev</code> is 1d, and <code>dev.transpose()</code> does nothing.</p>
<p>It's easier to generalize <code>einsum</code> than <code>dot</code>; I think the equivalent is:</p>
<pre><code>p3 = np.einsum('j,j', np.einsum('i,ij', dev, np.linalg.inv(cov)), dev)
p3 = np.exp( -0.5 * p3)
</code></pre>
<p>which simplifies to</p>
<pre><code>p3 = np.einsum('i,ij,j', dev, np.linalg.inv(cov), dev)
</code></pre>
<p>generalizing to higher dim:</p>
<pre><code>p3 = np.einsum('...i,ij,...j', dev, np.linalg.inv(cov), dev)
</code></pre>
<p>So with:</p>
<pre><code>def nGaussA(X, mu, cov):
# multivariate negative gaussian.
# mu is a vector and cov is a covariance matrix.
k = X.shape[-1];
dev = X-mu
p1 = np.power( np.power(np.pi * 2, k) , -0.5);
p2 = np.power( np.linalg.det(cov) , -0.5)
p3 = np.einsum('...i,ij,...j', dev, np.linalg.inv(cov), dev)
p3 = np.exp( -0.5 * p3)
return -1.0 * p1 * p2 * p3;
</code></pre>
<p>matching earlier values:</p>
<pre><code>In [85]: nGaussA(x,mu,cov)
Out[85]: -0.013064233284684921
In [86]: nGaussA(xx,mu,cov)
Out[86]:
array([[ -1.30642333e-02, -9.03313360e-15],
[ -4.61510838e-26, -4.10103631e-19]])
</code></pre>
<p>So the way to generalize the function is to check each step. If it produces a scalar, keep it. If operates with an <code>x</code> keep it. But if it requires coordinating dimensions with other arrays, use a numpy operation that does that. Often that involves broadcasting. Sometimes it helps to study other numpy functions to see how they generalize (e.g. <code>apply_along_axis</code>, <code>apply_over_axes</code>, <code>cross</code>, etc).</p>
<p>An interactive numpy session is essential; allowing me to try ideas with small sample arrays.</p>
| 2 | 2016-09-27T03:24:47Z | [
"python",
"numpy"
]
|
slice a list based on some values | 39,715,311 | <p>Hi I'm looking for a way to split a list based on some values, and assuming the list's length equals to sum of some values, e.g.:</p>
<p>list: <code>l = ['a','b','c','d','e','f']</code>
values: <code>v = (1,1,2,2)</code>
so <code>len(l) = sum(v)</code></p>
<p>and I'd like to have a function to return a tuple or a list, like: <code>(['a'], ['b'], ['c','d'], ['e','f'])</code></p>
<p>currently my code is like:</p>
<pre><code>(list1,list2,list3,list4) = (
l[0:v[0]],
l[v[0]:v[0]+v[1]],
l[v[0]+v[1]:v[0]+v[1]+v[2]],
l[v[0]+v[1]+v[2]:v[0]+v[1]+v[2]+v[3]])`
</code></pre>
<p>I'm thinking about make this clearer, but closest one I have so far is (note the results are incorrect, not what I wanted)</p>
<pre><code>s=0
[list1,list2,list3,list4] = [l[s:s+i] for i in v]
</code></pre>
<p>the problem is I couldn't increase <code>s</code> at the same time while iterating values in <code>v</code>, I'm hoping to get a better code to do so, any suggestion is appreciated, thanks!</p>
| 0 | 2016-09-27T02:57:21Z | 39,715,361 | <p>If you weren't stuck on ancient Python, I'd point you to <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow"><code>itertools.accumulate</code></a>. Of course, even on ancient Python, you could use the (roughly) equivalent code provided in the docs I linked to do it. Using either the Py3 code or equivalent, you could do:</p>
<pre><code>from itertools import accumulate # Or copy accumulate equivalent Python code
from itertools import chain
# Calls could be inlined in listcomp, but easier to read here
starts = accumulate(chain((0,), v)) # Extra value from starts ignored when ends exhausted
ends = accumulate(v)
list1,list2,list3,list4 = [l[s:e] for s, e in zip(starts, ends)]
</code></pre>
| 2 | 2016-09-27T03:07:06Z | [
"python",
"python-2.7"
]
|
slice a list based on some values | 39,715,311 | <p>Hi I'm looking for a way to split a list based on some values, and assuming the list's length equals to sum of some values, e.g.:</p>
<p>list: <code>l = ['a','b','c','d','e','f']</code>
values: <code>v = (1,1,2,2)</code>
so <code>len(l) = sum(v)</code></p>
<p>and I'd like to have a function to return a tuple or a list, like: <code>(['a'], ['b'], ['c','d'], ['e','f'])</code></p>
<p>currently my code is like:</p>
<pre><code>(list1,list2,list3,list4) = (
l[0:v[0]],
l[v[0]:v[0]+v[1]],
l[v[0]+v[1]:v[0]+v[1]+v[2]],
l[v[0]+v[1]+v[2]:v[0]+v[1]+v[2]+v[3]])`
</code></pre>
<p>I'm thinking about make this clearer, but closest one I have so far is (note the results are incorrect, not what I wanted)</p>
<pre><code>s=0
[list1,list2,list3,list4] = [l[s:s+i] for i in v]
</code></pre>
<p>the problem is I couldn't increase <code>s</code> at the same time while iterating values in <code>v</code>, I'm hoping to get a better code to do so, any suggestion is appreciated, thanks!</p>
| 0 | 2016-09-27T02:57:21Z | 39,715,362 | <p>You could just write a simple loop to iterate over <code>v</code> to generate a result:</p>
<pre><code>l = ['a','b','c','d','e','f']
v = (1,1,2,2)
result = []
offset = 0
for size in v:
result.append(l[offset:offset+size])
offset += size
print result
</code></pre>
<p>Output:</p>
<pre><code>[['a'], ['b'], ['c', 'd'], ['e', 'f']]
</code></pre>
| 1 | 2016-09-27T03:07:18Z | [
"python",
"python-2.7"
]
|
slice a list based on some values | 39,715,311 | <p>Hi I'm looking for a way to split a list based on some values, and assuming the list's length equals to sum of some values, e.g.:</p>
<p>list: <code>l = ['a','b','c','d','e','f']</code>
values: <code>v = (1,1,2,2)</code>
so <code>len(l) = sum(v)</code></p>
<p>and I'd like to have a function to return a tuple or a list, like: <code>(['a'], ['b'], ['c','d'], ['e','f'])</code></p>
<p>currently my code is like:</p>
<pre><code>(list1,list2,list3,list4) = (
l[0:v[0]],
l[v[0]:v[0]+v[1]],
l[v[0]+v[1]:v[0]+v[1]+v[2]],
l[v[0]+v[1]+v[2]:v[0]+v[1]+v[2]+v[3]])`
</code></pre>
<p>I'm thinking about make this clearer, but closest one I have so far is (note the results are incorrect, not what I wanted)</p>
<pre><code>s=0
[list1,list2,list3,list4] = [l[s:s+i] for i in v]
</code></pre>
<p>the problem is I couldn't increase <code>s</code> at the same time while iterating values in <code>v</code>, I'm hoping to get a better code to do so, any suggestion is appreciated, thanks!</p>
| 0 | 2016-09-27T02:57:21Z | 39,715,366 | <p>Maybe make a generator of the values in l?</p>
<pre><code>def make_list(l, v):
g = (x for x in l)
if len(l) == sum(v):
return [[next(g) for _ in range(val)] for val in v]
return None
</code></pre>
| 1 | 2016-09-27T03:07:41Z | [
"python",
"python-2.7"
]
|
slice a list based on some values | 39,715,311 | <p>Hi I'm looking for a way to split a list based on some values, and assuming the list's length equals to sum of some values, e.g.:</p>
<p>list: <code>l = ['a','b','c','d','e','f']</code>
values: <code>v = (1,1,2,2)</code>
so <code>len(l) = sum(v)</code></p>
<p>and I'd like to have a function to return a tuple or a list, like: <code>(['a'], ['b'], ['c','d'], ['e','f'])</code></p>
<p>currently my code is like:</p>
<pre><code>(list1,list2,list3,list4) = (
l[0:v[0]],
l[v[0]:v[0]+v[1]],
l[v[0]+v[1]:v[0]+v[1]+v[2]],
l[v[0]+v[1]+v[2]:v[0]+v[1]+v[2]+v[3]])`
</code></pre>
<p>I'm thinking about make this clearer, but closest one I have so far is (note the results are incorrect, not what I wanted)</p>
<pre><code>s=0
[list1,list2,list3,list4] = [l[s:s+i] for i in v]
</code></pre>
<p>the problem is I couldn't increase <code>s</code> at the same time while iterating values in <code>v</code>, I'm hoping to get a better code to do so, any suggestion is appreciated, thanks!</p>
| 0 | 2016-09-27T02:57:21Z | 39,715,413 | <p>The idea here is using a nested loop. Assuming that your condition will always holds true, the logic then is to run through <code>v</code> and pick up <code>i</code> elements from <code>l</code> where <code>i</code> is an number from <code>v</code>. </p>
<pre><code>index = 0 # this is the start index
for num in v:
temp = [] # this is a temp array, to hold individual elements in your result array.
for j in range(index, index+num): # this loop will pickup the next num elements from l
temp.append(l[j])
data.append(temp)
index += num
</code></pre>
<p>Output:
<code>[['a'], ['b'], ['c', 'd'], ['e', 'f']]</code></p>
<p>The first answer <a href="http://stackoverflow.com/a/39715361/5759063">http://stackoverflow.com/a/39715361/5759063</a> is the most pythonic way to do it. This is just the algorithmic backbone. </p>
| 1 | 2016-09-27T03:13:54Z | [
"python",
"python-2.7"
]
|
slice a list based on some values | 39,715,311 | <p>Hi I'm looking for a way to split a list based on some values, and assuming the list's length equals to sum of some values, e.g.:</p>
<p>list: <code>l = ['a','b','c','d','e','f']</code>
values: <code>v = (1,1,2,2)</code>
so <code>len(l) = sum(v)</code></p>
<p>and I'd like to have a function to return a tuple or a list, like: <code>(['a'], ['b'], ['c','d'], ['e','f'])</code></p>
<p>currently my code is like:</p>
<pre><code>(list1,list2,list3,list4) = (
l[0:v[0]],
l[v[0]:v[0]+v[1]],
l[v[0]+v[1]:v[0]+v[1]+v[2]],
l[v[0]+v[1]+v[2]:v[0]+v[1]+v[2]+v[3]])`
</code></pre>
<p>I'm thinking about make this clearer, but closest one I have so far is (note the results are incorrect, not what I wanted)</p>
<pre><code>s=0
[list1,list2,list3,list4] = [l[s:s+i] for i in v]
</code></pre>
<p>the problem is I couldn't increase <code>s</code> at the same time while iterating values in <code>v</code>, I'm hoping to get a better code to do so, any suggestion is appreciated, thanks!</p>
| 0 | 2016-09-27T02:57:21Z | 39,715,461 | <p>Best I could find is a two line solution:</p>
<pre><code>breaks=[0]+[sum(v[:i+1]) for i in range(len(v))] #build a list of section indices
result=[l[breaks[i]:breaks[i+1]] for i in range(len(breaks)-1)] #split array according to indices
print result
</code></pre>
| 1 | 2016-09-27T03:21:29Z | [
"python",
"python-2.7"
]
|
How to put if and then statements while creating snowflakes in python | 39,715,354 | <p>I'm a beginner, so I'm not sure how to make sure that the snowflakes don't overlap. Thanks!</p>
<pre><code>import turtle
turtle.right(90)
turtle.penup()
turtle.goto(-700,300)
turtle.pendown()
def snowflakebranch(n):
turtle.forward(n*4)
for i in range(3):
turtle.backward(n)
turtle.right(45)
turtle.forward(n)
turtle.backward(n)
turtle.left(90)
turtle.forward(n)
turtle.backward(n)
turtle.right(45)
def snowflake(n):
for i in range(8):
snowflakebranch(n)
turtle.backward(n)
turtle.right(45)
import random
turtle.colormode(255)
turtle.tracer(0)
for i in range(35):
r = random.randint(0, 255)
g = random.randint(0, 255)
b = random.randint(0, 255)
turtle.color(r, g, b)
x = random.randint(-500, 500)
y = random.randint(-500, 500)
d = random.randint(6, 16)
snowflake(d)
turtle.penup()
turtle.goto(x, y)
#turtle.forward(250)
turtle.pendown()
turtle.update()
</code></pre>
| 3 | 2016-09-27T03:05:12Z | 39,716,593 | <p>One approach would be to calculate a bounding rectangle (or circle) for each snowflake. Save these as a list or a set. Whenever you plan to make a new snowflake, first check if its bounding rectangle (or circle) overlaps with the bounds of any previous snowflakes. If it does, don't draw it. If it doesn't, draw it and save its bounds too. An incomplete outline of this approach:</p>
<pre><code>import turtle
import random
def snowflakebranch(n):
turtle.forward(n * 4)
for _ in range(3):
turtle.backward(n)
turtle.right(45)
turtle.forward(n)
turtle.backward(n)
turtle.left(90)
turtle.forward(n)
turtle.backward(n)
turtle.right(45)
def snowflake(n):
for _ in range(8):
snowflakebranch(n)
turtle.backward(n)
turtle.right(45)
def overlapping(bounds_list, bounds):
for previous in bounds_list:
if overlap(previous, bounds):
return True
return False
def overlap(b1, b2):
# return True or False if these two rectangles or circles overlap
pass
turtle.penup()
turtle.colormode(255)
turtle.tracer(0)
previous_bounds = []
i = 0
while i < 35:
x = random.randint(-500, 500)
y = random.randint(-500, 500)
turtle.goto(x, y)
r = random.randint(0, 255)
g = random.randint(0, 255)
b = random.randint(0, 255)
turtle.color(r, g, b)
turtle.pendown()
d = random.randint(6, 16)
# work out the bounding rectangle or circle based on 'd', 'x' & 'y'
# e.g. (x, y, width & height) or (x, y, radius)
bounds = ( ... )
if not overlapping(previous_bounds, bounds):
snowflake(d)
turtle.update()
previous_bounds.append(bounds)
i += 1
turtle.penup()
turtle.done()
</code></pre>
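<p>If you do want them fully separated, one possible way to fill in the <code>overlap</code> stub is a bounding-circle test. This assumes each <code>bounds</code> tuple is <code>(x, y, radius)</code>, e.g. <code>bounds = (x, y, d * 5)</code> — the factor 5 is only a guess based on the branch length <code>n * 4</code> and may need tuning:</p>
<pre><code>import math

def overlap(b1, b2):
    # b1 and b2 are assumed to be (x, y, radius) tuples
    x1, y1, r1 = b1
    x2, y2, r2 = b2
    # two circles overlap when the distance between their centres
    # is less than the sum of their radii
    return math.hypot(x2 - x1, y2 - y1) < (r1 + r2)
</code></pre>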
<p>An image of non-overlapping snowflakes using the above logic with the bounding circles also displayed:</p>
<p><a href="http://i.stack.imgur.com/CuJMQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/CuJMQ.png" alt="enter image description here"></a></p>
<p>I actually like the look of your overlapping snowflakes. Even if you want overlap, the above logic will allow you to control how much overlap.</p>
| 0 | 2016-09-27T05:25:47Z | [
"python",
"turtle-graphics"
]
|
Using XPath with Scrapy | 39,715,429 | <p>I am new to using Scrapy and am trying to get all the URLs of the listings on the page using XPath.</p>
<p>The first xpath works</p>
<pre><code>sel.xpath('//[contains(@class, "attraction_element")]')
</code></pre>
<p>but the second xpath is giving an error</p>
<pre><code>get_parsed_string(snode_attraction, '//[@class="property_title"]/a/@href')
</code></pre>
<p>What is wrong and how can we fix it?</p>
<p><strong>Scrapy Code</strong></p>
<pre><code>def clean_parsed_string(string):
if len(string) > 0:
ascii_string = string
if is_ascii(ascii_string) == False:
ascii_string = unicodedata.normalize('NFKD', ascii_string).encode('ascii', 'ignore')
return str(ascii_string)
else:
return None
def get_parsed_string(selector, xpath):
return_string = ''
extracted_list = selector.xpath(xpath).extract()
if len(extracted_list) > 0:
raw_string = extracted_list[0].strip()
if raw_string is not None:
return_string = htmlparser.unescape(raw_string)
return return_string
class TripAdvisorSpider(Spider):
name = 'tripadvisor'
allowed_domains = ["tripadvisor.com"]
base_uri = "http://www.tripadvisor.com"
start_urls = [
base_uri + '/Attractions-g155032-Activities-c47-t163-Montreal_Quebec.html'
]
# Entry point for BaseSpider
def parse(self, response):
tripadvisor_items = []
sel = Selector(response)
snode_attractions = sel.xpath('//[contains(@class, "attraction_element")]')
# Build item index
for snode_attraction in snode_attractions:
print clean_parsed_string(get_parsed_string(snode_attraction, '//[@class="property_title"]/a/@href'))
</code></pre>
| 2 | 2016-09-27T03:16:18Z | 39,715,490 | <p>Both are not valid XPath expressions, you need to add the tag names after the <code>//</code>. You can also use a wildcard <code>*</code>:</p>
<pre><code>snode_attractions = sel.xpath('//*[contains(@class, "attraction_element")]')
</code></pre>
<p>Note that, aside from that, your second XPath expression that is used in a loop has to be context specific and start with a dot:</p>
<pre><code># Build item index
for snode_attraction in snode_attractions:
print clean_parsed_string(get_parsed_string(snode_attraction, './/*[@class="property_title"]/a/@href'))
</code></pre>
<p>Also note that you don't need to instantiate a <code>Selector</code> object and can use the <code>response.xpath()</code> shortcut directly.</p>
<hr>
<p>Note that a more concise and, arguably, more readable version of the same logic implementation would be to use <em>CSS selectors</em>:</p>
<pre><code>snode_attractions = response.css('.attraction_element')
for snode_attraction in snode_attractions:
print snode_attraction.css('.property_title > a::attr("href")').extract_first()
</code></pre>
| 3 | 2016-09-27T03:25:40Z | [
"python",
"python-2.7",
"xpath",
"scrapy"
]
|
How to put an overlay on a video | 39,715,472 | <p>I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead!</p>
| 0 | 2016-09-27T03:22:46Z | 39,716,115 | <p>What you need are 2 Mat objects- one to stream the camera (say Mat_cam), and the other to hold the overlay (Mat_overlay).</p>
<p>When you draw on your main window, save the line and Rect objects on Mat_overlay, and make sure that it is not affected by the streaming video</p>
<p>When the next frame is received, Mat_cam will be updated and it'll have the next video frame, but Mat_overlay will be the same, since it will not be cleared/refreshed with every 'for' loop iteration. Adding Mat_overlay and Mat_cam using Weighted addition will give you the desired result.</p>
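<p>A minimal Python sketch of that two-image idea (the drawing call, window name and blending weights below are only illustrative examples, not taken from the question):</p>
<pre><code>import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
# persistent overlay image, created once so drawings survive across frames
overlay = np.zeros_like(frame)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # draw onto the overlay instead of the camera frame itself
    cv2.line(overlay, (0, 0), (100, 100), (0, 255, 0), 2)
    # weighted addition of the camera frame and the overlay
    combined = cv2.addWeighted(frame, 1.0, overlay, 0.7, 0)
    cv2.imshow('video', combined)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>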
| 0 | 2016-09-27T04:43:31Z | [
"python",
"opencv",
"video",
"overlay"
]
|
How to put an overlay on a video | 39,715,472 | <p>I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead!</p>
| 0 | 2016-09-27T03:22:46Z | 39,721,387 | <p>I am not sure that I have understood your question properly. What I got from your question is that you want the overlay to remain on your frame streamed from VideoCapture. One simple solution for that is to declare your "Mat_cam" (camera streaming variable) outside the loop that is used to capture frames, so that the "Mat_cam" variable is not freed every time you loop through it.</p>
| 0 | 2016-09-27T09:45:30Z | [
"python",
"opencv",
"video",
"overlay"
]
|
Find the number of terms required in the Leibniz formula to approximate pi to n significant figures? | 39,715,509 | <p>I have to find the number of terms required to approximate pi to n significant figures using the sum of the Leibniz series. I have already found the sum and the approximation of pi, but I don't know how to start writing the function that compares sigfigs in the two variables, or even how to determine the number of sigfigs in a given number. Any help would be appreciated, thanks.</p>
<pre><code>sum = 0
for i in range(800001):
int = ((-1)**i) / (2*i+1)
sum += int
print(sum)
pi = sum*4
</code></pre>
<p>print(pi)</p>
| 0 | 2016-09-27T03:27:39Z | 39,715,562 | <p>Because of the context of this question, I suspect you are not actually interested in knowing how to check for the least significant digits, but rather to know when your approximation is '<em>good enough</em>'.</p>
<p>When approximating any value by computing the sum of a sequence, the easiest way to terminate your calculation is to improve the answer until it is close enough so that its square (or absolute value) differs from the last term by less than a predetermined tolerance.</p>
<p>A good way to do this is to check, rather than using a <code>for</code> loop, to use a <code>while</code> loop that checks if the answer differs from the previous answer by this aforementioned tolerance. </p>
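<p>A minimal sketch of that idea for the Leibniz sum (the tolerance value here is an arbitrary example; for n significant figures you would derive it from n). Since successive partial sums differ by exactly the newest term, checking the size of the next term is equivalent to comparing consecutive answers:</p>
<pre><code>tolerance = 1e-6   # example value, not tied to a particular number of significant figures
total = 0.0
i = 0
while True:
    term = (-1.0) ** i / (2 * i + 1)
    if abs(term) < tolerance:   # the next term would change the sum by less than the tolerance
        break
    total += term
    i += 1

print(i, total * 4)   # number of terms used and the resulting approximation of pi
</code></pre>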
| 1 | 2016-09-27T03:35:35Z | [
"python",
"loops",
"pi"
]
|
Find the number of terms required in the Leibniz formula to approximate pi to n significant figures? | 39,715,509 | <p>I have to find the number of terms required to approximate pi to n significant figures using the sum of the Leibniz series. I have already found the sum and the approximation of pi, but I don't know how to start writing the function that compares sigfigs in the two variables, or even how to determine the number of sigfigs in a given number. Any help would be appreciated, thanks.</p>
<pre><code>sum = 0
for i in range(800001):
int = ((-1)**i) / (2*i+1)
sum += int
print(sum)
pi = sum*4
</code></pre>
<p>print(pi)</p>
| 0 | 2016-09-27T03:27:39Z | 39,715,563 | <p>The factor of (-1)**i means that the terms in the series have alternating signs. Also, the magnitudes of the terms are monotonically decreasing. One property of such series is that the error you incur by truncating the series is smaller than the smallest term that was included.</p>
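<p>A short sketch of how this error bound answers the "how many terms" question. One assumption is made here: "n significant figures" of pi (which is about 3.14) is read as an absolute error below 0.5 * 10**(1 - n). The factor of 4 comes from the partial sum being multiplied by 4 to obtain pi:</p>
<pre><code>def terms_needed(n):
    # assumed interpretation: n significant figures of pi (~3.14) means
    # an absolute error smaller than 0.5 * 10**(1 - n)
    tolerance = 0.5 * 10 ** (1 - n)
    i = 0
    while True:
        # error in pi after i terms is at most 4 times the first omitted term
        if 4.0 / (2 * i + 1) < tolerance:
            return i
        i += 1

print(terms_needed(3))   # roughly how many terms for 3 significant figures
</code></pre>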
| 1 | 2016-09-27T03:35:36Z | [
"python",
"loops",
"pi"
]
|
Can anyone assist with this task? | 39,715,532 | <p>Write a while loop that exits when the sum of the squares 1^2 + 2^2 + 3^2 + ... exceeds an input m. Print the largest sum less than m and the number of terms in the sum.</p>
<pre><code>Example: If m = 18 then
1^2 +2^2 + 3^2 = 1 + 4 + 9 = 14
1^2 +2^2 + 3^2 + 4^2 = 1 + 4 + 9 + 16 = 30
</code></pre>
<p>Therefore you should print out 3 and 14 for m = 18.</p>
<p>This is what I have so far, and I'm basically completely lost at this point:</p>
<pre><code>def sum_printer():
y = input("Please enter a maximum number: ")
y = int(y)
for result in range(y):
while result + result ** 2 >= y:
break
else:
print(str(result) + "^2 =", result ** 2, end=" ")
def sum_of_squares_result(m, n):
return sum(result ** 2 for result in range(m, n))
sum_printer()
</code></pre>
<p>Can't figure out where I should go from here. Computing the sums of squares isn't a problem, neither is breaking the for loop when the sum of squares exceeds the user's input. I just can't figure out how to print 3 and 14 based off of the input.</p>
| -1 | 2016-09-27T03:31:31Z | 39,715,628 | <p>What's wrong in your code is that you check <code>result + result**2</code> where <code>result</code> is each number in <code>range(y)</code>. You are basically checking if <code>1 + 1**2 >= m</code>, <code>2 + 2**2 >= m</code>, and so on. Here's how I would do it:</p>
<pre><code>def sum_printer():
ceil = int(input("What is the maximum to not be exceeded? "))
total = 0
curr = 1
while total + curr**2 < ceil:
total += curr**2
curr += 1
curr -= 1
print(curr, total)
</code></pre>
| 1 | 2016-09-27T03:45:48Z | [
"python"
]
|
While loop is not breaking when condition is met, What am I doing wrong? | 39,715,584 | <p>The program I am writing is a Python 3.5 rocks paper scissors game that plays rocks paper scissors until the cpu or player gets to a score of 5. The best way I could think of to do this was in a while loop, but by some mistake of mine the games continue infinitely and the loop does not break when the score becomes 5. I'll paste the code here(I hope my comments are helpful.) Thanks in advance for the help, I know this is very simple and silly!</p>
<pre><code> #import the "random" library
import random
#Welcome the user and explain the program
print("Welcome to the Rock Paper Scissor game! You will play until you or the computer gets 5 wins!")
#Set constants for rock paper and scissors for the computer's choice
ROCK = 1
PAPER = 2
SCISSOR = 3
#Define the scores for the user and computer and start them as 0
user_score = 0
cpu_score = 0
#Start the loop that runs until the user or computer reaches the score of 5
while user_score != 5 or cpu_score != 5:
#Gets the choice of the user and stores it in a variable
choice = input("Please enter your choice- 'r'ock, 'p'aper, or 's'cissors? \n")
#Prevents the loop from progressing until the user picks a valid command
while choice != "r" and choice != "p" and choice != "s":
choice = input("Invalid command: Please enter your choice- 'r'ock, 'p'aper, or 's'cissors? \n")
#get's a random pick between 1 and 3 for the cpu's choice
cpu_pick = random.randint(ROCK,SCISSOR)
#Prints the pick of the user prior to determining the winner so this does not have to be included in every if statement
if choice == "r":
print("You pick rock!")
elif choice == "s":
print("You pick scissors!")
elif choice == "p":
print("You pick paper!")
#Prints the pick of the cpu prior to determining the winner so this does not have to be included in every if statement
if cpu_pick == ROCK:
print("The cpu picks a rock!\n")
elif cpu_pick == SCISSOR:
print("The cpu picks scissors!\n")
elif cpu_pick == PAPER:
print("The cpu picks paper!\n")
#Accounts for all cases when the cpu pick is rock and adds to the scores when somebody wins
if cpu_pick == ROCK and choice == "r":
print("Tie! New round!")
elif cpu_pick == ROCK and choice == "s":
print("CPU wins this round!")
cpu_score += 1
elif cpu_pick == ROCK and choice == "p":
print("You win this round!")
user_score += 1
#Accounts for all cases when the cpu pick is Scissors and adds to the scores when somebody wins
if cpu_pick == SCISSOR and choice == "s":
print("Tie! New round!")
elif cpu_pick == SCISSOR and choice == "p":
print("CPU wins this round!")
cpu_score += 1
elif cpu_pick == SCISSOR and choice == "r":
print("You win this round!")
user_score += 1
# Accounts for all cases when the cpu pick is Paper and adds to the scores when somebody wins
if cpu_pick == PAPER and choice == "p":
print("Tie! New round!")
elif cpu_pick == PAPER and choice == "r":
print("CPU wins this round!")
cpu_score += 1
elif cpu_pick == PAPER and choice == "s":
print("You win this round!")
user_score += 1
#Prints the score after each round
print("Score: \nComputer: %d\nYou: %d" % (cpu_score, user_score))
#when the loop is broken check who won then print the final score and winner
if user_score == 5:
print("You win by a score of %d to %d!" % (user_score, cpu_score))
elif user_score == 5:
print("The CPU won bu a score of %d to %d!" % (cpu_score, user_score))
</code></pre>
| -3 | 2016-09-27T03:39:40Z | 39,715,659 | <p>Use <code>and</code> instead of <code>or</code> in the <code>while</code> condition:</p>
<pre><code>while user_score != 5 and cpu_score != 5:
</code></pre>
| 1 | 2016-09-27T03:49:41Z | [
"python",
"while-loop"
]
|
How to create a line chart using Matplotlib | 39,715,601 | <p>I am trying to create a line chart for a sample data shown in screenshot.
I googled quite a bit and looked at some links below and tried to use <code>matplotlib</code>, but I could not get the desired output as shown in the linegraph (screenshot) below, can anyone provide me a sample reference to get started? How to get the line chart with the sample input shown below?</p>
<p><a href="http://www.josechristian.com/programming/smooth-line-plots-python/" rel="nofollow">http://www.josechristian.com/programming/smooth-line-plots-python/</a>
<a href="http://yaboolog.blogspot.com/2011/07/python-tips-create-line-graph-with.html" rel="nofollow">http://yaboolog.blogspot.com/2011/07/python-tips-create-line-graph-with.html</a></p>
<p><a href="http://i.stack.imgur.com/ddQcE.png" rel="nofollow"><img src="http://i.stack.imgur.com/ddQcE.png" alt="linechart"></a></p>
<p><strong>Code:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# just create some random data
fnx = lambda : np.random.randint(3, 10, 10)
y = np.row_stack((fnx(), fnx(), fnx()))
# this call to 'cumsum' (cumulative sum), passing in your y data,
# is necessary to avoid having to manually order the datasets
x = np.arange(10)
y_stack = np.cumsum(y, axis=0) # a 3x10 array
fig = plt.figure()
plt.savefig('smooth_plot.png')
</code></pre>
| 1 | 2016-09-27T03:42:35Z | 39,717,210 | <p>Using your data provided in screenshot:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
builds = np.array([1, 2, 3, 4])
y_stack = np.row_stack(([1, 2, 3, 4], [5, 2, 9, 1], [20, 10, 15, 1], [5, 10, 15, 20]))
fig = plt.figure(figsize=(11,8))
ax1 = fig.add_subplot(111)
ax1.plot(builds, y_stack[0,:], label='Component 1', color='c', marker='o')
ax1.plot(builds, y_stack[1,:], label='Component 2', color='g', marker='o')
ax1.plot(builds, y_stack[2,:], label='Component 3', color='r', marker='o')
ax1.plot(builds, y_stack[3,:], label='Component 4', color='b', marker='o')
plt.xticks(builds)
plt.xlabel('Builds')
handles, labels = ax1.get_legend_handles_labels()
lgd = ax1.legend(handles, labels, loc='upper center', bbox_to_anchor=(1.15,1))
ax1.grid('on')
plt.savefig('smooth_plot.png')
</code></pre>
<p><strong>Output:</strong>
<a href="http://i.stack.imgur.com/vFzEv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/vFzEv.jpg" alt="enter image description here"></a></p>
<hr>
<p>If you want to plot just lines (based on random data that was in your code):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fnx = lambda : np.random.randint(3, 10, 10)
y = np.row_stack((fnx(), fnx(), fnx(), fnx(), fnx()))
x = np.arange(10)
y_stack = np.cumsum(y, axis=0)
fig = plt.figure(figsize=(11,8))
ax1 = fig.add_subplot(111)
ax1.plot(x, y_stack[0,:], label=1)
ax1.plot(x, y_stack[1,:], label=2)
ax1.plot(x, y_stack[2,:], label=3)
ax1.plot(x, y_stack[3,:], label=4)
ax1.plot(x, y_stack[4,:], label=5)
ax1.legend(loc=2)
colormap = plt.cm.gist_ncar
colors = [colormap(i) for i in np.linspace(0, 1,len(ax1.lines))]
for i,j in enumerate(ax1.lines):
j.set_color(colors[i])
plt.savefig('smooth_plot.png')
</code></pre>
<p><strong>Output:</strong>
<a href="http://i.stack.imgur.com/e7ttm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/e7ttm.jpg" alt="enter image description here"></a></p>
<hr>
<p>But if you want stacked line graphs with color filling use this (based on random data that was in your code):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fnx = lambda : np.random.randint(3, 10, 10)
y = np.row_stack((fnx(), fnx(), fnx(), fnx(), fnx()))
x = np.arange(10)
y_stack = np.cumsum(y, axis=0)
fig = plt.figure(figsize=(11,8))
ax1 = fig.add_subplot(111)
ax1.fill_between(x, 0, y_stack[0,:], facecolor="#CC6666", alpha=0.7)
ax1.fill_between(x, y_stack[0,:], y_stack[1,:], facecolor="#1DACD6", alpha=0.7)
ax1.fill_between(x, y_stack[1,:], y_stack[2,:], facecolor="#6E5160", alpha=0.7)
ax1.fill_between(x, y_stack[2,:], y_stack[3,:], facecolor="#CC6666", alpha=0.7)
ax1.fill_between(x, y_stack[3,:], y_stack[4,:], facecolor="#1DACD6", alpha=0.7)
plt.savefig('smooth_plot.png')
</code></pre>
<p><strong>Output:</strong>
<a href="http://i.stack.imgur.com/c4Ruh.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/c4Ruh.jpg" alt="enter image description here"></a></p>
<p><strong>UPDATE:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
builds = np.array([1, 2, 3, 4])
y_stack = np.row_stack(([1, 5, 20, 5], [2, 2, 10, 10], [3, 9, 15, 15], [4, 1, 11, 20]))
fig = plt.figure(figsize=(11,8))
ax1 = fig.add_subplot(111)
ax1.plot(builds, y_stack[0,:], label='Component 1', color='c', marker='o')
ax1.plot(builds, y_stack[1,:], label='Component 2', color='g', marker='o')
ax1.plot(builds, y_stack[2,:], label='Component 3', color='r', marker='o')
ax1.plot(builds, y_stack[3,:], label='Component 4', color='b', marker='o')
plt.xticks(builds)
plt.xlabel('Builds')
handles, labels = ax1.get_legend_handles_labels()
lgd = ax1.legend(handles, labels, loc='upper center', bbox_to_anchor=(1.15,1))
ax1.grid('on')
plt.savefig('smooth_plot.png')
</code></pre>
| 3 | 2016-09-27T06:11:22Z | [
"python",
"matplotlib",
"linechart"
]
|
Use a previous input while asking for input to define a new variable | 39,715,606 | <pre><code>userName = input("What is your name?")
firstInteger = (input ("Hi," , userName , "what is the first integer?"))
</code></pre>
<p>This just returns
TypeError: input expected at most 1 arguments, got 3</p>
<p>I tried changing the userName to str(input(...)) and the userName in the firstInteger input to str(UserName), but neither worked nor changed the error message.</p>
| -1 | 2016-09-27T03:43:07Z | 39,717,144 | <p>Here's the updated code based on the comments.</p>
<p>Also I'm not sure what version of python you're using, but older versions may require <code>raw_input</code> instead of <code>input</code>. If you're using Python 3.x, just <code>input</code> should work. Here's the updated code. Just replace <code>raw_input</code> with <code>input</code> if you are using Python 3.</p>
<pre><code>userName = raw_input("What is your name?")
firstInteger = (raw_input ("Hi," + userName + " what is the first integer?"))
</code></pre>
| 0 | 2016-09-27T06:06:40Z | [
"python",
"variables",
"input"
]
|
classification with LSTM RNN in tensorflow, ValueError: Shape (1, 10, 5) must have rank 2 | 39,715,612 | <p>I am trying to design a simple lstm in tensorflow. I want to classify a sequence of data into classes from 1 to 10.</p>
<p>I have <em>10 timestamps</em> and data X. I am only taking one sequence for now, so my batch size = 1.
At every epoch, a new sequence is generated. For example X is a numpy array like this-</p>
<pre><code>X [[ 2.52413028 2.49449348 2.46520466 2.43625973 2.40765466 2.37938545
2.35144815 2.32383888 2.29655379 2.26958905]]
</code></pre>
<p>To make it suitable for LSTM input, I first converted it into a tensor and then reshaped it to (batch_size, sequence_length, input dimension) -</p>
<pre><code>X= np.array([amplitude * np.exp(-t / tau)])
print 'X', X
#Sorting out the input
train_input = X
train_input = tf.convert_to_tensor(train_input)
train_input = tf.reshape(train_input,[1,10,1])
print 'ti', train_input
</code></pre>
<p>For output I am generating a one hot encoded label within a class range of 1 to 10. </p>
<pre><code>#------------sorting out the output
train_output= [int(math.ceil(tau/resolution))]
train_output= one_hot(train_output, num_labels=10)
print 'label', train_output
train_output = tf.convert_to_tensor(train_output)
>>label [[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>Then I created the placeholders for tensorflow graph, made the lstm cell and gave weights and bias-</p>
<pre><code>data = tf.placeholder(tf.float32, shape= [batch_size,len(t),1])
target = tf.placeholder(tf.float32, shape = [batch_size, num_classes])
cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
output, state = rnn.dynamic_rnn(cell, data, dtype=tf.float32)
weight = tf.Variable(tf.random_normal([batch_size, num_classes, 1])),
bias = tf.Variable(tf.random_normal([num_classes]))
#training
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(prediction))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
</code></pre>
<p>I have written the code this far and got error at the training step. Is it to do with the input shapes? Here is the traceback---</p>
<p>Traceback (most recent call last):</p>
<pre><code> File "/home/raisa/PycharmProjects/RNN_test1/test3.py", line 66, in <module>
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1036, in matmul
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 911, in _mat_mul
transpose_b=transpose_b, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2156, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1612, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/common_shapes.py", line 81, in matmul_shape
a_shape = op.inputs[0].get_shape().with_rank(2)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 625, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (1, 10, 5) must have rank 2
</code></pre>
| 0 | 2016-09-27T03:44:06Z | 39,715,887 | <p>Looking at your code, your rnn output should have a dimension of <code>batch_size x 1 x num_hidden</code> while your w has dimension <code>batch_size x num_classes x 1</code>; however, you want the multiplication of those two to be <code>batch_size x num_classes</code>.</p>
<p>Can you try <code>output = tf.reshape(output, [batch_size, num_hidden])</code> and <code>weight = tf.Variable(tf.random_normal([num_hidden, num_classes]))</code> and let me know how that goes?</p>
| 0 | 2016-09-27T04:18:06Z | [
"python",
"tensorflow",
"deep-learning",
"recurrent-neural-network",
"lstm"
]
|
Selenium starts but does not load a page | 39,715,653 | <p>I am running python 2.7.12 with selenium version 2.53.6 and firefox 49.0. I have looked here for <a href="http://stackoverflow.com/questions/18768658/selenium-webdriver-firefox-starts-but-does-not-open-the-url">Selenium WebDriver: Firefox starts, but does not open the URL</a> but the solutions mentioned have not solved my problem. </p>
<p>Are there compatibility issues of which I am unaware? Any help would be greatly appreciated.</p>
| 1 | 2016-09-27T03:49:03Z | 39,716,270 | <p>You must use Firefox version <= 46.0.</p>
| 1 | 2016-09-27T04:58:39Z | [
"python",
"selenium"
]
|
Selenium starts but does not load a page | 39,715,653 | <p>I am running python 2.7.12 with selenium version 2.53.6 and firefox 49.0. I have looked here for <a href="http://stackoverflow.com/questions/18768658/selenium-webdriver-firefox-starts-but-does-not-open-the-url">Selenium WebDriver: Firefox starts, but does not open the URL</a> but the solutions mentioned have not solved my problem. </p>
<p>Are there compatibility issues of which I am unaware? Any help would be greatly appreciated.</p>
| 1 | 2016-09-27T03:49:03Z | 39,717,037 | <p>Download Firefox from this link <a href="https://ftp.mozilla.org/pub/firefox/releases/46.0.1/win64-EME-free/en-GB/Firefox%20Setup%2046.0.1.exe" rel="nofollow">https://ftp.mozilla.org/pub/firefox/releases/46.0.1/win64-EME-free/en-GB/Firefox%20Setup%2046.0.1.exe</a> and then try again </p>
| 1 | 2016-09-27T05:59:42Z | [
"python",
"selenium"
]
|
Selenium starts but does not load a page | 39,715,653 | <p>I am running python 2.7.12 with selenium version 2.53.6 and firefox 49.0. I have looked here for <a href="http://stackoverflow.com/questions/18768658/selenium-webdriver-firefox-starts-but-does-not-open-the-url">Selenium WebDriver: Firefox starts, but does not open the URL</a> but the solutions mentioned have not solved my problem. </p>
<p>Are there compatibility issues of which I am unaware? Any help would be greatly appreciated.</p>
| 1 | 2016-09-27T03:49:03Z | 39,719,909 | <p>For higher version of Firefox either use Selenium 3.0.x or use <code>geckodriver</code>.</p>
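<p>For example, with Selenium 3 installed, a minimal sketch looks like this (the geckodriver path is only an illustration; <code>executable_path</code> can be omitted if geckodriver is on your PATH):</p>
<pre><code>from selenium import webdriver

# assumes geckodriver has already been downloaded; the path below is just an example
driver = webdriver.Firefox(executable_path='/usr/local/bin/geckodriver')
driver.get('http://www.example.com')
driver.quit()
</code></pre>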
| 1 | 2016-09-27T08:36:02Z | [
"python",
"selenium"
]
|
Selenium starts but does not load a page | 39,715,653 | <p>I am running python 2.7.12 with selenium version 2.53.6 and firefox 49.0. I have looked here for <a href="http://stackoverflow.com/questions/18768658/selenium-webdriver-firefox-starts-but-does-not-open-the-url">Selenium WebDriver: Firefox starts, but does not open the URL</a> but the solutions mentioned have not solved my problem. </p>
<p>Are there compatibility issues of which I am unaware? Any help would be greatly appreciated.</p>
| 1 | 2016-09-27T03:49:03Z | 39,720,178 | <p>I am also facing the same problem. I downgraded Firefox to 47.0.1 and it works.</p>
| 1 | 2016-09-27T08:49:13Z | [
"python",
"selenium"
]
|
Selenium starts but does not load a page | 39,715,653 | <p>I am running python 2.7.12 with selenium version 2.53.6 and firefox 49.0. I have looked here for <a href="http://stackoverflow.com/questions/18768658/selenium-webdriver-firefox-starts-but-does-not-open-the-url">Selenium WebDriver: Firefox starts, but does not open the URL</a> but the solutions mentioned have not solved my problem. </p>
<p>Are there compatibility issues of which I am unaware? Any help would be greatly appreciated.</p>
| 1 | 2016-09-27T03:49:03Z | 39,721,992 | <p>As you are using Selenium version 2.53.6, it is not compatible with Firefox version 49.0.</p>
<p>You should downgrade your Firefox version to <= 46.</p>
<p>Download an older version of Firefox from the address below:</p>
<p><a href="https://support.mozilla.org/en-US/kb/install-older-version-of-firefox" rel="nofollow">https://support.mozilla.org/en-US/kb/install-older-version-of-firefox</a>.</p>
| 1 | 2016-09-27T10:13:34Z | [
"python",
"selenium"
]
|
Formatting a static AES Key for decryption in Python | 39,715,658 | <p>I'm working to decrypt several files in python using a given AES key via the PyCrypto AES implementation. I've currently set it to a static list of hex bytes (as this was how it was provided to me). However, when I try to decrypt the files, I get a warning stating the key size must be 16, 24, or 32 bytes. My code for converting the list to a string is as follows:</p>
<pre><code>''.join(str(x) for x in key)
</code></pre>
<p>I've verified that the key in list form has 16 bytes, but something I'm doing when converting it must be changing the size. What operations would be best for changing a key from something like </p>
<pre><code>[0x2a, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c]
</code></pre>
<p>to a usable string for decryption? </p>
| 0 | 2016-09-27T03:49:28Z | 39,715,755 | <p>You didn't mention what AES implementation you're using, but the right answer is likely to look like</p>
<pre><code>k = bytes([0x01, 0x23, 0x34, 0x56])
</code></pre>
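<p>For the specific list in the question, one way to get a usable 16-byte key on both Python 2 and Python 3 is shown below; this is a sketch assuming the result is then passed to PyCrypto's <code>AES.new</code>:</p>
<pre><code>key_list = [0x2a, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6,
            0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c]
key = bytes(bytearray(key_list))   # 16 raw bytes, not their decimal string forms
print(len(key))                    # 16
</code></pre>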
| 0 | 2016-09-27T04:01:42Z | [
"python",
"encryption",
"aes"
]
|
Search Text File and Email Results | 39,715,667 | <p>Working with a text file to search for a string and then email the next few lines.</p>
<p>I have the search working and it prints the correct lines
The email sends successfully, however it only contains the last line of the output </p>
<p>Any thoughts? </p>
<p>File.txt </p>
<pre><code>first
second
thrid
------------------------------------------------------------------------------
after first
after second
after thrid
after forth
after fifth
after sixth
</code></pre>
<p>my code</p>
<pre><code>import smtplib
from email.mime.text import MIMEText
from_address = "fromaddress@domain.com"
to_address = "toaddress@domain.com"
with open("file.txt", "r") as f:
searchlines = f.readlines()
for i, line in enumerate(searchlines):
if "------------------------------------------------------------------------------" in line:
for l in searchlines[i:i+6]: print l,
output = l
msg = MIMEText(output)
msg['Subject'] = "Test email"
msg['From'] = from_address
msg['To'] = to_address
body = output
# Send the message via local SMTP server.
s = smtplib.SMTP('smtp.domain.com', '8025')
s.sendmail(from_address, to_address, msg.as_string())
s.quit()
</code></pre>
<p>Output via print: </p>
<pre><code>------------------------------------------------------------------------------
after first
after second
after thrid
after forth
after fifth
</code></pre>
<p>The email only contains in the body</p>
<pre><code>after fifth
</code></pre>
| 0 | 2016-09-27T03:50:27Z | 39,715,693 | <p>Your for loop prints all the lines, but then the loop ends, and <code>l</code> is left set to the last item in the loop: "after fifth".</p>
<p>Then you email only that last <code>l</code>.</p>
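<p>A minimal fix along those lines is to collect the whole slice into <code>output</code> instead of keeping only the loop variable, for example:</p>
<pre><code>output = ''.join(searchlines[i:i+6])   # all six lines, not just the last one
</code></pre>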
| -1 | 2016-09-27T03:53:43Z | [
"python"
]
|
Search Text File and Email Results | 39,715,667 | <p>Working with a text file to search for a string and then email the next few lines.</p>
<p>I have the search working and it prints the correct lines
The email sends successfully, however it only contains the last line of the output </p>
<p>Any thoughts? </p>
<p>File.txt </p>
<pre><code>first
second
thrid
------------------------------------------------------------------------------
after first
after second
after thrid
after forth
after fifth
after sixth
</code></pre>
<p>my code</p>
<pre><code>import smtplib
from email.mime.text import MIMEText
from_address = "fromaddress@domain.com"
to_address = "toaddress@domain.com"
with open("file.txt", "r") as f:
searchlines = f.readlines()
for i, line in enumerate(searchlines):
if "------------------------------------------------------------------------------" in line:
for l in searchlines[i:i+6]: print l,
output = l
msg = MIMEText(output)
msg['Subject'] = "Test email"
msg['From'] = from_address
msg['To'] = to_address
body = output
# Send the message via local SMTP server.
s = smtplib.SMTP('smtp.domain.com', '8025')
s.sendmail(from_address, to_address, msg.as_string())
s.quit()
</code></pre>
<p>Output via print: </p>
<pre><code>------------------------------------------------------------------------------
after first
after second
after thrid
after forth
after fifth
</code></pre>
<p>The email only contains in the body</p>
<pre><code>after fifth
</code></pre>
| 0 | 2016-09-27T03:50:27Z | 39,715,999 | <p>Your indentation is wonky. The loop should collect the lines, then you should send an email when you have collected them all.</p>
<pre><code>with open("file.txt", "r") as f:
searchlines = f.readlines()
# <-- unindent; we don't need this in the "with"
for i, line in enumerate(searchlines):
# Aesthetics: use "-" * 78
if "-" * 78 in line:
for l in searchlines[i:i+6]:
print l,
# collect all the lines in output, not just the last one
output = ''.join(searchlines[i:i+6])
# if you expect more than one result, this is wrong
break
# <-- unindent: we are done when the for loop is done
# Add an explicit encoding -- guessing here which one to use
msg = MIMEText(output, 'plain', 'utf-8')
msg['Subject'] = "Test email"
msg['From'] = from_address
msg['To'] = to_address
# no point in assigning body and not using it for anything
# Send the message via local SMTP server.
s = smtplib.SMTP('smtp.domain.com', '8025')
s.sendmail(from_address, to_address, msg.as_string())
s.quit()
</code></pre>
<p>The real bug was that <code>output = l</code> would only collect the last value of <code>l</code> after the innermost <code>for l</code> loop, but restructuring the program to not do things in a loop when they should only happen once (and vice versa!) hopefully makes it clearer as well.</p>
<p>If there can be more than one result, just removing the <code>break</code> will not suffice -- this would reinstate your original bug in a different form, where only the last match of six lines will be sent. If you need to support multiple results, your code needs to combine them into one message somehow.</p>
| 0 | 2016-09-27T04:31:24Z | [
"python"
]
|
Python SQL query using variables | 39,715,735 | <pre><code>#Delete suspense window
class dWindow(QtGui.QMainWindow, Ui_dWindow):
def __init__(self, parent = None):
QtGui.QMainWindow.__init__(self, parent)
self.setupUi(self)
for row in cursor.execute("SELECT FIRSTNAME FROM Staff"):
self.comboUser.addItems(row)
con.close()
self.btnDeleteSuspense.clicked.connect(self.btnDeleteSuspense_Clicked)
def btnDeleteSuspense_Clicked(self):
user = self.comboUser.currentText() #finds selected user
date = self.dateEdit.date().toString("M/d/yyyy")
numrecord = cursor.execute() ??
</code></pre>
<p>Here is a sample DB and program file to further help me explain</p>
<p><img src="http://i.stack.imgur.com/sYTN5.png" alt="programsample"></p>
<p><img src="http://i.stack.imgur.com/BL5XF.png" alt="dbsample"></p>
<p>I have created variables to hold the selection of the combobox and the dateEdit box.</p>
<p>The next step (the one I'm struggling with) is to then use those variables in an SQL query that will first find the count of rows with the selected user name and having a date <= than the selected date. That will populate the numrecord variable so that I can display a "This will delete 'x' rows, are you sure?"</p>
<p>If the user selects yes then I will then use the variable in a delete query to delete the selected rows.</p>
<p>I believe if I can figure out how to use the variables I have in a SQL query then the DELETE query should be rather simple.</p>
<p>An example of a possible DELETE query to show what I'm trying to do</p>
<pre><code>cursor.execute("DELETE TO, count(*) FROM Suspense where TO = [user] and DATE = [date]")
</code></pre>
<p>I know that is wrong but maybe it will help clarify.</p>
<p>I hope I have explained my question fully and I appreciate any help provided.</p>
<p>Edit: Thanks so much!!</p>
<p>Just before I saw that you had posted this I figured it out.</p>
<p>What I came up with was the following:</p>
<pre><code>qdate = self.dateTimeEdit.dateTime().toPyDateTime() #grabs the raw datetime from the QDateTimeEdit object and converts to python datetime
query = "SELECT DATE FROM Suspense WHERE DATE >= ?" #creates the query using ? as a placeholder for variable
cursor.execute(query, (qdate,)) #executes the query and passes qdate as a tuple to the placeholder
</code></pre>
<p>With this knowledge I can recreate my queries to include both variables.</p>
| -1 | 2016-09-27T03:59:50Z | 39,716,161 | <p>You use the <a href="http://www.w3schools.com/sql/sql_delete.asp" rel="nofollow"><code>DELETE</code></a> sql command.</p>
<p>This assumes your <code>DATE</code> field is actually a date field and not a string field.</p>
<pre><code>user = self.comboUser.currentText()
date = self.dateEdit.date().toString("yyyy-MM-dd")
cmd = "DELETE FROM Suspense WHERE TO = '{}' AND DATE >= '{}'".format(user, date)
cursor.execute(cmd)
</code></pre>
<p>Also, you may want to look into using an ORM framework (<a href="http://www.sqlalchemy.org/" rel="nofollow"><code>sqlalchemy</code></a> is probably the most popular, but there are others). It's best to avoid manually constructing sql queries if possible.</p>
| 0 | 2016-09-27T04:47:50Z | [
"python",
"sql",
"pyqt",
"pyodbc",
"pypyodbc"
]
|
Python SQL query using variables | 39,715,735 | <pre><code>#Delete suspense window
class dWindow(QtGui.QMainWindow, Ui_dWindow):
def __init__(self, parent = None):
QtGui.QMainWindow.__init__(self, parent)
self.setupUi(self)
for row in cursor.execute("SELECT FIRSTNAME FROM Staff"):
self.comboUser.addItems(row)
con.close()
self.btnDeleteSuspense.clicked.connect(self.btnDeleteSuspense_Clicked)
def btnDeleteSuspense_Clicked(self):
user = self.comboUser.currentText() #finds selected user
date = self.dateEdit.date().toString("M/d/yyyy")
numrecord = cursor.execute() ??
</code></pre>
<p>Here is a sample DB and program file to further help me explain</p>
<p><img src="http://i.stack.imgur.com/sYTN5.png" alt="programsample"></p>
<p><img src="http://i.stack.imgur.com/BL5XF.png" alt="dbsample"></p>
<p>I have created variables to hold the selection of the combobox and the dateEdit box.</p>
<p>The next step (the one I'm struggling with) is to then use those variables in an SQL query that will first find the count of rows with the selected user name and having a date <= than the selected date. That will populate the numrecord variable so that I can display a "This will delete 'x' rows, are you sure?"</p>
<p>If the user selects yes then I will then use the variable in a delete query to delete the selected rows.</p>
<p>I believe if I can figure out how to use the variables I have in a SQL query then the DELETE query should be rather simple.</p>
<p>An example of a possible DELETE query to show what I'm trying to do</p>
<pre><code>cursor.execute("DELETE TO, count(*) FROM Suspense where TO = [user] and DATE = [date]")
</code></pre>
<p>I know that is wrong but maybe it will help clarify.</p>
<p>I hope I have explained my question fully and I appreciate any help provided.</p>
<p>Edit: Thanks so much!!</p>
<p>Just before I saw that you had posted this I figured it out.</p>
<p>What I came up with was the following:</p>
<pre><code>qdate = self.dateTimeEdit.dateTime().toPyDateTime() #grabs the raw datetime from the QDateTimeEdit object and converts to python datetime
query = "SELECT DATE FROM Suspense WHERE DATE >= ?" #creates the query using ? as a placeholder for variable
cursor.execute(query, (qdate,)) #executes the query and passes qdate as a tuple to the placeholder
</code></pre>
<p>With this knowledge I can recreate my queries to include both variables.</p>
| -1 | 2016-09-27T03:59:50Z | 39,755,807 | <p>As mentioned in a comment to another answer, you should be using a proper <em>parameterized query</em>, for example:</p>
<pre class="lang-python prettyprint-override"><code># assumes that autocommit=False (the default)
crsr = conn.cursor()
sql = "DELETE FROM [Suspense] WHERE [TO]=? AND [DATE]<=?"
user = self.comboUser.currentText() # as before
date = self.dateEdit.date() # Note: no .toString(...) required
params = (user, date)
crsr.execute(sql, params)
msg = "About to delete {} row(s). Proceed?".format(crsr.rowcount)
if my_confirmation_dialog(msg):
conn.commit()
else:
conn.rollback()
</code></pre>
| 1 | 2016-09-28T19:10:34Z | [
"python",
"sql",
"pyqt",
"pyodbc",
"pypyodbc"
]
|
Python SQL query using variables | 39,715,735 | <pre><code>#Delete suspense window
class dWindow(QtGui.QMainWindow, Ui_dWindow):
def __init__(self, parent = None):
QtGui.QMainWindow.__init__(self, parent)
self.setupUi(self)
for row in cursor.execute("SELECT FIRSTNAME FROM Staff"):
self.comboUser.addItems(row)
con.close()
self.btnDeleteSuspense.clicked.connect(self.btnDeleteSuspense_Clicked)
def btnDeleteSuspense_Clicked(self):
user = self.comboUser.currentText() #finds selected user
date = self.dateEdit.date().toString("M/d/yyyy")
numrecord = cursor.execute() ??
</code></pre>
<p>Here is a sample DB and program file to further help me explain</p>
<p><img src="http://i.stack.imgur.com/sYTN5.png" alt="programsample"></p>
<p><img src="http://i.stack.imgur.com/BL5XF.png" alt="dbsample"></p>
<p>I have created variables to hold the selection of the combobox and the dateEdit box.</p>
<p>The next step (the one I'm struggling with) is to then use those variables in an SQL query that will first find the count of rows with the selected user name and having a date <= than the selected date. That will populate the numrecord variable so that I can display a "This will delete 'x' rows, are you sure?"</p>
<p>If the user selects yes then I will then use the variable in a delete query to delete the selected rows.</p>
<p>I believe if I can figure out how to use the variables I have in a SQL query then the DELETE query should be rather simple.</p>
<p>An example of a possible DELETE query to show what I'm trying to do</p>
<pre><code>cursor.execute("DELETE TO, count(*) FROM Suspense where TO = [user] and DATE = [date]")
</code></pre>
<p>I know that is wrong but maybe it will help clarify.</p>
<p>I hope I have explained my question fully and I appreciate any help provided.</p>
<p>Edit: Thanks so much!!</p>
<p>Just before I saw that you had posted this I figured it out.</p>
<p>What I came up with was the following:</p>
<pre><code>qdate = self.dateTimeEdit.dateTime().toPyDateTime() #grabs the raw datetime from the QDateTimeEdit object and converts to python datetime
query = "SELECT DATE FROM Suspense WHERE DATE >= ?" #creates the query using ? as a placeholder for variable
cursor.execute(query, (qdate,)) #executes the query and passes qdate as a tuple to the placeholder
</code></pre>
<p>With this knowledge I can recreate my queries to include both variables.</p>
| -1 | 2016-09-27T03:59:50Z | 39,757,110 | <p>What I came up with was the following:</p>
<pre><code>qdate = self.dateTimeEdit.dateTime().toPyDateTime() #grabs the raw datetime from the QDateTimeEdit object and converts to python datetime
query = "SELECT DATE FROM Suspense WHERE DATE >= ?" #creates the query using ? as a placeholder for variable
cursor.execute(query, (qdate,)) #executes the query and passes qdate as a tuple to the placeholder
</code></pre>
<p>With this knowledge I can now add both variables to the query as needed.</p>
<p>Thanks everyone for their help, especially Gord Thompson!</p>
| 1 | 2016-09-28T20:30:57Z | [
"python",
"sql",
"pyqt",
"pyodbc",
"pypyodbc"
]
|
Python counting in nested forloops | 39,715,841 | <p>I'm brand new to python and have an assignment to "Use two nested for loops. Count up in the outer for loop from 0 to 9 and then at every step count back down to zero."</p>
<p>The answer is supposed to be this:</p>
<pre><code>i= 0
k= 0
i= 1
k= 1
k= 0
i= 2
k= 2
k= 1
k= 0
i= 3
k= 3
k= 2
k= 1
k= 0
i= 4
k= 4
k= 3
k= 2
k= 1
k= 0
i= 5
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 6
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 7
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 8
k= 8
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 9
k= 9
k= 8
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
</code></pre>
<p>So every time the i counts up, k counts down starting from the previous value of i. I believe I understand the general concept of nested forloops, but I'm not sure if my problem lies in identifying the range for k or in printing i and/or k. Here's what I have:</p>
<pre><code>for i in range(0,10):
for k in range(i+1):
print 'i=',i,''
print 'k=',k,''
</code></pre>
<p>But it doesn't give me what I need. It seems like k is going up when I run it, probably because of the (i+1) but it's the closest answer I've gotten so far and I've been having a fair amount of trouble. I'm not looking for the answer itself, but if someone could point me in the right direction that would be very helpful. Thanks!</p>
| -1 | 2016-09-27T04:13:26Z | 39,715,880 | <p>You just need your second for loop to go backwards instead of forwards. Right now it is going from 0 to i. </p>
<p>The syntax for this is:</p>
<pre><code>for k in range(i, -1, -1):
</code></pre>
<p>This starts k at i and counts down to 0, decreasing by 1 on each iteration (the stop value -1 is exclusive, so 0 is the last value produced). </p>
<p>So your complete program would be:</p>
<pre><code>for i in range(0,10):
print 'i=',i,''
for k in range(i, -1, -1):
print 'k=',k,''
</code></pre>
<p>Output:</p>
<pre><code>i= 0
k= 0
i= 1
k= 1
k= 0
i= 2
k= 2
k= 1
k= 0
i= 3
k= 3
k= 2
k= 1
k= 0
i= 4
k= 4
k= 3
k= 2
k= 1
k= 0
i= 5
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 6
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 7
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 8
k= 8
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
i= 9
k= 9
k= 8
k= 7
k= 6
k= 5
k= 4
k= 3
k= 2
k= 1
k= 0
</code></pre>
| 0 | 2016-09-27T04:17:28Z | [
"python",
"for-loop",
"nested-loops"
]
|
Python Selenium - how to access methods from different modules using the same webdriver instance | 39,715,909 | <p>I am getting very confused and was hoping someone here could help me out.</p>
<p>So here is my scenario: I have one module containing the Base class where the webdriver instance is created and this will be inherited by all tests; a separate module containing a class that will be inherited by all tests for a specific page (let's call that page engagements); and a separate module for opening a different webpage (let's call it reports). I created another module with tests that will access the engagements page and the reports page.</p>
<p>When I try to access the methods under the reports module, it always fails. The reason is when I tried to access the methods, it was not using the webdriver instance that was created by the autouse fixture (which is in the Base class).</p>
<p>I got it working by multiple inheritance here:</p>
<p>This is base.py</p>
<pre><code>class Base:
@pytest.fixture(autouse=True)
def _init_browser(self):
self.browser = webdriver.PhantomJS()
</code></pre>
<p>This is base_engagements.py</p>
<pre><code>class BaseEngagements(Base):
@pytest.fixture(autouse=True)
def login_engagements(self):
print("login for engagements goes here")
</code></pre>
<p>This is engagement_reports.py</p>
<pre><code>class CheckReportsPage(Base):
def open_reports_for_engagement(self, voice_name):
print("opening reports")
self.browser.find_element_by_id("report_name")
</code></pre>
<p>And this is the test under test_voice_engagement.py</p>
<pre><code>class TestVoice(BaseEngagements, CheckReportsPage):
def test_open_reports_for_voice(self, voice_name):
self.open_reports_for_engagement(voice_name)
</code></pre>
<p>However, I would like to avoid multiple inheritance in the test module, for the main reason that more tests will have to use methods from other modules which are for specific webpages.</p>
<p>Is there a way to avoid multiple inheritance in this scenario?</p>
<p>Many thanks in advance.</p>
| 0 | 2016-09-27T04:20:30Z | 39,785,582 | <p>If you want to avoid <strong>multiple inheritance</strong> you could use <strong>composition</strong>.
This article explains inheritance vs composition in a nice way, with an example:
<a href="https://learnpythonthehardway.org/book/ex44.html" rel="nofollow">https://learnpythonthehardway.org/book/ex44.html</a></p>
<p><strong>Note</strong> - <em>Answer is generic to python and not specific to Pytest or Selenium</em></p>
<p><strong>Inheritance way</strong> <em>Child is inheriting Parent</em></p>
<pre><code>class Parent(object):
def override(self):
print "PARENT override()"
def implicit(self):
print "PARENT implicit()"
def altered(self):
print "PARENT altered()"
class Child(Parent):
def override(self):
print "CHILD override()"
def altered(self):
print "CHILD, BEFORE PARENT altered()"
super(Child, self).altered()
print "CHILD, AFTER PARENT altered()"
</code></pre>
<p><strong>Composition way</strong> Child is not inheriting Parent; rather, it creates another object of Parent during its <code>init</code>. So Child contains a Parent object and has access to all of Parent's methods via this object.</p>
<pre><code>class Other(object):
def override(self):
print "OTHER override()"
def implicit(self):
print "OTHER implicit()"
def altered(self):
print "OTHER altered()"
class Child(object):
def __init__(self):
self.other = Other()
def implicit(self):
self.other.implicit()
def override(self):
print "CHILD override()"
def altered(self):
print "CHILD, BEFORE OTHER altered()"
self.other.altered()
print "CHILD, AFTER OTHER altered()"
</code></pre>
| 0 | 2016-09-30T07:03:21Z | [
"python",
"selenium",
"inheritance",
"selenium-webdriver",
"py.test"
]
|
Python Selenium - how to access methods from different modules using the same webdriver instance | 39,715,909 | <p>I am getting very confused and was hoping someone here could help me out.</p>
<p>So here is my scenario: I have one module containing the Base class where the webdriver instance is created and this will be inherited by all tests; a separate module containing a class that will be inherited by all tests for a specific page (let's call that page engagements); and a separate module for opening a different webpage (let's call it reports). I created another module with tests that will access the engagements page and the reports page.</p>
<p>When I try to access the methods under the reports module, it always fails. The reason is when I tried to access the methods, it was not using the webdriver instance that was created by the autouse fixture (which is in the Base class).</p>
<p>I got it working by multiple inheritance here:</p>
<p>This is base.py</p>
<pre><code>class Base:
@pytest.fixture(autouse=True)
def _init_browser(self):
self.browser = webdriver.PhantomJS()
</code></pre>
<p>This is base_engagements.py</p>
<pre><code>class BaseEngagements(Base):
@pytest.fixture(autouse=True)
def login_engagements(self):
print("login for engagements goes here")
</code></pre>
<p>This is engagement_reports.py</p>
<pre><code>class CheckReportsPage(Base):
def open_reports_for_engagement(self, voice_name):
print("opening reports")
self.browser.find_element_by_id("report_name")
</code></pre>
<p>And this is the test under test_voice_engagement.py</p>
<pre><code>class TestVoice(BaseEngagements, CheckReportsPage):
def test_open_reports_for_voice(self, voice_name):
self.open_reports_for_engagement(voice_name)
</code></pre>
<p>However, I would like to avoid multiple inheritance in the test module, for the main reason that more tests will have to use methods from other modules which are for specific webpages.</p>
<p>Is there a way to avoid multiple inheritance in this scenario?</p>
<p>Many thanks in advance.</p>
| 0 | 2016-09-27T04:20:30Z | 39,786,311 | <p>I was able to use the methods from CheckReportsPage by adding a constructor to it and passing a browser instance when calling it. It basically looked like this:</p>
<pre><code>class CheckReportsPage(Base):
def __init__(self, browser):
self.browser = browser
</code></pre>
<p>So in TestVoice I did this:</p>
<pre><code>def test_open_reports_for_voice(self, voice_name):
self.reports = CheckReportsPage(self.wd)
self.reports.open_reports_for_engagement(voice_name)
</code></pre>
<p>This works for me, but there might be a better way, haven't found it yet :)</p>
| 0 | 2016-09-30T07:48:35Z | [
"python",
"selenium",
"inheritance",
"selenium-webdriver",
"py.test"
]
|
Trying to create a pandas series within a dataframe with values based on whether or not keys are in another dataframe | 39,715,910 | <p>Boiling it down simply...</p>
<p>Dataframe 1 = yellow_fruits
The columns are fruit_name, and location</p>
<p>Dataframe 2 = red_fruits
The columns are fruit_name, and location</p>
<p>Dataframe 3 = fruit_montage
The columns are fruit_name, pounds_of_fruit_needed, freshness</p>
<p>Let's say I want to add a column to Dataframe 3 called 'color.' The value will be yellow if the fruit is yellow, red if the fruit is red, and unknown if it's not red or yellow.</p>
<p>Basically, pseudocode...</p>
<p>If the fruit is in the yellow fruit dataframe, yellow goes in the column
If the fruit is in the red fruit dataframe, red goes in the column
If the fruit is not in either of those dataframes, 'unknown' goes in the column.</p>
<p>My code produced an error:</p>
<pre><code> if df3['fruit_name'].isin(df1['fruit_name']):
data = "'yellow"
elif df3['fruit_name'].isin(df2['fruit_name']):
data = "red"
else:
data = "unknown"
df3['color'] = pd.Series(data, index = df3.index)
</code></pre>
<p>The error:</p>
<pre><code>C:\Anaconda2\lib\site-packages\pandas\core\generic.pyc in __nonzero__(self)
    890             raise ValueError("The truth value of a {0} is ambiguous. "
    891                              "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
--> 892                              .format(self.__class__.__name__))
    893
    894     __bool__ = __nonzero__
</code></pre>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
| 3 | 2016-09-27T04:20:31Z | 39,716,580 | <p>The classic way would be to use your conditions as indexers:</p>
<pre><code>df1 = pd.DataFrame({'fruit_name':['banana', 'lemon']})
df2 = pd.DataFrame({'fruit_name':['strawberry', 'apple']})
df3 = pd.DataFrame({'fruit_name':['lemon', 'rockmelon', 'apple']})
df3["color"] = "unknown"
df3["color"][df3['fruit_name'].isin(df1['fruit_name'])] = "yellow"
df3["color"][df3['fruit_name'].isin(df2['fruit_name'])] = "red"
df3
# fruit_name color
# 0 lemon yellow
# 1 rockmelon unknown
# 2 apple red
</code></pre>
<p>A more functional way would be to write your logic as a function and map it along your series, however this is likely to be quite a bit slower, since a lot of the speed of pandas/numpy comes from using vectorized operations:</p>
<pre><code>def get_fruit_color(x):
if x in df1['fruit_name'].unique():
data = "yellow"
elif x in df2['fruit_name'].unique():
data = "red"
else:
data = "unknown"
return data
df3["color"] = df3["fruit_name"].map(get_fruit_color)
</code></pre>
<p>An SQL-inspired approach would be to store your mappings in a dataframe, and do a join (called a merge in pandas); this should be a very performant option. Specifying <code>how='left'</code> means that it will be a left join, so that if no match is found for the join condition, the row will still remain, with a null value:</p>
<pre><code>colors = ([(x, 'yellow') for x in df1['fruit_name'].unique()]
+ [(x, 'red') for x in df2['fruit_name'].unique()])
colors_df = pd.DataFrame(colors, columns = ['fruit_name', 'color'])
df3.merge(colors_df, how='left').fillna("unknown")
</code></pre>
<p>Finally my favourite method (although maybe it'a a little "clever") would be to use a dict to map your values (this is a special pandas trick), this will leave <code>NaN</code> if no match is found, so you can fill these with <code>fillna</code>:</p>
<pre><code>df3["color"] = df3["fruit_name"].map(dict(colors)).fillna("unknown")
</code></pre>
| 1 | 2016-09-27T05:24:51Z | [
"python",
"pandas",
"dataframe",
"series"
]
|
How do I run a binary search for words that start with a certain letter? | 39,715,933 | <p>I am asked to binary search a list of names and if these names start with a particular letter, for example A, then I am to print that name.
I can complete this task with much simpler code, such as:</p>
<pre><code>for i in list:
if i[0] == "A":
print(i)
</code></pre>
<p>but instead I am asked to use a binary search and I'm struggling to understand the process behind it. We are given base code which can output the position of a given string. My problem is not knowing what to edit so that I can achieve the desired outcome.</p>
<pre><code>name_list = ["Adolphus of Helborne", "Aldric Foxe", "Amanita Maleficant", "Aphra the Vicious", "Arachne the Gruesome", "Astarte Hellebore", "Brutus the Gruesome", "Cain of Avernus"]
def bin_search(list, item):
low_b = 0
up_b = len(list) - 1
found = False
while low_b <= up_b and found == False:
midPos = ((low_b + up_b) // 2)
if list[midPos] < item:
low_b = midPos + 1
elif list[midPos] > item:
up_b = midPos - 1
else:
found = True
if found:
print("The name is at positon " + str(midPos))
return midPos
else:
print("The name was not in the list.")
</code></pre>
<p>Desired outcome</p>
<pre><code>bin_search(name_list,"A")
</code></pre>
<p>Prints all the names starting with A (Adolphus of HelBorne, Aldric Foxe .... etc)</p>
<p>EDIT:
I was just doing some guess and check and found out how to do it. This is the solution code</p>
<pre><code>def bin_search(list, item):
low_b = 0
up_b = len(list) - 1
true_list = []
count = 100
while low_b <= up_b and count > 0:
midPos = ((low_b + up_b) // 2)
if list[midPos][0] == item:
true_list.append(list[midPos])
list.remove(list[midPos])
count -= 1
elif list[midPos] < item:
low_b = midPos + 1
count -= 1
else:
up_b = midPos - 1
count -= 1
print(true_list)
</code></pre>
| 0 | 2016-09-27T04:23:35Z | 39,716,337 | <p>Not too sure if this is what you want as it seems inefficient... as you mention it seems a lot more intuitive to just iterate over the entire list but using binary search i found <a href="http://code.activestate.com/recipes/81188-binary-search/" rel="nofollow">here</a> i have:</p>
<pre class="lang-python prettyprint-override"><code>def binary_search(seq, t):
min = 0
max = len(seq) - 1
while True:
if max < min:
return -1
m = (min + max) // 2
if seq[m][0] < t:
min = m + 1
elif seq[m][0] > t:
max = m - 1
else:
return m
index=0
while True:
index=binary_search(name_list,"A")
if index!=-1:
print(name_list[index])
else:
break
del name_list[index]
</code></pre>
<p>Output i get:</p>
<pre><code>Aphra the Vicious
Arachne the Gruesome
Amanita Maleficant
Astarte Hellebore
Aldric Foxe
Adolphus of Helborne
</code></pre>
| 0 | 2016-09-27T05:03:44Z | [
"python",
"binary-search"
]
|
How do I run a binary search for words that start with a certain letter? | 39,715,933 | <p>I am asked to binary search a list of names and if these names start with a particular letter, for example A, then I am to print that name.
I can complete this task by doing much more simple code such as</p>
<pre><code>for i in list:
if i[0] == "A":
print(i)
</code></pre>
<p>but instead I am asked to use a binary search and I'm struggling to understand the process behind it. We are given base code which can output the position a given string. My problem is not knowing what to edit so that I can achieve the desired outcome</p>
<pre><code>name_list = ["Adolphus of Helborne", "Aldric Foxe", "Amanita Maleficant", "Aphra the Vicious", "Arachne the Gruesome", "Astarte Hellebore", "Brutus the Gruesome", "Cain of Avernus"]
def bin_search(list, item):
low_b = 0
up_b = len(list) - 1
found = False
while low_b <= up_b and found == False:
midPos = ((low_b + up_b) // 2)
if list[midPos] < item:
low_b = midPos + 1
elif list[midPos] > item:
up_b = midPos - 1
else:
found = True
if found:
print("The name is at positon " + str(midPos))
return midPos
else:
print("The name was not in the list.")
</code></pre>
<p>Desired outcome</p>
<pre><code>bin_search(name_list,"A")
</code></pre>
<p>Prints all the names starting with A (Adolphus of HelBorne, Aldric Foxe .... etc)</p>
<p>EDIT:
I was just doing some guess and check and found out how to do it. This is the solution code</p>
<pre><code>def bin_search(list, item):
low_b = 0
up_b = len(list) - 1
true_list = []
count = 100
while low_b <= up_b and count > 0:
midPos = ((low_b + up_b) // 2)
if list[midPos][0] == item:
true_list.append(list[midPos])
list.remove(list[midPos])
count -= 1
elif list[midPos] < item:
low_b = midPos + 1
count -= 1
else:
up_b = midPos - 1
count -= 1
print(true_list)
</code></pre>
| 0 | 2016-09-27T04:23:35Z | 39,723,697 | <p>You just need to found one item starting with the letter, then you need to identify the range. This approach should be fast and memory efficient.</p>
<pre><code>def binary_search(list,item):
low_b = 0
up_b = len(list) - 1
found = False
midPos = ((low_b + up_b) // 2)
if list[low_b][0]==item:
midPos=low_b
found=True
elif list[up_b][0]==item:
midPos = up_b
found=True
while True:
if found:
break;
if list[low_b][0]>item:
break
if list[up_b][0]<item:
break
if up_b<low_b:
break;
midPos = ((low_b + up_b) // 2)
if list[midPos][0] < item:
low_b = midPos + 1
elif list[midPos] > item:
up_b = midPos - 1
else:
found = True
break
if found:
while True:
if midPos>0:
if list[midPos][0]==item:
midPos=midPos-1
continue
break;
while True:
if midPos<len(list):
if list[midPos][0]==item:
print list[midPos]
midPos=midPos+1
continue
break
else:
print("The name was not in the list.")
</code></pre>
<p>the output is</p>
<pre><code>>>> binary_search(name_list,"A")
Adolphus of Helborne
Aldric Foxe
Amanita Maleficant
Aphra the Vicious
Arachne the Gruesome
Astarte Hellebore
</code></pre>
| 0 | 2016-09-27T11:38:31Z | [
"python",
"binary-search"
]
|
Try to download image from image url, but get html instead | 39,715,939 | <p>similar to <a href="http://stackoverflow.com/questions/29433699/try-to-scrape-image-from-image-url-using-python-urllib-but-get-html-instead">Try to scrape image from image url (using python urllib ) but get html instead</a> , but the solution does not work for me.</p>
<pre><code>from BeautifulSoup import BeautifulSoup
import urllib2
import requests
img_url='http://7-themes.com/data_images/out/79/7041933-beautiful-backgrounds-wallpaper.jpg'
r = requests.get(img_url, allow_redirects=False)
headers = {}
headers['Referer'] = r.headers['location']
r = requests.get(img_url, headers=headers)
with open('7041933-beautiful-backgrounds-wallpaper.jpg', 'wb') as fh:
fh.write(r.content)
</code></pre>
<p>the downloaded file is still a html page, not an image.</p>
| 0 | 2016-09-27T04:24:13Z | 39,716,394 | <p>Your referrer was not being set correctly. I have hard coded the referrer and it works fine</p>
<pre><code>from BeautifulSoup import BeautifulSoup
import urllib2
import requests
img_url='http://7-themes.com/data_images/out/79/7041933-beautiful-backgrounds-wallpaper.jpg'
r = requests.get(img_url, allow_redirects=False)
headers = {}
headers['Referer'] = 'http://7-themes.com/7041933-beautiful-backgrounds-wallpaper.html'
r = requests.get(img_url, headers=headers, allow_redirects=False)
with open('7041933-beautiful-backgrounds-wallpaper.jpg', 'wb') as fh:
fh.write(r.content)
</code></pre>
| 0 | 2016-09-27T05:09:48Z | [
"python",
"httprequest",
"urllib2"
]
|
Try to download image from image url, but get html instead | 39,715,939 | <p>similar to <a href="http://stackoverflow.com/questions/29433699/try-to-scrape-image-from-image-url-using-python-urllib-but-get-html-instead">Try to scrape image from image url (using python urllib ) but get html instead</a> , but the solution does not work for me.</p>
<pre><code>from BeautifulSoup import BeautifulSoup
import urllib2
import requests
img_url='http://7-themes.com/data_images/out/79/7041933-beautiful-backgrounds-wallpaper.jpg'
r = requests.get(img_url, allow_redirects=False)
headers = {}
headers['Referer'] = r.headers['location']
r = requests.get(img_url, headers=headers)
with open('7041933-beautiful-backgrounds-wallpaper.jpg', 'wb') as fh:
fh.write(r.content)
</code></pre>
<p>the downloaded file is still a html page, not an image.</p>
| 0 | 2016-09-27T04:24:13Z | 39,716,410 | <p>I found that the root cause in my code was that the Referer field in the header was still pointing to the HTML page, not the image.</p>
<p>So I change the refer field to the <code>img_url</code>, and this works.</p>
<pre><code>from BeautifulSoup import BeautifulSoup
import urllib2
import urllib
import requests
img_url='http://7-themes.com/data_images/out/79/7041933-beautiful-backgrounds-wallpaper.jpg'
headers = {}
headers['Referer'] = img_url
r = requests.get(img_url, headers=headers)
with open('7041933-beautiful-backgrounds-wallpaper.jpg', 'wb') as fh:
fh.write(r.content)
</code></pre>
| 0 | 2016-09-27T05:10:42Z | [
"python",
"httprequest",
"urllib2"
]
|
groupby and add some rows | 39,715,950 | <p>I have a dataframe below</p>
<pre><code> A B
0 a 1
1 a 2
2 c 3
3 c 4
4 e 5
</code></pre>
<p>I would like to get the summed result below, with column A as the grouping key.</p>
<pre><code>df.B.groupby(df.A).agg(np.sum)
</code></pre>
<p>But I also want to include some specific rows. </p>
<pre><code> B
a 3
b 0
c 7
d 0
e 5
f 0
</code></pre>
<p>but I also need to include rows "b", "d" and "f" (with a value of 0).</p>
<p>How can I get this result ?</p>
| 3 | 2016-09-27T04:25:18Z | 39,716,069 | <p>Use <code>reindex</code></p>
<pre><code>df.groupby('A').B.sum().reindex(list('abcdef'), fill_value=0)
A
a 3
b 0
c 7
d 0
e 5
f 0
Name: B, dtype: int64
</code></pre>
| 3 | 2016-09-27T04:38:45Z | [
"python",
"pandas"
]
|
curve_fit doesn't work properly with 4 parameters | 39,716,026 | <p>Running the following code, </p>
<pre><code>x = np.array([50.849937, 53.849937, 56.849937, 59.849937, 62.849937, 65.849937, 68.849937, 71.849937, 74.849937, 77.849937, 80.849937, 83.849937, 86.849937, 89.849937, 92.849937])
y = np.array([410.67800, 402.63800, 402.63800, 386.55800, 330.27600, 217.71400, 72.98990, 16.70860, 8.66833, 40.82920, 241.83400, 386.55800, 394.59800, 394.59800, 402.63800])
def f(om, a, i , c):
return a - i*np.exp(- c* (om-74.)**2)
par, cov = curve_fit(f, x, y)
stdev = np.sqrt(np.diag(cov) )
</code></pre>
<p>produces this Graph,</p>
<p><a href="http://i.stack.imgur.com/0Dvop.png" rel="nofollow"><img src="http://i.stack.imgur.com/0Dvop.png" alt="enter image description here"></a></p>
<p>With the following parameters and standard deviation:</p>
<pre><code>par = [ 4.09652163e+02, 4.33961227e+02, 1.58719772e-02]
stdev = [ 1.46309578e+01, 2.44878171e+01, 2.40474753e-03]
</code></pre>
<p>However, when trying to fit this data to the following function:</p>
<pre><code>def f(om, a, i , c, omo):
return a - i*np.exp(- c* (om-omo)**2)
</code></pre>
<p>It doesn't work, it produces a standard deviation of </p>
<pre><code>stdev = [inf, inf, inf, inf, inf]
</code></pre>
<p>Is there any way to fix this?</p>
| 1 | 2016-09-27T04:33:49Z | 39,718,226 | <p>It looks like it isn't converging (see <a href="http://stackoverflow.com/questions/15624070/why-does-scipy-optimize-curve-fit-not-fit-to-the-data">this</a> and <a href="http://stackoverflow.com/questions/21420792/exponential-curve-fitting-in-scipy?rq=1">this</a>). Try adding an initial condition,</p>
<pre><code>par, cov = curve_fit(f, x, y, p0=[1.,1.,1.,74.])
</code></pre>
<p>which results in the </p>
<pre><code>par = [ 4.11892318e+02, 4.36953868e+02, 1.55741131e-02, 7.32560690e+01])
stdev = [ 1.17579445e+01, 1.94401006e+01, 1.86709423e-03, 2.62952690e-01]
</code></pre>
| 1 | 2016-09-27T07:09:20Z | [
"python",
"numpy",
"matplotlib",
"scipy",
"curve-fitting"
]
|
curve_fit doesn't work properly with 4 parameters | 39,716,026 | <p>Running the following code, </p>
<pre><code>x = np.array([50.849937, 53.849937, 56.849937, 59.849937, 62.849937, 65.849937, 68.849937, 71.849937, 74.849937, 77.849937, 80.849937, 83.849937, 86.849937, 89.849937, 92.849937])
y = np.array([410.67800, 402.63800, 402.63800, 386.55800, 330.27600, 217.71400, 72.98990, 16.70860, 8.66833, 40.82920, 241.83400, 386.55800, 394.59800, 394.59800, 402.63800])
def f(om, a, i , c):
return a - i*np.exp(- c* (om-74.)**2)
par, cov = curve_fit(f, x, y)
stdev = np.sqrt(np.diag(cov) )
</code></pre>
<p>produces this Graph,</p>
<p><a href="http://i.stack.imgur.com/0Dvop.png" rel="nofollow"><img src="http://i.stack.imgur.com/0Dvop.png" alt="enter image description here"></a></p>
<p>With the following parameters and standard deviation:</p>
<pre><code>par = [ 4.09652163e+02, 4.33961227e+02, 1.58719772e-02]
stdev = [ 1.46309578e+01, 2.44878171e+01, 2.40474753e-03]
</code></pre>
<p>However, when trying to fit this data to the following function:</p>
<pre><code>def f(om, a, i , c, omo):
return a - i*np.exp(- c* (om-omo)**2)
</code></pre>
<p>It doesn't work, it produces a standard deviation of </p>
<pre><code>stdev = [inf, inf, inf, inf, inf]
</code></pre>
<p>Is there any way to fix this?</p>
| 1 | 2016-09-27T04:33:49Z | 39,723,506 | <p>You can calculate the initial condition from data:</p>
<pre><code>%matplotlib inline
import pylab as pl
import numpy as np
from scipy.optimize import curve_fit
x = np.array([50.849937, 53.849937, 56.849937, 59.849937, 62.849937, 65.849937, 68.849937, 71.849937, 74.849937, 77.849937, 80.849937, 83.849937, 86.849937, 89.849937, 92.849937])
y = np.array([410.67800, 402.63800, 402.63800, 386.55800, 330.27600, 217.71400, 72.98990, 16.70860, 8.66833, 40.82920, 241.83400, 386.55800, 394.59800, 394.59800, 402.63800])
def f(om, a, i , c, omo):
return a - i*np.exp(- c* (om-omo)**2)
par, cov = curve_fit(f, x, y, p0=[y.max(), y.ptp(), 1, x[np.argmin(y)]])
stdev = np.sqrt(np.diag(cov) )
pl.plot(x, y, "o")
x2 = np.linspace(x.min(), x.max(), 100)
pl.plot(x2, f(x2, *par))
</code></pre>
| 0 | 2016-09-27T11:29:31Z | [
"python",
"numpy",
"matplotlib",
"scipy",
"curve-fitting"
]
|
How to iterate a list of string, using Sikuli | 39,716,156 | <p>I am using sikuli to automate an application; it process a file and save the output of this file.</p>
<p>I am taking a snapshot of the file itself, so Sikuli can find it, but I have to process 30 files; so taking 30 snapshot of each file is really not that logic. Is there a way to loop through a list of files, as string, so Sikuli can read the file name and retrieve it from a folder, instead of me taking snapshots of everything?</p>
<p>I did try to use the file name passed as text, but I get an error from Sikuli, since it can't find the file. </p>
<p>I call <code>findText("myfile.txt")</code> when the file prompt is on screen, but I get an error:</p>
<pre><code>[error] TextRecognizer: init: export tessdata not possible - run setup with option 3
[error] TextRecognizer not working: tessdata stuff not available at:
/User/test/Library/Application Support/Sikulix/SikulixTesseract/tessdata
[error] FindFailed ( null )
</code></pre>
<p>I did check with Google and found not much. I am aware that Sikuli is mainly for snapshot automation, but it has python bindings for Java, so it can use python logic like if cycles and other construct, so I assume there should be a way to process multiple files via code.</p>
| 0 | 2016-09-27T04:47:42Z | 39,760,995 | <p>I still don't completely understand what are you trying to do but the <code>findText()</code> function that you are using is actually attempting to find text on the screen by using OCR extraction of text in the region. Are you sure that's what you want to do? If yes you have to:</p>
<ol>
<li>Setup Sikuli properly to include the tesseract libraries. You have a detailed instruction on SikuliX website.</li>
<li>Be aware that OCR feature is rather flaky and usually unreliable unless you do some work on tweaking the OCR engine which outside SikuliX scope.</li>
</ol>
| 0 | 2016-09-29T03:36:15Z | [
"python",
"sikuli"
]
|
pandas series drop when multiindex is not unique | 39,716,210 | <p>consider the <code>pd.Series</code> <code>s</code></p>
<pre><code>midx = pd.MultiIndex.from_product([list('ABC'), np.arange(3)])
s = pd.Series(1, midx)
s
A 0 1
1 1
2 1
B 0 1
1 1
2 1
C 0 1
1 1
2 1
dtype: int64
</code></pre>
<p>It is very convenient to use <code>drop</code> to get rid of cross sections. For example</p>
<pre><code>s.drop('A')
B 0 1
1 1
2 1
C 0 1
1 1
2 1
dtype: int64
</code></pre>
<p>But if I make the index non-unique</p>
<pre><code>s = s.append(pd.Series(0, pd.MultiIndex.from_tuples([('A', 2)]))).sort_index()
s
A 0 1
1 1
2 1
2 0
B 0 1
1 1
2 1
C 0 1
1 1
2 1
dtype: int64
</code></pre>
<p>Then the same <code>drop</code> no longer works.</p>
<pre><code>s.drop('A')
A 0 1
1 1
2 1
2 0
B 0 1
1 1
2 1
C 0 1
1 1
2 1
dtype: int64
</code></pre>
<p>How do I drop like before</p>
<p>The desired result should be (this doesn't work, what does)</p>
<pre><code>s.drop('B')
A 0 1
1 1
2 1
2 0
C 0 1
1 1
2 1
dtype: int64
</code></pre>
| 2 | 2016-09-27T04:53:37Z | 39,716,240 | <p>I'm not sure why the <code>s.drop('B')</code> doesn't work but using the <code>level=0</code> parameter does.</p>
<pre><code>s.drop('B', level=0)
A 0 1
1 1
2 1
2 0
C 0 1
1 1
2 1
dtype: int64
</code></pre>
| 1 | 2016-09-27T04:55:41Z | [
"python",
"pandas",
"multi-index"
]
|
How to log and save file with date and timestamp in Python | 39,716,271 | <p>I am trying to log temperature from a DS18B20 sensor using Raspberry Pi 3 via Python code executed from shell.</p>
<p>I want to log temperature with timestamp and then save the file.</p>
<p>What I am doing presently is saving it to a filename entered in the code, but I want to log the file with date and timestamp in filename.</p>
<p><strong>Case 1 :</strong> When I put a filename in the code, then I can append data to the same file over and over, but I can't start a new separate logging without editing the code.</p>
<pre><code>#Writes data to file
def write_temp(temperature):
with open("/home/pi/temp.csv", "a") as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"),str(temperature)))
</code></pre>
<p>Problem is that the file is always temp.csv and data gets appended each time.</p>
<p><strong>Case 2</strong>: I tried to get filename from timestamp, but each second a new file is getting generated.</p>
<pre><code>def write_temp(temperature):
filename1 = strftime("%Y-%m-%d %H:%M:%S")
#filename1 = sys.argv[1]
with open('%s.csv' % filename1, 'a') as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"),str(temperature)))
</code></pre>
<p>In the above case, I would rather like to have the filename set at the start of logging each time or at the end of logging. I would also like to save the name as Log-<em>DateTime</em> instead of just DateTime. I tried this by doing <code>('"Log-" + %s.csv' % filename1, 'a')</code> instead of <code>('%s.csv' % filename1, 'a')</code>, but that didn't work out.</p>
<p><strong>Ideal Case:</strong> I want file name to be WORD-<em>DateTime</em>, where WORD is sent as an argument from the command line, as in below:</p>
<pre><code>sudo python TTLogging.py WORD
</code></pre>
<p>Can you point out where I am going wrong? I can share the full code of my work if required since it is a learning exercise.</p>
| 2 | 2016-09-27T04:58:47Z | 39,716,325 | <p>Try :</p>
<pre><code>with open('Log-%s.csv' % filename1, 'a') as log:
</code></pre>
| 2 | 2016-09-27T05:02:40Z | [
"python",
"csv",
"datetime",
"raspberry-pi"
]
|
How to log and save file with date and timestamp in Python | 39,716,271 | <p>I am trying to log temperature from a DS18B20 sensor using Raspberry Pi 3 via Python code executed from shell.</p>
<p>I want to log temperature with timestamp and then save the file.</p>
<p>What I am doing presently is saving it to a filename entered in the code, but I want to log the file with date and timestamp in filename.</p>
<p><strong>Case 1 :</strong> When I put a filename in the code, then I can append data to the same file over and over, but I can't start a new separate logging without editing the code.</p>
<pre><code>#Writes data to file
def write_temp(temperature):
with open("/home/pi/temp.csv", "a") as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"),str(temperature)))
</code></pre>
<p>Problem is that the file is always temp.csv and data gets appended each time.</p>
<p><strong>Case 2</strong>: I tried to get filename from timestamp, but each second a new file is getting generated.</p>
<pre><code>def write_temp(temperature):
filename1 = strftime("%Y-%m-%d %H:%M:%S")
#filename1 = sys.argv[1]
with open('%s.csv' % filename1, 'a') as log:
log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"),str(temperature)))
</code></pre>
<p>In the above case, I would rather like to have the filename set at the start of logging each time or at the end of logging. I would also like to save the name as Log-<em>DateTime</em> instead of just DateTime. I tried this by doing <code>('"Log-" + %s.csv' % filename1, 'a')</code> instead of <code>('%s.csv' % filename1, 'a')</code>, but that didn't work out.</p>
<p><strong>Ideal Case:</strong> I want file name to be WORD-<em>DateTime</em>, where WORD is sent as an argument from the command line, as in below:</p>
<pre><code>sudo python TTLogging.py WORD
</code></pre>
<p>Can you point out where I am going wrong? I can share the full code of my work if required since it is a learning exercise.</p>
| 2 | 2016-09-27T04:58:47Z | 39,716,356 | <p>In <strong>Case 2</strong>, every time <code>write_temp</code> is called, it is populating <code>filename1</code> with timestamp.</p>
<p>So consider,for example, you called it at <code>10:15:13 (hh:mm:ss)</code>, then <code>filename1</code> will be <code>10-15-13.csv</code>. When you will call it again at <code>10:15:14</code> then <code>filename1</code> will be <code>10-15-14.csv</code>.</p>
<p>That's why new file is getting created.</p>
<p><strong>Solution :</strong> Take out <code>filename1</code> from <code>temp_write</code> and pass filename to that function as argument.</p>
<pre><code>from datetime import *
import sys
def write_temp(temperature,file_name):
print ("In write_temp function - "+file_name)
with open(file_name, 'a') as log:
log.write("{0},{1}\n".format(datetime.now().strftime("%Y-%m-%d %H:%M:%S"),str(temperature)))
arg = sys.argv[1]
filename1 = str(arg) + "-" + datetime.now().strftime("%Y-%m-%d-%H-%M-%S")+".csv"
print ("File name is "+filename1)
write_temp(1,filename1)
</code></pre>
<p><strong>Output on console:</strong></p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py LOG
File name is LOG-2016-09-27-11-03-16.csv
In write_temp function - LOG-2016-09-27-11-03-16.csv
C:\Users\dinesh_pundkar\Desktop>
</code></pre>
<p><strong>Output of LOG-TimeStamp.csv:</strong></p>
<pre><code>2016-09-27 10:47:06,1
2016-09-27 10:47:06,3
</code></pre>
| 1 | 2016-09-27T05:05:32Z | [
"python",
"csv",
"datetime",
"raspberry-pi"
]
|
Django: How to save the POST.get of a checkbox as false (0) in a DataBase? | 39,716,303 | <p>I'm trying to save the value of a checkbox as true or false in a database. I have to use a model for this. If the box is checked the value of '1' is saved. However, if the box is not checked I get the error message: </p>
<pre><code>Django Version: 1.9.4
Exception Type: IntegrityError
Exception Value: (1048, "Column 'completed' cannot be null")
</code></pre>
<p>Currently my setup looks like this: </p>
<p>In models.py I have:</p>
<pre><code>class myClass(models.Model):
completed = models.BooleanField(default=False, blank=True)
</code></pre>
<p>In views.py I have:</p>
<pre><code>def create_myClass(request):
completed = request.POST.get('completed')
toSave = models.myClass(completed=completed)
toSave.save()
</code></pre>
<p>and in the HTML I have:</p>
<pre><code><label class="col-md-5" for="completed"> Completed: </label>
<input id="completed" type="checkbox" name="completed">
</code></pre>
<p><br></p>
<p>I've tried to set required = False in the BooleanField as some other posts suggested but then get the error: <code>TypeError: __init__() got an unexpected keyword argument 'required'</code>.</p>
<p>I've also tried to set 'completed' to False in views.py like:</p>
<pre><code>if request.POST.get('completed', False ):
commpleted = False
</code></pre>
<p>and </p>
<pre><code>completed = request.POST.get('completed')
if completed == 'null':
commpleted = False
</code></pre>
<p>But neither work (not sure if my syntax is correct?)</p>
<p>Any ideas or suggestions are greatly appreciated! </p>
| 0 | 2016-09-27T05:01:17Z | 39,716,431 | <p>You can print the value of <code>request.POST</code> to see what you are getting in the views.</p>
<p>If you have not specified a <code>value</code> attribute in the checkbox HTML element, the default value which will passed in the <code>POST</code> is <code>on</code> if the checkbox is checked.</p>
<p>In the views you can check if the value of <code>completed</code> is <code>on</code>:</p>
<pre><code># this will set completed to True, only if the value of
# `completed` passed in the POST is on
completed = request.POST.get('completed', '') == 'on'
</code></pre>
<p>If the checkbox is not checked, then nothing will be passed and in that case you will get <code>False</code> from above statement.</p>
<hr>
<p>I would suggest that you use a <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#django.forms.ModelForm" rel="nofollow">Django ModelForm</a> if you can so most of the things are automatically taken care for you.</p>
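<p>A minimal sketch of that suggestion, based on the <code>myClass</code> model shown in the question (the form and view names are made up for illustration, and error handling is omitted):</p>
<pre><code># forms.py
from django import forms
from .models import myClass

class MyClassForm(forms.ModelForm):
    class Meta:
        model = myClass
        fields = ['completed']

# views.py
def create_myClass(request):
    form = MyClassForm(request.POST)
    if form.is_valid():
        form.save()  # an unchecked checkbox is saved as False automatically
</code></pre>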
| 2 | 2016-09-27T05:12:27Z | [
"python",
"django",
"django-models",
"django-views"
]
|
Django: How to save the POST.get of a checkbox as false (0) in a DataBase? | 39,716,303 | <p>I'm trying to save the value of a checkbox as true or false in a database. I have to use a model for this. If the box is checked the value of '1' is saved. However, if the box is not checked I get the error message: </p>
<pre><code>Django Version: 1.9.4
Exception Type: IntegrityError
Exception Value: (1048, "Column 'completed' cannot be null")
</code></pre>
<p>Currently my setup looks like this: </p>
<p>In models.py I have:</p>
<pre><code>class myClass(models.Model):
completed = models.BooleanField(default=False, blank=True)
</code></pre>
<p>In views.py I have:</p>
<pre><code>def create_myClass(request):
completed = request.POST.get('completed')
toSave = models.myClass(completed=completed)
toSave.save()
</code></pre>
<p>and in the HTML I have:</p>
<pre><code><label class="col-md-5" for="completed"> Completed: </label>
<input id="completed" type="checkbox" name="completed">
</code></pre>
<p><br></p>
<p>I've tried to set required = False in the BooleanField as some other posts suggested but then get the error: <code>TypeError: __init__() got an unexpected keyword argument 'required'</code>.</p>
<p>I've also tried to set 'completed' to False in views.py like:</p>
<pre><code>if request.POST.get('completed', False ):
commpleted = False
</code></pre>
<p>and </p>
<pre><code>completed = request.POST.get('completed')
if completed == 'null':
commpleted = False
</code></pre>
<p>But neither work (not sure if my syntax is correct?)</p>
<p>Any ideas or suggestions are greatly appreciated! </p>
| 0 | 2016-09-27T05:01:17Z | 39,716,753 | <blockquote>
<p>Use this code snippet</p>
</blockquote>
<pre><code>def create_myClass(request):
completed = request.POST.get('completed')
if not completed:
completed = False
toSave = models.myClass(completed=completed)
toSave.save()
</code></pre>
| 0 | 2016-09-27T05:38:55Z | [
"python",
"django",
"django-models",
"django-views"
]
|
Django: How to save the POST.get of a checkbox as false (0) in a DataBase? | 39,716,303 | <p>I'm trying to save the value of a checkbox as true or false in a database. I have to use a model for this. If the box is checked the value of '1' is saved. However, if the box is not checked I get the error message: </p>
<pre><code>Django Version: 1.9.4
Exception Type: IntegrityError
Exception Value: (1048, "Column 'completed' cannot be null")
</code></pre>
<p>Currently my setup looks like this: </p>
<p>In models.py I have:</p>
<pre><code>class myClass(models.Model):
completed = models.BooleanField(default=False, blank=True)
</code></pre>
<p>In views.py I have:</p>
<pre><code>def create_myClass(request):
completed = request.POST.get('completed')
toSave = models.myClass(completed=completed)
toSave.save()
</code></pre>
<p>and in the HTML I have:</p>
<pre><code><label class="col-md-5" for="completed"> Completed: </label>
<input id="completed" type="checkbox" name="completed">
</code></pre>
<p><br></p>
<p>I've tried to set required = False in the BooleanField as some other posts suggested but then get the error: <code>TypeError: __init__() got an unexpected keyword argument 'required'</code>.</p>
<p>I've also tried to set 'completed' to False in views.py like:</p>
<pre><code>if request.POST.get('completed', False ):
commpleted = False
</code></pre>
<p>and </p>
<pre><code>completed = request.POST.get('completed')
if completed == 'null':
commpleted = False
</code></pre>
<p>But neither work (not sure if my syntax is correct?)</p>
<p>Any ideas or suggestions are greatly appreciated! </p>
| 0 | 2016-09-27T05:01:17Z | 39,720,049 | <p>To tackle that problem you have to know how checkboxes work: if they are checked, a value is passed to <code>request.POST</code> -- but if a checkbox isn't checked, no value will be passed <strong>at all</strong>. So if the statement</p>
<pre><code>'completed' in request.POST
</code></pre>
<p>is true, the checkbox has been checked (because only then 'completed' has been sent and is in the POST array), otherwise it hasn't.</p>
<p>I like this way more because it doesn't deal w/ any fancy default values, but is a plain and simple condition.</p>
<pre><code>completed = 'completed' in request.POST
toSave = models.myClass( completed = completed )
toSave.save()
</code></pre>
| 0 | 2016-09-27T08:43:09Z | [
"python",
"django",
"django-models",
"django-views"
]
|
Filtering data out of Excel file via CSV | 39,716,437 | <p>Is there eny alternative for this than making multiple for loop ? </p>
<p>I have an Excel file : </p>
<pre><code>|col1|col2|col3|
1 x y
2 s r
3 o o
</code></pre>
<p>I want an output like this: When first column argument equals 1, print argument from column 3 from the same row.</p>
<pre><code>import csv
reader = csv.reader(open("alerts.csv"), delimiter=',')
rows=[]
for row in reader:
rows.append(row)
for i in row:
x"i"?=row[i].split(";")
</code></pre>
<p>I'm trying to figure out a function that would make another list with split information form <code>row[i]</code> but that wont work I feel. </p>
| -2 | 2016-09-27T05:12:55Z | 39,716,740 | <pre><code>import pandas as pd
reader = pd.read_csv("alerts.csv")
print reader[['col1','col3']].loc[reader['col1'] == 1]
</code></pre>
| 0 | 2016-09-27T05:38:09Z | [
"python",
"excel",
"csv"
]
|
Filtering data out of Excel file via CSV | 39,716,437 | <p>Is there eny alternative for this than making multiple for loop ? </p>
<p>I have an Excel file : </p>
<pre><code>|col1|col2|col3|
1 x y
2 s r
3 o o
</code></pre>
<p>I want an output like this: When first column argument equals 1, print argument from column 3 from the same row.</p>
<pre><code>import csv
reader = csv.reader(open("alerts.csv"), delimiter=',')
rows=[]
for row in reader:
rows.append(row)
for i in row:
x"i"?=row[i].split(";")
</code></pre>
<p>I'm trying to figure out a function that would make another list with split information form <code>row[i]</code> but that wont work I feel. </p>
| -2 | 2016-09-27T05:12:55Z | 39,718,136 | <p>So I have a different idea: sort everything out into different lists. </p>
<pre><code>import csv
reader = csv.reader(open("alerts.csv"), delimiter=',')
rows=[]
for row in reader:
rows.append(row)
num_lists=int(len(rows))
lists = []
for p in range(num_lists):
lists.append([])
for i in rows:
lists[i][i]=row[i].split(";")
print (lists[0][0])
</code></pre>
<p>but that does not seem to work :< </p>
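<p>For comparison, a minimal sketch of the goal stated in the question ("when column 1 equals 1, print column 3 from the same row") using only the <code>csv</code> module, assuming a comma-delimited file with a header row as in the sample layout:</p>
<pre><code>import csv

with open("alerts.csv") as f:
    reader = csv.reader(f, delimiter=',')
    next(reader)              # skip the header row
    for row in reader:
        if row[0] == "1":     # col1 equals 1 (csv values are read as strings)
            print(row[2])     # print col3 from the same row
</code></pre>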
| 0 | 2016-09-27T07:04:21Z | [
"python",
"excel",
"csv"
]
|
python array of unicodes to float conversion | 39,716,472 | <p>I have an array which looks as:</p>
<pre><code>MyArray
array(['1445.98', '1422.64', '1392.93', ..., '2012.21', '1861.19',
'1681.02'], dtype=object)
type(MyArray[0])
</code></pre>
<p>I tried:</p>
<pre><code>MyArray.astype(np.float)
</code></pre>
<p>Error:</p>
<pre><code>ValueError: could not convert string to float: -
</code></pre>
<p>How do I convert MyArray to array of floats instead.</p>
| -1 | 2016-09-27T05:16:01Z | 39,716,571 | <p>Maybe convert each memeber individually. Try something like, </p>
<pre><code>map(lambda x: float(x),mydata)
</code></pre>
| 1 | 2016-09-27T05:24:04Z | [
"python",
"unicode"
]
|
python array of unicodes to float conversion | 39,716,472 | <p>I have an array which looks as:</p>
<pre><code>MyArray
array(['1445.98', '1422.64', '1392.93', ..., '2012.21', '1861.19',
'1681.02'], dtype=object)
type(MyArray[0])
</code></pre>
<p>I tried:</p>
<pre><code>MyArray.astype(np.float)
</code></pre>
<p>Error:</p>
<pre><code>ValueError: could not convert string to float: -
</code></pre>
<p>How do I convert MyArray to array of floats instead.</p>
| -1 | 2016-09-27T05:16:01Z | 39,716,590 | <p>Obviously some of your lines don't have valid float data</p>
<pre><code>map(lambda x: float(x),MyArray)
</code></pre>
<p>or </p>
<p>if you have a list</p>
<pre><code>[float(x) for x in MyList]
</code></pre>
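<p>Since the original error ("could not convert string to float: -") suggests some entries are not numeric at all, here is a sketch that converts what it can and substitutes <code>NaN</code> for the rest (assuming placeholders such as <code>'-'</code> should become missing values):</p>
<pre><code>import numpy as np

def to_float(value):
    try:
        return float(value)
    except (TypeError, ValueError):
        return np.nan   # e.g. the '-' entries mentioned in the error message

clean = np.array([to_float(x) for x in MyArray], dtype=np.float64)
</code></pre>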
| 2 | 2016-09-27T05:25:21Z | [
"python",
"unicode"
]
|
Splitting string in python when last part may not exist | 39,716,473 | <p>How do I split the foll. in python:</p>
<pre><code>'a_b_c'
</code></pre>
<p>The approach should also work if the string is <code>'a_b'</code></p>
| -4 | 2016-09-27T05:16:02Z | 39,716,513 | <pre><code>splited = "a_b_c".split('_')
</code></pre>
<p><code>inputstring.split('_')</code> will split it.</p>
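<p>If the intent is to always unpack three parts even when the last one is missing (an assumption, since the question is terse), the result of <code>split</code> can be padded:</p>
<pre><code>parts = 'a_b'.split('_')
a, b, c = parts + [None] * (3 - len(parts))   # c is None when the third part is absent
</code></pre>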
| 2 | 2016-09-27T05:19:19Z | [
"python"
]
|