title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Getting child attributes from an XML document using element tree
| 39,475,897 |
<p>I have an xml pom file like the following:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.amirsys</groupId>
<artifactId>components-parent</artifactId>
<version>RELEASE</version>
</parent>
<artifactId>statdxws</artifactId>
<version>6.5.0-16</version>
<packaging>war</packaging>
<dependencies>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.4-1200-jdbc41</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.amirsys</groupId>
<artifactId>referencedb</artifactId>
<version>5.0.0-1</version>
<exclusions>
<exclusion>
<groupId>com.amirsys</groupId>
<artifactId>jig</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</code></pre>
<p></p>
<p>I am trying to pull the groupIds, artifactIds and versions using element tree to create a dependency object, but it won't find the dependency tags. This is my code so far:</p>
<pre><code>tree = ElementTree.parse('pomFile.xml')
root = tree.getroot()
namespace = '{http://maven.apache.org/POM/4.0.0}'
for dependency in root.iter(namespace+'dependency'):
    groupId = dependency.get('groupId')
    artifactId = dependency.get('artifactId')
    version = dependency.get('version')
    print groupId, artifactId, version
</code></pre>
<p>This outputs nothing, and I can't figure out why the code isn't iterating through the dependency tag. Any help would be appreciated.</p>
| 0 |
2016-09-13T17:34:06Z
| 39,476,557 |
<p>Your XML has a small mistake. There should be a closing tag <code></project></code> which you probably missed in the question.</p>
<p>The following works for me:</p>
<pre><code>from xml.etree import ElementTree
tree = ElementTree.parse('pomFile.xml')
root = tree.getroot()
namespace = '{http://maven.apache.org/POM/4.0.0}'
for dependency in root.iter(namespace+'dependency'):
    groupId = dependency.find(namespace+'groupId').text
    artifactId = dependency.find(namespace+'artifactId').text
    version = dependency.find(namespace+'version').text
    print groupId, artifactId, version
$ python -i a.py
org.postgresql postgresql 9.4-1200-jdbc41
com.amirsys referencedb 5.0.0-1
</code></pre>
<p>Your usage of <code>.get()</code> is wrong: <code>.get()</code> looks up XML <em>attributes</em>, not child elements. Let's say your xml is:</p>
<pre><code><?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
</code></pre>
<p>And you write python code like:</p>
<pre><code>import xml.etree.ElementTree as ET
tree = ET.parse('country_data.xml')
root = tree.getroot()
for country in root.findall('country'):
    rank = country.find('rank').text
    name = country.get('name')
    print name, rank
</code></pre>
<p>This will print:</p>
<blockquote>
<pre><code>Liechtenstein 1
Singapore 4
Panama 68
</code></pre>
</blockquote>
<p>As you can see, <code>.get()</code> gives you the value of the attribute. The <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.get" rel="nofollow">docs</a> are pretty clear on this.</p>
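<p>For Python 3 (where <code>print</code> is a function), the same lookup can also be written with a namespace prefix map passed to <code>find()</code>. A self-contained sketch, parsing from an in-memory string instead of <code>pomFile.xml</code>:</p>

```python
from xml.etree import ElementTree

# A trimmed-down stand-in for the POM in the question.
pom = """<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>9.4-1200-jdbc41</version>
    </dependency>
  </dependencies>
</project>"""

root = ElementTree.fromstring(pom)
# the prefix 'm' is arbitrary; it only has to match the dict key below
ns = {'m': 'http://maven.apache.org/POM/4.0.0'}

deps = []
for dependency in root.iter('{http://maven.apache.org/POM/4.0.0}dependency'):
    deps.append((dependency.find('m:groupId', ns).text,
                 dependency.find('m:artifactId', ns).text,
                 dependency.find('m:version', ns).text))
print(deps)
```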
| 0 |
2016-09-13T18:16:04Z
|
[
"python",
"maven",
"elementtree"
] |
Aligning pandas DataFrames that don't have a common index
| 39,475,968 |
<p>I have DataFrames that represent data from two different sensors:</p>
<pre><code>In[0]: df0
Out[0]:
time foo
0 0.1 123
1 1.0 234
2 2.1 345
3 3.1 456
4 3.9 567
5 5.1 678
In[0]: df1
Out[0]:
time bar
0 -0.9 876
1 -0.1 765
2 0.7 654
3 2.1 543
4 3.0 432
</code></pre>
<p>The sensors provide a measure (<code>foo</code> or <code>bar</code>) and a timestamp (<code>time</code>) for each events that they are monitoring. A couple things to note:</p>
<ol>
<li>the timestamps are close, but not identical</li>
<li>the range over which data was collected is different across sensors (i.e. they were turned on and turned off independently)</li>
</ol>
<p>I'm trying to align <code>df0</code> and <code>df1</code> to get the following:</p>
<pre><code>In[3]: df3
Out[3]:
time_df0 foo time_df1 bar
0 nan nan -0.9 876
1 0.1 123 -0.1 765
2 1.0 234 0.7 654
3 2.1 345 2.1 543
4 3.1 456 3.0 432
5 3.9 567 nan nan
6 5.1 678 nan nan
</code></pre>
| 2 |
2016-09-13T17:39:16Z
| 39,476,878 |
<p><a href="http://stackoverflow.com/questions/39475968/aligning-pandas-dataframes-that-dont-have-a-common-index?noredirect=1#comment66271493_39475968">@Kartik posted a perfect link</a> to start with...</p>
<p>Here is a starting point:</p>
<pre><code>df0.set_index('time', inplace=True)
df1.set_index('time', inplace=True)
In [36]: df1.reindex(df0.index, method='nearest').join(df0)
Out[36]:
bar foo
time
0.1 765 123
1.0 654 234
2.1 543 345
3.1 432 456
3.9 432 567
5.1 432 678
</code></pre>
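<p>One caveat with a plain <code>reindex(..., method='nearest')</code>: every <code>df0</code> timestamp gets matched to <em>some</em> <code>df1</code> row, which is why <code>432</code> repeats at the end of the output. A <code>tolerance</code> argument limits how far a "nearest" match may be (0.5 here is an assumed threshold), giving <code>NaN</code> where the sensors don't overlap:</p>

```python
import numpy as np
import pandas as pd

df0 = pd.DataFrame({'time': [0.1, 1.0, 2.1, 3.1, 3.9, 5.1],
                    'foo': [123, 234, 345, 456, 567, 678]}).set_index('time')
df1 = pd.DataFrame({'time': [-0.9, -0.1, 0.7, 2.1, 3.0],
                    'bar': [876, 765, 654, 543, 432]}).set_index('time')

# beyond the tolerance, no neighbour is used and the cell becomes NaN
aligned = df1.reindex(df0.index, method='nearest', tolerance=0.5).join(df0)
print(aligned)
```

<p>Note this still only aligns onto <code>df0</code>'s timestamps; recovering the unmatched <code>df1</code> rows (like <code>-0.9</code> in the desired output) would need an extra outer-style combination step.</p>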
| 1 |
2016-09-13T18:35:43Z
|
[
"python",
"pandas",
"indexing",
"dataframe"
] |
Apply function to each cell in DataFrame
| 39,475,978 |
<p>I have a dataframe that may look like this:</p>
<pre><code>A B C
foo bar foo bar
bar foo foo bar
</code></pre>
<p>I want to look through every element of each row (or every element of each column) and apply the following function to get the subsequent DF:</p>
<pre><code>def foo_bar(x):
    return x.replace('foo', 'wow')
A B C
wow bar wow bar
bar wow wow bar
</code></pre>
<p>Is there a simple one-liner that can apply a function to each cell? </p>
<p>This is a simplistic example so there may be an easier way to execute this specific example other than applying a function, but what I am really asking about is how to apply a function in every cell within a dataframe. </p>
| 0 |
2016-09-13T17:39:45Z
| 39,476,023 |
<p>For the original version of the question (finding the even numbers in the data frame), you can simply do <code>df % 2 == 0</code>:</p>
<pre><code>df%2 == 0
# A B C
#0 True False True
#1 False True False
</code></pre>
<p><em>Update</em>:</p>
<p>Since the question has been updated, as per @ayhan suggested, you can use <code>applymap()</code> which is concise for your case. </p>
<pre><code>df.applymap(foo_bar)
# A B C
#0 wow bar wow bar
#1 bar wow wow bar
</code></pre>
<p>Another option is to vectorize your function and then use <code>apply</code> method:</p>
<pre><code>import numpy as np
df.apply(np.vectorize(foo_bar))
# A B C
#0 wow bar wow bar
#1 bar wow wow bar
</code></pre>
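<p>As a runnable sketch on a small stand-in frame (the exact shape of the question's frame is ambiguous, so assume one string per cell):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo bar', 'bar foo'],
                   'B': ['foo bar', 'foo bar']})

def foo_bar(x):
    return x.replace('foo', 'wow')

# applymap() is the element-wise method; pandas >= 2.1 renames it DataFrame.map
out = df.map(foo_bar) if hasattr(df, 'map') else df.applymap(foo_bar)
print(out)
```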
| 4 |
2016-09-13T17:42:24Z
|
[
"python",
"pandas",
"dataframe",
"apply"
] |
Get feature and class names into decision tree using export graphviz
| 39,476,020 |
<p>Good Afternoon,</p>
<p>I am working on a decision tree classifier and am having trouble visualizing it. I can output the decision tree, however I cannot get my feature or class names/labels into it. My data is in a pandas dataframe format which I then move into a numpy array and pass to the classifier. I've tried a few things, but just seem to error out on the export when I try and specify class names. Any help would be appreciated. Code is below.</p>
<pre><code>all_inputs=df.ix[:,14:].values
all_classes=df['wic'].values
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_classes,train_size=0.75, random_state=1)
decision_tree_classifier=DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs,training_classes)
export_graphviz(decision_tree_classifier, out_file="mytree.dot",
feature_names=??,
class_names=??)
</code></pre>
<p>Like I said, it runs fine and outputs a decision tree viz if I take out the feature_names and class_names parameters. I'd like to include them in the output though if possible and have hit a wall...</p>
<p>Any help would be greatly appreciated!</p>
<p>Thanks,</p>
<p>Scott</p>
| 0 |
2016-09-13T17:42:14Z
| 39,481,669 |
<p>The class names are stored in <code>decision_tree_classifier.classes_</code>, i.e. the <code>classes_</code> attribute of your <code>DecisionTreeClassifier</code> instance. And the feature names should be the columns of your input dataframe. For your case you will have </p>
<pre><code>classe_names = decision_tree_classifier.classes_
feature_names = df.columns[14:]
</code></pre>
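<p>A runnable sketch on a stand-in dataset (iris here, since the question's <code>df</code> isn't available). A common cause of the export erroring out is passing non-string class labels, hence the <code>str()</code> conversion; treat the parameter choices as assumptions:</p>

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# out_file=None returns the dot source as a string instead of writing a file
dot_data = export_graphviz(
    clf, out_file=None,
    feature_names=iris.feature_names,           # stands in for df.columns[14:]
    class_names=[str(c) for c in clf.classes_]  # class_names must be strings
)
print(dot_data[:60])
```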
| 2 |
2016-09-14T02:34:48Z
|
[
"python",
"scikit-learn",
"decision-tree"
] |
what does the -1 mean in Scipy's voronoi algorithm?
| 39,476,094 |
<p>I am trying to custom plot the <a href="http://scipy.github.io/devdocs/generated/scipy.spatial.Voronoi.html" rel="nofollow">Voronoi</a> regions of random points in 2D</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.spatial import Voronoi

pt = np.random.random((10,2))
x = Voronoi(pt)
# trial and error to figure out the type structure of [x]
plt.plot(x.vertices[:,0], x.vertices[:,1], '.', markersize=5)
# how to iterate through the x.regions object?
for poly in x.regions:
    z = np.array([ x.vertices[k] for k in poly])
    print z
    if z.shape[0] > 0:
        plt.plot( z[:,0], z[:,1])
plt.xlim([0,2])
plt.ylim([0,2])
</code></pre>
<p>why do the regions overlap? any advice for plotting the infinite regions?</p>
<p><a href="http://i.stack.imgur.com/oXSjA.png" rel="nofollow"><img src="http://i.stack.imgur.com/oXSjA.png" alt="enter image description here"></a></p>
<hr>
<p>The data points are just random numbers:</p>
<pre><code>x.vertices
array([[ 0.59851675, 0.15271572],
[ 0.24473753, 0.70398382],
[ 0.10135325, 0.34601724],
[ 0.42672008, 0.26129443],
[ 0.54966835, 1.64315275],
[ 0.24770706, 0.70543002],
[ 0.39509645, 0.64211128],
[ 0.63353948, 0.86992423],
[ 0.57476256, 1.4533911 ],
[ 0.76421296, 0.6054079 ],
[ 0.9564816 , 0.79492684],
[ 0.94432943, 0.62496293]])
</code></pre>
<p>the regions are listed by number</p>
<pre><code>x.regions
[[],
[2, -1, 1],
[3, 0, -1, 2],
[5, 1, -1, 4],
[6, 3, 2, 1, 5],
[11, 9, 7, 8, 10],
[8, 4, 5, 6, 7],
[9, 0, 3, 6, 7],
[10, -1, 4, 8],
[11, -1, 0, 9],
[11, -1, 10]]
</code></pre>
<p>and from this we can re-construct the plot. My question is what does the <code>-1</code> mean?</p>
| 1 |
2016-09-13T17:46:21Z
| 39,476,507 |
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Voronoi.html" rel="nofollow"><code>scipy.spatial.Voronoi</code></a> uses the Qhull library underneath. In my experience Qhull contains several usability bugs. You hit <a href="http://www.qhull.org/html/qvoronoi.htm#outputs" rel="nofollow">one of them</a>:</p>
<blockquote>
<h2>qvoronoi outputs</h2>
<h3>Voronoi vertices</h3>
<p>[...]</p>
<p><code>FN</code>: list the Voronoi vertices for each Voronoi region. The first line
is the number of Voronoi regions. Each remaining line starts with the
number of Voronoi vertices. <strong>Negative indices (e.g., -1) indicate
vertices outside of the Voronoi diagram.</strong></p>
</blockquote>
<hr>
<hr>
<blockquote>
<p>why do the regions overlap?</p>
</blockquote>
<p>So, <code>-1</code> in the first Voronoi region <code>[2, -1, 1]</code> from <code>x.regions</code> stands for <em>a vertex-at-infinity</em> (<em>that is not represented in</em> <code>x.vertices</code>). Yet, when you access <code>x.vertices</code> with that spurious index, you get the last vertex. This happens for every <code>-1</code> in your <code>x.regions</code> (note that those -1's denote different vertices-at-infinity). As a result you get spurious Voronoi edges connecting to the last vertex from <code>x.vertices</code>.</p>
<blockquote>
<p>any advice for plotting the infinite regions?</p>
</blockquote>
<p>Why don't you simply use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.voronoi_plot_2d.html" rel="nofollow"><code>scipy.spatial.voronoi_plot_2d()</code></a>?</p>
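<p>If you do want to draw the cells yourself, the fix implied above is to skip empty regions and any region containing the <code>-1</code> sentinel, so you never index <code>x.vertices</code> with it. A sketch:</p>

```python
import numpy as np
from scipy.spatial import Voronoi

np.random.seed(0)  # fixed seed so the sketch is repeatable
pt = np.random.random((10, 2))
vor = Voronoi(pt)

# keep only closed regions: non-empty and not touching the vertex-at-infinity
finite_regions = [r for r in vor.regions if r and -1 not in r]
for poly in finite_regions:
    z = vor.vertices[poly]  # safe now: no spurious -1 index
    # plt.fill(z[:, 0], z[:, 1]) would shade each closed cell
print(len(finite_regions))
```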
| 3 |
2016-09-13T18:13:00Z
|
[
"python",
"graphics",
"scipy",
"computational-geometry",
"voronoi"
] |
search a file for text using input from another file with a twist [Python]
| 39,476,150 |
<p>I want to use a queryfile.txt as the source file, which will be used for searching and matching each line to a datafile.txt. But the datafile.txt has a different structure.</p>
<p>queryfile.txt should look like this:</p>
<pre><code>Gina Cooper
Asthon Smith
Kim Lee
</code></pre>
<p>while the datafile.txt looks like this:</p>
<pre><code>Gina Cooper
112 Blahblah St., NY
Leigh Walsh
09D blablah, Blah
Asthon Smith
another address here
Kim Lee
another address here
</code></pre>
<p>I need to get the names AND the line after it. Here's the code to get matching names in both files, which is a modified code from dstromberg (<a href="http://stackoverflow.com/a/19934477">http://stackoverflow.com/a/19934477</a>):</p>
<pre><code>with open('querfile.txt', 'r') as input_file:
    input_addresses = set(names.rstrip() for names in input_file)
with open('datafile.txt', 'r') as data_file:
    data_addresses = set(names.rstrip() for names in data_file)
with open('names_address.txt', 'w') as output:
    names_address = "\n".join(input_addresses.intersection(data_addresses))
    output.write(names_address)
</code></pre>
<p>In summary, what I want to see in my outfile (names_address.txt) are the names PLUS the addresses corresponding to their names, which is basically the next line. I just started playing with python a month ago and I believe I am stuck. Thanks for the help.</p>
| 0 |
2016-09-13T17:49:33Z
| 39,476,299 |
<p>Rewrite this:</p>
<pre><code>with open('datafile.txt', 'r') as data_file:
    data_addresses = set(names.rstrip() for names in data_file)
</code></pre>
<p>To this:</p>
<pre><code>with open('datafile.txt', 'r') as data_file:
    data = data_file.readlines()
    data_addresses = list(filter(None, [line for line in data if not line[0].isdigit()]))
</code></pre>
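<p>Since the data file strictly alternates name line / address line, another approach is to pair the lines up into a dict first and then look up the queried names. A sketch on inline strings standing in for the real files:</p>

```python
# Stand-in for datafile.txt: names on even lines, addresses on odd lines.
data = """Gina Cooper
112 Blahblah St., NY
Leigh Walsh
09D blablah, Blah
Asthon Smith
another address here"""

# Stand-in for queryfile.txt.
queries = ["Gina Cooper", "Asthon Smith"]

lines = [l.strip() for l in data.splitlines() if l.strip()]
addresses = dict(zip(lines[0::2], lines[1::2]))  # pair each name with the next line

matched = {name: addresses[name] for name in queries if name in addresses}
print(matched)
```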
| 0 |
2016-09-13T17:59:47Z
|
[
"python"
] |
search a file for text using input from another file with a twist [Python]
| 39,476,150 |
<p>I want to use a queryfile.txt as the source file, which will be used for searching and matching each line to a datafile.txt. But the datafile.txt has a different structure.</p>
<p>queryfile.txt should look like this:</p>
<pre><code>Gina Cooper
Asthon Smith
Kim Lee
</code></pre>
<p>while the datafile.txt looks like this:</p>
<pre><code>Gina Cooper
112 Blahblah St., NY
Leigh Walsh
09D blablah, Blah
Asthon Smith
another address here
Kim Lee
another address here
</code></pre>
<p>I need to get the names AND the line after it. Here's the code to get matching names in both files, which is a modified code from dstromberg (<a href="http://stackoverflow.com/a/19934477">http://stackoverflow.com/a/19934477</a>):</p>
<pre><code>with open('querfile.txt', 'r') as input_file:
    input_addresses = set(names.rstrip() for names in input_file)
with open('datafile.txt', 'r') as data_file:
    data_addresses = set(names.rstrip() for names in data_file)
with open('names_address.txt', 'w') as output:
    names_address = "\n".join(input_addresses.intersection(data_addresses))
    output.write(names_address)
</code></pre>
<p>In summary, what I want to see in my outfile (names_address.txt) are the names PLUS the addresses corresponding to their names, which is basically the next line. I just started playing with python a month ago and I believe I am stuck. Thanks for the help.</p>
| 0 |
2016-09-13T17:49:33Z
| 39,476,306 |
<p>Loop through the options instead and then you can just grab the next index:</p>
<pre><code>for i in range(len(data_addresses)):
    for entry in input_addresses:
        if entry == data_addresses[i]:
            output.write(data_addresses[i] + data_addresses[i + 1])
</code></pre>
<p>This might not have great time complexity, but your data set appears small enough that it shouldn't matter.</p>
| 0 |
2016-09-13T18:00:02Z
|
[
"python"
] |
Django upload image to cdn using an API
| 39,476,243 |
<p>I am building a website in Django on Pythonanywhere.com and I am using Backblaze's B2 cloud storage to store all the static and media files. My css and images that I have uploaded to Backblaze are working but I can't figure out how everything should fit together, so here is where I am (minimalized):</p>
<p>In this model I want to store a thumbnail image, I have a form working to upload it.</p>
<pre><code>class Post(models.Model):
    (...)
    thumbnail = models.ImageField(null=True, blank=True, width_field="width_field", height_field="height_field")
    (...)
</code></pre>
<p>Backblaze has all the code for the http requests and responses that I need, so I just pasted that in a seperate file. I first need to get an account authorization token, followed by an upload url and then I can send the file.</p>
<p>So that whole function needs three things as input: the file data, file name and file size. As output I get the file id (and other things, details <a href="https://www.backblaze.com/b2/docs/b2_upload_file.html" rel="nofollow">here</a>).</p>
<p>Now I wonder where I need to call that upload function; I assume it has to do with the "upload_to" parameter in the ImageField. And I wonder what actually gets stored in the ImageField, since I don't tell ImageField the location where to find the file. Does it use the media root in the settings file? How would I manage this?</p>
| 0 |
2016-09-13T17:56:15Z
| 39,476,527 |
<p>With these settings you can set <code>MEDIA_URL</code> as well as where uploaded images are stored. Here images are stored in <code>media_cdn</code>:</p>
<pre><code>MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "media_cdn")
</code></pre>
<p>The <code>upload_to</code> argument tells Django the subdirectory inside <code>MEDIA_ROOT</code> where the image should be saved; if <code>upload_to</code> is not set, the file is saved directly inside <code>MEDIA_ROOT</code>.</p>
<pre><code>models.ImageField(upload_to='profiles/')
</code></pre>
<p>The <code>ImageField</code> itself stores only the path to the uploaded file (relative to <code>MEDIA_ROOT</code>), not the file contents.</p>
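<p>As for <em>where</em> to call the Backblaze upload function: the usual hook is a custom storage backend whose <code>_save()</code> Django calls when the model is saved. A skeleton (hedged: <code>b2_upload()</code> and the download URL are hypothetical stand-ins for the B2 HTTP calls in your separate file, and a real backend subclasses <code>django.core.files.storage.Storage</code>):</p>

```python
class B2Storage:  # real code: class B2Storage(Storage)
    def _save(self, name, content):
        # Django's ImageField calls this on model save -- this is the place
        # for the authorize-account / get-upload-url / upload sequence.
        # b2_upload(name, content)  # hypothetical helper wrapping the B2 calls
        return name  # the relative path that ends up stored in the DB column

    def url(self, name):
        # hypothetical public bucket URL -- adjust to your own bucket
        return 'https://f000.backblazeb2.com/file/my-bucket/' + name

storage = B2Storage()
print(storage.url('thumbs/pic.jpg'))
```

<p>With a real <code>Storage</code> subclass you would point <code>DEFAULT_FILE_STORAGE</code> (or the field's <code>storage=</code> argument) at it; the <code>ImageField</code> then stores only the name returned by <code>_save()</code>.</p>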
| 0 |
2016-09-13T18:14:15Z
|
[
"python",
"django",
"file-upload",
"cdn"
] |
picking a file and reading the words from it python
| 39,476,255 |
<p>I need help with this, I'm a total beginner at python. my assignment is to create a program that has the user pick a category, then scramble words from a file that are in that category. I just want to figure out why this first part isn't working, the first part being the first of four different methods that run depending on which category the user picks. </p>
<pre><code>print ("Instructions: Enter your chosen category, animals, places, names or colors.")
viewYourFile = input("Enter your category")
category = 'animals'
if category == 'animals':
    animals = open('animals.txt')
    next = animals.read(1)
    while next != "":
        animal1 = animals.read(1)
        animal2 = animals.read(2)
        animal3 = animals.read(3)
        animal4 = animals.read(4)
        animal5 = animals.read(5)
        animalList = ['animal1', 'animal2', 'animal3', 'animal4', 'animal5']
        chosenAnimal = random.choice(animalList)
        animalLetters = list(chosenAnimal)
        random.shuffle(animalLetters)
        scrambledAnimal = ' '.join(animalLetters)
        print(scrambledAnimal)
        print("Enter the correct spelling of the word")
</code></pre>
| -3 |
2016-09-13T17:57:17Z
| 39,476,342 |
<p>The first problem is that you're reading only 1-5 letters from the file.
Please read the <a href="https://docs.python.org/2/tutorial/inputoutput.html" rel="nofollow">documentation</a> on how the <strong>read</strong> function works. The number passed to it is how many bytes you want to read.</p>
<p>You may want a simpler solution, such as reading the entire file and splitting it into words. This would look something like:</p>
<pre><code>file_contents = animals.read()
animalList = file_contents.split()
</code></pre>
<p>If <strong>split</strong> is new to you, then <a href="https://docs.python.org/2/library/string.html" rel="nofollow">look up</a> that method as well.</p>
<p>The next problem is that you've set your animal list to literal strings, rather than the input values you read. I think you want the line to read:</p>
<pre><code>animalList = [animal1, animal2, animal3, animal4, animal5]
</code></pre>
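<p>Putting those pieces together, a minimal sketch of the whole round trip (word list inlined instead of <code>animals.txt</code>, and seeded so the run is repeatable):</p>

```python
import random

random.seed(1)  # fixed seed only so the example is deterministic
animal_list = "cat dog horse zebra lion".split()  # stands in for animals.txt

chosen = random.choice(animal_list)
letters = list(chosen)
random.shuffle(letters)
scrambled = ''.join(letters)  # ''.join keeps it one word; ' '.join spaces the letters out
print(scrambled)

# checking an answer: compare against the original word, not the scramble
def is_correct(guess):
    return guess == chosen
```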
| 2 |
2016-09-13T18:02:01Z
|
[
"python"
] |
work around for former CStringIO and String IO function in Python 3 Pdfinterp (Pdfminer)
| 39,476,291 |
<p>I am using the pdfminer tool to convert pdf to .csv (text), and one of the modules in the tool, <code>pdfinterp.py</code>, still uses <code>cStringIO</code>/<code>StringIO</code> for in-memory string IO -</p>
<pre><code>import re
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO
</code></pre>
<p>I am using Python 3 so I am aware of the need to change to io and io.StringIO. </p>
<p>How exactly should the above block be re-worded in <code>pdfinterp</code> to make it functional in Python 3?</p>
| 0 |
2016-09-13T17:59:14Z
| 39,476,413 |
<p>You could extend your import block to make it compatible with all versions (Python 2.x or 3.x). It is ugly because of all the try/except blocks, but it works:</p>
<pre><code>try:
    from cStringIO import StringIO
except ImportError:
    try:
        from StringIO import StringIO
    except ImportError:
        from io import StringIO
</code></pre>
<p>or (slightly better)</p>
<pre><code>import sys

if sys.version_info < (3,):
    try:
        from cStringIO import StringIO
    except ImportError:
        from StringIO import StringIO
else:
    from io import StringIO
</code></pre>
<p>Be aware that Python 3 also has <code>BytesIO</code>, because binary data and text data are distinct types now. So if <code>StringIO</code> is used to pass binary data, it will fail.</p>
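<p>That text/bytes split is easy to demonstrate:</p>

```python
from io import StringIO, BytesIO

s = StringIO()
s.write('text data')      # str in, str out

b = BytesIO()
b.write(b'binary data')   # bytes only

# writing str into a BytesIO raises TypeError in Python 3
try:
    b.write('oops')
    mixed_ok = True
except TypeError:
    mixed_ok = False

print(s.getvalue(), b.getvalue(), mixed_ok)
```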
| 0 |
2016-09-13T18:06:20Z
|
[
"python",
"c-strings",
"pdfminer"
] |
Why "class Meta" is necessary while creating a model form?
| 39,476,334 |
<pre><code>from django import forms
from .models import NewsSignUp

class NewsSignUpForm(forms.ModelForm):
    class Meta:
        model = NewsSignUp
        fields = ['email', 'first_name']
</code></pre>
<p>This code works perfectly fine. But, when I remove <strong>"class Meta:"</strong> as below, it throws a ValueError saying "<em>ModelForm has no model class specified.</em>"</p>
<pre><code>from django import forms
from .models import NewsSignUp

class NewsSignUpForm(forms.ModelForm):
    model = NewsSignUp
    fields = ['email', 'first_name']
</code></pre>
<p>Can someone please give an explanation? :(</p>
| 0 |
2016-09-13T18:01:21Z
| 39,476,404 |
<p>You are creating a <em><code>ModelForm</code></em> subclass. A model form <strong>has</strong> to have a model to work from, and the <code>Meta</code> object configures this.</p>
<p>Configuration like this is grouped into the <code>Meta</code> class to avoid name clashes; that way you can have a <code>model</code> <em>field</em> in your form without that interfering with the configuration. In other words, by using <code>class Meta:</code> you get a nested namespace used <em>just</em> to configure the <code>ModelForm</code> in relation to the model.</p>
<p>The namespace for the <code>ModelForm</code> class body itself then (outside <code>Meta</code>) is reserved for the form fields themselves, as well as form methods. You'd normally just let <code>ModelForm</code> generate those fields from your model, but you can, in principle, <em>add</em> fields to this. Another reason to put fields in the class is to completely replace any of the generated fields with your own version.</p>
<p>From the <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/" rel="nofollow"><em>Model Forms</em> documentation</a>:</p>
<blockquote>
<p><code>ModelForm</code> is a regular <code>Form</code> which can automatically generate certain fields. The fields that are automatically generated depend on the content of the <code>Meta</code> class and on which fields have already been defined declaratively. Basically, <code>ModelForm</code> will only generate fields that are missing from the form, or in other words, fields that werenât defined declaratively.</p>
<p>Fields defined declaratively are left as-is, therefore any customizations made to <code>Meta</code> attributes such as <code>widgets</code>, <code>labels</code>, <code>help_texts</code>, or <code>error_messages</code> are ignored; these only apply to fields that are generated automatically.</p>
</blockquote>
<p>Because <code>ModelForm</code> expects the configuration to be set under the <code>Meta</code> name, you can't just remove that and put <code>model</code> and <code>fields</code> in the <code>ModelForm</code> class itself; that's just the wrong place.</p>
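<p>The name-clash point can be illustrated with plain classes (hypothetical attribute values, not the real Django machinery): an attribute named <code>model</code> on the class body and one on the nested <code>Meta</code> live in separate namespaces and never collide.</p>

```python
# Pure-Python sketch of why nested configuration avoids name clashes.
class Form:
    model = 'a regular field named "model"'   # form-level attribute

    class Meta:                               # nested configuration namespace
        model = 'the model the form is built from'

print(Form.model, '|', Form.Meta.model)
```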
| 3 |
2016-09-13T18:05:52Z
|
[
"python",
"django",
"django-models",
"django-forms"
] |
scikit-learn decision tree node depth
| 39,476,414 |
<p>My goal is to identify at what depth two samples separate within a decision tree. In the development version of scikit-learn you can use the <code>decision_path()</code> method to identify to last common node:</p>
<pre><code>from sklearn import tree
import numpy as np
clf = tree.DecisionTreeClassifier()
clf.fit(data, outcomes)
n_nodes = clf.tree_.node_count
node_indicator = clf.decision_path(data).toarray()
sample_ids = [0,1]
common_nodes = (node_indicator[sample_ids].sum(axis=0) == len(sample_ids))
common_node_id = np.arange(n_nodes)[common_nodes]
max_node = np.max(common_node_id)
</code></pre>
<p>Is there a way to determine at what depth the <code>max_node</code> occurs within the tree, possibly with <code>clf.tree_.children_right</code> and <code>clf.tree_.chrildren_left</code>?</p>
| 3 |
2016-09-13T18:06:22Z
| 39,501,867 |
<p>Here is a function that you could use to recursively traverse the nodes and calculate the node depths</p>
<pre><code>import numpy as np

def get_node_depths(tree):
    """
    Get the node depths of the decision tree

    >>> d = DecisionTreeClassifier()
    >>> d.fit([[1,2,3],[4,5,6],[7,8,9]], [1,2,3])
    >>> get_node_depths(d.tree_)
    array([0, 1, 1, 2, 2])
    """
    def get_node_depths_(current_node, current_depth, l, r, depths):
        depths += [current_depth]
        if l[current_node] != -1 and r[current_node] != -1:
            get_node_depths_(l[current_node], current_depth + 1, l, r, depths)
            get_node_depths_(r[current_node], current_depth + 1, l, r, depths)

    depths = []
    get_node_depths_(0, 0, tree.children_left, tree.children_right, depths)
    return np.array(depths)
</code></pre>
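<p>Tying this back to the question: once you have a per-node depth array, index it with <code>max_node</code>. A self-contained sketch using an iterative version of the same traversal, with iris as a stand-in dataset:</p>

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# iterative depth computation over the fitted tree
t = clf.tree_
depths = np.zeros(t.node_count, dtype=int)
stack = [(0, 0)]                      # (node id, depth), starting at the root
while stack:
    node, depth = stack.pop()
    depths[node] = depth
    if t.children_left[node] != -1:   # -1 marks a leaf in both child arrays
        stack.append((t.children_left[node], depth + 1))
        stack.append((t.children_right[node], depth + 1))

# depth of the last common node of samples 0 and 1, as in the question
node_indicator = clf.decision_path(iris.data[:2]).toarray()
common = node_indicator.sum(axis=0) == 2
max_node = np.arange(t.node_count)[common].max()
print(depths[max_node])
```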
| 1 |
2016-09-15T01:18:54Z
|
[
"python",
"scikit-learn",
"decision-tree"
] |
Find all occurences of a specified match of two numbers in numpy array
| 39,476,430 |
<p>What I need to achieve is to get an array of all indexes where, in my data array filled with zeros and ones, there is a step from zero to one. I need a very quick solution, because I have to work with millions of arrays of hundreds of millions in length. It will be running in a computing centre. For instance:</p>
<pre><code>data_array = np.array([1,1,0,1,1,1,0,0,0,1,1,1,0,1,1,0])
result = [3,9,13]
</code></pre>
| 2 |
2016-09-13T18:07:40Z
| 39,476,537 |
<p>try this:</p>
<pre><code>In [23]: np.where(np.diff(a)==1)[0] + 1
Out[23]: array([ 3, 9, 13], dtype=int64)
</code></pre>
<p>Timing for 100M element array:</p>
<pre><code>In [46]: a = np.random.choice([0,1], 10**8)
In [47]: %timeit np.nonzero((a[1:] - a[:-1]) == 1)[0] + 1
1 loop, best of 3: 1.46 s per loop
In [48]: %timeit np.where(np.diff(a)==1)[0] + 1
1 loop, best of 3: 1.64 s per loop
</code></pre>
| 3 |
2016-09-13T18:14:49Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Find all occurences of a specified match of two numbers in numpy array
| 39,476,430 |
<p>What I need to achieve is to get an array of all indexes where, in my data array filled with zeros and ones, there is a step from zero to one. I need a very quick solution, because I have to work with millions of arrays of hundreds of millions in length. It will be running in a computing centre. For instance:</p>
<pre><code>data_array = np.array([1,1,0,1,1,1,0,0,0,1,1,1,0,1,1,0])
result = [3,9,13]
</code></pre>
| 2 |
2016-09-13T18:07:40Z
| 39,476,555 |
<p>Here's the procedure:</p>
<ol>
<li>Compute the diff of the array</li>
<li>Find the index where the diff == 1</li>
<li>Add 1 to the results (b/c <code>len(diff) = len(orig) - 1</code>)</li>
</ol>
<p>So try this:</p>
<pre><code>index = numpy.nonzero((data_array[1:] - data_array[:-1]) == 1)[0] + 1
index
# [3, 9, 13]
</code></pre>
| 1 |
2016-09-13T18:15:55Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Find all occurences of a specified match of two numbers in numpy array
| 39,476,430 |
<p>What I need to achieve is to get an array of all indexes where, in my data array filled with zeros and ones, there is a step from zero to one. I need a very quick solution, because I have to work with millions of arrays of hundreds of millions in length. It will be running in a computing centre. For instance:</p>
<pre><code>data_array = np.array([1,1,0,1,1,1,0,0,0,1,1,1,0,1,1,0])
result = [3,9,13]
</code></pre>
| 2 |
2016-09-13T18:07:40Z
| 39,478,145 |
<p>Well, thanks a lot to all of you. The solution with <code>nonzero</code> is probably better for me, because I need to know the steps from 0->1 and also 1->0 and finally calculate differences. So this is my solution; any other advice appreciated. :)</p>
<pre><code>i_in = np.nonzero( (data_array[1:] - data_array[:-1]) == 1 )[0] +1
i_out = np.nonzero( (data_array[1:] - data_array[:-1]) == -1 )[0] +1
i_return_in_time = (i_in - i_out[:i_in.size] )
</code></pre>
| 0 |
2016-09-13T20:03:10Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Find all occurences of a specified match of two numbers in numpy array
| 39,476,430 |
<p>What I need to achieve is to get an array of all indexes where, in my data array filled with zeros and ones, there is a step from zero to one. I need a very quick solution, because I have to work with millions of arrays of hundreds of millions in length. It will be running in a computing centre. For instance:</p>
<pre><code>data_array = np.array([1,1,0,1,1,1,0,0,0,1,1,1,0,1,1,0])
result = [3,9,13]
</code></pre>
| 2 |
2016-09-13T18:07:40Z
| 39,479,474 |
<p>Since it's an array filled with <code>0s</code> and <code>1s</code>, you can benefit from simply comparing the one-shifted versions rather than performing an arithmetic operation between them. This directly gives us the boolean array, which can be fed to <code>np.flatnonzero</code> to get the indices and the final output. </p>
<p>Thus, we would have an implementation like so -</p>
<pre><code>np.flatnonzero(data_array[1:] > data_array[:-1])+1
</code></pre>
<p>Runtime test -</p>
<pre><code>In [26]: a = np.random.choice([0,1], 10**8)
In [27]: %timeit np.nonzero((a[1:] - a[:-1]) == 1)[0] + 1
1 loop, best of 3: 1.91 s per loop
In [28]: %timeit np.where(np.diff(a)==1)[0] + 1
1 loop, best of 3: 1.91 s per loop
In [29]: %timeit np.flatnonzero(a[1:] > a[:-1])+1
1 loop, best of 3: 954 ms per loop
</code></pre>
| 0 |
2016-09-13T21:45:05Z
|
[
"python",
"arrays",
"performance",
"numpy"
] |
Add context to every Django Admin page
| 39,476,439 |
<p>How do I add extra context to all admin webpages?</p>
<p>I use default Django Admin for my admin part of a site.</p>
<p>Here is an url entry for admin:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
]
</code></pre>
<p>And my apps register their standard view models using:</p>
<pre><code>admin.site.register(Tag, TagAdmin)
</code></pre>
<p>My problem, is that I want to display an extra field in admin template header bar and I have no idea how to add this extra context.</p>
<p>My first bet was adding it in url patterns like below:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls, {'mycontext': '123'}),
]
</code></pre>
<p>But that gives an error:</p>
<pre><code>TypeError at /admin/tickets/event/4/change/
change_view() got an unexpected keyword argument 'mycontext'
</code></pre>
<p>Can you give any suggestion? I really do not want to modify every <code>ModelAdmin</code> class I have to insert this context, as I need it on every admin page.</p>
<p>Thanks.</p>
| 1 |
2016-09-13T18:08:17Z
| 39,476,707 |
<p>Found the solution, url registration has to be:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls, {'extra_context': {'mycontext': '123'}}),
]
</code></pre>
<p>It's a context dictionary inside a dictionary with <code>'extra_context'</code> as the key.</p>
| 1 |
2016-09-13T18:25:14Z
|
[
"python",
"django",
"django-admin",
"django-views"
] |
Authorization error when parsing data from router
| 39,476,591 |
<p>I want to scrape data from my router for some home automation, but I'm facing some trouble I cannot solve/crack.</p>
<p>I have managed to successfully log into the router, but when accessing the data with the Python script (opening links in the router web interface) I receive an error saying: You have no authority to access this router!</p>
<p>If I manually copy and paste the URL that the Python script is accessing into a browser (with cookies set), the response is the same. But if I click the buttons inside the router web interface I get no "authority" complaints. Any ideas how to fix this?</p>
<p>here is the script:</p>
<pre><code>import re
import mechanize
import cookielib
br = mechanize.Browser()
cookies = cookielib.LWPCookieJar()
br.set_cookiejar(cookies)
#they "encrypt" the username and password and store it into the cookie. I stole this value from javascript in runtime.
br.addheaders = [('Cookie','Authorization=Basic YWRtaW46MjEyMzJmMjk3YTU3YTVhNzQzODk0YTBlNGE4MDFmYzM=;')]
#open connection to the router address
br.open('http://192.168.1.1/')
#the only form is "login" form (which we dont have to fill up, because we already have the cookie)
br.select_form(nr=0)
br.form.enctype = "application/x-www-form-urlencoded"
br.submit()
#then the router returns redirect script, so we have to parse it (get the url).
redirect_url = re.search('(http:\/\/[^"]+)',br.response().read()).group(1)
token = re.search("1\/([A-Z]+)\/",redirect_url).group(1) #url always has a random token inside (some kind of security?)
#So with this url I should be able to navigate to page containing list of DHCP clients
br.open("http://192.168.1.1/"+token+"/userRpm/AssignedIpAddrListRpm.htm")
print(br.response().read()) #But response contains html saying "You have no authority to access this router!".
</code></pre>
| 3 |
2016-09-13T18:18:29Z
| 39,477,096 |
<p>I have solved the issue by adding this:</p>
<pre><code>br.addheaders.append(
('Referer', "http://192.168.1.1/userRpm/LoginRpm.htm?Save=Save")
)
</code></pre>
<p>Reason:</p>
<p>Searching for the message on the web, I navigated to a forum where users of an old Firefox version complained about the same warning. The fix was to enable the referrer to be sent, so I did the same in the script and it worked.</p>
| 1 |
2016-09-13T18:52:45Z
|
[
"python",
"mechanize",
"router"
] |
print jsonfile key that it's value is selected by input
| 39,476,609 |
<p>I have the following code and problem; any ideas would help. Thanks...</p>
<p><strong>Nouns.json :</strong></p>
<pre><code>{
"hello":["hi","hello"],
"beautiful":["pretty","lovely","handsome","attractive","gorgeous","beautiful"],
"brave":["fearless","daring","valiant","valorous","brave"],
"big":["huge","large","big"]
}
</code></pre>
<p><strong>Python file:</strong> this code finds word synonyms from the JSON file and prints them</p>
<pre><code>import random
import json
def allnouns(xinput):
data = json.load(open('Nouns.json'))
h = ''
items = []
T = False
for k in data.keys():
if k == xinput:
T = True
if T == True:
for item in data[xinput]:
items.append(item)
h = random.choice(items)
else:
for k in data.keys():
d = list(data.values())
ost = " ".join(' '.join(x) for x in d)
if xinput in ost:
j = ost.split()
for item in j:
if item == xinput :
h = k
break
else:
pass
else :
h = xinput
print(h)
xinput = input(' input : ')
allnouns(xinput)
</code></pre>
<p><strong>Example:</strong></p>
<pre><code>example for input = one of keys :
>> xinput = 'hello' >> hello
>> 'hi' or 'hello' >> hi
example for input = one of values :
>> xinput = 'pretty' >> pretty
>> it should print it's key (beautiful) but it prints the first key (hello) >> hello
</code></pre>
<p>The problem is the last line of the example.</p>
<p>Any ideas how to fix it?</p>
| 1 |
2016-09-13T18:19:13Z
| 39,476,888 |
<p>This looks massively over-complicated. Why not do something like:</p>
<pre><code>import json
def allnouns(xinput):
nouns = json.load(open('Nouns.json'))
for key, synonyms in nouns.items():
if xinput in synonyms:
print(key)
            return
xinput = input(' input : ')
allnouns(xinput)
</code></pre>
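<p>To also keep the original fallback behaviour (printing the input itself when no synonym matches, and a random synonym when the input is a key), the same loop can be extended — a sketch using an in-memory dict instead of the file so it is self-contained:</p>

```python
import random

def allnouns(xinput, nouns):
    for key, synonyms in nouns.items():
        if xinput in synonyms:
            # if the input is the key itself, pick a random synonym;
            # otherwise return the key the synonym belongs to
            return random.choice(synonyms) if xinput == key else key
    return xinput  # no match: fall back to the input itself

nouns = {
    "hello": ["hi", "hello"],
    "beautiful": ["pretty", "lovely", "handsome", "attractive", "gorgeous", "beautiful"],
}
print(allnouns("pretty", nouns))  # beautiful
print(allnouns("dog", nouns))     # dog
```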
| 2 |
2016-09-13T18:36:53Z
|
[
"python",
"json",
"python-3.x"
] |
Remove first characters from a list
| 39,476,617 |
<p>I'm working on a program to have a user input what they want for dinner and then output a shopping list. Currently the user can enter which meals they want and it will print out the list sorted in order from produce, meat, and other. </p>
<p>I want the program to output the ingredients without the category number in front, and with a line break after each entry, but I've had some problems dealing with lists instead of strings. So far I've tried a regex to replace numbers with nothing, and substituting the numbers with spaces. </p>
<p>Bonus points if someone knows a way to enter in nachos twice and print out 2xchicken instead of chicken twice. </p>
<pre><code>strog = ["3 egg noddles", "3 beef broth", "2 steak"]
c_soup = ["2 bone in chicken", "1 carrots", "1 celery", "1 onion", "1 parsley"]
t_soup = ["3 tomato saucex2", "3 tomato paste", "1 celery"]
nachos = ["3 chips", "3 salsa", "3 black olives", "2 chicken", "3 cheese"]
grocery = []
done = []
food = ""
while food != done:
food = input("Please enter what you would like to eat. Enter done when finished: ")
grocery += (food)
grocery.sort()
print(grocery)
</code></pre>
| -2 |
2016-09-13T18:20:12Z
| 39,476,758 |
<p>Your code has some other issues, but you can use (for example) <code>strog[0][2:]</code> to get one item in strog or the entire list by doing <code>new_strog = [x[2:] for x in strog]</code> to parse out the category number and space.</p>
| 0 |
2016-09-13T18:28:44Z
|
[
"python"
] |
Remove first characters from a list
| 39,476,617 |
<p>I'm working on a program to have a user input what they want for dinner and then output a shopping list. Currently the user can enter which meals they want and it will print out the list sorted in order from produce, meat, and other. </p>
<p>I want the program to output the ingredients without the category number in front, and with a line break after each entry, but I've had some problems dealing with lists instead of strings. So far I've tried a regex to replace numbers with nothing, and substituting the numbers with spaces. </p>
<p>Bonus points if someone knows a way to enter in nachos twice and print out 2xchicken instead of chicken twice. </p>
<pre><code>strog = ["3 egg noddles", "3 beef broth", "2 steak"]
c_soup = ["2 bone in chicken", "1 carrots", "1 celery", "1 onion", "1 parsley"]
t_soup = ["3 tomato saucex2", "3 tomato paste", "1 celery"]
nachos = ["3 chips", "3 salsa", "3 black olives", "2 chicken", "3 cheese"]
grocery = []
done = []
food = ""
while food != done:
food = input("Please enter what you would like to eat. Enter done when finished: ")
grocery += (food)
grocery.sort()
print(grocery)
</code></pre>
| -2 |
2016-09-13T18:20:12Z
| 39,476,768 |
<p>Seems like you just need to learn normal python <a href="https://developers.google.com/edu/python/strings" rel="nofollow">string manipulation</a> and list<->string conversion (<code>split</code> and <code>join</code>).</p>
<p>Try something like this:</p>
<pre><code>strog = ["3 egg noddles", "3 beef broth", "2 steak"]
for ingredient in strog:
print(" ".join(ingredient.split()[1:]))
</code></pre>
<p>Or un-uglified:</p>
<pre><code>strog = ["3 egg noddles", "3 beef broth", "2 steak"]
for ingredient in strog:
pieces_list = ingredient.split()
food_list = pieces_list[1:]
ingredient_without_number = " ".join(food_list)
print(ingredient_without_number)
</code></pre>
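<p>For the bonus question about printing <code>2xchicken</code> instead of chicken twice, <code>collections.Counter</code> over the stripped ingredient names is one option — a sketch with a hypothetical grocery list standing in for the user's input:</p>

```python
from collections import Counter

grocery = ["3 chips", "3 salsa", "2 chicken", "3 cheese", "2 chicken"]
# strip the leading category number, then count duplicates
counts = Counter(" ".join(item.split()[1:]) for item in grocery)
lines = [food if n == 1 else "%dx%s" % (n, food) for food, n in counts.items()]
print("\n".join(lines))  # chicken is printed once, as "2xchicken"
```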
| 0 |
2016-09-13T18:29:15Z
|
[
"python"
] |
Http requests freezes after severel requests
| 39,476,633 |
<p>Okay, here is my code:</p>
<pre><code>from lxml import html
from lxml import etree
from selenium import webdriver
import calendar
import math
import urllib
import progressbar
import requests
</code></pre>
<p><em>Using selenium</em></p>
<pre><code>path_to_driver = '/home/vladislav/Shit/geckodriver'
browser = webdriver.Firefox(executable_path = path_to_driver)
</code></pre>
<p><em>Create a dict, where i store data and create progressbars</em></p>
<pre><code>DataDict = {}
barY = progressbar.ProgressBar(max_value=progressbar.UnknownLength)
barM = progressbar.ProgressBar(max_value=progressbar.UnknownLength)
barW = progressbar.ProgressBar(max_value=progressbar.UnknownLength)
</code></pre>
<p><em>Forming parameters in a loop, constructing a url from them and send a <code>browser.get</code> request</em></p>
<pre><code>for year in (range(2014,2016)):
barY.update(year)
for month in range(1,13):
barM.update(month)
weeks = math.ceil(calendar.monthrange(year,month)[1]/4)
for week in range(weeks):
barW.update(week)
if (week > 2):
start_day = 22
end_day = calendar.monthrange(year,month)[1]
else:
start_day =7*week + 1
end_day = 7*(week + 1)
start_date = str(year) + '-' + str(month).zfill(2) +'-' + str(start_day).zfill(2)
end_date = str(year) + '-' +str(month).zfill(2) + '-' + str(end_day).zfill(2)
params = {'end-date': end_date, 'start-date': start_date}
url = 'http://www.finam.ru/profile/moex-akcii/aeroflot/news'
url = url + ('&' if urllib.parse.urlparse(url).query else '?') + urllib.parse.urlencode(params)
</code></pre>
<h1>The request itself</h1>
<pre><code> browser.get(url)
try:
news_list = browser.find_element_by_class_name('news-list')
news_list_text = news_list.text
news_list_text = news_list_text.split('\n')
for i in range(int(len(news_list_text)/2)):
DataDict.update({news_list_text[2*i]:news_list_text[2*i+1]})
print("Found! Adding news to the dictionary!")
except:
pass
</code></pre>
<h1>But after 2-4 requests it just freezes:(</h1>
<p>What's the problem?
<a href="http://i.stack.imgur.com/F23eu.png" rel="nofollow"><img src="http://i.stack.imgur.com/F23eu.png" alt="enter image description here"></a></p>
| 0 |
2016-09-13T18:21:16Z
| 39,478,718 |
<p>Okay, the problem was an advertising banner which appeared after several requests. The solution is just to wait (<code>time.sleep</code>) until the banner disappears, and then send the request again:</p>
<pre><code> try:
browser.get(url)
try:
news_list = browser.find_element_by_class_name('news-list')
news_list_text = news_list.text
news_list_text = news_list_text.split('\n')
for i in range(int(len(news_list_text)/2)):
DataDict.update({news_list_text[2*i]:news_list_text[2*i+1]})
#print("Found! Adding news to the dictionary!")
except:
pass
time.sleep(10)
except:
print("perchaps this shitty AD?")
try:
news_list = browser.find_element_by_class_name('news-list')
news_list_text = news_list.text
news_list_text = news_list_text.split('\n')
for i in range(int(len(news_list_text)/2)):
DataDict.update({news_list_text[2*i]:news_list_text[2*i+1]})
#print("Found! Adding news to the dictionary!")
except:
pass
</code></pre>
| 0 |
2016-09-13T20:46:00Z
|
[
"python",
"html",
"selenium"
] |
Is it possible for django-rest-framework view to be called without a request object?
| 39,476,672 |
<p>I've inherited a Django code base using Django REST Framework that has many views that check for the existence of a <code>request</code> argument at the top, like this:</p>
<pre><code>class ExampleViewSet(viewsets.GenericViewSet):
def create(self, request):
if not request:
return Response(status=404)
</code></pre>
<p>This doesn't seem logical to me as I don't understand how the method can be called without a request object. I'm inclined to remove it since I haven't been able to find any documentation of this idiom. Is there some purpose I'm missing?</p>
| 0 |
2016-09-13T18:23:08Z
| 39,477,032 |
<p>They are required. This is how <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/#how-django-processes-a-request" rel="nofollow">Django processes the urls</a>.
You'll likely have an issue if you remove it, as other parts of the code may expect this argument.</p>
| 0 |
2016-09-13T18:47:51Z
|
[
"python",
"django",
"django-rest-framework"
] |
Is it possible for django-rest-framework view to be called without a request object?
| 39,476,672 |
<p>I've inherited a Django code base using Django REST Framework that has many views that check for the existence of a <code>request</code> argument at the top, like this:</p>
<pre><code>class ExampleViewSet(viewsets.GenericViewSet):
def create(self, request):
if not request:
return Response(status=404)
</code></pre>
<p>This doesn't seem logical to me as I don't understand how the method can be called without a request object. I'm inclined to remove it since I haven't been able to find any documentation of this idiom. Is there some purpose I'm missing?</p>
| 0 |
2016-09-13T18:23:08Z
| 39,477,362 |
<p>That particular if statement is indeed probably useless; you are right that the method can never be called without a request. The only exception would be if some other methods called this method directly, passing an empty or falsey value for the request parameter, but that does seem unlikely.</p>
| 1 |
2016-09-13T19:11:32Z
|
[
"python",
"django",
"django-rest-framework"
] |
Python returns nothing after recursive call
| 39,476,732 |
<p>I am working on a python script that calculates an individual's tax based on their income.</p>
<p>The system of taxation requires that people are taxed based on how rich they are or how much they earn.</p>
<p>The first <strong>1000</strong> is not taxed,<br>
The next <strong>9000</strong> is taxed 10%,<br>
The next <strong>10200</strong> is taxed 15%,<br>
The next <strong>10550</strong> is taxed 20%,<br>
The next <strong>19250</strong> is taxed 25%,<br>
Anything left after the above is taxed at 30%.</p>
<p>I have the code running and working and I am able to get the code working to follow the conditions above using recursion.</p>
<p>However, I have a problem returning the total_tax, which should be the return value of the function.</p>
<p>For example, an income of <strong>20500</strong> should be taxed <strong>2490.0</strong>.</p>
<p>Here is my code snippet below:</p>
<pre><code>def get_tax(income, current_level=0, total_tax=0,):
level = [0, 0.1, 0.15, 0.2, 0.25, 0.3]
amount = [1000, 9000, 10200, 10550, 19250, income]
if income > 0 and current_level <=5:
if income < amount[current_level]:
this_tax = ( income * level[current_level] )
income -= income
else:
this_tax = ( level[current_level] * amount[current_level] )
income -= amount[current_level]
current_level += 1
total_tax += this_tax
print total_tax
get_tax(income, current_level, total_tax)
else:
final = total_tax
return final
get_tax(20500)
</code></pre>
<p>As you can see from the snippet, it does not work when I put the return statement in an else block, I have also tried doing it without the else block but it still does not work.</p>
<p>Here is a link to the snippet on <a href="https://repl.it/D8si/1" rel="nofollow">Repl.it</a></p>
| -3 |
2016-09-13T18:26:50Z
| 39,476,752 |
<p>It's returning nothing because you're not <code>return</code>ing.</p>
<p><code>return get_tax(income, current_level, total_tax)</code>.</p>
<p>Now that it's returning something, you need to do something with the returned value. </p>
| 3 |
2016-09-13T18:28:27Z
|
[
"python",
"recursion",
"return-value"
] |
Python compiled file generated even though wrong syntax
| 39,476,828 |
<p>I wrote a small program 'test1.py'</p>
<pre><code>print abc
print 'the above is invalid'
</code></pre>
<p>Now i write a different python program 'test2.py'</p>
<pre><code>import test1
print 'this line will not get executed'
</code></pre>
<p>Q1: To my surprise, I can see that a test1.pyc file is successfully generated. Why?
Since test1.py contains an invalid statement in the first line, why is test1.pyc generated at all? What exactly does the compiler check (is it the syntax or something else)? I am getting confused. Please clarify.</p>
<p>Q2: I also read that the compiled Python file will be further interpreted. Is that true? </p>
<p>Q3: A compiler converts the program to machine code as a whole, which does not need to be further interpreted via an interpreter. Is that true? If so, then what about Question 2?</p>
<p>Q4: I also read that compiled code is closer to the machine. When we use an interpreter, it converts the code into intermediate code which needs to be further converted to machine code. Is that correct? So, compiled code is closer to the machine than interpreted code?</p>
| 3 |
2016-09-13T18:32:14Z
| 39,477,760 |
<h2>Q1:</h2>
<p>Neither file contains any syntax errors. The module test1 can therefore successfully be compiled into instructions the interpreter can read. The *compiling, however, has no code introspection that can determine ahead of time whether a variable is defined at any given point or not.</p>
<p>*I like to think of this conversion more as translation than compilation, as it largely does not alter the code structure at all, but rather translates it into instructions that are easier for the interpreter to read. Compilation implies that code inspection is taking place beyond simple syntax checking. (Google Translate will often give something grammatically correct, but it may or may not make any sense.)</p>
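<p>This is easy to demonstrate with <code>py_compile</code>: a file whose names are undefined still byte-compiles fine, while a genuine syntax error does not. (A sketch using Python 3 syntax; in Python 2 the original <code>print abc</code> behaves the same way.)</p>

```python
import os
import py_compile
import tempfile

tmp = tempfile.mkdtemp()

ok = os.path.join(tmp, "ok.py")
with open(ok, "w") as f:
    f.write("print(abc)\n")  # abc is undefined, but the syntax is valid
compiled = py_compile.compile(ok)  # succeeds: names are not checked

bad = os.path.join(tmp, "bad.py")
with open(bad, "w") as f:
    f.write("print 'missing parens'\n")  # a SyntaxError in Python 3
caught = False
try:
    py_compile.compile(bad, doraise=True)
except py_compile.PyCompileError:
    caught = True  # compilation fails only for real syntax errors
```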
<h2>Q2:</h2>
<p>The python interpreter understands what's called <a href="http://akaptur.com/blog/2013/11/17/introduction-to-the-python-interpreter-3/" rel="nofollow">bytecode</a>. It's functionally a program written in c that takes the *compiled code and executes it on the machine. For each variation of hardware you want to run your code on, a version of this program(specifically ceval.c) is compiled to work with that hardware (be it x86, arm, mips, etc...), and interprets the bytecode which is the same no matter what hardware you are running. This is what allows python (and many other interpreted languages) to be cross-platform</p>
<h2>Q3</h2>
<p>No, this is not true. *Compiled python code runs through the same interpreter normal code does. The benefit of *compiled python code is in the loading time of modules. Before any python code is executed it is converted into bytecode and then sent to the interpreter. With a script this is done each time, but when python imports a script as a module it saves a copy of the already parsed bytecode to save itself the trouble next time.</p>
<h2>Q4</h2>
<p>Your confusion here is likely due to the poor naming convention of python *compiled files. They are not truly compiled into machine instructions, so they must be executed by a program rather than on the hardware itself. True compilers are programs that translate and optimize code (c, c++, fortran etc..) and spit out actual binary hardware instructions as specified by the manufacturer. </p>
<p>I did my best to guess what you were confused about, but if you have any more questions feel free to ask..</p>
| 1 |
2016-09-13T19:37:47Z
|
[
"python",
"python-2.7",
"compilation"
] |
Return from canvas.get_group() call in kivy
| 39,476,837 |
<p>Calling <code>get_group()</code> from an instruction group yields back more than what I wanted.</p>
<p>I have the following code:</p>
<pre><code>for widget in self.selected:
dx, dy = (
widget.pos[0] - self.pos[0],
widget.pos[1] - self.pos[1]
)
self.shadows.add(Rectangle(size=widget.size, pos=widget.pos, group='my_shadows'))
self.canvas.add(self.shadows)
print self.shadows.get_group('my_shadows')
</code></pre>
<p>which in turn produces the following result:</p>
<pre><code><kivy.graphics.context_instructions.BindTexture object at 0x7ff992377050>
<kivy.graphics.vertex_instructions.Rectangle object at 0x7ff99493e638>
<kivy.graphics.context_instructions.BindTexture object at 0x7ff9923770e8>
<kivy.graphics.vertex_instructions.Rectangle object at 0x7ff99493e6e0>
</code></pre>
<p>What are BindTextures and why are they returned through <code>get_group()</code>? I expected only Rectangles.
If I intend to manipulate my Rectangles, do I need to do the same with my BindTextures?</p>
| 0 |
2016-09-13T18:32:38Z
| 39,480,009 |
<p>Maybe you've already noticed that with <code>Rectangle</code> you can set a background image of a Widget. That's what <a href="https://kivy.org/docs/api-kivy.graphics.html#kivy.graphics.BindTexture" rel="nofollow"><code>BindTexture</code></a> is for as it provides <code>source</code> parameter for a path to an image that can be used as a background.</p>
<p>If you don't intend to use those Rectangles as background images (from file, not drawing with <code>Color</code> + <code>Rectangle</code>), I think it is safe to ignore the textures.</p>
| 0 |
2016-09-13T22:37:24Z
|
[
"python",
"kivy",
"kivy-language"
] |
Python: Create a user and send email with account details to the user
| 39,476,840 |
<p>Here is a script I have written which will create a new user account. I am trying to get help in adding a bit more to it. </p>
<p>I want to have it also send an email to the new user that is created. Ideally, the program will ask the user creating the new account what the new user's email is, and then it will use the user and password variables to send an email to that new user so they will know how to log in. What would be the best way to do this? Thanks for any advice.</p>
<pre><code>#! /usr/bin/python
import commands, os, string
import sys
import fileinput
def print_menu(): ## Your menu design here
print 20 * "-" , "Perform Below Steps to Create a New TSM Account." , 20 * "-"
print "1. Create User Account"
print 67 * "-"
loop=True
while loop: ## While loop which will keep going until loop = False
print_menu() ## Displays menu
choice = input("Enter your choice [1-5]: ")
if choice==1:
user = raw_input("Enter the Username to be created: " )
password = raw_input( "Enter the password for the user: " )
SRnumber = raw_input( "Enter the Service Request Number: ")
user = user + " "
        output = os.system('create user ' + user)
        output = os.system('set password ' + password)
</code></pre>
| 0 |
2016-09-13T18:32:41Z
| 39,476,938 |
<p>You can easily send mails with Gmail and <code>smtplib</code> (part of Python's standard library). This way you can send any message you want. Note that the server object must be created and connected before calling <code>starttls()</code> and <code>login()</code>:</p>
<pre><code>import smtplib
toaddrs = raw_input('what is your e mail?')
fromaddr = 'youremail@email.com'
msg = 'the message you want to send'
server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)
server.starttls()
server.login(fromaddr, "your gmail password")
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
</code></pre>
<p>You will have to allow less secure apps in your gmail settings. </p>
| 1 |
2016-09-13T18:40:22Z
|
[
"python",
"linux",
"email"
] |
Use Flask current_app.logger inside threading
| 39,476,889 |
<p>I am using <code>current_app.logger</code>, and when I try to log from inside a thread it says "working outside of application context". How do I log a message from a method running in a thread?</p>
<pre><code>def background():
current_app.logger.debug('logged from thread')
@app.route('/')
def index():
Thread(target=background).start()
return 'Hello, World!'
</code></pre>
<pre><code>Exception in thread Thread-16:
Traceback (most recent call last):
File "/usr/lib64/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/home/sapam/demo.py", line 57, in background
current_app.logger.critical('test')
File "/home/sapam/.virtualenvs/demo/lib/python3.5/site-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/home/sapam/.virtualenvs/demo/lib/python3.5/site-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/home/sapam/.virtualenvs/demo/lib/python3.5/site-packages/flask/globals.py", line 51, in _find_app
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in a way. To solve
this set up an application context with app.app_context(). See the
documentation for more information.
127.0.0.1 - - [13/Sep/2016 12:28:24] "GET / HTTP/1.1" 200 -
</code></pre>
| 3 |
2016-09-13T18:36:54Z
| 39,477,756 |
<p>You use the standard <code>logging</code> module in the standard way: get the logger for the current module and log a message with it.</p>
<pre><code>def background():
logging.getLogger(__name__).debug('logged from thread')
</code></pre>
<hr>
<p><code>app.logger</code> is mostly meant for internal Flask logging, or at least logging within an app context. If you're in a thread, you're no longer in the same app context.</p>
<p>You can pass <code>current_app._get_current_object()</code> to the thread and use that instead of <code>current_app</code>. Or you can subclass <code>Thread</code> to do something similar.</p>
<pre><code>def background(app):
app.logger.debug('logged from thread')
@app.route('/')
def index():
Thread(target=background, kwargs={'app': current_app._get_current_object()}).start()
return 'Hello, World!'
</code></pre>
<pre><code>class FlaskThread(Thread):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.app = current_app._get_current_object()
def run(self):
with self.app.app_context():
super().run()
def background():
current_app.logger.debug('logged from thread')
@app.route('/')
def index():
FlaskThread(target=background).start()
return 'Hello, World!'
</code></pre>
| 4 |
2016-09-13T19:37:34Z
|
[
"python",
"multithreading",
"logging",
"flask"
] |
Variable as statement in python?
| 39,476,937 |
<p>I was reading some python code and came across this. Since I mostly write C and Java (and a variable as a statement doesn't even compile in those languages) I'm not sure what it is about in python.</p>
<p>What does <code>self.current</code>, the "variable as statement", means here? Is it just some way to print the variable out, or this is a special grammar thing / practice in dealing with exceptions in python?</p>
<pre><code>class PriorityQueue():
def __init__(self):
self.queue = []
self.current = 0
def next(self):
if self.current >=len(self.queue):
self.current
raise StopIteration
out = self.queue[self.current]
self.current += 1
return out
</code></pre>
| 3 |
2016-09-13T18:40:22Z
| 39,477,040 |
<p>It really doesn't do anything. The only way it can do anything in particular, as @Daniel said in the comments, is if <code>self.current</code> refers to a <a href="https://docs.python.org/2/library/functions.html#property" rel="nofollow">property method</a>. Something like the following:</p>
<pre><code>class X():
@property
def current(self):
mail_admins()
return whatever
def next(self):
...
</code></pre>
<p>This way, calling <code>self.current</code> would actually do something.</p>
<p>But anyway, it's definitely not considered good practice, since a property is just that, a property; if it's supposed to do something, it should be a method.</p>
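<p>A self-contained illustration of that property case (a class-level counter stands in here for whatever <code>mail_admins()</code> would do):</p>

```python
class X(object):
    side_effects = 0

    @property
    def current(self):
        X.side_effects += 1  # any side effect: logging, mailing admins, ...
        return 42

x = X()
x.current  # looks like a bare "variable as statement", but runs the getter
print(X.side_effects)  # 1
```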
| 3 |
2016-09-13T18:48:26Z
|
[
"python"
] |
Variable as statement in python?
| 39,476,937 |
<p>I was reading some python code and came across this. Since I mostly write C and Java (and a variable as a statement doesn't even compile in those languages) I'm not sure what it is about in python.</p>
<p>What does <code>self.current</code>, the "variable as statement", means here? Is it just some way to print the variable out, or this is a special grammar thing / practice in dealing with exceptions in python?</p>
<pre><code>class PriorityQueue():
def __init__(self):
self.queue = []
self.current = 0
def next(self):
if self.current >=len(self.queue):
self.current
raise StopIteration
out = self.queue[self.current]
self.current += 1
return out
</code></pre>
| 3 |
2016-09-13T18:40:22Z
| 39,904,090 |
<p>I realized that this can be used for checking whether the attribute/method actually exists on the passed parameter. It may be useful for input sanity checks.</p>
<pre><code>def test(value):
try:
value.testAttr
except AttributeError:
print "No testAttr attribute found"
</code></pre>
| 0 |
2016-10-06T19:22:25Z
|
[
"python"
] |
python subprocess.Popen hanging
| 39,477,003 |
<pre><code> child = subprocess.Popen(command,
shell=True,
env=environment,
close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
)
subout = ""
with child.stdout:
for line in iter(child.stdout.readline, b''):
subout += line
logging.info(subout)
rc = child.wait()
</code></pre>
<p>Sometimes (intermittently) this hangs forever.
I am not sure if it hangs on <code>iter(child.stdout.readline)</code> or <code>child.wait()</code>.</p>
<p>I <code>ps -ef</code> for the process it Popens, and that process no longer exists.</p>
<p>My guess is that it has to do with bufsize, so that child.stdout.readline goes on forever, but I have no idea how to test it, as this happens intermittently.</p>
<p>I could implement an alarm, but I'm not sure that's appropriate, as I can't really tell whether the popen'd process is just slow or hanging.</p>
<p>Let's assume that either child.stdout.readline or wait() hangs forever; what actions could I take besides an alarm?</p>
| 1 |
2016-09-13T18:45:29Z
| 39,477,247 |
<p>You're likely hitting the deadlock that's <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait" rel="nofollow">explained in the documentation</a>:</p>
<blockquote>
<p><code>Popen.wait()</code>:</p>
<p>Wait for child process to terminate. Set and return <code>returncode</code> attribute.</p>
<p><strong>Warning:</strong> This will deadlock when using <code>stdout=PIPE</code> and/or <code>stderr=PIPE</code> and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use <code>communicate()</code> to avoid that.</p>
</blockquote>
<p>The solution is to use <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow"><code>Popen.communicate()</code></a>.</p>
| 2 |
2016-09-13T19:03:25Z
|
[
"python",
"subprocess"
] |
Error "mach-o, but wrong architecture" after installing anaconda on mac
| 39,477,023 |
<p>I am getting an architecture error while importing any package. I understand my Python might not be compatible, but I can't work out why.
Current Python version: 2.7.10</p>
<blockquote>
<p>`MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture
MyMachine:desktop ***********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/***********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture</p>
</blockquote>
| 0 |
2016-09-13T18:47:15Z
| 39,477,667 |
<p>you are mixing 32bit and 64bit versions of python.
probably you installed 64bit python version on a 32bit computer.
go on and uninstall python and reinstall it with the right configuration.</p>
| 0 |
2016-09-13T19:31:55Z
|
[
"python",
"osx",
"python-2.7"
] |
Convert GroupBy Object to Ordered List in Pyspark
| 39,477,027 |
<p>I'm using Spark 2.0.0 and DataFrames.
Here is my input dataframe:</p>
<pre><code>| id | year | qty |
|----|-------------|--------|
| a | 2012 | 10 |
| b | 2012 | 12 |
| c | 2013 | 5 |
| b | 2014 | 7 |
| c | 2012 | 3 |
</code></pre>
<p>What I want is </p>
<pre><code>| id | year_2012 | year_2013 | year_2014 |
|----|-----------|-----------|-----------|
| a | 10 | 0 | 0 |
| b | 12 | 0 | 7 |
| c | 3 | 5 | 0 |
</code></pre>
<p>or</p>
<pre><code>| id | yearly_qty |
|----|---------------|
| a | [10, 0, 0] |
| b | [12, 0, 7] |
| c | [3, 5, 0] |
</code></pre>
<p>The closest solution I found is <code>collect_list()</code> but this function doesn't provide order for the list. In my mind the solution should be like:</p>
<pre><code>data.groupBy('id').agg(collect_function)
</code></pre>
<p>Is there a way to generate this without filtering every id out using a loop?</p>
| 0 |
2016-09-13T18:47:24Z
| 39,477,680 |
<p>The first one can be easily achieved using <code>pivot</code>:</p>
<pre><code>from itertools import chain
years = sorted(chain(*df.select("year").distinct().collect()))
df.groupBy("id").pivot("year", years).sum("qty")
</code></pre>
<p>which can be further converted to array form:</p>
<pre><code>from pyspark.sql.functions import array, col
(...
.na.fill(0)
.select("id", array(*[col(str(x)) for x in years]).alias("yearly_qty")))
</code></pre>
<p>Obtaining the second one directly is probably not worth all the fuss since you'd have to fill the blanks first. Nevertheless you could try:</p>
<pre><code>from pyspark.sql.functions import collect_list, struct, sort_array, broadcast
years_df = sc.parallelize([(x, ) for x in years], 1).toDF(["year"])
(broadcast(years_df)
.join(df.select("id").distinct())
.join(df, ["year", "id"], "leftouter")
.na.fill(0)
.groupBy("id")
.agg(sort_array(collect_list(struct("year", "qty"))).qty.alias("qty")))
</code></pre>
<p>It also requires Spark 2.0+ to get support for collecting <code>struct</code> columns.</p>
<p>Both methods are quite expensive so you should be careful when using these. As a rule of thumb long is better than wide.</p>
| 2 |
2016-09-13T19:32:35Z
|
[
"python",
"apache-spark",
"pyspark",
"apache-spark-sql",
"spark-dataframe"
] |
Python - Searching .csv file with rows from a different .csv file
| 39,477,061 |
<p>All -</p>
<p>I am attempting to read a single row from a csv file and then have it search another csv file.</p>
<p>I have a masterlist.csv that has a single column called empID. It contains thousands of rows of 9 digit numbers. As well I have ids.csv that also contains a single column called number. It contains hundreds of rows. I am attempting to pull a row from the ids.csv do a search on the masterlist.csv and print out whether it has been found. Then it needs to move to the next row in ids.csv until each row in ids.csv has been searched within the masterlist.csv.
I thought it would be as simple as this, however it is not throwing any errors nor returning any results.</p>
<p>Using Python 2.7.12:</p>
<pre><code>import csv
masterReader = csv.reader(open("masterlist.csv", "rt"), delimiter=",")
idsReader = csv.reader(open("ids.csv", "rt"), delimiter=",")
for number in idsReader:
for empID in masterReader:
if number == empID:
print (" Found in MasterFile")
else:
print ("Is Not Found in MasterFile")
</code></pre>
<p>Edit: Adding snippet of data used for testing.</p>
<p><a href="http://i.stack.imgur.com/vJroB.png" rel="nofollow"><img src="http://i.stack.imgur.com/vJroB.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/nYhe4.png" rel="nofollow"><img src="http://i.stack.imgur.com/nYhe4.png" alt="enter image description here"></a></p>
| 3 |
2016-09-13T18:50:17Z
| 39,477,319 |
<p>You could easily find the numbers that are common to both by using the <em>intersection of sets</em> and the <code>csv.DictReader</code> as your reader object (not sure if the file actually contains a single column):</p>
<pre><code>with open("masterlist.csv") as f1, open("ids.csv") as f2:
masterReader = csv.DictReader(f1)
idsReader = csv.DictReader(f2)
common = set(row['empID'] for row in masterReader) \
& set(row['number'] for row in idsReader)
print(common)
</code></pre>
<hr>
<p>Or use a check of list membership to find rows in <code>idsReader</code> that are contained in <code>masterReader</code>:</p>
<pre><code>masterReader = [row['empID'] for row in masterReader]
for row in idsReader:
if row['number'] in masterReader:
print (" Found in MasterFile")
else:
print ("Is Not Found in MasterFile")
</code></pre>
<p><em>P.S. considering the update to your question, you may not even need the csv module to do this</em></p>
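<p>A self-contained sketch of the set-intersection idea, with small in-memory stand-ins for the two files (the data here is made up):</p>

```python
import csv
import io

# Hypothetical contents of masterlist.csv and ids.csv
master_file = io.StringIO("empID\n111111111\n222222222\n333333333\n")
ids_file = io.StringIO("number\n111111111\n999999999\n")

master_ids = set(row["empID"] for row in csv.DictReader(master_file))
common = set(row["number"] for row in csv.DictReader(ids_file)) & master_ids
print(sorted(common))  # ['111111111']
```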
| 0 |
2016-09-13T19:09:05Z
|
[
"python",
"csv"
] |
Python - Searching .csv file with rows from a different .csv file
| 39,477,061 |
<p>All -</p>
<p>I am attempting to read a single row from a csv file and then have it search another csv file.</p>
<p>I have a masterlist.csv that has a single column called empID. It contains thousands of rows of 9 digit numbers. As well I have ids.csv that also contains a single column called number. It contains hundreds of rows. I am attempting to pull a row from the ids.csv do a search on the masterlist.csv and print out whether it has been found. Then it needs to move to the next row in ids.csv until each row in ids.csv has been searched within the masterlist.csv.
I thought it would be as simple as this, however it is not throwing any errors nor returning any results.</p>
<p>Using Python 2.7.12:</p>
<pre><code>import csv
masterReader = csv.reader(open("masterlist.csv", "rt"), delimiter=",")
idsReader = csv.reader(open("ids.csv", "rt"), delimiter=",")
for number in idsReader:
for empID in masterReader:
if number == empID:
print (" Found in MasterFile")
else:
print ("Is Not Found in MasterFile")
</code></pre>
<p>Edit: Adding snippet of data used for testing.</p>
<p><a href="http://i.stack.imgur.com/vJroB.png" rel="nofollow"><img src="http://i.stack.imgur.com/vJroB.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/nYhe4.png" rel="nofollow"><img src="http://i.stack.imgur.com/nYhe4.png" alt="enter image description here"></a></p>
| 3 |
2016-09-13T18:50:17Z
| 39,477,387 |
<p><strong>Content of master.csv</strong></p>
<pre><code>EmpId
111111111
222222222
333333333
444444444
</code></pre>
<p><strong>Content of ids.csv:</strong></p>
<pre><code>Number
111111111
999999999
444444444
555555555
222222222
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import csv
f1 = file('master.csv', 'r')
f2 = file('ids.csv', 'r')
c1 = csv.reader(f1)
c2 = csv.reader(f2)
idlist = list(c2)
masterlist = list(c1)
for id in idlist[1:]:
found = False
#Need to ignore heading thats why masterlist[1:]
for master in masterlist[1:]:
if id == master:
found = True
if found:
print "Found in master file"
else:
print "Not found in master file"
f1.close()
f2.close()
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py
Found in master file
Not found in master file
Found in master file
Not found in master file
Found in master file
C:\Users\dinesh_pundkar\Desktop>
</code></pre>
<p><strong><em>More shorter version of code without CSV module</em></strong></p>
<pre><code>with open('master.csv','r') as master, open('ids.csv','r') as id_file:
    # strip newlines so the last line (which may lack one) still compares equal
    id_list = [line.strip() for line in id_file.readlines()[1:]]
    master_list = set(line.strip() for line in master.readlines()[1:])
    for id in id_list:
        if id in master_list:
            print "Found in master file"
        else:
            print "Not found in master file"
</code></pre>
| 1 |
2016-09-13T19:12:47Z
|
[
"python",
"csv"
] |
PyQt - trouble with reimplementing data method of QSqlTableModel
| 39,477,070 |
<p>I'm a newbie with python and mainly with pyqt. The problem is simple: I have a <code>QTableView</code> and I want to "simply" change the color of some rows. Reading all around I found that the simplest solution should be to override the data method in the model in such a way:</p>
<pre><code>class MyModel(QtSql.QSqlTableModel):
def data(self,idx,role):
testindex=self.index(idx.row(),idx.column(),idx.parent())
if(role==QtCore.Qt.BackgroundRole):
return QtGui.QColor(255,0,0)
elif role == QtCore.Qt.DisplayRole:
return QtSql.QSqlTableModel().data(testindex)
</code></pre>
<p>When I use this model reimplementation, the rows are changing color but the cell values disappear and the return statement <code>QtSql.QSqlTableModel().data(testindex)</code> is always <code>None</code>.
I'm getting crazy to find out a solution. Could you help me?</p>
| 0 |
2016-09-13T18:50:51Z
| 39,477,501 |
<p>Your implementation is broken in a couple of ways: (1) it always returns <code>None</code> for any unspecified roles, (2) it creates a new instance of <code>QSqlTableModel</code> every time the display role is requested, instead of calling the base-class method.</p>
<p>The implementation should probably be something like this:</p>
<pre><code>class MyModel(QtSql.QSqlTableModel):
    def data(self, index, role=QtCore.Qt.DisplayRole):
        if role == QtCore.Qt.BackgroundRole:
            return QtGui.QColor(255, 0, 0)
        return super(MyModel, self).data(index, role)
</code></pre>
| 0 |
2016-09-13T19:20:29Z
|
[
"python",
"pyqt",
"override",
"qsqltablemodel"
] |
Python Calculate log(1+x)/x for x near 0
| 39,477,176 |
<p>Is there a way to correctly calculate the value of log(1+x)/x in python for values of x close to 0? When I do it normally using np.log1p(x)/x, I get 1. I somehow seem to get the correct values when I use np.log(x). Isn't log1p supposed to be more stable?</p>
| 1 |
2016-09-13T18:58:14Z
| 39,477,218 |
<pre><code>np.log1p(1+x)
</code></pre>
<p>That gives you <code>log(2+x)</code>. Change it to <code>np.log1p(x)</code>. </p>
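<p>The difference is easy to see with the standard library's <code>math.log1p</code>; the naive form loses most of <code>x</code>'s digits when it is added to 1:</p>

```python
import math

x = 1e-13
naive = math.log(1 + x) / x   # 1 + x rounds away most of x's precision
stable = math.log1p(x) / x    # accurate: ~1 - x/2 for small x

print(stable)  # very close to 1, as expected for x -> 0
```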
| 1 |
2016-09-13T19:01:38Z
|
[
"python",
"numerical-stability"
] |
Python Calculate log(1+x)/x for x near 0
| 39,477,176 |
<p>Is there a way to correctly calculate the value of log(1+x)/x in python for values of x close to 0? When I do it normally using np.log1p(x)/x, I get 1. I somehow seem to get the correct values when I use np.log(x). Isn't log1p supposed to be more stable?</p>
| 1 |
2016-09-13T18:58:14Z
| 39,885,108 |
<p>So I found one answer to this: the standard-library <code>decimal</code> module.</p>
<pre><code>from decimal import Decimal
x = Decimal('1e-13')
xp1 = Decimal(1) + x
print(xp1.ln()/x)
</code></pre>
<p>This library seems to be much more stable than numpy.</p>
| 0 |
2016-10-05T23:15:18Z
|
[
"python",
"numerical-stability"
] |
Python using Pyodbc connecting to Sql 2012 calling up a stored procedure
| 39,477,214 |
<p>I have a stored procedure that works 100% when run from the Sql server. It updates at least 5 different tables. When I run it from Python it only updates the first two tables. Does not complete on the remaining tables. The parameters passed are exactly the same as run from the sql server directly. The data is reset to a common starting point with each test. Has anyone come across this issue with Python execution of stored procs? I am using Python 3.5 and Pyodbc from less than a month ago, Sql 2012 client and server on windows. It is not a commit issue because the first two tables are updating/commiting. The sql statement it fails on is not at all complex. I am guessing some sort of limitation like time or only update so many tables with a given sql call? My next step is to do each of the steps from Python as separate steps not from a single do everything Stored Proc to see if I get any differences but I am hoping to not have to do that.</p>
<pre><code>str_rs_SqlCommand = "{call dbo.usp_LaborLogBatchPerson ('Test User', '1')}" ### Passes parameters to the stored procedure
print (str_rs_SqlCommand)
obj_dbc_Connection2 = pyodbc.connect("DRIVER={SQL Server} " + " ;SERVER=" + str_dbc_ServerName + " ;DATABASE=" + str_dbc_Name + " ;UID=" + str_dbc_Uid + " ;PWD=" + str_dbc_Pwd + "" + "" ) #;autocommit=True #tried with and without autocommit
conn = obj_dbc_Connection2.cursor() ### Create a cursor for the sql connection
conn.execute(str_rs_SqlCommand)
conn.commit()
obj_dbc_Connection2.close
</code></pre>
| -1 |
2016-09-13T19:01:08Z
| 39,479,209 |
<p>I found it by stepping through the stored procedure piece by piece. Having a <code>PRINT</code> statement in the stored procedure ended the execution at that step when it was called this way. Make sure you don't use <code>PRINT</code> statements in stored procedures if you are calling them via pyodbc from Python 3.5 against SQL Server 2012 on Windows.</p>
| 0 |
2016-09-13T21:23:57Z
|
[
"python",
"sql-server-2012",
"pyodbc"
] |
How to disable log file for py2exe?
| 39,477,242 |
<p>I've created a small script using <code>Python</code> and <code>PyQt4</code>, I converted it to <code>exe</code>. But there's some cases in my script that I'm not handling so a <code>log</code> file being created while using the program. So I wanna disable creating this <code>log</code> file.</p>
<p>How can I do that?</p>
<p>Here's my <code>setup.py</code> file:</p>
<pre><code>from distutils.core import setup
import py2exe
setup(
windows=['DumbCalculator.py'],
options = {
"py2exe": {
"dll_excludes": ["MSVCP90.dll"],
}
},
)
</code></pre>
| 0 |
2016-09-13T19:03:14Z
| 39,478,211 |
<p>I finally found how to do it.</p>
<p>I went to <code>C:\Python27\Lib\site-packages\py2exe</code>, opened the <code>boot_common.py</code> file, commented out lines 56-60 and 63-65, and saved it.</p>
<p>I ran py2exe again and the program works great. It still creates a log file, but no longer shows the annoying error prompt. It worked for me!</p>
| 0 |
2016-09-13T20:08:33Z
|
[
"python",
"python-2.7",
"pyqt4",
"py2exe"
] |
Python Mysql class to call a stored procedure
| 39,477,316 |
<p>I am very new. All of my experience is on the DB side so I am lost on the Python side of things. That said I am trying to create a class that I can use to execute stored procedures. I am using Python 3.4.3. I found a mysql class on github and simplified/modified it to make a proc call and it is not working. </p>
<pre><code>mysqlquery.py
import mysql.connector, sys
from collections import OrderedDict
class MysqlPython(object):
__host = None
__user = None
__password = None
__database = None
__procname = None
__inputvals = None
def __init__(self, host='localhost', user='root', password='', database=''):
self.__host = host
self.__user = user
self.__password = password
self.__database = database
## End def __init__
def __open(self):
cnx = mysql.connector.connect(self.__host, self.__user, self.__password, self.__database)
self.__connection = cnx
self.__session = cnx.cursor()
## End def __open
def __close(self):
self.__session.close()
self.__connection.close()
## End def __close
def proc(self,procname,inputvals):
self.__open()
self.__session.callproc(procname, inputvals)
## End for proc
## End class
test.py
from mysqlquery import MysqlPython
connect_mysql = MysqlPython()
result = connect_mysql.proc ('insertlink','1,www.test.com')
</code></pre>
<p>I get this error</p>
<pre><code>TypeError: __init__() takes 1 positional argument but 5 were given
</code></pre>
<p>Looking at my <strong>init</strong>, it take 5 args as it should. Not sure why I am getting this. Again, I am very new so it could be a simple problem.</p>
<p>Thanks for any help.</p>
<p>G</p>
| 0 |
2016-09-13T19:08:46Z
| 39,477,423 |
<p><code>mysql.connector.connect()</code> takes named arguments, not positional arguments, and you're missing the names.</p>
<pre><code>cnx = mysql.connector.connect(host=self.__host, user=self.__user, password=self.__password, database=self.__database)
</code></pre>
| 1 |
2016-09-13T19:15:04Z
|
[
"python",
"mysql",
"stored-procedures",
"typeerror"
] |
Conditionally extracting Pandas rows based on another Pandas dataframe
| 39,477,328 |
<p>I have two dataframes:</p>
<p><code>df1:</code></p>
<pre><code>col1 col2
1 2
1 3
2 4
</code></pre>
<p><code>df2:</code></p>
<pre><code>col1
2
3
</code></pre>
<p>I want to extract all the rows in <code>df1</code> where <code>df1</code>'s <code>col2</code> <code>not in</code> <code>df2</code>'s <code>col1</code>. So in this case it would be:</p>
<pre><code>col1 col2
2 4
</code></pre>
<p>I first tried:</p>
<pre><code>df1[df1['col2'] not in df2['col1']]
</code></pre>
<p>But it returned:</p>
<blockquote>
<p>TypeError: 'Series' objects are mutable, thus they cannot be hashed</p>
</blockquote>
<p>I then tried:</p>
<pre><code>df1[df1['col2'] not in df2['col1'].tolist]
</code></pre>
<p>But it returned:</p>
<blockquote>
<p>TypeError: argument of type 'instancemethod' is not iterable</p>
</blockquote>
| 1 |
2016-09-13T19:09:22Z
| 39,477,357 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> with <code>~</code> for inverting boolean mask:</p>
<pre><code>print (df1['col2'].isin(df2['col1']))
0 True
1 True
2 False
Name: col2, dtype: bool
print (~df1['col2'].isin(df2['col1']))
0 False
1 False
2 True
Name: col2, dtype: bool
print (df1[~df1['col2'].isin(df2['col1'])])
col1 col2
2 2 4
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [8]: %timeit (df1.query('col2 not in @df2.col1'))
1000 loops, best of 3: 1.57 ms per loop
In [9]: %timeit (df1[~df1['col2'].isin(df2['col1'])])
1000 loops, best of 3: 466 µs per loop
</code></pre>
| 1 |
2016-09-13T19:11:09Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Conditionally extracting Pandas rows based on another Pandas dataframe
| 39,477,328 |
<p>I have two dataframes:</p>
<p><code>df1:</code></p>
<pre><code>col1 col2
1 2
1 3
2 4
</code></pre>
<p><code>df2:</code></p>
<pre><code>col1
2
3
</code></pre>
<p>I want to extract all the rows in <code>df1</code> where <code>df1</code>'s <code>col2</code> <code>not in</code> <code>df2</code>'s <code>col1</code>. So in this case it would be:</p>
<pre><code>col1 col2
2 4
</code></pre>
<p>I first tried:</p>
<pre><code>df1[df1['col2'] not in df2['col1']]
</code></pre>
<p>But it returned:</p>
<blockquote>
<p>TypeError: 'Series' objects are mutable, thus they cannot be hashed</p>
</blockquote>
<p>I then tried:</p>
<pre><code>df1[df1['col2'] not in df2['col1'].tolist]
</code></pre>
<p>But it returned:</p>
<blockquote>
<p>TypeError: argument of type 'instancemethod' is not iterable</p>
</blockquote>
| 1 |
2016-09-13T19:09:22Z
| 39,477,421 |
<p>using <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#the-query-method-experimental" rel="nofollow">.query()</a> method:</p>
<pre><code>In [9]: df1.query('col2 not in @df2.col1')
Out[9]:
col1 col2
2 2 4
</code></pre>
<p>Timing for bigger DFs:</p>
<pre><code>In [44]: df1.shape
Out[44]: (30000000, 2)
In [45]: df2.shape
Out[45]: (20000000, 1)
In [46]: %timeit (df1[~df1['col2'].isin(df2['col1'])])
1 loop, best of 3: 5.56 s per loop
In [47]: %timeit (df1.query('col2 not in @df2.col1'))
1 loop, best of 3: 5.96 s per loop
</code></pre>
| 1 |
2016-09-13T19:14:59Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"condition"
] |
Finding the average of two consecutive rows in pandas
| 39,477,393 |
<p>I am trying to find the average of two consecutive rows in each column</p>
<pre><code>In[207]: df = DataFrame({"A": [9, 4, 2, 1, 4], "B": [12, 7, 5, 4,8]})
In[208]: df
Out[207]:
A B
0 9 12
1 4 7
2 2 5
3 1 4
4 4 8
</code></pre>
<p>The result should be: </p>
<pre><code>Out[207]:
A B
0 6.5 9.5
1 1.5 4.5
</code></pre>
<p>If the number of elements id odd, discard the last row. </p>
| 2 |
2016-09-13T19:13:09Z
| 39,477,558 |
<p>try this:</p>
<pre><code>In [29]: idx = len(df) - 1 if len(df) % 2 else len(df)
In [30]: df[:idx].groupby(df.index[:idx] // 2).mean()
Out[30]:
A B
0 6.5 9.5
1 1.5 4.5
</code></pre>
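<p>Made self-contained for the sample frame above (assuming a reasonably recent pandas):</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [9, 4, 2, 1, 4], "B": [12, 7, 5, 4, 8]})

idx = len(df) - 1 if len(df) % 2 else len(df)  # drop the last row when odd
result = df[:idx].groupby(df.index[:idx] // 2).mean()
print(result["A"].tolist(), result["B"].tolist())  # [6.5, 1.5] [9.5, 4.5]
```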
| 4 |
2016-09-13T19:24:08Z
|
[
"python",
"pandas",
"dataframe"
] |
Python 3 regex word boundary unclear
| 39,477,394 |
<p>I am using a regex to find the string 'my car' and detect up to four words before it. My reference text is:</p>
<pre><code>my house is painted white, my car is red.
A horse is galloping very fast in the road, I drive my car slowly.
</code></pre>
<p>if I use the regex:</p>
<pre><code>re.finditer(r'(?:\w+[ \t,]+){0,4}my car',txt,re.IGNORECASE|re.MULTILINE)
</code></pre>
<p>I am getting the expected results.For example: house is painted white, my car</p>
<p>if I use the regex:</p>
<pre><code>re.finditer(r'(?:\w+\b){0,4}my car',txt,re.IGNORECASE|re.MULTILINE)
</code></pre>
<p>I am getting only: 'my car' and 'my car'
That is, I am not getting up to four words before it.
Why I cannot use the \b to match the words in the group {0,4}?</p>
| 3 |
2016-09-13T19:13:10Z
| 39,477,472 |
<p>Because <code>\b</code> is a <em>zero-width assertion</em> <a href="http://www.regular-expressions.info/wordboundaries.html" rel="nofollow"><strong>word boundary</strong></a> matching a <em>location</em> between the start of string and a word char, between a non-word char and a word char, between a word char and a non-word char and between a word char and end of string. <strong>It does not <em>consume</em> the text</strong>.</p>
<p>The <code>(?:\w+\b){0,4}</code> part can only match an empty string here: the character just before <code>my car</code> is a space, which <code>\w</code> cannot consume, so no run of word chars followed by a word boundary can bring the match up to it.</p>
<p>Instead, you may want to match 1+ non-word chars that will effectively imitate a word boundary:</p>
<pre><code>(?:\w+\W+){0,4}my car\b
</code></pre>
<p>See the <a href="https://regex101.com/r/kP5nD4/2" rel="nofollow">regex demo</a></p>
| 1 |
2016-09-13T19:18:01Z
|
[
"python",
"regex",
"python-3.x"
] |
Python 3 regex word boundary unclear
| 39,477,394 |
<p>I am using a regex to find the string 'my car' and detect up to four words before it. My reference text is:</p>
<pre><code>my house is painted white, my car is red.
A horse is galloping very fast in the road, I drive my car slowly.
</code></pre>
<p>if I use the regex:</p>
<pre><code>re.finditer(r'(?:\w+[ \t,]+){0,4}my car',txt,re.IGNORECASE|re.MULTILINE)
</code></pre>
<p>I am getting the expected results.For example: house is painted white, my car</p>
<p>if I use the regex:</p>
<pre><code>re.finditer(r'(?:\w+\b){0,4}my car',txt,re.IGNORECASE|re.MULTILINE)
</code></pre>
<p>I am getting only: 'my car' and 'my car'
That is, I am not getting up to four words before it.
Why I cannot use the \b to match the words in the group {0,4}?</p>
| 3 |
2016-09-13T19:13:10Z
| 39,477,490 |
<p>You could use:</p>
<pre><code>(?:\b\w+\W+){4}
\b(?:my\ car)\b
</code></pre>
<p>See <a href="https://regex101.com/r/iB6xL0/3" rel="nofollow"><strong>a demo on regex101.com</strong></a>.<hr>
In <code>Python</code> this will be:</p>
<pre><code>import re
rx = re.compile(r'''
(?:\b\w+\W+){0,4}
\b(?:my\ car)\b
''', re.VERBOSE)
string = """
my house is painted white, my car is red.
A horse is galloping very fast in the road, I drive my car slowly.
"""
words = rx.findall(string)
print(words)
# ['house is painted white, my car', 'the road, I drive my car']
</code></pre>
| 2 |
2016-09-13T19:19:30Z
|
[
"python",
"regex",
"python-3.x"
] |
How to Access Pandas dataframe columns when the number of columns are unknown before hand
| 39,477,397 |
<p>I am new to python and data science altogether. I am writing a program to read and analyze a csv with pandas. The problem is that the csv will be supplied by the user and it can have variable number of columns depending on the user. I do not have a prior knowledge of the column names.
I went about the problem by reading the csv using pandas and read the column names into a python list. However problem ensued when I attempted to access the dataframe column by supplying the indexed list as a column name. something like this:</p>
<pre><code>#List of column names, coln
coln = df.columns
df.ix[:, df.coln[0]] # to access the first column of the dataframe.
</code></pre>
<p>But this did not work. Please help how do I do this? PLEASE HELP!</p>
| 1 |
2016-09-13T19:13:25Z
| 39,477,470 |
<p>Better is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a>:</p>
<pre><code>df.iloc[:, 0]
</code></pre>
<p>output is same as:</p>
<pre><code>coln = df.columns
print (df.ix[:, coln[0]])
</code></pre>
| 2 |
2016-09-13T19:17:52Z
|
[
"python",
"pandas",
"data-science"
] |
How to Access Pandas dataframe columns when the number of columns are unknown before hand
| 39,477,397 |
<p>I am new to python and data science altogether. I am writing a program to read and analyze a csv with pandas. The problem is that the csv will be supplied by the user and it can have variable number of columns depending on the user. I do not have a prior knowledge of the column names.
I went about the problem by reading the csv using pandas and read the column names into a python list. However problem ensued when I attempted to access the dataframe column by supplying the indexed list as a column name. something like this:</p>
<pre><code>#List of column names, coln
coln = df.columns
df.ix[:, df.coln[0]] # to access the first column of the dataframe.
</code></pre>
<p>But this did not work. Please help how do I do this? PLEASE HELP!</p>
| 1 |
2016-09-13T19:13:25Z
| 39,477,528 |
<p>You can use <code>iloc</code></p>
<pre><code>df.iloc[:,0]
</code></pre>
<p>Btw, <code>df.coln</code> does not exist; you created <code>coln</code> as a separate variable. </p>
| 0 |
2016-09-13T19:22:05Z
|
[
"python",
"pandas",
"data-science"
] |
How to Access Pandas dataframe columns when the number of columns are unknown before hand
| 39,477,397 |
<p>I am new to python and data science altogether. I am writing a program to read and analyze a csv with pandas. The problem is that the csv will be supplied by the user and it can have variable number of columns depending on the user. I do not have a prior knowledge of the column names.
I went about the problem by reading the csv using pandas and read the column names into a python list. However problem ensued when I attempted to access the dataframe column by supplying the indexed list as a column name. something like this:</p>
<pre><code>#List of column names, coln
coln = df.columns
df.ix[:, df.coln[0]] # to access the first column of the dataframe.
</code></pre>
<p>But this did not work. Please help how do I do this? PLEASE HELP!</p>
| 1 |
2016-09-13T19:13:25Z
| 39,477,625 |
<p>As the other answers show, you should be using <code>iloc</code> rather than the approach corrected below; but to fix your original error:</p>
<pre><code>coln = df.columns
df.ix[:, coln[0]] # to access the first column of the dataframe.
</code></pre>
<p>You wrote <code>df.coln[0]</code> instead of <code>coln[0]</code>. <code>coln</code> is a separate variable, so there is no such thing as <code>df.coln</code>. </p>
| 0 |
2016-09-13T19:28:59Z
|
[
"python",
"pandas",
"data-science"
] |
Unable to run psql commands on remote server from local python script
| 39,477,545 |
<p>I'm writing a script that is called locally to run psql queries on a remote server. I'm writing the script in Python 2.7 using the subprocess module. I'm using subprocess.Popen to ssh into the remote server and directly run a psql command. My local machine is osx and I think the server is CentOS.</p>
<p>When I call my script locally, I get back an error saying <code>psql: command not found</code>. If I run the same exact psql command on the remote server, then I get back the correct query result.</p>
<p>I suspected there might be something wrong with my ssh code, so instead of sending over psql commands to the remote server, I tried sending over simple commands like <code>ls</code>, <code>cd</code>, <code>mkdir</code>, and even <code>scp</code>. All of those worked fine, so I don't think there's a problem with my code that ssh's over the commands to the remote server.</p>
<p><strong>Does anybody understand what is going wrong with my code and why I'm getting back <code>psql: command not found</code>?</strong> I researched and found this earlier SO question, but psql is installed on the remote server as evidenced by the fact that I can run psql commands manually on there.
<br>
<a href="http://stackoverflow.com/questions/6790088/postgresql-bash-psql-command-not-found">Postgresql -bash: psql: command not found</a>
<br>
<br>
<br>
From an external class file:</p>
<pre><code>def db_cmd(self, query):
# check that query ends in semicolon
if query[-1] != ';':
query = query + ';'
formatted_query = 'psql ' + self.db_string + \
' -c "begin; ' + query + ' ' + self.db_mode + ';"'
print formatted_query
ssh = subprocess.Popen(['ssh', self.host, formatted_query],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if not result:
error = ssh.stderr.readlines()
print >>sys.stderr
for e in error:
print 'ERROR: %s' % e,
else:
for r in result:
print r,
</code></pre>
<p><br>
<br>
Excerpt from a local script that imports above class and sends out the psql command to remote server:</p>
<pre><code>s = Jump(env, db_mode)
s.db_cmd('select * from student.students limit 5;')
</code></pre>
<p><br>
<br>
Running my script locally.
Note that the script prints out the psql command for debugging. If I copy and paste that same psql command and run it on the remote server, I get back the correct query result.</p>
<pre><code>$ ./script.py 1 PROD ROLLBACK
psql -h <server_host> -d <db_name> -U <db_user> -p <port> -c "begin; select * from student.students limit 5; ROLLBACK;"
ERROR: bash: psql: command not found
</code></pre>
<p>Thanks</p>
| 0 |
2016-09-13T19:23:20Z
| 39,478,596 |
<p>When you run ssh for an interactive session on a remote system, ssh requests a PTY (pseudo TTY) for the remote system. When you run ssh to run a command on a remote system, ssh doesn't allocate a PTY by default.</p>
<p>Having a TTY/PTY changes the way your shell initializes itself. Your actual problem is you need to do any or all of the following to run psql on the remote system:</p>
<ul>
<li>Add one or more directories to your command path.</li>
<li>Add one or more environment variables to your environment.</li>
<li>define one or more aliases.</li>
</ul>
<p>You're doing these things in a shell startup file (.bash_profile, for example) and it's only happening for interactive sessions. When you use ssh to run psql remotely, your shell is skipping the initialization which permits psql to run.</p>
<p>There are two simple approaches to fixing this:</p>
<ol>
<li>Run ssh with the "-t" or "-tt" option, which causes ssh to allocate a TTY on the remote system for psql.</li>
<li>Change your shell startup files on the remote system to perform the necessary initialization on non-interactive sessions.</li>
</ol>
| 1 |
2016-09-13T20:37:32Z
|
[
"python",
"python-2.7",
"ssh",
"subprocess",
"psql"
] |
Pandas: what is the data-type of object passed to the agg function
| 39,477,618 |
<p>I have been curious about what exactly is passed to the agg function </p>
<pre><code>Id NAME SUB_ID
276956 A 5933
276956 B 5934
276956 C 5935
287266 D 1589
</code></pre>
<p>So when I call an agg function, what exactly is the datatype of x.</p>
<pre><code>df.groupby('Id').agg(lambda x: set(x))
</code></pre>
<p>From my own digging up, I find x to be <code><type 'property'></code> but I dont understand what exactly it is. What I am trying to do is compress the records into one row for any particular group. So for id 276956 , I want to have A,B,C in one cell under the Name column. I have been doing it by converting it into a set but its causing me some grief with Nan and None values. I was wondering whats the best way to compress in a single row. If these are numpy arrays then I don't really need to convert but something like </p>
<pre><code>df.groupby('Id').agg(lambda x: x)
</code></pre>
<p>throws an error</p>
| 1 |
2016-09-13T19:28:25Z
| 39,477,645 |
<p>You are working with a <code>Series</code>:</p>
<pre><code>print (df.groupby('Id').agg(lambda x: print(x)))
0 A
1 B
2 C
Name: NAME, dtype: object
3 D
Name: NAME, dtype: object
0 5933
1 5934
2 5935
Name: SUB_ID, dtype: int64
3 1589
Name: SUB_ID, dtype: int64
</code></pre>
<p>You can work with a custom function, but the output has to be aggregated:</p>
<pre><code>def f(x):
print (x)
return set(x)
print (df.groupby('Id').agg(f))
NAME SUB_ID
Id
276956 {C, B, A} {5933, 5934, 5935}
287266 {D} {1589}
</code></pre>
<p>If you need to aggregate with <code>join</code>, numeric columns are <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#automatic-exclusion-of-nuisance-columns" rel="nofollow">omitted</a>:</p>
<pre><code>print (df.groupby('Id').agg(', '.join))
NAME
Id
276956 A, B, C
287266 D
</code></pre>
<p>With <code>mean</code>, string columns are omitted:</p>
<pre><code>print (df.groupby('Id').mean())
SUB_ID
Id
276956 5934
287266 1589
</code></pre>
<hr>
<p>More commonly, the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow"><code>apply</code></a> is used - see <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#flexible-apply" rel="nofollow">flexible apply</a>:</p>
<pre><code>def f(x):
print (x)
return ', '.join(x)
print (df.groupby('Id')['NAME'].apply(f))
Id
276956 A, B, C
287266 D
Name: NAME, dtype: object
</code></pre>
| 4 |
2016-09-13T19:30:30Z
|
[
"python",
"pandas",
"numpy"
] |
Pandas: what is the data-type of object passed to the agg function
| 39,477,618 |
<p>I have been curious about what exactly is passed to the <code>agg</code> function. Suppose I have the following dataframe:</p>
<pre><code>Id NAME SUB_ID
276956 A 5933
276956 B 5934
276956 C 5935
287266 D 1589
</code></pre>
<p>So when I call an <code>agg</code> function, what exactly is the datatype of <code>x</code>?</p>
<pre><code>df.groupby('Id').agg(lambda x: set(x))
</code></pre>
<p>From my own digging, I find <code>x</code> to be <code><type 'property'></code>, but I don't understand what exactly it is. What I am trying to do is compress the records into one row for any particular group. So for Id 276956, I want to have A, B, C in one cell under the NAME column. I have been doing it by converting to a set, but it's causing me some grief with NaN and None values. I was wondering what's the best way to compress into a single row. If these are numpy arrays then I don't really need to convert, but something like </p>
<pre><code>df.groupby('Id').agg(lambda x: x)
</code></pre>
<p>throws an error</p>
| 1 |
2016-09-13T19:28:25Z
| 39,477,776 |
<pre><code>>>> df[['Id', 'NAME']].groupby('Id').agg(lambda x: ', '.join(x))
NAME
Id
276956 A, B, C
287266 D
</code></pre>
<p>The <code>x</code> in this case will be the series for each relevant grouping on <code>Id</code>.</p>
<p>To actually get a list of values:</p>
<pre><code>>>> df[['Id', 'NAME']].groupby('Id').agg(lambda x: x.values.tolist())
NAME
Id
276956 [A, B, C]
287266 [D]
</code></pre>
<p>More generally, with <code>agg</code> the function receives each column of the relevant grouping as a <code>Series</code>, and you can perform any action on it that you could normally do with a <code>Series</code>, e.g.</p>
<pre><code>>>> df.groupby('Id').agg(lambda x: x.shape)
NAME SUB_ID
Id
276956 (3,) (3,)
287266 (1,) (1,)
</code></pre>
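<p>Since each <code>x</code> is a <code>Series</code>, the NaN grief mentioned in the question can be handled with <code>Series.dropna()</code> before joining (a sketch with made-up data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Id': [276956, 276956, 276956, 287266],
                   'NAME': ['A', np.nan, 'C', 'D']})

# Series.dropna() discards NaN/None before the join, avoiding a TypeError.
out = df.groupby('Id')['NAME'].agg(lambda x: ', '.join(x.dropna()))
print(out)
```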
| 3 |
2016-09-13T19:38:43Z
|
[
"python",
"pandas",
"numpy"
] |
Pop from empty list error ONLY on lists with even number of elements
| 39,477,629 |
<p>So this is just an exercise I'm doing for fun on codewars.com. The point is to take a string and pull it apart by taking the last character and adding it to a string, taking the first character and adding it to another string until you have either 1 (for a string with an odd number of letters) or 0 (for a string with even number of letters) characters left. Here's a link to the <a href="https://www.codewars.com/kata/popshift/train/python" rel="nofollow">challenge</a> if you're interested.</p>
<pre><code>def pop_shift(test):
firstSol = []
secondSol = []
testList = list(test)
while len(testList) != 1:
firstSol.append(testList.pop())
secondSol.append(testList.pop(0))
return [''.join(firstSol), ''.join(secondSol), ''.join(testList)]
</code></pre>
<p>My code gives me the right result if the string has an odd number of characters:</p>
<pre><code>['erehtse', 'example', 't']
</code></pre>
<p>But with an even number of characters I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#37>", line 1, in <module>
pop_shift("egrets")
File "<pyshell#35>", line 6, in pop_shift
firstSol.append(testList.pop())
IndexError: pop from empty list
</code></pre>
<p>I've looked through a bunch of questions involving the pop() method, but nothing sounded quite similar to this. I've also tested this with a variety of strings and looked into the documentation for the pop method. There must be something that I'm missing. Any pointers are appreciated. This is also my first question, so if there's anything else you'd like to see, please let me know.</p>
| 1 |
2016-09-13T19:29:25Z
| 39,477,685 |
<p>Your loop is checking to see if the length of the list isn't 1; for an even length list, since you always pop 2 items at a time, it will never see a length of 1.</p>
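<p>A quick trace makes this concrete: with an even-length input the length goes 6 → 4 → 2 → 0 and skips 1 entirely, so a <code>!= 1</code> test never stops the loop:</p>

```python
test_list = list("egrets")        # even length: 6 characters
lengths = [len(test_list)]
while len(test_list) > 1:         # guarded with > 1 here just to trace safely
    test_list.pop()               # remove last
    test_list.pop(0)              # remove first
    lengths.append(len(test_list))
print(lengths)  # [6, 4, 2, 0] -- the length never equals 1
```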
| 0 |
2016-09-13T19:33:02Z
|
[
"python",
"list",
"pop"
] |
Pop from empty list error ONLY on lists with even number of elements
| 39,477,629 |
<p>So this is just an exercise I'm doing for fun on codewars.com. The point is to take a string and pull it apart by taking the last character and adding it to a string, taking the first character and adding it to another string until you have either 1 (for a string with an odd number of letters) or 0 (for a string with even number of letters) characters left. Here's a link to the <a href="https://www.codewars.com/kata/popshift/train/python" rel="nofollow">challenge</a> if you're interested.</p>
<pre><code>def pop_shift(test):
firstSol = []
secondSol = []
testList = list(test)
while len(testList) != 1:
firstSol.append(testList.pop())
secondSol.append(testList.pop(0))
return [''.join(firstSol), ''.join(secondSol), ''.join(testList)]
</code></pre>
<p>My code gives me the right result if the string has an odd number of characters:</p>
<pre><code>['erehtse', 'example', 't']
</code></pre>
<p>But with an even number of characters I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#37>", line 1, in <module>
pop_shift("egrets")
File "<pyshell#35>", line 6, in pop_shift
firstSol.append(testList.pop())
IndexError: pop from empty list
</code></pre>
<p>I've looked through a bunch of questions involving the pop() method, but nothing sounded quite similar to this. I've also tested this with a variety of strings and looked into the documentation for the pop method. There must be something that I'm missing. Any pointers are appreciated. This is also my first question, so if there's anything else you'd like to see, please let me know.</p>
| 1 |
2016-09-13T19:29:25Z
| 39,478,086 |
<p>Instead of <code>while len(testList) != 1</code>, you need: <code>while len(testList) > 1</code>, since <code>len(testList)</code> will jump from <code>2</code> to <code>0</code> on "even" strings:</p>
<pre><code>def pop_shift(test):
firstSol = []
secondSol = []
testList = list(test)
while len(testList) > 1:
firstSol.append(testList.pop())
secondSol.append(testList.pop(0))
return [''.join(firstSol), ''.join(secondSol), ''.join(testList)]
</code></pre>
<p>Then:</p>
<pre><code>print(pop_shift("Monkeys"))
> ['sye', 'Mon', 'k']
print(pop_shift("Monkey"))
> ['yek', 'Mon', '']
</code></pre>
| 0 |
2016-09-13T19:59:15Z
|
[
"python",
"list",
"pop"
] |
Scrapy csv file has uniform empty rows?
| 39,477,662 |
<p>here is the spider:</p>
<pre><code>import scrapy
from danmurphys.items import DanmurphysItem
class MySpider(scrapy.Spider):
name = 'danmurphys'
allowed_domains = ['danmurphys.com.au']
start_urls = ['https://www.danmurphys.com.au/dm/navigation/navigation_results_gallery.jsp?params=fh_location%3D%2F%2Fcatalog01%2Fen_AU%2Fcategories%3C%7Bcatalog01_2534374302084767_2534374302027742%7D%26fh_view_size%3D120%26fh_sort%3D-sales_value_30_days%26fh_modification%3D&resetnav=false&storeExclusivePage=false']
def parse(self, response):
urls = response.xpath('//h2/a/@href').extract()
for url in urls:
request = scrapy.Request(url , callback=self.parse_page)
yield request
def parse_page(self , response):
item = DanmurphysItem()
item['brand'] = response.xpath('//span[@itemprop="brand"]/text()').extract_first().strip()
item['name'] = response.xpath('//span[@itemprop="name"]/text()').extract_first().strip()
item['url'] = response.url
return item
</code></pre>
<p>and here are the items:</p>
<pre><code>import scrapy
class DanmurphysItem(scrapy.Item):
brand = scrapy.Field()
name = scrapy.Field()
url = scrapy.Field()
</code></pre>
<p>when I run the spider with this command :</p>
<pre><code>scrapy crawl danmurphys -o output.csv
</code></pre>
<p>the output is like this:
<a href="http://i.stack.imgur.com/p2AAL.png" rel="nofollow"><img src="http://i.stack.imgur.com/p2AAL.png" alt="enter image description here"></a></p>
| 0 |
2016-09-13T19:31:28Z
| 39,477,789 |
<p>This output shows the typical symptom of a csv file handle opened in <code>"w"</code> mode on Windows (perhaps to fix Python 3 compatibility) while omitting the <code>newline</code> argument.</p>
<p>While this has no effect on Linux/Unix based systems, on Windows an extra carriage return is emitted at the end of each row, inserting a fake blank line after every data line.</p>
<pre><code>with open("output.csv","w") as f:   # problem: newline argument omitted
    cr = csv.writer(f)
</code></pre>
<p>correct way of doing it (python 3):</p>
<pre><code>with open("output.csv","w",newline='') as f: # python 3
cr = csv.writer(f)
</code></pre>
<p>(in python 2, setting <code>"wb"</code> as open mode fixes it)</p>
<p>If the file is created by a program you cannot or do not want to modify, you can always post-process the file as follows:</p>
<pre><code>with open("output.csv","rb") as f:
    with open("output_fix.csv","w") as f2:
        f2.write(f.read().decode().replace("\r","")) # python 3
        # f2.write(f.read().replace("\r",""))        # python 2 (use instead of the line above)
</code></pre>
| 1 |
2016-09-13T19:39:29Z
|
[
"python",
"scrapy"
] |
Scrapy csv file has uniform empty rows?
| 39,477,662 |
<p>here is the spider:</p>
<pre><code>import scrapy
from danmurphys.items import DanmurphysItem
class MySpider(scrapy.Spider):
name = 'danmurphys'
allowed_domains = ['danmurphys.com.au']
start_urls = ['https://www.danmurphys.com.au/dm/navigation/navigation_results_gallery.jsp?params=fh_location%3D%2F%2Fcatalog01%2Fen_AU%2Fcategories%3C%7Bcatalog01_2534374302084767_2534374302027742%7D%26fh_view_size%3D120%26fh_sort%3D-sales_value_30_days%26fh_modification%3D&resetnav=false&storeExclusivePage=false']
def parse(self, response):
urls = response.xpath('//h2/a/@href').extract()
for url in urls:
request = scrapy.Request(url , callback=self.parse_page)
yield request
def parse_page(self , response):
item = DanmurphysItem()
item['brand'] = response.xpath('//span[@itemprop="brand"]/text()').extract_first().strip()
item['name'] = response.xpath('//span[@itemprop="name"]/text()').extract_first().strip()
item['url'] = response.url
return item
</code></pre>
<p>and here are the items:</p>
<pre><code>import scrapy
class DanmurphysItem(scrapy.Item):
brand = scrapy.Field()
name = scrapy.Field()
url = scrapy.Field()
</code></pre>
<p>when I run the spider with this command :</p>
<pre><code>scrapy crawl danmurphys -o output.csv
</code></pre>
<p>the output is like this:
<a href="http://i.stack.imgur.com/p2AAL.png" rel="nofollow"><img src="http://i.stack.imgur.com/p2AAL.png" alt="enter image description here"></a></p>
| 0 |
2016-09-13T19:31:28Z
| 39,489,264 |
<p>Thanks everybody, especially Jean-François.</p>
<p>The problem was that I had another Scrapy version (1.1.0) installed from conda for Python 3.5. Once I added Python 2.7 to the system path, the original Scrapy 1.1.2 became the default again and everything works just fine.</p>
| 0 |
2016-09-14T11:26:16Z
|
[
"python",
"scrapy"
] |
How do i mount a usb device or a hard drive partition using python
| 39,477,682 |
<p>I have been trying to write a program that mounts a device at a specified location, with everything driven by user input.</p>
<p>I have used <strong>ctypes</strong>.
The place where I'm stuck is at this part </p>
<pre><code> def mount(source, target, fs, options=''):
ret = ctypes.CDLL('libc.so.6', use_errno=True).mount(source, target, fs, 0, options)
if ret < 0:
errno = ctypes.get_errno()
raise RuntimeError("Error mounting {} ({}) on {} : {}".
format(source, fs, target, os.strerror(errno)))
</code></pre>
<p>I'm receiving an error saying <em>'Invalid argument'</em> and that is at<br>
<br><code>mount(a, b, 'ntfs', ' -w')</code></p>
<p>The following is my whole code: </p>
<pre><code>import os
import ctypes
print "Usb device management"
def mount(source, target, fs, options=''):
ret = ctypes.CDLL('libc.so.6', use_errno=True).mount(source, target, fs, 0, options)
if ret < 0:
errno = ctypes.get_errno()
raise RuntimeError("Error mounting {} ({}) on {} : {}".
format(source, fs, target, os.strerror(errno)))
def umount(source):
retu = ctypes.CDLL('libc.so.6', use_errno = True).umount(source)
if retu < 0:
errno1 = ctypes.get_errno1()
raise RuntimeError("error unmounting {} ".
format(source))
while True :
print "\n 1. Mount \n 2. Unmount \n 3. Write to file \n 4. Read File \n 5. Copy \n 6. Exit"
choice = int(raw_input('Enter the choice : '))
if choice == 1:
a = raw_input('Enter device name ')
b = raw_input('Enter mount location ')
mount(a, b, 'ntfs', ' -w')
print "USB mounted"
elif choice == 2:
print "Unmounting USB device"
c=raw_input('Enter USB device location ')
umount (c)
print "USB device unmounted"
elif choice == 3:
string = raw_input('Give input to write to file')
fd = open("%s/file.txt"%(b), 'w')
fd.write('string')
print "file Write successfull"
fd.close()
elif choice == 4:
print "Reading file"
fd = open("%s/file.txt"%(b),'r')
print "File read successfull "
fd.close()
elif choice == 5:
copy_location = raw_input('Enter location to where it has to be copied')
print "Copying file "
os.system("cp %s/file.txt %s"%(b, copy_location))
print "%s %s"%(b, copy_location)
print("File copied to location $s "%(copylocation))
if choice == 6:
print "Exit bye"
break;
</code></pre>
<p>And my system is Ubuntu 15.10.</p>
| -2 |
2016-09-13T19:32:42Z
| 39,477,913 |
<p>I would just use the command line mount. </p>
<pre><code>import os
os.system("mount /dev/sd(x) /mountpoint")
</code></pre>
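<p>Since the device and mountpoint come from <code>raw_input</code>, a slightly safer sketch uses <code>subprocess</code> instead of <code>os.system</code>, so the paths are not interpreted by a shell and a failure raises an exception:</p>

```python
import subprocess

def mount(device, mountpoint):
    # Passing an argument list (no shell) keeps user-supplied paths from
    # being shell-interpreted; check_call raises CalledProcessError on a
    # non-zero exit status instead of failing silently.
    subprocess.check_call(["mount", device, mountpoint])
```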
| 0 |
2016-09-13T19:47:43Z
|
[
"python",
"linux",
"python-2.7",
"ctypes",
"mount"
] |
AWS Lambda read contents of file in zip uploaded as source code
| 39,477,729 |
<p>I have two files:</p>
<pre><code>MyLambdaFunction.py
config.json
</code></pre>
<p>I zip those two together to create <em>MyLambdaFunction.zip</em>. I then upload that through the AWS console to my lambda function. </p>
<p>The contents of <em>config.json</em> are various environmental variables. I need a way to read the contents of the file each time the lambda function runs, and then use the data inside to set run time variables.</p>
<p>How do I get my Python Lambda function to read the contents of a file, <em>config.json</em>, that was uploaded in the zip file with the source code?</p>
| 0 |
2016-09-13T19:35:58Z
| 39,479,685 |
<p>Try this. The file you uploaded can be accessed like:</p>
<pre><code>import os

config_path = os.path.join(os.environ['LAMBDA_TASK_ROOT'], 'config.json')
</code></pre>
| 2 |
2016-09-13T22:04:38Z
|
[
"python",
"amazon-web-services",
"aws-lambda",
"amazon-lambda"
] |
AWS Lambda read contents of file in zip uploaded as source code
| 39,477,729 |
<p>I have two files:</p>
<pre><code>MyLambdaFunction.py
config.json
</code></pre>
<p>I zip those two together to create <em>MyLambdaFunction.zip</em>. I then upload that through the AWS console to my lambda function. </p>
<p>The contents of <em>config.json</em> are various environmental variables. I need a way to read the contents of the file each time the lambda function runs, and then use the data inside to set run time variables.</p>
<p>How do I get my Python Lambda function to read the contents of a file, <em>config.json</em>, that was uploaded in the zip file with the source code?</p>
| 0 |
2016-09-13T19:35:58Z
| 39,550,486 |
<p>Figured it out with the push in the right direction from @helloV.</p>
<p>At the top of the python file put "import os"</p>
<p>Inside your function handler put the following:</p>
<pre><code>configPath = os.environ['LAMBDA_TASK_ROOT'] + "/config.json"
print("Looking for config.json at " + configPath)
configContents = open(configPath).read()
configJson = json.loads(configContents)
environment = configJson['environment']
print("Environment: " + environment)
</code></pre>
<p>That bit right there, line by line, does the following:</p>
<ul>
<li>Get the path where the config.json file is stored</li>
<li>Print that path for viewing in CloudWatch logs</li>
<li>Open the file stored at that path, read the contents</li>
<li>Load the contents to a json object for easy navigating</li>
<li>Grab the value of one of the variables stored in the json</li>
<li>Print that for viewing in the CloudWatch logs</li>
</ul>
<p>Here is what the config.json looks like:</p>
<pre><code>{
"environment":"dev"
}
</code></pre>
| 0 |
2016-09-17T18:45:55Z
|
[
"python",
"amazon-web-services",
"aws-lambda",
"amazon-lambda"
] |
running python script with an ECS task
| 39,477,751 |
<p>I have an ECS task set up which, when run with a Command override of <strong>ls</strong>, produces the expected results in my CloudWatch log stream: <strong>test.py</strong>. My script <strong>test.py</strong> takes one parameter. I am wondering how I can execute this script with python3 (which exists in my container) using the command override. Essentially, I want to execute the command:</p>
<pre><code> python3 test.py hello
</code></pre>
<p>how can I do this?</p>
| 0 |
2016-09-13T19:37:09Z
| 39,478,393 |
<p>Here's how I did something similar:</p>
<p>In your Dockerfile, make the command you want to run the last instruction. In your case:</p>
<pre><code>CMD python3 test.py hello
</code></pre>
<p>To make it more extensible, use environment variables. For instance, do something like:</p>
<pre><code>CMD ["python3", "test.py"]
</code></pre>
<p>But make the parameter come from an environment variable you pass into the container definition in your task.</p>
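<p>For example, <code>test.py</code> could read its parameter from an environment variable (a sketch; <code>TEST_PARAM</code> is a made-up name, matching whatever you define in the task's container definition):</p>

```python
import os
import sys

# Prefer the environment variable set in the ECS container definition;
# fall back to a command-line argument, then to a default.
param = os.environ.get("TEST_PARAM") or (sys.argv[1] if len(sys.argv) > 1 else "hello")
print(param)
```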
| 1 |
2016-09-13T20:21:08Z
|
[
"python",
"amazon-ecs"
] |
How to rename the date stamp portion of a csv file in python
| 39,477,778 |
<p>I have created a script which should remove the date stamp portion from a file name. For example, rename the file from the existing name_2016-09-13.csv to name.csv. The problem is that the filename changes every day, so the script needs to rename the new file and overwrite the one it renamed the previous day.</p>
<pre><code>import os
import re
path = "C:\New\Test"
for filename in os.listdir(path):
if filename.startswith('name_'):
print filename
os.rename(filename, filename.translate("0123456789"))
</code></pre>
| 0 |
2016-09-13T19:38:43Z
| 39,478,526 |
<p>If your question is how to grab the 'name' portion of the filename, here is a capturing regular expression that does the trick: <code>r'(\S+)_\d\d\d\d-\d\d-\d\d\.csv'</code>. Since you <code>import re</code> but then never use it, I imagine someone told you that you can accomplish your task using regular expressions, but you don't know how. I just gave you a hint, but you will probably need to read the docs for the re module in order to find out how to use the hint. Pay particular attention to the section on "capturing parentheses" and on the functions with names like find, search, match.</p>
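<p>A sketch of how the hint could be applied (<code>pattern</code> and <code>new_name</code> are illustrative names, not part of the question's code):</p>

```python
import re

# The capture group holds the 'name' part; the date stamp is matched
# but discarded when building the new filename.
pattern = re.compile(r'(\S+)_\d\d\d\d-\d\d-\d\d\.csv')
m = pattern.match('name_2016-09-13.csv')
if m:
    new_name = m.group(1) + '.csv'
    print(new_name)  # name.csv
```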
| 0 |
2016-09-13T20:32:32Z
|
[
"python",
"csv"
] |
Python 3 Requests or urllib - how to always add a header?
| 39,477,838 |
<p>Title says it all: is there a 'best' way to always add a header to every request? I've got an internal tool that wants to send request ids to other internal tools; I'm looking for a blessed solution. I've skimmed the docs of both and it seems that this isn't a popular thing to ask for as I can't find a cookbook example.</p>
<p>I'm thinking of a couple solutions:</p>
<ol>
<li>Wrap requests in my own thin wrapper and use that. Need to teach co-developers to remember to not <code>import requests</code> but <code>import myrequestswrapper as requests</code>.</li>
<li>Monkey-patch requests. I don't like monkey patching, but maybe just this once...? I dread the time when there comes a need to <em>not</em> send a header to this one particular system.</li>
</ol>
<p>edit: Why I'm not considering a requests.Session: it stores cookies and needs to be disposed of as it keeps its connection open.</p>
| 0 |
2016-09-13T19:43:10Z
| 39,477,977 |
<p>Create a <a href="http://docs.python-requests.org/en/master/user/advanced/#session-objects" rel="nofollow">session object</a>, which is the 1st thing shown under <a href="http://docs.python-requests.org/en/master/user/advanced/" rel="nofollow">advanced usage</a>:</p>
<pre><code>s = requests.Session()
s.headers.update({'x-some-header': 'the value'})
s.get('http://httpbin.org/headers')
</code></pre>
<p>and use the session to perform requests. As you've stated that you do not wish to persist cookies between requests, you could subclass the <code>Session</code>:</p>
<pre><code>In [64]: from requests.adapters import HTTPAdapter
In [65]: from requests.cookies import cookiejar_from_dict
In [66]: class CookieMonsterSession(Session):
...:
...: def __init__(self, *args, **kwgs):
...: super(CookieMonsterSession, self).__init__(*args, **kwgs)
...: # Override default adapters with 0-pooling adapters
...: self.mount('https://', HTTPAdapter(pool_connections=1,
...: pool_maxsize=0))
...: self.mount('http://', HTTPAdapter(pool_connections=1,
...: pool_maxsize=0))
...: @property
...: def cookies(self):
...: """ Freshly baked cookies, always!"""
...: return cookiejar_from_dict({})
...: @cookies.setter
...: def cookies(self, newcookies):
...: """ OM NOM NOM NOM..."""
...: pass
...:
In [67]: s = CookieMonsterSession()
In [69]: real_s = Session()
In [70]: s.get('http://www.google.fi')
Out[70]: <Response [200]>
In [71]: s.cookies
Out[71]: <RequestsCookieJar[]>
In [72]: real_s.get('http://www.google.fi')
Out[72]: <Response [200]>
In [73]: real_s.cookies
Out[73]: <RequestsCookieJar[Cookie(version=0, name='NID', value='86=14qy...Rurx', port=None, port_specified=False, domain='.google.fi', domain_specified=True, domain_initial_dot=True, path='/', path_specified=True, secure=False, expires=1489744358, discard=False, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False)]>
</code></pre>
<p>It is unfortunate that the <code>Session</code> is by design difficult to extend and configure, so "disabling" cookies, and modifying pooling like this is a hack and prone to break, if and when <code>Session</code> is updated the slightest. Also we've disabled the 2 main features of <code>Session</code> just for persistent headers.</p>
<p>Wrapping the basic API methods is probably the cleaner and safer approach:</p>
<pre><code># customrequests.py
from functools import wraps
from requests import api as requests_api
custom_headers = {}
def _header_wrapper(f):
@wraps(f)
def wrapper(*args, **kwgs):
headers = kwgs.pop('headers', None) or {}
headers.update(custom_headers)
return f(*args, headers=headers, **kwgs)
return wrapper
request = _header_wrapper(requests_api.request)
get = _header_wrapper(requests_api.get)
options = _header_wrapper(requests_api.options)
head = _header_wrapper(requests_api.head)
post = _header_wrapper(requests_api.post)
put = _header_wrapper(requests_api.put)
patch = _header_wrapper(requests_api.patch)
delete = _header_wrapper(requests_api.delete)
</code></pre>
<p>In action:</p>
<pre><code>In [1]: import customrequests as requests
In [2]: print(requests.get('http://httpbin.org/headers').text)
{
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.11.1"
}
}
In [3]: requests.custom_headers['X-Test'] = "I'm always here"
In [4]: print(requests.get('http://httpbin.org/headers').text)
{
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.11.1",
"X-Test": "I'm always here"
}
}
</code></pre>
| 2 |
2016-09-13T19:52:12Z
|
[
"python",
"python-requests",
"urllib"
] |
MySQL On Update not triggering for Django/TastyPie REST API
| 39,477,897 |
<p>We have a resource table which has a field <code>last_updated</code> which we setup with mysql-workbench to have the following properties:</p>
<p>Datatype: <code>TIMESTAMP</code></p>
<p>NN (NotNull) is <code>checked</code></p>
<p>Default: <code>CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP</code></p>
<p>When I modify a row through the workbench and apply it, the <code>last_updated</code> field properly updates.</p>
<p>When I use the REST api we've setup, and issue a put:</p>
<pre><code>update = requests.put('http://127.0.0.1:8000/api/resources/16',
data=json.dumps(dict(status="/api/status/4", timeout=timeout_time)),
headers=HEADER)
</code></pre>
<p>I can properly change any of the values (including status and timeout, and receive a 204 response), but <code>last_updated</code> does not update.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#how-django-knows-to-update-vs-insert" rel="nofollow">Django's model documentation</a> says in this case it should be sending an <code>UPDATE</code>.</p>
<p>Anyone have any ideas on why it's missing these updates?</p>
<p>I can provide further details regarding our specific Django/tastypie setup, but as long as they are issuing an <code>UPDATE</code>, they should be triggering the databases <code>ON UPDATE</code>.</p>
| 0 |
2016-09-13T19:46:26Z
| 39,478,144 |
<p>I suspect that the UPDATE statement issued by Django may be including an assignment to the <code>last_updated</code> column. This is just a guess, there's not enough information provided.</p>
<p>But if the Django model contains the <code>last_updated</code> column, and that column is fetched from the database into the model, I believe a <strong>save()</strong> will assign a value to the <code>last_updated</code> column, in the UPDATE statement. </p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#specifying-which-fields-to-save" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/models/instances/#specifying-which-fields-to-save</a></p>
<hr>
<p>Consider the behavior when we issue an UPDATE statement like this:</p>
<pre><code> UPDATE mytable
SET last_updated = last_updated
, some_col = 'some_value'
WHERE id = 42
</code></pre>
<p>Because the UPDATE statement is assigning a value to the <code>last_updated</code> column, the automatic assignment to the timestamp column won't happen. The value assigned in the statement takes precedence.</p>
<p>To get the automatic assignment to <code>last_updated</code>, that column has to be omitted from the <code>SET</code> clause, e.g.</p>
<pre><code> UPDATE mytable
SET some_col = 'some_value'
WHERE id = 42
</code></pre>
<p>To debug this, you'd want to inspect the actual SQL statement.</p>
| 1 |
2016-09-13T20:03:08Z
|
[
"python",
"mysql",
"django",
"mysql-workbench",
"tastypie"
] |
MySQL On Update not triggering for Django/TastyPie REST API
| 39,477,897 |
<p>We have a resource table which has a field <code>last_updated</code> which we setup with mysql-workbench to have the following properties:</p>
<p>Datatype: <code>TIMESTAMP</code></p>
<p>NN (NotNull) is <code>checked</code></p>
<p>Default: <code>CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP</code></p>
<p>When I modify a row through the workbench and apply it, the <code>last_updated</code> field properly updates.</p>
<p>When I use the REST api we've setup, and issue a put:</p>
<pre><code>update = requests.put('http://127.0.0.1:8000/api/resources/16',
data=json.dumps(dict(status="/api/status/4", timeout=timeout_time)),
headers=HEADER)
</code></pre>
<p>I can properly change any of the values (including status and timeout, and receive a 204 response), but <code>last_updated</code> does not update.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#how-django-knows-to-update-vs-insert" rel="nofollow">Django's model documentation</a> says in this case it should be sending an <code>UPDATE</code>.</p>
<p>Anyone have any ideas on why it's missing these updates?</p>
<p>I can provide further details regarding our specific Django/tastypie setup, but as long as they are issuing an <code>UPDATE</code>, they should be triggering the databases <code>ON UPDATE</code>.</p>
| 0 |
2016-09-13T19:46:26Z
| 39,499,796 |
<p>With the added information from spencer7593's <a href="http://stackoverflow.com/a/39478144/3579910">answer</a>, I was able to track down how to do this through tastypie:</p>
<p>The <code>BaseModelResource.save()</code> (from <code>tastypie/resources.py</code>):</p>
<pre><code>def save(self, bundle, skip_errors=False):
if bundle.via_uri:
return bundle
self.is_valid(bundle)
if bundle.errors and not skip_errors:
raise ImmediateHttpResponse(response=self.error_response(bundle.request, bundle.errors))
# Check if they're authorized.
if bundle.obj.pk:
self.authorized_update_detail(self.get_object_list(bundle.request), bundle)
else:
self.authorized_create_detail(self.get_object_list(bundle.request), bundle)
# Save FKs just in case.
self.save_related(bundle)
# Save the main object.
obj_id = self.create_identifier(bundle.obj)
if obj_id not in bundle.objects_saved or bundle.obj._state.adding:
bundle.obj.save()
bundle.objects_saved.add(obj_id)
# Now pick up the M2M bits.
m2m_bundle = self.hydrate_m2m(bundle)
self.save_m2m(m2m_bundle)
return bundle
</code></pre>
<p>Needs to be overridden in your class, so that you can change the Django <a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#specifying-which-fields-to-save" rel="nofollow">save()</a>, which has the <code>update_fields</code> parameter we want to modify:</p>
<pre><code> if obj_id not in bundle.objects_saved or bundle.obj._state.adding:
bundle.obj.save()
</code></pre>
<p>To, for example:</p>
<pre><code>class ResourcesResource(ModelResource):
# ...
def save(self, bundle, skip_errors=False):
# ...
if obj_id not in bundle.objects_saved or bundle.obj._state.adding:
resource_fields = [field.name for field in Resources._meta.get_fields()
if not field.name in ['id', 'last_updated']]
bundle.obj.save(update_fields=resource_fields)
# ...
</code></pre>
<p>This properly excludes the <code>last_updated</code> column from the sql <code>UPDATE</code>.</p>
| 0 |
2016-09-14T21:19:44Z
|
[
"python",
"mysql",
"django",
"mysql-workbench",
"tastypie"
] |
Check for very specified numbers padding
| 39,477,931 |
<p>I am trying to check a list of items in my scene to see if they bear a 3-digit (version) padding at the end of their name - eg. <code>test_model_001</code>. If they do, that item will pass, and items that do not pass the condition will be affected by a certain function.</p>
<p>Suppose if my list of items is as follows:</p>
<ul>
<li>test_model_01</li>
<li>test_romeo_005</li>
<li>test_charlie_rig</li>
</ul>
<p>I tried and used the following code:</p>
<pre><code>eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
mo = re.sub('.*?([0-9]*)$',r'\1', item)
print mo
</code></pre>
<p>And it returns me <code>01</code> and <code>005</code> as the output, whereas I am hoping it will return just the <code>005</code>. How do I check that a name ends with a 3-digit padding? Also, is it possible to include the underscore in the check? Is that the best way?</p>
| 0 |
2016-09-13T19:48:53Z
| 39,478,021 |
<p>You can use the <code>{3}</code> to ask for 3 consecutive digits only and prepend underscore:</p>
<pre><code>eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
match = re.search(r'_([0-9]{3})$', item)
if match:
print(match.group(1))
</code></pre>
<p>This would print <code>005</code> only.</p>
| 3 |
2016-09-13T19:54:50Z
|
[
"python",
"maya"
] |
Check for very specified numbers padding
| 39,477,931 |
<p>I am trying to check a list of items in my scene to see if they bear a 3-digit (version) padding at the end of their name - eg. <code>test_model_001</code>. If they do, that item will pass, and items that do not pass the condition will be affected by a certain function.</p>
<p>Suppose if my list of items is as follows:</p>
<ul>
<li>test_model_01</li>
<li>test_romeo_005</li>
<li>test_charlie_rig</li>
</ul>
<p>I tried and used the following code:</p>
<pre><code>eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
mo = re.sub('.*?([0-9]*)$',r'\1', item)
print mo
</code></pre>
<p>And it returns me <code>01</code> and <code>005</code> as the output, whereas I am hoping it will return just the <code>005</code>. How do I check that a name ends with a 3-digit padding? Also, is it possible to include the underscore in the check? Is that the best way?</p>
| 0 |
2016-09-13T19:48:53Z
| 39,478,039 |
<pre><code>for item in eg_list:
    if re.match(".*_\d{3}$", item):
        print item.split('_')[-1]
</code></pre>
<p>This matches anything which ends in:
<code>_</code> an underscore, <code>\d</code> a digit, <code>{3}</code> three of them, and <code>$</code> the end of the line.</p>
<p><img src="https://www.debuggex.com/i/J09VgczoxIm8zauF.png" alt="Regular expression visualization"></p>
<p><a href="https://www.debuggex.com/r/J09VgczoxIm8zauF" rel="nofollow">Debuggex Demo</a></p>
<p>To print the item, we split it on <code>_</code> underscores and take the last value, index <code>[-1]</code>.</p>
<hr>
<p>The reason <code>.*?([0-9]*)$</code> doesn't work is because <code>[0-9]*</code> matches 0 or more times, so it can match nothing. This means it will also match <code>.*?$</code>, which will match any string.</p>
<p>See the example on <a href="https://regex101.com/r/lS1rG6/1" rel="nofollow">regex101.com</a></p>
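<p>You can see the empty match directly with a quick check using the strings from the question:</p>
<pre><code>import re

# the group matches zero digits, so .*? is free to consume the whole name
print(re.sub(r'.*?([0-9]*)$', r'\1', 'test_charlie_rig'))  # prints an empty line
print(re.sub(r'.*?([0-9]*)$', r'\1', 'test_model_01'))     # 01
</code></pre>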
| 1 |
2016-09-13T19:56:28Z
|
[
"python",
"maya"
] |
Check for very specified numbers padding
| 39,477,931 |
<p>I am trying to check a list of items in my scene to see if they bear a 3-digit (version) padding at the end of their name - e.g. <code>test_model_001</code>. If they do, that item will pass; items that do not pass the condition will be affected by a certain function.</p>
<p>Suppose if my list of items is as follows:</p>
<ul>
<li>test_model_01</li>
<li>test_romeo_005</li>
<li>test_charlie_rig</li>
</ul>
<p>I tried and used the following code:</p>
<pre><code>eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
    mo = re.sub('.*?([0-9]*)$', r'\1', item)
    print mo
</code></pre>
<p>And it returns me <code>01</code> and <code>005</code> as the output, whereas I am hoping it will return just the <code>005</code>. How do I check that the name ends with 3-digit padding? Also, is it possible to include the underscore in the check? Is that the best way?</p>
| 0 |
2016-09-13T19:48:53Z
| 39,478,042 |
<p>The asterisk after the <code>[0-9]</code> specification means that you are expecting any number of occurrences (including zero) of the digits 0-9. Technically this expression matches <code>test_charlie_rig</code> as well. You can test that out here: <a href="http://pythex.org/" rel="nofollow">http://pythex.org/</a></p>
<p>Replacing the asterisk with a {3} says that you want 3 digits.</p>
<pre><code>.*?([0-9]{3})$
</code></pre>
<p>If you know your format will be close to the examples you showed, you can be a bit more explicit with the regex pattern to prevent even more accidental matches</p>
<pre><code>^.+_(\d{3})$
</code></pre>
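<p>A quick check of the stricter pattern against the question's list (illustrative only):</p>
<pre><code>import re

eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
    m = re.match(r'^.+_(\d{3})$', item)
    if m:
        print(m.group(1))  # only 005 is printed
</code></pre>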
| 1 |
2016-09-13T19:56:44Z
|
[
"python",
"maya"
] |
Check for very specified numbers padding
| 39,477,931 |
<p>I am trying to check a list of items in my scene to see if they bear a 3-digit (version) padding at the end of their name - e.g. <code>test_model_001</code>. If they do, that item will pass; items that do not pass the condition will be affected by a certain function.</p>
<p>Suppose if my list of items is as follows:</p>
<ul>
<li>test_model_01</li>
<li>test_romeo_005</li>
<li>test_charlie_rig</li>
</ul>
<p>I tried and used the following code:</p>
<pre><code>eg_list = ['test_model_01', 'test_romeo_005', 'test_charlie_rig']
for item in eg_list:
    mo = re.sub('.*?([0-9]*)$', r'\1', item)
    print mo
</code></pre>
<p>And it returns me <code>01</code> and <code>005</code> as the output, whereas I am hoping it will return just the <code>005</code>. How do I check that the name ends with 3-digit padding? Also, is it possible to include the underscore in the check? Is that the best way?</p>
| 0 |
2016-09-13T19:48:53Z
| 39,481,623 |
<p>I usually don't like regex unless it's needed. This should work and be more readable.</p>
<pre><code>def name_validator(name, padding_count=3):
    number = name.split("_")[-1]
    # the last chunk must be all digits and exactly `padding_count` long
    # (comparing against zfill() would wrongly accept longer numbers)
    if number.isdigit() and len(number) == padding_count:
        return True
    return False

name_validator("test_model_01")     # Returns False
name_validator("test_romeo_005")    # Returns True
name_validator("test_charlie_rig")  # Returns False
</code></pre>
| 1 |
2016-09-14T02:27:59Z
|
[
"python",
"maya"
] |
SyntaxError: invalid syntax - python 2.7 - Odoo v9 community
| 39,478,005 |
<p>I have this code which checks if there's a provider specified, and a pem key, in order to send over an xml to a server:</p>
<pre><code>@api.multi
def send_xml_file(self, envio_dte=None, file_name="envio", company_id=False):
    if not company_id.dte_service_provider:
        raise UserError(_("Not Service provider selected!"))
    try:
        signature_d = self.get_digital_signature_pem(
            company_id)
        seed = self.get_seed(company_id)
        template_string = self.create_template_seed(seed)
        seed_firmado = self.sign_seed(
            template_string, signature_d['priv_key'],
            signature_d['cert'])
        token = self.get_token(seed_firmado, company_id)
    _logger.info(_("Token is: {}").format(token))
    except:
        raise Warning(connection_status[response.e])
    return {'sii_result': 'NoEnviado'}
</code></pre>
<p>On this line: <code>_logger.info(_("Token is: {}").format(token))</code> is throwing me <code>SyntaxError: invalid syntax</code> this is my traceback:</p>
<pre><code>Traceback (most recent call last):
  File "/home/kristian/.virtualenvs/odoov9/lib/python2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
    execute(self.server.app)
  File "/home/kristian/.virtualenvs/odoov9/lib/python2.7/site-packages/werkzeug/serving.py", line 165, in execute
    application_iter = app(environ, start_response)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/service/server.py", line 246, in app
    return self.app(e, s)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/service/wsgi_server.py", line 184, in application
    return application_unproxied(environ, start_response)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/service/wsgi_server.py", line 170, in application_unproxied
    result = handler(environ, start_response)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 1492, in __call__
    self.load_addons()
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/http.py", line 1513, in load_addons
    m = __import__('openerp.addons.' + module)
  File "/home/kristian/odoov9/odoo-9.0c-20160712/openerp/modules/module.py", line 61, in load_module
    mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
  File "/home/kristian/odoov9/solti/l10n_cl_dte/__init__.py", line 2, in <module>
    from . import models, controllers, wizard
  File "/home/kristian/odoov9/solti/l10n_cl_dte/models/__init__.py", line 2, in <module>
    from . import invoice, partner, company, payment_term, sii_regional_offices
  File "/home/kristian/odoov9/solti/l10n_cl_dte/models/invoice.py", line 500
    _logger.info(_("Token is: {}").format(token))
          ^
SyntaxError: invalid syntax
</code></pre>
<p>I've checked for missing parenthesis, and stuff like that, but I still cannot figure it out.</p>
<p>Any ideas on this?</p>
<p>Thanks in advance!</p>
| 0 |
2016-09-13T19:53:33Z
| 39,498,297 |
<p>The logger call needs to be indented so that it is inside the <code>try</code> block.</p>
<pre><code>@api.multi
def send_xml_file(self, envio_dte=None, file_name="envio", company_id=False):
    if not company_id.dte_service_provider:
        raise UserError(_("Not Service provider selected!"))
    try:
        signature_d = self.get_digital_signature_pem(
            company_id)
        seed = self.get_seed(company_id)
        template_string = self.create_template_seed(seed)
        seed_firmado = self.sign_seed(
            template_string, signature_d['priv_key'],
            signature_d['cert'])
        token = self.get_token(seed_firmado, company_id)
        _logger.info(_("Token is: {}").format(token))
    except:
        # This is probably not doing what you expect:
        # raise will stop program execution, so the
        # return below will not actually run.
        raise Warning(connection_status[response.e])
    return {'sii_result': 'NoEnviado'}
</code></pre>
| 1 |
2016-09-14T19:32:59Z
|
[
"python",
"logging",
"openerp",
"odoo-9"
] |
launch selenium from python on ubuntu
| 39,478,101 |
<p>I have the following script</p>
<pre><code>from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://localhost:8000')
assert 'Django' in browser.title
</code></pre>
<p>I get the following error</p>
<pre><code>$ python3 functional_tests.py
Traceback (most recent call last):
  File "functional_tests.py", line 3, in <module>
    browser = webdriver.Firefox()
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/webdriver.py", line 80, in __init__
    self.binary, timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 52, in __init__
    self.binary.launch_browser(self.profile, timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser
    self._wait_until_connectable(timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 99, in _wait_until_connectable
    "The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
<p><code>pip3 list</code> shows <code>selenium (2.53.6)</code>.</p>
<p><code>firefox -v</code> shows <code>Mozilla Firefox 47.0</code>.</p>
| 1 |
2016-09-13T20:00:05Z
| 39,479,712 |
<p>The latest version of Firefox does not work properly with Selenium 2.x. Try version 46 or 45.</p>
<p>You can download them here: ftp.mozilla.org/pub/firefox/releases</p>
<p>or <code>sudo apt-get install firefox=45.0.2+build1-0ubuntu1</code></p>
| 0 |
2016-09-13T22:08:11Z
|
[
"python",
"selenium",
"ubuntu"
] |
launch selenium from python on ubuntu
| 39,478,101 |
<p>I have the following script</p>
<pre><code>from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://localhost:8000')
assert 'Django' in browser.title
</code></pre>
<p>I get the following error</p>
<pre><code>$ python3 functional_tests.py
Traceback (most recent call last):
  File "functional_tests.py", line 3, in <module>
    browser = webdriver.Firefox()
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/webdriver.py", line 80, in __init__
    self.binary, timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 52, in __init__
    self.binary.launch_browser(self.profile, timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser
    self._wait_until_connectable(timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 99, in _wait_until_connectable
    "The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
<p><code>pip3 list</code> shows <code>selenium (2.53.6)</code>.</p>
<p><code>firefox -v</code> shows <code>Mozilla Firefox 47.0</code>.</p>
| 1 |
2016-09-13T20:00:05Z
| 39,536,091 |
<p>I struggled with this problem as well, and I was unhappy with having to use older versions of Firefox. Here's my solution that uses the latest version of Firefox. It involves several steps, however.</p>
<p><strong>Step 1.</strong> Download v0.9.0 <em>Marionette</em>, the next generation of FirefoxDriver, from this location: <a href="https://github.com/mozilla/geckodriver/releases/download/v0.9.0/geckodriver-v0.9.0-linux64.tar.gz" rel="nofollow">https://github.com/mozilla/geckodriver/releases/download/v0.9.0/geckodriver-v0.9.0-linux64.tar.gz</a></p>
<p><strong>Step 2.</strong> Extract the file to a desired folder, and rename it to "wires". In my case I created a folder named "add_to_system_path" under Documents. So the file is in Documents/add_to_system_path/wires (also make sure that the wires file is executable under its properties)</p>
<p><strong>Step 3.</strong> Create a file named ".pam_environment" under your home folder, and then adding this line on it and save</p>
<p><code>PATH DEFAULT=${PATH}:/absolute/path/to/the/folder/where/wires/is/saved
</code></p>
<p>This tells Ubuntu to append the listed directory in .pam_environment to your system path.</p>
<p><strong>Step 4.</strong> Save the file, log out of your user session, and log back in. This is necessary so that the newly added system path is recognized by Ubuntu.</p>
<p><strong>Step 5.</strong> Use the code below to instantiate the browser instance:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

capabilities = DesiredCapabilities.FIREFOX
capabilities["marionette"] = True
browser = webdriver.Firefox(capabilities=capabilities)
browser.get('http://your-target-url')
</code></pre>
<p>Firefox should now be able to instantiate as usual.</p>
| 3 |
2016-09-16T16:21:41Z
|
[
"python",
"selenium",
"ubuntu"
] |
python scikit-learn TfidfVectorizer: why ValueError when input is 2 single-character strings?
| 39,478,120 |
<p>I am trying to run something like this:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer

test_text = ["q", "r"]
vect = TfidfVectorizer(min_df=1,
                       stop_words=None,
                       lowercase=False)
tfidf = vect.fit_transform(test_text)
print vect.get_feature_names()
</code></pre>
<p>But get a ValueError:</p>
<p><code>ValueError: empty vocabulary; perhaps the documents only contain stop words</code></p>
<p>Does guidance exist on what limitations or constraints for the input are? I was not able to find anything on the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html" rel="nofollow">TfidfVectorizer doc page</a>. I tried to trace it, and got to the <code>_count_vocab</code> function, but I have trouble reading it. Also, when I change the strings to length 2 or more, code runs fine.</p>
| 0 |
2016-09-13T20:01:15Z
| 39,478,863 |
<p>The error is not actually caused by <code>min_df</code>: the default <code>token_pattern</code> of <code>TfidfVectorizer</code> is <code>(?u)\b\w\w+\b</code>, which only matches tokens of two or more word characters. Single-character documents like <code>"q"</code> and <code>"r"</code> therefore yield no tokens at all, and the vocabulary ends up empty regardless of <code>min_df</code>. Passing <code>token_pattern=r"(?u)\b\w+\b"</code> (one or more word characters) keeps single-character words. This is also why inputs of length 2 or more run fine.</p>
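<p>A quick check with the stdlib <code>re</code> module illustrates the tokenization difference; the first pattern below is the one scikit-learn documents as its default (verify against your installed version):</p>
<pre><code>import re

default_pattern = r"(?u)\b\w\w+\b"  # requires two or more word characters
relaxed_pattern = r"(?u)\b\w+\b"    # accepts single-character tokens

print(re.findall(default_pattern, "q r"))  # []
print(re.findall(relaxed_pattern, "q r"))  # ['q', 'r']
</code></pre>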
| 0 |
2016-09-13T20:57:54Z
|
[
"python",
"scikit-learn",
"nlp",
"tf-idf"
] |
Calculate the number of combinations of unique positive integers with minimum and maximum differences between each other?
| 39,478,125 |
<p>How do I write a Python program to calculate the number of combinations of unique sorted positive integers over a range of integers that can be selected where the minimum difference between each of the numbers in the set is one number and the maximum difference is another number?</p>
<p>For instance, if I want to calculate the number of ways I can select 6 numbers from the positive integers from 1-50 such that the minimum difference between each number is 4 and the maximum difference between each number is 7, I would want to count the combination {1,6,12,18,24,28} since the minimum difference is 4 and the maximum difference is 6, but I would not want to count combinations like {7,19,21,29,41,49} since the minimum difference is 2 and the maximum difference is 12.</p>
<p>I have the following code so far, but the problem is that it has to loop through every combination, which takes an extremely long time in many cases.</p>
<pre><code>import itertools

def min_max_differences(integer_list):
    i = 1
    diff_min = max(integer_list)
    diff_max = 1
    while i < len(integer_list):
        diff = (integer_list[i] - integer_list[i-1])
        if diff < diff_min:
            diff_min = diff
        if diff > diff_max:
            diff_max = diff
        i += 1
    return (diff_min, diff_max)

def total_combinations(lower_bound, upper_bound, min_difference, max_difference, elements_selected):
    numbers_range = list(range(lower_bound, upper_bound + 1, 1))
    all_combos = itertools.combinations(numbers_range, elements_selected)
    min_max_diff_combos = 0
    for c in all_combos:
        if min_max_differences(c)[0] >= min_difference and min_max_differences(c)[1] <= max_difference:
            min_max_diff_combos += 1
    return min_max_diff_combos
</code></pre>
<p>I do not have a background in combinatorics, but I am guessing there is a much more algorithmically efficient way to do this using some combinatorial methods.</p>
| 0 |
2016-09-13T20:01:32Z
| 39,480,253 |
<p>You can use a recursive function with caching (memoization) to get your answer.
This method works even for a large array, because the same positions recur many times with the same parameters.<br>
Here is the code (forgive me if I made any mistakes in Python, as I don't normally use it).
If there is any flaw in the logic, please let me know:</p>
<pre><code># function to get the number of ways to select {target} numbers from the
# sorted list {numbers} with minimum difference {min_gap} and maximum
# difference {max_gap}, starting from position {p}, with the help of caching
cache = {}

def Combinations(numbers, target, min_gap, max_gap, p):
    if target == 1:
        return 1
    # a tuple is a safer cache key than packing the values into one integer
    key = (target, min_gap, max_gap, p)
    if key in cache:
        return cache[key]
    ans = 0
    # current start value
    pivot = numbers[p]
    p += 1
    # increase the position until you reach the minimum gap
    while p < len(numbers) and numbers[p] - pivot < min_gap:
        p += 1
    # take all the values whose gap lies in min_gap <--> max_gap
    while p < len(numbers) and numbers[p] - pivot <= max_gap:
        ans += Combinations(numbers, target - 1, min_gap, max_gap, p)
        p += 1
    # store the answer for further inquiries
    cache[key] = ans
    return ans

# any range of numbers (must be SORTED, as you asked)
numbers = [i + 1 for i in range(0, 50)]
# number of numbers to select
count = 6
# minimum difference
min_gap = 4
# maximum difference
max_gap = 7

ans = 0
for i in range(0, len(numbers)):
    ans += Combinations(numbers, count, min_gap, max_gap, i)
print ans
</code></pre>
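<p>On Python 3 the hand-rolled cache can be replaced with <code>functools.lru_cache</code>; since the number list and the gap bounds are fixed for a given call, the cache only needs to be keyed on the position and the remaining count. A sketch of the same algorithm (not the answer's original code):</p>
<pre><code>from functools import lru_cache

def count_selections(numbers, k, gap_min, gap_max):
    numbers = tuple(numbers)

    @lru_cache(maxsize=None)
    def ways(p, remaining):
        # ways to extend a selection whose last element is numbers[p]
        if remaining == 1:
            return 1
        total = 0
        for q in range(p + 1, len(numbers)):
            gap = numbers[q] - numbers[p]
            if gap < gap_min:
                continue
            if gap > gap_max:
                break  # sorted input: later gaps can only be larger
            total += ways(q, remaining - 1)
        return total

    return sum(ways(p, k) for p in range(len(numbers)))

print(count_selections(range(1, 51), 6, 4, 7))  # 23040
</code></pre>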
| 1 |
2016-09-13T23:07:19Z
|
[
"python",
"algorithm",
"combinatorics",
"itertools"
] |
Calculate the number of combinations of unique positive integers with minimum and maximum differences between each other?
| 39,478,125 |
<p>How do I write a Python program to calculate the number of combinations of unique sorted positive integers over a range of integers that can be selected where the minimum difference between each of the numbers in the set is one number and the maximum difference is another number?</p>
<p>For instance, if I want to calculate the number of ways I can select 6 numbers from the positive integers from 1-50 such that the minimum difference between each number is 4 and the maximum difference between each number is 7, I would want to count the combination {1,6,12,18,24,28} since the minimum difference is 4 and the maximum difference is 6, but I would not want to count combinations like {7,19,21,29,41,49} since the minimum difference is 2 and the maximum difference is 12.</p>
<p>I have the following code so far, but the problem is that it has to loop through every combination, which takes an extremely long time in many cases.</p>
<pre><code>import itertools

def min_max_differences(integer_list):
    i = 1
    diff_min = max(integer_list)
    diff_max = 1
    while i < len(integer_list):
        diff = (integer_list[i] - integer_list[i-1])
        if diff < diff_min:
            diff_min = diff
        if diff > diff_max:
            diff_max = diff
        i += 1
    return (diff_min, diff_max)

def total_combinations(lower_bound, upper_bound, min_difference, max_difference, elements_selected):
    numbers_range = list(range(lower_bound, upper_bound + 1, 1))
    all_combos = itertools.combinations(numbers_range, elements_selected)
    min_max_diff_combos = 0
    for c in all_combos:
        if min_max_differences(c)[0] >= min_difference and min_max_differences(c)[1] <= max_difference:
            min_max_diff_combos += 1
    return min_max_diff_combos
</code></pre>
<p>I do not have a background in combinatorics, but I am guessing there is a much more algorithmically efficient way to do this using some combinatorial methods.</p>
| 0 |
2016-09-13T20:01:32Z
| 39,480,350 |
<p>Here is a very simple (and non-optimized) recursive approach:</p>
<h3>Code</h3>
<pre><code>from time import time

""" PARAMETERS """
SET = range(50)  # set of elements to choose from
N = 6            # N elements to choose
MIN_GAP = 4      # gaps
MAX_GAP = 7      # ""

def count(N, CHOSEN=[]):
    """ assumption: N > 0 at start """
    if N == 0:
        return 1
    else:
        return sum([count(N - 1, CHOSEN + [val])
                    for val in SET if (val not in CHOSEN)
                    and ((not CHOSEN) or ((val - CHOSEN[-1]) >= MIN_GAP))
                    and ((not CHOSEN) or ((val - CHOSEN[-1]) <= MAX_GAP))])

start_time = time()
count_ = count(N)
print('used time in secs: ', time() - start_time)
print('# solutions: ', count_)
</code></pre>
<h3>Output</h3>
<pre><code>('used time in secs: ', 0.1174919605255127)
('# solutions: ', 23040)
</code></pre>
<h3>Remarks</h3>
<ul>
<li>It outputs the same solution as Ayman's approach</li>
<li><strong>Ayman's approach is much more powerful (in terms of asymptotical speed)</strong></li>
</ul>
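<p>As a sanity check, the same count can be derived directly from the gaps: a valid 6-set is a start value plus five consecutive gaps, each in {4, 5, 6, 7}, and any start from 1 to 50 - sum(gaps) keeps the largest element within 50 (a verification sketch, not part of either answer):</p>
<pre><code>from itertools import product

# each of the 4**5 gap tuples contributes (50 - sum(gaps)) possible start values
total = sum(50 - sum(gaps) for gaps in product(range(4, 8), repeat=5))
print(total)  # 23040
</code></pre>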
| 0 |
2016-09-13T23:20:02Z
|
[
"python",
"algorithm",
"combinatorics",
"itertools"
] |
Chrome webdriver cannot connect to the service chromedriver.exe on Windows
| 39,478,170 |
<p>Hello!</p>
<p>I am currently using Selenium with Python on Windows 7, and I tried to use the Chrome webdriver for the hide function <code>--no-startup-window</code>. After I installed Chrome (x86), copied chromedriver.exe on path <code>C:\Python27\Scripts\</code> and added it on PATH environment, I tried to launch it via the following code:</p>
<pre><code>opt = Options()
opt.add_argument("--no-startup-window")
driver = webdriver.Chrome(chrome_options=opt)
</code></pre>
<p>However, I have the following error when I execute it:</p>
<pre><code>(env) c:\opt\project\auto\>python program_test.py
Traceback (most recent call last):
  File "program_test.py", line 234, in <module>
    main()
  File "program_test.py", line 36, in main
    initChromeWebDriver()
  File "c:\opt\project\auto\common\driver.py", line 32, in initChromeWebDriver
    service_log_path=)
  File "c:\opt\project\auto\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 61, in __init__
    self.service.start()
  File "c:\opt\project\auto\lib\site-packages\selenium\webdriver\common\service.py", line 88, in start
    raise WebDriverException("Can not connect to the Service %s" % self.path)
selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service chromedriver
</code></pre>
<p>Note: I am also using a <code>virtualenv</code>, so I copied chromedriver.exe into its <code>Scripts</code> folder as well. Any idea about the issue here?</p>
| 0 |
2016-09-13T20:04:41Z
| 39,478,225 |
<p>First, instead of the <strong>Options()</strong> class, you should use the <strong>webdriver.ChromeOptions()</strong> method to get the result you want; secondly, you should specify the <strong>path to the chromedriver</strong> executable installed on your computer.</p>
<p>For example, put the chromedriver.exe file on drive C:\ and use:</p>
<pre><code>chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-startup-window")
driver = webdriver.Chrome("C:\\chromedriver.exe", chrome_options=chrome_options)
driver.get("http://www.google.com")
</code></pre>
| 2 |
2016-09-13T20:10:01Z
|
[
"python",
"google-chrome",
"selenium",
"selenium-webdriver",
"webdriver"
] |
Trouble running lighttpd with webpy
| 39,478,190 |
<p>This is what my lighttpd.conf file looks like:</p>
<pre><code>server.modules = (
"mod_access",
"mod_alias",
"mod_compress",
"mod_accesslog",
)
server.document-root = "/home/ashley/leagueratings"
server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
server.errorlog = "/var/log/lighttpd/error.log"
server.pid-file = "/var/run/lighttpd.pid"
server.username = "www-data"
server.groupname = "www-data"
## Use ipv6 if available
#include_shell "/usr/share/lighttpd/use-ipv6.pl"
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
include_shell "/usr/share/lighttpd/create-mime.assign.pl"
include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
server.modules += ( "mod_fastcgi" )
server.modules += ( "mod_rewrite" )
fastcgi.server = ( "/leagueratings.py" =>
("/" => ( "socket" => "/tmp/fastcgi.socket",
"bin-path" => "/home/ashley/leagueratings.py",
"max-procs" => 1,
"bin-environment" => (
"REAL_SCRIPT_NAME" => ""
),
"check-local" => "disable"
))
)
url.rewrite-once = (
"^/favicon.ico$" => "/static/favicon.ico",
"^/static/(.*)$" => "/static/$1",
"^/(.*)$" => "/leagueratings.py/$1",
)
</code></pre>
<p>I've done both</p>
<pre><code>chown www-data:www-data leagueratings.py
</code></pre>
<p>and</p>
<pre><code>chmod +x leagueratings.py
</code></pre>
<p>But I cannot connect to my website. The default site worked previously, before I changed lighttpd.conf.</p>
<p>This is the error log</p>
<pre><code>2016-09-13 19:37:35: (log.c.164) server started
2016-09-13 19:49:49: (server.c.1558) server stopped by UID = 0 PID = 1
2016-09-13 19:49:50: (log.c.164) server started
2016-09-13 19:49:50: (mod_fastcgi.c.1112) the fastcgi-backend /home/ashley/leagueratings.py failed to start:
2016-09-13 19:49:50: (mod_fastcgi.c.1116) child exited with status 2 /home/ashley/leagueratings.py
2016-09-13 19:49:50: (mod_fastcgi.c.1119) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version.
If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
2016-09-13 19:49:50: (mod_fastcgi.c.1406) [ERROR]: spawning fcgi failed.
2016-09-13 19:49:50: (server.c.1022) Configuration of plugins failed. Going down.
</code></pre>
<p>Please help me; I've been trying to get my web.py server up and running for production for a long time now. I've also tried apache2 and nginx but nothing seems to work. Thank you.</p>
| 1 |
2016-09-13T20:06:53Z
| 39,485,273 |
<blockquote>
<p>2016-09-13 19:49:50: (mod_fastcgi.c.1112) the fastcgi-backend /home/ashley/leagueratings.py failed to start:</p>
</blockquote>
<p>This does not appear to be a web server issue.</p>
<p>Have you tried manually starting up leagueratings.py? It is possible that a needed python module is missing (and needs to be installed), or that there is a syntax error in the python script.</p>
| 0 |
2016-09-14T07:57:17Z
|
[
"python",
"lighttpd",
"web.py"
] |
Python while loop closing console
| 39,478,204 |
<p>My console is closing every time I start the loop and I don't get why...</p>
<pre><code>index = ""
while not index:
    index = int(input("Enter the index that you want: "))
</code></pre>
| -2 |
2016-09-13T20:08:02Z
| 39,478,311 |
<p>I think your issue is that your loop executes only once and exits after that (<em>that's what I can tell based on your code</em>).</p>
<p><strong>The reason is:</strong> at the start, your index is <code>""</code>. Hence <code>not index</code> evaluates to <code>True</code>, since Python considers an empty string to be falsy. But within the loop you assign a value to index, so on the next check <code>not index</code> returns <code>False</code> and the loop ends.</p>
<p>Below is a sample of how it works:</p>
<pre><code>>>> index = ""
>>> not index
True    <--- True, since the string is empty
>>> index = 3
>>> not index
False   <--- False, since index now holds a value
</code></pre>
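<p>As a side note, <code>int(input(...))</code> will raise a <code>ValueError</code> on empty or non-numeric input before the loop condition is ever re-checked. A more defensive sketch (the input source is replaced by an injected iterator purely so the loop can be exercised without a console; swap it back to <code>input()</code> in real code):</p>
<pre><code>def read_index(values):
    """Keep consuming values until one parses as an int; mirrors the while loop."""
    it = iter(values)
    index = None
    while index is None:
        raw = next(it)
        try:
            index = int(raw)
        except ValueError:
            continue  # bad or empty input: ask again
    return index

print(read_index(["", "abc", "3"]))  # 3
</code></pre>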
| 1 |
2016-09-13T20:15:47Z
|
[
"python",
"loops",
"console"
] |
Python while loop closing console
| 39,478,204 |
<p>My console is closing every time I start the loop and I don't get why...</p>
<pre><code>index = ""
while not index:
    index = int(input("Enter the index that you want: "))
</code></pre>
| -2 |
2016-09-13T20:08:02Z
| 39,478,443 |
<p>a) Your code seems fine:</p>
<pre><code>$ python
Python 2.7.10 (default, Oct 14 2015, 16:09:02)
[GCC 5.2.1 20151010] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> index = ""
>>> while not index:
...     index = int(input("blabla: "))
...
blabla: 2
>>> print(index)
2
</code></pre>
<p>Or with python 3:</p>
<pre><code>$ python3
Python 3.4.3+ (default, Oct 14 2015, 16:03:50)
[GCC 5.2.1 20151010] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> index = ""
>>> while not index:
...     index = int(input("bla: "))
...
bla: 4
>>> print(index)
4
</code></pre>
<p>b) "My console is closing": this seems to be the actual problem. Can you specify your operating system and terminal? How did you invoke the Python interpreter? I guess you are on Windows? Start a console by hitting the Windows key + R and typing "cmd", then run Python from there so the window stays open and you can see the output.</p>
| 0 |
2016-09-13T20:25:18Z
|
[
"python",
"loops",
"console"
] |
nested absolute_imports python
| 39,478,271 |
<p>I am a bit confused on getting imports done properly. I have a project as follows</p>
<pre><code>mainpackage/
    packageone/
        __init__.py
        file.py
        file2.py
        file3.py
        subpackageone/
            __init__.py
            submodule1.py
            submodule2.py
    packagetwo/
        __init__.py
        file.py
        file2.py
        file3.py
        subpackagetwo/
            __init__.py
            submodule1.py
            submodule2.py
</code></pre>
<p>Inside the <code>packageone/subpackageone/__init__.py</code> there is an absolute import as follows:</p>
<pre><code>from __future__ import absolute_import
from subpackageone.submodule1 import ClassSubmodule1
</code></pre>
<p>Where ClassSubmodule1 is a class that i wish to use inside </p>
<pre><code>packagetwo/subpackagetwo/submodule1.py
</code></pre>
<p>What needs to go inside the <code>packageone/__init__.py</code> file (currently it's empty) in order for me to be able to import that class inside <code>packagetwo/subpackagetwo/submodule1.py</code>? Also, can someone show how I would import the class (give the import code)?</p>
<p>Thanks a lot!</p>
| 0 |
2016-09-13T20:13:02Z
| 39,479,751 |
<p>Figured it out. Basically, the import here:</p>
<pre><code>from __future__ import absolute_import
from subpackageone.submodule1 import ClassSubmodule1
</code></pre>
<p>needs to be moved up to <code>packageone/__init__.py</code>. After I did that, I was able to import it into packagetwo with no problem.</p>
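<p>For illustration, here is a self-contained sketch that builds the relevant slice of the layout in a temporary directory and shows the re-export working; the file contents are hypothetical stand-ins, and on Python 3 the relative form <code>from .subpackageone.submodule1 import ...</code> is the safest spelling inside <code>packageone/__init__.py</code>:</p>
<pre><code>import os
import sys
import tempfile

root = tempfile.mkdtemp()
sub = os.path.join(root, "packageone", "subpackageone")
os.makedirs(sub)

# subpackageone/submodule1.py defines the class we want to expose
with open(os.path.join(sub, "submodule1.py"), "w") as f:
    f.write("class ClassSubmodule1(object):\n    pass\n")
with open(os.path.join(sub, "__init__.py"), "w") as f:
    f.write("")

# re-export the class at the top of packageone
with open(os.path.join(root, "packageone", "__init__.py"), "w") as f:
    f.write("from .subpackageone.submodule1 import ClassSubmodule1\n")

sys.path.insert(0, root)
from packageone import ClassSubmodule1
print(ClassSubmodule1.__name__)  # ClassSubmodule1
</code></pre>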
| 0 |
2016-09-13T22:12:11Z
|
[
"python",
"python-import"
] |
How to convert convert csv to list of dictionaries (UTF-8)?
| 39,478,281 |
<p>I have a csv file (in.csv)</p>
<pre><code>col1, col2, col3
Kapitän, Böse, Füller
...
</code></pre>
<p>and I want to create a list of dictionaries:</p>
<pre><code>a = [{'col1': 'Kapitän', 'col2': 'Böse', 'col3': 'Füller'},{...}]
</code></pre>
<p>With Python 3 it's working with</p>
<pre><code>import codecs
import csv

with codecs.open('in.csv', encoding='utf-8') as f:
    a = [{k: v for k, v in row.items()}
         for row in csv.DictReader(f, skipinitialspace=True)]
print(a)
</code></pre>
<p>(I've got this code from <a href="http://stackoverflow.com/questions/21572175/convert-csv-file-to-list-of-dictionaries">convert csv file to list of dictionaries</a>).</p>
<p>Unfortunately I need this for Python 2, but I can't get it to work there.</p>
<p>I tried to understand <a href="https://docs.python.org/2.7/howto/unicode.html" rel="nofollow">https://docs.python.org/2.7/howto/unicode.html</a>, but I think I'm too stupid, because </p>
<pre><code>import codecs

f = codecs.open('in.csv', encoding='utf-8')
for line in f:
    print repr(line)
</code></pre>
<p>gives me</p>
<pre><code>u'col1,col2,col3\n'
u'K\xe4pten,B\xf6se,F\xfcller\n'
u'\n'
</code></pre>
<p>Do you have a solution for Python 2?</p>
<p>There is a similar problem solved here: <a href="http://stackoverflow.com/questions/6740918/creating-a-dictionary-from-a-csv-file">Creating a dictionary from a csv file?</a> But with the marked solution I get <code>('K\xc3\xa4pten', 'B\xc3\xb6se', 'F\xc3\xbcller')</code>. Maybe it's easy to edit it for getting <code>[{u'col1': u'K\xe4pten', u'col2': u'B\xf6se', u'col3': u'F\xfcller'}]</code>?</p>
| 0 |
2016-09-13T20:13:42Z
| 39,478,500 |
<p>For printing, use <code>print line</code> instead of <code>print repr(line)</code>.</p>
<p>For the dict I use this solution:</p>
<p><a href="https://docs.python.org/2/library/csv.html#csv-examples" rel="nofollow">https://docs.python.org/2/library/csv.html#csv-examples</a></p>
<p>The csv module doesn't directly support reading and writing Unicode:</p>
<pre><code>import codecs
import csv

def utf_8_encoder(unicode_csv_data):
    for line in unicode_csv_data:
        yield line.encode('utf-8')

def unicode_csv_reader(unicode_csv_data, dialect=csv.excel, **kwargs):
    # csv.py doesn't do Unicode; encode temporarily as UTF-8:
    csv_reader = csv.reader(utf_8_encoder(unicode_csv_data),
                            dialect=dialect, **kwargs)
    for row in csv_reader:
        # decode UTF-8 back to Unicode, cell by cell:
        yield [unicode(cell, 'utf-8') for cell in row]

with codecs.open('in.csv', encoding='utf-8') as f:
    reader = unicode_csv_reader(f)
    keys = [k.strip() for k in reader.next()]
    result = []
    for row in reader:
        d = dict(zip(keys, row))
        result.append(d)

for d in result:
    for k, v in d.iteritems():
        print k, v
print result
</code></pre>
| 0 |
2016-09-13T20:29:51Z
|
[
"python",
"list",
"python-2.7",
"csv",
"dictionary"
] |
How to convert convert csv to list of dictionaries (UTF-8)?
| 39,478,281 |
<p>I have a csv file (in.csv)</p>
<pre><code>col1, col2, col3
Kapitän, Böse, Füller
...
</code></pre>
<p>and I want to create a list of dictionaries:</p>
<pre><code>a = [{'col1': 'Kapitän', 'col2': 'Böse', 'col3': 'Füller'},{...}]
</code></pre>
<p>With Python 3 it's working with</p>
<pre><code>import codecs
import csv
with codecs.open('in.csv', encoding='utf-8') as f:
a = [{k: v for k, v in row.items()}
for row in csv.DictReader(f, skipinitialspace=True)]
print(a)
</code></pre>
<p>(I've got this code from <a href="http://stackoverflow.com/questions/21572175/convert-csv-file-to-list-of-dictionaries">convert csv file to list of dictionaries</a>).</p>
<p>Unfortunately I need this for Python 2, but I can't get it to work there. </p>
<p>I tried to understand <a href="https://docs.python.org/2.7/howto/unicode.html" rel="nofollow">https://docs.python.org/2.7/howto/unicode.html</a>, but I think I'm too stupid, because </p>
<pre><code>import codecs
f = codecs.open('in.csv', encoding='utf-8')
for line in f:
print repr(line)
</code></pre>
<p>gives me</p>
<pre><code>u'col1,col2,col3\n'
u'K\xe4pten,B\xf6se,F\xfcller\n'
u'\n'
</code></pre>
<p>Do you have a solution for Python 2?</p>
<p>There is a similar problem solved here: <a href="http://stackoverflow.com/questions/6740918/creating-a-dictionary-from-a-csv-file">Creating a dictionary from a csv file?</a> But with the marked solution I get <code>('K\xc3\xa4pten', 'B\xc3\xb6se', 'F\xc3\xbcller')</code>. Maybe it's easy to edit it for getting <code>[{u'col1': u'K\xe4pten', u'col2': u'B\xf6se', u'col3': u'F\xfcller'}]</code>?</p>
| 0 |
2016-09-13T20:13:42Z
| 39,478,754 |
<p>You can leverage the csv lib for the job. Note that in Python 2 <code>csv.DictReader</code> does not accept an <code>encoding</code> keyword (that only works with third-party helpers such as <code>unicodecsv</code>), so read bytes and decode each cell afterwards:</p>
<pre><code>import csv

li_of_dicts = []
with open('in.csv', 'rb') as infile:
    reader = csv.DictReader(infile, skipinitialspace=True)
    for row in reader:
        # decode the UTF-8 byte strings back to unicode, cell by cell
        li_of_dicts.append({k.decode('utf-8'): v.decode('utf-8')
                            for k, v in row.items()})
</code></pre>
| 1 |
2016-09-13T20:48:51Z
|
[
"python",
"list",
"python-2.7",
"csv",
"dictionary"
] |
python linear regression implementation
| 39,478,437 |
<p>I've been trying to do my own implementation of a simple linear regression algorithm, but I'm having some trouble with the gradient descent.</p>
<p>Here's how I coded it:</p>
<pre><code>def gradientDescentVector(data, alpha, iterations):
a = 0.0
b = 0.0
X = data[:,0]
y = data[:,1]
m = data.shape[0]
it = np.ones(shape=(m,2))
for i in range(iterations):
predictions = X.dot(a).flatten() + b
errors_b = (predictions - y)
errors_a = (predictions - y) * X
a = a - alpha * (1.0 / m) * errors_a.sum()
b = b - alpha * (1.0 / m) * errors_b.sum()
return a, b
</code></pre>
<p>Now, I know this won't scale well with more variables, but I was just trying with the simple version first, and follow up from there.</p>
<p>I was following the gradient descent algorithm from the machine learning course at coursera:</p>
<p><a href="http://i.stack.imgur.com/w1BRm.png" rel="nofollow"><img src="http://i.stack.imgur.com/w1BRm.png" alt="enter image description here"></a></p>
<p>But I'm getting infinite values after ~90 iterations (on a specific dataset), and haven't been able to wrap my head around this so far.</p>
<p>I've tried iterating over each value before I learned about numpy's broadcasting and was getting the same results.</p>
<p>If anyone could shed some light on what could be the problem here, it would be great.</p>
| 2 |
2016-09-13T20:25:02Z
| 39,488,807 |
<p>It is clear that the parameters are diverging from the optimal ones. One possible reason is that you are using too large a value for the learning rate ("alpha"). Try decreasing it. Here is a rule of thumb: always start from a small value like 0.001. Then try a learning rate about three times higher than the previous one. If it gives a lower MSE (or whatever error function you are using), that's fine; if not, try a value between 0.001 and 0.003. If the higher rate did help, repeat this recursively until you reach a satisfactory MSE.</p>
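<p>As an illustration of that rule of thumb, here is a toy sketch (not the asker's dataset; the <code>gradient_descent</code> and <code>mse</code> helpers are assumptions that re-implement the question's update rule on a noiseless line, and the 3x schedule stops as soon as the error stops improving or diverges):</p>

```python
import numpy as np

def gradient_descent(X, y, alpha, iterations=1000):
    # same update rule as in the question, for a 1-D linear model y = a*x + b
    a = b = 0.0
    m = float(len(X))
    for _ in range(iterations):
        errors = a * X + b - y
        a -= alpha * (1.0 / m) * (errors * X).sum()
        b -= alpha * (1.0 / m) * errors.sum()
    return a, b

def mse(X, y, a, b):
    return ((a * X + b - y) ** 2).mean()

# toy data: a noiseless line y = 2x + 1
X = np.linspace(0.0, 1.0, 50)
y = 2.0 * X + 1.0

best_alpha, best_err = None, float('inf')
alpha = 0.001
while alpha < 10.0:
    err = mse(X, y, *gradient_descent(X, y, alpha))
    if not np.isfinite(err) or err >= best_err:
        break  # the error stopped improving (or diverged) -- back off
    best_alpha, best_err = alpha, err
    alpha *= 3.0
print(best_alpha, best_err)
```

With a learning rate that is too large, the error overflows to inf/nan within a few iterations, which is exactly the divergence the question describes.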
| 1 |
2016-09-14T11:01:59Z
|
[
"python",
"numpy",
"machine-learning",
"linear-regression"
] |
Python Bigger is Greater optimization
| 39,478,440 |
<p>Thank you all. I found the function below, which is perfect, so I'm marking this question closed.</p>
<p><a href="https://www.nayuki.io/page/next-lexicographical-permutation-algorithm" rel="nofollow">https://www.nayuki.io/page/next-lexicographical-permutation-algorithm</a></p>
<pre><code>def next_permutation(arr):
# Find non-increasing suffix
i = len(arr) - 1
while i > 0 and arr[i - 1] >= arr[i]:
i -= 1
if i <= 0:
return False
# Find successor to pivot
j = len(arr) - 1
while arr[j] <= arr[i - 1]:
j -= 1
arr[i - 1], arr[j] = arr[j], arr[i - 1]
# Reverse suffix
arr[i : ] = arr[len(arr) - 1 : i - 1 : -1]
return True
</code></pre>
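<p>A quick sanity check (the function is repeated here so the snippet runs standalone; <code>dkhc</code> is a sample case from the linked HackerRank problem, whose expected successor is <code>hcdk</code>):</p>

```python
def next_permutation(arr):
    # same algorithm as above, repeated so this snippet is self-contained
    i = len(arr) - 1
    while i > 0 and arr[i - 1] >= arr[i]:
        i -= 1
    if i <= 0:
        return False  # arr is the last permutation already
    j = len(arr) - 1
    while arr[j] <= arr[i - 1]:
        j -= 1
    arr[i - 1], arr[j] = arr[j], arr[i - 1]
    arr[i:] = arr[len(arr) - 1: i - 1: -1]
    return True

arr = list('dkhc')
print(''.join(arr) if next_permutation(arr) else 'no answer')
```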
<p>Python gurus, I am trying to finish the below challenge and don't want to use permutation.
<a href="https://www.hackerrank.com/challenges/bigger-is-greater" rel="nofollow">https://www.hackerrank.com/challenges/bigger-is-greater</a>
The below code works well for a small piece of test data; however, it cannot pass the 100000-case tests. Could any masters provide some suggestions to optimize the code below? Much appreciated.</p>
<pre><code>t=int(raw_input())
for _ in range(t):
s=list(raw_input().strip())#change to list
pos = -1  # check pos; if pos is bigger it's the smallest lexicographically-bigger result, so only choose a big pos
i_temp=0
for i in reversed(range(len(s))):
for j in reversed(range(i)):
if s[i]>s[j]: #last letter is bigger than previous, in this case , we can swap to previous one, and found bigger one.
if j>pos:
pos=j#new place
i_temp=i
break
if j<pos:
break #already found good one
if i<pos:
break #already found good one
if pos>=0:
s_tmp=s[pos]
s[pos]=s[i_temp]
s[i_temp]=s_tmp
s1 = s[pos+1:] #get string for smallest
s1.sort()
print ("".join(s[:pos+1]+s1))
else:
print ("no answer")
</code></pre>
| -4 |
2016-09-13T20:25:07Z
| 39,479,648 |
<p>Your instinct is right, so I'll try to help.</p>
<p>Step1: You're iterating in reverse looking for a case where a[i] < a[i+n], then you know you have a solution.
Step2: Then you paste everything together (the prefix, the character, and the sorted suffix junk).</p>
<p>Just make it easy on yourself: find the solution point first, then compute the output. Don't try to track the variables needed for step 2 in step 1. Step 2 is only going to get called once per string:</p>
<pre><code>def f(w):
best = ''
for i in range(len(w)):
idx = -i-1
c = w[idx]
if c >= best:
best = c
else:
l = sorted(w[idx:])
for j, ch in enumerate(l):
if ch > c:
return w[:idx] + ch + ''.join(l[:j] + l[j+1:])
return 'no answer'
n = input()
for i in range(n):
print f(raw_input())
</code></pre>
| 1 |
2016-09-13T22:01:28Z
|
[
"python",
"algorithm"
] |
Jenkins user unable to run python script
| 39,478,448 |
<p>I have a boto python script that lives in /var/lib/jenkins/workspace/project/python-script.py that is being ran in the execute shell of a jenkins build. </p>
<p>When I ssh into my Jenkins server and execute <code>python python-script.py arg1 arg2</code> as root or the ec2-user, the Python script runs exactly how I wish it to run. When I run the Jenkins build or <code>sudo -u jenkins python python-script.py arg1 arg2</code> I get the same error as follows: </p>
<pre><code> Traceback (most recent call last):
File "ec2-elb.py", line 31, in <module>
main()
File "ec2-elb.py", line 17, in main
elb_conn = boto.ec2.elb.connect_to_region(args.region)
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/ec2/elb/__init__.py", line 63, in connect_to_region
return region.connect(**kw_params)
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/regioninfo.py", line 187, in connect
return self.connection_cls(region=self, **kw_params)
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/ec2/elb/__init__.py", line 98, in __init__
profile_name=profile_name)
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/connection.py", line 1100, in __init__
provider=provider)
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/site-packages/boto-2.42.0-py2.7.egg/boto/auth.py", line 991, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
</code></pre>
<p>I have tried to change the PATH to the path to python, changed permissions on the python file to the jenkins user and made the file executable. </p>
<p>I am not sure where to go from here as searches are beginning to bring back repetitive answers. ANY help would be greatly appreciated.</p>
<p>When printing the environments this is what I get: </p>
<p>for root user: </p>
<pre><code>LESS_TERMCAP_mb=
HOSTNAME=ip-172-31-3-2
LESS_TERMCAP_md=
LESS_TERMCAP_me=
SHELL=/bin/bash
TERM=xterm-256color
HISTSIZE=1000
EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
LESS_TERMCAP_ue=
USER=root
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
SUDO_USER=ec2-user
EC2_HOME=/opt/aws/apitools/ec2
SUDO_UID=500
USERNAME=root
LESS_TERMCAP_us=
PATH=/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin
MAIL=/var/spool/mail/root
PWD=/root
JAVA_HOME=/usr/lib/jvm/jre
AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
SUDO_COMMAND=/bin/bash
HOME=/root
AWS_PATH=/opt/aws
AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
LOGNAME=root
AWS_ELB_HOME=/opt/aws/apitools/elb
LC_CTYPE=en_US.UTF-8
LESSOPEN=||/usr/bin/lesspipe.sh %s
SUDO_GID=500
LESS_TERMCAP_se=
_=/usr/bin/printenv
</code></pre>
<p>and for jenkins user:</p>
<pre><code>HOSTNAME=ip-172-31-3-2
TERM=xterm-256color
HISTSIZE=1000
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
USERNAME=root
MAIL=/var/spool/mail/root
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
LOGNAME=jenkins
USER=jenkins
HOME=/var/lib/jenkins
SUDO_COMMAND=/usr/bin/printenv
SUDO_USER=root
SUDO_UID=0
SUDO_GID=0
</code></pre>
| 0 |
2016-09-13T20:25:54Z
| 39,479,237 |
<p>It turned out that I just needed to put my <code>.boto</code> file (which contains my access and secret keys) in the home directory of the jenkins user (<code>/var/lib/jenkins</code>). Once I created the file there, it began to work.</p>
| 0 |
2016-09-13T21:26:27Z
|
[
"python",
"python-2.7",
"jenkins",
"amazon-ec2",
"boto"
] |
Best Design Pattern to execute steps in python
| 39,478,460 |
<p>I have to execute multiple actions sequentially in an order-dependent manner. </p>
<pre><code>StepOne(arg1, arg2).execute()
StepTwo(arg1, arg2).execute()
StepThree(arg1, arg2).execute()
StepFour(arg1, arg2).execute()
StepFive(arg1, arg2).execute()
</code></pre>
<p>They all inherit from the same <code>Step</code> class and receive the same 2 args.</p>
<pre><code>class Step:
def __init__(self, arg1, arg2):
self.arg1 = arg1
self.arg2 = arg2
def execute(self):
raise NotImplementedError('This is an "abstract" method!')
</code></pre>
<p>What's the most idiomatic way to execute these actions in order? Is there a design pattern that would apply here?</p>
| 2 |
2016-09-13T20:26:54Z
| 39,478,489 |
<p>You could create a list of the step classes, then instantiate and call them in a loop.</p>
<pre><code>step_classes = [StepOne, StepTwo, StepThree, ...]
for c in step_classes:
c(arg1, arg2).execute()
</code></pre>
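<p>For illustration, with two hypothetical subclasses (the question only shows the abstract base; the return values here are made up so the effect is visible):</p>

```python
class Step(object):
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        self.arg2 = arg2

    def execute(self):
        raise NotImplementedError('This is an "abstract" method!')

class StepOne(Step):
    def execute(self):
        return 'one: %s %s' % (self.arg1, self.arg2)

class StepTwo(Step):
    def execute(self):
        return 'two: %s %s' % (self.arg1, self.arg2)

# The list order defines the execution order.
step_classes = [StepOne, StepTwo]
results = [c('a', 'b').execute() for c in step_classes]
print(results)
```

Adding a new step is then just appending another class to the list.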
| 6 |
2016-09-13T20:29:15Z
|
[
"python",
"oop",
"design-patterns"
] |
Change django object name
| 39,478,510 |
<p>I have a <code>UserDetailsSerializer</code> class as shown below. I would like to change its object name from <code>user</code> to <code>data</code> to meet the API endpoint requirements of my front-end application. I searched the internet but wasn't quite sure how to get such a result. </p>
<pre><code>class UserDetailsSerializer(serializers.ModelSerializer):
uid = serializers.SerializerMethodField('get_username')
"""
User model w/o password
"""
class Meta:
model = UserModel
fields = ('uid', 'email', 'first_name', 'last_name', 'id')
read_only_fields = ('email', )
def get_username(self, obj):
return obj.username
</code></pre>
<p>There are a few other methods I could think of, such as reassigning the object in the view with a different name (again, I'm not exactly sure how that works with serializers) or changing the front-end application's API requirement. Please let me know if you can help.</p>
| 0 |
2016-09-13T20:30:58Z
| 39,478,782 |
<p>You haven't provided the content of your view. I believe the response dict that you are currently returning is something like:</p>
<pre><code>{'user': UserDetailsSerializer(your_object).data}
</code></pre>
<p>In this, replace the key <code>user</code> with <code>data</code>.</p>
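<p>For example, if the view builds a dict like that, the rename is a one-liner (the payload values here are made up, since the real view isn't shown):</p>

```python
# Hypothetical response dict, as the (unshown) view might build it:
response = {'user': {'uid': 'jdoe', 'email': '[email protected]', 'id': 1}}

# Rename the top-level key from 'user' to 'data' before returning it:
response['data'] = response.pop('user')
print(response)
```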
<p>If that is not the case, please mention where is <code>user</code> defined?</p>
| 0 |
2016-09-13T20:51:03Z
|
[
"python",
"django",
"django-rest-framework",
"django-rest-auth"
] |
Nested list comprehension where inner loop range is dependent on outer loop
| 39,478,528 |
<p>I am trying to represent the following as a list comprehension:</p>
<pre><code>L = []
for x in range(n):
for y in range(x):
L.append( (x, y) )
</code></pre>
<p>I have done nested list comprehension in the more typical matrix scenario where the inner loop range is not dependent on the outer loop.</p>
<p>I have considered there may be solutions in <code>itertools</code>, using <code>product()</code> or <code>chain()</code> but have been unsuccessful there as well.</p>
| 0 |
2016-09-13T20:32:39Z
| 39,478,566 |
<p>Remember to wrap the <code>x, y</code> in parentheses; this is the only slight caveat, which if omitted leads to a <code>SyntaxError</code>. </p>
<p>Other than that, the translation is pretty straightforward; the order of the <code>for</code>s inside the comprehension is similar to that with the nested statements:</p>
<pre><code>n = 5
[(x, y) for x in range(n) for y in range(x)]
</code></pre>
<p>Yields similar results to its nested loop counterpart:</p>
<pre><code>[(1, 0),
(2, 0),
(2, 1),
(3, 0),
(3, 1),
(3, 2),
(4, 0),
(4, 1),
(4, 2),
(4, 3)]
</code></pre>
| 3 |
2016-09-13T20:35:19Z
|
[
"python",
"python-3.x",
"list-comprehension",
"nested-loops"
] |
Nested list comprehension where inner loop range is dependent on outer loop
| 39,478,528 |
<p>I am trying to represent the following as a list comprehension:</p>
<pre><code>L = []
for x in range(n):
for y in range(x):
L.append( (x, y) )
</code></pre>
<p>I have done nested list comprehension in the more typical matrix scenario where the inner loop range is not dependent on the outer loop.</p>
<p>I have considered there may be solutions in <code>itertools</code>, using <code>product()</code> or <code>chain()</code> but have been unsuccessful there as well.</p>
| 0 |
2016-09-13T20:32:39Z
| 39,478,568 |
<p>Below is an example converting your code to a list comprehension.</p>
<pre><code>>>> n = 10
>>> [ (x,y) for x in range(n) for y in range(x)]
[(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2), (4, 3), (5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (6, 0), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (7, 4), (7, 5), (7, 6), (8, 0), (8, 1), (8, 2), (8, 3), (8, 4), (8, 5), (8, 6), (8, 7), (9, 0), (9, 1), (9, 2), (9, 3), (9, 4), (9, 5), (9, 6), (9, 7), (9, 8)]
</code></pre>
<p><strong>Alternatively,</strong> you can achieve the same result using the <code>itertools</code> library (shared just for reference; it is not recommended for this problem):</p>
<pre><code>>>> import itertools
>>> list(itertools.chain.from_iterable(([(list(itertools.product([x], range(x)))) for x in range(n)])))
[(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2), (4, 3), (5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (6, 0), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (7, 4), (7, 5), (7, 6), (8, 0), (8, 1), (8, 2), (8, 3), (8, 4), (8, 5), (8, 6), (8, 7), (9, 0), (9, 1), (9, 2), (9, 3), (9, 4), (9, 5), (9, 6), (9, 7), (9, 8)]
</code></pre>
| 0 |
2016-09-13T20:35:37Z
|
[
"python",
"python-3.x",
"list-comprehension",
"nested-loops"
] |
Nested list comprehension where inner loop range is dependent on outer loop
| 39,478,528 |
<p>I am trying to represent the following as a list comprehension:</p>
<pre><code>L = []
for x in range(n):
for y in range(x):
L.append( (x, y) )
</code></pre>
<p>I have done nested list comprehension in the more typical matrix scenario where the inner loop range is not dependent on the outer loop.</p>
<p>I have considered there may be solutions in <code>itertools</code>, using <code>product()</code> or <code>chain()</code> but have been unsuccessful there as well.</p>
| 0 |
2016-09-13T20:32:39Z
| 39,478,597 |
<p>List comprehensions are designed to make a straightforward translation of that loop possible:</p>
<pre><code>[ (x,y) for x in range(3) for y in range(x) ]
</code></pre>
<p>Is that not what you wanted?</p>
| 0 |
2016-09-13T20:37:39Z
|
[
"python",
"python-3.x",
"list-comprehension",
"nested-loops"
] |
Why doesn't OrderedDict use super?
| 39,478,747 |
<p>We can create an <code>OrderedCounter</code> trivially by using multiple inheritance:</p>
<pre><code>>>> from collections import Counter, OrderedDict
>>> class OrderedCounter(Counter, OrderedDict):
... pass
...
>>> OrderedCounter('Mississippi').items()
[('M', 1), ('i', 4), ('s', 4), ('p', 2)]
</code></pre>
<p>Correct me if I'm wrong, but this crucially relies on the fact that <a href="https://github.com/python/cpython/blob/761d139852063b9aa3caf58023184c4a399d594f/Lib/collections/__init__.py#L533" rel="nofollow"><code>Counter</code> uses <code>super</code></a>:</p>
<pre><code>class Counter(dict):
def __init__(*args, **kwds):
...
super(Counter, self).__init__()
...
</code></pre>
<p>That is, the magic trick works because </p>
<pre><code>>>> OrderedCounter.__mro__
(__main__.OrderedCounter,
collections.Counter,
collections.OrderedDict,
dict,
object)
</code></pre>
<p>The <code>super</code> call must delegate according to 'siblings before parents' rule of the <a href="https://www.python.org/download/releases/2.3/mro/" rel="nofollow">mro</a>, whence the custom class uses an <code>OrderedDict</code> as the storage backend. </p>
<p>However a colleague recently pointed out, to my surprise, that <code>OrderedDict</code> <a href="https://github.com/python/cpython/blob/761d139852063b9aa3caf58023184c4a399d594f/Lib/collections/__init__.py#L107-L119" rel="nofollow">doesn't</a> use super: </p>
<pre><code>def __setitem__(self, key, value,
dict_setitem=dict.__setitem__, proxy=_proxy, Link=_Link):
...
# <some weird stuff to maintain the ordering here>
dict_setitem(self, key, value)
</code></pre>
<p>At first I thought it could be because <code>OrderedDict</code> came first and Raymond didn't bother to change it later, but it seems that <code>super</code> predates <code>OrderedDict</code>. </p>
<p><strong>Why does <code>OrderedDict</code> call <code>dict.__setitem__</code> explicitly?</strong></p>
<p>And why does it need to be a kwarg? Doesn't this cause trouble when using <code>OrderedDict</code> in diamond inheritance situations, since it passes directly to the parent class instead of delegating to the next in line in the mro?</p>
| 4 |
2016-09-13T20:48:11Z
| 39,478,934 |
<p>It's a microoptimization. Looking up a <code>dict_setitem</code> argument is slightly faster than looking up <code>dict.__setitem__</code> or <code>super().__setitem__</code>.</p>
<p>This might cause problems with multiple inheritance if you have another class that overrides <code>__setitem__</code>, but <code>OrderedDict</code> isn't designed for that kind of diamond-structured method overriding anyway. For <code>OrderedDict</code> to support that, it'd have to make very careful guarantees about what another class's methods might see if they try to index the <code>OrderedDict</code> while the ordering information is inconsistent with the dict structure. Such guarantees would be way too messy to make.</p>
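<p>The difference is easy to measure with <code>timeit</code>; here is a rough sketch of the idiom (the class names are made up and the exact numbers vary by machine and interpreter):</p>

```python
import timeit

class ViaSuper(dict):
    def __setitem__(self, key, value):
        super(ViaSuper, self).__setitem__(key, value)

class ViaDefaultArg(dict):
    # binding dict.__setitem__ as a default argument turns the attribute
    # lookup into a (faster) local-variable lookup on every call
    def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
        dict_setitem(self, key, value)

d1, d2 = ViaSuper(), ViaDefaultArg()
t1 = timeit.timeit(lambda: d1.__setitem__('k', 1), number=100000)
t2 = timeit.timeit(lambda: d2.__setitem__('k', 1), number=100000)
print("via super: %.4fs  via default arg: %.4fs" % (t1, t2))
```

The default-argument version is typically a little faster, which is the whole point of the trick.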
| 0 |
2016-09-13T21:02:10Z
|
[
"python",
"oop",
"multiple-inheritance",
"super",
"python-collections"
] |
Create process remotely by Supervisord API
| 39,478,769 |
<p>I'm creating an application to manage my supervisord process. I want to create process remotely by this application using the API.</p>
<p>I've checked the <a href="http://supervisord.org/api.html" rel="nofollow">supervisord XML-RPC API</a> but it didn't help.</p>
| -2 |
2016-09-13T20:49:59Z
| 39,519,716 |
<p>I used this interface to create process by API: <a href="https://github.com/mnaberez/supervisor_twiddler" rel="nofollow">https://github.com/mnaberez/supervisor_twiddler</a></p>
| 0 |
2016-09-15T20:21:57Z
|
[
"python",
"supervisord"
] |
Unable to correctly use Pandas Interpolate over a series
| 39,478,838 |
<p>I am trying to use the interpolation functionality provided by Pandas, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html" rel="nofollow">here</a>, but for some reason I cannot get my Series to adjust to the correct values. I cast them to <code>float64</code>, but that did not appear to help. Any recommendations?</p>
<p><strong>The code:</strong></p>
<pre><code>for feature in price_data:
print price_data[feature]
print "type:"
print type(price_data[feature])
newSeries = price_data[feature].astype(float).interpolate()
print "newSeries: "
print newSeries
</code></pre>
<p><strong>The output:</strong></p>
<pre><code>0 178.9000
1 0.0000
2 178.1200
Name: open_price, dtype: object
type:
<class 'pandas.core.series.Series'>
newSeries:
0 178.90
1 0.00
2 178.12
Name: open_price, dtype: float64
</code></pre>
| 1 |
2016-09-13T20:55:42Z
| 39,478,926 |
<p>The problem is that there is nothing to interpolate. I'm assuming you want to interpolate the value where zero is. In that case, replace the zero with <code>np.nan</code> then interpolate. One way to do this is</p>
<pre><code>price_data.where(price_data != 0, np.nan).interpolate()
0 178.90
1 178.51
2 178.12
Name: open_price, dtype: float64
</code></pre>
| 0 |
2016-09-13T21:01:34Z
|
[
"python",
"python-2.7",
"pandas",
"interpolation",
"series"
] |
Django Makemigrations not working in version 1.10 after adding new table
| 39,478,845 |
<p>I added some table models in <code>models.py</code> for the first time running the app and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>. This works well, but after adding two more tables it no longer works. </p>
<p>It created <em>migrations</em> for the changes made but when I run <code>python manage.py migrate</code> nothing happens. My new tables are not added to the database.</p>
<p>Things I have done:</p>
<ol>
<li>Deleted all files in the <em>migrations</em> folder and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>, but the new tables are still not getting added to the database even though the new table models show in the migration that was created, i.e. <strong>0001_initial.py</strong>.</li>
<li>Deleted the database followed by the steps in 1 above but it still didn't solve my problem. Only the first set of tables get created.</li>
<li>Tried <code>python manage.py makemigrations app_name</code> but it still didn't help.</li>
</ol>
| 0 |
2016-09-13T20:56:32Z
| 39,481,262 |
<p>You could try:</p>
<ol>
<li>Deleted everything in the django-migrations table.</li>
<li>Deleted all files in migrations folder and then run python manage.py makemigrations followed by python manage.py migrate as you said.</li>
</ol>
<p>If this doesn't work, try:</p>
<ol>
<li>Deleted everything in the django-migrations table.</li>
<li>Deleted all files in migrations folder, use your old model.py to run python manage.py makemigrations followed by python manage.py migrate.</li>
<li>Add new model, run python manage.py makemigrations followed by python manage.py migrate again.</li>
</ol>
| -1 |
2016-09-14T01:36:44Z
|
[
"python",
"django",
"python-2.7"
] |
Django Makemigrations not working in version 1.10 after adding new table
| 39,478,845 |
<p>I added some table models in <code>models.py</code> for the first time running the app and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>. This works well, but after adding two more tables it no longer works. </p>
<p>It created <em>migrations</em> for the changes made but when I run <code>python manage.py migrate</code> nothing happens. My new tables are not added to the database.</p>
<p>Things I have done:</p>
<ol>
<li>Deleted all files in the <em>migrations</em> folder and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>, but the new tables are still not getting added to the database even though the new table models show in the migration that was created, i.e. <strong>0001_initial.py</strong>.</li>
<li>Deleted the database followed by the steps in 1 above but it still didn't solve my problem. Only the first set of tables get created.</li>
<li>Tried <code>python manage.py makemigrations app_name</code> but it still didn't help.</li>
</ol>
| 0 |
2016-09-13T20:56:32Z
| 39,481,477 |
<p>I have run into this problem before and found that running <code>manage.py</code> for specific tables in this fashion worked:</p>
<pre><code>python manage.py schemamigration mytablename --auto
python manage.py migrate
</code></pre>
<p>Also make sure that the app containing your new table is listed under <code>INSTALLED_APPS</code> in your <code>settings.py</code>.</p>
| 1 |
2016-09-14T02:07:01Z
|
[
"python",
"django",
"python-2.7"
] |
Django Makemigrations not working in version 1.10 after adding new table
| 39,478,845 |
<p>I added some table models in <code>models.py</code> for the first time running the app and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>. This works well, but after adding two more tables it no longer works. </p>
<p>It created <em>migrations</em> for the changes made but when I run <code>python manage.py migrate</code> nothing happens. My new tables are not added to the database.</p>
<p>Things I have done:</p>
<ol>
<li>Deleted all files in the <em>migrations</em> folder and then ran <code>python manage.py makemigrations</code> followed by <code>python manage.py migrate</code>, but the new tables are still not getting added to the database even though the new table models show in the migration that was created, i.e. <strong>0001_initial.py</strong>.</li>
<li>Deleted the database followed by the steps in 1 above but it still didn't solve my problem. Only the first set of tables get created.</li>
<li>Tried <code>python manage.py makemigrations app_name</code> but it still didn't help.</li>
</ol>
| 0 |
2016-09-13T20:56:32Z
| 39,482,630 |
<p>Can you post your models?</p>
<p>Have you edited manage.py in any way?</p>
<p>Try deleting the migrations and the database again after ensuring that your models are valid, then run manage.py makemigrations appname and then manage.py migrate.</p>
| 0 |
2016-09-14T04:45:46Z
|
[
"python",
"django",
"python-2.7"
] |
Pandas pct change from initial value
| 39,478,853 |
<p>I want to find the pct_change of <code>Dew_P Temp (C)</code> from the initial value of -3.9. I want the pct_change in a new column.</p>
<p>Source here:</p>
<pre><code>weather = pd.read_csv('https://raw.githubusercontent.com/jvns/pandas-cookbook/master/data/weather_2012.csv')
weather[weather.columns[:4]].head()
Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%)
0 2012-01-01 -1.8 -3.9 86
1 2012-01-01 -1.8 -3.7 87
2 2012-01-01 -1.8 -3.4 89
3 2012-01-01 -1.5 -3.2 88
4 2012-01-01 -1.5 -3.3 88
</code></pre>
<p>I have tried variations of this for loop (even going as far as adding an index shown here) but to no avail:</p>
<pre><code>for index, dew_point in weather['Dew_P Temp (C)'].iteritems():
new = weather['Dew_P Temp (C)'][index]
old = weather['Dew_P Temp (C)'][0]
pct_diff = (new-old)/old*100
weather['pct_diff'] = pct_diff
</code></pre>
<p>I think the problem is the <code>weather['pct_diff'] = pct_diff</code> assignment: each iteration overwrites the entire column with a single scalar, so the final column holds only the last row's result.</p>
<p>So it's always (2.1-3.9)/3.9*100, and my percent change is always -46%.</p>
<p>The end result I want is this:</p>
<pre><code>Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%) pct_diff
0 2012-01-01 -1.8 -3.9 86 0.00%
1 2012-01-01 -1.8 -3.7 87 5.12%
2 2012-01-01 -1.8 -3.4 89 12.82%
</code></pre>
<p>Any ideas? Thanks!</p>
| 5 |
2016-09-13T20:57:16Z
| 39,479,039 |
<p>IIUC you can do it this way:</p>
<pre><code>In [88]: ((weather['Dew Point Temp (C)'] - weather.loc[0, 'Dew Point Temp (C)']).abs() / weather.loc[0, 'Dew Point Temp (C)']).abs() * 100
Out[88]:
0 0.000000
1 5.128205
2 12.820513
3 17.948718
4 15.384615
5 15.384615
6 20.512821
7 7.692308
8 7.692308
9 20.512821
</code></pre>
| 4 |
2016-09-13T21:09:29Z
|
[
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Pandas pct change from initial value
| 39,478,853 |
<p>I want to find the pct_change of <code>Dew_P Temp (C)</code> from the initial value of -3.9. I want the pct_change in a new column.</p>
<p>Source here:</p>
<pre><code>weather = pd.read_csv('https://raw.githubusercontent.com/jvns/pandas-cookbook/master/data/weather_2012.csv')
weather[weather.columns[:4]].head()
Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%)
0 2012-01-01 -1.8 -3.9 86
1 2012-01-01 -1.8 -3.7 87
2 2012-01-01 -1.8 -3.4 89
3 2012-01-01 -1.5 -3.2 88
4 2012-01-01 -1.5 -3.3 88
</code></pre>
<p>I have tried variations of this for loop (even going as far as adding an index shown here) but to no avail:</p>
<pre><code>for index, dew_point in weather['Dew_P Temp (C)'].iteritems():
new = weather['Dew_P Temp (C)'][index]
old = weather['Dew_P Temp (C)'][0]
pct_diff = (new-old)/old*100
weather['pct_diff'] = pct_diff
</code></pre>
<p>I think the problem is the <code>weather['pct_diff'] = pct_diff</code> assignment: each iteration overwrites the entire column with a single scalar, so the final column holds only the last row's result.</p>
<p>So it's always (2.1-3.9)/3.9*100, and my percent change is always -46%.</p>
<p>The end result I want is this:</p>
<pre><code>Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%) pct_diff
0 2012-01-01 -1.8 -3.9 86 0.00%
1 2012-01-01 -1.8 -3.7 87 5.12%
2 2012-01-01 -1.8 -3.4 89 12.82%
</code></pre>
<p>Any ideas? Thanks!</p>
| 5 |
2016-09-13T20:57:16Z
| 39,479,042 |
<p>You can use <code>iat</code> to access the scalar value (e.g. <code>iat[0]</code> accesses the first value in the series).</p>
<pre><code>df = weather
df['pct_diff'] = df['Dew_P Temp (C)'] / df['Dew_P Temp (C)'].iat[0] - 1
</code></pre>
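<p>A self-contained sketch of this approach, using the sample values from the question (multiplying by 100 and taking the absolute value to match the expected output):</p>

```python
import pandas as pd

# Sample of the Dew_P column from the question
df = pd.DataFrame({'Dew_P Temp (C)': [-3.9, -3.7, -3.4]})

# Relative change from the first value; .abs() * 100 matches the expected output
df['pct_diff'] = (df['Dew_P Temp (C)'] / df['Dew_P Temp (C)'].iat[0] - 1).abs() * 100
print(df['pct_diff'].round(2).tolist())  # → [0.0, 5.13, 12.82]
```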
| 6 |
2016-09-13T21:09:51Z
|
[
"python",
"python-3.x",
"pandas",
"dataframe"
] |
Pandas pct change from initial value
| 39,478,853 |
<p>I want to find the pct_change of <code>Dew_P Temp (C)</code> from the initial value of -3.9. I want the pct_change in a new column.</p>
<p>Source here:</p>
<pre><code>weather = pd.read_csv('https://raw.githubusercontent.com/jvns/pandas-cookbook/master/data/weather_2012.csv')
weather[weather.columns[:4]].head()
Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%)
0 2012-01-01 -1.8 -3.9 86
1 2012-01-01 -1.8 -3.7 87
2 2012-01-01 -1.8 -3.4 89
3 2012-01-01 -1.5 -3.2 88
4 2012-01-01 -1.5 -3.3 88
</code></pre>
<p>I have tried variations of this for loop (even going as far as adding an index shown here) but to no avail:</p>
<pre><code>for index, dew_point in weather['Dew_P Temp (C)'].iteritems():
new = weather['Dew_P Temp (C)'][index]
old = weather['Dew_P Temp (C)'][0]
pct_diff = (new-old)/old*100
weather['pct_diff'] = pct_diff
</code></pre>
<p>I think the problem is the <code>weather['pct_diff'] = pct_diff</code> assignment: each iteration overwrites the entire column with a single scalar, so the final column holds only the last row's result.</p>
<p>So it's always (2.1-3.9)/3.9*100, and my percent change is always -46%.</p>
<p>The end result I want is this:</p>
<pre><code>Date/Time Temp (C) Dew_P Temp (C) Rel Hum (%) pct_diff
0 2012-01-01 -1.8 -3.9 86 0.00%
1 2012-01-01 -1.8 -3.7 87 5.12%
2 2012-01-01 -1.8 -3.4 89 12.82%
</code></pre>
<p>Any ideas? Thanks!</p>
| 5 |
2016-09-13T20:57:16Z
| 39,479,091 |
<p>I find this more graceful</p>
<pre><code>weather['Dew_P Temp (C)'].pct_change().fillna(0).add(1).cumprod().sub(1)
0 0.000000
1 -0.051282
2 -0.128205
3 -0.179487
4 -0.153846
Name: Dew_P Temp (C), dtype: float64
</code></pre>
<hr>
<p>To get your expected output with absolute values</p>
<pre><code>weather['pct_diff'] = weather['Dew_P Temp (C)'].pct_change().fillna(0).add(1).cumprod().sub(1).abs()
weather
</code></pre>
<p><a href="http://i.stack.imgur.com/h1wYM.png" rel="nofollow"><img src="http://i.stack.imgur.com/h1wYM.png" alt="enter image description here"></a></p>
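<p>The chaining works because multiplying the <code>(1 + period-over-period change)</code> ratios telescopes back to <code>value / initial value</code>, so it agrees with the direct computation. A quick check, using the sample values from the question:</p>

```python
import pandas as pd

s = pd.Series([-3.9, -3.7, -3.4, -3.2, -3.3])

# Chained period-over-period changes, compounded back to a change from the start
chained = s.pct_change().fillna(0).add(1).cumprod().sub(1)

# Direct relative change from the first value
direct = s / s.iat[0] - 1

# The two agree up to floating-point noise
assert (chained - direct).abs().max() < 1e-12
```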
| 3 |
2016-09-13T21:15:03Z
|
[
"python",
"python-3.x",
"pandas",
"dataframe"
] |
BeautifulSoup, findAll after findAll?
| 39,478,865 |
<p>I'm pretty new to Python and mainly need it for getting information from websites.
Here I tried to get the short headlines from the bottom of the website, but can't quite get them.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = "http://some-website"
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")
nachrichten = soup.findAll('ul', {'class':'list'})
</code></pre>
<p>Now I would need another findAll to get all the links/a from the var "nachrichten", but how can I do this?</p>
| 1 |
2016-09-13T20:58:01Z
| 39,479,047 |
<p>Use a <em>css selector</em> with select if you want all the links in a single list:</p>
<pre><code>anchors = soup.select('ul.list a')
</code></pre>
<p>If you want individual lists:</p>
<pre><code>anchors = [ul.find_all('a') for ul in soup.find_all('ul', {'class': 'list'})]
</code></pre>
<p>Also if you want the hrefs you can make sure you only find the anchors with <em>href</em> attributes and extract:</p>
<pre><code>hrefs = [a["href"] for a in soup.select('ul.list a[href]')]
</code></pre>
<p>With <code>find_all</code>, set <em>href=True</em>, i.e. <code>ul.find_all('a', href=True)</code>.</p>
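<p>A minimal runnable sketch — the HTML string here is a made-up stand-in for the unnamed site:</p>

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the real page
html = """
<ul class="list">
  <li><a href="/news/1">Headline one</a></li>
  <li><a href="/news/2">Headline two</a></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

# Select only anchors that actually carry an href attribute
hrefs = [a["href"] for a in soup.select("ul.list a[href]")]
print(hrefs)  # → ['/news/1', '/news/2']
```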
| 0 |
2016-09-13T21:10:14Z
|
[
"python",
"beautifulsoup",
"python-requests"
] |