title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Is it possible to access a local variable in another function?
| 39,682,842 |
<p>Goal: Need to use local variable in another function. Is that possible in Python?</p>
<p>I would like to use the local variable in some other function. In my case I need a counter to see the number of connections being opened and the number of connections being released. For that I am maintaining a counter. To implement that, I have written sample code that counts and returns the local variable for use in another function.</p>
<p>How can I print <code>t</code> & <code>my_reply</code> in the <code>test()</code> function?</p>
<p>Code: counter_glob.py</p>
<pre><code>my_test = 0
t = 0
def test():
print("I: ",t)
print("IIIIIIII: ",my_reply)
def my():
global t
reply = foo()
t = reply
print("reply:",reply)
print("ttttt:",t)
def foo():
global my_test
my_test1 = 0
my_test += 1
print my_test1
my_test1 = my_test
my_test += 1
print("my_test:",my_test1)
return my_test1
my()
</code></pre>
<p>Result:</p>
<pre class="lang-none prettyprint-override"><code>> $ python counter_glob.py
0
('my_test:', 1)
('reply:', 1)
('ttttt:', 1)
</code></pre>
| 0 |
2016-09-25T02:32:22Z
| 39,683,886 |
<p>Except with a <code>closure</code>, you won't have access to local variables outside the scope of your functions.
If variables have to be shared across different functions, it's better to make them global, as @pavnik mentioned.</p>
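<p>A minimal sketch of that approach applied to the code from the question (here <code>my_reply</code> is promoted to a module-level global so <code>test()</code> can read it, and <code>foo()</code> is reduced to a stub; returning values and passing them as parameters would be the cleaner alternative):</p>
<pre><code>t = 0
my_reply = 0

def my():
    global t, my_reply
    my_reply = foo()        # whatever foo() returns
    t = my_reply

def test():
    print("I: ", t)
    print("reply: ", my_reply)

def foo():
    return 1                # stand-in for the real counter logic

my()
test()
</code></pre>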
| 0 |
2016-09-25T05:57:07Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
'int' object is not callable error in class
| 39,682,852 |
<p>In python 2.7, I am writing a class called <code>Zillion</code>, which is to act as a counter for very large integers. I believe I have it riddled out, but I keep running into <code>TypeError: 'int' object is not callable</code> , which seems to mean that at some point in my code I tried to call an <code>int</code> like it was a function. Many of the examples I found on this site were simply a mathematical error where the writer omitted an operator. I can't seem to find my error.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
z.increment()
TypeError: 'int' object is not callable
</code></pre>
<p>My code:</p>
<pre><code>class Zillion:
def __init__(self, digits):
self.new = []
self.count = 0 # for use in process and increment
self.increment = 1 # for use in increment
def process(self, digits):
if digits == '':
raise RuntimeError
elif digits[0].isdigit() == False:
if digits[0] == ' ' or digits[0] == ',':
digits = digits[1:]
else:
raise RuntimeError
elif digits[0].isdigit():
self.new.append(int(digits[0]))
digits = digits[1:]
self.count += 1
if digits != '':
process(self, digits)
process(self, digits)
if self.count == 0:
raise RuntimeError
self.new2 = self.new # for use in isZero
def toString(self):
self.mystring =''
self.x = 0
while self.x < self.count:
self.mystring = self.mystring + str(self.new[self.x])
self.x += 1
print(self.mystring)
def isZero(self):
if self.new2[0] != '0':
return False
elif self.new2[0] == '0':
self.new2 = self.new2[1:]
isZero(self)
return True
def increment(self):
if self.new[self.count - self.increment] == 9:
self.new[self.count - self.increment] = 0
if isZero(self):
self.count += 1
self.new= [1] + self.new
else:
self.increment += 1
increment(self)
elif self.new[self.count - self.increment] != 9:
self.new[self.count - self.increment] = self.new[self.count - self.increment] + 1
</code></pre>
| -1 |
2016-09-25T02:34:19Z
| 39,682,886 |
<p>You have both an instance variable and a method named <code>increment</code>; that seems to be the cause of that traceback, at least.</p>
<p>In <code>__init__</code> you define <code>self.increment = 1</code>, and that masks the method with the same name.</p>
<p>To fix it, just rename one of them (and if you rename the variable, make sure you change all the places that use it, such as throughout the <code>increment</code> method).</p>
<p>One way to see what's happening here is to use <code>type</code> to investigate. For example:</p>
<pre><code>>>> type(Zillion.increment)
<type 'instancemethod'>
>>> z = Zillion('5')
>>> type(z.increment)
<type 'int'>
</code></pre>
| 1 |
2016-09-25T02:43:09Z
|
[
"python"
] |
'int' object is not callable error in class
| 39,682,852 |
<p>In python 2.7, I am writing a class called <code>Zillion</code>, which is to act as a counter for very large integers. I believe I have it riddled out, but I keep running into <code>TypeError: 'int' object is not callable</code> , which seems to mean that at some point in my code I tried to call an <code>int</code> like it was a function. Many of the examples I found on this site were simply a mathematical error where the writer omitted an operator. I can't seem to find my error.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
z.increment()
TypeError: 'int' object is not callable
</code></pre>
<p>My code:</p>
<pre><code>class Zillion:
def __init__(self, digits):
self.new = []
self.count = 0 # for use in process and increment
self.increment = 1 # for use in increment
def process(self, digits):
if digits == '':
raise RuntimeError
elif digits[0].isdigit() == False:
if digits[0] == ' ' or digits[0] == ',':
digits = digits[1:]
else:
raise RuntimeError
elif digits[0].isdigit():
self.new.append(int(digits[0]))
digits = digits[1:]
self.count += 1
if digits != '':
process(self, digits)
process(self, digits)
if self.count == 0:
raise RuntimeError
self.new2 = self.new # for use in isZero
def toString(self):
self.mystring =''
self.x = 0
while self.x < self.count:
self.mystring = self.mystring + str(self.new[self.x])
self.x += 1
print(self.mystring)
def isZero(self):
if self.new2[0] != '0':
return False
elif self.new2[0] == '0':
self.new2 = self.new2[1:]
isZero(self)
return True
def increment(self):
if self.new[self.count - self.increment] == 9:
self.new[self.count - self.increment] = 0
if isZero(self):
self.count += 1
self.new= [1] + self.new
else:
self.increment += 1
increment(self)
elif self.new[self.count - self.increment] != 9:
self.new[self.count - self.increment] = self.new[self.count - self.increment] + 1
</code></pre>
| -1 |
2016-09-25T02:34:19Z
| 39,682,897 |
<p>You have defined an instance variable in <code>Zillion.__init__()</code></p>
<pre><code>def __init__(self, digits):
self.new = []
self.count = 0
self.increment = 1 # Here!
</code></pre>
<p>Then you defined a method with the same name, <code>Zillion.increment()</code>:</p>
<pre><code>def increment(self):
    [...]
</code></pre>
<p>So if you try to call your method like this:</p>
<pre><code>big_number = Zillion('5')
big_number.increment()
</code></pre>
<p><code>.increment</code> will be the integer you have defined in <code>.__init__()</code> and not the method.</p>
| 0 |
2016-09-25T02:45:51Z
|
[
"python"
] |
'int' object is not callable error in class
| 39,682,852 |
<p>In python 2.7, I am writing a class called <code>Zillion</code>, which is to act as a counter for very large integers. I believe I have it riddled out, but I keep running into <code>TypeError: 'int' object is not callable</code> , which seems to mean that at some point in my code I tried to call an <code>int</code> like it was a function. Many of the examples I found on this site were simply a mathematical error where the writer omitted an operator. I can't seem to find my error.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
z.increment()
TypeError: 'int' object is not callable
</code></pre>
<p>My code:</p>
<pre><code>class Zillion:
def __init__(self, digits):
self.new = []
self.count = 0 # for use in process and increment
self.increment = 1 # for use in increment
def process(self, digits):
if digits == '':
raise RuntimeError
elif digits[0].isdigit() == False:
if digits[0] == ' ' or digits[0] == ',':
digits = digits[1:]
else:
raise RuntimeError
elif digits[0].isdigit():
self.new.append(int(digits[0]))
digits = digits[1:]
self.count += 1
if digits != '':
process(self, digits)
process(self, digits)
if self.count == 0:
raise RuntimeError
self.new2 = self.new # for use in isZero
def toString(self):
self.mystring =''
self.x = 0
while self.x < self.count:
self.mystring = self.mystring + str(self.new[self.x])
self.x += 1
print(self.mystring)
def isZero(self):
if self.new2[0] != '0':
return False
elif self.new2[0] == '0':
self.new2 = self.new2[1:]
isZero(self)
return True
def increment(self):
if self.new[self.count - self.increment] == 9:
self.new[self.count - self.increment] = 0
if isZero(self):
self.count += 1
self.new= [1] + self.new
else:
self.increment += 1
increment(self)
elif self.new[self.count - self.increment] != 9:
self.new[self.count - self.increment] = self.new[self.count - self.increment] + 1
</code></pre>
| -1 |
2016-09-25T02:34:19Z
| 39,682,934 |
<p>Because you have an instance variable <code>self.increment</code>, and it has been set to 1 in your <code>__init__</code> function.</p>
<p><code>z.increment</code> therefore refers to that instance variable, which is set to 1, not to the method.</p>
<p>You can rename your method from <code>increment</code> to <code>_increment</code> (or any other name), and it will work.</p>
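<p>A minimal sketch of that rename (only the relevant parts of the question's class are shown, with the method body stubbed out; renaming the attribute instead would work just as well):</p>
<pre><code>class Zillion:
    def __init__(self, digits):
        self.new = []
        self.count = 0
        self.increment = 1      # the instance variable keeps its name here

    def _increment(self):       # method renamed so it no longer collides
        pass

z = Zillion('5')
z._increment()                  # no more TypeError: 'int' object is not callable
</code></pre>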
| 0 |
2016-09-25T02:54:19Z
|
[
"python"
] |
Spurious ValueError using learning_curve on sklearn.svm.SVC(kernel='rbf') classifier
| 39,682,885 |
<p>I am getting weird ValueError when using learning_curve on svm.SVC(kernel='rbf') classifier.</p>
<p>I am using:</p>
<pre><code>from sklearn import svm
from sklearn import cross_validation, datasets, preprocessing
clf=svm.SVC(kernel='rbf')
cv=cross_validation.StratifiedKFold(y, n_folds=10)
for enum, (train, test) in enumerate(cv):
print("Fold {0}, classes in train {1}, \t classes in test {2}".format(enum, set(y[train]), set(y[test])))
train_sizes, train_scores, test_scores = learning_curve(
clf, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
</code></pre>
<p>I can see, there are both classes in train and test sets.</p>
<pre><code>Fold 0, classes in train set([0, 1]), classes in test set([0, 1])
Fold 1, classes in train set([0, 1]), classes in test set([0, 1])
Fold 2, classes in train set([0, 1]), classes in test set([0, 1])
Fold 3, classes in train set([0, 1]), classes in test set([0, 1])
Fold 4, classes in train set([0, 1]), classes in test set([0, 1])
Fold 5, classes in train set([0, 1]), classes in test set([0, 1])
Fold 6, classes in train set([0, 1]), classes in test set([0, 1])
Fold 7, classes in train set([0, 1]), classes in test set([0, 1])
Fold 8, classes in train set([0, 1]), classes in test set([0, 1])
Fold 9, classes in train set([0, 1]), classes in test set([0, 1])
</code></pre>
<p>But then I get following error:</p>
<pre><code>ValueError: The number of classes has to be greater than one; got 1
</code></pre>
<p>Could somebody please help in finding a workaround?
Thanks!</p>
| 0 |
2016-09-25T02:43:03Z
| 39,693,288 |
<p>This is likely caused by <code>learning_curve</code>, which retrains the model on different-sized subsamples of your data. By default the sample sizes are <code>train_sizes=array([ 0.1, 0.33, 0.55, 0.78, 1. ])</code>, so the smallest subsamples can end up containing only one class. Depending on your data, you may be able to address the issue by leaving out the smaller fractions, for example by setting <code>train_sizes=array([0.55, 0.78, 1. ])</code>. You should also consider reducing the number of folds in your cross-validation.</p>
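<p>A minimal sketch of those two changes, written against the same 0.17-era scikit-learn API used in the question (<code>X</code> and <code>y</code> are assumed to be your features and labels):</p>
<pre><code>import numpy as np
from sklearn import svm, cross_validation
from sklearn.learning_curve import learning_curve

clf = svm.SVC(kernel='rbf')
cv = cross_validation.StratifiedKFold(y, n_folds=5)    # fewer folds -> larger training splits
train_sizes, train_scores, test_scores = learning_curve(
    clf, X, y, cv=cv, n_jobs=1,
    train_sizes=np.array([0.55, 0.78, 1.0]))           # skip the smallest fractions
</code></pre>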
| 1 |
2016-09-26T00:48:56Z
|
[
"python",
"machine-learning",
"scikit-learn"
] |
Comparing element in list and string
| 39,682,927 |
<p>I am just learning Python, and I came across a problem where I have to read a file and search for a word.</p>
<p>my code is</p>
<pre><code>Search_Word = input("Type your search word : ")
file = open(input("Your file name"), 'r')
read_line = file.readlines()
file.close()
def isPartOf(read_line, Search_Word):
x = False
for i in range (0, len(read_line)):
if(str(read_line[i]) == Search_Word):
x = True
return x
isPartOf(read_line, Search_Word)
print(isPartOf(read_line, Search_Word))
</code></pre>
<p>The problem is that after I turn every line into a list entry, I compare every element in the list to the user input (what I have to search for).
Even though an element in the list and the user input match exactly, the program does not recognize them as the same thing...</p>
| 1 |
2016-09-25T02:52:18Z
| 39,683,284 |
<p>Just replace</p>
<pre><code>for i in range (0, len(read_line)):
if(str(read_line[i]) == Search_Word):
</code></pre>
<p>by</p>
<pre><code>for i in read_line:
if i.strip('\n') == Search_Word:
</code></pre>
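<p>For context, the trailing newline that <code>readlines()</code> keeps on every line is what makes the comparison fail. A fuller sketch of the same fix, using the variable names from the question:</p>
<pre><code>def isPartOf(read_line, Search_Word):
    for line in read_line:
        if line.strip('\n') == Search_Word:   # strip the newline before comparing
            return True
    return False

print(isPartOf(read_line, Search_Word))
</code></pre>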
| 0 |
2016-09-25T04:10:03Z
|
[
"python",
"string-comparison",
"helpers"
] |
Why use itertools.groupby instead of doing it yourself?
| 39,683,041 |
<pre><code>from collections import defaultdict
import itertools
items = [(0, 0), (0, 1), (1, 0), (1, 1)]
keyfunc = lambda x: x[0]
# Grouping yourself
item_map = defaultdict(list)
for item in items:
item_map[keyfunc(item)].append(item)
# Using itertools.groupby
item_map = {}
for key, group in itertools.groupby(items, keyfunc):
item_map[key] = [i for i in group]
</code></pre>
<p>What is so great about <code>itertools.groupby</code> that I should use it instead of doing it myself? Can it perform the grouping in less time complexity? Or, am I missing the point with my use case, and <code>groupby</code> should be used for other cases?</p>
<hr>
<p>Another poster mentioned that <code>itertools.groupby</code> will return a different result if the items to be grouped are not sorted by the key (or rather just that keys are consecutive to one another).</p>
<p>For example, with <code>items = [(0, 0), (1, 1), (0, 2)]</code>, if we don't sort on the key, <code>itertools.groupby</code> returns </p>
<pre><code>{0: [(0, 2)], 1: [(1, 1)]}
</code></pre>
<p>Whereas my implementation returns </p>
<pre><code>{0: [(0, 0), (0, 2)], 1: [(1, 1)]}
</code></pre>
<p>Unless I'm misunderstanding the point, it would seem that the DIY method is better because it doesn't require the data to be sorted.</p>
<p>Here is the <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>Make an iterator that returns consecutive keys and groups from the iterable. The key is a function computing a key value for each element. If not specified or is None, key defaults to an identity function and returns the element unchanged. Generally, the iterable needs to already be sorted on the same key function</p>
</blockquote>
| 1 |
2016-09-25T03:18:34Z
| 39,683,156 |
<p>Generally the point of using iterators is to avoid keeping an entire data set in memory. In your example, it doesn't matter because:</p>
<ul>
<li>The input is already all in memory.</li>
<li>You're just dumping everything into a <code>dict</code>, so the output is also all in memory.</li>
</ul>
<blockquote>
<p>Or, am I missing the point with my use case, and groupby should be used for other cases?</p>
</blockquote>
<p>I think that's an accurate assessment.</p>
<p>Suppose <code>items</code> is an iterator (e.g. let's say it's lines being read from stdin) and the output is something other than an in-memory data structure (e.g. stdout):</p>
<pre><code>for key, group in itertools.groupby(items, keyfunc):
print("{}: {}".format(key, str([i for i in group])))
</code></pre>
<p>Now it would be less trivial to do that yourself.</p>
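<p>A small sketch of what that sorting requirement from the documentation means in practice: sorting by the same key first makes <code>groupby</code> produce the same grouping as the DIY version.</p>
<pre><code>import itertools

items = [(0, 0), (1, 1), (0, 2)]
keyfunc = lambda x: x[0]

items.sort(key=keyfunc)                  # bring equal keys next to each other first
item_map = {key: list(group) for key, group in itertools.groupby(items, keyfunc)}
print(item_map)                          # {0: [(0, 0), (0, 2)], 1: [(1, 1)]}
</code></pre>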
| 1 |
2016-09-25T03:40:40Z
|
[
"python"
] |
Cannot find python 3.5.x interpreter using .msi extension
| 39,683,090 |
<p>I'm new to Stack Overflow. I was wondering if anybody knew whether there is a .msi package for a Python 3.5 interpreter. I'm teaching a basic Python class and wanted to be prepared for when it starts in a few weeks. There is a .msi packaged interpreter for Python 2.7 on the official python.org downloads page, but not for 3.5 it seems. I am trying to use the interpreter in the community PyCharm IDE because I'm assuming most of the students will be using Windows, not Linux (like I'm using). Any help would be greatly appreciated. Thank you in advance.</p>
| -1 |
2016-09-25T03:28:56Z
| 39,683,413 |
<p>After <a href="https://www.python.org/downloads/release/python-344/" rel="nofollow">Python 3.4.4</a> was released, python.org stopped providing MSI installers for their Windows releases. Web-based, executable, and zipped installers are now provided for both 32-bit and 64-bit Windows releases. I'm not sure what the reason for this switch was, but an executable installer will install Python just fine. You can find the Python 3.5.2 Windows executable installer at the bottom of <a href="https://www.python.org/downloads/release/python-352/" rel="nofollow">this page</a>.</p>
<p>All Python installs come with a Python interpreter. Make sure you select to add Python to your PATH during the install process. After Python finishes installing, open a Command Prompt, and type <code>python</code> to access the Python interpreter.</p>
| 0 |
2016-09-25T04:35:06Z
|
[
"python",
"python-2.7",
"ide",
"pycharm",
"python-3.5"
] |
Find documents which contain a particular value - Mongo, Python
| 39,683,102 |
<p>I'm trying to add a search option to my website but it doesn't work. I looked up solutions but they all refer to using an actual string, whereas in my case I'm using a variable, and I can't make those solutions work. Here is my code:</p>
<pre><code>cursor = source.find({'title': search_term}).limit(25)
for document in cursor:
result_list.append(document)
</code></pre>
<p>Unfortunately this only gives back results which match the search_term variable's value exactly. I want it to give back any results where the title contains the search term - regardless what other strings it contains. How can I do it if I want to pass a variable to it, and not an actual string? Thanks.</p>
| 1 |
2016-09-25T03:31:28Z
| 39,683,190 |
<p>You can use <code>$regex</code> to do contains searches.</p>
<pre><code>cursor = collection.find({'field': {'$regex':'regular expression'}})
</code></pre>
<p>And to make it case insensitive:</p>
<pre><code>cursor = collection.find({'field': {'$regex': 'regular expression', '$options': 'i'}})
</code></pre>
<p>Please try <code>cursor = source.find({'title': {'$regex':search_term}}).limit(25)</code></p>
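<p>Applied to the code from the question, that would look something like this (a sketch; <code>re.escape</code> keeps regex metacharacters in the user's input from being interpreted):</p>
<pre><code>import re

cursor = source.find({'title': {'$regex': re.escape(search_term), '$options': 'i'}}).limit(25)
for document in cursor:
    result_list.append(document)
</code></pre>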
| 1 |
2016-09-25T03:48:17Z
|
[
"python",
"mongodb"
] |
Find documents which contain a particular value - Mongo, Python
| 39,683,102 |
<p>I'm trying to add a search option to my website but it doesn't work. I looked up solutions but they all refer to using an actual string, whereas in my case I'm using a variable, and I can't make those solutions work. Here is my code:</p>
<pre><code>cursor = source.find({'title': search_term}).limit(25)
for document in cursor:
result_list.append(document)
</code></pre>
<p>Unfortunately this only gives back results which match the search_term variable's value exactly. I want it to give back any results where the title contains the search term - regardless what other strings it contains. How can I do it if I want to pass a variable to it, and not an actual string? Thanks.</p>
| 1 |
2016-09-25T03:31:28Z
| 39,683,196 |
<h3>$text</h3>
<p>You can perform a text search using <code>$text</code> & <code>$search</code>. You first need to set a text index, then use it:</p>
<pre><code>$ db.docs.createIndex( { title: "text" } )
$ db.docs.find( { $text: { $search: "search_term" } } )
</code></pre>
<h3>$regex</h3>
<p>You may also use <code>$regex</code>, as answered here: <a href="http://stackoverflow.com/a/10616781/641627">http://stackoverflow.com/a/10616781/641627</a></p>
<pre><code>$ db.users.findOne({"username" : {$regex : ".*son.*"}});
</code></pre>
<h3>Both solutions compared</h3>
<ul>
<li><a href="https://dzone.com/articles/mongodb-full-text-search-vs" rel="nofollow">Full Text Search vs. Regular Expressions</a></li>
</ul>
<blockquote>
<p>... The regular expression search takes longer for queries with just a
few results while the full text search gets faster and is clearly
superior in those cases.</p>
</blockquote>
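<p>For reference, a rough sketch of the text-search variant from PyMongo, assuming <code>source</code> is the collection from the question (the index only needs to be created once):</p>
<pre><code>source.create_index([('title', 'text')])
cursor = source.find({'$text': {'$search': search_term}}).limit(25)
</code></pre>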
| 0 |
2016-09-25T03:49:23Z
|
[
"python",
"mongodb"
] |
Is there a good way to find the rank of a matrix in a field of characteristic p>0?
| 39,683,153 |
<p>I need an efficient algorithm or a known way to determine <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)" rel="nofollow">the mathematical rank</a> of a matrix A with coefficients in a field of positive characteristic. </p>
<p>For example, in the finite field of 5 elements I have the following matrix:</p>
<pre><code>import numpy
A=[[2,3],[3,2]]
print numpy.linalg.matrix_rank(A)
</code></pre>
<p>This method gives me the result of 2, but in characteristic 5 this matrix has rank 1 since <code>[2,3]+[3,2]=[0,0]</code>.</p>
| 3 |
2016-09-25T03:39:58Z
| 39,725,042 |
<p>Numpy doesn't have built-in support for finite fields. The matrix <code>A</code> in your code is treated as a matrix of real numbers, and hence has rank 2. </p>
<p>If you really need to support finite fields with Numpy, you'll have to define your own data type along with the arithmetic operations yourself, as shown <a href="http://stackoverflow.com/questions/17044064/how-to-calculate-numpy-arrays-on-galois-field">here</a>. There are of course the concerns about proper error handling (like divide by zero).</p>
<p>Even then, many common routines will have to be rewritten to support your field data types. For example, from the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.matrix_rank.html" rel="nofollow">numpy.linalg.matrix_rank</a> documentation, the routine uses Singular Value Decomposition (SVD), which is not well defined for finite fields, so you'll have to code the rank finding algorithm yourself.</p>
<p>As for the algorithm itself, you could try implementing plain old Gaussian Elimination along <a href="http://stackoverflow.com/questions/16254654/test-if-matrix-is-invertible-over-finite-field">these lines</a>, but this can be a pain in the neck and really slow, so you will likely be better off with other tools/packages like <strong>Sage</strong>.</p>
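<p>For reference, a rough sketch of that Gaussian-elimination approach over GF(p), for prime <code>p</code> (written from scratch here, so treat it as illustrative rather than battle-tested):</p>
<pre><code>def rank_mod_p(matrix, p):
    """Rank of a matrix over GF(p), p prime, via Gaussian elimination."""
    A = [[x % p for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    rank = 0
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        inv = pow(A[pivot_row][col], p - 2, p)    # modular inverse via Fermat's little theorem
        A[pivot_row] = [(x * inv) % p for x in A[pivot_row]]
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [(a - factor * b) % p for a, b in zip(A[r], A[pivot_row])]
        rank += 1
        pivot_row += 1
        if pivot_row == rows:
            break
    return rank

print(rank_mod_p([[2, 3], [3, 2]], 5))   # 1, matching the expected rank in characteristic 5
</code></pre>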
| 1 |
2016-09-27T12:44:01Z
|
[
"python",
"numpy",
"matrix"
] |
How would I use the M.E.A.N. Stack with a Python script backend where the python script is in a container?
| 39,683,166 |
<p>So I am in the process of writing a web app with the full M.E.A.N. stack and know how to work with both the front end and the back end. My only problem is incorporating things outside of M.E.A.N. As an example, I am going to be running a Python script as the algorithmic back end for the web app (mainly due to necessity of specific libraries) where that Python script will be inside of a docker container. What would be the most optimal way to connect to and run said Python code through M.E.A.N.?</p>
| -1 |
2016-09-25T03:41:57Z
| 39,845,346 |
<p>Getting started with the MEAN stack might seem very daunting at first. You might think that there is too much new technology to learn. The truth is that you really only need to know "Javascript." That's right, MEAN is simply a Javascript web development stack.</p>
<p>So, how do you actually get started developing on the MEAN stack?</p>
<p>The first step is to set up a project structure. I've found the following structure to make the most sense:</p>
<p><code>controllers/ db/ package.json server.js public/</code></p>
<p>This structure lets you keep the entire stack in a single project. Your AngularJS front end can go into the public folder while all your Express API logic goes into controller and your MongoDB collections and logic go into the db folder.</p>
<p>Now that you've set up a general project structure, you need to initialize your public folder as an Angular project. It is best to do this using a tool called Yeoman.</p>
<p>Yeoman is a toolkit that makes it easy for you to get started with a variety of Javascript frameworks and other web frameworks like Bootstrap and foundation. You can learn more about Yeoman at Yeoman.io.</p>
<p>You can read more about the M.E.A.N. stack here and how to get started: <a href="http://www.citizentekk.com/mean-stack-tutorial-how-to-build-loosely-coupled-scalable-web-apps-nodejs-angularjs-applications/" rel="nofollow">http://www.citizentekk.com/mean-stack-tutorial-how-to-build-loosely-coupled-scalable-web-apps-nodejs-angularjs-applications/</a></p>
| 0 |
2016-10-04T06:17:35Z
|
[
"python",
"docker",
"containers",
"mean-stack",
"microservices"
] |
I am trying to use PTVS in visual studio, but cannot set python interpreter
| 39,683,194 |
<p>I am trying to use <code>PTVS in visual studio</code>, but cannot set <code>python interpreter</code>. I installed visual studio enterprise 2015 and installed python 3.5.2. </p>
<p>I opened the Python environment in Visual Studio, but I cannot find the installed interpreter, and I cannot even click the '<code>+custom</code>' button. </p>
<p>Please let me know if someone experienced same issue and solved it.</p>
| 0 |
2016-09-25T03:49:02Z
| 39,732,669 |
<p>I have just reinstalled windows 10, python, and visual studio again. It works now. I have no clue why it did not work before.</p>
| 0 |
2016-09-27T19:19:24Z
|
[
"python",
"visual-studio",
"ptvs"
] |
How to set sender email Id while using Flask Mail
| 39,683,211 |
<p>I am using Flask-Mail to send email from a website's contact form. The contact form has a <code>sender</code> email id field as well. So when I receive the email at the "recipient" email id, I want <code>from</code> to be set to the <code>sender</code> email id value entered on the form. Following is the class that handles emails:</p>
<pre><code>from flask_mail import Mail, Message
class SendMail:
def __init__(self, app):
app.config['MAIL_SERVER']='smtp.gmail.com'
app.config['MAIL_PORT'] = 465
app.config['MAIL_USERNAME'] = 'smtpsetup@gmail.com'
app.config['MAIL_PASSWORD'] = '<password>'
app.config['MAIL_USE_TLS'] = False
app.config['MAIL_USE_SSL'] = True
self.recipient_email = 'recipient@gmail.com'
self.app = app
self.mail = Mail(self.app)
def compile_mail(self, name, email_id, phone_no, msg):
message = Message(name+" has a query",
sender = email_id,
recipients=[self.recipient_email])
message.html = "<b>Name : </b>"+name+"<br><b>Phone No :</b> "+phone_no+"<br><b>Message : </b>"+msg
return message
def send_mail(self, name, email_id, phone_no, msg):
message = self.compile_mail(name, email_id, phone_no, msg)
with self.app.app_context():
self.mail.send(message)
</code></pre>
<p>The <code>send_mail</code> method receives four arguments, which are the fields entered on the contact form. The problem is that when I send email like this, the <code>from</code> in the received email is set to the SMTP MAIL_USERNAME <code>smtpsetup@gmail.com</code>, despite the fact that I am setting the <code>sender</code> parameter of the Message object to the <code>email_id</code> received from the contact form.
I am not able to figure out how to set the sender to the value I want. </p>
| -1 |
2016-09-25T03:52:28Z
| 39,686,127 |
<p>Have you checked that the <code>email_id</code> you are passing in is what you think it is?</p>
<p><a href="https://github.com/mattupstate/flask-mail/blob/master/flask_mail.py#L275" rel="nofollow">The Flask Mail source code</a> suggests that it will only use the default if the <code>sender</code> you pass in is falsy (e.g. None or an empty string).</p>
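<p>A quick way to check, sketched against the class from the question (the logging line is purely illustrative):</p>
<pre><code>def send_mail(self, name, email_id, phone_no, msg):
    print("sender passed to Message:", repr(email_id))   # should be the form value, not '' or None
    message = self.compile_mail(name, email_id, phone_no, msg)
    with self.app.app_context():
        self.mail.send(message)
</code></pre>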
| 0 |
2016-09-25T11:04:14Z
|
[
"python",
"flask",
"smtp",
"flask-mail"
] |
Read all columns from CSV file?
| 39,683,221 |
<p>I am trying to read in a CSV file and then take all values from each column and put them into a separate list. I do not want the values by row. Since the CSV reader only allows looping through the file once, I am using the seek() method to go back to the beginning and read the next column. Besides using a dict mapping, is there a better way to do this?</p>
<pre><code>infile = open(fpath, "r")
reader = csv.reader(infile)
NOUNS = [col[0] for col in reader]
infile.seek(0) # <-- set the iterator to beginning of the input file
VERBS = [col[1] for col in reader]
infile.seek(0)
ADJECTIVES = [col[2] for col in reader]
infile.seek(0)
SENTENCES = [col[3] for col in reader]
</code></pre>
| 0 |
2016-09-25T03:54:15Z
| 39,683,290 |
<p>You could feed the <code>reader</code> to <a href="https://docs.python.org/3.5/library/functions.html#zip" rel="nofollow"><code>zip</code></a> and unpack it to variables as you wish.</p>
<pre><code>import csv
with open('input.csv') as f:
first, second, third, fourth = zip(*csv.reader(f))
print('first: {}, second: {}, third: {}, fourth: {}'.format(
first, second, third, fourth
))
</code></pre>
<p>With following input:</p>
<pre><code>1,2,3,4
A,B,C,D
</code></pre>
<p>It will produce output:</p>
<pre><code>first: ('1', 'A'), second: ('2', 'B'), third: ('3', 'C'), fourth: ('4', 'D')
</code></pre>
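<p>Mapped onto the variables from the question, that becomes (a sketch; it assumes exactly four columns and no header row):</p>
<pre><code>import csv

with open(fpath) as infile:
    NOUNS, VERBS, ADJECTIVES, SENTENCES = (list(col) for col in zip(*csv.reader(infile)))
</code></pre>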
| 1 |
2016-09-25T04:11:45Z
|
[
"python",
"python-3.x",
"csv",
"list-comprehension"
] |
Read all columns from CSV file?
| 39,683,221 |
<p>I am trying to read in a CSV file and then take all values from each column and put them into a separate list. I do not want the values by row. Since the CSV reader only allows looping through the file once, I am using the seek() method to go back to the beginning and read the next column. Besides using a dict mapping, is there a better way to do this?</p>
<pre><code>infile = open(fpath, "r")
reader = csv.reader(infile)
NOUNS = [col[0] for col in reader]
infile.seek(0) # <-- set the iterator to beginning of the input file
VERBS = [col[1] for col in reader]
infile.seek(0)
ADJECTIVES = [col[2] for col in reader]
infile.seek(0)
SENTENCES = [col[3] for col in reader]
</code></pre>
| 0 |
2016-09-25T03:54:15Z
| 39,683,297 |
<p>I am not sure why you don't want to use a dict mapping. This is what I ended up doing:</p>
<p><strong>Data</strong></p>
<pre><code>col1,col2,col3
val1,val2,val3
val4,val5,val6
</code></pre>
<p><strong>Code</strong></p>
<pre><code>import csv
d = dict()
with open("abc.text") as csv_file:
reader = csv.DictReader(csv_file)
for row in reader:
for key, value in row.items():
if d.get(key) is None:
d[key] = [value]
else:
d[key].append(value)
print d
{'col2': ['val2', 'val5'], 'col3': ['val3', 'val6'], 'col1': ['val1', 'val4']}
</code></pre>
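<p>If you don't mind one extra import, the inner <code>if</code>/<code>else</code> can be trimmed with <code>collections.defaultdict</code> (same behaviour, just shorter):</p>
<pre><code>import csv
from collections import defaultdict

d = defaultdict(list)
with open("abc.text") as csv_file:
    for row in csv.DictReader(csv_file):
        for key, value in row.items():
            d[key].append(value)
</code></pre>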
| 0 |
2016-09-25T04:14:10Z
|
[
"python",
"python-3.x",
"csv",
"list-comprehension"
] |
Read all columns from CSV file?
| 39,683,221 |
<p>I am trying to read in a CSV file and then take all values from each column and put them into a separate list. I do not want the values by row. Since the CSV reader only allows looping through the file once, I am using the seek() method to go back to the beginning and read the next column. Besides using a dict mapping, is there a better way to do this?</p>
<pre><code>infile = open(fpath, "r")
reader = csv.reader(infile)
NOUNS = [col[0] for col in reader]
infile.seek(0) # <-- set the iterator to beginning of the input file
VERBS = [col[1] for col in reader]
infile.seek(0)
ADJECTIVES = [col[2] for col in reader]
infile.seek(0)
SENTENCES = [col[3] for col in reader]
</code></pre>
| 0 |
2016-09-25T03:54:15Z
| 39,683,298 |
<p>Something like this would do it in one pass:</p>
<pre><code>kinds = NOUNS, VERBS, ADJECTIVES, SENTENCES = [], [], [], []
with open(fpath, "r") as infile:
for cols in csv.reader(infile):
for i, kind in enumerate(kinds):
kind.append(cols[i])
</code></pre>
| 1 |
2016-09-25T04:14:14Z
|
[
"python",
"python-3.x",
"csv",
"list-comprehension"
] |
Read all columns from CSV file?
| 39,683,221 |
<p>I am trying to read in a CSV file and then take all values from each column and put them into a separate list. I do not want the values by row. Since the CSV reader only allows looping through the file once, I am using the seek() method to go back to the beginning and read the next column. Besides using a dict mapping, is there a better way to do this?</p>
<pre><code>infile = open(fpath, "r")
reader = csv.reader(infile)
NOUNS = [col[0] for col in reader]
infile.seek(0) # <-- set the iterator to beginning of the input file
VERBS = [col[1] for col in reader]
infile.seek(0)
ADJECTIVES = [col[2] for col in reader]
infile.seek(0)
SENTENCES = [col[3] for col in reader]
</code></pre>
| 0 |
2016-09-25T03:54:15Z
| 39,683,370 |
<p>This works assuming you know exactly how many columns are in the csv (and there isn't a header row).</p>
<pre><code>NOUNS = []
VERBS = []
ADJECTIVES = []
SENTENCES = []
with open(fpath, "r") as infile:
reader = csv.reader(infile)
for row in reader:
NOUNS.append(row[0])
VERBS.append(row[1])
ADJECTIVES.append(row[2])
SENTENCES.append(row[3])
</code></pre>
<p>If you don't know the column headers, you're going to have to be clever and read off the first row, make lists for every column you encounter, and loop through every new row and insert in the appropriate list. You'll probably need to do a list of lists.</p>
<p>If you don't mind adding a dependency, use <a href="http://pandas.pydata.org" rel="nofollow">Pandas</a>: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv" rel="nofollow"><code>read_csv()</code></a> returns a <code>DataFrame</code>, and you can access each column by its column name, e.g.:</p>
<pre><code>df = pandas.read_csv(fpath)
print df['NOUN']
print df['VERBS']
</code></pre>
| 1 |
2016-09-25T04:27:34Z
|
[
"python",
"python-3.x",
"csv",
"list-comprehension"
] |
Google api client for python log level and debuglevel not working
| 39,683,228 |
<p>While building a basic python app on AppEngine I found this page:
<a href="https://developers.google.com/api-client-library/python/guide/logging" rel="nofollow">https://developers.google.com/api-client-library/python/guide/logging</a></p>
<p>Which states you can do the following to set the log level:</p>
<pre><code>import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
</code></pre>
<p>However it doesn't seem to have any impact on the output which is always INFO for me. I set to logging.DEBUG and don't see any debug entries. I set to logging.WARNING and still see info entries. Never seems to change.</p>
<p>I also tried setting httplib2 to debuglevel 4:</p>
<pre><code>import httplib2
httplib2.debuglevel = 4
</code></pre>
<p>Yet I don't see any HTTP headers in the log :/</p>
<p>Running python 2.7.10 in PyCharm.</p>
<p>Has anyone got these settings to work?</p>
| 2 |
2016-09-25T03:56:33Z
| 39,683,997 |
<p>In PyCharm edit your project's Run configuration (<code>Run</code> -> <code>Edit Configurations</code> then select your project) and in the <code>Additional options</code> field add <code>--log_level=debug</code>.</p>
<p>BTW - you don't need to set the <code>logger</code> options, the above should suffice.</p>
| 0 |
2016-09-25T06:14:19Z
|
[
"python",
"google-app-engine",
"debugging",
"google-api",
"pycharm"
] |
How to close a current window and open a new window at the same time?
| 39,683,274 |
<p>This code opens a button, which links to another button. The other button can close itself, but the first button can't close itself and open a new one at the same time. How do I fix this?</p>
<pre><code>import tkinter as tk
class Demo1:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.HelloButton = tk.Button(self.frame, text = 'Hello', width = 25, command = self.new_window,)
self.HelloButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
self.new_window
def new_window(self):
self.new_window = tk.Toplevel(self.master)
self.app = Demo2(self.new_window)
class Demo2:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
def main():
root = tk.Tk()
app = Demo1(root)
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
| 2 |
2016-09-25T04:07:06Z
| 39,683,581 |
<p>Try this:</p>
<pre><code>import tkinter as tk
class windowclass():
def __init__(self, master):
self.master = master
self.btn = tk.Button(master, text="Button", command=self.command)
self.btn.pack()
def command(self):
self.master.withdraw()
toplevel = tk.Toplevel(self.master)
toplevel.geometry("350x350")
app = Demo2(toplevel)
class Demo2:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
root = tk.Tk()
root.title("window")
root.geometry("350x350")
cls = windowclass(root)
root.mainloop()
</code></pre>
<p>Or, if you sometimes want to do something else, you can hide the window instead:</p>
<pre><code>def __init__(self, parent):
self.root = parent
self.root.title("Main")
self.frame = Tk.Frame(parent)
self.frame.pack()
btn = Tk.Button(self.frame, text="New", command=self.openFrame)
btn.pack()
def hide(self):
self.root.withdraw()
def openFrame(self):
self.hide()
otherFrame = Tk.Toplevel()
otherFrame.geometry("400x300")
handler = lambda: self.CloseOtherFrame(otherFrame)
btn = Tk.Button(otherFrame, text="Close", command=handler)
btn.pack()
def CloseOtherFrame(self, otherFrame):
otherFrame.destroy()
self.show()
</code></pre>
<p><strong>Hope this helps.<br>
If this works for you,please accept this answer.</strong></p>
| -1 |
2016-09-25T05:04:40Z
|
[
"python",
"tkinter"
] |
How to close a current window and open a new window at the same time?
| 39,683,274 |
<p>This code opens a button, which links to another button. The other button can close its self, but the first button can't close itself and open a new one at the same time, How do I fix this?</p>
<pre><code>import tkinter as tk
class Demo1:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.HelloButton = tk.Button(self.frame, text = 'Hello', width = 25, command = self.new_window,)
self.HelloButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
self.new_window
def new_window(self):
self.new_window = tk.Toplevel(self.master)
self.app = Demo2(self.new_window)
class Demo2:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
def main():
root = tk.Tk()
app = Demo1(root)
root.mainloop()
if __name__ == '__main__':
main()
</code></pre>
| 2 |
2016-09-25T04:07:06Z
| 39,718,403 |
<p>Try redefining your <code>Demo1.new_window()</code> as below:</p>
<pre><code>def new_window(self):
self.master.destroy() # close the current window
self.master = tk.Tk() # create another Tk instance
self.app = Demo2(self.master) # create Demo2 window
self.master.mainloop()
</code></pre>
| 0 |
2016-09-27T07:17:07Z
|
[
"python",
"tkinter"
] |
How can I improve formatting of floats output from my tax calculator?
| 39,683,352 |
<p>I didn't have any problems writing this much, but the output numbers are a little wonky. Sometimes I'll get something like 83.78812, for example, and I'd rather round it up to 83.79.</p>
<p>Here's the code itself:</p>
<pre><code>#This is a simple tax calculator based on Missouri's tax rate.
while True:
tax = 0.076
cost = float(raw_input("How much does the item cost? $"))
taxAmount = tax * cost
final = taxAmount + cost
if cost > 0:
print "Taxes are $" + str(taxAmount) + "."
print "The total cost is $" + str(final) + "."
else:
print 'Not a valid number. Please try again.'
</code></pre>
<p>I've seen people mention that I should be using ints instead of floats, but my tax-rate is over three characters past the decimal. Furthermore, typing in a string results in an error that crashes the program, but I'd rather it simply give an error message and loop back to the beginning. I don't know how to fix either of these things.</p>
| -2 |
2016-09-25T04:24:19Z
| 39,683,475 |
<p>You should use <strong>round</strong>:</p>
<pre><code>round(final,2)
</code></pre>
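<p>For example, applied to the question's print statements (<code>round</code> returns a float, so <code>str()</code> still works on it):</p>
<pre><code>print "Taxes are $" + str(round(taxAmount, 2)) + "."
print "The total cost is $" + str(round(final, 2)) + "."
</code></pre>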
| 0 |
2016-09-25T04:43:47Z
|
[
"python",
"python-2.7",
"calculator",
"number-formatting"
] |
How can I improve formatting of floats output from my tax calculator?
| 39,683,352 |
<p>I didn't have any problems writing this much, but the output numbers are a little wonky. Sometimes I'll get something like 83.78812, for example, and I'd rather round it up to 83.79.</p>
<p>Here's the code itself:</p>
<pre><code>#This is a simple tax calculator based on Missouri's tax rate.
while True:
tax = 0.076
cost = float(raw_input("How much does the item cost? $"))
taxAmount = tax * cost
final = taxAmount + cost
if cost > 0:
print "Taxes are $" + str(taxAmount) + "."
print "The total cost is $" + str(final) + "."
else:
print 'Not a valid number. Please try again.'
</code></pre>
<p>I've seen people mention that I should be using ints instead of floats, but my tax-rate is over three characters past the decimal. Furthermore, typing in a string results in an error that crashes the program, but I'd rather it simply give an error message and loop back to the beginning. I don't know how to fix either of these things.</p>
| -2 |
2016-09-25T04:24:19Z
| 39,683,903 |
<blockquote>
<p>"typing in a string results in an error that crashes the program, but I'd rather it simply give an error message and loop back to the beginning." </p>
</blockquote>
<p>To do this, you can use a while loop with <code>try</code>/<code>except</code>. That will keep prompting for the item cost until it gets an appropriate value.</p>
<p>Use the <code>round()</code> method to round your value. It takes two parameters: the value, and the number of decimal places to round to.</p>
<p>Format your result with Python string formatting, using the placeholder <code>%.2f</code> (2 digits after the decimal point).</p>
<pre><code>tax = 0.076
cost = 0
parsed = False
while not parsed:
try:
cost = float(raw_input("How much does the item cost? $"))
parsed = True
except ValueError:
print 'Invalid value!'
taxAmount = tax * cost
final = taxAmount + cost
if cost > 0:
print "Taxes are $%.2f." % round(taxAmount, 2)
print "The total cost is $%.2f." % round(final, 2)
else:
print 'Not a valid number. Please try again.'
</code></pre>
| 2 |
2016-09-25T06:00:27Z
|
[
"python",
"python-2.7",
"calculator",
"number-formatting"
] |
List in Function in Python Read as Multiple Objects
| 39,683,536 |
<p>I have a function that looks like this:</p>
<pre><code>class Question:
Ans = "";
checkAns = 0;
isPos=False;
def Ask(Q, PosAns):
checkAns=0;
Ans = raw_input("{} ({})".format(Q,PosAns));
while(checkAns<len(PosAns) and isPos!=True):
if(Ans==PosAns[x]):
isPos=True; #if it IS a possible answer, the loop ends
checkAns+=1;
if(checkAns==len(PosAns)):
#If the loop goes through all the possible answers and still doesn't find a
#match, it asks again and resets checkAns to zero.
Ans = raw_input("{} ({})".format(Q,PosAns));
checkAns=0;
return ("Good Answer");
ques = Question();
print(ques.Ask("Do you like to code?",["Yes","No"]));
</code></pre>
<p>First off, the point of this function is to take in a question (Q) and all the possible answers (PosAns), and if the user puts in something that is not one of the possible answers, then the function will simply ask again.</p>
<p>Every time I run it, however, it says that the Ask() function can only handle two parameters and that I've given it three (note that YesNo has two strings inside). Why does it read the list's objects instead of taking the list as a parameter? How can I make it take the list as the parameter?</p>
<p>I do recognize that the way I code is roundabout and strange to most people, but it's just the way things make sense to me. I'm more interested in the answer to my question than a new way to write the whole function (I'm still working on it).</p>
| 0 |
2016-09-25T04:55:24Z
| 39,683,667 |
<p>You missed <code>self</code> in the method declaration. Every class method (except static methods) requires the first argument to be <code>self</code>. <code>self</code> is passed implicitly, so it doesn't show up in our method calls.
<code>self</code> can be used to refer to the other attributes of the class, like the <code>isPos</code>, <code>checkAns</code> and <code>Ans</code> variables in this example.</p>
<p>Though one thing I wasn't able to figure out is what <code>x</code> is here:
<code>if (Ans == PosAns[x])</code></p>
<pre><code>class Question:
Ans = "";
checkAns = 0;
isPos = False;
def Ask(self, Q, PosAns):
Ans = raw_input("{} ({})".format(Q, PosAns));
while (self.checkAns < len(PosAns) and self.isPos != True):
if (Ans == PosAns[x]):
isPos = True; # if it IS a possible answer, the loop ends
self.checkAns += 1;
if (self.checkAns == len(PosAns)):
# If the loop goes through all the possible answers and still doesn't find a
# match, it asks again and resets checkAns to zero.
self.Ans = raw_input("{} ({})".format(Q, PosAns));
self.checkAns = 0;
return ("Good Answer");
</code></pre>
| 0 |
2016-09-25T05:22:15Z
|
[
"python"
] |
Why can't I import a module from within the same directory?
| 39,683,616 |
<p>If I have two python modules in a directory <code>main.py</code> and <code>somemodule.py</code>, I can import <code>somemodule</code> by using <code>import somemodule</code>.</p>
<p><code>./
main.py
somemodule.py
__init__.py</code></p>
<p>In a Django application where we have <code>urls.py</code> and <code>views.py</code>, why won't <code>import views</code> work in this case, while the relative import <code>from . import views</code> does work?</p>
| 0 |
2016-09-25T05:12:39Z
| 39,683,817 |
<p>That's because of Python 3's import style; it is not specific to Django.</p>
<p>Read this for more details:
<a href="http://stackoverflow.com/questions/12172791/changes-in-import-statement-python3">Changes in import statement python3</a></p>
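<p>In short, inside a package Python 3 no longer treats a bare <code>import views</code> as a relative import, so the import has to be explicit (the file path in the comment is just illustrative):</p>
<pre><code># yourapp/urls.py
from . import views      # explicit relative import: works on Python 2 and 3
# import views           # implicit relative import: Python 2 only
</code></pre>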
| 0 |
2016-09-25T05:47:31Z
|
[
"python",
"python-3.x",
"python-import"
] |
Viewing h264 stream over TCP
| 39,683,734 |
<p>I have a small wifi based FPV camera for a drone. I've managed to get it to the point where I can download and save an h264 file using python.</p>
<pre><code>TCP_IP = '193.168.0.1'
TCP_PORT = 6200
BUFFER_SIZE = 2056
f = open('stream.h264', 'wb')
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((TCP_IP,TCP_PORT))
while True:
data = sock.recv(BUFFER_SIZE)
f.write(data)
print("Writing")
sock.close()
f.close()
</code></pre>
<p>What I've been trying to do for a while now is play the stream. I've found the stream, I can download it and save it, but now I want to open it live.
I've tried using VLC's 'open network stream' with a variety of options, but none of them seemed to work.</p>
| 0 |
2016-09-25T05:33:02Z
| 39,691,426 |
<p>I successfully output to mplayer using </p>
<p><code>data = sock.recv(BUFFER_SIZE)
sys.stdout.buffer.write(data)</code></p>
<p>and then having mplayer pipe the input</p>
<p><code>python cam.py - | mplayer -fps 20 -nosound -vc ffh264 -noidx -mc 0 -</code></p>
| 0 |
2016-09-25T20:18:20Z
|
[
"python",
"stream",
"video-streaming",
"h.264"
] |
Viewing h264 stream over TCP
| 39,683,734 |
<p>I have a small wifi based FPV camera for a drone. I've managed to get it to the point where I can download and save an h264 file using python.</p>
<pre><code>TCP_IP = '193.168.0.1'
TCP_PORT = 6200
BUFFER_SIZE = 2056
f = open('stream.h264', 'wb')
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((TCP_IP,TCP_PORT))
while True:
data = sock.recv(BUFFER_SIZE)
f.write(data)
print("Writing")
sock.close()
f.close()
</code></pre>
<p>What I've been trying to do for a while now is play the stream. I've found the stream, I can download it and save it, but now I want to open it live.
I've tried using VLC's 'open network stream' with a variety of options, but none of them seemed to work.</p>
| 0 |
2016-09-25T05:33:02Z
| 39,696,111 |
<p>It is a simple way, yes: send the raw H.264 NAL unit stream (you put the <code>0,0,0,1</code> start-code prefix before each NAL unit and it is fine as-is).</p>
<p>If you want something fancier, you can add RTP packetization and send it via multicast. It will be rather simple to code and easy to read.</p>
| 0 |
2016-09-26T06:35:42Z
|
[
"python",
"stream",
"video-streaming",
"h.264"
] |
This is a dfs search implemented using python which i have taken from internet
| 39,683,744 |
<pre><code>graph={
'A':set(['B','C']),
'B':set(['A','D','E']),
'C':set(['A','F']),
'D':set(['B']),
'E':set(['B','F']),
'F':set(['C','E'])}
def dfs(graph, start):
visited, stack = set(), [start]
while stack:
vertex = stack.pop()
if vertex not in visited:
visited.add(vertex)
stack.extend(graph[vertex] - visited)
return visited
dfs(graph, 'A')
</code></pre>
<p>Can anyone explain why we are using these</p>
<ol>
<li><code>visited,stack = set(), [start]</code></li>
<li><code>graph[vertex] - visited</code></li>
<li><code>stack.extend(graph[vertex] - visited)</code></li>
</ol>
| 0 |
2016-09-25T05:35:30Z
| 39,684,497 |
<p>The first line you're asking about is initializing the <code>visited</code> and <code>stack</code> variables. You could write it on two lines if you wanted to:</p>
<pre><code>visited = set()
stack = []
</code></pre>
<p>I like initializing things on separate lines a bit better, but it's mostly a matter of style. I know a lot of other Python programmers like to combine their initializations, probably because it lets them use fewer lines (which becomes especially valuable if your function grows longer than you can see on one screen at a time). Both versions work fine.</p>
<p>The second thing you ask about is a <code>set</code> subtraction expression. In this case, it's subtracting <code>visited</code> from <code>graph[vertex]</code>, which is a <code>set</code> containing the neighbors of <code>vertex</code>. The difference operation finds all the values in <code>graph[vertex]</code> that are not also in <code>visited</code>, and returns a <code>set</code> containing them.</p>
<p>The third thing you ask about is a <code>list.extend</code> call. This appends each of the values from the <code>set</code> subtraction onto <code>stack</code> (which is a list). You could write a loop and repeatedly call <code>append</code> instead of the one call to <code>extend</code>, but there's really no reason to do that in this situation. <code>set</code>s are iterable, but it's worth noting that they yield their items in arbitrary order, so you can't tell ahead of time exactly which order the items will end up in when they're in the list. Fortunately, order doesn't matter for this algorithm.</p>
<p>It's also worth noting that the function would still work without the set subtraction. It would just be a little bit less efficient, since more values would get added to the <code>stack</code> only to be skipped (since they've already been <code>visited</code>) when they get popped off the stack later.</p>
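<p>A tiny illustration of that set subtraction (the values are <code>graph['B']</code> and a <code>visited</code> set containing only <code>'A'</code>, as in the question's graph):</p>
<pre><code>neighbours = set(['A', 'D', 'E'])   # graph['B'] from the question
visited = set(['A'])
print(neighbours - visited)         # only the unvisited vertices 'D' and 'E' get pushed onto the stack
</code></pre>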
| 0 |
2016-09-25T07:38:54Z
|
[
"python"
] |
How to store multiple users in a MySQL table row
| 39,683,748 |
<p>What I have is a program that allows users to monitor posts by specifying what they are looking for. I'm predicting that many users will monitor the same posts, and instead of having them all listed as separate rows, I would like to have all of those users grouped into the same row.</p>
<p>I'm unsure whether MySQL has a way to point to multiple users in a different table, or some other way of doing it. I've seen things such as joining mentioned, but have not seen any way to implement it the way I would like.</p>
<p>What i'm wondering if it's possible to do:</p>
<pre><code>Table 1: Table 2:
Platform Post Name Users Users
----------------------------------- ----------------
PC Title [ ] ----------> Jane
|---> Bob
|---> Roger
</code></pre>
<p>Is it possible to do this, or what way should I be going about this?</p>
| 0 |
2016-09-25T05:36:11Z
| 39,683,827 |
<p>Putting user names in a comma-separated list is a <em>terrible</em> solution. Please don't do that. It leads to all sorts of problems.</p>
<p>The best way to do this is to create a new table with two columns: <code>USER_ID</code> and <code>POST_ID</code>. Whenever a user wants to monitor a post, add a row with the user's ID and the post's ID.</p>
| 3 |
2016-09-25T05:49:38Z
|
[
"python",
"mysql"
] |
Nested Loops calculation output is incorrect, but program runs
| 39,683,920 |
<p>How am I able to get my program to display year 1 for the first 12 months and then year 2 for the next 12 months if the input value for years = 2?</p>
<p>Also I don't know where my calculation is going wrong. According to my desired output, the total rainfall output should be 37, but I am getting 39.</p>
<p><a href="http://i.stack.imgur.com/YgaH7.png" rel="nofollow"><img src="http://i.stack.imgur.com/YgaH7.png" alt="Added picture of current output. Total should be at 37 not 39. I also need the second batch of 12 months to say year 2"></a></p>
<pre><code>#the following are the values for input:
#year 1 month 1 THROUGH year 1 month 11 = 1
#year 1 month 12 THROUGH year 2 month 12 = 2
def main():
#desired year = 2
years = int(input("Enter the number of years you want the rainfall calculator to determine: "))
calcRainFall(years)
def calcRainFall(yearsF):
months = 12
grandTotal = 0.0
for years_rain in range(yearsF):
total= 0.0
for month in range(months):
print('Enter the number of inches of rainfall for year 1 month', month + 1, end='')
rain = int(input(': '))
total += rain
grandTotal += total
#This is not giving me the total I need. output should be 37.
#rainTotal = rain + grandTotal
#print("The total amount of inches of rainfall for 2 year(s), is", rainTotal)
print("The total amount of inches of rainfall for 2 year(s), is", grandTotal)
main()
</code></pre>
| 0 |
2016-09-25T06:02:40Z
| 39,683,980 |
<p>Before the print statement, you don't need to add the <code>rain</code> value again for <code>rainTotal</code>. <code>grandTotal</code> already accounts for the rain of every year, so what you're doing is essentially adding the last value of <code>rain</code> on top of it a second time (2 in this case).
Make your print statement this, and remove <code>rainTotal</code>:</p>
<pre><code>print("The total amount of inches of rainfall for 2 year(s), is", grandTotal)
</code></pre>
| 3 |
2016-09-25T06:11:41Z
|
[
"python",
"nested-loops"
] |
Nested Loops calculation output is incorrect, but program runs
| 39,683,920 |
<p>How am I able to get my program to display year 1 for the first 12 months and then year 2 for the next 12 months if the input value for years = 2?</p>
<p>Also I don't know where my calculation is going wrong. According to my desired output, the total rainfall output should be 37, but I am getting 39.</p>
<p><a href="http://i.stack.imgur.com/YgaH7.png" rel="nofollow"><img src="http://i.stack.imgur.com/YgaH7.png" alt="Added picture of current output. Total should be at 37 not 39. I also need the second batch of 12 months to say year 2"></a></p>
<pre><code>#the following are the values for input:
#year 1 month 1 THROUGH year 1 month 11 = 1
#year 1 month 12 THROUGH year 2 month 12 = 2
def main():
#desired year = 2
years = int(input("Enter the number of years you want the rainfall calculator to determine: "))
calcRainFall(years)
def calcRainFall(yearsF):
months = 12
grandTotal = 0.0
for years_rain in range(yearsF):
total= 0.0
for month in range(months):
print('Enter the number of inches of rainfall for year 1 month', month + 1, end='')
rain = int(input(': '))
total += rain
grandTotal += total
#This is not giving me the total I need. output should be 37.
#rainTotal = rain + grandTotal
#print("The total amount of inches of rainfall for 2 year(s), is", rainTotal)
print("The total amount of inches of rainfall for 2 year(s), is", grandTotal)
main()
</code></pre>
| 0 |
2016-09-25T06:02:40Z
| 39,684,030 |
<pre><code>rainTotal = rain + grandTotal
</code></pre>
<blockquote>
<p>is performing the following: 2 + 37, because your last <code>rain</code> input is 2 and the total (<code>grandTotal</code>) is already 37 (the sum of the inputs for each year), so <em>rainTotal = rain + grandTotal</em> is <strong>not needed</strong></p>
</blockquote>
| 0 |
2016-09-25T06:18:58Z
|
[
"python",
"nested-loops"
] |
Nested Loops calculation output is incorrect, but program runs
| 39,683,920 |
<p>How am I able to get my program to display year 1 for the first 12 months and then year 2 for the next 12 months if the input value for years = 2?</p>
<p>Also I don't know where my calculation is going wrong. According to my desired output, the total rainfall output should be 37, but I am getting 39.</p>
<p><a href="http://i.stack.imgur.com/YgaH7.png" rel="nofollow"><img src="http://i.stack.imgur.com/YgaH7.png" alt="Added picture of current output. Total should be at 37 not 39. I also need the second batch of 12 months to say year 2"></a></p>
<pre><code>#the following are the values for input:
#year 1 month 1 THROUGH year 1 month 11 = 1
#year 1 month 12 THROUGH year 2 month 12 = 2
def main():
#desired year = 2
years = int(input("Enter the number of years you want the rainfall calculator to determine: "))
calcRainFall(years)
def calcRainFall(yearsF):
months = 12
grandTotal = 0.0
for years_rain in range(yearsF):
total= 0.0
for month in range(months):
print('Enter the number of inches of rainfall for year 1 month', month + 1, end='')
rain = int(input(': '))
total += rain
grandTotal += total
#This is not giving me the total I need. output should be 37.
#rainTotal = rain + grandTotal
#print("The total amount of inches of rainfall for 2 year(s), is", rainTotal)
print("The total amount of inches of rainfall for 2 year(s), is", grandTotal)
main()
</code></pre>
| 0 |
2016-09-25T06:02:40Z
| 39,684,058 |
<p>I've shortened your code a little bit. Hopefully this is a complete and correct program:</p>
<pre><code>def main():
years = int(input("Enter the number of years you want the rainfall calculator to determine: "))
calcRainFall(years)
def calcRainFall(yearsF):
months = 12 * yearsF # total number of months
grandTotal = 0.0 # inches of rain
for month in range(months):
# int(month / 12): rounds down to the nearest integer. Add 1 to start from year 1, not year 0.
# month % 12: finds the remainder when divided by 12. Add 1 to start from month 1, not month 0.
print('Enter the number of inches of rainfall for year', int(month / 12) + 1, 'month', month % 12 + 1, end='')
rain = int(input(': '))
grandTotal += rain
print("The total amount of inches of rainfall for", yearsF, "year(s), is", grandTotal)
main()
</code></pre>
| 2 |
2016-09-25T06:23:26Z
|
[
"python",
"nested-loops"
] |
Nested Loops calculation output is incorrect, but program runs
| 39,683,920 |
<p>How am I able to get my program to display year 1 for the first 12 months and then year 2 for the next 12 months if the input value for years = 2?</p>
<p>Also I don't know where my calculation is going wrong. According to my desired output, the total rainfall output should be 37, but I am getting 39.</p>
<p><a href="http://i.stack.imgur.com/YgaH7.png" rel="nofollow"><img src="http://i.stack.imgur.com/YgaH7.png" alt="Added picture of current output. Total should be at 37 not 39. I also need the second batch of 12 months to say year 2"></a></p>
<pre><code>#the following are the values for input:
#year 1 month 1 THROUGH year 1 month 11 = 1
#year 1 month 12 THROUGH year 2 month 12 = 2
def main():
#desired year = 2
years = int(input("Enter the number of years you want the rainfall calculator to determine: "))
calcRainFall(years)
def calcRainFall(yearsF):
months = 12
grandTotal = 0.0
for years_rain in range(yearsF):
total= 0.0
for month in range(months):
print('Enter the number of inches of rainfall for year 1 month', month + 1, end='')
rain = int(input(': '))
total += rain
grandTotal += total
#This is not giving me the total I need. output should be 37.
#rainTotal = rain + grandTotal
#print("The total amount of inches of rainfall for 2 year(s), is", rainTotal)
print("The total amount of inches of rainfall for 2 year(s), is", grandTotal)
main()
</code></pre>
| 0 |
2016-09-25T06:02:40Z
| 39,684,102 |
<p>Notes for your code:</p>
<p>As stated earlier, rainTotal is unneeded.</p>
<p>You can try: </p>
<pre><code>print('Enter the number of inches of rainfall for year %d month %d' % (years_rain + 1, month + 1), end='')
</code></pre>
<p>This will fill the first %d with the year number (years_rain + 1) and the second %d with the month number (month + 1) as your for loops run. </p>
<p>This trick can also be used on the final print line as shown below:</p>
<pre><code>print("The total amount of inches of rainfall for %d year(s), is %s" % (yearsF, grandTotal))
</code></pre>
| 0 |
2016-09-25T06:32:38Z
|
[
"python",
"nested-loops"
] |
scipy.sparse.coo_matrix how to fast find all zeros column, fill with 1 and normalize
| 39,683,931 |
<p>For a matrix, I want to find columns with all zeros and fill them with 1s, and then normalize the matrix by column. I know how to do that with np.arrays</p>
<pre><code>[[0 0 0 0 0]
[0 0 1 0 0]
[1 0 0 1 0]
[0 0 0 0 1]
[1 0 0 0 0]]
|
V
[[0 1 0 0 0]
[0 1 1 0 0]
[1 1 0 1 0]
[0 1 0 0 1]
[1 1 0 0 0]]
|
V
[[0 0.2 0 0 0]
[0 0.2 1 0 0]
[0.5 0.2 0 1 0]
[0 0.2 0 0 1]
[0.5 0.2 0 0 0]]
</code></pre>
<p>But how can I do the same thing when the matrix is in scipy.sparse.coo.coo_matrix form, without converting it back to np.arrays. how can I achieve the same thing?</p>
| 0 |
2016-09-25T06:04:12Z
| 39,684,081 |
<p>This will be a lot easier with the <code>lil</code> format, and working with rows rather than columns:</p>
<pre><code>In [1]: from scipy import sparse
In [2]: A=np.array([[0,0,0,0,0],[0,0,1,0,0],[1,0,0,1,0],[0,0,0,0,1],[1,0,0,0,0]])
In [3]: A
Out[3]:
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[1, 0, 0, 1, 0],
[0, 0, 0, 0, 1],
[1, 0, 0, 0, 0]])
In [4]: At=A.T # switch to work with rows
In [5]: M=sparse.lil_matrix(At)
</code></pre>
<p>Now it is obvious which row is all zeros</p>
<pre><code>In [6]: M.data
Out[6]: array([[1, 1], [], [1], [1], [1]], dtype=object)
In [7]: M.rows
Out[7]: array([[2, 4], [], [1], [2], [3]], dtype=object)
</code></pre>
<p>And <code>lil</code> format allows us to fill that row:</p>
<pre><code>In [8]: M.data[1]=[1,1,1,1,1]
In [9]: M.rows[1]=[0,1,2,3,4]
In [10]: M.A
Out[10]:
array([[0, 0, 1, 0, 1],
[1, 1, 1, 1, 1],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0]], dtype=int32)
</code></pre>
<p>I could have also used <code>M[1,:]=np.ones(5,int)</code></p>
<p>The <code>coo</code> format is great for creating the array from the <code>data/row/col</code> arrays, but doesn't implement indexing or math. It has to be transformed to <code>csr</code> for that. And <code>csc</code> for column oriented stuff.</p>
<p>The row that I filled isn't so obvious in the csr format:</p>
<pre><code>In [14]: Mc=M.tocsr()
In [15]: Mc.data
Out[15]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)
In [16]: Mc.indices
Out[16]: array([2, 4, 0, 1, 2, 3, 4, 1, 2, 3], dtype=int32)
In [17]: Mc.indptr
Out[17]: array([ 0, 2, 7, 8, 9, 10], dtype=int32)
</code></pre>
<p>On the other hand normalizing is probably easier in this format.</p>
<pre><code>In [18]: Mc.sum(axis=1)
Out[18]:
matrix([[2],
[5],
[1],
[1],
[1]], dtype=int32)
In [19]: Mc/Mc.sum(axis=1)
Out[19]:
matrix([[ 0. , 0. , 0.5, 0. , 0.5],
[ 0.2, 0.2, 0.2, 0.2, 0.2],
[ 0. , 1. , 0. , 0. , 0. ],
[ 0. , 0. , 1. , 0. , 0. ],
[ 0. , 0. , 0. , 1. , 0. ]])
</code></pre>
<p>Notice that it's converted the sparse matrix to a dense one. The <code>sum</code> is dense, and math involving sparse and dense usually produces dense.</p>
<p>I have to use a more round about calculation to preserve the sparse status:</p>
<pre><code>In [27]: Mc.multiply(sparse.csr_matrix(1/Mc.sum(axis=1)))
Out[27]:
<5x5 sparse matrix of type '<class 'numpy.float64'>'
with 10 stored elements in Compressed Sparse Row format>
</code></pre>
<p>Here's a way of doing this with the <code>csc</code> format (on <code>A</code>)</p>
<pre><code>In [40]: Ms=sparse.csc_matrix(A)
In [41]: Ms.sum(axis=0)
Out[41]: matrix([[2, 0, 1, 1, 1]], dtype=int32)
</code></pre>
<p>Use <code>sum</code> to find the all-zeros column. Obviously this could be wrong if the columns have negative values and happen to sum to 0. If that's a concern I can see making a copy of the matrix with all <code>data</code> values replaced by 1.</p>
<pre><code>In [43]: Ms[:,1]=np.ones(5,int)[:,None]
/usr/lib/python3/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csc_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [44]: Ms.A
Out[44]:
array([[0, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[1, 1, 0, 1, 0],
[0, 1, 0, 0, 1],
[1, 1, 0, 0, 0]])
</code></pre>
<p>The warning matters more if you do this sort of change repeatedly. Notice I have to adjust the dimension of the LHS array. Depending on the number of all-zero columns this action can change the sparsity of the matrix substantially.</p>
<p>==================</p>
<p>I could search the <code>col</code> of <code>coo</code> format for missing values with:</p>
<pre><code>In [69]: Mo=sparse.coo_matrix(A)
In [70]: Mo.col
Out[70]: array([2, 0, 3, 4, 0], dtype=int32)
In [71]: Mo.col==np.arange(Mo.shape[1])[:,None]
Out[71]:
array([[False, True, False, False, True],
[False, False, False, False, False],
[ True, False, False, False, False],
[False, False, True, False, False],
[False, False, False, True, False]], dtype=bool)
In [72]: idx = np.nonzero(~(Mo.col==np.arange(Mo.shape[1])[:,None]).any(axis=1))[0]
In [73]: idx
Out[73]: array([1], dtype=int32)
</code></pre>
<p>I could then add a column of 1s at this <code>idx</code> with:</p>
<pre><code>In [75]: N=Mo.shape[0]
In [76]: data = np.concatenate([Mo.data, np.ones(N,int)])
In [77]: row = np.concatenate([Mo.row, np.arange(N)])
In [78]: col = np.concatenate([Mo.col, np.ones(N,int)*idx])
In [79]: Mo1 = sparse.coo_matrix((data,(row, col)), shape=Mo.shape)
In [80]: Mo1.A
Out[80]:
array([[0, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[1, 1, 0, 1, 0],
[0, 1, 0, 0, 1],
[1, 1, 0, 0, 0]])
</code></pre>
<p>As written it works for just one column, but it could be generalized to several. I also created a new matrix rather than update <code>Mo</code>. But this in-place seems to work as well:</p>
<pre><code>Mo.data,Mo.col,Mo.row = data,col,row
</code></pre>
<p>The normalization still requires <code>csr</code> conversion, though I think <code>sparse</code> can hide that for you.</p>
<pre><code>In [87]: Mo1/Mo1.sum(axis=0)
Out[87]:
matrix([[ 0. , 0.2, 0. , 0. , 0. ],
[ 0. , 0.2, 1. , 0. , 0. ],
[ 0.5, 0.2, 0. , 1. , 0. ],
[ 0. , 0.2, 0. , 0. , 1. ],
[ 0.5, 0.2, 0. , 0. , 0. ]])
</code></pre>
<p>Even when I take the extra work of maintaining the sparse nature, I still get a <code>csr</code> matrix:</p>
<pre><code>In [89]: Mo1.multiply(sparse.coo_matrix(1/Mo1.sum(axis=0)))
Out[89]:
<5x5 sparse matrix of type '<class 'numpy.float64'>'
with 10 stored elements in Compressed Sparse Row format>
</code></pre>
<p>See</p>
<p><a href="http://stackoverflow.com/questions/39711838/find-all-zero-columns-in-pandas-sparse-matrix">Find all-zero columns in pandas sparse matrix</a></p>
<p>for more methods of finding the 0 columns. It turns out <code>Mo.col==np.arange(Mo.shape[1])[:,None]</code> is too slow with large <code>Mo</code>. A test using <code>np.in1d</code> is much better.</p>
<pre><code>1 - np.in1d(np.arange(Mo.shape[1]),Mo.col)
</code></pre>
| 1 |
2016-09-25T06:28:02Z
|
[
"python",
"numpy",
"scipy",
"linear-algebra",
"sparse-matrix"
] |
What is wrong with the following code for computing distance between all pairs of vectors?
| 39,684,114 |
<p>I'm trying to find manhattan distance between all pairs of vectors.</p>
<pre><code>import numpy as np
import itertools
class vector:
def __init__(self):
self.a = 0
self.b = 0
c = vector()
d = vector()
l = vector()
m = vector()
e = [c,d]
n = [l,m]
o = np.array(n)
f = np.array(e)
p = itertools.product(o,f)
p = list(p)
def comp(x):
return (x[0].a-x[1].a) + (x[0].b-x[1].b)
g = np.vectorize(comp)
print g(p)
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2207, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2270, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2232, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "<stdin>", line 2, in comp
AttributeError: vector instance has no attribute '__getitem__'
</code></pre>
| 1 |
2016-09-25T06:34:21Z
| 39,684,213 |
<p>I have to say I'd approach this differently. Numerical Python doesn't deal well with Python classes and such.</p>
<p>Your class</p>
<pre><code>class vector:
def __init__(self):
self.a = 0
self.b = 0
</code></pre>
<p>Is basically a length-2 vector. So, if you're going to operate on many length-2 vectors, I'd suggest something like this:</p>
<pre><code>In [13]: p = np.array([[1, 2], [3, 4], [5, 6]])
In [14]: p
Out[14]:
array([[1, 2],
[3, 4],
[5, 6]])
</code></pre>
<p>Each row is a length-2 vector. There are 3 such vectors. This is far far far more efficient than a Python <code>list</code> of Python classes.</p>
<p>Now your <code>comp</code> function</p>
<pre><code>def comp(x):
return (x[0].a-x[1].a) + (x[0].b-x[1].b)
</code></pre>
<p>is basically equivalent to</p>
<pre><code>def comp(x):
return (x[0].a+x[0].b) - (x[1].a+x[1].b)
</code></pre>
<p>i.e., the component sum of the first vector, minus the component sum of the second vector. That being the case, you can efficiently calculate the pairwise outputs via</p>
<pre><code>In [15]: q = p.sum(axis=1)
</code></pre>
<p>for calculating the component sum of each vector, followed by</p>
<pre><code>In [16]: np.subtract.outer(q, q)
Out[16]:
array([[ 0, -4, -8],
[ 4, 0, -4],
[ 8, 4, 0]])
</code></pre>
| 2 |
2016-09-25T06:50:27Z
|
[
"python",
"vectorization",
"itertools"
] |
What is wrong with the following code for computing distance between all pairs of vectors?
| 39,684,114 |
<p>I'm trying to find manhattan distance between all pairs of vectors.</p>
<pre><code>import numpy as np
import itertools
class vector:
def __init__(self):
self.a = 0
self.b = 0
c = vector()
d = vector()
l = vector()
m = vector()
e = [c,d]
n = [l,m]
o = np.array(n)
f = np.array(e)
p = itertools.product(o,f)
p = list(p)
def comp(x):
return (x[0].a-x[1].a) + (x[0].b-x[1].b)
g = np.vectorize(comp)
print g(p)
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2207, in __call__
return self._vectorize_call(func=func, args=vargs)
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2270, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "/usr/local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 2232, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "<stdin>", line 2, in comp
AttributeError: vector instance has no attribute '__getitem__'
</code></pre>
| 1 |
2016-09-25T06:34:21Z
| 39,684,233 |
<p>The way you've written <code>comp</code>, it expects to be called with a two-tuple as an argument, but that's not what happens. <code>p</code> is a list of tuples. When you call a vectorized function on it, it is converted to a numpy array. The tuples are split into separate columns so you get a 4x2 array. Your function is then called on each cell of this array. So it gets called with just one vector object as an argument.</p>
<p>It's not really clear what you're trying to accomplish here. If your objects are not numbers, you won't gain anything by using things like <code>np.vectorize</code> on them; you should just call your function in a loop. If your objects are numbers, then just store them in an ordinary numpy array, and make use of better ways to compute such distances, like the <code>pdist</code> function in <code>scipy</code>.</p>
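<p>For example, a minimal sketch of the numeric-array approach (the sample vectors are placeholders): <code>pdist</code> computes the distances between all pairs within one array, and its companion <code>cdist</code> between two arrays, both supporting the Manhattan ('cityblock') metric.</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist

# each row is a length-2 vector
o = np.array([[0, 0], [1, 1]])
f = np.array([[2, 3], [4, 5]])

# 'cityblock' is the Manhattan (L1) distance
print(cdist(o, f, metric='cityblock'))
# [[ 5.  9.]
#  [ 3.  7.]]
</code></pre>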
| 0 |
2016-09-25T06:53:19Z
|
[
"python",
"vectorization",
"itertools"
] |
Writing a mandelbrot set to an image in python
| 39,684,141 |
<p>I am trying to write a mandelbrot set to an image in python, and am having a problem with one of my functions. </p>
<p>The issue is: While I expect something like <a href="http://i.stack.imgur.com/mOwfm.jpg" rel="nofollow">this</a>. I am getting a plain white image. Here is my code:</p>
<p>Quick Summary of code:
Check if value is in set, if it is, mark it as true in an array of booleans. Then, draw the image based on the array of booleans, coloring the true, and leaving the false ones.</p>
<pre><code>import math
import numpy as np
import scipy.misc as smp
from PIL import PILLOW_VERSION
from PIL import Image
def iterate(x, y, iterationNum):
z = 0
coord = complex(x, y)
for a in xrange(iterationNum):
#Don't use fabs. It can be negative.
z = z * z + coord
#This is a comparison between complex and int. It probably won't work.
#You want |Z| which is: z.real ** 2 + z.imag ** 2 > 4
if math.fabs(z) > 2:
return False
return True
def pixel(image,x,y,r,g,b):
"""Place pixel at pos=(x,y) on image, with color=(r,g,b)"""
image.put("#%02x%02x%02x" % (r,g,b), (y, x))
#here's some example coloring code that may help:
def draw(grid):
#Create a white image with the size of the grid as the number of pixels
img = Image.new('RGB', (len(grid), len(grid)), "white")
pixels = img.load()
for row in xrange(len(grid)):
for col in xrange(len(grid[row])):
if grid[row][col] == True:
#If that point is True (it's in the set), color it blue
pixels[row, col] = (0, 0, 255)
return img
def mandelbrot():
#you should probably use a square, it's easier to deal with
#The mandelbrot set fits completely within (-2, 2) and (2, -2)
#(-200, 200), (200, -200) is way too big!
TopLeftX = -2; BottomRightX = 2
TopLeftY = 2; BottomRightY = -2
#increment should be calculated based on the size of the bounds and the number of pixels
#For example, if you're between -2 and 2 on the X-Plane, and your image is 400 pixels wide
#Then your increment = (2 - (-2)) / 400 = 4 / 400 = .01 so that each pixel is 1/400th of the
#Total width of the bounding area
increment = 0.01
maxIt = 100
w = BottomRightX - TopLeftX
h = TopLeftY - BottomRightY
#This should be based on the size of the image, one spot in the area for one pixel
npArr = np.zeros((w / increment, h / increment), dtype=bool)
#Use the increment variable from above. It won't work with xrange because that doesn't
#Support decimals. You probably want to use a while loop or something
x = -2
y = 2
while TopLeftX <= x <= BottomRightX:
while TopLeftY <= y <= BottomRightY:
#I recommend using True or False in here (in the set or not)
#And then do your color calculations as I explained above
#Saves a lot of memory
if iterate(x, y, maxIt):
npArr[x, y] = True
y += increment
#once you've calculated the Trues and Falses, you'd call the draw() function
#using the npArr as the parameter. I haven't tested the code, so there may
#be a few bugs, but it should be helpful!
x += increment
return npArr
img = draw(mandelbrot())
img.save("mandelbrot.png")
</code></pre>
<p>I suspect the problem is with the "iterate" function in my code, because none of the values i put in iterate are returning true.</p>
<p><strong>EDIT</strong>
I have another issue as well, The second for loop I have here isnt even running.</p>
| -1 |
2016-09-25T06:37:45Z
| 39,689,842 |
<p>Your handling of the <code>y</code> coordinate is faulty. You begin the outer loop with </p>
<pre><code>y = 2
</code></pre>
<p>and have the loop condition as </p>
<pre><code>while TopLeftY <= y <= BottomRightY:
</code></pre>
<p>After substituting their values, this is</p>
<pre><code>while 2 <= y <= -2:
</code></pre>
<p>which is nonsense. This is followed by</p>
<pre><code>y += increment
</code></pre>
<p>but <code>y</code> is already at the top end of the range. Moreover, you fail to reset <code>y</code> for each inner loop.</p>
<p>To summarise, the loop should be</p>
<pre><code>x = TopLeftX # use the value you already defined!
while TopLeftX <= x <= BottomRightX:
y = TopLeftY # moved to inside x loop
while TopLeftY >= y >= BottomRightY: # change the loop condition
# ... the Mandelbrot iteration
y -= increment # reverse direction
x += increment
</code></pre>
<p>I am no Python expert, so there may be other problems too.</p>
| 0 |
2016-09-25T17:39:16Z
|
[
"python",
"mandelbrot"
] |
How can I make use of intel-mkl with tensorflow
| 39,684,300 |
<p>I've seen a lot of documentation about making use of a CPU with tensorflow, however, I don't have a GPU. What I do have is a fairly capable CPU and a holing 5GB of intel math kernel, which, I hope, might help me speed up tensorflow a fair bit.</p>
<p>Does anyone know how I can "make" tensorflow use the intel-mkl ?</p>
| 3 |
2016-09-25T07:04:11Z
| 39,686,012 |
<p>You can install the <a href="https://software.intel.com/en-us/intel-distribution-for-python" rel="nofollow">Intel Distribution for Python</a>, which comes packaged with MKL-optimized NumPy, SciPy, etc. You can then install TensorFlow on top of that installation. TensorFlow itself uses Eigen, which already makes good use of the SIMD vector lanes on a CPU. </p>
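<p>A quick way to check whether the NumPy you end up with is actually MKL-backed is to print its build configuration:</p>
<pre><code>import numpy as np

# prints the BLAS/LAPACK build info; an MKL-linked build lists the mkl libraries
np.__config__.show()
</code></pre>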
| 0 |
2016-09-25T10:49:40Z
|
[
"python",
"c++",
"numpy",
"tensorflow",
"blas"
] |
How can I make use of intel-mkl with tensorflow
| 39,684,300 |
<p>I've seen a lot of documentation about making use of a CPU with tensorflow, however, I don't have a GPU. What I do have is a fairly capable CPU and a holing 5GB of intel math kernel, which, I hope, might help me speed up tensorflow a fair bit.</p>
<p>Does anyone know how I can "make" tensorflow use the intel-mkl ?</p>
| 3 |
2016-09-25T07:04:11Z
| 39,771,199 |
<p>Since tensorflow uses Eigen, try to use an MKL enabled version of Eigen as described <a href="https://eigen.tuxfamily.org/dox/TopicUsingIntelMKL.html" rel="nofollow">here</a>:</p>
<blockquote>
<ol>
<li>define the EIGEN_USE_MKL_ALL macro before including any Eigen's header</li>
<li>link your program to MKL libraries (see the <a href="http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/" rel="nofollow">MKL linking advisor</a>)</li>
<li>on a 64bits system, you must use the LP64 interface (not the ILP64 one)</li>
</ol>
</blockquote>
<p>So one way to do it is to follow the above steps to modify the source of tensorflow, recompile and install on your machine. While you're at it you should also try the Intel compiler, which might provide a decent performance boost by itself, if you <a href="https://software.intel.com/en-us/articles/step-by-step-optimizing-with-intel-c-compiler" rel="nofollow">set the correct flags</a>: <code>-O3 -xHost -ipo</code>.</p>
| 0 |
2016-09-29T13:07:32Z
|
[
"python",
"c++",
"numpy",
"tensorflow",
"blas"
] |
Heroku gunicorn flask login is not working properly
| 39,684,364 |
<p>I have a flask app that uses Flask-Login for authentication. Everything works fine locally both using flask's built in web server and gunicorn run locally. But when it's on heroku it's faulty, sometimes it logs me in and sometimes it does not. When I successfully logged in within a few seconds of navigating my session just gets destroyed and have me logged out automatically. This should just happen when the user logged out.</p>
<p>The following code snippet in my view might be relevant:</p>
<pre><code>@app.before_request
def before_request():
g.user = current_user
# I have index (/) and other views (/new) decorated with @login_required
</code></pre>
<p>I might be having similar issues <a href="http://stackoverflow.com/questions/13614877/flask-login-and-heroku-issues">with this</a>. It does not have any answers yet and from what I read from the comments, the author just ran his app with <code>python app.py</code>. That is through using flask's built-in web server. However I can't seem to duplicate his workaround since running <code>app.run(host='0.0.0.0')</code> runs the app in port <code>5000</code> and I can't seem to set <code>port=80</code> because of permission.</p>
<p>I don't see anything helpful with the logs except that it does not authenticate even when I should.</p>
<p>Part of the logs when I got authenticated and tried to navigate to <code>/new</code> and <code>/</code> alternately until it logs me out:</p>
<pre><code>2016-09-25T06:57:53.052378+00:00 app[web.1]: authenticated - IP:10.179.239.229
2016-09-25T06:57:53.455145+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=c7c8f4c9-b003-446e-92d8-af0a81985e72 fwd="124.100.201.61" dyno=web.1 connect=0ms service=116ms status=200 bytes=6526
2016-09-25T06:58:11.415837+00:00 heroku[router]: at=info method=GET path="/new" host=testdep0.herokuapp.com request_id=ae5e4e29-0345-4a09-90c4-36fb64785079 fwd="124.100.201.61" dyno=web.1 connect=0ms service=7ms status=200 bytes=2552
2016-09-25T06:58:13.543098+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=47696ab9-57b9-4f20-810a-66033e3e9e50 fwd="124.100.201.61" dyno=web.1 connect=0ms service=8ms status=200 bytes=5982
2016-09-25T06:58:18.037766+00:00 heroku[router]: at=info method=GET path="/new" host=testdep0.herokuapp.com request_id=98912601-6342-4d71-a106-26056e4bbb21 fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=200 bytes=2552
2016-09-25T06:58:19.619369+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=2b04d31f-93a2-4653-83a4-f95ca9b97149 fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=302 bytes=640
2016-09-25T06:58:19.953910+00:00 heroku[router]: at=info method=GET path="/login?next=%2F" host=testdep0.herokuapp.com request_id=e80d15cd-e9ad-45ff-ae54-e156412fe4ff fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=200 bytes=2793
</code></pre>
<p>Procfile:</p>
<p><code>web: gunicorn app:app</code></p>
| 0 |
2016-09-25T07:15:29Z
| 39,768,181 |
<p>The problem was solved by adding the <code>--preload</code> option to gunicorn. I'm not entirely sure how that solved the problem and would appreciate if someone can explain.</p>
<p>Updated Procfile:</p>
<p><code>web: gunicorn app:app --preload</code></p>
| 0 |
2016-09-29T10:46:00Z
|
[
"python",
"heroku",
"flask",
"gunicorn",
"flask-login"
] |
How to end a while loop properly?
| 39,684,365 |
<p>I want to create a program that asks a user for sentences that then combine to create a story that is displayed to the user. The user decides how many sentences he or she wishes to write. </p>
<p>This is probably a dumb question with a simple answer, but with the code below a <code>q</code> or <code>Q</code> is always added to the end of the story when I don't want the end command to be included. How can I eliminate this <code>q</code> from the printed story so that the user only gets his or her story returned to them. </p>
<p>Thank you for any help. </p>
<pre><code>sent = ""
story = ""
while sent != 'q' and sent != 'Q':
sent = input("Enter the sentence(Enter 'q' to quit): ")
story += sent
print(story)
</code></pre>
| -1 |
2016-09-25T07:15:44Z
| 39,684,389 |
<p>You should check after the input whether the loop should end or not. If so, you can exit using <code>break</code>:</p>
<pre><code>story = ""
while True:
    sentence = input("Enter the sentence(Enter 'q' to quit): ")
    if sentence.lower() != 'q':
        story += sentence
    else:
        break
print(story)
</code></pre>
| 2 |
2016-09-25T07:19:18Z
|
[
"python"
] |
How to end a while loop properly?
| 39,684,365 |
<p>I want to create a program that asks a user for sentences that then combine to create a story that is displayed to the user. The user decides how many sentences he or she wishes to write. </p>
<p>This is probably a dumb question with a simple answer, but with the code below a <code>q</code> or <code>Q</code> is always added to the end of the story when I don't want the end command to be included. How can I eliminate this <code>q</code> from the printed story so that the user only gets his or her story returned to them. </p>
<p>Thank you for any help. </p>
<pre><code>sent = ""
story = ""
while sent != 'q' and sent != 'Q':
sent = input("Enter the sentence(Enter 'q' to quit): ")
story += sent
print(story)
</code></pre>
| -1 |
2016-09-25T07:15:44Z
| 39,684,407 |
<p>The extra 'q' gets inserted because you are storing the character 'q' in variable sent. Perform story+=sent before the input statement instead.</p>
<pre><code>sent = ""
story = ""
while sent.lower() != 'q':
story += sent
sent = input("Enter the sentence(Enter 'q' to quit): ")
print(story)
</code></pre>
| 1 |
2016-09-25T07:22:48Z
|
[
"python"
] |
TensorFlow getting elements of every row for specific columns
| 39,684,415 |
<p>If <code>A</code> is a TensorFlow variable like so </p>
<pre><code>A = tf.Variable([[1, 2], [3, 4]])
</code></pre>
<p>and <code>index</code> is another variable </p>
<pre><code>index = tf.Variable([0, 1])
</code></pre>
<p>I want to use this index to select columns in each row. In this case, item 0 from first row and item 1 from second row.</p>
<p>If A was a Numpy array then to get the columns of corresponding rows mentioned in index we can do </p>
<pre><code>x = A[np.arange(A.shape[0]), index]
</code></pre>
<p>and the result would be </p>
<pre><code>[1, 4]
</code></pre>
<p>What is the TensorFlow equivalent operation/operations for this? I know TensorFlow doesn't support many indexing operations. What would be the work around if it cannot be done directly?</p>
| 2 |
2016-09-25T07:23:46Z
| 39,686,130 |
<p>After dabbling around for quite a while, I found two functions that could be useful.</p>
<p>One is <code>tf.gather_nd()</code>, which is useful if you can produce a tensor
of the form <code>[[0, 0], [1, 1]]</code>, in which case you could do </p>
<p><code>index = tf.constant([[0, 0], [1, 1]])</code></p>
<p><code>tf.gather_nd(A, index)</code></p>
<p>If for some reason you are unable to produce a vector of the form <code>[[0, 0], [1, 1]]</code> (I couldn't, as the number of rows in my case depended on a placeholder), then the workaround I found is to use <code>tf.py_func()</code>. Here is example code showing how this can be done: </p>
<pre><code>import tensorflow as tf
import numpy as np
def index_along_every_row(array, index):
N, _ = array.shape
return array[np.arange(N), index]
a = tf.Variable([[1, 2], [3, 4]], dtype=tf.int32)
index = tf.Variable([0, 1], dtype=tf.int32)
a_slice_op = tf.py_func(index_along_every_row, [a, index], [tf.int32])[0]
session = tf.InteractiveSession()
a.initializer.run()
index.initializer.run()
a_slice = a_slice_op.eval()
</code></pre>
<p><code>a_slice</code> will be a numpy array <code>[1, 4]</code></p>
| 2 |
2016-09-25T11:05:06Z
|
[
"python",
"numpy",
"tensorflow"
] |
sqlite - return all columns for max of one column without repeats
| 39,684,477 |
<p>Im using Python to query a SQL database. I'm fairly new with databases. I've tried looking up this question, but I can't find a similar enough question to get the right answer.</p>
<p>I have a table with multiple columns/rows. I want to find the MAX of a single column, I want ALL columns returned (the entire ROW), and I want only one instance of the MAX. Right now I'm getting ten ROWS returned, because the MAX is repeated ten times. I only want one ROW returned.</p>
<p>The query strings I've tried so far:</p>
<pre><code>sql = 'select max(f) from cbar'
# this returns one ROW, but only a single COLUMN (a single value)
sql = 'select * from cbar where f = (select max(f) from cbar)'
# this returns all COLUMNS, but it also returns multiple ROWS
</code></pre>
<p>I've tried a bunch more, but they returned nothing. They weren't right somehow. That's the problem, I'm too new to find the middle ground between my two working query statements.</p>
| 0 |
2016-09-25T07:35:19Z
| 39,688,400 |
<p>In SQLite 3.7.11 or later, you can just retrieve all columns together with the maximum value:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT *, max(f) FROM cbar;
</code></pre>
<p>But the SQLite library bundled with your Python might be too old. In the general case, you can sort the table by that column in descending order and then just read the first row:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM cbar ORDER BY f DESC LIMIT 1;
</code></pre>
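<p>A minimal sketch of running that query from Python with the built-in <code>sqlite3</code> module (the database filename is a placeholder):</p>
<pre><code>import sqlite3

conn = sqlite3.connect('mydata.db')   # placeholder filename
cur = conn.cursor()
cur.execute('SELECT * FROM cbar ORDER BY f DESC LIMIT 1')
row = cur.fetchone()   # one tuple with all columns of the max-f row, or None if the table is empty
print(row)
conn.close()
</code></pre>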
| 1 |
2016-09-25T15:08:50Z
|
[
"python",
"sqlite",
"max"
] |
Searching a file for words from a list
| 39,684,483 |
<p>I am trying to search for words in a file. Those words are stored in a separate list.
The words that are found are stored in another list and that list is returned in the end.</p>
<p>The code looks like:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
for word in line.split():
matching = [s for s in qualities if word.lower() in s]
if matching is not None:
education.append(matching)
return education
</code></pre>
<p>First it returns me a list with bunch of empty "seats" which means my comparison isn't working?</p>
<p>The result (scans 4 files):</p>
<pre><code>"C:\Program Files (x86)\Python2\python.exe" C:/Users/Vadim/PycharmProjects/TestFiles/ReadTXT.py
[[], [], [], [], [], [], [], [], [], ['java', 'javascript']]
[[], [], [], [], [], [], [], [], [], ['pascal']]
[[], [], [], [], [], [], [], [], [], ['linux']]
[[], [], [], [], [], [], [], [], [], [], ['c#']]
Process finished with exit code 0
</code></pre>
<p>The input file contains:</p>
<pre><code>Name: Some Name
Phone: 1234567890
email: some@email.com
python,excel,linux
</code></pre>
<p>Second issue each file containes 3 different skills, but the function finds only 1 or 2. Is it also a bad comparison or do I have a different error here?</p>
<p>I would expect the result being a list of just the found skills without the empty places and to find all the skills in the file, not just some of them.</p>
<p><strong>Edit</strong>: The function does find all the skills when I do <code>word.split(', ')</code>
but if I would like it to be more universal, what could be a good way to find those skills if I don't know exactly what will separate them?</p>
| 1 |
2016-09-25T07:36:16Z
| 39,684,613 |
<p>You get empty lists because <code>None</code> is not equal to an empty list. What you might want is to change the condition to the following:</p>
<pre><code>if matching:
# do your stuff
</code></pre>
<p>It seems that you're checking whether a substring is present in the strings in the qualities list, which might not be what you want. If you want to collect the words on a line that appear in the qualities list, you might want to change your list comprehension to:</p>
<pre><code>words = line.split()
match = [word for word in words if word.lower() in qualities]
</code></pre>
<p>If you're looking into matching both <code>,</code> and spaces, you might want to look into regex. See <a href="http://stackoverflow.com/questions/1059559/python-split-strings-with-multiple-delimiters">Python - Split Strings with Multiple Delimiters</a>.</p>
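<p>For example, a minimal sketch of splitting on either commas or whitespace with <code>re.split</code> (the sample line is a placeholder):</p>
<pre><code>import re

line = "python,excel, linux"   # placeholder input line
# split on runs of commas and/or whitespace, dropping empty pieces
words = [w for w in re.split(r'[,\s]+', line) if w]
print(words)   # ['python', 'excel', 'linux']
</code></pre>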
| 1 |
2016-09-25T07:56:22Z
|
[
"python",
"python-2.7"
] |
Searching a file for words from a list
| 39,684,483 |
<p>I am trying to search for words in a file. Those words are stored in a separate list.
The words that are found are stored in another list and that list is returned in the end.</p>
<p>The code looks like:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
for word in line.split():
matching = [s for s in qualities if word.lower() in s]
if matching is not None:
education.append(matching)
return education
</code></pre>
<p>First it returns me a list with bunch of empty "seats" which means my comparison isn't working?</p>
<p>The result (scans 4 files):</p>
<pre><code>"C:\Program Files (x86)\Python2\python.exe" C:/Users/Vadim/PycharmProjects/TestFiles/ReadTXT.py
[[], [], [], [], [], [], [], [], [], ['java', 'javascript']]
[[], [], [], [], [], [], [], [], [], ['pascal']]
[[], [], [], [], [], [], [], [], [], ['linux']]
[[], [], [], [], [], [], [], [], [], [], ['c#']]
Process finished with exit code 0
</code></pre>
<p>The input file contains:</p>
<pre><code>Name: Some Name
Phone: 1234567890
email: some@email.com
python,excel,linux
</code></pre>
<p>Second issue each file containes 3 different skills, but the function finds only 1 or 2. Is it also a bad comparison or do I have a different error here?</p>
<p>I would expect the result being a list of just the found skills without the empty places and to find all the skills in the file, not just some of them.</p>
<p><strong>Edit</strong>: The function does find all the skills when I do <code>word.split(', ')</code>
but if I would like it to be more universal, what could be a good way to find those skills if I don't know exactly what will separate them?</p>
| 1 |
2016-09-25T07:36:16Z
| 39,684,616 |
<p>The code should be written as follows (if I understand the desired output format correctly):</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
matching = []
            for word in line.strip().split(","):
                word = word.lower()
                if word in qualities:
                    matching.append(word)
if len(matching) != 0:
education.append(matching)
return education
</code></pre>
| 1 |
2016-09-25T07:56:43Z
|
[
"python",
"python-2.7"
] |
Searching a file for words from a list
| 39,684,483 |
<p>I am trying to search for words in a file. Those words are stored in a separate list.
The words that are found are stored in another list and that list is returned in the end.</p>
<p>The code looks like:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
for word in line.split():
matching = [s for s in qualities if word.lower() in s]
if matching is not None:
education.append(matching)
return education
</code></pre>
<p>First it returns me a list with bunch of empty "seats" which means my comparison isn't working?</p>
<p>The result (scans 4 files):</p>
<pre><code>"C:\Program Files (x86)\Python2\python.exe" C:/Users/Vadim/PycharmProjects/TestFiles/ReadTXT.py
[[], [], [], [], [], [], [], [], [], ['java', 'javascript']]
[[], [], [], [], [], [], [], [], [], ['pascal']]
[[], [], [], [], [], [], [], [], [], ['linux']]
[[], [], [], [], [], [], [], [], [], [], ['c#']]
Process finished with exit code 0
</code></pre>
<p>The input file contains:</p>
<pre><code>Name: Some Name
Phone: 1234567890
email: some@email.com
python,excel,linux
</code></pre>
<p>Second issue each file containes 3 different skills, but the function finds only 1 or 2. Is it also a bad comparison or do I have a different error here?</p>
<p>I would expect the result being a list of just the found skills without the empty places and to find all the skills in the file, not just some of them.</p>
<p><strong>Edit</strong>: The function does find all the skills when I do <code>word.split(', ')</code>
but if I would like it to be more universal, what could be a good way to find those skills if I don't know exactly what will separate them?</p>
| 1 |
2016-09-25T07:36:16Z
| 39,684,635 |
<p>First of all, you are getting a bunch of "empty seats" because your condition is not defined correctly. If matching is an empty list, it is not None. That is: <code>[] is not None</code> evaluates to <code>True</code>. This is why you are getting all these "empty seats".</p>
<p>Second of all, the condition in your list comprehension is also not what you'd want. Unless I've misunderstood your goal here, the condition you are looking for is this:</p>
<p><code>[s for s in qualities if word.lower() == s]</code></p>
<p>This checks the list of qualities and will return a list that is not empty only if the word is one of the qualities. However, since the length of this list will always be either 1 (if there's a match) or 0 (if there isn't), we can exchange it for a boolean by using Python's built-in <code>any()</code> function:</p>
<pre><code>if any(s == word.lower() for s in qualities):
education.append(word)
</code></pre>
<p>I hope this helps; please don't hesitate to ask any follow-up questions if you have them, or tell me if I've misunderstood your goals.</p>
<p>For your convenience, here is the modified source I've used to check myself:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open(file, 'r') as file1:
for line in file1:
for word in line.split():
if any(s == word.lower() for s in qualities):
education.append(word)
return education
</code></pre>
| 1 |
2016-09-25T07:59:43Z
|
[
"python",
"python-2.7"
] |
Searching a file for words from a list
| 39,684,483 |
<p>I am trying to search for words in a file. Those words are stored in a separate list.
The words that are found are stored in another list and that list is returned in the end.</p>
<p>The code looks like:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
for word in line.split():
matching = [s for s in qualities if word.lower() in s]
if matching is not None:
education.append(matching)
return education
</code></pre>
<p>First it returns me a list with bunch of empty "seats" which means my comparison isn't working?</p>
<p>The result (scans 4 files):</p>
<pre><code>"C:\Program Files (x86)\Python2\python.exe" C:/Users/Vadim/PycharmProjects/TestFiles/ReadTXT.py
[[], [], [], [], [], [], [], [], [], ['java', 'javascript']]
[[], [], [], [], [], [], [], [], [], ['pascal']]
[[], [], [], [], [], [], [], [], [], ['linux']]
[[], [], [], [], [], [], [], [], [], [], ['c#']]
Process finished with exit code 0
</code></pre>
<p>The input file contains:</p>
<pre><code>Name: Some Name
Phone: 1234567890
email: some@email.com
python,excel,linux
</code></pre>
<p>Second issue each file containes 3 different skills, but the function finds only 1 or 2. Is it also a bad comparison or do I have a different error here?</p>
<p>I would expect the result being a list of just the found skills without the empty places and to find all the skills in the file, not just some of them.</p>
<p><strong>Edit</strong>: The function does find all the skills when I do <code>word.split(', ')</code>
but if I would like it to be more universal, what could be a good way to find those skills if I don't know exactly what will separate them?</p>
| 1 |
2016-09-25T07:36:16Z
| 39,684,857 |
<p>You can also use regular expression like this:</p>
<pre><code>def scan_education(file_name):
education = []
qualities_list = ["python", "java", "sql", "mysql", "sqlite", "c\#", "c\+\+", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
qualities = re.compile(r'\b(?:%s)\b' % '|'.join(qualities_list))
for line in open(file_name, 'r'):
education += re.findall(qualities, line.lower())
return list(set(education))
</code></pre>
| 1 |
2016-09-25T08:29:39Z
|
[
"python",
"python-2.7"
] |
Searching a file for words from a list
| 39,684,483 |
<p>I am trying to search for words in a file. Those words are stored in a separate list.
The words that are found are stored in another list and that list is returned in the end.</p>
<p>The code looks like:</p>
<pre><code>def scanEducation(file):
education = []
qualities = ["python", "java", "sql", "mysql", "sqlite", "c#", "c++", "c", "javascript", "pascal",
"html", "css", "jquery", "linux", "windows"]
with open("C:\Users\Vadim\Desktop\Python\New_cvs\\" + file, 'r') as file1:
for line in file1:
for word in line.split():
matching = [s for s in qualities if word.lower() in s]
if matching is not None:
education.append(matching)
return education
</code></pre>
<p>First it returns me a list with bunch of empty "seats" which means my comparison isn't working?</p>
<p>The result (scans 4 files):</p>
<pre><code>"C:\Program Files (x86)\Python2\python.exe" C:/Users/Vadim/PycharmProjects/TestFiles/ReadTXT.py
[[], [], [], [], [], [], [], [], [], ['java', 'javascript']]
[[], [], [], [], [], [], [], [], [], ['pascal']]
[[], [], [], [], [], [], [], [], [], ['linux']]
[[], [], [], [], [], [], [], [], [], [], ['c#']]
Process finished with exit code 0
</code></pre>
<p>The input file contains:</p>
<pre><code>Name: Some Name
Phone: 1234567890
email: some@email.com
python,excel,linux
</code></pre>
<p>Second issue each file containes 3 different skills, but the function finds only 1 or 2. Is it also a bad comparison or do I have a different error here?</p>
<p>I would expect the result being a list of just the found skills without the empty places and to find all the skills in the file, not just some of them.</p>
<p><strong>Edit</strong>: The function does find all the skills when I do <code>word.split(', ')</code>
but if I would like it to be more universal, what could be a good way to find those skills if I don't know exactly what will separate them?</p>
| 1 |
2016-09-25T07:36:16Z
| 39,684,873 |
<p>Here's a short example of using sets and a little bit of list comprehension filtering to find the common words between a text file (or as I used just a text string) and a list that you provide. This is faster and imho clearer than trying to use a loop.</p>
<pre><code>import string
try:
with open('myfile.txt') as f:
text = f.read()
except:
text = "harry met sally; the boys went to the park. my friend is purple?"
my_words = set(("harry", "george", "phil", "green", "purple", "blue"))
text = ''.join(x for x in text if x in string.ascii_letters or x in string.whitespace)
text = set(text.split()) # split on any whitespace
common_words = my_words & text # my_words.intersection(text) also does the same
print common_words
</code></pre>
| 1 |
2016-09-25T08:32:27Z
|
[
"python",
"python-2.7"
] |
Convert the string 2.90K to 2900 or 5.2M to 5200000 in pandas dataframe
| 39,684,548 |
<p>Need some help on processing data inside a pandas dataframe.
Any help is most welcome.</p>
<p>I have OHCLV data in CSV format. I have loaded the file in to pandas dataframe.</p>
<p>How do I convert the volume column from 2.90K to 2900 or 5.2M to 5200000.
The column can contain both K in form of thousands and M in millions.</p>
<pre><code>import pandas as pd
file_path = '/home/fatjoe/UCHM.csv'
df = pd.read_csv(file_path, parse_dates=[0], index_col=0)
df.columns = [
"closing_price",
"opening_price",
"high_price",
"low_price",
"volume",
"change"]
df['opening_price'] = df['closing_price']
df['opening_price'] = df['opening_price'].shift(-1)
df = df.replace('-', 0)
df = df[:-1]
print(df.head())
Console:
Date
2016-09-23 0
2016-09-22 9.60K
2016-09-21 54.20K
2016-09-20 115.30K
2016-09-19 18.90K
2016-09-16 176.10K
2016-09-15 31.60K
2016-09-14 10.00K
2016-09-13 3.20K
</code></pre>
| 2 |
2016-09-25T07:47:20Z
| 39,684,629 |
<p>assuming you have the following DF:</p>
<pre><code>In [30]: df
Out[30]:
Date Val
0 2016-09-23 100
1 2016-09-22 9.60M
2 2016-09-21 54.20K
3 2016-09-20 115.30K
4 2016-09-19 18.90K
5 2016-09-16 176.10K
6 2016-09-15 31.60K
7 2016-09-14 10.00K
8 2016-09-13 3.20M
</code></pre>
<p>you can do it this way:</p>
<pre><code>In [31]: df.Val = (df.Val.replace(r'[KM]+$', '', regex=True).astype(float) * \
....: df.Val.str.extract(r'[\d\.]+([KM]+)', expand=False)
....: .fillna(1)
....: .replace(['K','M'], [10**3, 10**6]).astype(int))
In [32]: df
Out[32]:
Date Val
0 2016-09-23 100.0
1 2016-09-22 9600000.0
2 2016-09-21 54200.0
3 2016-09-20 115300.0
4 2016-09-19 18900.0
5 2016-09-16 176100.0
6 2016-09-15 31600.0
7 2016-09-14 10000.0
8 2016-09-13 3200000.0
</code></pre>
<p>Explanation:</p>
<pre><code>In [36]: df.Val.replace(r'[KM]+$', '', regex=True).astype(float)
Out[36]:
0 100.0
1 9.6
2 54.2
3 115.3
4 18.9
5 176.1
6 31.6
7 10.0
8 3.2
Name: Val, dtype: float64
In [37]: df.Val.str.extract(r'[\d\.]+([KM]+)', expand=False)
Out[37]:
0 NaN
1 M
2 K
3 K
4 K
5 K
6 K
7 K
8 M
Name: Val, dtype: object
In [38]: df.Val.str.extract(r'[\d\.]+([KM]+)', expand=False).fillna(1)
Out[38]:
0 1
1 M
2 K
3 K
4 K
5 K
6 K
7 K
8 M
Name: Val, dtype: object
In [39]: df.Val.str.extract(r'[\d\.]+([KM]+)', expand=False).fillna(1).replace(['K','M'], [10**3, 10**6]).astype(int)
Out[39]:
0 1
1 1000000
2 1000
3 1000
4 1000
5 1000
6 1000
7 1000
8 1000000
Name: Val, dtype: int32
</code></pre>
| 0 |
2016-09-25T07:59:01Z
|
[
"python",
"pandas",
"dataframe"
] |
Regex, filter url ending with and without digit
| 39,684,549 |
<p>I have below four url out of which I want to filter only two using regex expression. </p>
<p>/chassis/motherboard/cpu <---- this</p>
<p>/chassis/motherboard/cpu/core0</p>
<p>/chassis/motherboard/cpu0 <---- this</p>
<p>/chassis/motherboard/cpu1/core0</p>
<p>I have tried to play around with ^.*[0-9a-z_].cpu but unable to get the solution.</p>
| 1 |
2016-09-25T07:47:31Z
| 39,684,592 |
<p>You can use this regex:</p>
<pre><code>^.*/cpu[0-9]*$
</code></pre>
<p>This will match any text that ends with <code>/cpu</code> or <code>/cpu<number></code>.</p>
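<p>A minimal sketch of applying that pattern in Python to the four example URLs:</p>
<pre><code>import re

urls = ["/chassis/motherboard/cpu",
        "/chassis/motherboard/cpu/core0",
        "/chassis/motherboard/cpu0",
        "/chassis/motherboard/cpu1/core0"]

pattern = re.compile(r'^.*/cpu[0-9]*$')
matches = [u for u in urls if pattern.match(u)]
print(matches)   # ['/chassis/motherboard/cpu', '/chassis/motherboard/cpu0']
</code></pre>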
| 1 |
2016-09-25T07:52:52Z
|
[
"python",
"regex",
"url",
"filter",
"uri"
] |
run celery tasks in random times
| 39,684,554 |
<p>i need to run few celery tasks in random times - every run should be at a new random time - the random number should generated every run. <br>
what I did in the past is:<br></p>
<pre><code> "my_task": {
"task": "path.to.my_task",
"schedule": crontab(minute='*/%s' % rand),
},
rand = random(1,12)
</code></pre>
<p>but this code is not good for my needs anu more:<br>
1. I need different (as possible with random0 number for each tenant <br>
2. different number will generated every time and not only on settings.py load (once) <br></p>
<p>I tried to overwrite Schedule as explained in <a href="http://stackoverflow.com/questions/7172584/django-celery-how-to-set-task-to-run-at-specific-interval-programmatically">THIS</a> answer but it did not work, is there better way? am I missing something?</p>
<p>(for example in tenant A the task will run at 23 and the day after at 8, and in tenant B the task will run at 4 and the day after at 20 etc)</p>
<p>Thanks! </p>
<p>======== update ====</p>
<p>after the great answer I got, I added option to my task and process it in apply_asynch method as suggested- <br>.</p>
<pre><code>"my-task": { # deprecated task
"task": "mdm_sync.tasks.test_task",
# "schedule": new_sc(),
"schedule": crontab(minute=39, hour=11),
"options": {
"eta": datetime.utcnow()
}
},
</code></pre>
<p><br></p>
<pre><code>entry.options["eta"] = datetime.datetime.utcnow() + datetime.timedelta(seconds=random(3600,12*3600)
</code></pre>
<p>works great!</p>
| 0 |
2016-09-25T07:47:48Z
| 39,685,496 |
<p>I faced a similar problem in which I had to generate <strong>Facebook-like notifications</strong> between <strong>random users</strong> at <strong>random intervals of time</strong>.</p>
<p>Initially I was also doing the same as you, using the <code>random</code> function to give the <code>minute</code> value to <code>crontab</code>. But, as <code>settings.py</code> and <code>celery.py</code> are loaded only once when you hit <code>python manage.py runserver</code>, that random function runs only once, so the random value is selected only once, say <strong>5 minutes</strong> or <strong>7 minutes</strong>, and is then reused to repeat the task every 5 or 7 minutes, which makes the tasks <strong>repeat periodically</strong>.</p>
<p>So, instead of defining the timing of the task in <code>settings.py</code> or <code>celery.py</code>, you need to <strong>recursively</strong> call the function/method in your <code>tasks.py</code>. The key here is that you need to call the same function <strong>recursively and asynchronously</strong>, and while calling it asynchronously, you need to pass a <strong>delay</strong> parameter whose value is calculated with the <strong>random function</strong>.</p>
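<p>A minimal sketch of that pattern (the task name, the work it does, and the interval bounds are placeholders; it assumes a configured Celery app and uses <code>countdown</code> to add the random delay):</p>
<pre><code>import random
from celery import shared_task

@shared_task
def my_random_task():
    # ... do the actual work here ...

    # re-schedule this same task to run again after a random delay
    # (here: somewhere between 1 and 12 hours)
    delay = random.randint(3600, 12 * 3600)
    my_random_task.apply_async(countdown=delay)
</code></pre>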
<p>See my <code>tasks.py</code> -> <a href="https://github.com/ankushrgv/notification/blob/master/apps/notifications/tasks.py" rel="nofollow">https://github.com/ankushrgv/notification/blob/master/apps/notifications/tasks.py</a></p>
<p>And</p>
<p><code>celery.py</code> -> <a href="https://github.com/ankushrgv/notification/blob/master/config/celery.py" rel="nofollow">https://github.com/ankushrgv/notification/blob/master/config/celery.py</a></p>
<p>You'll have to <code>python manage.py shell</code> and then call the function / method from there only once to start the recursion.</p>
<p>something like</p>
<pre><code>`myFunction().apply_async()`
</code></pre>
<p>In my case it used to be</p>
<pre><code>`CreateNotifications().apply_async()`
</code></pre>
| 1 |
2016-09-25T09:49:10Z
|
[
"python",
"django",
"celery",
"celerybeat"
] |
Retrieve the last migration for a custom migration, to load initial data
| 39,684,568 |
<p>I created a custom migration "0000_initial_data.py" and I want that to get applied after all the other migrations are done. Though when I try to use ____latest____ in the dependencies I get "dependencies reference nonexistent parent node" error I feel it is trying to find ____latest____ named migration in the folder but it is unable to find. I got an idea of finding the latest migration in myapp/migrations/ using "ls -Art | tail -n 1" which gave me 0001_initial.pyc [pyc] file rather than the latest migration .py file. Though even if I get the name of the latest migration I have to replace in the custom migration file using a shell script like </p>
<pre><code>$ replace '__latest__' 'output of ls -Art' -- 0000_initial_data.py
</code></pre>
<p>as am automating my deployment. I would like to know the best way to get the latest migration from all myapps in the project and plug my custom migration after it.</p>
<p>Note: using django==1.8.13, ubuntu 14.04, python 2.7</p>
| 0 |
2016-09-25T07:50:06Z
| 39,684,884 |
<p>Generally, custom migrations are used for changing existing data. If you want to create new data, I recommend putting your code inside a management command and running that after all migrations.</p>
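<p>A minimal sketch of such a management command (the app name, file path, model, and data below are assumptions for illustration):</p>
<pre><code># myapp/management/commands/load_initial_data.py  (hypothetical path)
from django.core.management.base import BaseCommand
from myapp.models import MyModel   # hypothetical model

class Command(BaseCommand):
    help = "Load initial data after all migrations have run"

    def handle(self, *args, **options):
        # get_or_create keeps the command safe to run repeatedly
        MyModel.objects.get_or_create(name="default entry")
        self.stdout.write("Initial data loaded")
</code></pre>
<p>You would then run it in your deployment script with <code>python manage.py migrate</code> followed by <code>python manage.py load_initial_data</code>.</p>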
| 1 |
2016-09-25T08:33:56Z
|
[
"python",
"django",
"django-migrations"
] |
Selenium TypeError: __init__() takes 2 positional arguments but 3 were given
| 39,684,653 |
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
driver = webdriver.Firefox()
driver.get("http://somelink.com/")
WebDriverWait(driver, 10).until(expected_conditions.invisibility_of_element_located(By.XPATH, "//input[@id='message']"))
# Gives me an error:
TypeError: __init__() takes 2 positional arguments but 3 were given
</code></pre>
<p>...</p>
<pre><code># Simply:
expected_conditions.invisibility_of_element_located(By.XPATH, "//input[@id='message']"))
# Gives me the same error.
TypeError: __init__() takes 2 positional arguments but 3 were given
</code></pre>
<p>The error repeats itself whether I use By.XPATH, By.ID or anything else.</p>
<p>Also, find_element works just fine:</p>
<pre><code>el = driver.find_element(By.XPATH, "//input[@id='message']")
print(el) # returns:
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="03cfc338-f668-4fcd-b312-8e4a1cfd9f24", element="c7f76445-08b3-4a4c-9d04-90263a1ef80e")>]
</code></pre>
<p>Suggestions appreciated.</p>
<p><strong>Edit:</strong></p>
<p>Extra parentheses () around <code>By.XPATH, "//input[@id='message']"</code> as suggested in the comments solved the problem.</p>
| 0 |
2016-09-25T08:01:45Z
| 39,685,597 |
<p>Change this</p>
<pre><code>WebDriverWait(driver, 10).until(expected_conditions.invisibility_of_element_located((By.XPATH, "//input[@id='message']")))
</code></pre>
<p>I have added an extra pair of parentheses (the locator must be passed as a single tuple); hope this works.</p>
| 0 |
2016-09-25T10:01:30Z
|
[
"python",
"selenium",
"webdriver"
] |
Create Python dictionary from MS-Access table
| 39,684,757 |
<p>I have an MS-Access table that has 6 columns. I want to extract the first column and use it as the key and then extract the 2nd and 3rd columns and use them as values in a Python dictionary. There are multiple values for one key.</p>
<p>This is what I have so far but I can't figure out what to do next:</p>
<pre><code>import numpy
import pyodbc
access_database_file = r"C:\Users\david\Documents\\LISTS.mdb"
ODBC_CONN_STR = r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=%s;" % access_database_file
conn = pyodbc.connect(ODBC_CONN_STR)
cursor = conn.cursor()
cursor.execute("select * from LISTS")
print "..processing..."
rows = cursor.fetchall()
fieldDomains = {}
for row in rows:
k = row[0]
v1 = row[1]
v2 = row[2]
fieldDomains = {k: {v1: v2}}
print fieldDomains
</code></pre>
<p>When I print fieldDomains I get this:</p>
<pre><code>{u'MAIN_VW': {u'PRESSURE_ZONE_NUM': u'LU_PRESSURE_ZONE_VW'}}
{u'MAIN_VW': {u'DIAMETER': u'LU_MAIN_DIAMR_LK_MV'}}
{u'MAIN_VW': {u'MATERIAL': u'LU_MAIN_MATRL_LK_MV'}}
{u'WATER_VW': {u'SUBTYPE': u'LU_WATER_SUBTYP_LK_MV'}}
{u'WATER_VW': {u'IS_RESTRAINED': u'LU_YES_NO_LK'}}
{u'PIPE_VW': {u'IS_TIE_IN': u'LU_YES_NO_LK'}}
{u'PIPE_VW': {u'ORIGIN': u'LU_PIPE_ORIGN_LK_MV'}}
</code></pre>
<p>I need to merge these separate dictionaries into one large dictionary, but I'm not sure how to do this in my current script. This is what I want my output to be:</p>
<pre><code>{u'MAIN_VW': {u'PRESSURE_ZONE_NUM': u'LU_PRESSURE_ZONE_VW', u'DIAMETER': u'LU_MAIN_DIAMR_LK_MV', u'MATERIAL': u'LU_MAIN_MATRL_LK_MV'}, u'WATER_VW': {u'SUBTYPE': u'LU_WATER_SUBTYP_LK_MV', u'IS_RESTRAINED': u'LU_YES_NO_LK'}, u'PIPE_VW': {u'IS_TIE_IN': u'LU_YES_NO_LK', u'ORIGIN': u'LU_PIPE_ORIGN_LK_MV'}}
</code></pre>
| 0 |
2016-09-25T08:15:00Z
| 39,684,800 |
<p>You can use <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a> to do that</p>
<pre><code>>>> from collections import defaultdict
>>> data = [{u'MAIN_VW': {u'PRESSURE_ZONE_NUM': u'LU_PRESSURE_ZONE_VW'}},
{u'MAIN_VW': {u'DIAMETER': u'LU_MAIN_DIAMR_LK_MV'}},
{u'MAIN_VW': {u'MATERIAL': u'LU_MAIN_MATRL_LK_MV'}},
{u'WATER_VW': {u'SUBTYPE': u'LU_WATER_SUBTYP_LK_MV'}},
{u'WATER_VW': {u'IS_RESTRAINED': u'LU_YES_NO_LK'}},
{u'PIPE_VW': {u'IS_TIE_IN': u'LU_YES_NO_LK'}},
{u'PIPE_VW': {u'ORIGIN': u'LU_PIPE_ORIGN_LK_MV'}}]
>>> output = defaultdict(dict)
>>> for item in data:
... for k, v in item.items():
... output[k].update(v)
>>> dict(output)
{'MAIN_VW': {'DIAMETER': 'LU_MAIN_DIAMR_LK_MV',
'MATERIAL': 'LU_MAIN_MATRL_LK_MV',
'PRESSURE_ZONE_NUM': 'LU_PRESSURE_ZONE_VW'},
'PIPE_VW': {'IS_TIE_IN': 'LU_YES_NO_LK', 'ORIGIN': 'LU_PIPE_ORIGN_LK_MV'},
'WATER_VW': {'IS_RESTRAINED': 'LU_YES_NO_LK',
'SUBTYPE': 'LU_WATER_SUBTYP_LK_MV'}}
</code></pre>
<p><strong>UPDATE</strong> </p>
<p>Since you are getting the data in another format, like <code>data2</code> below, it's better to build the nested dictionary directly from the rows:</p>
<pre><code>>>> data2 = [[u'MAIN_VW', u'PRESSURE_ZONE_NUM', u'LU_PRESSURE_ZONE_VW'],
[u'MAIN_VW', u'DIAMETER', u'LU_MAIN_DIAMR_LK_MV'],
[u'MAIN_VW', u'MATERIAL', u'LU_MAIN_MATRL_LK_MV'],
[u'WATER_VW', u'SUBTYPE', u'LU_WATER_SUBTYP_LK_MV'],
[u'WATER_VW', u'IS_RESTRAINED', u'LU_YES_NO_LK'],
[u'PIPE_VW', u'IS_TIE_IN', u'LU_YES_NO_LK'],
[u'PIPE_VW', u'ORIGIN', u'LU_PIPE_ORIGN_LK_MV']]
>>> output2 = defaultdict(dict)
>>> for row in data2:
... output2[row[0]].update({row[1]: row[2]})
>>> dict(output2)
{'MAIN_VW': {'DIAMETER': 'LU_MAIN_DIAMR_LK_MV',
'MATERIAL': 'LU_MAIN_MATRL_LK_MV',
'PRESSURE_ZONE_NUM': 'LU_PRESSURE_ZONE_VW'},
'PIPE_VW': {'IS_TIE_IN': 'LU_YES_NO_LK', 'ORIGIN': 'LU_PIPE_ORIGN_LK_MV'},
'WATER_VW': {'IS_RESTRAINED': 'LU_YES_NO_LK',
'SUBTYPE': 'LU_WATER_SUBTYP_LK_MV'}}
</code></pre>
<p>So basically <code>data2</code> is your <code>rows = cursor.fetchall()</code>, and you can replace <code>data2</code> with the <code>rows</code> variable.</p>
| 1 |
2016-09-25T08:20:50Z
|
[
"python",
"dictionary"
] |
App Engine frontend waiting for backend to finish and return data - what's the right way to do it?
| 39,684,847 |
<p>I'm using a frontend built in angularjs and a backend built in python and webapp2 in app engine.</p>
<p>The backend makes calls to a third party API, fetches data and returns to the frontend.</p>
<p>The API request from the backend may take up to 30s or more. The problem is the frontend can't really progress any further until it gets the data.</p>
<p>I tried running 3 simultaneous requests to the backend using different tabs and 2 of them failed. I'm afraid that this seems to suggest that the app only allows one user at a time.</p>
<p>What's the best way to handle this? One thought I have is:</p>
<ul>
<li>Use <a href="https://cloud.google.com/appengine/docs/java/taskqueue/?csw=1" rel="nofollow">task queues</a> to run the API call to 3rd party in the background</li>
<li>Create a new handler which reads from the queue for the last task sent and let the frontend poll this one at regular intervals</li>
<li>Update the frontend once data is available</li>
</ul>
<p>Is that the right way? I'm sure this is a problem solved in a frontend+backend kind of world, but I just don't know what to search for.</p>
<p>Thanks!</p>
| 0 |
2016-09-25T08:28:18Z
| 39,687,501 |
<p>Requests from the frontend are capped at 30 seconds; after that they time out on the server side. That is part of GAE's design. Requests originating from the task queue get 10 minutes, so your idea is viable. However, you'll want some identifier to use for polling, rather than just relying on "the last task sent", to distinguish between concurrent tasks.</p>
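<p>A minimal sketch of that pattern follows; the handler names, the <code>ndb</code> model and <code>call_third_party_api()</code> are assumptions for illustration, not code from your project:</p>
<pre><code>import json
import webapp2
from google.appengine.api import taskqueue
from google.appengine.ext import ndb


class ApiResult(ndb.Model):
    payload = ndb.TextProperty()
    done = ndb.BooleanProperty(default=False)


class StartHandler(webapp2.RequestHandler):
    def post(self):
        # create a placeholder record and enqueue the slow work
        key = ApiResult().put()
        taskqueue.add(url='/worker', params={'key': key.urlsafe()})
        self.response.write(json.dumps({'job_id': key.urlsafe()}))


class WorkerHandler(webapp2.RequestHandler):  # mapped to /worker, runs off the task queue
    def post(self):
        result = ndb.Key(urlsafe=self.request.get('key')).get()
        result.payload = call_third_party_api()  # hypothetical slow call, up to 10 minutes here
        result.done = True
        result.put()


class PollHandler(webapp2.RequestHandler):  # the frontend polls this until done is True
    def get(self):
        result = ndb.Key(urlsafe=self.request.get('job_id')).get()
        self.response.write(json.dumps({'done': result.done, 'payload': result.payload}))
</code></pre>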
| 1 |
2016-09-25T13:45:27Z
|
[
"python",
"angularjs",
"google-app-engine"
] |
How to make word boundary \b not match on dashes
| 39,684,942 |
<p>I simplified my code to the specific problem I am having.</p>
<pre><code>import re
pattern = re.compile(r'\bword\b')
result = pattern.sub(lambda x: "match", "-word- word")
</code></pre>
<p>I am getting</p>
<pre><code>'-match- match'
</code></pre>
<p>but I want </p>
<pre><code>'-word- match'
</code></pre>
<p>edit: </p>
<p>Or for the string <code>"word -word-"</code></p>
<p>I want</p>
<pre><code>"match -word-"
</code></pre>
| 6 |
2016-09-25T08:42:29Z
| 39,685,053 |
<p>What you need is a negative lookbehind.</p>
<pre><code>pattern = re.compile(r'(?<!-)\bword\b')
result = pattern.sub(lambda x: "match", "-word- word")
</code></pre>
<p>To cite the <a href="https://docs.python.org/3/library/re.html#regular-expression-syntax" rel="nofollow">documentation</a>:</p>
<blockquote>
<p><code>(?<!...)</code>
Matches if the current position in the string is not preceded by a match for ....</p>
</blockquote>
<p>So this will only match if the word break <code>\b</code> is not preceded by a minus sign <code>-</code>.</p>
<p>If you need this for the end of the string you'll have to use a negative lookahead which will look like this: <code>(?!-)</code>. The complete regular expression will then result in: <code>(?<!-)\bword(?!-)\b</code> </p>
| 5 |
2016-09-25T08:54:28Z
|
[
"python",
"regex"
] |
How to make word boundary \b not match on dashes
| 39,684,942 |
<p>I simplified my code to the specific problem I am having.</p>
<pre><code>import re
pattern = re.compile(r'\bword\b')
result = pattern.sub(lambda x: "match", "-word- word")
</code></pre>
<p>I am getting</p>
<pre><code>'-match- match'
</code></pre>
<p>but I want </p>
<pre><code>'-word- match'
</code></pre>
<p>edit: </p>
<p>Or for the string <code>"word -word-"</code></p>
<p>I want</p>
<pre><code>"match -word-"
</code></pre>
| 6 |
2016-09-25T08:42:29Z
| 39,685,126 |
<p><code>\b</code> denotes a word boundary, i.e. a position between a character in <code>[a-zA-Z0-9_]</code> and any other character, which includes spaces as well as dashes. Surround <code>word</code> with negative lookarounds to ensure there is no non-space character before or after it:</p>
<pre><code>re.compile(r'(?<!\S)word(?!\S)')
</code></pre>
| 1 |
2016-09-25T09:02:50Z
|
[
"python",
"regex"
] |
How to make word boundary \b not match on dashes
| 39,684,942 |
<p>I simplified my code to the specific problem I am having.</p>
<pre><code>import re
pattern = re.compile(r'\bword\b')
result = pattern.sub(lambda x: "match", "-word- word")
</code></pre>
<p>I am getting</p>
<pre><code>'-match- match'
</code></pre>
<p>but I want </p>
<pre><code>'-word- match'
</code></pre>
<p>edit: </p>
<p>Or for the string <code>"word -word-"</code></p>
<p>I want</p>
<pre><code>"match -word-"
</code></pre>
| 6 |
2016-09-25T08:42:29Z
| 39,685,218 |
<p>Instead of word boundaries, you could also match the character before and after the word with a <code>(\s|^)</code> and <code>(\s|$)</code> pattern. </p>
<p><strong>Breakdown</strong>: <code>\s</code> matches every whitespace character, which seems to be what you are trying to achieve, as you are excluding the dashes. The <code>^</code> and <code>$</code> ensure that if the word is either the first or last in the string(ie. no character before or after) those are matched too.</p>
<p>Your code would become something like this:</p>
<pre><code>pattern = re.compile(r'(\s|^)(word)(\s|$)')
result = pattern.sub(r"\1match\3", "-word- word")
</code></pre>
<p>Because this solution uses character classes such as <code>\s</code>, it means that those could be easily replaced or extended. For example if you wanted your words to be delimited by spaces or commas, your pattern would become something like this: <code>r'(,|\s|^)(word)(,|\s|$)'</code>.</p>
| 0 |
2016-09-25T09:13:32Z
|
[
"python",
"regex"
] |
scipy.sparse matrix: subtract row mean to nonzero elements
| 39,685,168 |
<p>I have a sparse matrix in <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow">csr_matrix</a> format. For each row I need to subtract the row mean from the nonzero elements. The mean must be computed over the number of nonzero elements of the row (instead of the length of the row).
I found a fast way to compute the row means with the following code:</p>
<pre><code># M is a csr_matrix
sums = np.squeeze(np.asarray(M.sum(1))) # sum of the nonzero elements, for each row
counts = np.diff(M.tocsr().indptr) # count of the nonzero elements, for each row
# for the i-th row the mean is just sums[i] / float(counts[i])
</code></pre>
<p>The problem is the updates part. I need a fast way to do this.
Actually what I am doing is transforming M to a <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html" rel="nofollow">lil_matrix</a> and performing the updates in this way:</p>
<pre><code>M = M.tolil()
for i in xrange(len(sums)):
for j in M.getrow(i).nonzero()[1]:
M[i, j] -= sums[i] / float(counts[i])
</code></pre>
<p>which is slow. Any suggestion for a faster solution?</p>
| 2 |
2016-09-25T09:07:35Z
| 39,685,793 |
<p>This one is tricky. I think I have it. The basic idea is that we try to get a diagonal matrix with the means on the diagonal, and a matrix that is like M, but has ones at the nonzero data locations in M. Then we multiply those and subtract the product from M. Here goes... </p>
<pre><code>>>> import numpy as np
>>> import scipy.sparse as sp
>>> a = sp.csr_matrix([[1., 0., 2.], [1.,2.,3.]])
>>> a.todense()
matrix([[ 1., 0., 2.],
[ 1., 2., 3.]])
>>> tot = np.array(a.sum(axis=1).squeeze())[0]
>>> tot
array([ 3., 6.])
>>> cts = np.diff(a.indptr)
>>> cts
array([2, 3], dtype=int32)
>>> mu = tot/cts
>>> mu
array([ 1.5, 2. ])
>>> d = sp.diags(mu, 0)
>>> d.todense()
matrix([[ 1.5, 0. ],
[ 0. , 2. ]])
>>> b = a.copy()
>>> b.data = np.ones_like(b.data)
>>> b.todense()
matrix([[ 1., 0., 1.],
[ 1., 1., 1.]])
>>> (d * b).todense()
matrix([[ 1.5, 0. , 1.5],
[ 2. , 2. , 2. ]])
>>> (a - d*b).todense()
matrix([[-0.5, 0. , 0.5],
[-1. , 0. , 1. ]])
</code></pre>
<p>Good Luck! Hope that helps.</p>
| 2 |
2016-09-25T10:23:54Z
|
[
"python",
"scipy",
"updates",
"sparse-matrix",
"mean"
] |
scipy.sparse matrix: subtract row mean to nonzero elements
| 39,685,168 |
<p>I have a sparse matrix in <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow">csr_matrix</a> format. For each row I need to subtract the row mean from the nonzero elements. The mean must be computed over the number of nonzero elements of the row (instead of the length of the row).
I found a fast way to compute the row means with the following code:</p>
<pre><code># M is a csr_matrix
sums = np.squeeze(np.asarray(M.sum(1))) # sum of the nonzero elements, for each row
counts = np.diff(M.tocsr().indptr) # count of the nonzero elements, for each row
# for the i-th row the mean is just sums[i] / float(counts[i])
</code></pre>
<p>The problem is the updates part. I need a fast way to do this.
Actually what I am doing is transforming M to a <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html" rel="nofollow">lil_matrix</a> and performing the updates in this way:</p>
<pre><code>M = M.tolil()
for i in xrange(len(sums)):
for j in M.getrow(i).nonzero()[1]:
M[i, j] -= sums[i] / float(counts[i])
</code></pre>
<p>which is slow. Any suggestion for a faster solution?</p>
| 2 |
2016-09-25T09:07:35Z
| 39,688,751 |
<p>Starting with <code>@Dthal's</code> sample:</p>
<pre><code>In [92]: a = sparse.csr_matrix([[1.,0,2],[1,2,3]])
In [93]: a.A
Out[93]:
array([[ 1., 0., 2.],
[ 1., 2., 3.]])
In [94]: sums=np.squeeze(a.sum(1).A)
# sums=a.sum(1).A1 # shortcut
In [95]: counts=np.diff(a.tocsr().indptr)
In [96]: means=sums/counts
In [97]: sums
Out[97]: array([ 3., 6.])
In [98]: counts
Out[98]: array([2, 3], dtype=int32)
In [99]: means
Out[99]: array([ 1.5, 2. ])
</code></pre>
<p><code>repeat</code> lets us replicate the <code>means</code>, producing an array that matches the matrix <code>data</code> in size.</p>
<pre><code>In [100]: mc = np.repeat(means, counts)
In [101]: mc
Out[101]: array([ 1.5, 1.5, 2. , 2. , 2. ])
</code></pre>
<p>This <code>mc</code> is the same as <code>@Dthal's</code> <code>(b*d).data</code>.</p>
<p>Now just subtract it from <code>data</code>.</p>
<pre><code>In [102]: a.data -= mc
In [103]: a.A
Out[103]:
array([[-0.5, 0. , 0.5],
[-1. , 0. , 1. ]])
</code></pre>
| 2 |
2016-09-25T15:44:57Z
|
[
"python",
"scipy",
"updates",
"sparse-matrix",
"mean"
] |
Scapy send probe request and receive probe response
| 39,685,316 |
<p>I am trying to send an <strong>802.11</strong> probe request and receive the probe response, but the results are not good.</p>
<p>Here is the frame-sending part; I use <code>Scapy</code> in Python:</p>
<pre><code> class Scapy80211():
def __init__(self,intf='wlan0',ssid='test',\
source='00:00:de:ad:be:ef',\
bssid='00:11:22:33:44:55',srcip='10.10.10.10'):
self.rates = "\x03\x12\x96\x18\x24\x30\x48\x60"
self.ssid = ssid
self.source = source
self.srcip = srcip
self.bssid = bssid
self.intf = intf
self.intfmon = intf + 'mon'
def ProbeReq(self,count=10,ssid='',dst='ff:ff:ff:ff:ff:ff', fc=0):
if not ssid: ssid=self.ssid
param = Dot11ProbeReq()
essid = Dot11Elt(ID='SSID',info=ssid)
rates = Dot11Elt(ID='Rates',info=self.rates)
dsset = Dot11Elt(ID='DSset',info='\x01')
pkt = RadioTap()\
/Dot11(type=0,subtype=4,FCfield=fc,addr1=dst,addr2=self.source,addr3=self.bssid)\
/param/essid/rates/dsset
print '[*] 802.11 Probe Request: SSID=[%s], count=%d' % (ssid,count)
try:
sendp(pkt,count=count,inter=0.1,verbose=1)
except:
raise
ssid = 'aa' #This is the AP I want to interact with
sdot11 = Scapy80211(intf='mon0')
sdot11.ProbeReq(ssid=ssid)
sniff(count=10, timeout=5, prn=PacketHandler, filter="type mgt subtype probe-resp")
</code></pre>
<p>I ran the code 20 times and only once did I get the result.</p>
<p>Besides, the result is also a little strange: when I do receive the response, I often receive a lot of them at once.</p>
<p>So, can anyone help me? How do you usually handle the sending and receiving?</p>
<hr>
<p>I have changed my code to use <code>srp()</code>. I removed the sniff() statement and replaced the sendp() with srp(). Here is my result; I am quite confused by it. </p>
<pre><code>[*] 802.11 Probe Request: SSID=[aa], count=10
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Received 0 packets, got 0 answers, remaining 1 packets
[*] 802.11 Probe Request: SSID=[aa], count=10
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Begin emission:
Finished to send 1 packets.
Received 12 packets, got 0 answers, remaining 1 packets
</code></pre>
<p>I want to receive the probe response frame from <strong>aa</strong>, the AP I sent the probe request to. </p>
<p>So the result is no answer? I am not sure whether this is because I did not fill in the right parameters, such as SSID, source and bssid. And should I change the destination from "ff:ff:ff:ff:ff:ff" to the MAC address of <strong>aa</strong>?</p>
| 1 |
2016-09-25T09:27:19Z
| 39,696,177 |
<p>Unless I'm wrong, you are sending your probes first and only then sniffing for the responses. If an answer arrives, it likely arrives in the meantime, before the sniff starts, so it is missed.</p>
<p>You should probably use the <code>srp()</code> function, which sends the frames and matches the answers in one step.</p>
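<p>A minimal sketch of how <code>srp()</code> could replace the separate <code>sendp()</code>/<code>sniff()</code> calls; the interface name and the <code>pkt</code> variable (the frame built in the question) are assumptions here:</p>
<pre><code>from scapy.all import srp

# pkt is the RadioTap()/Dot11()/Dot11ProbeReq()/... frame built as in the question
ans, unans = srp(pkt, iface='mon0', timeout=5, retry=2, verbose=1)
for sent, received in ans:
    received.show()   # inspect any probe response that scapy matched to the request
</code></pre>
<p>Whether a given probe response actually gets matched as an "answer" depends on scapy's Dot11 answer-matching logic, so treat this only as a starting point.</p>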
| 0 |
2016-09-26T06:39:55Z
|
[
"python",
"wireless",
"scapy",
"802.11"
] |
Using Sphinx to document multiple Python projects
| 39,685,369 |
<p>I'm getting started using Sphinx. I've read a few tutorials but I'm still a bit hung up on how I set up multiple projects.</p>
<p>What I mean is, after installing sphinx, <a href="https://pythonhosted.org/an_example_pypi_project/sphinx.html" rel="nofollow">this guide</a> says</p>
<blockquote>
<p>To get started, cd into the documentation directory and type: <code>$ sphinx-quickstart</code></p>
</blockquote>
<p>Let's say I have 5 separate Python projects, each in its own directory (all individual git repos, etc).</p>
<p><strong>My question is, what exactly is <em>"the documentation directory"</em> (aside from the obvious) and how do I set up Sphinx when working with multiple projects?</strong></p>
<p>Do I make one "master documentation directory" somewhere and as I use Sphinx, do I create sub directories for each project or similar?</p>
<p>Or do I create a "documentation directory" inside of each of my projects and run <code>$ sphinx-quickstart</code> to set up Sphinx for each individual project?</p>
<p>I'm trying to understand the big picture here but can't find a tutorial that spells out this aspect of things.</p>
| 0 |
2016-09-25T09:32:40Z
| 39,685,586 |
<p>The <code>sphinx-quickstart</code> command generates a documentation skeleton for <em>a single project</em>, so if you have multiple separate projects you will have to run it in each one of them. The link you posted uses the phrase <em>"documentation directory"</em> because the directory name and relative position in the project directory is up to you (they appear to be putting it in <code>project_root/doc</code>), not because there should be some centralised directory of documentation for all of your projects. </p>
| 2 |
2016-09-25T10:00:01Z
|
[
"python",
"documentation",
"python-sphinx",
"autodoc"
] |
Django 1.10 - how to load initial users
| 39,685,512 |
<p>What is the right way to load initial users in Django 1.10?</p>
<p>When we talk about our own django app then it is recommended to have a /fixtures/initial_data.json (as mentioned <a href="http://stackoverflow.com/questions/25960850/loading-initial-data-with-django-1-7-and-data-migrations">here</a>).</p>
<p>But in the case of User, where django.contrib.auth is not our app, where should we put the fixtures directory and how do we load it?</p>
<p>Thank you,</p>
<p>Rami</p>
| 0 |
2016-09-25T09:50:31Z
| 39,685,751 |
<p>You can create a fixture from the default user model with the command below:</p>
<pre><code>python manage.py dumpdata auth.user -o FIXTURE_ADDRESS
</code></pre>
<p>and load the initial data with the command below:</p>
<pre><code>python manage.py loaddata FIXTURENAME
</code></pre>
<p>where <code>FIXTURENAME</code> is the name of your fixture, located either in one of the <code>FIXTURE_DIRS</code> or in an app's default fixture directory, which is named <strong>fixtures</strong>.</p>
| 1 |
2016-09-25T10:19:08Z
|
[
"python",
"django",
"django-models",
"django-migrations"
] |
Trying to use pandas pivot like excel pivot
| 39,685,518 |
<p>I have a pandas data frame like this that I want to pivot using pd.pivot_table</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"Id":[1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 10],
"Error":[0, 99, 0, 0, 0, 98, 0, 0, 0, 0, 33, 0, 23, 0, 0, 0, 83, 0]})
</code></pre>
<p>I'm trying to pivot it like this (pivot made in Excel):</p>
<p><a href="http://i.stack.imgur.com/urqnc.png" rel="nofollow"><img src="http://i.stack.imgur.com/urqnc.png" alt="enter image description here"></a></p>
<p>I tried this:</p>
<pre><code>dfPivot = pd.pivot_table(df, index = "Id", columns = df.Error.unique(), values = "Error", aggfunc="count")
</code></pre>
<p>I got the following error.</p>
<pre><code>AssertionError: Grouper and axis must be same length
</code></pre>
<p>Thank you in advance.</p>
| 0 |
2016-09-25T09:50:58Z
| 39,685,532 |
<p>IIUC you can do it this way:</p>
<pre><code>In [7]: df.pivot_table(index='Id', columns='Error', aggfunc='size', fill_value=0)
Out[7]:
Error 0 23 33 83 98 99
Id
1 1 0 0 0 0 1
2 2 0 0 0 0 0
3 1 0 0 0 1 0
4 2 0 0 0 0 0
5 2 0 0 0 0 0
6 1 0 1 0 0 0
7 1 1 0 0 0 0
8 2 0 0 0 0 0
9 0 0 0 1 0 0
10 1 0 0 0 0 0
In [8]: df.pivot_table(index='Id', columns='Error', aggfunc='size', fill_value='')
Out[8]:
Error 0 23 33 83 98 99
Id
1 1 1
2 2
3 1 1
4 2
5 2
6 1 1
7 1 1
8 2
9 1
10 1
</code></pre>
<p>If you want to have <code>Grand Total</code> - you can use <code>margins=True</code> parameter, but it'll be bit tricky:</p>
<pre><code>In [42]: df.pivot_table(index='Id', columns='Error', aggfunc='size', fill_value=0, margins=True)
...skipped...
TypeError: 'str' object is not callable
</code></pre>
<p>but this hacky variant works:</p>
<pre><code>In [43]: (df.assign(x=0)
....: .pivot_table(index='Id', columns='Error', aggfunc='count',
....: fill_value=0, margins=True, margins_name='Grand Total')
....: .astype(int)
....: )
Out[43]:
x
Error 0 23 33 83 98 99 Grand Total
Id
1 1 0 0 0 0 1 2
2 2 0 0 0 0 0 2
3 1 0 0 0 1 0 2
4 2 0 0 0 0 0 2
5 2 0 0 0 0 0 2
6 1 0 1 0 0 0 2
7 1 1 0 0 0 0 2
8 2 0 0 0 0 0 2
9 0 0 0 1 0 0 1
10 1 0 0 0 0 0 1
Grand Total 13 1 1 1 1 1 18
</code></pre>
| 2 |
2016-09-25T09:52:59Z
|
[
"python",
"excel",
"pandas",
"dataframe",
"pivot"
] |
GraphAPIError: An active access token must be used to query information about the current user
| 39,685,650 |
<p>On Facebook GraphAPI, with the Python SDK, I am trying to send a notification to a user. I receive the error:</p>
<pre><code>GraphAPIError: An active access token must be used to query information about the current user
</code></pre>
<p>My Python code is:</p>
<pre><code>graph = facebook.GraphAPI(access_token=TOKENS["app_token"])
graph.put_object(parent_object='me', connection_name='notifications', template='Tell us how you like the app!', href='https://www.google.com')
</code></pre>
| 0 |
2016-09-25T10:07:10Z
| 39,688,017 |
<p>Solution:</p>
<pre><code>graph = facebook.GraphAPI(access_token=TOKENS["user_token"])
user_info= graph.get_object(id='me', fields='id')
graph = facebook.GraphAPI(access_token=TOKENS["app_token"])
graph.put_object(parent_object= user_info['id'], connection_name='notifications', template='Tell us how you like the app!', href='https://www.google.com')
</code></pre>
| 0 |
2016-09-25T14:33:45Z
|
[
"python",
"facebook",
"facebook-graph-api"
] |
Do something to every row on a pandas dataframe
| 39,685,737 |
<p>I'm trying to do something to a pandas dataframe as follows:</p>
<p>If say row 2 has a 'nan' value in the 'start' column, then I can replace all row entries with '999999'</p>
<pre><code>if pd.isnull(dfSleep.ix[2,'start']):
dfSleep.ix[2,:] = 999999
</code></pre>
<p>The above code works, but I want to do it for every row. I've tried replacing the '2' with a ':' but that does not work</p>
<pre><code>if pd.isnull(dfSleep.ix[:,'start']):
dfSleep.ix[:,:] = 999999
</code></pre>
<p>and I've tried something like this</p>
<pre><code>for row in df.iterrows():
if pd.isnull(dfSleep.ix[row,'start']):
dfSleep.ix[row,:] = 999999
</code></pre>
<p>but again no luck, any ideas?</p>
| 0 |
2016-09-25T10:17:22Z
| 39,685,789 |
<p>UPDATE:</p>
<pre><code>In [63]: df
Out[63]:
a b c
0 0 3 NaN
1 3 7 5.0
2 0 5 NaN
3 4 1 6.0
4 7 9 NaN
In [64]: df.ix[df.c.isnull()] = [999999] * len(df.columns)
In [65]: df
Out[65]:
a b c
0 999999 999999 999999.0
1 3 7 5.0
2 999999 999999 999999.0
3 4 1 6.0
4 999999 999999 999999.0
</code></pre>
<p>You can use a vectorized approach (the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow">.fillna()</a> method):</p>
<pre><code>In [50]: df
Out[50]:
a b c
0 1 8 NaN
1 8 8 6.0
2 5 2 NaN
3 9 4 1.0
4 4 2 NaN
In [51]: df.c = df.c.fillna(999999)
In [52]: df
Out[52]:
a b c
0 1 8 999999.0
1 8 8 6.0
2 5 2 999999.0
3 9 4 1.0
4 4 2 999999.0
</code></pre>
| 1 |
2016-09-25T10:23:39Z
|
[
"python",
"pandas",
"dataframe"
] |
Do something to every row on a pandas dataframe
| 39,685,737 |
<p>I'm trying to do something to a pandas dataframe as follows:</p>
<p>If say row 2 has a 'nan' value in the 'start' column, then I can replace all row entries with '999999'</p>
<pre><code>if pd.isnull(dfSleep.ix[2,'start']):
dfSleep.ix[2,:] = 999999
</code></pre>
<p>The above code works, but I want to do it for every row. I've tried replacing the '2' with a ':' but that does not work</p>
<pre><code>if pd.isnull(dfSleep.ix[:,'start']):
dfSleep.ix[:,:] = 999999
</code></pre>
<p>and I've tried something like this</p>
<pre><code>for row in df.iterrows():
if pd.isnull(dfSleep.ix[row,'start']):
dfSleep.ix[row,:] = 999999
</code></pre>
<p>but again no luck, any ideas?</p>
| 0 |
2016-09-25T10:17:22Z
| 39,685,898 |
<p>I think <code>row</code> in your approach is not a row index: <code>iterrows()</code> yields <code>(index, row)</code> tuples, so <code>row[0]</code> is the index.</p>
<p>You can use this instead:</p>
<pre><code>for row in df.iterrows():
if pd.isnull(dfSleep.ix[row[0],'start']):
dfSleep.ix[row[0],:] = 999999
</code></pre>
| 1 |
2016-09-25T10:36:55Z
|
[
"python",
"pandas",
"dataframe"
] |
Calculate sklearn.roc_auc_score for multi-class
| 39,685,740 |
<p>I would like to calculate AUC, precision, accuracy for my classifier.
I am doing supervised learning:</p>
<p>Here is my working code.
This code is working fine for binary class, but not for multi class.
Please assume that you have a dataframe with binary classes:</p>
<pre><code>sample_features_dataframe = self._get_sample_features_dataframe()
labeled_sample_features_dataframe = retrieve_labeled_sample_dataframe(sample_features_dataframe)
labeled_sample_features_dataframe, binary_class_series, multi_class_series = self._prepare_dataframe_for_learning(labeled_sample_features_dataframe)
k = 10
k_folds = StratifiedKFold(binary_class_series, k)
for train_indexes, test_indexes in k_folds:
train_set_dataframe = labeled_sample_features_dataframe.loc[train_indexes.tolist()]
test_set_dataframe = labeled_sample_features_dataframe.loc[test_indexes.tolist()]
train_class = binary_class_series[train_indexes]
test_class = binary_class_series[test_indexes]
selected_classifier = RandomForestClassifier(n_estimators=100)
selected_classifier.fit(train_set_dataframe, train_class)
predictions = selected_classifier.predict(test_set_dataframe)
predictions_proba = selected_classifier.predict_proba(test_set_dataframe)
roc += roc_auc_score(test_class, predictions_proba[:,1])
accuracy += accuracy_score(test_class, predictions)
recall += recall_score(test_class, predictions)
precision += precision_score(test_class, predictions)
</code></pre>
<p>In the end I divide the results by K, of course, to get the average AUC, precision, etc.
This code works fine.
However, I cannot calculate the same for multi class: </p>
<pre><code> train_class = multi_class_series[train_indexes]
test_class = multi_class_series[test_indexes]
selected_classifier = RandomForestClassifier(n_estimators=100)
selected_classifier.fit(train_set_dataframe, train_class)
predictions = selected_classifier.predict(test_set_dataframe)
predictions_proba = selected_classifier.predict_proba(test_set_dataframe)
</code></pre>
<p>I found that for multi class I have to add the parameter "weighted" for average.</p>
<pre><code> roc += roc_auc_score(test_class, predictions_proba[:,1], average="weighted")
</code></pre>
<p>I got an error: raise ValueError("{0} format is not supported".format(y_type))</p>
<p>ValueError: multiclass format is not supported</p>
| 0 |
2016-09-25T10:17:43Z
| 39,693,007 |
<p>You can't use <code>roc_auc</code> as a single summary metric for multiclass models. If you want, you could calculate per-class <code>roc_auc</code>, as </p>
<pre><code>roc = {label: [] for label in multi_class_series.unique()}
for label in multi_class_series.unique():
selected_classifier.fit(train_set_dataframe, train_class == label)
predictions_proba = selected_classifier.predict_proba(test_set_dataframe)
    roc[label].append(roc_auc_score(test_class == label, predictions_proba[:, 1]))
</code></pre>
<p>However it's more usual to use <code>sklearn.metrics.confusion_matrix</code> to evaluate the performance of a multiclass model.</p>
| 3 |
2016-09-26T00:00:46Z
|
[
"python",
"scikit-learn",
"supervised-learning"
] |
Calculate sklearn.roc_auc_score for multi-class
| 39,685,740 |
<p>I would like to calculate AUC, precision, accuracy for my classifier.
I am doing supervised learning:</p>
<p>Here is my working code.
This code is working fine for binary class, but not for multi class.
Please assume that you have a dataframe with binary classes:</p>
<pre><code>sample_features_dataframe = self._get_sample_features_dataframe()
labeled_sample_features_dataframe = retrieve_labeled_sample_dataframe(sample_features_dataframe)
labeled_sample_features_dataframe, binary_class_series, multi_class_series = self._prepare_dataframe_for_learning(labeled_sample_features_dataframe)
k = 10
k_folds = StratifiedKFold(binary_class_series, k)
for train_indexes, test_indexes in k_folds:
train_set_dataframe = labeled_sample_features_dataframe.loc[train_indexes.tolist()]
test_set_dataframe = labeled_sample_features_dataframe.loc[test_indexes.tolist()]
train_class = binary_class_series[train_indexes]
test_class = binary_class_series[test_indexes]
selected_classifier = RandomForestClassifier(n_estimators=100)
selected_classifier.fit(train_set_dataframe, train_class)
predictions = selected_classifier.predict(test_set_dataframe)
predictions_proba = selected_classifier.predict_proba(test_set_dataframe)
roc += roc_auc_score(test_class, predictions_proba[:,1])
accuracy += accuracy_score(test_class, predictions)
recall += recall_score(test_class, predictions)
precision += precision_score(test_class, predictions)
</code></pre>
<p>In the end I divide the results by K, of course, to get the average AUC, precision, etc.
This code works fine.
However, I cannot calculate the same for multi class: </p>
<pre><code> train_class = multi_class_series[train_indexes]
test_class = multi_class_series[test_indexes]
selected_classifier = RandomForestClassifier(n_estimators=100)
selected_classifier.fit(train_set_dataframe, train_class)
predictions = selected_classifier.predict(test_set_dataframe)
predictions_proba = selected_classifier.predict_proba(test_set_dataframe)
</code></pre>
<p>I found that for multi class I have to add the parameter "weighted" for average.</p>
<pre><code> roc += roc_auc_score(test_class, predictions_proba[:,1], average="weighted")
</code></pre>
<p>I got an error: raise ValueError("{0} format is not supported".format(y_type))</p>
<p>ValueError: multiclass format is not supported</p>
| 0 |
2016-09-25T10:17:43Z
| 39,703,870 |
<p>The <code>average</code> option of <code>roc_auc_score</code> is only defined for multilabel problems.</p>
<p>You can take a look at the following example from the scikit-learn documentation to define your own micro- or macro-averaged scores for multiclass problems:</p>
<p><a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#multiclass-settings" rel="nofollow">http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#multiclass-settings</a></p>
<p><em>Edit</em>: there is an issue on the scikit-learn tracker to implement ROC AUC for multiclass problems: <a href="https://github.com/scikit-learn/scikit-learn/issues/3298" rel="nofollow">https://github.com/scikit-learn/scikit-learn/issues/3298</a></p>
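<p>As a minimal sketch (not taken from the linked docs), a macro-averaged score can also be computed directly from the variables in the question. It assumes the columns of <code>predictions_proba</code> follow the sorted class order returned by <code>numpy.unique</code>, which is how scikit-learn classifiers order <code>classes_</code>:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

classes = np.unique(multi_class_series)
# one-hot / indicator encoding of the true labels, shape (n_samples, n_classes)
test_class_bin = label_binarize(test_class, classes=classes)
# mean of the per-class (one-vs-rest) AUCs
macro_roc_auc = roc_auc_score(test_class_bin, predictions_proba, average="macro")
</code></pre>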
| 2 |
2016-09-26T13:13:06Z
|
[
"python",
"scikit-learn",
"supervised-learning"
] |
How to make a new filter and apply it on an image using cv2 in python2.7?
| 39,685,757 |
<p>How to make a new filter and apply it on an image using cv2 in python2.7?</p>
<p>For example:</p>
<pre><code>kernel = np.array([[-1, -1, -1],
[-1, 4, -1],
[-1, -1, -1]])
</code></pre>
<p>I'm new to OpenCV, so if you can explain, that'd be great. Thanks!</p>
| 2 |
2016-09-25T10:19:22Z
| 39,687,055 |
<p>As far as applying a custom kernel to a given image goes, you may simply use the <a href="http://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html" rel="nofollow">filter2D</a> method to feed in a custom filter. You may also copy the following code to get you going, but the results with the current filter seem a bit weird:</p>
<pre><code>import cv2
import numpy as np
# Create a dummy input image.
canvas = np.zeros((100, 100), dtype=np.uint8)
canvas = cv2.circle(canvas, (50, 50), 20, (255,), -1)
kernel = np.array([[-1, -1, -1],
[-1, 4, -1],
[-1, -1, -1]])
dst = cv2.filter2D(canvas, -1, kernel)
cv2.imwrite("./filtered.png", dst)
</code></pre>
<p>Input image:</p>
<p><a href="http://i.stack.imgur.com/D03Bq.png" rel="nofollow"><img src="http://i.stack.imgur.com/D03Bq.png" alt="enter image description here"></a></p>
<p>Output Image:</p>
<p><a href="http://i.stack.imgur.com/wQ2u8.png" rel="nofollow"><img src="http://i.stack.imgur.com/wQ2u8.png" alt="enter image description here"></a></p>
| 3 |
2016-09-25T12:52:57Z
|
[
"python",
"python-2.7",
"opencv"
] |
How to make a new filter and apply it on an image using cv2 in python2.7?
| 39,685,757 |
<p>How to make a new filter and apply it on an image using cv2 in python2.7?</p>
<p>For example:</p>
<pre><code>kernel = np.array([[-1, -1, -1],
[-1, 4, -1],
[-1, -1, -1]])
</code></pre>
<p>I'm new to OpenCV, so if you can explain, that'd be great. Thanks!</p>
| 2 |
2016-09-25T10:19:22Z
| 39,687,099 |
<p>You can just adapt the code at <a href="http://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html" rel="nofollow">http://docs.opencv.org/3.1.0/d4/d13/tutorial_py_filtering.html</a>.
That is an OpenCV 3 page but it will work for OpenCV 2.</p>
<p>The only difference in the code below is how the kernel is set.</p>
<pre><code>import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('opencv_logo.png')
kernel = np.ones((3,3),np.float32) * (-1)
kernel[1,1] = 8
print(kernel)
dst = cv2.filter2D(img,-1,kernel)
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(dst),plt.title('Filters')
plt.xticks([]), plt.yticks([])
plt.show()
</code></pre>
<p>Notice that I have used 8 for the centre pixel instead of 4, because using 4 darkens the results too much. Here is the result from the code above:</p>
<p><a href="http://i.stack.imgur.com/o5bNQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/o5bNQ.png" alt="enter image description here"></a></p>
| 2 |
2016-09-25T12:57:33Z
|
[
"python",
"python-2.7",
"opencv"
] |
error using where with pandas and categorical columns
| 39,685,764 |
<p>Problem: using the where clause with a dataframe with categorical columns produces <strong>ValueError: Wrong number of dimensions</strong></p>
<p>I just can't figure out what I am doing wrong.</p>
<pre><code>df=pd.read_csv("F:/python/projects/mail/Inbox_20160911-1646/rows.csv",header=0,sep=",",quotechar="'",quoting=1)
df.where(df > 100) # WORKS !!!!
for c in [x for x in df.columns[2:] if df[x].dtype == "object" ]:
cl="c"+c
df[cl]=df[c].astype("category")
df.where(df > 100) # ---> ValueError: Wrong number of dimensions
df.where(df > 100)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-278-7469c620cf83> in <module>()
----> 1 df.where(df > 100)
F:\python\anaconda3\lib\site-packages\pandas\core\ops.py in f(self, other)
1182 # straight boolean comparisions we want to allow all columns
1183 # (regardless of dtype to pass thru) See #4537 for discussion.
-> 1184 res = self._combine_const(other, func, raise_on_error=False)
1185 return res.fillna(True).astype(bool)
1186
F:\python\anaconda3\lib\site-packages\pandas\core\frame.py in _combine_const(self, other, func, raise_on_error)
3553
3554 new_data = self._data.eval(func=func, other=other,
-> 3555 raise_on_error=raise_on_error)
3556 return self._constructor(new_data)
3557
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in eval(self, **kwargs)
2909
2910 def eval(self, **kwargs):
-> 2911 return self.apply('eval', **kwargs)
2912
2913 def quantile(self, **kwargs):
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in apply(self, f, axes, filter, do_integrity_check, consolidate, raw, **kwargs)
2888
2889 kwargs['mgr'] = self
-> 2890 applied = getattr(b, f)(**kwargs)
2891 result_blocks = _extend_blocks(applied, result_blocks)
2892
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in eval(self, func, other, raise_on_error, try_cast, mgr)
1160 result = self._try_cast_result(result)
1161
-> 1162 return [self.make_block(result, fastpath=True, )]
1163
1164 def where(self, other, cond, align=True, raise_on_error=True,
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in make_block(self, values, placement, ndim, **kwargs)
179 ndim = self.ndim
180
--> 181 return make_block(values, placement=placement, ndim=ndim, **kwargs)
182
183 def make_block_same_class(self, values, placement=None, fastpath=True,
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in make_block(values, placement, klass, ndim, dtype, fastpath)
2516 placement=placement, dtype=dtype)
2517
-> 2518 return klass(values, ndim=ndim, fastpath=fastpath, placement=placement)
2519
2520 # TODO: flexible with index=None and/or items=None
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in __init__(self, values, ndim, fastpath, placement, **kwargs)
1661
1662 super(ObjectBlock, self).__init__(values, ndim=ndim, fastpath=fastpath,
-> 1663 placement=placement, **kwargs)
1664
1665 @property
F:\python\anaconda3\lib\site-packages\pandas\core\internals.py in __init__(self, values, placement, ndim, fastpath)
79 ndim = values.ndim
80 elif values.ndim != ndim:
---> 81 raise ValueError('Wrong number of dimensions')
82 self.ndim = ndim
83
</code></pre>
<p>ValueError: Wrong number of dimensions</p>
| 1 |
2016-09-25T10:19:54Z
| 39,685,946 |
<p>Here is a small demo, that reproduces your error:</p>
<pre><code>In [11]: df = pd.DataFrame(np.random.randint(0, 10, (5,3)), columns=list('abc'))
In [12]: df
Out[12]:
a b c
0 9 9 8
1 5 6 1
2 2 9 8
3 8 1 3
4 1 5 1
</code></pre>
<p>this works:</p>
<pre><code>In [13]: df > 1
Out[13]:
a b c
0 True True True
1 True True False
2 True True True
3 True False True
4 False True False
In [14]: df['cat'] = df.c.astype('category')
In [15]: df
Out[15]:
a b c cat
0 9 9 8 8
1 5 6 1 1
2 2 9 8 8
3 8 1 3 3
4 1 5 1 1
</code></pre>
<p>this throws an <code>Wrong number of dimensions</code> exception:</p>
<pre><code>In [16]: df > 1
...skipped...
ValueError: Wrong number of dimensions
</code></pre>
<p>and this is the real reason for the previous error:</p>
<pre><code>In [19]: df.cat > 1
...skipped...
TypeError: Unordered Categoricals can only compare equality or not
</code></pre>
<p><strong>Solution:</strong></p>
<pre><code>In [22]: df.select_dtypes(include=['number']) > 1
Out[22]:
a b c
0 True True True
1 True True False
2 True True True
3 True False True
4 False True False
In [23]: np.where(df.select_dtypes(exclude=['category']) > 1)
Out[23]:
(array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 4], dtype=int64),
array([0, 1, 2, 0, 1, 0, 1, 2, 0, 2, 1], dtype=int64))
</code></pre>
| 0 |
2016-09-25T10:43:10Z
|
[
"python",
"pandas",
"categorical-data"
] |
Conversion of integers to Roman numerals (python)
| 39,685,765 |
<p>I understand there were many similar questions asked on this topic, but I still have some doubts that need to be cleared up.</p>
<pre><code>def int_to_roman(input):
if type(input) != type(1):
raise TypeError, "expected integer, got %s" % type(input)
if not 0 < input < 4000:
raise ValueError, "Argument must be between 1 and 3999"
ints = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
nums = ('M', 'CM', 'D', 'CD','C', 'XC','L','XL','X','IX','V','IV','I')
result = ""
for i in range(len(ints)):
count = int(input / ints[i])
result += nums[i] * count
input -= ints[i] * count
return result
</code></pre>
<p>I don't really understand the code below:</p>
<pre><code>for i in range(len(ints)):
count = int(input / ints[i])
result += nums[i] * count
input -= ints[i] * count
</code></pre>
<p>Does:</p>
<pre><code>for i in range (len(ints)):
</code></pre>
<p>mean it iterates over the integers themselves ('1000', '900', '500', ...) or over 13 (the number of items in <code>ints</code>)?</p>
<pre><code>count = int(input / ints[i])
</code></pre>
<p>What does <code>ints[i]</code> mean?</p>
<p>Can anyone please explain this code? It would be best if you could show examples (e.g. substitute numbers and show how it works).</p>
| -2 |
2016-09-25T10:19:56Z
| 39,685,825 |
<p>The names of the two lists are awful (<code>ints</code> and <code>nums</code>). </p>
<p>However, starting with the highest roman numeral (<code>nums[0]</code> = 'M'), the loop finds out how many times the value of that numeral (<code>ints[0]</code> = 1000) divides into the input value, and appends the numeral that many times to the result string. </p>
<p>Then it subtracts the value of that string just added to the result (<code>ints[0] * count</code>) from the input, and moves on to the next roman numeral (<code>nums[1]</code>) to repeat the process with that remainder.</p>
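<p>A short trace of the first iteration, using an example input of 2016 (an arbitrary value, not from the question), may make this clearer:</p>
<pre><code>ints = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
nums = ('M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I')

value = 2016
count = int(value / ints[0])    # 2016 / 1000 -> 2
piece = nums[0] * count         # 'M' * 2 -> 'MM'
value -= ints[0] * count        # 2016 - 2000 -> 16
# the following iterations contribute nothing until ints[i] == 10,
# which adds 'X', then 5 adds 'V', then 1 adds 'I' -> 'MMXVI'
</code></pre>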
| 1 |
2016-09-25T10:27:12Z
|
[
"python",
"python-2.7",
"function",
"roman-numerals"
] |
Conversion of integers to Roman numerals (python)
| 39,685,765 |
<p>I understand there were many similar questions asked on this topic, but I still have some doubts that need to be cleared up.</p>
<pre><code>def int_to_roman(input):
if type(input) != type(1):
raise TypeError, "expected integer, got %s" % type(input)
if not 0 < input < 4000:
raise ValueError, "Argument must be between 1 and 3999"
ints = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
nums = ('M', 'CM', 'D', 'CD','C', 'XC','L','XL','X','IX','V','IV','I')
result = ""
for i in range(len(ints)):
count = int(input / ints[i])
result += nums[i] * count
input -= ints[i] * count
return result
</code></pre>
<p>I don't really understand the code below:</p>
<pre><code>for i in range(len(ints)):
count = int(input / ints[i])
result += nums[i] * count
input -= ints[i] * count
</code></pre>
<p>Does:</p>
<pre><code>for i in range (len(ints)):
</code></pre>
<p>mean it iterates over the integers themselves ('1000', '900', '500', ...) or over 13 (the number of items in <code>ints</code>)?</p>
<pre><code>count = int(input / ints[i])
</code></pre>
<p>What does <code>ints[i]</code> mean?</p>
<p>Can anyone please explain this code? It would be best if you could show examples (e.g. substitute numbers and show how it works).</p>
| -2 |
2016-09-25T10:19:56Z
| 39,685,870 |
<p>The two lists, <code>ints</code> and <code>nums</code> are the same length. The loop iterates over the length of <code>ints</code>, which means that the variable <code>i</code> can access the same position of either list, matching one to the other.</p>
<p>If we step through the loop, <code>count</code> is assigned the integer value of the input divided by the first number in <code>ints</code>, which is 1000. If the <code>input</code> variable is, say, 10, then 10/1000 will result in a number <1, and using <code>int()</code> on the result will cause it to assign 0 to <code>count</code>. When 0 is multiplied by the matching string from <code>nums</code>, the assigned result is basically nothing. Then the same amount is subtracted from <code>input</code>, which in this case leaves it unchanged.</p>
<p>Eventually, the loop will reach a point when the result of the division is a number >=1, so the following steps will do something.</p>
<p>Let's say the result of <code>int(input / ints[i])</code> is 3. <code>"X" * 3</code> results in <code>"XXX"</code>, which is added to <code>result</code>, and the <code>input</code> is reduced by the appropriate amount, in this case 30. So on, until the loop ends.</p>
| 1 |
2016-09-25T10:32:54Z
|
[
"python",
"python-2.7",
"function",
"roman-numerals"
] |
how to change file name and extension in python?
| 39,685,775 |
<p>I want to move and rename a file called <code>malicious.txt</code>, located at path <code>/home/dina/A</code>, to a new location <code>/home/dina/b</code> with a new name based on <code>apkName</code> (e.g. <code>a.apk</code> or <code>b.apk</code>, etc). I want the final name to have a <code>.json</code> extension instead of the <code>.apk</code> extension, e.g. <code>a.json</code> or <code>b.json</code>.</p>
<p>I tried :</p>
<pre><code>import os
os.rename("malicious.txt",apkName)
</code></pre>
<p>But it removes <code>malicious.txt</code> without any other file appearing.</p>
| -1 |
2016-09-25T10:21:33Z
| 39,686,594 |
<p>The following should rename "malicious.txt" to the name of the apk file with a ".json" extension:</p>
<pre><code>import os
apkName = "a.apk"
apkFullpath = os.path.join(os.path.sep,"home","dina","a",apkName)
jsonName = os.path.splitext(apkName)[0]+".json"
jsonFullpath = os.path.join(os.path.sep,"home","dina","b",jsonName)
os.rename("malicious.txt",jsonName)
</code></pre>
<p>Note that you can rename a file only once (after the rename, you won't be able to access it by its old name).</p>
<p>for more info about <strong>os.path.join</strong> and <strong>os.path.sep</strong></p>
<p><a href="https://docs.python.org/2/library/os.path.html" rel="nofollow">https://docs.python.org/2/library/os.path.html</a></p>
<p><strong>os.path.join(path, *paths)</strong></p>
<p>Join one or more path components intelligently. The return value is the concatenation of path and any members of *paths with exactly one directory separator (os.sep) following each non-empty part except the last, meaning that the result will only end in a separator if the last part is empty. If a component is an absolute path, all previous components are thrown away and joining continues from the absolute path component.</p>
<p>On Windows, the drive letter is not reset when an absolute path component (e.g., r'\foo') is encountered. If a component contains a drive letter, all previous components are thrown away and the drive letter is reset. Note that since there is a current directory for each drive, os.path.join("c:", "foo") represents a path relative to the current directory on drive C: (c:foo), not c:\foo.</p>
| 0 |
2016-09-25T12:05:13Z
|
[
"python"
] |
Assertion error, even though my return value is the same
| 39,685,804 |
<pre><code>def interleave(s1,s2): #This function interleaves s1,s2 together
guess = 0
total = 0
while (guess < len(s1)) and (guess < len(s2)):
x = s1[guess]
y = s2[guess]
m = x + y
print ((m),end ="")
guess += 1
if (len(s1) == len(s2)):
return ("")
elif(len(s1) > len(s2)):
return (s1[guess:])
elif(len(s2) > len(s1)):
return (s2[guess:])
print (interleave("Smlksgeneg n a!", "a ie re gsadhm"))
</code></pre>
<p>For some reason, my test function gives an assertion error even though the output is the same as in the code below.
E.g. "Smlksgeneg n a!", "a ie re gsadhm" returns "Sam likes green eggs and ham!",
but an assertion error still comes out.</p>
<pre><code>def testInterleave():
print("Testing interleave()...", end="")
assert(interleave("abcdefg", "abcdefg")) == ("aabbccddeeffgg")
assert(interleave("abcde", "abcdefgh") == "aabbccddeefgh")
assert(interleave("abcdefgh","abcde") == "aabbccddeefgh")
assert(interleave("Smlksgeneg n a!", "a ie re gsadhm") ==
"Sam likes green eggs and ham!")
assert(interleave("","") == "")
print("Passed!")
testInterleave()
</code></pre>
| -1 |
2016-09-25T10:25:04Z
| 39,686,026 |
<p>You are confusing what is printed by interleave() with what is returned by it. The assert is testing the returned value. For example, when s1 and s2 are the same length, your code prints the interleave (on the <code>print((m),end="")</code> line) but returns an empty string (in the line <code>return ("")</code>).</p>
<p>If you want interleave to return the interleaved string, you need to collect the x and y variables (not very well named if they are always holding characters) into a single string and return that.</p>
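<p>A minimal sketch of that change (one possible fix, not the only one): build a <code>result</code> string inside the loop instead of printing, then return it together with the leftover tail of the longer string.</p>
<pre><code>def interleave(s1, s2):
    result = ""
    i = 0
    while i < len(s1) and i < len(s2):
        result += s1[i] + s2[i]   # collect the pair instead of printing it
        i += 1
    return result + s1[i:] + s2[i:]

print(interleave("Smlksgeneg n a!", "a ie re gsadhm"))  # Sam likes green eggs and ham!
</code></pre>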
| 1 |
2016-09-25T10:51:14Z
|
[
"python"
] |
Assertion error, even though my return value is the same
| 39,685,804 |
<pre><code>def interleave(s1,s2): #This function interleaves s1,s2 together
guess = 0
total = 0
while (guess < len(s1)) and (guess < len(s2)):
x = s1[guess]
y = s2[guess]
m = x + y
print ((m),end ="")
guess += 1
if (len(s1) == len(s2)):
return ("")
elif(len(s1) > len(s2)):
return (s1[guess:])
elif(len(s2) > len(s1)):
return (s2[guess:])
print (interleave("Smlksgeneg n a!", "a ie re gsadhm"))
</code></pre>
<p>For some reason, my test function gives an assertion error even though the output is the same as in the code below.
E.g. "Smlksgeneg n a!", "a ie re gsadhm" returns "Sam likes green eggs and ham!",
but an assertion error still comes out.</p>
<pre><code>def testInterleave():
print("Testing interleave()...", end="")
assert(interleave("abcdefg", "abcdefg")) == ("aabbccddeeffgg")
assert(interleave("abcde", "abcdefgh") == "aabbccddeefgh")
assert(interleave("abcdefgh","abcde") == "aabbccddeefgh")
assert(interleave("Smlksgeneg n a!", "a ie re gsadhm") ==
"Sam likes green eggs and ham!")
assert(interleave("","") == "")
print("Passed!")
testInterleave()
</code></pre>
| -1 |
2016-09-25T10:25:04Z
| 39,686,082 |
<p>The problem is that your function just prints the interleaved portion of the resulting string, it doesn't return it, it only returns the tail of the longer string.</p>
<p>Here's a repaired and simplified version of your code. You don't need to do those <code>if... elif</code> tests. Also, your code has a lot of superfluous parentheses (and one misplaced parenthesis), which I've removed.</p>
<pre><code>def interleave(s1, s2):
''' Interleave strings s1 and s2 '''
guess = 0
result = ""
while (guess < len(s1)) and (guess < len(s2)):
x = s1[guess]
y = s2[guess]
result += x + y
guess += 1
return result + s1[guess:] + s2[guess:]
def testInterleave():
print("Testing interleave()...", end="")
assert interleave("abcdefg", "abcdefg") == "aabbccddeeffgg"
assert interleave("abcde", "abcdefgh") == "aabbccddeefgh"
assert interleave("abcdefgh","abcde") == "aabbccddeefgh"
assert (interleave("Smlksgeneg n a!", "a ie re gsadhm")
== "Sam likes green eggs and ham!")
assert interleave("", "") == ""
print("Passed!")
print(interleave("Smlksgeneg n a!", "a ie re gsadhm"))
testInterleave()
</code></pre>
<p><strong>output</strong></p>
<pre><code>Sam likes green eggs and ham!
Testing interleave()...Passed!
</code></pre>
<p>Here's a slightly improved version of <code>interleave</code>. It uses a list to store the result, rather than using repeated string concatenation. Using lists to build string like this is a common Python practice because it's more efficient than repeated string concatenation using <code>+</code> or <code>+=</code>; OTOH,<code>+</code> and <code>+=</code> on strings have been optimised so that they're fairly efficient for short strings (up to 1000 chars or so).</p>
<pre><code>def interleave(s1, s2):
result = []
i = 0
for i, t in enumerate(zip(s1, s2)):
result.extend(t)
i += 1
result.extend(s1[i:] + s2[i:])
return ''.join(result)
</code></pre>
<p>That <code>i = 0</code> is necessary in case either <code>s1</code> or <code>s2</code> are empty strings. When that happens the <code>for</code> loop isn't entered and so <code>i</code> won't get assigned a value.</p>
<p>Finally, here's a compact version using a list comprehension and the standard <code>itertools.zip_longest</code> function.</p>
<pre><code>def interleave(s1, s2):
return ''.join([u+v for u,v in zip_longest(s1, s2, fillvalue='')])
</code></pre>
| 0 |
2016-09-25T10:58:15Z
|
[
"python"
] |
Complex search in xml using lxml
| 39,685,806 |
<p><strong>Background</strong> I am trying to read a password from a keepass2 file using <a href="https://github.com/phpwutz/libkeepass" rel="nofollow">libkeepass</a> python library.</p>
<p>Using <a href="http://lxml.de/" rel="nofollow">lxml</a> (beause that is what libkeepass gives me) I have to search for an entry like this and take the password value from it</p>
<pre><code><Entry>
<String>
<Key>Password</Key>
<Value Protected="False" ProtectedValue="XXX">XXX</Value>
</String>
<String>
<Key>Title</Key>
<Value>PasswordName</Value>
 </String>
</Entry>
</code></pre>
<p>So I have to find an entry:</p>
<ul>
<li>with a child "String"
<ul>
<li>with a child "Key" with value "Title"</li>
<li>with a child "Value" with value "PasswordName"</li>
</ul></li>
<li>with a child "String"
<ul>
<li>with a child "Key" with value "Password"</li>
<li>with a child "Value" -> and the value of that child is what I need</li>
</ul></li>
</ul>
<p>I already got this far (kdb being the password file object):</p>
<pre><code>kdb.obj_root.findall(".//Entry/String[Key='Title'][Value='PasswordName']")
</code></pre>
<p>This gives me the String Element of the correct entry.</p>
| 1 |
2016-09-25T10:25:08Z
| 39,685,834 |
<p>I just realized, I can navigate up using "..". So the solution is:</p>
<pre><code>kdb.obj_root.findall(".//Entry/String[Key='Title'][Value='MyPassword']/../String[Key='Password']/Value")
</code></pre>
| 1 |
2016-09-25T10:27:57Z
|
[
"python",
"lxml"
] |
Complex search in xml using lxml
| 39,685,806 |
<p><strong>Background</strong> I am trying to read a password from a keepass2 file using <a href="https://github.com/phpwutz/libkeepass" rel="nofollow">libkeepass</a> python library.</p>
<p>Using <a href="http://lxml.de/" rel="nofollow">lxml</a> (beause that is what libkeepass gives me) I have to search for an entry like this and take the password value from it</p>
<pre><code><Entry>
<String>
<Key>Password</Key>
<Value Protected="False" ProtectedValue="XXX">XXX</Value>
</String>
<String>
<Key>Title</Key>
<Value>PasswordName</Value>
 </String>
</Entry>
</code></pre>
<p>So I have to find an entry:</p>
<ul>
<li>with a child "String"
<ul>
<li>with a child "Key" with value "Title"</li>
<li>with a child "Value" with value "PasswordName"</li>
</ul></li>
<li>with a child "String"
<ul>
<li>with a child "Key" with value "Password"</li>
<li>with a child "Value" -> and the value of that child is what I need</li>
</ul></li>
</ul>
<p>I already got this far (kdb being the password file object):</p>
<pre><code>kdb.obj_root.findall(".//Entry/String[Key='Title'][Value='PasswordName']")
</code></pre>
<p>This gives me the String Element of the correct entry.</p>
| 1 |
2016-09-25T10:25:08Z
| 39,686,298 |
<p>Alternatively, instead of going down to <code>String</code>, back up to <code>Entry</code> and then down again to another <code>String</code>, you can just use a predicate on the <code>Entry</code> element and then return the target <code>String</code> element from there. </p>
<p>Since you're using <code>lxml</code>, I'd also suggest using the <code>xpath()</code> method instead of <code>findall()</code>. The former provides full support for XPath 1.0 expressions, while the latter only supports a <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#elementtree-xpath" rel="nofollow">subset of XPath 1.0</a>:</p>
<pre><code>query = """
.//Entry[String[Key='Title' and Value='MyPassword']]
/String[Key='Password']
/Value
"""
kdb.obj_root.xpath(query)
</code></pre>
| 1 |
2016-09-25T11:27:56Z
|
[
"python",
"lxml"
] |
Python 3 defining a variable within a function
| 39,685,969 |
<p>I am working on my own code to encrypt/decrypt messages and all is working fine. Now I am just trying to tidy up the code a bit and I'm also trying to add error catching. I want to make this error catching within a function so I don't have to type out the error catching like 6 times within one block of code.</p>
<pre><code>c = 1
list = ['Y', 'N']
test1 = "H"
def f(test1):
while c == 1:
try:
test1 = raw_input("Input something yo")
if test1 not in list:
raise ValueError("Enter Y or N")
else:
return test1
break
except ValueError as error:
print (error)
a = f
a(test1)
a = test1
print (a)
if a == "Y":
print ("Yes")
else:
print ("No")
</code></pre>
<p>This is a test to practise doing this. However, I have not been able to successfully do it. So, in this code, I want to define a as either "Y" or "N" as a user input. I want a to call the function f and then test1 is the variable name. I want a to be test1 after the function has run. So if the function is running and the user types "Y", then "Yes" will be printed. If not, then "No" will be printed. For my actual script, I will need multiple values to be defined as the returned value from this function since I don't want to type out the error catching process so many times. If the user does not type "Y" or "N" then they have to type it again, so that part works. It's just returning the value test1 that I'm having trouble with.</p>
<p>At the moment, test1 is always "H" but if I don't have that line, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "FuncTest.py", line 19, in <module>
a(test1)
NameError: name 'test1' is not defined
</code></pre>
<p>Any ideas how I can fix this? Thanks in advance everyone! :D</p>
| -1 |
2016-09-25T10:45:27Z
| 39,689,152 |
<p>You are getting an error when you remove line 3 of your example as you are removing the definition of test1.</p>
<p>It isn't necessary to pass values to a function in order to return them; a more general application of the same code is as follows:</p>
<pre><code>def wait_and_validate(validation_list):
while True:
the_input= raw_input("Input something yo")
if the_input not in validation_list:
raise ValueError("Input not In Validation List")
return the_input
try:
valid_input = wait_and_validate(['Y', 'N'])
except ValueError as error:
print(error)
#do stuff with valid_input
</code></pre>
<p>You may wish to experiment with different return values rather than raising errors. Not sure what the official wisdom is, but from an ease of reuse perspective returning a tuple is probably easier to handle and saves defining your errors in your validation function (i.e. you might want to handle it differently depending on the part of the program you're in).</p>
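<p>If you would rather keep re-prompting inside the function instead of raising (which is closer to what the original while loop was trying to do), a hedged sketch could look like this (use <code>input()</code> instead of <code>raw_input()</code> on Python 3):</p>
<pre><code>def ask_until_valid(prompt, valid_choices):
    # keep prompting until the user enters one of the valid choices
    while True:
        answer = raw_input(prompt)
        if answer in valid_choices:
            return answer
        print("Enter one of: %s" % ", ".join(valid_choices))

a = ask_until_valid("Input something yo: ", ['Y', 'N'])
print("Yes" if a == "Y" else "No")
</code></pre>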
| 0 |
2016-09-25T16:26:26Z
|
[
"python",
"function"
] |
Is it possible to test a REST API against documentation in Django?
| 39,685,973 |
<p>I develop a RESTful API server using Django REST Framework, and while the app matures, entity signatures sometimes change.</p>
<p>While writing tests for this app, I began to wonder if there are tools to check that API returns data as stated in documentation, i.e. User entity contains all the required fields etc.</p>
<p>I know that there are API auto-documenting tools like django-rest-swagger and others, and maybe there is some tool that helps asserting that data returned to user has same signature as in documentation?</p>
| 0 |
2016-09-25T10:45:43Z
| 39,691,509 |
<p>Why not test it with simple unit tests?
I assume that you have your API urls mapped properly to Django's url patterns.</p>
<p>Then you can simply unit test them with Django REST Framework <a href="http://www.django-rest-framework.org/api-guide/testing/#test-cases" rel="nofollow">Test Cases</a>.</p>
<p>Here is a code snippet:</p>
<pre><code>from rest_framework.test import APITestCase

class InboxNotificationForPlayerViewTest(APITestCase):
def test_returns_delivered_inbox_notifications(self):
"""..."""
response = self.client.get(reverse(
'notifications-api:inbox-for-player', kwargs={'player_id': self.subscriber.player_id}
))
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertItemsEqual(response.data, {
'count': 3,
'not_read': 2,
'notifications': {
'read': [
inbox_payload(classic),
inbox_payload(without_window)
],
'not_read': [
inbox_payload(read)
]
}
})
</code></pre>
<p>I know that this is possibly quite a long solution, but I'm sure it will help you in future development. Note that every change in the response data format will be caught on each test run.</p>
| 0 |
2016-09-25T20:27:15Z
|
[
"python",
"django",
"rest",
"testing"
] |
Is it possible to test a REST API against documentation in Django?
| 39,685,973 |
<p>I develop a RESTful API server using Django REST Framework, and while the app matures, entity signatures sometimes change.</p>
<p>While writing tests for this app, I began to wonder if there are tools to check that API returns data as stated in documentation, i.e. User entity contains all the required fields etc.</p>
<p>I know that there are API auto-documenting tools like django-rest-swagger and others, and maybe there is some tool that helps asserting that data returned to user has same signature as in documentation?</p>
| 0 |
2016-09-25T10:45:43Z
| 39,691,858 |
<p>There are dedicated tools for API documentation (i.e. Swagger: <a href="http://swagger.io/" rel="nofollow">http://swagger.io/</a>). You can also google for "API contracting".</p>
<p>You can validate your server against API spec using DREDD (<a href="http://dredd.readthedocs.io/en/latest/" rel="nofollow">http://dredd.readthedocs.io/en/latest/</a>).</p>
<p>Bonus article: <a href="https://blog.codeship.com/api-documentation-when-preferences-matter/" rel="nofollow">https://blog.codeship.com/api-documentation-when-preferences-matter/</a></p>
| 0 |
2016-09-25T21:07:54Z
|
[
"python",
"django",
"rest",
"testing"
] |
Addition of every two columns
| 39,686,055 |
<p>I would like to calculate the sums of pairs of columns of a matrix (the sum of columns 0 and 1, of columns 2 and 3, and so on).</p>
<p>So I tried nested "for" loops, but I never get the right results.</p>
<p>For example:</p>
<pre><code>c = np.array([[0,0,0.25,0.5],[0,0.5,0.25,0],[0.5,0,0,0]],float)
freq=np.zeros(6,float).reshape((3, 2))
#I calculate the sum between the first and second column, and between the third and the fourth column
for i in range(0,4,2):
for j in range(1,4,2):
for p in range(0,2):
freq[:,p]=(c[:,i]+c[:,j])
</code></pre>
<p>But the result is:</p>
<pre><code> print freq
array([[ 0.75, 0.75],
[ 0.25, 0.25],
[ 0. , 0. ]])
</code></pre>
<p>Normally the correct result should be (0., 0.5, 0.5) and (0.75, 0.25, 0), so I think the problem is in the nested "for" loops.</p>
<p>Does anyone know how I can calculate the sum of every two columns? I have a matrix with 400 columns.</p>
| 1 |
2016-09-25T10:54:12Z
| 39,686,134 |
<p>Here is one way using <code>np.split()</code>:</p>
<pre><code>In [36]: np.array(np.split(c, np.arange(2, c.shape[1], 2), axis=1)).sum(axis=-1)
Out[36]:
array([[ 0. , 0.5 , 0.5 ],
[ 0.75, 0.25, 0. ]])
</code></pre>
<p>Or as a more general way even for odd length arrays:</p>
<pre><code>In [87]: def vertical_adder(array):
return np.column_stack([np.sum(arr, axis=1) for arr in np.array_split(array, np.arange(2, array.shape[1], 2), axis=1)])
....:
In [88]: vertical_adder(c)
Out[88]:
array([[ 0. , 0.75],
[ 0.5 , 0.25],
[ 0.5 , 0. ]])
In [94]: a
Out[94]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
In [95]: vertical_adder(a)
Out[95]:
array([[ 1, 5, 4],
[11, 15, 9],
[21, 25, 14]])
</code></pre>
| 2 |
2016-09-25T11:05:38Z
|
[
"python",
"numpy"
] |
Addition of every two columns
| 39,686,055 |
<p>I would like to calculate the sums of pairs of columns of a matrix (the sum of columns 0 and 1, of columns 2 and 3, and so on).</p>
<p>So I tried nested "for" loops, but I never get the right results.</p>
<p>For example:</p>
<pre><code>c = np.array([[0,0,0.25,0.5],[0,0.5,0.25,0],[0.5,0,0,0]],float)
freq=np.zeros(6,float).reshape((3, 2))
#I calculate the sum between the first and second column, and between the third and the fourth column
for i in range(0,4,2):
for j in range(1,4,2):
for p in range(0,2):
freq[:,p]=(c[:,i]+c[:,j])
</code></pre>
<p>But the result is:</p>
<pre><code> print freq
array([[ 0.75, 0.75],
[ 0.25, 0.25],
[ 0. , 0. ]])
</code></pre>
<p>Normally the correct result should be (0., 0.5, 0.5) and (0.75, 0.25, 0), so I think the problem is in the nested "for" loops.</p>
<p>Does anyone know how I can calculate the sum of every two columns? I have a matrix with 400 columns.</p>
| 1 |
2016-09-25T10:54:12Z
| 39,686,188 |
<p>You can simply reshape to split the last dimension into two dimensions, with the last dimension of length <code>2</code> and then sum along it, like so -</p>
<pre><code>freq = c.reshape(c.shape[0],-1,2).sum(2).T
</code></pre>
<p>Reshaping only creates a view into the array, so effectively, we are just using the summing operation here and as such must be efficient.</p>
<p>Sample run -</p>
<pre><code>In [17]: c
Out[17]:
array([[ 0. , 0. , 0.25, 0.5 ],
[ 0. , 0.5 , 0.25, 0. ],
[ 0.5 , 0. , 0. , 0. ]])
In [18]: c.reshape(c.shape[0],-1,2).sum(2).T
Out[18]:
array([[ 0. , 0.5 , 0.5 ],
[ 0.75, 0.25, 0. ]])
</code></pre>
| 3 |
2016-09-25T11:13:53Z
|
[
"python",
"numpy"
] |
Addition of every two columns
| 39,686,055 |
<p>I would like to calculate the sums of pairs of columns of a matrix (the sum of columns 0 and 1, of columns 2 and 3, and so on).</p>
<p>So I tried nested "for" loops, but I never get the right results.</p>
<p>For example:</p>
<pre><code>c = np.array([[0,0,0.25,0.5],[0,0.5,0.25,0],[0.5,0,0,0]],float)
freq=np.zeros(6,float).reshape((3, 2))
#I calculate the sum between the first and second column, and between the third and the fourth column
for i in range(0,4,2):
for j in range(1,4,2):
for p in range(0,2):
freq[:,p]=(c[:,i]+c[:,j])
</code></pre>
<p>But the result is:</p>
<pre><code> print freq
array([[ 0.75, 0.75],
[ 0.25, 0.25],
[ 0. , 0. ]])
</code></pre>
<p>Normally the correct result should be (0., 0.5, 0.5) and (0.75, 0.25, 0), so I think the problem is in the nested "for" loops.</p>
<p>Does anyone know how I can calculate the sum of every two columns? I have a matrix with 400 columns.</p>
| 1 |
2016-09-25T10:54:12Z
| 39,686,325 |
<p>Add the slices <code>c[:, ::2]</code> and <code>c[:, 1::2]</code>:</p>
<pre><code>In [62]: c
Out[62]:
array([[ 0. , 0. , 0.25, 0.5 ],
[ 0. , 0.5 , 0.25, 0. ],
[ 0.5 , 0. , 0. , 0. ]])
In [63]: c[:, ::2] + c[:, 1::2]
Out[63]:
array([[ 0. , 0.75],
[ 0.5 , 0.25],
[ 0.5 , 0. ]])
</code></pre>
| 3 |
2016-09-25T11:31:53Z
|
[
"python",
"numpy"
] |
Find edges of images
| 39,686,084 |
<p>I have software that generates several images like the following four images:</p>
<p><a href="http://i.stack.imgur.com/XwbtA.png" rel="nofollow"><img src="http://i.stack.imgur.com/XwbtA.png" alt="image 01"></a></p>
<p><a href="http://i.stack.imgur.com/OiezM.png" rel="nofollow"><img src="http://i.stack.imgur.com/OiezM.png" alt="image 02"></a></p>
<p><a href="http://i.stack.imgur.com/0vm2f.png" rel="nofollow"><img src="http://i.stack.imgur.com/0vm2f.png" alt="image 03"></a></p>
<p><a href="http://i.stack.imgur.com/GSu1Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/GSu1Q.png" alt="image 04"></a></p>
<p>Does an algorithm exist that detects the (horizontal & vertical) edges and creates a binary output like this?</p>
<p><a href="http://i.stack.imgur.com/iq4Hb.png" rel="nofollow"><img src="http://i.stack.imgur.com/iq4Hb.png" alt="enter image description here"></a></p>
<p>If possible I'd like to implement this with <code>numpy</code> and <code>scipy</code>. I already tried to implement an algorithm, but I failed because I didn't find a place to start. I also tried to use a neural network to do this, but that seems like overkill and does not work perfectly.</p>
| 0 |
2016-09-25T10:58:47Z
| 39,780,360 |
<p>The simplest thing to try is to:</p>
<ul>
<li>Convert your images to binary images (by a simple threshold)</li>
<li>Apply the Hough transform (OpenCV, Matlab have it already implemented)</li>
<li>In the Hough transform results, detect the peaks for angles 0 degree, + and - 90 degrees. (Vertical and horizontal lines)</li>
</ul>
<p>In OpenCV and Matlab, you have extra options for the Hough transform which allow you to fill the gaps between two disconnected segments belonging to a same straight line. You may need a few extra operations for post-processing your results but the main steps should be these ones.</p>
| 1 |
2016-09-29T21:31:55Z
|
[
"python",
"algorithm",
"python-2.7",
"numpy",
"image-processing"
] |
How to ignore or override calls to all methods of a class for testing
| 39,686,119 |
<p>I have set up a unit test that looks roughly like this:</p>
<pre><code>from unittest import TestCase
from . import main
from PIL import Image
class TestTableScreenBased(TestCase):
def test_get_game_number_on_screen2(self):
t = main.TableScreenBased()
t.entireScreenPIL = Image.open('tests/1773793_PreFlop_0.png')
t.get_dealer_position()
</code></pre>
<p>The function that I want to test is called get_dealer_position. In this function I'm updating some items on my gui which is not initialised for the test, so I get the expected error:<code>NameError: name 'ui_action_and_signals' is not defined</code></p>
<pre><code>def get_dealer_position(self):
func_dict = self.coo[inspect.stack()[0][3]][self.tbl]
ui_action_and_signals.signal_progressbar_increase.emit(5)
ui_action_and_signals.signal_status.emit("Analyse dealer position")
pil_image = self.crop_image(self.entireScreenPIL, self.tlc[0] + 0, self.tlc[1] + 0,
self.tlc[0] +800, self.tlc[1] + 500)
</code></pre>
<p>What is the best way to 'ignore' or override all calls to methods in that class <code>ui_action_and_signals</code>? This class contains plenty of methods (for hundreds of gui items) and I would prefer not to have to override each of them separately. Is there a way to tell the python test that everything related to the ui_action_and_signals should be ignored? Is there an elegant way to do this with monkey patching or mocking?</p>
| 0 |
2016-09-25T11:02:58Z
| 39,686,492 |
<p>If you are using Python >= 3.3 you can use the built in <a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow">unittest.mock</a> module. If you are using an earlier version of Python you can use the same tools by installing <a href="https://pypi.python.org/pypi/mock" rel="nofollow">the backport</a> using Pip.</p>
<p>You will need to replace your missing dependency with a Mock object - there are many ways to do it, but one way is to use the patch decorator which takes care of removing the Mock object after the test:</p>
<pre><code>from unittest.mock import patch
from unittest import TestCase
from . import main
from PIL import Image
class TestTableScreenBased(TestCase):
@patch('module.path.of.ui_action_and_signals')
def test_get_game_number_on_screen2(self, mock_ui_action_and_signals):
t = main.TableScreenBased()
t.entireScreenPIL = Image.open('tests/1773793_PreFlop_0.png')
t.get_dealer_position()
</code></pre>
<p>There is more information about the patch decorator <a href="https://docs.python.org/3/library/unittest.mock-examples.html#patch-decorators" rel="nofollow">in the official documentation</a> including some <a href="https://docs.python.org/3/library/unittest.mock.html#where-to-patch" rel="nofollow">hints on where to patch</a> which is sometimes not entirely obvious.</p>
<p>The mock system has many other features which you might want to use, such as duplicating the spec of an existing class, or finding out what calls were made to your Mock object during the test.</p>
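<p>For instance, with the patched test above you could assert that <code>get_dealer_position()</code> emitted the signals it uses in the question's code; this is only one possible check, shown as a sketch:</p>
<pre><code>mock_ui_action_and_signals.signal_progressbar_increase.emit.assert_called_with(5)
mock_ui_action_and_signals.signal_status.emit.assert_called_with("Analyse dealer position")
</code></pre>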
| 1 |
2016-09-25T11:52:45Z
|
[
"python",
"unit-testing"
] |
Pandas: How to open certain files
| 39,686,132 |
<p>I am currently working on the data set from this <a href="https://github.com/smarthi/UnivOfWashington-Machine-Learning/tree/master/MLFoundations/week4/people_wiki.gl" rel="nofollow">link</a>, but I am unable to read these files with Pandas. Has anyone tried working with such files?</p>
<p>I am trying the following:</p>
<pre><code>import pandas as pd
df = pd.read_csv("m_4549381c276b46c6.0000")
</code></pre>
<p>But I get the following error</p>
<pre><code>Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
</code></pre>
| 2 |
2016-09-25T11:05:21Z
| 39,686,244 |
<p>Those files are parts of a saved <a href="https://turi.com/products/create/docs/generated/graphlab.SFrame.html" rel="nofollow">SFrame</a>.</p>
<p>So you can load them this way:</p>
<pre><code>import sframe
sf = sframe.SFrame('/path/to/dir/')
</code></pre>
<p>Demo: I've downloaded all files from <a href="https://github.com/smarthi/UnivOfWashington-Machine-Learning/tree/master/MLFoundations/week4/people_wiki.gl" rel="nofollow">people_wiki.gl</a> and put them under: <code>D:/download/sframe/</code></p>
<pre><code>In [7]: import sframe
In [7]: sf = sframe.SFrame('D:/download/sframe/')
In [8]: sf
Out[8]:
Columns:
URI str
name str
text str
Rows: 59071
Data:
+-------------------------------+---------------------+
| URI | name |
+-------------------------------+---------------------+
| <http://dbpedia.org/resour... | Digby Morrell |
| <http://dbpedia.org/resour... | Alfred J. Lewy |
| <http://dbpedia.org/resour... | Harpdog Brown |
| <http://dbpedia.org/resour... | Franz Rottensteiner |
| <http://dbpedia.org/resour... | G-Enka |
| <http://dbpedia.org/resour... | Sam Henderson |
| <http://dbpedia.org/resour... | Aaron LaCrate |
| <http://dbpedia.org/resour... | Trevor Ferguson |
| <http://dbpedia.org/resour... | Grant Nelson |
| <http://dbpedia.org/resour... | Cathy Caruth |
+-------------------------------+---------------------+
+-------------------------------+
| text |
+-------------------------------+
| digby morrell born 10 octo... |
| alfred j lewy aka sandy le... |
| harpdog brown is a singer ... |
| franz rottensteiner born i... |
| henry krvits born 30 decem... |
| sam henderson born october... |
| aaron lacrate is an americ... |
| trevor ferguson aka john f... |
| grant nelson born 27 april... |
| cathy caruth born 1955 is ... |
+-------------------------------+
[59071 rows x 3 columns]
Note: Only the head of the SFrame is printed.
You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.
</code></pre>
<p>Now you can convert it to Pandas DF if you need:</p>
<pre><code>In [17]: df = sf.to_dataframe()
In [18]: pd.options.display.max_colwidth = 40
In [19]: df.head()
Out[19]:
URI name text
0 <http://dbpedia.org/resource/Digby_M... Digby Morrell digby morrell born 10 october 1979 i...
1 <http://dbpedia.org/resource/Alfred_... Alfred J. Lewy alfred j lewy aka sandy lewy graduat...
2 <http://dbpedia.org/resource/Harpdog... Harpdog Brown harpdog brown is a singer and harmon...
3 <http://dbpedia.org/resource/Franz_R... Franz Rottensteiner franz rottensteiner born in waidmann...
4 <http://dbpedia.org/resource/G-Enka> G-Enka henry krvits born 30 december 1974 i...
In [20]: df.shape
Out[20]: (59071, 3)
</code></pre>
| 3 |
2016-09-25T11:21:35Z
|
[
"python",
"python-2.7",
"pandas",
"dataframe"
] |
Pandas: How to open certain files
| 39,686,132 |
<p>I am currently working on the data set from this <a href="https://github.com/smarthi/UnivOfWashington-Machine-Learning/tree/master/MLFoundations/week4/people_wiki.gl" rel="nofollow">link</a>, but I am unable to read these files with Pandas. Has anyone tried working with such files?</p>
<p>I am trying the following:</p>
<pre><code>import pandas as pd
df = pd.read_csv("m_4549381c276b46c6.0000")
</code></pre>
<p>But I get the following error</p>
<pre><code>Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
</code></pre>
| 2 |
2016-09-25T11:05:21Z
| 39,686,415 |
<p>Just clarifying on the answer by <a href="http://stackoverflow.com/a/39686244/6842947">MaxU</a>, you are trying to read it the wrong way. It is a raw file and its formatting is contained in the other files which are there in the same folder in that <a href="https://github.com/smarthi/UnivOfWashington-Machine-Learning/tree/master/MLFoundations/week4/people_wiki.gl" rel="nofollow">link</a>. Pandas requires you to know the encoded format of the file beforehand (i.e delimiters, number of columns etc). It cannot be used as a magic wand to read any file without being aware of it.</p>
<p>The IPython notebook just outside the folder in your <a href="https://github.com/smarthi/UnivOfWashington-Machine-Learning/tree/master/MLFoundations/week4/people_wiki.gl" rel="nofollow">link</a>, shows exactly how to read that data. <a href="http://stackoverflow.com/a/39686244/6842947">MaxU</a> has correctly mentioned that the specific file in question is just a part of the SFrame which is a structure of GraphLab framework. Hence, you are trying to extract meaningful data just from a part of the whole and hence you can't do that meaningfully.</p>
<p>You can however read the graphlab file and convert it into a Pandas dataframe. For details see <a href="http://stackoverflow.com/questions/33461953/opening-folder-with-gl-extension-in-python-or-pandas">here</a>.</p>
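<p>A minimal sketch of that conversion, assuming GraphLab Create (or the standalone <code>sframe</code> package, which exposes the same <code>SFrame</code> API) is installed and you point it at the whole downloaded <code>.gl</code> directory:</p>
<pre><code>import graphlab            # or: import sframe as graphlab

sf = graphlab.SFrame('people_wiki.gl/')   # path to the directory, not a single part file
df = sf.to_dataframe()                    # convert to a pandas DataFrame
</code></pre>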
| 2 |
2016-09-25T11:42:53Z
|
[
"python",
"python-2.7",
"pandas",
"dataframe"
] |
Python text game - Can't exit a while loop
| 39,686,135 |
<p>This is the main code:</p>
<pre><code>import MainMod
print("Welcome!")
print("Note: In this games you use wasd+enter to move!\nYou press 1 key and then enter,if you press multiple kets it wont work.\nYou will always move by 5 meters.")
CurrentRoom = 1
#Limits work this way!1st and 2nd number are X values(1st is <---- limit,2nd is ---> limit)
#3rd and 4th are y values(1st is v limit,2nd is ^ limit)
# X and Y are coordinates; 0,0 is the starting point of every room
while True:
if CurrentRoom ==1:
print("This is room 1")
MainMod.roomlimits = [-15 , 15, -15 , 15]
MainMod.doorloc1 = [-15,10,15]
MainMod.doorloc2 = [15,-2,2]
while CurrentRoom == 1:
MainMod.MainLel()
if MainMod.door1 == 1:
print("DAMN SON")
CurrentRoom = 2
break
elif MainMod.door2 == 1:
print("Plz no")
CurrentRoom = 3
break
while CurrentRoom == 2:
MainMod.MainLel()
</code></pre>
<p>and this is the MainMod module:</p>
<pre><code>x = 0
y = 0
roomlimits = 0
doorloc1=0
doorloc2=0
door1 = 0
door2 = 0
direct = 0
def MainLel():
global direct
movementinput()
movement(direct)
doorcheck()
def movement(dir):
global x,y,roomlimits,door1,door2,doorloc1,doorloc2
if dir == "w":
y += 5
if y > roomlimits[3]:
y = roomlimits[3]
print("Youre current coordinates are x:",x," y:",y)
elif dir == "s":
y -= 5
if y < roomlimits[2]:
y = roomlimits[2]
print("Youre current coordinates are x:",x," y:",y)
elif dir == "d":
x += 5
if x > roomlimits[1]:
x = roomlimits[1]
print("Youre current coordinates are x:",x," y:",y)
elif dir == "a":
x -= 5
if x < roomlimits[0]:
x = roomlimits[2]
print("Youre current coordinates are x:",x," y:",y)
def movementinput():
global direct
while True:
direct = input("")
if direct in ("w","a","s","d","W","A","D","S"):
break
else:
print("You failure.")
def doorcheck():
global x,y,doorloc1,doorloc2,door1,door2
if x == doorloc1[0] and doorloc1[1] <= y <= doorloc1[2]:
door1 = 1
elif y == doorloc2[0] and doorloc2[1] <= x <= doorloc2[2]:
door2 = 1
else:
door1,door2 = 0,0
</code></pre>
<p>I'm using a module instead of classes because I don't know how to use classes yet. Anyway, what happens in the program is that if I am in the door location, it simply prints "DAMN SON" and doesn't break out of the room loop. Any help? EDIT NOTE: I added the break statement later on to see if it would help; sadly it didn't. I am also a bit tired, so I'm guessing I made a logic mistake somewhere. Thanks in advance for the help.</p>
<p>Final edit: The code was functional all along, I was just testing it incorrectly! Thanks for the answers, I'll close this question now.</p>
| 0 |
2016-09-25T11:05:48Z
| 39,686,850 |
<p>Since I could not imagine it didn't work, I added two markers (print commands), to room 1 and 2:</p>
<pre><code>while CurrentRoom == 1:
print("one")
mod.MainLel()
</code></pre>
<p>and</p>
<pre><code>while CurrentRoom == 2:
print("two")
mod.MainLel()
</code></pre>
<p>This is what happened:</p>
<pre class="lang-none prettyprint-override"><code>Youre current coordinates are x: -5 y: 15
one
a
Youre current coordinates are x: -10 y: 15
one
a
Youre current coordinates are x: -15 y: 15
DAMN SON
two
a
Youre current coordinates are x: -15 y: 15
two
</code></pre>
<p>It turned out to be working fine. The <code>break</code> is redundant however. The loop will break anyway, since the condition becomes <code>False</code>.</p>
| 1 |
2016-09-25T12:32:38Z
|
[
"python"
] |
SMTP AUTH extension not supported by server in python
| 39,686,141 |
<p>I'm using the following function to send email based on status:</p>
<pre><code>def sendMail(fbase, status):
server = smtplib.SMTP(config["global"]["smtp_server"], config["global"]["smtp_port"])
server.login(config["global"]["smtp_user"],config["global"]["smtp_pass"])
server.ehlo()
server.starttls()
from_addr = config["global"]["smtp_from"]
if status == "Success":
subject = "%s Uploaded sucessfully" % fbase
msg = "\nHi,\n Video file - %s - uploaded successfully \n Thanks \n Online Team" % fbase
to_addr_list = config["global"]["smtp_to_success"]
else:
subject = "%s Failed to upload" % fbase
msg = "\n Hi!\n Failed to upload %s \n Please check the log file immediatly \n Thanks" % fbase
to_addr_list = config["global"]["smtp_to_failed"]
header = 'From: %s\n' % from_addr
header += 'To: %s\n' % ','.join(to_addr_list)
header += 'Subject: %s\n\n' % subject
message = header + msg
server.sendmail(from_addr, to_addr_list, message)
server.quit()
logger.info("Mail send for status: %s" %(status))
</code></pre>
<p>I started getting the following error after the AD admins upgraded the Exchange server:</p>
<pre><code>    raise SMTPException("SMTP AUTH extension not supported by server.")
SMTPException: SMTP AUTH extension not supported by server.
</code></pre>
<p>I added </p>
<pre><code>server.ehlo()
server.starttls()
</code></pre>
<p>and I am still getting the same error.</p>
<p>Any advice here?</p>
| 0 |
2016-09-25T11:07:24Z
| 39,686,285 |
<p>Perform the login step <em>after</em> you've started TLS.</p>
<pre><code>def sendMail(fbase, status):
server = smtplib.SMTP(config["global"]["smtp_server"], config["global"]["smtp_port"])
server.ehlo()
server.starttls()
server.login(config["global"]["smtp_user"],config["global"]["smtp_pass"])
....
</code></pre>
| 0 |
2016-09-25T11:26:29Z
|
[
"python",
"python-2.7"
] |
NetworkX most efficient way to find the longest path in a DAG at start vertex with no errors
| 39,686,213 |
<p>Starting from a certain vertex, how would one find the longest path relative to that vertex? I've been browsing all over and can't find a solution to this problem which actually works for all possible cases of DAGs. Source code in NetworkX would be preferred, but regular Python is fine too. I'm genuinely curious as to why I can't manage to find any proper working example. I do understand it is an NP-type problem, but I would like to know the most efficient way it is done.</p>
| -1 |
2016-09-25T11:16:46Z
| 39,687,051 |
<p>First, I would check if this graph is connected. If so, then the longest path is the path across all nodes. If not, it means this vertex is contained in a connected component. Then I would use <a href="https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html" rel="nofollow"><code>connected_component_subgraphs</code></a> to find the largest component this vertex lies in. After that, the longest path is the path across all nodes in this largest component. </p>
<p>Of course, this works only if you don't allow cycles in your path. </p>
<pre><code>import networkx as nx
G = nx.DiGraph()
G.add_edges_from([(0,1),(0,4),(4,5),(4,6),(5,6),(6,1),(0,2),(2,3),(1,2)])
for path in nx.all_simple_paths(G, source=0, target=3):
print(path)
</code></pre>
<p>The result:</p>
<pre><code>[0, 1, 2, 3]
[0, 2, 3]
[0, 4, 5, 6, 1, 2, 3]
[0, 4, 6, 1, 2, 3]
</code></pre>
<p>The third one is the path you want.</p>
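<p>If what you actually need is the longest simple path starting at a given vertex (to any reachable target), a brute-force but straightforward sketch is to take the longest of all simple paths from that source; note this is exponential in the worst case, which is expected for the general longest-path problem. Recent networkx versions also provide <code>nx.dag_longest_path(G)</code> for the overall longest path in a DAG.</p>
<pre><code>source = 0
paths = (path
         for target in G.nodes() if target != source
         for path in nx.all_simple_paths(G, source, target))
longest = max(paths, key=len)
print(longest)   # [0, 4, 5, 6, 1, 2, 3] for the example graph above
</code></pre>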
| 0 |
2016-09-25T12:52:32Z
|
[
"python",
"python-3.x",
"networkx",
"directed-acyclic-graphs",
"longest-path"
] |
Python program that sends txt file to email
| 39,686,410 |
<p>I've recently created a python keylogger. The code is:</p>
<pre><code>import win32api
import win32console
import win32gui
import pythoncom,pyHook
win=win32console.GetConsoleWindow()
win32gui.ShowWindow(win,0)
def OnKeyboardEvent(event):
if event.Ascii==5:
_exit(1)
if event.Ascii !=0 or 8:
#open output.txt to read current keystrokes
f=open('c:\output.txt','r+')
buffer=f.read()
f.close()
#open output.txt to write current + new keystrokes
f=open('c:\output.txt','w')
keylogs=chr(event.Ascii)
if event.Ascii==13:
keylogs='/n'
buffer+=keylogs
f.write(buffer)
f.close()
# create a hook manager object
hm=pyHook.HookManager()
hm.KeyDown=OnKeyboardEvent
# set the hook
hm.HookKeyboard()
# wait forever
pythoncom.PumpMessages()
</code></pre>
<p>However, I would like the log to be sent to my e-mail. Do you have any idea what I could add to allow this, or of a separate program that would do this?</p>
<p>Thanks in advance </p>
| -3 |
2016-09-25T11:42:27Z
| 39,686,489 |
<p><a href="https://docs.python.org/3/library/email-examples.html" rel="nofollow">The python docs has good documentation of emails in python.</a></p>
<pre><code># Import smtplib for the actual sending function
import smtplib
# Import the email modules we'll need
from email.mime.text import MIMEText
# Open a plain text file for reading. For this example, assume that
# the text file contains only ASCII characters.
with open(textfile) as fp:
# Create a text/plain message
msg = MIMEText(fp.read())
# me == the sender's email address
# you == the recipient's email address
msg['Subject'] = 'The contents of %s' % textfile
msg['From'] = me
msg['To'] = you
# Send the message via our own SMTP server.
s = smtplib.SMTP('localhost')
s.send_message(msg)
s.quit()
</code></pre>
<p>This example has exactly what you are asking for.</p>
| 0 |
2016-09-25T11:52:27Z
|
[
"python",
"keylogger"
] |
What does python return on the leap second
| 39,686,553 |
<p>What does python <code>time</code> and <code>datetime</code> module return on the leap second?</p>
<p>What will I get when we are at <em>23:59:60.5</em> if I call:</p>
<ul>
<li><code>time.time()</code></li>
<li><code>datetime.datetime.utcnow()</code></li>
<li><code>datetime.datetime.now(pytz.utc)</code></li>
</ul>
<p>Also, any difference between py2.7 and py3?</p>
<hr>
<p>Why it is confusing (at least for me):</p>
<p>From the <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">datetime docs</a> I see:</p>
<blockquote>
<p>Unlike the time module, the datetime module does not support leap seconds.</p>
</blockquote>
<p>On the <a href="https://docs.python.org/2/library/time.html#module-time" rel="nofollow">time docs</a> I see there is "support" for leap seconds when parsing with <code>strptime</code>. But there is no comment about <code>time.time()</code>.</p>
<p>I see that using <code>time</code> I get:</p>
<pre><code>>>> time.mktime(time.strptime('2016-06-30T23:59:59', "%Y-%m-%dT%H:%M:%S"))
1467327599.0
>>> time.mktime(time.strptime('2016-06-30T23:59:60', "%Y-%m-%dT%H:%M:%S"))
1467327600.0
>>> time.mktime(time.strptime('2016-07-01T00:00:00', "%Y-%m-%dT%H:%M:%S"))
1467327600.0
</code></pre>
<p>And <code>datetime</code> just blows up:</p>
<pre><code>>>> dt.datetime.strptime('2016-06-30T23:59:60', "%Y-%m-%dT%H:%M:%S")
Traceback (most recent call last):
File "<stdin>", line 1, in &lt;module>
ValueError: second must be in 0..59
</code></pre>
<p>Then what will I get at that exact time (in the middle of the leap second)?</p>
<p>I have read about rubber times, clocks slowing down, repeating seconds, and all kind of crazy ideas, but what should I expect on python?</p>
<p>Note: In case you wonder if I don't have anything better to do than care about it, a leap second is approaching!!!!</p>
| 4 |
2016-09-25T11:59:26Z
| 39,686,629 |
<p>Leap seconds are occasionally <em>manually</em> scheduled. Currently, computer clocks have no facility to honour leap seconds; there is no standard to tell them up-front to insert one. Instead, computer clocks periodically re-synch their time keeping via the NTP protocol and adjust automatically after the leap second has been inserted.</p>
<p>Next, computer clocks usually report the time as <em>seconds since the epoch</em>. It'd be up to the <code>datetime</code> module to adjust its accounting when converting that second count to include leap seconds. It doesn't do this at present. <code>time.time()</code> will just report a time count based on the seconds-since-the-epoch.</p>
<p>So, <em>nothing different</em> will happen when the leap second is officially in effect, other than that your computer clock will be 1 second off for a little while.</p>
<p>The issues with <code>datetime</code> only cover <em>representing</em> a leap second timestamp, which it can't. It won't be asked to do so anyway.</p>
| 2 |
2016-09-25T12:08:58Z
|
[
"python",
"datetime",
"time",
"leap-second"
] |
Add elements properly to a ListStore in python
| 39,686,558 |
<p>I have a textfile with the current data:</p>
<pre><code>Firefox 2002 C++
Eclipse 2004 Java
Pitivi 2004 Python
Netbeans 1996 Java
Chrome 2008 C++
Filezilla 2001 C++
Bazaar 2005 Python
Git 2005 C
Linux Kernel 1991 C
GCC 1987 C
Frostwire 2004 Java
</code></pre>
<p>I want to read it from my python program and add it to a ListStore to have something like this</p>
<p><a href="http://i.stack.imgur.com/fqnvM.png" rel="nofollow">http://i.stack.imgur.com/fqnvM.png</a></p>
<p>This is my code:</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
filename = 'data.txt'
with open(filename) as f:
data = f.readlines()
class TreeViewFilterWindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="Treeview Filter Demo")
self.grid = Gtk.Grid()
self.grid.set_column_homogeneous(True)
self.grid.set_row_homogeneous(True)
self.add(self.grid)
self.software_liststore = Gtk.ListStore(str, str, str)
self.software_liststore.append(data[0].split())
self.software_liststore.append(data[10].split())
i=0
while (i<(len(data))):
print(data[i].split())
self.software_liststore.append(data[i].split())
i=i+1
self.treeview = Gtk.TreeView(model=self.software_liststore)
for i, column_title in enumerate(["Software", "Release Year", "Programming Language"]):
renderer = Gtk.CellRendererText()
column = Gtk.TreeViewColumn(column_title, renderer, text=i)
self.treeview.append_column(column)
self.scrollable_treelist = Gtk.ScrolledWindow()
self.scrollable_treelist.set_vexpand(True)
self.grid.attach(self.scrollable_treelist, 0, 0, 8, 10)
self.scrollable_treelist.add(self.treeview)
self.show_all()
win = TreeViewFilterWindow()
win.connect("delete-event", Gtk.main_quit)
win.show_all()
Gtk.main()
</code></pre>
<p>The strange thing is that there is no problem doing this:</p>
<pre><code>self.software_liststore.append(data[0].split())
self.software_liststore.append(data[10].split())
</code></pre>
<p>But when I try to use the while loop to insert data, it says:</p>
<pre><code>Traceback (most recent call last):
File "tut20.py", line 83, in <module>
win = TreeViewFilterWindow()
File "tut20.py", line 68, in __init__
self.software_liststore.append(data[i].split())
File "/usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py", line 956, in append
return self._do_insert(-1, row)
File "/usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py", line 947, in _do_insert
row, columns = self._convert_row(row)
File "/usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py", line 849, in _convert_row
raise ValueError('row sequence has the incorrect number of elements')
ValueError: row sequence has the incorrect number of elements
</code></pre>
<p>What am I doing wrong?</p>
| 0 |
2016-09-25T12:01:06Z
| 39,813,485 |
<p>Ok, line 9 in data.txt is:</p>
<pre><code>Linux Kernel 1991 C
</code></pre>
<p>which contains 4 elements. Try:</p>
<pre><code>Linux_Kernel 1991 C
</code></pre>
<p>or simply:</p>
<pre><code>Linux 1991 C
</code></pre>
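<p>If you also want to keep multi-word names like "Linux Kernel", one hedged option is to split each line from the right into at most three fields, so everything before the year stays together. A sketch of the loop in the question's <code>__init__</code>:</p>
<pre><code>for line in data:
    name, year, language = line.rsplit(None, 2)
    self.software_liststore.append([name, year, language])
</code></pre>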
| 0 |
2016-10-02T03:08:32Z
|
[
"python",
"gtk",
"pygobject"
] |
Python: Converting multiple files from xls to csv
| 39,686,591 |
<p>I'm trying to write a script in Python 2.7 that would convert all .xls and .xlsx files in the current directory into .csv with preserving their original file names.</p>
<p>With help from other similar questions here (sadly, not sure who to credit for the pieces of code I borrowed), here's what I've got so far:</p>
<pre><code>import xlrd
import csv
import os
def csv_from_excel(xlfile):
wb = xlrd.open_workbook(xlfile)
sh = wb.sheet_by_index(0)
    your_csv_file = open(os.path.splitext(xlfile)[0], 'wb')
wr = csv.writer(your_csv_file, dialect='excel', quoting=csv.QUOTE_ALL)
for rownum in xrange(sh.nrows):
wr.writerow(sh.row_values(rownum))
your_csv_file.close()
for file in os.listdir(os.getcwd()):
if file.lower().endswith(('.xls','.xlsx')):
csv_from_excel(file)
</code></pre>
<p>I have two questions:</p>
<p>1) I can't figure out why the program when run, only converts one file and doesn't iterate through all files in the current directory. </p>
<p>2) I can't figure out how to keep the original filename through the conversion. I.e. that an output file has the same name as an input.</p>
<p>Thank you</p>
| 0 |
2016-09-25T12:04:39Z
| 39,686,752 |
<p>One possible solution would be using <code>glob</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/io.html" rel="nofollow"><code>pandas</code></a>.</p>
<pre><code>from glob import glob
import pandas as pd

excel_files = glob('*.xls*')                 # .xls and .xlsx files in the current directory
for excel in excel_files:
    out = excel.rsplit('.', 1)[0] + '.csv'   # keep the original name, swap the extension
    df = pd.read_excel(excel, 'Sheet1')
    df.to_csv(out, index=False)
</code></pre>
| 1 |
2016-09-25T12:22:19Z
|
[
"python",
"excel",
"csv"
] |
Merging the 2 lists into 1 which are values in a Dictionary
| 39,686,642 |
<p>Hello all, I am very new to programming.</p>
<p>I have a dict, <code>dictC</code>:</p>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5], 'b': [5, 6, 7, 8, 9, 10]}
</code></pre>
<p>I want my output to be:</p>
<pre><code>mergedlist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>Could anyone help me with the logic to define a function?</p>
<p>I have tried something like this:</p>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5, 6, 7], 'b': [3, 7, 8, 9, 10]}

result = MergeDictValues(dictC)
print result
</code></pre>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5, 6, 7], 'b': [3, 7, 8, 9, 10]}
dictc = (dictC.values())

dictc.extend(dictc)
print dictc
</code></pre>
<pre><code>def MergeDictValues(inputDict):
    resultList = []
    mergedList =   # I am missing my logic here
    mergedList.extend( )
    return resultList()

MergeDictValues(dictc)
resultList = MergeDictValues(dictc)
print resultList
</code></pre>
| 0 |
2016-09-25T12:10:22Z
| 39,686,689 |
<pre><code>mergedlist = dictC['a'] + dictC['b']
</code></pre>
<p>Edit: wait, note that there are repeated elements in the lists (the last element of list 1 is 5, and so is the first element of list 2). Is this an invariant feature of the data? More info is required, I think.</p>
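<p>If the keys are not known in advance, a small hedged sketch that concatenates whatever value lists the dict contains (duplicates are kept, as above) is:</p>
<pre><code>mergedlist = sum(dictC.values(), [])
</code></pre>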
| 0 |
2016-09-25T12:15:29Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
Merging the 2 lists into 1 which are values in a Dictionary
| 39,686,642 |
<p>Hello all, I am very new to programming.</p>
<p>I have a dict, <code>dictC</code>:</p>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5], 'b': [5, 6, 7, 8, 9, 10]}
</code></pre>
<p>I want my output to be:</p>
<pre><code>mergedlist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>Could anyone help me with the logic to define a function?</p>
<p>I have tried something like this:</p>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5, 6, 7], 'b': [3, 7, 8, 9, 10]}

result = MergeDictValues(dictC)
print result
</code></pre>
<pre><code>dictC = {'a': [1, 2, 3, 4, 5, 6, 7], 'b': [3, 7, 8, 9, 10]}
dictc = (dictC.values())

dictc.extend(dictc)
print dictc
</code></pre>
<pre><code>def MergeDictValues(inputDict):
    resultList = []
    mergedList =   # I am missing my logic here
    mergedList.extend( )
    return resultList()

MergeDictValues(dictc)
resultList = MergeDictValues(dictc)
print resultList
</code></pre>
| 0 |
2016-09-25T12:10:22Z
| 39,686,855 |
<pre><code>def MergeDictValues(dictC):
return [x for y in dictC.values() for x in y]
</code></pre>
<p>Input:</p>
<pre><code>dictC = {'a':[1,2,3,4,5],'b':[5,6,7,8,9,10]}
MergeDictValues(dictC)
</code></pre>
<p>Output:</p>
<pre><code>[1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10]
</code></pre>
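<p>A hedged alternative, if you prefer a standard-library helper over the nested comprehension, is <code>itertools.chain</code>:</p>
<pre><code>from itertools import chain

def MergeDictValues(dictC):
    return list(chain.from_iterable(dictC.values()))
</code></pre>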
| 0 |
2016-09-25T12:33:06Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
How can I have a spinner while waiting for socket.accept?
| 39,686,644 |
<p>I have a spinner right here:</p>
<pre><code>spinner = itertools.cycle(['-', '/', '|', '\\'])
while True:
sys.stdout.write(spinner.next())
sys.stdout.flush()
sys.stdout.write('\b')
time.sleep(0.1)
</code></pre>
<p>I'm stuck at my code</p>
<pre><code>server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((target,port))
server.listen(10)
print "Waiting for a client!"
client, addr = server.accept()
</code></pre>
<p>I don't know where or what condition I must use but I know how to implement a spinner. I want my <code>"Waiting for a client"</code> to make it clear for the user that it is waiting. I thought of a spinner.</p>
| 0 |
2016-09-25T12:10:36Z
| 39,688,037 |
<p>You should set a timeout before calling <code>accept()</code> in a loop. See here: <a href="http://stackoverflow.com/questions/7354476/python-socket-object-accept-time-out">python socket object accept time out</a> for details on how, but the idea is to try to accept a new connection for say 0.1 seconds, then bump the spinner, then repeat.</p>
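<p>A minimal sketch of that idea applied to the code in the question (assuming <code>server</code> is already bound and listening; Python 2 style to match the question):</p>
<pre><code>import itertools
import socket
import sys

server.settimeout(0.1)                 # accept() now gives up after 0.1 s
spinner = itertools.cycle(['-', '/', '|', '\\'])
print "Waiting for a client!"
while True:
    try:
        client, addr = server.accept()
        break                          # got a connection, stop spinning
    except socket.timeout:
        sys.stdout.write(spinner.next())
        sys.stdout.flush()
        sys.stdout.write('\b')
client.setblocking(1)                  # make sure the accepted socket blocks normally
</code></pre>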
| 0 |
2016-09-25T14:35:58Z
|
[
"python",
"python-2.7"
] |