title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
python3 print to string | 39,823,303 | <p>Using Python 3, I have a console application that I am porting to a GUI. The code has a bunch of print statements, something like this:</p>
<p><code>print(' ', 'p', ' ', sep='', end='')</code></p>
<p>(In actuality, the positional parameters are returns from functions, and there may be more or fewer than 3, like this:</p>
<p><code>print(f1(), f2(), f3(), sep='', end='')</code></p>
<p>I would like to convert these calls into something like:</p>
<p><code>GuiPrintLine(' ', 'p', ' ', sep='', end='')</code></p>
<p>where each line is eventually rendered using the GUI framework.</p>
<p>This is easy to do if I can convert the arguments to a string. I'd like to format the parameter list as a single string that print would normally output with something like this:</p>
<pre><code>def GuiPrintLine(*args, **kwargs):
    s = text.format(*args, **kwargs)
    # ... then display the string on the gui ...
</code></pre>
<p>However this requires a format string (text). Is there a simple way to do this that automatically produces the same output as the print function? I realize I could emulate print by concatenating all the args with sep and end, but I would prefer to use a built-in solution if there is one.</p>
<p>using print and redirecting sysout is not attractive since it requires modifying the global state of the app and sysout might be used for other diagnostics at the same time.</p>
<p>It seems like this should be trivial in Python, so maybe I'm just missing something obvious.</p>
<p>Thanks for any help!</p>
| 0 | 2016-10-03T01:06:29Z | 39,823,534 | <p>Found the answer via io.StringIO. With this I don't have to emulate print's handling of sep/end or even check for their existence.</p>
<pre><code>import io
def print_to_string(*args, **kwargs):
    output = io.StringIO()
    print(*args, file=output, **kwargs)
    contents = output.getvalue()
    output.close()
    return contents
</code></pre>
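<p>To connect this back to the question, the GUI wrapper can sit on top of <code>print_to_string</code>. A minimal, self-contained sketch — <code>rendered_lines</code> and <code>gui_print_line</code> are stand-ins I made up for the real GUI widget and the asker's <code>GuiPrintLine</code>:</p>

```python
import io

def print_to_string(*args, **kwargs):
    # Capture exactly what print() would emit, honoring sep and end.
    output = io.StringIO()
    print(*args, file=output, **kwargs)
    contents = output.getvalue()
    output.close()
    return contents

rendered_lines = []  # stand-in for the real GUI widget

def gui_print_line(*args, **kwargs):
    # Same call signature as print(); the formatted text goes to the GUI.
    rendered_lines.append(print_to_string(*args, **kwargs))

# Same output as print(' ', 'p', ' ', sep='', end='')
gui_print_line(' ', 'p', ' ', sep='', end='')
```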
| 0 | 2016-10-03T01:46:21Z | [
"python",
"string",
"python-3.x",
"stringio"
]
|
How to change rows to columns by date in pandas? | 39,823,323 | <p>I have a csv data file in following format, I want to change the rows to columns , but this conversion needs to be done per stock and per date.</p>
<pre><code>Ticker,Indicator,Date,Value
STOCK A,ACCRUALS,3/31/2005,-10.44
STOCK A,ACCRUALS,3/31/2006,0.44
STOCK A,AE,3/31/2005,3.97
STOCK A,AE,3/31/2006,3.67
STOCK A,ASETTO,3/31/2005,0.762
STOCK A,ASETTO,3/31/2006,0.9099
</code></pre>
<p>Output</p>
<pre><code>Ticker,Date,ACCRUALS,AE,ASETTO
STOCK A,3/31/2005,-10.44,3.97,0.762
STOCK A,3/31/2006,0.44,3.67,0.9099
</code></pre>
| 0 | 2016-10-03T01:09:39Z | 39,823,373 | <pre><code>Ticker,Indicator,Date,Value
STOCK A,ACCRUALS,3/31/2005,-10.44
STOCK A,ACCRUALS,3/31/2006,0.44
STOCK A,AE,3/31/2005,3.97
STOCK A,AE,3/31/2006,3.67
STOCK A,ASETTO,3/31/2005,0.762
STOCK A,ASETTO,3/31/2006,0.9099
</code></pre>
<p>Let's just say your data are in a dataframe called <code>df</code>:</p>
<pre><code>>>> import pandas as pd
>>> df = df.set_index(df['Date'])
>>> for ind in set(df['Indicator']):
... filtered_df = df[df['Indicator'] == ind]
... df[ind] = filtered_df['Value']
...
>>> cols_to_keep = ['Ticker', 'Date'] + list(set(df['Indicator']))
>>> trimmed_df = df[cols_to_keep]
>>> trimmed_df = trimmed_df.drop_duplicates()
>>> trimmed_df
Ticker Date ACCRUALS AE ASETTO
Date
3/31/2005 STOCK A 3/31/2005 -10.44 3.97 0.7620
3/31/2006 STOCK A 3/31/2006 0.44 3.67 0.9099
</code></pre>
<p>That should take each unique value for <code>df['Indicator']</code> and make a new column out of the <code>df['Value']</code> column for that particular indicator.</p>
<p>You can use <code>reset_index()</code> to set the dataframe's indices back to zero:</p>
<pre><code>>>> trimmed_df.reset_index(drop = True)
</code></pre>
<p>And, instead of using <code>cols_to_keep</code>, you can do:</p>
<pre><code>>>> trimmed_df.drop("Indicator", axis = 1, inplace = True)
</code></pre>
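<p>As an aside (not part of the original answer), the same reshape can be done in one step with <code>pivot_table</code>; a sketch, assuming the CSV from the question. The indicator columns come out in alphabetical order here, which happens to match the desired output:</p>

```python
import io
import pandas as pd

csv_text = """Ticker,Indicator,Date,Value
STOCK A,ACCRUALS,3/31/2005,-10.44
STOCK A,ACCRUALS,3/31/2006,0.44
STOCK A,AE,3/31/2005,3.97
STOCK A,AE,3/31/2006,3.67
STOCK A,ASETTO,3/31/2005,0.762
STOCK A,ASETTO,3/31/2006,0.9099
"""

df = pd.read_csv(io.StringIO(csv_text))
# One row per (Ticker, Date), one column per distinct Indicator value
wide = df.pivot_table(index=["Ticker", "Date"], columns="Indicator",
                      values="Value").reset_index()
wide.columns.name = None  # drop the leftover "Indicator" axis label
print(wide)
```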
| 0 | 2016-10-03T01:21:06Z | [
"python",
"pandas"
]
|
Asking user to plot two columns in a csv file without him typing the entire column name? | 39,823,350 | <p>My current code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import pandas as pd
df = pd.DataFrame.from_csv('data.csv',index_col=False)
while True:
    print (df)
    print ('Select X-Axis')
    xaxis = input()
    print ('Select Y-Axis')
    yaxis = input()
    break
df.plot(x= xaxis, y= yaxis)</code></pre>
</div>
</div>
</p>
<p>My current code works only if the user types out the entire column name. However, some of the column names are long. I want to make it easier for my user, so that it works if he types out only part of the column name. Or, if I could somehow assign each column name a number without manually going through each column, I could show a list to the user and he could type the number of the column he would like to plot.</p>
| -1 | 2016-10-03T01:16:57Z | 39,823,417 | <p>One way to do this is to use pandas' <code>filter</code> method:</p>
<pre><code>df.filter(regex=(yaxis))
</code></pre>
<p>It will display all the columns whose names match the substring in <code>yaxis</code>.</p>
<p>Here is an example.</p>
<pre><code>A = { 'Name': [ 'John', 'Andrew', 'Smith'] , 'Age' : [20,23,42]}
A
Out[19]: {'Age': [20, 23, 42], 'Name': ['John', 'Andrew', 'Smith']}
df = pd.DataFrame(A)
df
Out[21]:
Age Name
0 20 John
1 23 Andrew
2 42 Smith
df.filter(regex=('Na'))
Out[22]:
Name
0 John
1 Andrew
2 Smith
df.filter(regex=('e'))
Out[23]:
Age Name
0 20 John
1 23 Andrew
2 42 Smith
</code></pre>
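<p>For the asker's other idea — letting the user pick a column by number instead of typing its name — a hedged sketch (separate from the <code>filter</code> approach above; the sample frame and the hard-coded <code>choice</code> are stand-ins for real user input):</p>

```python
import pandas as pd

df = pd.DataFrame({"Age": [20, 23, 42], "Name": ["John", "Andrew", "Smith"]})

# Show each column with an index so the user can type a number instead
for i, col in enumerate(df.columns):
    print(i, col)

choice = "1"                    # stand-in for input()
xaxis = df.columns[int(choice)] # map the number back to the column name
print(xaxis)
```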
| 0 | 2016-10-03T01:27:37Z | [
"python",
"pandas"
]
|
How to return different types of arrays? | 39,823,371 | <p>The high-level problem I'm having in C# is to make a single copy of a data structure that describes a robot control network packet (EtherCAT), and then to use that single data structure to extract data from a collection of packets.</p>
<p>The problem arises when attempting to use the data from the accumulated packets, as there is implicit duplication of the data structure with casting or calling functions that specify the type. To assist in explaining the goal, I've written a python program that does what I want and would like help to determine if it's possible to do this in C#.</p>
<p>The challenge for me in C# is the single function "get_vector", which returns a homogeneous collection of a variable numerical type. This type is defined in the packet structure, and in python it can be used without re-defining the data structure.</p>
<pre><code>import struct
# description of the complete packet
class PACKET_STRUCTURE :
    # given a field name and a list of packets, return a vector
    # this is the function that seems impossible in C# because the type of what is returned changes
    def get_vector(self, name, packet_list):
        # locate the packet definition by the name of the vector
        result = [x for x in self.packet_def if x.field_name == name]
        # without error checking, pos contains the location of the definition
        pos = result[0].position;
        # decode ALL the packets in the (encoded) packet list - returning a list of [time_sec, status, position]
        # in C# this step is similar to using Marshal.PtrToStructure to transform from byte[] to a struct
        decoded_packet_list = [struct.unpack(self.fmt_str, packet) for packet in packet_list];
        # from the list of decoded_packets, extract the desired field into its own list
        vector = [decode[pos] for decode in decoded_packet_list]
        # in C# this is similar to:
        # var CS_vector = decode_packet_list.Select(item => item.field_name).ToArray();
        # so far in C# there is no duplication of the packet structure.
        # but after this point, assume I cast CS_vector to object and return it -
        # to use the object, I've not figured out how to avoid casting it to some type of array
        # eg double[], int32[]
        return vector

    def __init__(self):
        self.packet_def = list();
        self.fmt_str = "<";
        self.cnt = 0;

    # add description of single item to the structure
    def add(self, struct_def) :
        struct_def.position = len(self.packet_def);
        self.packet_def.append(struct_def);
        self.fmt_str += struct_def.type;

    # create a simple packet based on a counter based on the defined structure
    def make_packet(self):
        vals = [self.cnt*10+x for x in range(0, len(self.packet_def))];
        self.cnt += 1;
        pk = apply(struct.pack, [self.fmt_str] + vals)
        # print len(pk), ["%c" % x for x in pk]
        return pk

    def get_names(self):
        return [packet_items.field_name for packet_items in self.packet_def];

# the description of a single field within the packet
class PACKET_ITEM :
    def __init__(self, field_name, type):
        self.field_name = field_name
        self.type = type;
        # self.offset = 0;
        self.position = 0;

if __name__ == "__main__" :
    INT32 = "l";
    UINT16 = "H";
    FLOAT = "f";

    packet_def = PACKET_STRUCTURE();

    # create an example packet structure - which is arbitrary and could be anything - it could even be read from a file
    # this definition is the ONLY definition of the packet structure
    # changes here require NO changes elsewhere in the program
    packet_def.add(PACKET_ITEM("time_sec", FLOAT))
    packet_def.add(PACKET_ITEM ("status",UINT16))
    packet_def.add(PACKET_ITEM ("position",INT32))

    # create a list of packets
    pk_list = list()
    for cnt in range(0,10) :
        pk_list.append(packet_def.make_packet());

    ################################
    # get the vectors without replicating the structure
    # eg no int32[] position = (int32[])get_vector()
    name_list = packet_def.get_names();
    for name in name_list :
        vector = packet_def.get_vector(name, pk_list);
        print name, vector
</code></pre>
| -1 | 2016-10-03T01:20:26Z | 39,901,281 | <p>The answer is to store the arrays in a collection of type <code>List&lt;dynamic&gt;</code>.</p>
<p>The return type of the function that returns elements from the collection should also be <code>dynamic</code>.</p>
<p>Here is the <a href="http://stackoverflow.com/a/39901046/4462371">more complete answer</a> to my <a href="http://stackoverflow.com/q/39798481/4462371">misunderstood question</a>, which this one attempted to clarify.</p>
| 0 | 2016-10-06T16:32:08Z | [
"c#",
"python",
"arrays",
"data-structures",
"strong-typing"
]
|
Callback URL error when setting up webhook for messenger | 39,823,415 | <p>I'm trying to follow this <a href="https://blog.hartleybrody.com/fb-messenger-bot/" rel="nofollow">tutorial</a> to set a chatbot for messenger. I'm stuck on the webhook setup. I added the page token and verify token to heroku, but when I try to add the heroku URL as the callback URL I get </p>
<blockquote>
<p>The URL couldn't be validated. Callback verification failed with the
following errors: HTTP Status Code = 403; HTTP Message = FORBIDDEN</p>
</blockquote>
| -1 | 2016-10-03T01:27:23Z | 39,842,513 | <p>The problem was the <code>PAGE_ACCESS_TOKEN</code> and <code>VERIFY_TOKEN</code> were not set. The tutorial said to use the command <code>heroku config:add</code> the correct command should be <code>heroku config:set</code>. </p>
| 0 | 2016-10-04T00:35:44Z | [
"python",
"facebook",
"bots"
]
|
Getting PostgreSQL percent_rank and scipy.stats.percentileofscore results to match | 39,823,470 | <p>I'm trying to QAQC the results of calculations that are done in a PostgreSQL database, using a python script to read in the inputs to the calculation and echo the calculation steps and compare the final results of the python script against the results from the PostgreSQL calculation. </p>
<p>The calculations in the PostgreSQL database use the <a href="https://www.postgresql.org/docs/current/static/functions-aggregate.html" rel="nofollow">percent_rank function</a>, returning the percentile rank (from 0 to 1) of a single value in a list of values. In the python script I am using the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.percentileofscore.html#scipy-stats-percentileofscore" rel="nofollow">Scipy percentileofscore function.</a> </p>
<p>So, here's the question: I can't get the results to match, and I am wondering if anyone knows what settings I should use in the Scipy percentileofscore function to match the PostgreSQL percent_rank function.</p>
| 1 | 2016-10-03T01:35:44Z | 39,824,105 | <p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html" rel="nofollow"><code>scipy.stats.rankdata</code></a>. The following example reproduces the result shown at <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_WF_PERCENT_RANK.html" rel="nofollow">http://docs.aws.amazon.com/redshift/latest/dg/r_WF_PERCENT_RANK.html</a>:</p>
<pre><code>In [12]: import numpy as np
In [13]: from scipy.stats import rankdata
In [14]: values = np.array([15, 20, 20, 20, 30, 30, 40])
</code></pre>
<p><code>rankdata(values, method='min')</code> gives the desired rank:</p>
<pre><code>In [15]: rank = rankdata(values, method='min')
In [16]: rank
Out[16]: array([1, 2, 2, 2, 5, 5, 7])
</code></pre>
<p>Then a basic calculation gives the equivalent of <code>percent_rank</code>:</p>
<pre><code>In [17]: (rank - 1) / (len(values) - 1)
Out[17]:
array([ 0. , 0.16666667, 0.16666667, 0.16666667, 0.66666667,
0.66666667, 1. ])
</code></pre>
<p>(I'm using Python 3.5. In Python 2, use something like <code>(rank - 1) / float(len(values) - 1)</code>.)</p>
<hr>
<p>You can use <code>percentileofscore</code>, but:</p>
<ul>
<li>You have to use the argument <code>kind='strict'</code>.</li>
<li>You have to scale the result by <code>n/(n-1)</code>, where <code>n</code> is the number of values.</li>
<li>You have to divide by 100 to convert from a true percentage to a fraction between 0 and 1.</li>
<li><code>percentileofscore</code> expects its second argument to be a scalar, so you have to use a loop to compute the result separately for each value.</li>
</ul>
<p>Here's an example using the same values as above:</p>
<pre><code>In [87]: import numpy as np
In [88]: from scipy.stats import percentileofscore
In [89]: values = np.array([15, 20, 20, 20, 30, 30, 40])
In [90]: n = len(values)
</code></pre>
<p>Here I use a list comprehension to generate the result:</p>
<pre><code>In [91]: [n*percentileofscore(values, val, kind='strict')/100/(n-1) for val in values]
Out[91]:
[0.0,
0.16666666666666666,
0.16666666666666666,
0.16666666666666666,
0.66666666666666663,
0.66666666666666663,
1.0]
</code></pre>
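<p>As a cross-check, the min-rank version of <code>percent_rank</code> can also be computed in pure Python (my own sketch, independent of both PostgreSQL and SciPy):</p>

```python
def percent_rank(values):
    # Fraction of values that are strictly smaller than each element,
    # which matches (rank_min - 1) / (n - 1).
    n = len(values)
    return [sum(v < x for v in values) / (n - 1) for x in values]

print(percent_rank([15, 20, 20, 20, 30, 30, 40]))
```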
| 2 | 2016-10-03T03:12:36Z | [
"python",
"postgresql",
"scipy",
"rank",
"percentile"
]
|
I can not get my loop to end | 39,823,570 | <p>I am trying to create a higher-or-lower game in Python but can't get my loop to end. It just keeps going. My code looks like this:</p>
<pre><code>a = 0
def ask(b, d, p):
    global a
    while a < d:
        global question
        question = int(input())
        if question < b:
            print"bigger"
        elif question > b:
            print"smaller"
        else:
            print p
            break
        del question
        a += 1
    if d1 == p1:
        print "wow. you lost. shame on you."
        time.sleep(3)
        quit()
</code></pre>
<p>If anybody can tell me what I did wrong, that would be great.</p>
| -4 | 2016-10-03T01:51:18Z | 39,823,913 | <p>Without knowing the rest of the code, here is what I can offer.</p>
<p>In the code provided you have <code>d1 == p1</code>, but those names are never assigned or changed in this scope, so the comparison will never change. Your loop cannot progress as intended because the variables inside it do not control it.</p>
<p>edit: I'm sorry, you are correct @Evert. If <code>a</code> ever goes above <code>d</code> the loop will end, so it does terminate eventually — but likely not when intended, i.e. when the player guesses correctly. The check should be something like <code>if question == answer</code>.</p>
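<p>A testable sketch of the intended game loop — guesses are passed in as a list instead of read from <code>input()</code>, and the names are my own, not the asker's:</p>

```python
def ask(answer, guesses):
    """Return True as soon as a guess matches, else False when guesses run out."""
    for guess in guesses:
        if guess < answer:
            print("bigger")
        elif guess > answer:
            print("smaller")
        else:
            print("you win")
            return True
    print("wow. you lost. shame on you.")
    return False
```

Because the guesses are a plain list, the loop's exit conditions can be exercised directly, without a console.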
| 0 | 2016-10-03T02:42:14Z | [
"python",
"python-2.7"
]
|
List comprehensions Conversion | 39,823,593 | <p>I started learning Python recently and I am facing difficulty converting the below piece of code into a list comprehension:</p>
<pre><code> list = [] #An empty List
for key,value in defaultDict.items():  # iterate through the default dict
    for i in defaultDict[key]:  # iterate through the list in the defaultDict
        if i not in list:  # If the item in the list is not present in the main list
            list.append(i)  # append it
</code></pre>
<p>Is it possible for me to even do it? Any help with this is much appreciated.</p>
| 1 | 2016-10-03T01:54:10Z | 39,823,607 | <p>Very straightforward: use a nested list comprehension to get all <code>i</code>s and a set to remove duplicates.</p>
<pre><code>list(set([item for __, value in defaultDict.items() for item in value]))
</code></pre>
<p>Let's break it down:</p>
<ul>
<li><code>[item for __, value in defaultDict.items() for item in value]</code> is a <a href="http://stackoverflow.com/questions/18072759/python-nested-list-comprehension">nested list comprehension</a>.</li>
<li><code>set(...)</code> will remove all duplicates - the equivalent of <code>if i not in list: list.append(i)</code> logic you have</li>
<li><code>list(set(...))</code> will convert the set back to a list for you.</li>
</ul>
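<p>Since the keys are never used, iterating over <code>defaultDict.values()</code> gives a slightly simpler variant (a sketch with made-up sample data; note that, as above, a set does not preserve the original order):</p>

```python
defaultDict = {"a": [1, 2, 3], "b": [2, 3, 4]}

# Set comprehension over the value lists, then back to a list
result = list({item for values in defaultDict.values() for item in values})
print(sorted(result))
```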
| 4 | 2016-10-03T01:56:00Z | [
"python",
"list",
"list-comprehension",
"defaultdict"
]
|
creating sum of odd indexes python | 39,823,625 | <p>I'm trying to create a function equal to the sum of every other digit in a list. For example, if the list is [0,1,2,3,4,5], the function should equal 5+3+1. How could I do this? My knowledge of Python does not extend much farther than while and for loops. Thanks. </p>
| 1 | 2016-10-03T01:58:31Z | 39,823,637 | <p>Here is a simple one-liner:</p>
<pre><code>In [37]: L
Out[37]: [0, 1, 2, 3, 4, 5]
In [38]: sum(L[1::2])
Out[38]: 9
</code></pre>
<p>In the above code, <code>L[1::2]</code> says "get every second element in <code>L</code>, starting at index 1".</p>
<p>Here is a way to do all the heavy lifting yourself:</p>
<pre><code>L = [0, 1, 2, 3, 4, 5]
total = 0
for i in range(len(L)):
    if i%2: # if this is an odd index
        total += L[i]
</code></pre>
<p>Here's another way, using <code>enumerate</code>:</p>
<pre><code>L = [0, 1, 2, 3, 4, 5]
total = 0
for i,num in enumerate(L):
if i%2:
total += num
</code></pre>
| 5 | 2016-10-03T02:00:33Z | [
"python",
"for-loop",
"while-loop",
"sum"
]
|
creating sum of odd indexes python | 39,823,625 | <p>I'm trying to create a function equal to the sum of every other digit in a list. For example, if the list is [0,1,2,3,4,5], the function should equal 5+3+1. How could I do this? My knowledge of Python does not extend much farther than while and for loops. Thanks. </p>
| 1 | 2016-10-03T01:58:31Z | 39,823,675 | <pre><code>>>> arr = [0,1,2,3,4,5]
>>> sum([x for idx, x in enumerate(arr) if idx%2 != 0])
9
</code></pre>
<p>This is just a list comprehension that only includes elements in <code>arr</code> that have an odd index.</p>
<p>To illustrate in a traditional <code>for</code> loop:</p>
<pre><code>>>> my_sum = 0
>>> for idx, x in enumerate(arr):
...     if idx % 2 != 0:
...         my_sum += x
...         print("%d was odd, so %d was added. Current sum is %d" % (idx, x, my_sum))
...     else:
...         print("%d was even, so %d was not added. Current sum is %d" % (idx, x, my_sum))
...
0 was even, so 0 was not added. Current sum is 0
1 was odd, so 1 was added. Current sum is 1
2 was even, so 2 was not added. Current sum is 1
3 was odd, so 3 was added. Current sum is 4
4 was even, so 4 was not added. Current sum is 4
5 was odd, so 5 was added. Current sum is 9
</code></pre>
| 0 | 2016-10-03T02:04:58Z | [
"python",
"for-loop",
"while-loop",
"sum"
]
|
Error pwd.getpwuid with google cloud storage | 39,823,682 | <p>I <code>pip install --upgrade google-cloud-storage -t libs</code> to my app engine app.</p>
<p>In the appengine_config.py, I added: </p>
<pre><code>vendor.add('libs')
vendor.add(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'libs'))
</code></pre>
<p>It works on App Engine online, but not in the App Engine sandbox locally.</p>
<pre><code>ERROR 2016-10-03 00:22:01,311 wsgi.py:263]
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/main.py", line 19, in <module>
from handlers import page_handlers, user_handlers, repo_handlers, doc_handlers
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/handlers/repo_handlers.py", line 28, in <module>
from google.cloud import storage
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/libs/google/cloud/storage/__init__.py", line 42, in <module>
from google.cloud.storage.batch import Batch
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/libs/google/cloud/storage/batch.py", line 29, in <module>
from google.cloud.exceptions import make_exception
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/libs/google/cloud/exceptions.py", line 24, in <module>
from google.cloud._helpers import _to_bytes
File "/Users/charlesng/Documents/Codes/python/web/myapp/src/libs/google/cloud/_helpers.py", line 62, in <module>
_USER_ROOT = os.path.expanduser('~')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.py", line 262, in expanduser
userhome = pwd.getpwuid(os.getuid()).pw_dir
KeyError: 'getpwuid(): uid not found: 429123195'
</code></pre>
<p>Folder structures:</p>
<pre><code>myapp/
/src/main.py
/src/libs
/env/(virtualenv files)
/env/lib
</code></pre>
<p>normally if you pip a library, the files are in lib/ but for app engine third party library, we have to pip -t libs so they are in libs instead of lib. </p>
<p>when I use python2 or python3, from google.cloud import storage, they are good but not running the appengine sandbox because it's calling src/libs/google/cloud instead of env/lib/google/cloud.</p>
<p>How should I approach to solve this issue? Any advice or direction would be thankful.</p>
| 1 | 2016-10-03T02:05:38Z | 39,824,494 | <p>Not sure if this entirely answers your question, however, I am assuming that you are using <a href="https://cloud.google.com/appengine/docs/python/download" rel="nofollow">Google App Engine SDK</a> to launch the app locally on your machine. And this, uses a patched <code>os</code> module for sandboxing:</p>
<pre><code>/tmp/google_appengine$ cat google/appengine/tools/devappserver2/python/sandbox.py
...
...
...
  def apply_policy(self, module_dict):
    """Apply this policy to the provided module dict.

    In order, one of the following will apply:
    - Symbols in overrides are set to the override value.
    - Symbols in deletes are removed.
    - Whitelisted symbols and symbols with a constant type are unchanged.
    - If a default stub is set, all other symbols are replaced by it.
    - If default_pass_through is True, all other symbols are unchanged.
    - If default_pass_through is False, all other symbols are removed.

    Args:
      module_dict: The module dict to be filtered.
    """
    for symbol in module_dict.keys():
      if symbol in self.overrides:
        module_dict[symbol] = self.overrides[symbol]
      elif symbol in self.deletes:
        del module_dict[symbol]
      elif not (symbol in self.whitelist or
                isinstance(module_dict[symbol], self.constant_types) or
                (symbol.startswith('__') and symbol.endswith('__'))):
        if self.default_stub:
          module_dict[symbol] = self.default_stub
        elif not self.default_pass_through:
          del module_dict[symbol]


_MODULE_OVERRIDE_POLICIES = {
    'os': ModuleOverridePolicy(
        default_stub=stubs.os_error_not_implemented,
        whitelist=['altsep', 'curdir', 'defpath', 'devnull', 'environ', 'error',
                   'fstat', 'getcwd', 'getcwdu', 'getenv', '_get_exports_list',
                   'name', 'open', 'pardir', 'path', 'pathsep', 'sep',
                   'stat_float_times', 'stat_result', 'strerror', 'sys',
                   'walk'],
        overrides={
            'access': stubs.fake_access,
            'listdir': stubs.RestrictedPathFunction(os.listdir),
            # Alias lstat() to stat() to match the behavior in production.
            'lstat': stubs.RestrictedPathFunction(os.stat),
            'open': stubs.fake_open,
            'stat': stubs.RestrictedPathFunction(os.stat),
            'uname': stubs.fake_uname,
            'getpid': stubs.return_minus_one,
            'getppid': stubs.return_minus_one,
            'getpgrp': stubs.return_minus_one,
            'getgid': stubs.return_minus_one,
            'getegid': stubs.return_minus_one,
            'geteuid': stubs.return_minus_one,
            'getuid': stubs.return_minus_one,
            'urandom': stubs.fake_urandom,
            'system': stubs.return_minus_one,
        },
        deletes=['execv', 'execve']),
    'signal': ModuleOverridePolicy(overrides={'__doc__': None}),
    'locale': ModuleOverridePolicy(
        overrides={'setlocale': stubs.fake_set_locale},
        default_pass_through=True),
    'distutils.util': ModuleOverridePolicy(
        overrides={'get_platform': stubs.fake_get_platform},
        default_pass_through=True),
    # TODO: Stub out imp.find_module and friends.
}
</code></pre>
<p>And as you can see, os.getuid() will always return -1:</p>
<pre><code>/tmp/google_appengine$ grep -A1 return_minus_one google/appengine/tools/devappserver2/python/stubs.py
def return_minus_one(*unused_args, **unused_kwargs):
  return -1
</code></pre>
<p>And <code>-1</code> gets converted to <code>429123195</code> because, in python source code (Modules/pwdmodule.c)...</p>
<pre><code>static PyObject *
pwd_getpwuid(PyObject *self, PyObject *args)
{
    uid_t uid;
    struct passwd *p;
    if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) {
        if (PyErr_ExceptionMatches(PyExc_OverflowError))
            PyErr_Format(PyExc_KeyError,
                         "getpwuid(): uid not found");
        return NULL;
    }
    if ((p = getpwuid(uid)) == NULL) {
        if (uid < 0)
            PyErr_Format(PyExc_KeyError,
                         "getpwuid(): uid not found: %ld", (long)uid);
        else
            PyErr_Format(PyExc_KeyError,
                         "getpwuid(): uid not found: %lu", (unsigned long)uid);
        return NULL;
    }
    return mkpwent(p);
}
</code></pre>
<p>... uid_t is type-casted to long</p>
<p>As of today (03-10-2016), the <a href="https://cloud.google.com/appengine/kb/" rel="nofollow">google app engine knowledge base</a> article says (under the Python section):</p>
<blockquote>
<p>The system does not allow you to invoke subprocesses, as a result some
os module methods are disabled</p>
</blockquote>
| 2 | 2016-10-03T04:07:45Z | [
"python",
"google-app-engine",
"google-cloud-storage"
]
|
python: list of lists of matching dicts | 39,823,688 | <p>I have a few dicts like:</p>
<p><code>a = [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 2}, { 'event_id': 2}, { 'event_id': 3} ]</code></p>
<p>I want a list of lists, each sublist containing all the dicts with similar <code>event_id</code> values:</p>
<p><code>[ [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1} ], [ { 'event_id': 2}, { 'event_id': 2} ], [ { 'event_id': 3} ] ]</code></p>
<p>Is there a quick recipe for this?</p>
| 1 | 2016-10-03T02:07:01Z | 39,823,731 | <p><code>itertools.groupby</code> is perfect for this:</p>
<pre><code>map(lambda x: list(x[1]), (itertools.groupby(a, lambda x: x['event_id'])))
# => [[{'event_id': 1}, {'event_id': 1}, {'event_id': 1}], [{'event_id': 2}, {'event_id': 2}], [{'event_id': 3}]]
</code></pre>
<p>EDIT: As Justin Turner Arthur says in comments, this is not great (mostly because I completely forgot that <code>itertools.groupby</code> groups only adjacent elements). Please use his solution.</p>
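<p>A small demonstration of the pitfall mentioned in the edit — <code>groupby</code> only merges <em>adjacent</em> elements, so an unsorted list produces split groups:</p>

```python
import itertools

a = [{"event_id": 1}, {"event_id": 2}, {"event_id": 1}]
groups = [list(g) for _, g in itertools.groupby(a, key=lambda x: x["event_id"])]
# event_id 1 appears in two separate groups because its dicts are not adjacent
print(len(groups))
```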
| 2 | 2016-10-03T02:12:15Z | [
"python"
]
|
python: list of lists of matching dicts | 39,823,688 | <p>I have a few dicts like:</p>
<p><code>a = [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 2}, { 'event_id': 2}, { 'event_id': 3} ]</code></p>
<p>I want a list of lists, each sublist containing all the dicts with similar <code>event_id</code> values:</p>
<p><code>[ [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1} ], [ { 'event_id': 2}, { 'event_id': 2} ], [ { 'event_id': 3} ] ]</code></p>
<p>Is there a quick recipe for this?</p>
| 1 | 2016-10-03T02:07:01Z | 39,823,739 | <p>First, make a dictionary mapping <code>event_id</code> values to lists of dicts.</p>
<pre><code>d = {}
for dict_ in a:
    if dict_['event_id'] in d:
        d[dict_['event_id']].append(dict_)
    else:
        d[dict_['event_id']] = [dict_]
</code></pre>
<p>Then make a list comprehension to gather them all into one list:</p>
<pre><code>[val for key , val in d.items()]
</code></pre>
| 0 | 2016-10-03T02:13:12Z | [
"python"
]
|
python: list of lists of matching dicts | 39,823,688 | <p>I have a few dicts like:</p>
<p><code>a = [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 2}, { 'event_id': 2}, { 'event_id': 3} ]</code></p>
<p>I want a list of lists, each sublist containing all the dicts with similar <code>event_id</code> values:</p>
<p><code>[ [ { 'event_id': 1}, { 'event_id': 1}, { 'event_id': 1} ], [ { 'event_id': 2}, { 'event_id': 2} ], [ { 'event_id': 3} ] ]</code></p>
<p>Is there a quick recipe for this?</p>
| 1 | 2016-10-03T02:07:01Z | 39,837,004 | <p>As <a href="http://stackoverflow.com/a/39823731/1843865">amadan suggests</a>, <code>itertools.groupby</code> is a great approach to this problem. It takes an iterable and a key function to group by. You'll first want to make sure your list is sorted by the same key so that groupby doesn't end up creating multiple groups for the same key (similar to how you do an ORDER BY before a GROUP BY in SQL).</p>
<pre><code>from itertools import groupby
from operator import itemgetter
grouping_key = itemgetter('event_id') # produces a function like lambda x: x['event_id']
# If your events weren't already ordered with adjacent 'event_id's:
ordered_a = sorted(a, key=grouping_key)
grouped_a = [list(group) for _grouper, group in groupby(ordered_a, key=grouping_key)]
</code></pre>
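<p>With the sample data from the question, the snippet above produces the expected grouping; a quick self-contained check:</p>

```python
from itertools import groupby
from operator import itemgetter

a = [{'event_id': 1}, {'event_id': 1}, {'event_id': 1},
     {'event_id': 2}, {'event_id': 2}, {'event_id': 3}]

grouping_key = itemgetter('event_id')
ordered_a = sorted(a, key=grouping_key)  # ORDER BY before GROUP BY
grouped_a = [list(group) for _grouper, group in groupby(ordered_a, key=grouping_key)]
print(grouped_a)
```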
| 2 | 2016-10-03T17:19:30Z | [
"python"
]
|
Python Image Processing Threading | 39,823,742 | <p>So I am working on a robotics project where we have to recognize a pattern on a wall and position our robot accordingly. I developed this image processing code on my laptop that grabbed an image, converted it to HSV, applied a bit-wise mask, used Canny edge detection, and found contours. I thought I could just copy and paste the code onto a Raspberry Pi 3; however, because of the decreased processing power, the fps is less than 1. I have been trying to segregate the code into threads, so I can have one thread that captures the images, one thread that converts the image to HSV and filters it, and one thread to do contour fitting. In order to have these communicate with each other, I have made queues.</p>
<p>Here is my initial vision code:</p>
<pre><code>import numpy as np
import cv2
import time
import matplotlib.pyplot as plt
import sys
def onmouse(k, x, y, s, p):
    global hsv
    if k == 1:  # left mouse, print pixel at x,y
        print(hsv[y, x])

def distance_to_camera(Kwidth, focalLength, pixelWidth):
    return (Kwidth * focalLength) / pixelWidth

def contourArea(contours):
    area = []
    for i in range(0,len(contours)):
        area.append([cv2.contourArea(contours[i]),i])
    area.sort()
    if(area[len(area) - 1] >= 5 * area[0]):
        return area[len(area)-1]
    else: return 0

if __name__ == '__main__':
    cap = cv2.VideoCapture(0)
    """
    cap.set(3, 1920)
    cap.set(4, 1080)
    cap.set(5, 30)
    time.sleep(2)
    cap.set(15, -8.0)
    """
    KNOWN_WIDTH = 18
    # focalLength = focalLength = (rect[1][1] * 74) / 18
    focalLength = 341.7075686984592

    distance_data = []
    counter1 = 0
    numFrames = 100
    samples = 1
    start_time = time.time()

    while (samples < numFrames):
        # Capture frame-by-frame
        ret, img = cap.read()
        length1, width1, channels = img.shape
        img = cv2.GaussianBlur(img, (5, 5), 0)
        hsv = cv2.cvtColor(img.copy(), cv2.COLOR_BGR2HSV)

        # lower_green = np.array([75, 200, 170])
        # lower_green = np.array([53,180,122])
        #lower_green = np.array([70, 120, 120])
        lower_green = np.array([70, 50, 120])
        upper_green = np.array([120, 200, 255])
        #upper_green = np.array([120, 200, 255])

        mask = cv2.inRange(hsv, lower_green, upper_green)
        res = cv2.bitwise_and(hsv, hsv, mask=mask)
        gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
        edged = cv2.Canny(res, 35, 125)
        im2, contours, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

        if (len(contours) > 1):
            area,place = contourArea(contours)
            #print(area)
            if(area != 0):
                # print("Contours: %d" % contours.size())
                # print("Hierarchy: %d" % hierarchy.size())
                c = contours[place]
                cv2.drawContours(img, c, -1, (0, 0, 255), 3)
                cv2.drawContours(edged,c, -1, (255, 0, 0), 3)
                perimeter = cv2.arcLength(c, True)
                M = cv2.moments(c)
                cx = 0
                cy = 0
                if (M['m00'] != 0):
                    cx = int(M['m10'] / M['m00'])  # Center of MASS Coordinates
                    cy = int(M['m01'] / M['m00'])
                rect = cv2.minAreaRect(c)
                box = cv2.boxPoints(rect)
                box = np.int0(box)
                cv2.drawContours(img, [box], 0, (255, 0, 0), 2)
                cv2.circle(img, (cx, cy), 7, (0, 0, 255), -1)
                cv2.line(img, (int(width1 / 2), int(length1 / 2)), (cx, cy), (255, 0, 0), 2)
                if(rect[1][1] != 0):
                    inches = distance_to_camera(KNOWN_WIDTH, focalLength, rect[1][1])
                    #print(inches)
                    distance_data.append(inches)
                    counter1+=1
        samples+=1

        """
        cv2.namedWindow("Image w Contours")
        cv2.setMouseCallback("Image w Contours", onmouse)
        cv2.imshow('Image w Contours', img)

        cv2.namedWindow("HSV")
        cv2.setMouseCallback("HSV", onmouse)
        cv2.imshow('HSV', edged)

        if cv2.waitKey(1) & 0xFF == ord('x'):
            break
        """

    # When everything done, release the capture
    totTime = time.time() - start_time
    print("--- %s seconds ---" % (totTime))
    print('----%s fps ----' % (numFrames/totTime))
    cap.release()
    cv2.destroyAllWindows()

    # Sample output:
    # --- 13.469419717788696 seconds ---
    # ----7.42422480665093 fps ----

    plt.plot(distance_data)
    plt.xlabel('TimeData')
    plt.ylabel('Distance to Target(in) ')
    plt.title('Distance vs Time From Camera')
    plt.show()
<p>This is my threaded code, which grabs frames in the main and filters it in another thread; I would like to have another thread for contour fitting, but even with these two processes the threaded code has nearly the same FPS as the previous code. Also these results are from my laptop, not the raspberry pi. </p>
<pre><code>import cv2
import threading
import datetime
import numpy as np
import queue
import time
frame = queue.Queue(0)
canny = queue.Queue(0)
lower_green = np.array([70, 50, 120])
upper_green = np.array([120, 200, 255])
class FilterFrames(threading.Thread):
def __init__(self,threadID,lock):
threading.Thread.__init__(self)
self.lock = lock
self.name = threadID
self.setDaemon(True)
self.start()
def run(self):
while(True):
img1 = frame.get()
img1 = cv2.GaussianBlur(img1, (5, 5), 0)
hsv = cv2.cvtColor(img1.copy(), cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_green, upper_green)
res = cv2.bitwise_and(hsv, hsv, mask=mask)
edged = cv2.Canny(res, 35, 125)
canny.put(edged)
if __name__ == '__main__':
lock = threading.Lock()
numframes = 100
frames = 0
cap = cv2.VideoCapture(0)
filter = FilterFrames(lock=lock, threadID='Filter')
start_time = time.time()
while(frames < numframes):
ret,img = cap.read()
frame.put(img)
frames+=1
totTime = time.time() - start_time
print("--- %s seconds ---" % (totTime))
print('----%s fps ----' % (numframes/totTime))
"""
Results were:
--- 13.590131759643555 seconds ---
----7.358280388197121 fps ----
"""
cap.release()
</code></pre>
<p>I was wondering if there is something I am doing wrong, whether the access of the queues is slowing down the code, and if I should be using the multiprocessing module instead of threading for this application.</p>
| 1 | 2016-10-03T02:13:24Z | 39,824,403 | <p>You can profile the code using <code>cProfile</code> module. It will tell you what part of the program is the bottleneck.</p>
<p>Python in the CPython implementation has the Global Interpreter Lock (GIL). This means that even if your app is multithreaded, it will use only one of your CPUs. You can try the <code>multiprocessing</code> module instead. Although Jython and IronPython have no GIL, they have little or no stable Python 3 support.</p>
<p>In your code <code>self.lock</code> is never used. Use a good IDE with pylint to catch these sorts of errors. Queues maintain their own locks.</p>
<p><code>threading.Thread.__init__(self)</code> is outdated syntax from Python 2. Use <code>super().__init__()</code> instead.</p>
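<p>As a hedged illustration of the profiling suggestion above (the stage function here is a stand-in for the expensive HSV/Canny work, not the asker's OpenCV code), <code>cProfile</code> can be pointed at a single pipeline stage and its report inspected programmatically:</p>

```python
import cProfile
import io
import pstats

def filter_stage(n):
    # Stand-in for the expensive image-filtering stage; any pure-Python work will do.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
filter_stage(100000)
profiler.disable()

# Render the profile into a string so it can be logged or inspected.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats(10)
report = buf.getvalue()
print('filter_stage' in report)  # the profiled function appears in the report
```

<p>Running the same pattern per stage in the threaded pipeline would show whether the queue traffic or the OpenCV calls dominate the frame time.</p>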
| 1 | 2016-10-03T03:56:57Z | [
"python",
"multithreading",
"python-3.x",
"image-processing",
"python-multithreading"
]
|
Catching Keypresses on Windows with Python using MSVCRT when terminal window not in Focus | 39,823,758 | <p>I want to be able to use the <code>msvcrt</code> package in python to catch keypresses via the <code>msvcrt.getch()</code> method but it appears the the terminal window needs to be in focus for it to work. Is there a way around this?</p>
| 0 | 2016-10-03T02:15:02Z | 39,843,157 | <p>I found a python wrapper for Ctypes as suggested by @IInspectable. It wraps the low_level Keyboard hooks with a nice monitor class.</p>
<p><a href="https://github.com/ethanhs/pyhooked" rel="nofollow">https://github.com/ethanhs/pyhooked</a></p>
| 0 | 2016-10-04T02:10:49Z | [
"python",
"windows",
"keypress",
"msvcrt",
"getch"
]
|
haskell: recursive function that returns the char in a tuple list with a certain condition (compare) | 39,823,833 | <p>I'm learning recursive functions in Haskell and am confused by the following condition:</p>
<p>I got a tuple list here:</p>
<pre><code>[(0.5,'!'),(1,'*'),(1.5,'#')]
</code></pre>
<p>What I want to do is input a number n and compare it with the first number in each tuple of the list.</p>
<p>So suppose n=0.1: when it is compared with 0.5 and found to be smaller, it will return the char '!'</p>
<p>Suppose n=0.7, which is > 0.5: it keeps comparing, finds that it is < 1, and then returns the char '*'</p>
<p>And if, after comparing the whole list, d is still bigger than the last one, it will just return the char 'n'</p>
<p>I've been working on this condition for a long time, but still cannot get it right; here is my code:</p>
<pre><code>find :: Double -> [(Double,Char)] -> Char
find d [] = ' '
find d xs
| d <= Double(xs[0]) = xs[0]
| d > Double(xs[0]) = find d tail(xs)
</code></pre>
<p>please use recursion!</p>
| 0 | 2016-10-03T02:27:55Z | 39,823,923 | <p><a href="https://en.wikibooks.org/wiki/Haskell/Lists_and_tuples" rel="nofollow">tuple</a> is different from array in Haskell</p>
<pre><code>find :: Double -> [(Double,Char)] -> Char
find d [] = ' '
find d (x:xs)
| d <= fst x = snd x
| otherwise = find d xs
</code></pre>
| 3 | 2016-10-03T02:43:30Z | [
"python",
"haskell"
]
|
Python Tkinter Display Loading Animation While Performing Certain Tasks | 39,823,834 | <p>I have located this useful code for Tkinter animations from <a href="https://www.daniweb.com/programming/software-development/threads/396918/how-to-use-animated-gifs-with-tkinter" rel="nofollow">https://www.daniweb.com/programming/software-development/threads/396918/how-to-use-animated-gifs-with-tkinter</a> ,supplied by "vegaseat". </p>
<p>I have adapted a similar design for displaying gifs animations to a project. I am wishing to implement this as a function to certain areas of a script, e.g. importing modules etc. I have tried a few approaches but when I called this as a function, it first runs the animation and then imports the module (as we would expect).</p>
<p>I guess I am exploring ways to get this to work concurrently...while the script is importing modules( or running another process where I wish to display the animation), the animation would be displayed, and then disappear, until the next call. Suggestions would be appreciated.</p>
<p>Thanks a lot.</p>
<pre><code># mimic an animated GIF displaying a series of GIFs
# an animated GIF was used to create the series of GIFs
# with a common GIF animator utility
import time
from Tkinter import *
root = Tk()
imagelist = ["dog001.gif","dog002.gif","dog003.gif",
"dog004.gif","dog005.gif","dog006.gif","dog007.gif"]
# extract width and height info
photo = PhotoImage(file=imagelist[0])
width = photo.width()
height = photo.height()
canvas = Canvas(width=width, height=height)
canvas.pack()
# create a list of image objects
giflist = []
for imagefile in imagelist:
photo = PhotoImage(file=imagefile)
giflist.append(photo)
# loop through the gif image objects for a while
for k in range(0, 1000):
for gif in giflist:
canvas.delete(ALL)
canvas.create_image(width/2.0, height/2.0, image=gif)
canvas.update()
time.sleep(0.1)
root.mainloop()
</code></pre>
<p>EDIT: I am attempting to implement the code,below, per some helpful suggestions. The goal is to begin the animation, while the application is importing the modules in the "IMPORTS" function, and then have it destroyed after the imports are completed.</p>
<pre><code># Import modules
from Tkinter import *
from PIL import ImageTk
from PIL import Image
import os,time
from os.path import dirname
from os.path import join
def IMPORTS():
import tkMessageBox
from ttk import Combobox
import csv,datetime
import xlrd,xlwt
import getpass
import traceback
import arcpy
from arcpy import AddMessage
import win32com.client
inGif = #root image (.gif)
FramesFolder = #Folder containing frames of the root image
W=Toplevel()
W.wm_overrideredirect(True) # I wish to only display the widget spinning without the window frame
imagelist = [os.path.join(FramesFolder,s) for s in os.listdir(FramesFolder) if not s.endswith('db')]
# extract width and height info
photo = PhotoImage(file=imagelist[0])
width = photo.width()
height = photo.height()
canvas = Canvas(W,width=width, height=height)
canvas.pack()
# create a list of image objects
giflist = []
for imagefile in imagelist:
photo = PhotoImage(file=imagefile)
giflist.append(photo)
timer_id = None
def start_loading(n=0):
global timer_id
gif = giflist[n%len(giflist)]
canvas.create_image(gif.width()//2, gif.height()//2, image=gif)
timer_id = W.after(100, start_loading, n+1) # call this function every 100ms
def stop_loading():
if timer_id:
W.after_cancel(timer_id)
canvas.delete(ALL)
start_loading()
IMPORTS()
stop_loading()
# The spinning widget should be completely destroyed before moving on...
</code></pre>
<p>It is returning </p>
<pre><code>"NameError: name 'tkMessageBox' is not defined"
</code></pre>
| 0 | 2016-10-03T02:28:19Z | 39,827,771 | <p>You can use <code>Tk.after()</code> and <code>Tk.after_cancel()</code> to start and stop the animation:</p>
<pre><code>timer_id = None
def start_loading(n=0):
global timer_id
gif = giflist[n%len(giflist)]
canvas.create_image(gif.width()//2, gif.height()//2, image=gif)
timer_id = root.after(100, start_loading, n+1) # call this function every 100ms
def stop_loading():
if timer_id:
root.after_cancel(timer_id)
canvas.delete(ALL)
</code></pre>
<p>Then, you can call <code>start_loading()</code> before the long process and call <code>stop_loading()</code> after the long process:</p>
<pre><code>start_loading()
long_process() # your long process
stop_loading()
</code></pre>
| 0 | 2016-10-03T08:54:21Z | [
"python",
"animation",
"tkinter"
]
|
both are unicode, but when compared, it doesn't work! | 39,823,844 | <pre><code># -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import urllib2
response = urllib2.urlopen('http://bbs.szhome.com/80300-0-0-0-3005.html')
html = response.read()
soup = BeautifulSoup(html)
returntext = soup.find('dl',class_='fix').text
print returntext
if isinstance(returntext,unicode):
print 'ok1' #true,print ok1
text = u'暂无相关数据...'
if isinstance(text,unicode):
print 'ok2' #true,print ok2
text2 = '暂无相关数据...'
if isinstance(text2,str):
print 'ok3' #true,print ok3
if returntext == text:
print 'ok4' #both unicode, but not executed, why?
</code></pre>
| 0 | 2016-10-03T02:30:06Z | 39,823,871 | <p>You need to <a href="https://docs.python.org/2/library/string.html#string.strip" rel="nofollow">strip</a> all newlines before comparison:</p>
<pre><code>>>> print returntext
暂无相关数据...
>>> print text
暂无相关数据...
>>> returntext
u'\n\u6682\u65e0\u76f8\u5173\u6570\u636e...\n'
>>> text
u'\u6682\u65e0\u76f8\u5173\u6570\u636e...'
>>> returntext.strip() == text
True
</code></pre>
| 1 | 2016-10-03T02:35:32Z | [
"python",
"unicode"
]
|
both are unicode, but when compared, it doesn't work! | 39,823,844 | <pre><code># -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import urllib2
response = urllib2.urlopen('http://bbs.szhome.com/80300-0-0-0-3005.html')
html = response.read()
soup = BeautifulSoup(html)
returntext = soup.find('dl',class_='fix').text
print returntext
if isinstance(returntext,unicode):
print 'ok1' #true,print ok1
text = u'暂无相关数据...'
if isinstance(text,unicode):
print 'ok2' #true,print ok2
text2 = '暂无相关数据...'
if isinstance(text2,str):
print 'ok3' #true,print ok3
if returntext == text:
print 'ok4' #both unicode, but not executed, why?
</code></pre>
| 0 | 2016-10-03T02:30:06Z | 39,824,084 | <p>They aren't both unicode, they just look that way to you because your display decodes text2 for you. In fact, <code>isinstance(text2,str)</code> shows that text2 is clearly a string, not unicode. </p>
<p>Unicode was bolted onto Python 2 and its usage is a little strange. <code>str</code> can hold any octets (typically ASCII characters, but also arbitrary binary data), and in your case it's holding a UTF-8 encoded version of the string.</p>
<p><code>text</code> is unicode and has 9 characters</p>
<pre><code>>>> text = u'暂无相关数据...'
>>> type(text), len(text)
(<type 'unicode'>, 9)
</code></pre>
<p><code>Text2</code> is a binary blob in a string, has 21 octets and is not the same as <code>text</code></p>
<pre><code>>>> text2 = '暂无相关数据...'
>>> type(text2), len(text2)
(<type 'str'>, 21)
>>> text == text2
__main__:1: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
False
</code></pre>
<p>But I can decode it into unicode and the comparison works</p>
<pre><code>>>> text2_unicode = text2.decode('utf-8')
>>> type(text2_unicode), len(text2_unicode)
(<type 'unicode'>, 9)
>>> text == text2_unicode
True
</code></pre>
<p>This all works much better and much more clearly in python 3. You should not be using 2.x if you can avoid it.</p>
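<p>For reference, the same distinction is explicit in Python 3, where <code>str</code> is always Unicode and encoded data lives in <code>bytes</code> (a sketch using the string from the question):</p>

```python
# In Python 3 the two types cannot compare equal by accident:
text = '暂无相关数据...'             # str: 9 characters
raw = text.encode('utf-8')          # bytes: 21 octets, like the Python 2 str above

print(len(text), len(raw))          # 9 21
print(text == raw)                  # False: str never equals bytes
print(text == raw.decode('utf-8'))  # True after decoding
```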
| 0 | 2016-10-03T03:09:13Z | [
"python",
"unicode"
]
|
Convert part of data frame into MultiIndex in Pandas | 39,823,852 | <p>I am having this form of data in XLS format:</p>
<pre><code>+--------+---------+-------------+---------------+---------+
| ID | Branch | Customer ID | Customer Name | Balance |
+--------+---------+-------------+---------------+---------+
| 111111 | Branch1 | 1 | Company A | 10 |
+--------+---------+-------------+---------------+---------+
| 222222 | Branch2 | 2 | Company B | 20 |
+--------+---------+-------------+---------------+---------+
| 111111 | Branch1 | 2 | Company B | 30 |
+--------+---------+-------------+---------------+---------+
| 222222 | Branch2 | 3 | Company C | 10 |
+--------+---------+-------------+---------------+---------+
</code></pre>
<p>And I would like to use Pandas to process it. Pandas would read it as a single sheet, but I would like to use MultiIndex here, like</p>
<pre><code>+--------+---------+-------------+---------------+---------+
| ID | Branch | Customer ID | Customer Name | Balance |
+--------+---------+-------------+---------------+---------+
| | | 1 | Company A | 10 |
+ 111111 + Branch1 +-------------+---------------+---------+
| | | 2 | Company B | 30 |
+--------+---------+-------------+---------------+---------+
| | | 2 | Company B | 20 |
+ 222222 + Branch2 +-------------+---------------+---------+
| | | 3 | Company C | 10 |
+--------+---------+-------------+---------------+---------+
</code></pre>
<p>Here <code>111111</code> and <code>Branch1</code> are level 1 index and <code>1</code> <code>Company A</code> are level 2 index. Is there a built-in method to do it?</p>
| 1 | 2016-10-03T02:31:58Z | 39,824,870 | <p>If need only <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>, use:</p>
<pre><code>df.set_index(['ID','Branch', 'Customer ID','Customer Name'], inplace=True)
df.sort_index(inplace=True)
print (df)
Balance
ID Branch Customer ID Customer Name
111111 Branch1 1 Company A 10
2 Company B 30
222222 Branch2 2 Company B 20
3 Company C 10
</code></pre>
<p>But if you need only two levels in the <code>MultiIndex</code> (<code>a</code>, <code>b</code> in my solution), it is necessary to concatenate the first column with the second and the third with the fourth:</p>
<pre><code>df['a'] = df.ID.astype(str) + '_' + df.Branch
df['b'] = df['Customer ID'].astype(str) + '_' + df['Customer Name']
#delete original columns
df.drop(['ID','Branch', 'Customer ID','Customer Name'], axis=1, inplace=True)
df.set_index(['a','b'], inplace=True)
df.sort_index(inplace=True)
print (df)
Balance
a b
111111_Branch1 1_Company A 10
2_Company B 30
222222_Branch2 2_Company B 20
3_Company C 10
</code></pre>
<p>If need aggregate last column by previous columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.mean.html" rel="nofollow"><code>GroupBy.mean</code></a>:</p>
<pre><code>df = df.groupby(['ID','Branch', 'Customer ID','Customer Name'])['Balance'].mean().to_frame()
print (df)
Balance
ID Branch Customer ID Customer Name
111111 Branch1 1 Company A 10
2 Company B 30
222222 Branch2 2 Company B 20
3 Company C 10
</code></pre>
<hr>
<p>If working with <code>MultiIndex</code> in columns need <code>tuples</code> for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>:</p>
<pre><code>df.columns = pd.MultiIndex.from_arrays([['a'] * 2 + ['b']* 2 + ['c'], df.columns])
print (df)
a b c
ID Branch Customer ID Customer Name Balance
0 111111 Branch1 1 Company A 10
1 222222 Branch2 2 Company B 20
2 111111 Branch1 2 Company B 30
3 222222 Branch2 3 Company C 10
df.set_index([('a','ID'), ('a','Branch'),
('b','Customer ID'), ('b','Customer Name')], inplace=True)
df.sort_index(inplace=True)
print (df)
c
Balance
(a, ID) (a, Branch) (b, Customer ID) (b, Customer Name)
111111 Branch1 1 Company A 10
2 Company B 30
222222 Branch2 2 Company B 20
3 Company C 10
</code></pre>
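<p>A self-contained sketch of the first approach, building the question's frame inline (requires pandas; the values come from the question's table):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [111111, 222222, 111111, 222222],
    'Branch': ['Branch1', 'Branch2', 'Branch1', 'Branch2'],
    'Customer ID': [1, 2, 2, 3],
    'Customer Name': ['Company A', 'Company B', 'Company B', 'Company C'],
    'Balance': [10, 20, 30, 10],
})

# Build the four-level MultiIndex and sort it, as in the answer above.
df = df.set_index(['ID', 'Branch', 'Customer ID', 'Customer Name']).sort_index()

# Individual rows can then be pulled out by the full index tuple:
print(df.loc[(111111, 'Branch1', 2, 'Company B'), 'Balance'])  # 30
```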
| 1 | 2016-10-03T05:04:53Z | [
"python",
"pandas",
"xls"
]
|
How to draw right angled triangle with python | 39,823,862 | <pre><code>def drawTri(a):
b = (a*math.tan(45))
c = (a/math.cos(45))
t.forward(a)
t.left(135)
t.forward(c)
t.left(135)
t.forward(b)
</code></pre>
<p><img src="http://i.stack.imgur.com/anF4W.png" alt="enter image description here"></p>
| 0 | 2016-10-03T02:33:44Z | 39,823,898 | <pre><code>import turtle
def drawTri(a):
hyp = a * 2**0.5
s = turtle.Screen()
t = turtle.Turtle()
t.forward(a)
t.left(135)
t.forward(hyp)
t.left(135)
t.forward(a)
</code></pre>
| 0 | 2016-10-03T02:39:34Z | [
"python",
"turtle-graphics"
]
|
How to draw right angled triangle with python | 39,823,862 | <pre><code>def drawTri(a):
b = (a*math.tan(45))
c = (a/math.cos(45))
t.forward(a)
t.left(135)
t.forward(c)
t.left(135)
t.forward(b)
</code></pre>
<p><img src="http://i.stack.imgur.com/anF4W.png" alt="enter image description here"></p>
| 0 | 2016-10-03T02:33:44Z | 39,880,887 | <p>The problem here is close to that described in <a href="http://stackoverflow.com/questions/32980003/basic-trigonometry-isnt-working-correctly-in-python/32980217">Basic trigonometry isn't working correctly in python</a></p>
<p>The turtle module uses degrees for angles, the math module uses radians</p>
<p>To calculate the cosine of 45 degrees you can use</p>
<pre><code>math.cos(math.radians(45))
</code></pre>
| 0 | 2016-10-05T18:05:49Z | [
"python",
"turtle-graphics"
]
|
How to draw right angled triangle with python | 39,823,862 | <pre><code>def drawTri(a):
b = (a*math.tan(45))
c = (a/math.cos(45))
t.forward(a)
t.left(135)
t.forward(c)
t.left(135)
t.forward(b)
</code></pre>
<p><img src="http://i.stack.imgur.com/anF4W.png" alt="enter image description here"></p>
| 0 | 2016-10-03T02:33:44Z | 39,906,499 | <p>Who needs angles?</p>
<pre><code>import turtle

def drawTri(a):
    x, y = turtle.position()
    turtle.goto(x + a, y)  # horizontal leg
    turtle.goto(x, y + a)  # hypotenuse back over the start
    turtle.goto(x, y)      # vertical leg, closing the triangle
</code></pre>
| 0 | 2016-10-06T22:20:55Z | [
"python",
"turtle-graphics"
]
|
Splitting sentences using nltk.sent_tokenize, it does not provide correct result | 39,823,863 | <p>I am trying to split some customers' comments to sentences using <code>nltk.sent_tokenize</code>. I already tried to solve some of the problems using the following code: </p>
<pre><code>comment = comment.replace('?', '? ').replace('!', '! ').replace('..','.').replace('.', '. ')
</code></pre>
<p>But I do not know how to solve the following problems: </p>
<ol>
<li><p>Customer used several <code>"."</code> after some sentences. For example:</p>
<pre><code>Think tool is a huge factor in this....i have only
</code></pre></li>
<li><p>Customer used several <code>"!"</code> after some sentences, such as <code>auditory subject everyday!!!!!</code> </p></li>
<li><p>some of them used combination of <code>"!"</code> and <code>"."</code> at the end of sentences. </p></li>
<li><p>Because I already used <code>replace('.', '. ')</code>, it also causes the following problem:</p>
<p>Weight gain <code>(20lbs.)</code>, was split to <code>(20lbs.</code> <code>)</code></p></li>
</ol>
<p>Any suggestion? I am using Python.</p>
| 0 | 2016-10-03T02:33:46Z | 39,845,063 | <p>Try using the Punkt Sentence Tokenizer. It is pre-trained to split sentences effectively and can easily be pickled into your code.</p>
| 0 | 2016-10-04T05:54:52Z | [
"python",
"nlp",
"nltk"
]
|
beautiful soup vs espn | 39,823,864 | <p>I'm working on scraping the espn nhl stats using beautifulsoup, trying to create something like</p>
<blockquote>
<p>PLAYER, TEAM, GP, G, A, PTS, +/-, PIM, PTS/G, SOG, PCT, GWG, G, A, G, A,</p>
<p>Patrick Kane, RW, CHI, 82, 46, 60, 106, 17, 30, 1.29, 287, 16.0, 9, 17, 20, 0, 0</p>
<p>Jamie Benn, LW, DAL, 82, 41, 48, 89, 7, 64, 1.09, 247, 16.6, 5, 17, 13 2 3</p>
<p>Sidney Crosby, C, PIT, 80, 36, 49, 85, 19, 42, 1.06, 248, 14.5, 9, 10, 14, 0, 0</p>
</blockquote>
<p>Thus far I've gotten something that loops through and pulls in all the data but it's all one column without the commas and headers</p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
url = "http://www.espn.com/nhl/statistics/player/_/stat/points"
page = urllib2.urlopen(url)
f = open('nhlstarter.txt', 'w')
soup=BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
for ob in range(1,15):
player_info = tr('td')[ob].get_text(strip=True)
print(player_info)
f.write(player_info + '\n')
f.close()
</code></pre>
<p>This gets</p>
<pre><code>Patrick Kane, RW
CHI
82
46
60
106
17
30
1.29
287
16.0
9
17
20
</code></pre>
<p>etc</p>
<p>how do I convert the columnar data into usable rows? I thought I might be able to do something like the following:</p>
<pre><code>for tr in soup.select("#my-players-table tr[class*=player]"):
for ob in range(1,15):
player_info + str(ob) = tr('td')[ob].get_text(strip=True)
print(player_info + str(ob))
f.write(player_info + str(ob) "," + player_info + str(ob) '\n')
</code></pre>
<p>but that failed miserably as it didn't properly increase the variables by loop</p>
<p>any advice on how to either grab all columns of the table at once or loop through to get an usable csv would be greatly appreciated.</p>
<p>thanks for any help</p>
| 0 | 2016-10-03T02:33:47Z | 39,824,573 | <p>You could append the player information into a list initially to represent the row and then join the list into a string as you write it to the file:</p>
<pre><code>for tr in soup.select("#my-players-table tr[class*=player]"):
row = []
for ob in range(1,15):
## -- Assuming player_info has the column data
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
f.write(",".join(row) + "\n")
</code></pre>
| 0 | 2016-10-03T04:19:27Z | [
"python",
"python-2.7",
"beautifulsoup"
]
|
TypeError: Calculating dot product in python | 39,823,879 | <p>I need to write a Python function that returns the sum of the pairwise products of listA and listB (the two lists will always have the same length and are two lists of integer numbers).</p>
<p>For example, if listA = [1, 2, 3] and listB = [4, 5, 6], the dot product is 1*4 + 2*5 + 3*6, so the function should return: 32</p>
<p>This is how I wrote the code so far, but it produces an error.</p>
<pre><code>def dotProduct(listA, listB):
'''
listA: a list of numbers
listB: a list of numbers of the same length as listA
'''
sum( [listA[i][0]*listB[i] for i in range(len(listB))] )
</code></pre>
<p>It prints: <code>TypeError: 'int' object is not subscriptable</code></p>
<p>How can I change this code so that the elements in the list can be multiplied element-wise?</p>
| 1 | 2016-10-03T02:36:48Z | 39,823,889 | <p>Remove the offending portion (attempting to subscript an int):</p>
<pre><code>sum([listA[i]*listB[i] for i in range(len(listB))])
</code></pre>
| 1 | 2016-10-03T02:38:23Z | [
"python",
"function",
"python-3.x",
"typeerror",
"dot-product"
]
|
TypeError: Calculating dot product in python | 39,823,879 | <p>I need to write a Python function that returns the sum of the pairwise products of listA and listB (the two lists will always have the same length and are two lists of integer numbers).</p>
<p>For example, if listA = [1, 2, 3] and listB = [4, 5, 6], the dot product is 1*4 + 2*5 + 3*6, so the function should return: 32</p>
<p>This is how I wrote the code so far, but it produces an error.</p>
<pre><code>def dotProduct(listA, listB):
'''
listA: a list of numbers
listB: a list of numbers of the same length as listA
'''
sum( [listA[i][0]*listB[i] for i in range(len(listB))] )
</code></pre>
<p>It prints: <code>TypeError: 'int' object is not subscriptable</code></p>
<p>How can I change this code so that the elements in the list can be multiplied element-wise?</p>
| 1 | 2016-10-03T02:36:48Z | 39,828,058 | <p>Simply remove <code>[0]</code>, and it works:</p>
<p><code>sum( [listA[i]*listB[i] for i in range(len(listB))] )</code></p>
<p>More elegant and readable, do:</p>
<p><code>sum(x*y for x,y in zip(listA,listB))</code></p>
<p>Or even better:</p>
<pre><code>import numpy
numpy.dot(listA, listB)
</code></pre>
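<p>A quick check that the two pure-Python variants agree on the question's example (numpy is left out so the snippet has no dependencies):</p>

```python
listA = [1, 2, 3]
listB = [4, 5, 6]

via_index = sum(listA[i] * listB[i] for i in range(len(listB)))
via_zip = sum(x * y for x, y in zip(listA, listB))

print(via_index, via_zip)  # 32 32
```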
| 0 | 2016-10-03T09:11:13Z | [
"python",
"function",
"python-3.x",
"typeerror",
"dot-product"
]
|
Installing Ansible via Pip on Windows 7. Getting ValueError | 39,823,998 | <p>Firstly some basic details:</p>
<p>OS: Windows 7 Home x64</p>
<p>Relevant libraries installed: </p>
<p>.NET Framework 4.0, Windows SDK (in order to have visual c++ 2010 compiler)</p>
<p>Python: 3.4 (tried 32 and 64 bit, same issue)</p>
<p>Pip: 6.0.8</p>
<p>I'm trying to install Ansible (via command prompt) but I get the error: </p>
<pre><code>File "C:\Python34\lib\distutils\msvc9compiler.py", line 287, in query_vcvarsall
raise ValueError(str(list(result.keys())))
ValueError: ['path']
Command "C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C:\\Users\\<myname>\\AppData\\Local\\Temp\\pip-build-bxrpw5rf\\cffi\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\myname>\AppData\Local\Temp\pip-z1_s87va-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\<myname>\AppData\Local\Temp\pip-build-bxrpw5rf\cffi
</code></pre>
<p>So far my own googling hasn't turned up anything that I can distinguish as a solution relevant to this particular case. Has anyone encountered this before?</p>
| 0 | 2016-10-03T02:55:55Z | 39,825,576 | <p>I guess you are on your own here, because there is no support for Python 3 in Ansible yet, nor for Windows as the control machine.</p>
<p><a href="http://docs.ansible.com/ansible/intro_installation.html#control-machine-requirements" rel="nofollow">Control Machine Requirements</a>:</p>
<blockquote>
<p>Currently Ansible can be run from any machine with Python 2.6 or 2.7 installed (Windows isn't supported for the control machine).</p>
<p><strong>Note</strong> Python 3 is a slightly different language than Python 2 and some Python programs (including Ansible) are not switching over yet.</p>
</blockquote>
<p>There are some success stories with Ansible under <code>Cygwin</code> on Windows.</p>
| 0 | 2016-10-03T06:23:20Z | [
"python",
"windows",
"pip",
"ansible"
]
|
Initiate a call, record it, and play the recording back | 39,824,121 | <p>I am putting together a POC for a client that wants to do phone based testing. In the POC, we simply want to let the user enter a phone# on a web page. We would then display a question and call their number. We would record their response to the question and play it back to them.</p>
<p>I can initiate the call, but can't figure out how to indicate that I want to record it. Ideally, I would like to say something and start recording after the beep.</p>
<p>I have all of 3 hours of experience with Twilio, so forgive my ignorance.</p>
<p>Here is my code so far:</p>
<pre><code>import logging
# [START imports]
from flask import Flask, render_template, request
import twilio.twiml
from twilio.rest import TwilioRestClient
# [END imports]
app = Flask(__name__)
# [START form]
@app.route('/form')
def form():
return render_template('form.html')
# [END form]
# [START submitted]
@app.route('/submitted', methods=['POST'])
def submitted_form():
phone = request.form['phone']
account_sid = "AC60***********************"
auth_token = "27ea************************"
client = TwilioRestClient(account_sid, auth_token)
call = client.calls.create(to=phone, # Any phone number
from_="+160#######", # Must be a valid Twilio number
url="https://my-host/static/prompt.xml")
call.record(maxLength="30", action="/handle-recording")
return render_template(
'submitted_form.html',
phone=phone)
# [END render_template]
@app.route("/handle-recording", methods=['GET', 'POST'])
def handle_recording():
"""Play back the caller's recording."""
recording_url = request.values.get("RecordingUrl", None)
resp = twilio.twiml.Response()
resp.say("Thanks for your response... take a listen to what you responded.")
resp.play(recording_url)
resp.say("Goodbye.")
return str(resp)
@app.errorhandler(500)
def server_error(e):
# Log the error and stacktrace.
logging.exception('An error occurred during a request.')
return 'An internal error occurred.', 500
# [END app]
</code></pre>
| 0 | 2016-10-03T03:14:59Z | 39,838,183 | <p>Twilio developer evangelist here.</p>
<p>When you create the call, you pass a URL to the call. That URL will be the one called when the user answers the phone. The response to that request should be the <a href="https://www.twilio.com/docs/api/twiml" rel="nofollow">TwiML</a> to instruct Twilio to <a href="https://www.twilio.com/docs/api/twiml/say" rel="nofollow">say the message</a> and <a href="https://www.twilio.com/docs/api/twiml/record" rel="nofollow">record</a> the response. Like so:</p>
<pre><code>@app.route("/handle-call", methods=['GET', 'POST'])
def handle_call():
resp = twilio.twiml.Response()
resp.say("Please leave your message after the beep")
resp.record(action="/handle-recording", method="POST")
return str(resp)
</code></pre>
<p>Then you just need to update your call creation to point to that URL</p>
<pre><code>call = client.calls.create(to=phone, # Any phone number
from_="+160#######", # Must be a valid Twilio number
url="https://my-host/handle-call")
</code></pre>
<p>Your <code>/handle-recording</code> path looks as though it will do what you want already.</p>
<p>Just a quick tip, as you're new to Twilio, when developing using webhooks I recommend using <a href="http://ngrok.com" rel="nofollow">ngrok</a> to tunnel to your dev machine and expose your application to Twilio. I wrote a <a href="https://www.twilio.com/blog/2015/09/6-awesome-reasons-to-use-ngrok-when-testing-webhooks.html" rel="nofollow">blog post about how to use ngrok</a> and some of the features I like too.</p>
<p>Let me know if this helps at all.</p>
| 1 | 2016-10-03T18:32:29Z | [
"python",
"twilio"
]
|
rock scissor paper program | 39,824,145 | <p>Here is code for a rock, scissors, paper game written in Python.
If I run the code it works, but when a round is a tie it outputs like this.
Is there any way I can skip printing the round header when the result is a tie?
I want it to look like the example shown at the bottom.</p>
<p>*********************ROUND #1*********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? p
Tie!</p>
<p>*********************ROUND #1*********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? s
Computer threws rock, you lose!</p>
<p>Your Score: 0
Computer Score: 1</p>
<p>********************* ROUND #3 *********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? s
Tie!</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? p
Tie!</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? r
Computer threw scissors, you win!</p>
<p>Your score: 2
Computer's score: 1</p>
<pre><code># A Python program for the Rock, Paper, Scissors game.
import random
def rock_paper_scissors():
''' Write your code for playing Rock Paper Scissors here. '''
user = 0
computer = 0
rounds = 1
print()
score = (int(input('How many points does it take to win? ')))
print()
while (computer < score and user < score):
RPS = random.randint(0,2)
if (RPS == 0):
RPS = 'rock'
elif (RPS == 1):
RPS = 'paper'
elif(RPS == 2):
RPS = 'scissors'
print('*'*21 + 'ROUND #'+str(rounds) + '*'*21)
print()
player = (input('Pick your throw: [r]ock, [p]aper, or [s]cissors? '))
if RPS == 'rock' and player == 'r':
print('Tie!')
elif RPS == 'rock' and player == 's':
print('Computer threws rock, you lose!')
computer+=1
rounds += 1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'rock' and player == 'p':
print('Computer threw rock, you win!')
user+=1
rounds +=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
if RPS == 'paper' and player == 'p':
print('Tie!')
elif RPS == 'paper' and player == 'r':
print('Computer threw paper, you lose!')
computer +=1
rounds += 1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'paper' and player == 's':
print('Computer threw paper, you win!')
user +=1
rounds +=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
if RPS == 'scissors' and player == 's':
print('Tie!')
elif RPS == 'scissors'and player == 'p':
print('Computer threw scissors, you lose!')
computer +=1
rounds+=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'scissors' and player == 'r':
print('Computer threw scissors, you win!')
user +=1
rounds+=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
print()
if user> computer:
print('*'*21 + 'GAME OVER' + '*'*21)
print('You win!')
else:
print('*'*21 + 'GAME OVER' + '*'*21)
print('Computer win!')
print()
def main():
print('ROCK PAPER SCISSORS in Python')
print()
print('Rules: 1) Rock wins over Scissors.')
print(' 2) Scissors wins over Paper.')
print(' 3) Paper wins over Rock.')
rock_paper_scissors()
main()
</code></pre>
| 0 | 2016-10-03T03:20:18Z | 39,824,191 | <p>I'd recommend creating a boolean <code>was_tied</code>, setting it to equal <code>False</code> at the start of the program, and setting it to either <code>True</code> or <code>False</code> at the end of each possible outcome. Then you can put your print-round code inside of an <code>if not was_tied</code> statement.</p>
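A condensed sketch of that suggestion (the function name and the outcome list are illustrative, not from the original program; each string stands in for one throw's result):

```python
def round_headers(outcomes):
    """Collect transcript lines, emitting a ROUND header only when
    the previous throw was not a tie."""
    lines = []
    was_tied = False
    rounds = 1
    for outcome in outcomes:          # each outcome: 'tie', 'win' or 'lose'
        if not was_tied:
            lines.append('*' * 21 + 'ROUND #' + str(rounds) + '*' * 21)
        if outcome == 'tie':
            lines.append('Tie!')
            was_tied = True           # the round replays without a new header
        else:
            lines.append('You ' + outcome + '!')
            was_tied = False
            rounds += 1
    return lines

print(round_headers(['tie', 'win', 'tie', 'tie', 'lose']))
```

Only two ROUND headers come out for those five throws, which matches the transcript the question asks for.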
| 0 | 2016-10-03T03:27:29Z | [
"python"
]
|
rock scissor paper program | 39,824,145 | <p>Here is code for a rock, scissors, paper game written in Python.
If I run the code it works, but when a round is a tie it outputs like this.
Is there any way I can skip printing the round header when the result is a tie?
I want it to look like the example shown at the bottom.</p>
<p>*********************ROUND #1*********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? p
Tie!</p>
<p>*********************ROUND #1*********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? s
Computer threws rock, you lose!</p>
<p>Your Score: 0
Computer Score: 1</p>
<p>********************* ROUND #3 *********************</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? s
Tie!</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? p
Tie!</p>
<p>Pick your throw: [r]ock, [p]aper, or [s]cissors? r
Computer threw scissors, you win!</p>
<p>Your score: 2
Computer's score: 1</p>
<pre><code># A Python program for the Rock, Paper, Scissors game.
import random
def rock_paper_scissors():
''' Write your code for playing Rock Paper Scissors here. '''
user = 0
computer = 0
rounds = 1
print()
score = (int(input('How many points does it take to win? ')))
print()
while (computer < score and user < score):
RPS = random.randint(0,2)
if (RPS == 0):
RPS = 'rock'
elif (RPS == 1):
RPS = 'paper'
elif(RPS == 2):
RPS = 'scissors'
print('*'*21 + 'ROUND #'+str(rounds) + '*'*21)
print()
player = (input('Pick your throw: [r]ock, [p]aper, or [s]cissors? '))
if RPS == 'rock' and player == 'r':
print('Tie!')
elif RPS == 'rock' and player == 's':
print('Computer threws rock, you lose!')
computer+=1
rounds += 1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'rock' and player == 'p':
print('Computer threw rock, you win!')
user+=1
rounds +=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
if RPS == 'paper' and player == 'p':
print('Tie!')
elif RPS == 'paper' and player == 'r':
print('Computer threw paper, you lose!')
computer +=1
rounds += 1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'paper' and player == 's':
print('Computer threw paper, you win!')
user +=1
rounds +=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
if RPS == 'scissors' and player == 's':
print('Tie!')
elif RPS == 'scissors'and player == 'p':
print('Computer threw scissors, you lose!')
computer +=1
rounds+=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
elif RPS == 'scissors' and player == 'r':
print('Computer threw scissors, you win!')
user +=1
rounds+=1
print()
print('Your Score: ',user)
print('Computer Score: ',computer)
print()
if user> computer:
print('*'*21 + 'GAME OVER' + '*'*21)
print('You win!')
else:
print('*'*21 + 'GAME OVER' + '*'*21)
print('Computer win!')
print()
def main():
print('ROCK PAPER SCISSORS in Python')
print()
print('Rules: 1) Rock wins over Scissors.')
print(' 2) Scissors wins over Paper.')
print(' 3) Paper wins over Rock.')
rock_paper_scissors()
main()
</code></pre>
| 0 | 2016-10-03T03:20:18Z | 39,824,230 | <p>So I think you have a number of areas that could simplify this code, but for a quick solution, at this line: </p>
<pre><code>print('*'*21 + 'ROUND #'+str(rounds) + '*'*21)
print()
</code></pre>
<p>change it to:</p>
<pre><code>if not previous_round_was_tie:
print('*'*21 + 'ROUND #'+str(rounds) + '*'*21)
print()
</code></pre>
<p>And in all the cases where your tie scenario is true, add a line <code>previous_round_was_tie = True</code>. You'll also have to create the boolean and set it to <code>False</code> outside of your <code>while</code> loop.</p>
| 0 | 2016-10-03T03:32:04Z | [
"python"
]
|
Python how to convert a value with shape (1000L, 1L) to the value of the shape (1000L,) | 39,824,210 | <p>I have a variable with a shape of (1000L, 1L), but the structure causes some errors for subsequent analysis. It needs to be converted to one with the shape (1000L,). Let me be more specific. </p>
<pre><code>import numpy as np
a = np.array([1,2,3])
b = np.array([[1],[2],[3]])
</code></pre>
<p>I want to convert b to a. Is there any quick way to do that?</p>
| 1 | 2016-10-03T03:30:05Z | 39,824,444 | <p>There are a lot of ways you could do that, such as indexing:</p>
<pre><code>a = b[:, 0]
</code></pre>
<p>raveling:</p>
<pre><code>a = numpy.ravel(b)
</code></pre>
<p>or reshaping:</p>
<pre><code>a = numpy.reshape(b, (-1,))
</code></pre>
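For instance, a quick check on the small arrays from the question, confirming all three approaches give the flattened shape (`numpy.squeeze(b)` would also work here):

```python
import numpy as np

b = np.array([[1], [2], [3]])             # shape (3, 1)

# all three approaches drop the trailing length-1 axis
assert b[:, 0].shape == (3,)
assert np.ravel(b).shape == (3,)
assert np.reshape(b, (-1,)).shape == (3,)

print(np.ravel(b))                        # -> [1 2 3]
```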
| 2 | 2016-10-03T04:00:19Z | [
"python",
"numpy"
]
|
Having trouble creating a new file and then manipulating its contents | 39,824,263 | <p>My goal is to create a python program that will receive user input to open a specific file, copy the contents of that file into a new file (named by the user), and then remove all spaces and digits from the new file.</p>
<p>I am successfully copying the contents of my 'sequence' file into a newly-created 'newsequence' file. Everything falls apart in my second 'for' loop when I then try to manipulate the 'newsequence' file.</p>
<p>I was running into an error where it couldn't read 'newsequence' (maybe because it thought I wanted it to read a variable, rather than the contents of a file?). I thought converting the file to a string might help (although I admit I thought it should have worked without doing so), so in line 7 I converted 'newsequence' to a string.</p>
<p>This at least got the program to run to completion, without raising the "io.UnsupportedOperation: not readable" error, but it's still not right. My output file is identical to the original 'sequence' file, except it contains an extra line at the end that simply reads, "8". Where is this "8" coming from, and what changes do I need to make to my code to get it to do what I want it to do?</p>
<p>I am very new to programming, so I am likely making a series of silly mistakes. Thank you for any help you can give me.</p>
<pre><code>sequence = open(input("Choose sequence file: "))
name = input('Enter name of new text file, without .txt: ') + '.txt'
with open(name, 'a') as newsequence:
for line in sequence:
newsequence.write(line)
for line in str(newsequence):
result = ''.join(i for i in line if i.isdigit())
result = result.replace(" ","")
newsequence.write(result)
sequence.close()
newsequence.close()
</code></pre>
| 0 | 2016-10-03T03:35:08Z | 39,824,307 | <p>So I think you're making the mistake of writing to the file twice unnecessarily. Instead of writing to the file and then trying to read it and edit it, why don't you change the file while you are transcribing. I would change your function like this:</p>
<pre><code>sequence = open(input("Choose sequence file: "))
name = input('Enter name of new text file, without .txt: ') + '.txt'
with open(name, 'w') as newsequence: # notice the `w` instead of the `a`
for line in sequence:
result = ''.join(i for i in line if i.isdigit())
result = result.replace(" ","")
newsequence.write(result)
sequence.close()
newsequence.close()
</code></pre>
| 1 | 2016-10-03T03:42:27Z | [
"python"
]
|
Having trouble creating a new file and then manipulating its contents | 39,824,263 | <p>My goal is to create a python program that will receive user input to open a specific file, copy the contents of that file into a new file (named by the user), and then remove all spaces and digits from the new file.</p>
<p>I am successfully copying the contents of my 'sequence' file into a newly-created 'newsequence' file. Everything falls apart in my second 'for' loop when I then try to manipulate the 'newsequence' file.</p>
<p>I was running into an error where it couldn't read 'newsequence' (maybe because it thought I wanted it to read a variable, rather than the contents of a file?). I thought converting the file to a string might help (although I admit I thought it should have worked without doing so), so in line 7 I converted 'newsequence' to a string.</p>
<p>This at least got the program to run to completion, without raising the "io.UnsupportedOperation: not readable" error, but it's still not right. My output file is identical to the original 'sequence' file, except it contains an extra line at the end that simply reads, "8". Where is this "8" coming from, and what changes do I need to make to my code to get it to do what I want it to do?</p>
<p>I am very new to programming, so I am likely making a series of silly mistakes. Thank you for any help you can give me.</p>
<pre><code>sequence = open(input("Choose sequence file: "))
name = input('Enter name of new text file, without .txt: ') + '.txt'
with open(name, 'a') as newsequence:
for line in sequence:
newsequence.write(line)
for line in str(newsequence):
result = ''.join(i for i in line if i.isdigit())
result = result.replace(" ","")
newsequence.write(result)
sequence.close()
newsequence.close()
</code></pre>
| 0 | 2016-10-03T03:35:08Z | 39,824,474 | <p>Why don't you do this in one <code>for</code> loop ? </p>
<p>Not tested:</p>
<pre><code>file_in = open(input("Choose sequence file: "))
name = input('Enter name of new text file, without .txt: ') + '.txt'
with open(name, 'a') as file_out:
for line in file_in:
line = ''.join(i for i in line if i.isdigit())
line = line.replace(" ","")
file_out.write(line)
file_in.close()
</code></pre>
<p>You can't read (line by line) and write (line by line) to the same file: you would overwrite old text before you read it.</p>
| 0 | 2016-10-03T04:04:43Z | [
"python"
]
|
Multiprocessing and Selenium Python | 39,824,273 | <p>I have 3 drivers (Firefox browsers) and I want them to <code>do something</code> in a list of websites.</p>
<p>I have a worker defined as:</p>
<pre><code>def worker(browser, queue):
while True:
id_ = queue.get(True)
obj = ReviewID(id_)
obj.search(browser)
if obj.exists(browser):
print(obj.get_url(browser))
else:
print("Nothing")
</code></pre>
<p>So the worker will just acces to a queue that contains the ids and use the browser to do something.</p>
<p>I want to have a pool of workers so that as soon as a worker has finished using the browser to do something on the website defined by id_, then it immediately starts to work using the same browser to do something on the next id_ found in queue. I have then this:</p>
<pre><code>pool = Pool(processes=3) # I want to have 3 drivers
manager = Manager()
queue = manager.Queue()
# Define here my workers in the pool
for id_ in ids:
queue.put(id_)
for i in range(3):
queue.put(None)
</code></pre>
<p>Here I have a problem, I don't know how to define my workers so that they are in the pool. To each driver I need to assign a worker, and all the workers share the same queue of ids. Is this possible? How can I do it?</p>
<p>Another idea that I have is to create a queue of browsers so that if a driver is doing nothing, it is taken by a worker, along with an id_ from the queue in order to perform a new process. But I'm completely new to multiprocessing and actually don't know how to write this.</p>
<p>I appreciate your help.</p>
| 4 | 2016-10-03T03:36:18Z | 39,843,502 | <p>You could try instantiating the browser in the worker:</p>
<pre><code>def worker(queue):
browser = webdriver.Chrome()
try:
while True:
id_ = queue.get(True)
obj = ReviewID(id_)
obj.search(browser)
if obj.exists(browser):
print(obj.get_url(browser))
else:
print("Nothing")
finally:
        browser.quit()
</code></pre>
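For the pool-wiring part of the question, the usual pattern is one worker per driver, each looping on the shared queue until it pulls a sentinel. A self-contained sketch of that pattern (threads and a plain dict stand in for processes and real WebDriver instances so the example runs anywhere; with Selenium you would keep `multiprocessing.Process` and `Manager().Queue()` as in the question):

```python
import queue
import threading

results = []                      # list.append is atomic under the GIL

def worker(q):
    browser = {"name": threading.current_thread().name}  # stand-in for webdriver.Chrome()
    try:
        while True:
            id_ = q.get()
            if id_ is None:       # sentinel: queue drained, shut this worker down
                break
            results.append((browser["name"], id_))
    finally:
        pass                      # a real worker would call browser.quit() here

q = queue.Queue()
for id_ in range(6):              # the ids to process
    q.put(id_)
for _ in range(3):                # one sentinel per worker
    q.put(None)

workers = [threading.Thread(target=worker, args=(q,)) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(id_ for _, id_ in results))   # -> [0, 1, 2, 3, 4, 5]
```

Every id is consumed exactly once, and each worker owns its "browser" for its whole lifetime, which is the point of instantiating the driver inside the worker.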
| 1 | 2016-10-04T03:02:18Z | [
"python",
"selenium",
"python-multiprocessing"
]
|
Replace the first 3 white space coming with a character and ignore other white spaces in a text line with python | 39,824,284 | <p>I have this line text: </p>
<pre><code>09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>the output should be like this:</p>
<pre><code>09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>It should just replace the first 4 white spaces with | and ignore the rest.</p>
<p>This is the simple code I used :</p>
<pre><code>import re
text = "09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009"
i = 0
while i< 3:
text = re.sub(' ', '|', text)
text = re.sub(' ', '|', text)
i +=1
print text
</code></pre>
<p>I got this output :</p>
<pre><code>09-15-16|05:23:44|A:VCOM|||||09064|Port|4|Device|10400|Remote|1|10401|Link|Up|RP2009
</code></pre>
| 0 | 2016-10-03T03:37:58Z | 39,824,299 | <p>You don't need regex for that. Just use <code>str.split</code> with a <code>maxsplit</code>:</p>
<pre><code>>>> s = '09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009'
>>> *first, last = s.split(maxsplit=4)
>>> '|'.join(first) + '|' + last
'09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009'
</code></pre>
<p>For Python 2:</p>
<pre><code>>>> s = '09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009'
>>> items = s.split(None, 4)
>>> '|'.join(items[:-1]) + '|' + items[-1]
'09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009'
</code></pre>
| 2 | 2016-10-03T03:41:21Z | [
"python",
"regex"
]
|
Replace the first 3 white space coming with a character and ignore other white spaces in a text line with python | 39,824,284 | <p>I have this line text: </p>
<pre><code>09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>the output should be like this:</p>
<pre><code>09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>It should just replace the first 4 white spaces with | and ignore the rest.</p>
<p>This is the simple code I used :</p>
<pre><code>import re
text = "09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009"
i = 0
while i< 3:
text = re.sub(' ', '|', text)
text = re.sub(' ', '|', text)
i +=1
print text
</code></pre>
<p>I got this output :</p>
<pre><code>09-15-16|05:23:44|A:VCOM|||||09064|Port|4|Device|10400|Remote|1|10401|Link|Up|RP2009
</code></pre>
| 0 | 2016-10-03T03:37:58Z | 39,824,357 | <p>Maybe try something like this: </p>
<pre><code>text = "09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009"
text = text.replace(" ", "|", 3)
text = text.replace(" ", "")
text = text.replace(" ", "|", 1)
</code></pre>
| 1 | 2016-10-03T03:50:05Z | [
"python",
"regex"
]
|
Replace the first 3 white space coming with a character and ignore other white spaces in a text line with python | 39,824,284 | <p>I have this line text: </p>
<pre><code>09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>the output should be like this:</p>
<pre><code>09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>It should just replace the first 4 white spaces with | and ignore the rest.</p>
<p>This is the simple code I used :</p>
<pre><code>import re
text = "09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009"
i = 0
while i< 3:
text = re.sub(' ', '|', text)
text = re.sub(' ', '|', text)
i +=1
print text
</code></pre>
<p>I got this output :</p>
<pre><code>09-15-16|05:23:44|A:VCOM|||||09064|Port|4|Device|10400|Remote|1|10401|Link|Up|RP2009
</code></pre>
| 0 | 2016-10-03T03:37:58Z | 39,824,413 | <p>you can try this</p>
<pre><code>str = '09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009'
idx=0
newStr =''
for token in str.split(' '):
if(token!=''):
if(idx <4):
newStr=newStr + token+'|'
idx+=1
else:
newStr = newStr + token+' '
idx+=1
print(newStr)
</code></pre>
<p>desired output</p>
<pre><code>09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2
</code></pre>
| 1 | 2016-10-03T03:57:55Z | [
"python",
"regex"
]
|
Replace the first 3 white space coming with a character and ignore other white spaces in a text line with python | 39,824,284 | <p>I have this line text: </p>
<pre><code>09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>the output should be like this:</p>
<pre><code>09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009
</code></pre>
<p>It should just replace the first 4 white spaces with | and ignore the rest.</p>
<p>This is the simple code I used :</p>
<pre><code>import re
text = "09-15-16 05:23:44 A:VCOM 09064 Port 4 Device 10400 Remote 1 10401 Link Up RP2009"
i = 0
while i< 3:
text = re.sub(' ', '|', text)
text = re.sub(' ', '|', text)
i +=1
print text
</code></pre>
<p>I got this output :</p>
<pre><code>09-15-16|05:23:44|A:VCOM|||||09064|Port|4|Device|10400|Remote|1|10401|Link|Up|RP2009
</code></pre>
| 0 | 2016-10-03T03:37:58Z | 39,824,676 | <pre><code>import re
print re.sub(r' +','|',text,4)
</code></pre>
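For reference, the same one-liner in Python 3 syntax with the count passed by keyword (the sample line here is assumed to use single spaces between the fields):

```python
import re

text = ("09-15-16 05:23:44 A:VCOM 09064 "
        "Port 4 Device 10400 Remote 1 10401 Link Up RP2009")

# count=4 limits the substitution to the first four runs of spaces
result = re.sub(r' +', '|', text, count=4)
print(result)
# -> 09-15-16|05:23:44|A:VCOM|09064|Port 4 Device 10400 Remote 1 10401 Link Up RP2009
```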
| 0 | 2016-10-03T04:37:27Z | [
"python",
"regex"
]
|
How to implement SQL level expression for this hybrid property in SQLAlchemy? | 39,824,292 | <p>I have a ledger table and a corresponding python class.
I defined the model using SQLAlchemy, as follows,</p>
<pre><code>class Ledger(Base):
__tablename__ = 'ledger'
currency_exchange_rate_lookup = {('CNY', 'CAD'): 0.2}
amount = Column(Numeric(10, 2), nullable=False)
currency = Column(String, nullable=False)
payment_method = Column(String)
notes = Column(UnicodeText)
@hybrid_property
def amountInCAD(self):
if self.currency == 'CAD':
return self.amount
exchange_rate = self.currency_exchange_rate_lookup[(self.currency, 'CAD')]
CAD_value = self.amount * Decimal(exchange_rate)
CAD_value = round(CAD_value, 2)
return CAD_value
@amountInCAD.expression
def amountInCAD(cls):
amount = cls.__table__.c.amount
currency_name = cls.__table__.c.currency
exchange_rate = cls.currency_exchange_rate_lookup[(currency_name, 'CAD')]
return case([
(cls.currency == 'CAD', amount),
], else_ = round((amount * Decimal(exchange_rate)),2))
</code></pre>
<p>Now as you can see, I want to create a hybrid property called "amountInCAD". The Python level getter seems to be working fine. However the SQL expression doesn't work.</p>
<p>Now if I run a query like this:</p>
<pre><code>>>>db_session.query(Ledger).filter(Ledger.amountInCAD > 1000)
</code></pre>
<p>SQLAlchemy gives me this error:</p>
<pre><code> File "ledger_db.py", line 43, in amountInCAD
exchange_rate = cls.currency_exchange_rate_lookup[(currency_name, 'CAD')]
KeyError: (Column('currency', String(), table=<ledger>, nullable=False), 'CAD')
</code></pre>
<p>I've researched SQLAlchemy's online documentation regarding hybrid property. <a href="http://docs.sqlalchemy.org/en/latest/orm/mapped_sql_expr.html#using-a-hybrid" rel="nofollow">http://docs.sqlalchemy.org/en/latest/orm/mapped_sql_expr.html#using-a-hybrid</a>
Comparing my code to the example code, I don't understand why mine doesn't work. If in the official example <code>cls.firstname</code> can refer to a column's value, why does <code>cls.__table__.c.currency</code> in my code only return a <code>Column</code>, not its value?</p>
| 0 | 2016-10-03T03:40:09Z | 39,840,780 | <p><code>cls.firstname</code> does not "refer to a value"; it refers to the <code>Column</code> object itself. <code>cls.firstname + " " + cls.lastname</code> in the <a href="http://docs.sqlalchemy.org/en/latest/orm/mapped_sql_expr.html#using-a-hybrid" rel="nofollow">example</a> produces a string concatenation SQL expression along the lines of:
</p>
<pre><code>firstname || ' ' || lastname
</code></pre>
<p>That is part of the magic of hybrid properties: they make it relatively easy to write simple expressions that can work in both domains, but you still have to understand when you're handling a python instance and when building an SQL expression.</p>
<p>You could rethink your own hybrid a bit and actually pass the conversion options to the DB in your <code>case</code> expression:</p>
<pre><code>from sqlalchemy import func
...
@amountInCAD.expression
def amountInCAD(cls):
# This builds a list of (predicate, expression) tuples for case. The
# predicates compare each row's `currency` column against the bound
# `from_` currencies in SQL.
exchange_rates = [(cls.currency == from_,
# Note that this does not call python's round, but
# creates an SQL function expression. It also does not
# perform a multiplication, but produces an SQL expression
# `amount * :rate`. Not quite sure
# why you had the Decimal conversion, so kept it.
func.round(cls.amount * Decimal(rate), 2))
for (from_, to_), rate in
cls.currency_exchange_rate_lookup.items()
# Include only conversions to 'CAD'
if to_ == 'CAD']
return case(exchange_rates + [
# The default for 'CAD'
(cls.currency == 'CAD', cls.amount),
])
</code></pre>
<p>This way you effectively pass your exchange rate lookup as a <code>CASE</code> expression to SQL.</p>
| 1 | 2016-10-03T21:34:39Z | [
"python",
"sqlalchemy",
"descriptor"
]
|
Spark problems with imports in Python | 39,824,381 | <p>We are running a spark-submit command on a python script that uses Spark to parallelize object detection in Python using Caffe. The script itself runs perfectly fine if run in a Python-only script, but it returns an import error when using it with Spark code. I know the spark code is not the problem because it works perfectly fine on my home machine, but it is not functioning well on AWS. I am not sure if this somehow has to do with the environment variables, it is as if it doesn't detect them.</p>
<p>These environment variables are set: </p>
<pre><code>SPARK_HOME=/opt/spark/spark-2.0.0-bin-hadoop2.7
PATH=$SPARK_HOME/bin:$PATH
PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
PYTHONPATH=/opt/caffe/python:${PYTHONPATH}
</code></pre>
<p>Error:</p>
<pre><code>16/10/03 01:36:21 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 172.31.50.167): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 161, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 54, in read_command
command = serializer._read_with_length(file)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 664, in subimport
__import__(name)
ImportError: ('No module named caffe', <function subimport at 0x7efc34a68b90>, ('caffe',))
</code></pre>
<p>Does anyone know why this would be an issue?</p>
<p>This package from Yahoo manages what we're trying to do by shipping Caffe as a jar dependency and then uses it again in Python. But I haven't found any resources on how to build it and import it ourselves.</p>
<p><a href="https://github.com/yahoo/CaffeOnSpark">https://github.com/yahoo/CaffeOnSpark</a></p>
| 8 | 2016-10-03T03:54:02Z | 40,054,364 | <p>You probably havenât compiled the caffe python wrappers in your AWS environment. For reasons that completely escape me (and several others, <a href="https://github.com/BVLC/caffe/issues/2440" rel="nofollow">https://github.com/BVLC/caffe/issues/2440</a>) pycaffe is not available as a pypi package, and you have to compile it yourself. You should follow the compilation/make instructions here or automate it using ebextensions if you are in an AWS EB environment: <a href="http://caffe.berkeleyvision.org/installation.html#python" rel="nofollow">http://caffe.berkeleyvision.org/installation.html#python</a></p>
| 4 | 2016-10-15T02:33:56Z | [
"python",
"apache-spark",
"pyspark",
"caffe",
"pycaffe"
]
|
Replacing Elements in 3 dimensional Python List | 39,824,469 | <p>I've a Python list if converted to NumPy array would have the following dimensions: (5, 47151, 10)</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
len(y_pred_list)
# returns 5
</code></pre>
<p>I would like to go through every element and replace the element where:</p>
<ul>
<li>If the element >= 0.5 then 1.</li>
<li>If the element < 0.5 then 0.</li>
</ul>
<p>Any idea? </p>
| 0 | 2016-10-03T04:04:02Z | 39,824,585 | <p>To create an array with a value True if the element is >= 0.5, and False otherwise:</p>
<pre><code>new_array = np.asarray(y_pred_list) >= 0.5  # convert the list to an array first
</code></pre>
<p>use the .astype() method for Numpy arrays to make all True elements 1 and all False elements 0:</p>
<pre><code>new_array.astype(int)
</code></pre>
| 1 | 2016-10-03T04:21:54Z | [
"python",
"arrays",
"list",
"numpy"
]
|
Replacing Elements in 3 dimensional Python List | 39,824,469 | <p>I've a Python list if converted to NumPy array would have the following dimensions: (5, 47151, 10)</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
len(y_pred_list)
# returns 5
</code></pre>
<p>I would like to go through every element and replace the element where:</p>
<ul>
<li>If the element >= 0.5 then 1.</li>
<li>If the element < 0.5 then 0.</li>
</ul>
<p>Any idea? </p>
| 0 | 2016-10-03T04:04:02Z | 39,825,191 | <pre><code>arr=np.array(y_pred_list) #list to narray
arr[arr<0.5]=0 # arr<0.5 is a mask narray
arr[arr>=0.5]=1
y_pred_list=arr.tolist() # narray to list
</code></pre>
| -2 | 2016-10-03T05:42:11Z | [
"python",
"arrays",
"list",
"numpy"
]
|
Replacing Elements in 3 dimensional Python List | 39,824,469 | <p>I've a Python list if converted to NumPy array would have the following dimensions: (5, 47151, 10)</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
len(y_pred_list)
# returns 5
</code></pre>
<p>I would like to go through every element and replace the element where:</p>
<ul>
<li>If the element >= 0.5 then 1.</li>
<li>If the element < 0.5 then 0.</li>
</ul>
<p>Any idea? </p>
| 0 | 2016-10-03T04:04:02Z | 39,827,874 | <p>ibredeson's answer is the way to go in your specific case. When you have an array a and want to construct an array b of the same shape which takes only two values, depending on a condition on a, consider using <code>np.where</code> (see <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">the doc here</a>):</p>
<pre><code>import numpy as np
a = np.array([0, 1, 0.3, 0.5])
b = np.where(a > 0.5, 2, 7) # b gets the value 2 where a > 0.5,
                            # and the value 7 everywhere else.
>>> b
array([7, 2, 7, 7])
</code></pre>
| 0 | 2016-10-03T09:00:27Z | [
"python",
"arrays",
"list",
"numpy"
]
|
Scipy sparse matrix to the power spare matrix | 39,824,486 | <p>I have a scipy.sparse.csc.csc_matrix. I want to use a power function on each element of this matrix such that each element is raised to the power of itself. How should I do that?</p>
<p>I tried this:</p>
<pre><code>B.data ** B.data
</code></pre>
<p>But this removes 0 from the data</p>
| 0 | 2016-10-03T04:06:22Z | 39,825,692 | <p>I have no problems with that approach:</p>
<pre><code>In [155]: B=sparse.random(10,10,.1,'csc')
In [156]: B
Out[156]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 10 stored elements in Compressed Sparse Column format>
In [157]: B.data
Out[157]:
array([ 0.79437782, 0.74414493, 0.3922551 , 0.61980213, 0.45231045,
0.94498933, 0.53086532, 0.54611246, 0.52941419, 0.81069106])
In [158]: B.data=B.data**B.data
In [159]: B.data
Out[159]:
array([ 0.83288253, 0.80259158, 0.69274789, 0.74342645, 0.69847422,
0.94793528, 0.71450246, 0.71866496, 0.71412372, 0.84354814])
In [160]: B
Out[160]:
<10x10 sparse matrix of type '<class 'numpy.float64'>'
with 10 stored elements in Compressed Sparse Column format>
</code></pre>
| 1 | 2016-10-03T06:33:03Z | [
"python",
"scipy"
]
|
python matplotlib legend linestyle '---' | 39,824,599 | <p>I need a way to make a matplotlib legend show the line style '---', i.e. three '-' characters.</p>
<pre><code>character description
'-' solid line style
'--' dashed line style
'-.' dash-dot line style
':' dotted line style
etc.
</code></pre>
<p>I can see '-' and '--' in the list, but in the upper right my legend comes up like " -- red dotted line" (if I write linestyle = '--'). I want '--- red dotted line' in my legend box.</p>
<p>Is there any way I can make the legend show three dashes?
Here's what I'm doing.</p>
<pre><code>import matplotlib.pyplot as mpt
def main():
mpt.title("hi")
mpt.xlabel("x axis")
mpt.ylim([0,50])
mpt.xlim([0,10])
mpt.ylabel("y axis")
mpt.plot([1,2,3,4],[2,3,4,5],'r', linestyle = '???????')
mpt.legend(["red dotted line"])
mpt.show()
main()
</code></pre>
| 0 | 2016-10-03T04:25:24Z | 39,824,845 | <p>Use <code>mpt.legend(handlelength=3)</code> and <code>linestyle='--'</code></p>
<pre><code>mpt.plot([1,2,3,4],[2,3,4,5],'r', linestyle='--')
mpt.legend(["red dotted line"], handlelength=3)
</code></pre>
| 0 | 2016-10-03T05:00:39Z | [
"python",
"matplotlib"
]
|
ValueError: Cannot have number of splits n_splits=3 greater than the number of samples: 1 | 39,824,600 | <p>I am trying this training modeling using train_test_split and a decision tree regressor:</p>
<pre><code>import sklearn
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = samples.drop('Fresh', 1)
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, samples['Fresh'], test_size=0.25, random_state=0)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=0)
regressor = regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = cross_val_score(regressor, X_test, y_test, cv=3)
print score
</code></pre>
<p>When running this, I am getting the error:</p>
<pre><code>ValueError: Cannot have number of splits n_splits=3 greater than the number of samples: 1.
</code></pre>
<p>If I change the value of cv to 1, I get:</p>
<pre><code>ValueError: k-fold cross-validation requires at least one train/test split by setting n_splits=2 or more, got n_splits=1.
</code></pre>
<p>Some sample rows of the data look like: </p>
<pre><code> Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
0 14755 899 1382 1765 56 749
1 1838 6380 2824 1218 1216 295
2 22096 3575 7041 11422 343 2564
</code></pre>
| 0 | 2016-10-03T04:25:28Z | 39,824,687 | <p>If the number of split is greater than number of sample, you will get the first error. Check the snippet from the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/model_selection/_split.py#L315" rel="nofollow">source code</a> given below.</p>
<pre><code>if self.n_splits > n_samples:
raise ValueError(
("Cannot have number of splits n_splits={0} greater"
" than the number of samples: {1}.").format(self.n_splits,
n_samples))
</code></pre>
<p>If the number of folds is less than or equal to <code>1</code>, you will get the second error. In your case, <code>cv = 1</code>. Check the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cross_validation.py#L249" rel="nofollow">source code</a>.</p>
<pre><code>if n_folds <= 1:
raise ValueError(
"k-fold cross validation requires at least one"
" train / test split by setting n_folds=2 or more,"
" got n_folds={0}.".format(n_folds))
</code></pre>
<p>An educated guess: the number of samples in <code>X_test</code> is less than <code>3</code>. Check that carefully.</p>
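<p>The constraint is easy to demonstrate directly (a sketch of my own, assuming scikit-learn >= 0.18 where <code>model_selection</code> exists):</p>

```python
import numpy as np
from sklearn.model_selection import KFold

X_small = np.arange(2).reshape(2, 1)      # only 2 samples

# 3 splits on 2 samples reproduces the first error
try:
    list(KFold(n_splits=3).split(X_small))
    raised = False
except ValueError:
    raised = True
assert raised

# with at least n_splits samples, splitting works
X_ok = np.arange(6).reshape(6, 1)
folds = list(KFold(n_splits=3).split(X_ok))
assert len(folds) == 3
```

<p>In the question's setup, either reduce <code>cv</code> to at most the number of rows in <code>X_test</code>, or run <code>cross_val_score</code> on the full dataset rather than on the small test split.</p>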
| 0 | 2016-10-03T04:39:13Z | [
"python",
"scikit-learn",
"cross-validation",
"sklearn-pandas"
]
|
introducing synchronous chained execution in celery workflow | 39,824,624 | <p>Suppose there are 4 tasks <code>T1</code>, <code>T2</code>, <code>T3</code>, <code>T4</code>. They are chained together as <code>T1.si() | T2.si() | T3.si() | T4.si()</code>. <code>T3</code> spawns further tasks <code>T30 .. T3n</code> asynchronously like <code>chord(T30,...,T3n)(reduce.s())</code>. I don't know <code>n</code> in advance (i.e., the no of subtasks <code>T3i</code>s that would be spawned).</p>
<p>I want <code>T4</code> to execute only after all of the <code>T3i..T3n</code> tasks complete. As expected, <code>T3</code> returns immediately due to the asynchronous behavior and then T4 starts executing before the chord is complete.</p>
<p>I could add a synchronous task that just does <code>T3.get()</code> before <code>T4</code>, but it will block one of the worker processes.</p>
<p>Is there a way to fix this design to avoid the blocking task, or a better design?</p>
| 1 | 2016-10-03T04:29:57Z | 39,832,625 | <p>I will improve @jenner-felton's comment a bit...</p>
<p>You may call it like this:</p>
<pre><code>chain(T1.s(), T2.s(), T3.s(T4.s()))
</code></pre>
<p>e.g. <code>T4.s()</code> passed as one of parameters to the T3 task.</p>
<p>And T3 will run a <code>chord</code> itself with T4 passed as a callback.</p>
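<p>A rough sketch of what that T3 could look like (illustrative only — <code>app</code>, <code>T3i</code>, <code>reduce</code> and <code>make_subtasks</code> are stand-ins for the question's tasks, and this needs a running Celery app and broker):</p>

```python
from celery import chord

@app.task
def T3(prev_result, callback):
    header = [T3i.s(arg) for arg in make_subtasks(prev_result)]
    # The chord body runs only after every T3i finishes; chaining the
    # passed-in T4 signature after reduce keeps the whole flow non-blocking.
    chord(header)(reduce.s() | callback)
```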
| 0 | 2016-10-03T13:17:29Z | [
"python",
"asynchronous",
"celery"
]
|
solving multiple equations with many variables and inequality constraints | 39,824,681 | <p>I am trying to solve a problem with many variables using scipy and linear programming. I have a set of variables X which are real numbers between 0.5 and 3 and I have to solve the following equations :</p>
<pre><code>346 <= x0*C0 + x1*C1 + x2*C2 +......xN*CN <= 468
25 <= x0*p0 + x1*p1 + x2*p2 +......xN*pN <= 33
12 <= x0*c0 + x1*c1 + x2*c2 +......xN*cN <= 17
22 <= x0*f0 + x1*f1 + x2*f2 +......xN*fN <= 30
</code></pre>
<p>the numbers C0...CN , p0...pN , c0...cN , f0...fN are already given to me. I tried to solve this in the following way:</p>
<pre><code>import numpy as np
from scipy.optimize import linprog
from numpy.linalg import solve
A_ub = np.array([
[34, 56, 32, 21, 24, 16, 19, 22, 30, 27, 40, 33],
[2, 3, 2, 1.5, 3, 4, 1, 2, 2.5, 1, 1.2, 1.3],
[1, 2, 3, 1.2, 2, 3, 0.6, 1, 1, 1.2, 1.1, 0.8],
[0.5, 2, 2, 1, 3, 4, 1, 1, 1, 0.5, 0.3, 1.2],
[-34, -56, -32, -21, -24, -16, -19, -22, -30, -27, -40, -33],
[-2, -3, -2, -1.5, -3, -4, -1, -2, -2.5, -1, -1.2, -1.3],
[-1, -2, -3, -1.2, -2, -3, -0.6, -1, -1, -1.2, -1.1, -0.8],
[-0.5, -2, -2, -1, -3, -4, -1, -1, -1, -0.5, -0.3, -1.2]])
b_ub = np.array([468, 33, 17, 30, -346, -25, -12, -22])
c = np.array([34, 56, 32, 21, 24, 16, 19, 22, 30, 27, 40, 33])
res = linprog(c, A_eq=None, b_eq=None, A_ub=A_ub, b_ub=b_ub, bounds=(0.5, 3))
</code></pre>
<p>Explanation for the equations: the first row of A_ub is the same as c, as we are trying to maximize the expression while making sure it stays within the given boundary limits, i.e. 468 and 346, meaning that I want to get the value as close as possible to the upper limit.</p>
<p>I put <code>[-34, -56, -32, -21, -24, -16, -19, -22, -30, -27, -40, -33]</code> in A_ub and -346 in b_ub with the logic :</p>
<p><code>-346 > -(x0*C0 + x1*C1 + x2*C2 +......xN*CN)</code> which would solve the problem of lower bounds for the equation. I do the same with the rest of them.</p>
<p>But I feel my approach is wrong as I get the answer as <code>0.425</code> for <code>res.fun</code> and <code>nan</code> as the value of <code>res.x</code></p>
<p>The upper bound for x is 3 and the lower bound is 0.5 </p>
<p>How do I define the problem as shown above in order to get a maximum value close to 468 while keeping in mind the upper bounds? How do I define lower bounds using scipy? I am working on linear programming for the first time so I may have missed out on ideas that can help me out.</p>
<p>I am also open to any other solutions.</p>
| 1 | 2016-10-03T04:38:10Z | 39,825,221 | <p>This system of inequalities is not feasible: there is no solution that satisfies all constraints. You can see that from <code>res</code>:</p>
<pre><code> fun: 0.42500000000000243
message: 'Optimization failed. Unable to find a feasible starting point.'
nit: 28
status: 2
success: False
x: nan
</code></pre>
<p>I believe this is a correct result (I verified this with another LP system).</p>
<p>Note: if you change the bounds to <code>(0,3)</code>, you will get a feasible solution.</p>
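<p>Both findings can be reproduced with the question's data (my own re-run, building the negated lower-bound rows by stacking):</p>

```python
import numpy as np
from scipy.optimize import linprog

coeffs = np.array([
    [34, 56, 32, 21, 24, 16, 19, 22, 30, 27, 40, 33],
    [2, 3, 2, 1.5, 3, 4, 1, 2, 2.5, 1, 1.2, 1.3],
    [1, 2, 3, 1.2, 2, 3, 0.6, 1, 1, 1.2, 1.1, 0.8],
    [0.5, 2, 2, 1, 3, 4, 1, 1, 1, 0.5, 0.3, 1.2],
])
A_ub = np.vstack([coeffs, -coeffs])      # upper bounds, then negated lower bounds
b_ub = np.array([468, 33, 17, 30, -346, -25, -12, -22])
c = coeffs[0]

# original bounds: no feasible point exists
res_tight = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.5, 3))
assert not res_tight.success

# relaxed lower bound: a feasible (and optimal) point is found
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 3))
assert res.success
assert 346 - 1e-4 <= c @ res.x <= 468 + 1e-4
```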
| 3 | 2016-10-03T05:45:44Z | [
"python",
"scipy",
"linear-algebra",
"linear-programming"
]
|
Filter list's elements by type of each element | 39,824,683 | <p>I have list with different types of data (string, int, etc.). I need to create a new list with, for example, only int elements, and another list with only string elements. How to do it?</p>
| -1 | 2016-10-03T04:38:15Z | 39,824,724 | <p>You can accomplish this with <a href="https://docs.python.org/3.5/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>:</p>
<pre><code>integers = [elm for elm in data if isinstance(elm, int)]
</code></pre>
<p>Where <code>data</code> is the data. What the above does is create a new list, populate it with elements of <code>data</code> (<code>elm</code>) that meet the condition after the <code>if</code>, which is checking if element is instance of an <code>int</code>. You can also use <a href="https://docs.python.org/3.5/library/functions.html#filter" rel="nofollow"><code>filter</code></a>:</p>
<pre><code>integers = list(filter(lambda elm: isinstance(elm, int), data))
</code></pre>
<p>The above will filter out elements based on the passed lambda, which filters out all non-integers. You can then apply it to the strings too, using <code>isinstance(elm, str)</code> to check if instance of string.</p>
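<p>Putting both lists together (one caveat worth knowing: <code>isinstance(True, int)</code> is <code>True</code>, so use <code>type(e) is int</code> instead if booleans must be excluded):</p>

```python
data = ['a', 1, 2.5, 'b', 3]

ints = [e for e in data if isinstance(e, int)]
strs = [e for e in data if isinstance(e, str)]

assert ints == [1, 3]
assert strs == ['a', 'b']
```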
| 1 | 2016-10-03T04:44:29Z | [
"python"
]
|
Filter list's elements by type of each element | 39,824,683 | <p>I have a list with different types of data (string, int, etc.). I need to create a new list with, for example, only the int elements, and another list with only the string elements. How do I do it?</p>
| -1 | 2016-10-03T04:38:15Z | 39,825,022 | <p>Sort the list by type, and then use <code>groupby</code> to group it:</p>
<pre><code>>>> import itertools
>>> l = ['a', 1, 2, 'b', 'e', 9.2, 'l']
>>> l.sort(key=lambda x: str(type(x)))
>>> lists = [list(v) for k,v in itertools.groupby(l, lambda x: str(type(x)))]
>>> lists
[[9.2], [1, 2], ['a', 'b', 'e', 'l']]
</code></pre>
| 2 | 2016-10-03T05:22:25Z | [
"python"
]
|
python: could not broadcast input array from shape (3,1) into shape (3,) | 39,824,700 | <pre><code>import numpy as np
def qrhouse(A):
(m,n) = A.shape
R = A
V = np.zeros((m,n))
for k in range(0,min(m-1,n)):
x = R[k:m,k]
x.shape = (m-k,1)
v = x + np.sin(x[0])*np.linalg.norm(x.T)*np.eye(m-k,1)
V[k:m,k] = v
R[k:m,k:n] = R[k:m,k:n]-(2*v)*(np.transpose(v)*R[k:m,k:n])/(np.transpose(v)*v)
R = np.triu(R[0:n,0:n])
return V, R
A = np.array( [[1,1,2],[4,3,1],[1,6,6]] )
print qrhouse(A)
</code></pre>
<p>It's QR factorization Python code, and I don't know why the error happens.
The ValueError occurs at <code>V[k:m,k] = v</code>:</p>
<pre><code>value error :
could not broadcast input array from shape (3,1) into shape (3)
</code></pre>
| -2 | 2016-10-03T04:40:27Z | 39,825,046 | <p><code>V[k:m,k] = v</code>; <code>v</code> has shape (3,1), but the target is (3,). <code>k:m</code> is a 3 term slice; <code>k</code> is a scalar.</p>
<p>Try using <code>v.ravel()</code>. Or <code>V[k:m,[k]]</code>. </p>
<p>But also understand why <code>v</code> has its shape.</p>
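<p>A minimal reproduction of the mismatch and both fixes (my own example):</p>

```python
import numpy as np

V = np.zeros((3, 3))
v = np.ones((3, 1))          # column vector, shape (3, 1)

# V[0:3, 0] has shape (3,), so assigning the (3, 1) array fails:
try:
    V[0:3, 0] = v
    raised = False
except ValueError:
    raised = True
assert raised

V[0:3, 0] = v.ravel()        # flatten the source to shape (3,) ...
V[0:3, [0]] = v              # ... or keep a (3, 1) target instead
assert V[:, 0].tolist() == [1.0, 1.0, 1.0]
```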
| 0 | 2016-10-03T05:25:01Z | [
"python",
"numpy"
]
|
How to add a new column (Python list) to a Postgresql table? | 39,824,748 | <p>I have a Python list <code>newcol</code> that I want to add to an existing Postgresql table. I have used the following code:</p>
<pre><code>conn = psycopg2.connect(host='***', database='***', user='***', password='***')
cur = conn.cursor()
cur.execute('ALTER TABLE %s ADD COLUMN %s text' % ('mytable', 'newcol'))
conn.commit()
</code></pre>
<p>This added the list <code>newcol</code> to my table, however the new column has no values in it. In python, when I print the the list in python, it is a populated list.</p>
<p>Also, the number of rows in the table and in the list I want to add are the same. I'm a little confused.</p>
<p>Thanks in advance for the help.</p>
| 0 | 2016-10-03T04:47:24Z | 39,825,439 | <p><code>ALTER TABLE</code> only changes table schema -- in your case it will create the new column and initialize it with empty (NULL) values.</p>
<p>To add a list of values to this column you can run
<code>UPDATE <table> SET ...</code> statements in a loop (note the SQL keyword is <code>UPDATE</code>, not <code>UPDATE TABLE</code>).</p>
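<p>For illustration, a minimal sketch of that loop (assuming the table has an integer primary key column <code>id</code>, and that <code>newcol</code> is ordered to match those ids — both are assumptions, not stated in the question). The SQL and parameters are built separately here so the pairing can be checked, and then fed to <code>executemany</code>:</p>

```python
def build_update(table, column, key, ids, values):
    """Pair each row id with its new value for an UPDATE ... WHERE loop.

    Assumes ids[i] is the primary key of the row that should get values[i].
    """
    sql = "UPDATE {} SET {} = %s WHERE {} = %s".format(table, column, key)
    return sql, list(zip(values, ids))

sql, params = build_update('mytable', 'newcol', 'id', [1, 2, 3], ['a', 'b', 'c'])
assert sql == "UPDATE mytable SET newcol = %s WHERE id = %s"
assert params == [('a', 1), ('b', 2), ('c', 3)]

# then, with the cursor from the question:
# cur.executemany(sql, params)
# conn.commit()
```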
| 0 | 2016-10-03T06:11:04Z | [
"python",
"postgresql"
]
|
Outer merge on large pandas DataFrames causes MemoryError---how to do "big data" merges with pandas? | 39,824,952 | <p>I have two pandas DataFrames <code>df1</code> and <code>df2</code> with a fairly standard format:</p>
<pre><code> one two three feature
A 1 2 3 feature1
B 4 5 6 feature2
C 7 8 9 feature3
D 10 11 12 feature4
E 13 14 15 feature5
F 16 17 18 feature6
...
</code></pre>
<p>And the same format for <code>df2</code>. The sizes of these DataFrames are around 175MB and 140 MB. </p>
<pre><code>merged_df = pd.merge(df1, df2, on='feature', how='outer', suffixes=('','_features'))
</code></pre>
<p>I get the following MemoryError: </p>
<pre><code>File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 39, in merge
return op.get_result()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 217, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 353, in _get_join_info
sort=self.sort, how=self.how)
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 559, in _get_join_indexers
return join_func(lkey, rkey, count, **kwargs)
File "pandas/src/join.pyx", line 187, in pandas.algos.full_outer_join (pandas/algos.c:61680)
File "pandas/src/join.pyx", line 196, in pandas.algos._get_result_indexer (pandas/algos.c:61978)
MemoryError
</code></pre>
<p>Is it possible there is a "size limit" for pandas dataframes when merging? I am surprised that this wouldn't work. Maybe this is a bug in a certain version of pandas?</p>
<p>EDIT: As mentioned in the comments, many duplicates in the merge column can easily cause RAM issues. See: <a href="http://stackoverflow.com/questions/32750970/python-pandas-merge-causing-memory-overflow">Python Pandas Merge Causing Memory Overflow</a></p>
<p>The question now is, how can we do this merge? It seems the best way would be to partition the dataframe, somehow.</p>
| 0 | 2016-10-03T05:16:27Z | 39,825,139 | <p>Try specifying a data type for the numeric columns to reduce the size of the existing data frames, such as: </p>
<pre><code>df[['one','two', 'three']] = df[['one','two', 'three']].astype(np.int32)
</code></pre>
<p>This should reduce the memory significantly and will hopefully let you perform the merge. </p>
| 1 | 2016-10-03T05:36:28Z | [
"python",
"pandas",
"memory",
"dataframe",
"out-of-memory"
]
|
Outer merge on large pandas DataFrames causes MemoryError---how to do "big data" merges with pandas? | 39,824,952 | <p>I have two pandas DataFrames <code>df1</code> and <code>df2</code> with a fairly standard format:</p>
<pre><code> one two three feature
A 1 2 3 feature1
B 4 5 6 feature2
C 7 8 9 feature3
D 10 11 12 feature4
E 13 14 15 feature5
F 16 17 18 feature6
...
</code></pre>
<p>And the same format for <code>df2</code>. The sizes of these DataFrames are around 175MB and 140 MB. </p>
<pre><code>merged_df = pd.merge(df1, df2, on='feature', how='outer', suffixes=('','_features'))
</code></pre>
<p>I get the following MemoryError: </p>
<pre><code>File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 39, in merge
return op.get_result()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 217, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 353, in _get_join_info
sort=self.sort, how=self.how)
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 559, in _get_join_indexers
return join_func(lkey, rkey, count, **kwargs)
File "pandas/src/join.pyx", line 187, in pandas.algos.full_outer_join (pandas/algos.c:61680)
File "pandas/src/join.pyx", line 196, in pandas.algos._get_result_indexer (pandas/algos.c:61978)
MemoryError
</code></pre>
<p>Is it possible there is a "size limit" for pandas dataframes when merging? I am surprised that this wouldn't work. Maybe this is a bug in a certain version of pandas?</p>
<p>EDIT: As mentioned in the comments, many duplicates in the merge column can easily cause RAM issues. See: <a href="http://stackoverflow.com/questions/32750970/python-pandas-merge-causing-memory-overflow">Python Pandas Merge Causing Memory Overflow</a></p>
<p>The question now is, how can we do this merge? It seems the best way would be to partition the dataframe, somehow.</p>
| 0 | 2016-10-03T05:16:27Z | 39,825,304 | <p>You can try first filter <code>df1</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html" rel="nofollow"><code>unique</code></a> values, <code>merge</code> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> output.</p>
<p>If need only outer join, I think there is memory problem also. But if add some other code for filter output of each loop, it can works.</p>
<pre><code>dfs = []
for val in df.feature.unique():
df1 = pd.merge(df[df.feature==val], df2, on='feature', how='outer', suffixes=('','_key'))
#http://stackoverflow.com/a/39786538/2901002
#df1 = df1[(df1.start <= df1.start_key) & (df1.end <= df1.end_key)]
print (df1)
dfs.append(df1)
df = pd.concat(dfs, ignore_index=True)
print (df)
</code></pre>
<hr>
<p>Other solution is use <a href="http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.merge" rel="nofollow"><code>dask.dataframe.DataFrame.merge</code></a>.</p>
| 1 | 2016-10-03T05:55:28Z | [
"python",
"pandas",
"memory",
"dataframe",
"out-of-memory"
]
|
Vocabulary Processor function | 39,825,043 | <p>I am researching embedding input for Convolutional Neural Networks and I understand Word2vec. However, in <a href="https://github.com/dennybritz/cnn-text-classification-tf/blob/master/train.py" rel="nofollow">CNN text classification</a>, dennybritz used the function <code>learn.preprocessing.VocabularyProcessor</code>. In the <a href="http://tflearn.org/data_utils/" rel="nofollow">documentation</a> they say it "Maps documents to sequences of word ids". I am not quite sure how this function works. Does it create a list of ids and then map the ids to words, or does it have a dictionary of words and their ids, so that when you run the function it only gives the ids?</p>
| 0 | 2016-10-03T05:24:53Z | 39,826,331 | <p>Lets say that you have just two documents <code>I like pizza</code> and <code>I like Pasta</code>. Your whole vocabulary consists of these words <code>(I, like, pizza, pasta)</code> For every word in the vocabulary, there is an index associated like so (1, 2, 3, 4). Now given a document like <code>I like pasta</code> it can be converted into a vector [1, 2, 4]. This is what the <code>learn.preprocessing.VocabularyProcessor</code> does. The parameter <code>max_document_length</code> makes sure that all the documents are represented by a vector of length <code>max_document_length</code> either by padding numbers if their length is shorter than <code>max_document_length</code> and clipping them if their length is greater than <code>max_document_length</code> Hope this helps you </p>
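<p>The behavior described above can be sketched in a few lines of plain Python (my own illustration of the idea, not the actual tf.contrib implementation; here id 0 is reserved for padding and unknown words):</p>

```python
def fit_vocabulary(docs):
    vocab = {}
    for doc in docs:
        for word in doc.split():
            vocab.setdefault(word, len(vocab) + 1)   # ids start at 1; 0 = padding
    return vocab

def transform(docs, vocab, max_document_length):
    out = []
    for doc in docs:
        ids = [vocab.get(w, 0) for w in doc.split()][:max_document_length]  # clip
        ids += [0] * (max_document_length - len(ids))                       # pad
        out.append(ids)
    return out

docs = ["I like pizza", "I like pasta"]
vocab = fit_vocabulary(docs)
assert vocab == {"I": 1, "like": 2, "pizza": 3, "pasta": 4}
assert transform(["I like pasta"], vocab, 4) == [[1, 2, 4, 0]]
```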
| 2 | 2016-10-03T07:20:43Z | [
"python",
"tensorflow",
"text-classification"
]
|
Strange behaviour while reading data from Cassandra Columnfamily | 39,825,307 | <p>I am trying to do the following query on column-family using cqlsh (<code>cqlsh 5.0.1 | Cassandra 2.1.12 | CQL spec 3.2.1 | Native protocol v3</code>)</p>
<p><strong>Query :</strong></p>
<pre><code>select * from CassandraColumnFamily limit 10
</code></pre>
<p>But it gives the following error</p>
<p><strong>error :</strong></p>
<pre><code>ReadTimeout: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
</code></pre>
<p>Whereas I am able to read the data using the following Python script, I am not able to figure out what the issue could be here.</p>
<pre><code>cluster = Cluster(
contact_points = ['IP1','IP2','IP3']
)
session = cluster.connect('cw')
query = "select col1 , col2, col3, col4, col5 from CassandraColumnFamily"
statement = SimpleStatement(query, fetch_size=50000)
</code></pre>
| 1 | 2016-10-03T05:55:32Z | 39,841,723 | <p>I am not sure how large the rows you are trying to fetch are, or how many there are. But when you do a select in CQL without any condition on the primary key, you are doing a range scan, which is costly. Remember, this is not MySQL. Cassandra works best when you are doing lookups on specific row keys. </p>
<p>Anyhow, you can try increasing the timeout for cqlsh to make this work.</p>
<p>In your home folder, create a file called <code>cqlshrc</code> with the following content:</p>
<pre><code>[connection]
client_timeout = 10
</code></pre>
<p>You can also set it like this to disable the timeout:</p>
<pre><code>client_timeout = None
</code></pre>
<p>More on cqlshrc
<a href="https://docs.datastax.com/en/cql/3.1/cql/cql_reference/cqlshrc.html" rel="nofollow">https://docs.datastax.com/en/cql/3.1/cql/cql_reference/cqlshrc.html</a></p>
| 0 | 2016-10-03T22:59:52Z | [
"python",
"cassandra",
"datastax",
"cassandra-2.0"
]
|
How to plot distribution with given mean and SD on a histogram in Python | 39,825,328 | <p>I have a following Pandas dataframe named Scores,following is the subset of it</p>
<pre><code> 0
0 25.104179
1 60.908747
2 23.222238
3 51.553491
4 22.629690
5 53.338099
6 22.360882
7 26.515078
8 52.737316
9 40.235152
</code></pre>
<p>When I plot a histogram it looks like following</p>
<p><a href="http://i.stack.imgur.com/uv5ok.png" rel="nofollow"><img src="http://i.stack.imgur.com/uv5ok.png" alt="enter image description here"></a></p>
<p>Now, I want to plot a distribution on this histogram with mean=37.72 and SD=2.72.
I am able to generate the distribution with the following code:</p>
<pre><code>x= np.linspace(10,90,1000)
y = norm.pdf(x, loc=37.72, scale=2.71) # for example
pylab.plot(x,y)
pylab.show()
</code></pre>
<p>How can I embed this in the histogram?</p>
| 0 | 2016-10-03T05:57:46Z | 39,825,502 | <p>You can do something like this, </p>
<pre><code>plt.hist( The histogram data)
plt.plot( The distribution data )
plt.show()
</code></pre>
<p>The <code>plt.show</code> will show both figures embedded in a single figure.</p>
| 1 | 2016-10-03T06:16:38Z | [
"python",
"matplotlib",
"histogram"
]
|
Sqlalchemy User Product Reviews Relation model design | 39,825,375 | <p>Need models so that a User can have Products and Users can leave Reviews on Products made by other users. I was thinking of having a one-to-many relationship from products to reviews, but then how do I know which user left which review? This is what I have so far. </p>
<pre><code>class User(db.Model, UserMixin):
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(255), unique=True)
products = db.relationship('Product', backref='products',
lazy='dynamic')
class Review(db.Model):
id = db.Column(db.Integer(), primary_key=True)
stars = db.Column(db.String(255))
description = db.Column(db.String(255))
class Product(db.Model):
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
id = db.Column(db.Integer(), primary_key=True)
</code></pre>
| 0 | 2016-10-03T06:03:22Z | 39,826,122 | <p>I'm imagining setting it up like this (with the new one-to-many included as well - I think I've got that right). You should know the product's ID at the time you're creating the entry in the Python code, so you can simply add it in. I don't think you would necessarily need to create a relationship for that.</p>
<pre><code>class User(db.Model, UserMixin):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(255), unique=True)
products = db.relationship('Product', backref='user',
lazy='dynamic')
reviews= db.relationship("Review", backref="user", lazy='dynamic')
class Review(db.Model):
id = db.Column(db.Integer(), primary_key=True)
product_id = db.Column(db.Integer, db.ForeignKey('product.id'))
stars = db.Column(db.String(255))
description = db.Column(db.String(255))
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
class Product(db.Model):
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
id = db.Column(db.Integer(), primary_key=True)
</code></pre>
<p>I changed a line in your <code>User</code> class. You had</p>
<pre><code>products = db.relationship('Product', backref='product',
lazy='dynamic')
</code></pre>
<p>But I think it's supposed to be</p>
<pre><code>products = db.relationship('Product', backref='user',
lazy='dynamic')
</code></pre>
| 0 | 2016-10-03T07:05:29Z | [
"python",
"sql",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
]
|
Sqlalchemy User Product Reviews Relation model design | 39,825,375 | <p>Need models so that a User can have Products and Users can leave Reviews on Products made by other users. I was thinking of having a one-to-many relationship from products to reviews, but then how do I know which user left which review? This is what I have so far. </p>
<pre><code>class User(db.Model, UserMixin):
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(255), unique=True)
products = db.relationship('Product', backref='products',
lazy='dynamic')
class Review(db.Model):
id = db.Column(db.Integer(), primary_key=True)
stars = db.Column(db.String(255))
description = db.Column(db.String(255))
class Product(db.Model):
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
id = db.Column(db.Integer(), primary_key=True)
</code></pre>
| 0 | 2016-10-03T06:03:22Z | 39,891,847 | <p>You just need to add foreign keys for <code>User</code> and <code>Product</code> into your <code>Review</code> table:</p>
<pre><code>class Review(db.Model):
# ...
product_id = db.Column(db.Integer, db.ForeignKey('product.id'))
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
</code></pre>
<p>You can also then add backrefs for <code>Reviews</code> on to your <code>User</code> and <code>Product</code>, and you can filter the <code>user.reviews</code> by <code>product_id</code>. </p>
| 1 | 2016-10-06T09:02:08Z | [
"python",
"sql",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
]
|
How to get all the tweets from the specific trend in twitter from tweepy | 39,825,413 | <p>I have a requirement to get all the tweets from the specific trend in twitter.</p>
<pre><code>Ex: api.get_all_tweets(trend="ABCD")
</code></pre>
<p>How can I achieve this with tweepy?</p>
| -2 | 2016-10-03T06:07:13Z | 39,842,482 | <p>There is no way to be guaranteed to capture all tweets. Twitter reserves the right to send you whatever they feel like. For something that is tweeted about infrequently like #upsidedownwalrus you'd probably get all of them, but for something that is a trending topic, you will only ever receive a sample.</p>
| 1 | 2016-10-04T00:31:55Z | [
"python",
"twitter",
"tweepy",
"python-twitter"
]
|
How to get all the tweets from the specific trend in twitter from tweepy | 39,825,413 | <p>I have a requirement to get all the tweets from the specific trend in twitter.</p>
<pre><code>Ex: api.get_all_tweets(trend="ABCD")
</code></pre>
<p>How can I achieve this with tweepy?</p>
| -2 | 2016-10-03T06:07:13Z | 39,843,708 | <p>The <a href="https://dev.twitter.com/rest/public/rate-limits" rel="nofollow">Twitter API has limits</a> on what you can retrieve. If you require a census of posted Tweets, you'll need to pay for a service that provides such, but they are few and far between as they need to actively respect deletions.</p>
| 0 | 2016-10-04T03:30:54Z | [
"python",
"twitter",
"tweepy",
"python-twitter"
]
|
Passing parameters from one SConstruct to another SConstruct using scons | 39,825,526 | <p>I have a C project built using SCons that links with a C library also built by Scons. Both the library and the project have there own SConstruct files. I read in <a href="http://stackoverflow.com/a/23898880/6198533">this topic</a> that you can call a SConstruct from another SConstruct in the same way as you would call a SConscript:</p>
<pre><code>SConscript('folder/to/other/SConstruct')
</code></pre>
<p>Command line parameters provided to the top level SConstruct are automatically passed to the called SConstruct. But now I want to pass additional variables to the called SConstruct. I figured out that you can do this in the same way as you would do with SConscripts:</p>
<pre><code>SConscript('folder/to/other/SConsctruct', exports='my_variable')
</code></pre>
<p>And then import them in the called SConstruct:</p>
<pre><code>Import('my_variable')
</code></pre>
<p>The problem is that when I call the SConstruct from the C library directly from the command line, <code>my_variable</code> does not exist and scons raises an error:</p>
<pre><code>scons: *** Import of non-existent variable ''my_variable''
</code></pre>
<p>Should I fix this using a try/except block in the called SConstruct as a switch to get the variable from scons or get the default, or are there more elegant solutions to this? Any suggestions on different approaches are welcome.</p>
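<p>For reference, the try/except variant I mention would look roughly like this in the called SConstruct (a sketch only; it runs under scons, where <code>Import</code> is defined):</p>

```python
try:
    Import('my_variable')
except Exception:            # Import raises when the variable was not exported
    my_variable = 'default'  # hypothetical fallback value
```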
| 0 | 2016-10-03T06:19:00Z | 39,830,811 | <p>My guess is that you're searching for the "<code>-u</code>" or the "<code>-U</code>" option. Please consult the <a href="http://scons.org/doc/production/HTML/scons-man.html" rel="nofollow">MAN page</a> and have a pick for your needs.</p>
| 0 | 2016-10-03T11:44:09Z | [
"python",
"scons"
]
|
Pandas reorder data | 39,825,550 | <p>This is probably an easy one using pivot, but since I am not adding the numbers (every row is unique) how should I go about doing this?</p>
<p>Input:</p>
<pre><code> Col1 Col2 Col3
0 123.0 33.0 ABC
1 345.0 39.0 ABC
2 567.0 100.0 ABC
3 123.0 82.0 PQR
4 345.0 10.0 PQR
5 789.0 38.0 PQR
6 890.0 97.0 XYZ
7 345.0 96.0 XYZ
</code></pre>
<p>Output:</p>
<pre><code> Col1 ABC PQR XYZ
0 123.0 33.0 82.0 NaN
1 345.0 39.0 10.0 96.0
2 567.0 100.0 NaN NaN
3 789.0 NaN 38.0 NaN
4 890.0 NaN NaN 97.0
</code></pre>
<p>And could I get this output in dataframe format then pls? Thanks so much for taking a look!</p>
| 1 | 2016-10-03T06:21:12Z | 39,825,566 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>pivot</code></a>:</p>
<pre><code>print (df.pivot(index='Col1', columns='Col3', values='Col2'))
Col3 ABC PQR XYZ
Col1
123.0 33.0 82.0 NaN
345.0 39.0 10.0 96.0
567.0 100.0 NaN NaN
789.0 NaN 38.0 NaN
890.0 NaN NaN 97.0
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (df.set_index(['Col1','Col3'])['Col2'].unstack())
Col3 ABC PQR XYZ
Col1
123.0 33.0 82.0 NaN
345.0 39.0 10.0 96.0
567.0 100.0 NaN NaN
789.0 NaN 38.0 NaN
890.0 NaN NaN 97.0
</code></pre>
<hr>
<p>EDIT by comment:</p>
<p>Need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a>:</p>
<pre><code>print (df.pivot_table(index='Col1', columns='Col3', values='Col2'))
Col3 ABC PQR XYZ
Col1
123.0 33.0 82.0 NaN
345.0 39.0 10.0 96.0
567.0 100.0 NaN NaN
789.0 NaN 38.0 NaN
890.0 NaN NaN 97.0
</code></pre>
<p>You can also check <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1463/reshaping-and-pivoting/4771/pivoting-with-aggregating#t=201610030626083995389">SO documentation</a>.</p>
<p>Another faster solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a>, aggregating <code>mean</code> (by default <code>pivot_table</code> aggreagate <code>mean</code> also), convert to <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.squeeze.html" rel="nofollow"><code>DataFrame.squeeze</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (df.groupby(['Col1','Col3']).mean().squeeze().unstack())
Col3 ABC PQR XYZ
Col1
123.0 33.0 82.0 NaN
345.0 39.0 10.0 96.0
567.0 100.0 NaN NaN
789.0 NaN 38.0 NaN
890.0 NaN NaN 97.0
</code></pre>
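<p>For readers who want to reproduce the three approaches end to end, here is a minimal runnable sketch; the long-format frame below is reconstructed from the wide output shown above, so the exact values are an assumption:</p>

```python
import numpy as np
import pandas as pd

# Long-format frame reconstructed from the wide table shown above.
df = pd.DataFrame({
    'Col1': [123.0, 123.0, 345.0, 345.0, 345.0, 567.0, 789.0, 890.0],
    'Col3': ['ABC', 'PQR', 'ABC', 'PQR', 'XYZ', 'ABC', 'PQR', 'XYZ'],
    'Col2': [33.0, 82.0, 39.0, 10.0, 96.0, 100.0, 38.0, 97.0],
})

# All three reshapes give the same wide table:
a = df.pivot_table(index='Col1', columns='Col3', values='Col2')
b = df.set_index(['Col1', 'Col3'])['Col2'].unstack()
c = df.groupby(['Col1', 'Col3']).mean().squeeze().unstack()
```

Missing combinations (e.g. <code>Col1=567</code> with <code>Col3='PQR'</code>) come out as <code>NaN</code> in every variant.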
| 1 | 2016-10-03T06:23:00Z | [
"python",
"pandas",
"dataframe"
]
|
scrapy - spider module def functions not getting invoked | 39,825,740 | <p>My intention is to invoke the start_requests method to log in to the website and, after login, scrape the website. Based on the log messages, I see that:
1. start_requests is not invoked.
2. The parse callback function is also not invoked. </p>
<p>What's actually happening is that the spider is only loading the URLs in start_urls. </p>
<p>Question:</p>
<ol>
<li>Why is the spider not crawling through the other pages (say pages 2, 3, 4)?</li>
<li>Why is logging from the spider not working?</li>
</ol>
<p>Note:</p>
<ol>
<li>My method to calculate page number and url creation is correct. I verified it.</li>
<li>I referred this link to write this code <a href="http://stackoverflow.com/questions/29809524/using-loginform-with-scrapy">Using loginform with scrapy</a></li>
</ol>
<p>My code:</p>
<p>zauba.py (spider)</p>
<pre><code>#!/usr/bin/env python
from scrapy.spiders import CrawlSpider
from scrapy.http import FormRequest
from scrapy.http.request import Request
from loginform import fill_login_form
import logging
logger = logging.getLogger('Zauba')
class zauba(CrawlSpider):
name = 'Zauba'
login_url = 'https://www.zauba.com/user'
login_user = 'scrapybot1@gmail.com'
login_password = 'scrapybot1'
logger.info('zauba')
start_urls = ['https://www.zauba.com/import-gold/p-1-hs-code.html']
def start_requests(self):
logger.info('start_request')
# let's start by sending a first request to login page
yield scrapy.Request(self.login_url, callback = self.parse_login)
def parse_login(self, response):
logger.warning('parse_login')
# got the login page, let's fill the login form...
data, url, method = fill_login_form(response.url, response.body,
self.login_user, self.login_password)
# ... and send a request with our login data
return FormRequest(url, formdata=dict(data),
method=method, callback=self.start_crawl)
def start_crawl(self, response):
logger.warning('start_crawl')
# OK, we're in, let's start crawling the protected pages
for url in self.start_urls:
yield scrapy.Request(url, callback=self.parse)
def parse(self, response):
logger.info('parse')
text = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div[@style="width:920px; margin-bottom:12px;"]/span/text()').extract_first()
total_entries = int(text.split()[0].replace(',', ''))
total_pages = int(math.ceil((total_entries*1.0)/30))
logger.warning('*************** : ' + total_pages)
print('*************** : ' + total_pages)
for page in xrange(1, (total_pages + 1)):
url = 'https://www.zauba.com/import-gold/p-' + page +'-hs-code.html'
log.msg('url%d : %s' % (pages,url))
yield scrapy.Request(url, callback=self.extract_entries)
def extract_entries(self, response):
logger.warning('extract_entries')
row_trs = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div/table/tr')
for row_tr in row_trs[1:]:
row_content = row_tr.xpath('.//td/text()').extract()
if (row_content.__len__() == 9):
print row_content
yield {
'date' : row_content[0].replace(' ', ''),
'hs_code' : int(row_content[1]),
'description' : row_content[2],
'origin_country' : row_content[3],
'port_of_discharge' : row_content[4],
'unit' : row_content[5],
'quantity' : int(row_content[6].replace(',', '')),
'value_inr' : int(row_content[7].replace(',', '')),
'per_unit_inr' : int(row_content[8].replace(',', '')),
}
</code></pre>
<p>loginform.py</p>
<pre><code>#!/usr/bin/env python
import sys
from argparse import ArgumentParser
from collections import defaultdict
from lxml import html
__version__ = '1.0' # also update setup.py
def _form_score(form):
score = 0
# In case of user/pass or user/pass/remember-me
if len(form.inputs.keys()) in (2, 3):
score += 10
typecount = defaultdict(int)
for x in form.inputs:
type_ = (x.type if isinstance(x, html.InputElement) else 'other'
)
typecount[type_] += 1
if typecount['text'] > 1:
score += 10
if not typecount['text']:
score -= 10
if typecount['password'] == 1:
score += 10
if not typecount['password']:
score -= 10
if typecount['checkbox'] > 1:
score -= 10
if typecount['radio']:
score -= 10
return score
def _pick_form(forms):
"""Return the form most likely to be a login form"""
return sorted(forms, key=_form_score, reverse=True)[0]
def _pick_fields(form):
"""Return the most likely field names for username and password"""
userfield = passfield = emailfield = None
for x in form.inputs:
if not isinstance(x, html.InputElement):
continue
type_ = x.type
if type_ == 'password' and passfield is None:
passfield = x.name
elif type_ == 'text' and userfield is None:
userfield = x.name
elif type_ == 'email' and emailfield is None:
emailfield = x.name
return (userfield or emailfield, passfield)
def submit_value(form):
"""Returns the value for the submit input, if any"""
for x in form.inputs:
if x.type == 'submit' and x.name:
return [(x.name, x.value)]
else:
return []
def fill_login_form(
url,
body,
username,
password,
):
doc = html.document_fromstring(body, base_url=url)
form = _pick_form(doc.xpath('//form'))
(userfield, passfield) = _pick_fields(form)
form.fields[userfield] = username
form.fields[passfield] = password
form_values = form.form_values() + submit_value(form)
return (form_values, form.action or form.base_url, form.method)
def main():
ap = ArgumentParser()
ap.add_argument('-u', '--username', default='username')
ap.add_argument('-p', '--password', default='secret')
ap.add_argument('url')
args = ap.parse_args()
try:
import requests
except ImportError:
print 'requests library is required to use loginform as a tool'
r = requests.get(args.url)
(values, action, method) = fill_login_form(args.url, r.text,
args.username, args.password)
print '''url: {0}
method: {1}
payload:'''.format(action, method)
for (k, v) in values:
print '- {0}: {1}'.format(k, v)
if __name__ == '__main__':
sys.exit(main())
</code></pre>
<p>The Log Message:</p>
<pre><code>2016-10-02 23:31:28 [scrapy] INFO: Scrapy 1.1.3 started (bot: scraptest)
2016-10-02 23:31:28 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scraptest.spiders', 'FEED_URI': 'medic.json', 'SPIDER_MODULES': ['scraptest.spiders'], 'BOT_NAME': 'scraptest', 'ROBOTSTXT_OBEY': True, 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:39.0) Gecko/20100101 Firefox/39.0', 'FEED_FORMAT': 'json', 'AUTOTHROTTLE_ENABLED': True}
2016-10-02 23:31:28 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.throttle.AutoThrottle']
2016-10-02 23:31:28 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-02 23:31:28 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-02 23:31:28 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-02 23:31:28 [scrapy] INFO: Spider opened
2016-10-02 23:31:28 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-02 23:31:28 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-10-02 23:31:29 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/robots.txt> (referer: None)
2016-10-02 23:31:38 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/import-gold/p-1-hs-code.html> (referer: None)
2016-10-02 23:31:38 [scrapy] INFO: Closing spider (finished)
2016-10-02 23:31:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 558,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 136267,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 3, 6, 31, 38, 560012),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 10, 3, 6, 31, 28, 927872)}
2016-10-02 23:31:38 [scrapy] INFO: Spider closed (finished)
</code></pre>
| 0 | 2016-10-03T06:36:13Z | 39,828,899 | <p>Scrapy already has a form request manager called <code>FormRequest</code>.</p>
<p>In most cases it will find the correct form by itself. You can try:</p>
<pre><code>>>> scrapy shell "https://www.zauba.com/import-gold/p-1-hs-code.html"
from scrapy import FormRequest
login_data = {'name': 'mylogin', 'pass': 'mypass'}
request = FormRequest.from_response(response, formdata=login_data)
print(request.body)
# b'form_build_id=form-Lf7bFJPTN57MZwoXykfyIV0q3wzZEQqtA5s6Ce-bl5Y&form_id=user_login_block&op=Log+in&pass=mypass&name=mylogin'
</code></pre>
<p>Once you log in, any requests chained afterwards will have the session cookie attached to them, so you only need to log in once at the beginning of your chain.</p>
| -1 | 2016-10-03T09:56:15Z | [
"python",
"authentication",
"web-scraping",
"scrapy",
"scrapy-spider"
]
|
scrapy - spider module def functions not getting invoked | 39,825,740 | <p>My intention is to invoke the start_requests method to log in to the website and, after login, scrape the website. Based on the log messages, I see that:
1. start_requests is not invoked.
2. The parse callback function is also not invoked. </p>
<p>What's actually happening is that the spider is only loading the URLs in start_urls. </p>
<p>Question:</p>
<ol>
<li>Why is the spider not crawling through the other pages (say pages 2, 3, 4)?</li>
<li>Why is logging from the spider not working?</li>
</ol>
<p>Note:</p>
<ol>
<li>My method to calculate page number and url creation is correct. I verified it.</li>
<li>I referred this link to write this code <a href="http://stackoverflow.com/questions/29809524/using-loginform-with-scrapy">Using loginform with scrapy</a></li>
</ol>
<p>My code:</p>
<p>zauba.py (spider)</p>
<pre><code>#!/usr/bin/env python
from scrapy.spiders import CrawlSpider
from scrapy.http import FormRequest
from scrapy.http.request import Request
from loginform import fill_login_form
import logging
logger = logging.getLogger('Zauba')
class zauba(CrawlSpider):
name = 'Zauba'
login_url = 'https://www.zauba.com/user'
login_user = 'scrapybot1@gmail.com'
login_password = 'scrapybot1'
logger.info('zauba')
start_urls = ['https://www.zauba.com/import-gold/p-1-hs-code.html']
def start_requests(self):
logger.info('start_request')
# let's start by sending a first request to login page
yield scrapy.Request(self.login_url, callback = self.parse_login)
def parse_login(self, response):
logger.warning('parse_login')
# got the login page, let's fill the login form...
data, url, method = fill_login_form(response.url, response.body,
self.login_user, self.login_password)
# ... and send a request with our login data
return FormRequest(url, formdata=dict(data),
method=method, callback=self.start_crawl)
def start_crawl(self, response):
logger.warning('start_crawl')
# OK, we're in, let's start crawling the protected pages
for url in self.start_urls:
yield scrapy.Request(url, callback=self.parse)
def parse(self, response):
logger.info('parse')
text = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div[@style="width:920px; margin-bottom:12px;"]/span/text()').extract_first()
total_entries = int(text.split()[0].replace(',', ''))
total_pages = int(math.ceil((total_entries*1.0)/30))
logger.warning('*************** : ' + total_pages)
print('*************** : ' + total_pages)
for page in xrange(1, (total_pages + 1)):
url = 'https://www.zauba.com/import-gold/p-' + page +'-hs-code.html'
log.msg('url%d : %s' % (pages,url))
yield scrapy.Request(url, callback=self.extract_entries)
def extract_entries(self, response):
logger.warning('extract_entries')
row_trs = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div/table/tr')
for row_tr in row_trs[1:]:
row_content = row_tr.xpath('.//td/text()').extract()
if (row_content.__len__() == 9):
print row_content
yield {
'date' : row_content[0].replace(' ', ''),
'hs_code' : int(row_content[1]),
'description' : row_content[2],
'origin_country' : row_content[3],
'port_of_discharge' : row_content[4],
'unit' : row_content[5],
'quantity' : int(row_content[6].replace(',', '')),
'value_inr' : int(row_content[7].replace(',', '')),
'per_unit_inr' : int(row_content[8].replace(',', '')),
}
</code></pre>
<p>loginform.py</p>
<pre><code>#!/usr/bin/env python
import sys
from argparse import ArgumentParser
from collections import defaultdict
from lxml import html
__version__ = '1.0' # also update setup.py
def _form_score(form):
score = 0
# In case of user/pass or user/pass/remember-me
if len(form.inputs.keys()) in (2, 3):
score += 10
typecount = defaultdict(int)
for x in form.inputs:
type_ = (x.type if isinstance(x, html.InputElement) else 'other'
)
typecount[type_] += 1
if typecount['text'] > 1:
score += 10
if not typecount['text']:
score -= 10
if typecount['password'] == 1:
score += 10
if not typecount['password']:
score -= 10
if typecount['checkbox'] > 1:
score -= 10
if typecount['radio']:
score -= 10
return score
def _pick_form(forms):
"""Return the form most likely to be a login form"""
return sorted(forms, key=_form_score, reverse=True)[0]
def _pick_fields(form):
"""Return the most likely field names for username and password"""
userfield = passfield = emailfield = None
for x in form.inputs:
if not isinstance(x, html.InputElement):
continue
type_ = x.type
if type_ == 'password' and passfield is None:
passfield = x.name
elif type_ == 'text' and userfield is None:
userfield = x.name
elif type_ == 'email' and emailfield is None:
emailfield = x.name
return (userfield or emailfield, passfield)
def submit_value(form):
"""Returns the value for the submit input, if any"""
for x in form.inputs:
if x.type == 'submit' and x.name:
return [(x.name, x.value)]
else:
return []
def fill_login_form(
url,
body,
username,
password,
):
doc = html.document_fromstring(body, base_url=url)
form = _pick_form(doc.xpath('//form'))
(userfield, passfield) = _pick_fields(form)
form.fields[userfield] = username
form.fields[passfield] = password
form_values = form.form_values() + submit_value(form)
return (form_values, form.action or form.base_url, form.method)
def main():
ap = ArgumentParser()
ap.add_argument('-u', '--username', default='username')
ap.add_argument('-p', '--password', default='secret')
ap.add_argument('url')
args = ap.parse_args()
try:
import requests
except ImportError:
print 'requests library is required to use loginform as a tool'
r = requests.get(args.url)
(values, action, method) = fill_login_form(args.url, r.text,
args.username, args.password)
print '''url: {0}
method: {1}
payload:'''.format(action, method)
for (k, v) in values:
print '- {0}: {1}'.format(k, v)
if __name__ == '__main__':
sys.exit(main())
</code></pre>
<p>The Log Message:</p>
<pre><code>2016-10-02 23:31:28 [scrapy] INFO: Scrapy 1.1.3 started (bot: scraptest)
2016-10-02 23:31:28 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scraptest.spiders', 'FEED_URI': 'medic.json', 'SPIDER_MODULES': ['scraptest.spiders'], 'BOT_NAME': 'scraptest', 'ROBOTSTXT_OBEY': True, 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:39.0) Gecko/20100101 Firefox/39.0', 'FEED_FORMAT': 'json', 'AUTOTHROTTLE_ENABLED': True}
2016-10-02 23:31:28 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.throttle.AutoThrottle']
2016-10-02 23:31:28 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-02 23:31:28 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-02 23:31:28 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-02 23:31:28 [scrapy] INFO: Spider opened
2016-10-02 23:31:28 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-02 23:31:28 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-10-02 23:31:29 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/robots.txt> (referer: None)
2016-10-02 23:31:38 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/import-gold/p-1-hs-code.html> (referer: None)
2016-10-02 23:31:38 [scrapy] INFO: Closing spider (finished)
2016-10-02 23:31:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 558,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 136267,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 3, 6, 31, 38, 560012),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 10, 3, 6, 31, 28, 927872)}
2016-10-02 23:31:38 [scrapy] INFO: Spider closed (finished)
</code></pre>
| 0 | 2016-10-03T06:36:13Z | 39,841,383 | <p>I figured out the silly mistake I made!!!! </p>
<p><strong>I didn't place the functions inside the class. That's why things didn't work as expected. Once I indented all the functions into the class body, things started to work fine.</strong></p>
<p>Thanks @user2989777 and @Granitosaurus for stepping forward to debug</p>
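<p>To make the failure mode concrete: a <code>def</code> at module level is invisible to the class, so Scrapy falls back to its default <code>start_requests</code>/<code>parse</code> behaviour. A tiny framework-free sketch of the same mistake:</p>

```python
class WorkingSpider:
    name = 'Zauba'

    def start_requests(self):  # indented -> becomes a method of the class
        return 'custom start_requests ran'


class BrokenSpider:
    name = 'Zauba'

# No indentation: this is a module-level function, NOT part of BrokenSpider,
# which is why the framework never finds or calls it.
def start_requests(self):
    return 'never reached through the class'
```

A framework looking up <code>start_requests</code> on <code>BrokenSpider</code> simply does not see the module-level function and uses its own default instead.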
| 0 | 2016-10-03T22:28:10Z | [
"python",
"authentication",
"web-scraping",
"scrapy",
"scrapy-spider"
]
|
How to send FCM notification at specific time? | 39,825,989 | <p>I can send FCM notifications to single or multiple devices through <a href="https://github.com/olucurious/PyFCM/tree/master/pyfcm" rel="nofollow">PyFCM</a> instantly.</p>
<pre><code># Send to single device.
from pyfcm import FCMNotification
push_service = FCMNotification(api_key="<api-key>")
# OR initialize with proxies
proxy_dict = {
"http" : "http://127.0.0.1",
"https" : "http://127.0.0.1",
}
push_service = FCMNotification(api_key="<api-key>", proxy_dict=proxy_dict)
# Your api-key can be gotten from: https://console.firebase.google.com/project/<project-name>/settings/cloudmessaging
registration_id = "<device registration_id>"
message_title = "Uber update"
message_body = "Hi john, your customized news for today is ready"
result = push_service.notify_single_device(registration_id=registration_id, message_title=message_title, message_body=message_body)
print result
</code></pre>
<p>But I can't find a way to send notification to devices at specific time, say <code>03-10-2016 16:00:00</code> .</p>
| 0 | 2016-10-03T06:55:49Z | 39,826,064 | <p>If you're looking for a public FCM API for a scheduled push, or a payload parameter where you can set the push date, unfortunately there's nothing like it at the moment. </p>
<p>You must implement your own App Server and implement the scheduled push yourself (also mentioned it <a href="http://stackoverflow.com/a/39524734/4625829">here</a>).</p>
<hr>
<p><em>My answer from the tagged duplicate post.</em></p>
| 0 | 2016-10-03T07:01:30Z | [
"python",
"firebase-cloud-messaging",
"pyfcm"
]
|
Filtering through numpy arrays by one row's information | 39,825,997 | <p>I am asking for help on filtering through numpy arrays. I currently have a numpy array which contains the following information:</p>
<pre><code>[[x1_1, x1_2, ..., x1_n], [x2_1, x2_2, ..., x2_n], [y1, y2, ..., yn]]
</code></pre>
<p>ie. the array is essentially a dataset where <em>x1, x2</em> are features (coordinates), and y is the output (value). Each data point has an appropriate <em>x1, x2</em>, and <em>y</em>, so for example, the info corresponding to data point <em>i</em> is <em>x1_i, x2_i</em>, and <em>yi</em>.</p>
<p>Now, I want to extract all the data points by filtering through y, meaning I want to know all the data points in which y > some value. In my case, I want the info (still with the same numpy structure) for all cases where y > 0. I don't really know how to do that -- I've been playing around with boolean indexing such as <code>d[0:2,y>0]</code> or <code>d[d[2]>0]</code>, but haven't gotten anywhere.</p>
<p>A clarifying example:</p>
<p>Given the dataset:</p>
<pre><code>d = [[0.1, 0.2, 0.3], [-0.1,-0.2,-0.3], [1,1,-1]]
</code></pre>
<p>I pull all points or instances where <code>y > 0</code>, ie. <code>d[2] > 0</code>, and it should return the values:</p>
<pre><code>[[0.1, 0.2],[-0.1,-0.2],[1,1]]
</code></pre>
<p>Any advice or help would be appreciated.</p>
| 2 | 2016-10-03T06:56:47Z | 39,826,070 | <p>You can use:</p>
<pre><code>import numpy as np
d = np.array([[0.1, 0.2, 0.3], [-0.1,-0.2,-0.3], [1,1,-1]])
print (d)
[[ 0.1 0.2 0.3]
[-0.1 -0.2 -0.3]
[ 1. 1. -1. ]]
#select last row by d[-1]
print (d[-1]>0)
[ True True False]
print (d[:,d[-1]>0])
[[ 0.1 0.2]
[-0.1 -0.2]
[ 1. 1. ]]
</code></pre>
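<p>The same idea wrapped in a small helper, so the threshold is explicit (the function name is mine, not from the question):</p>

```python
import numpy as np

def filter_by_output(d, threshold=0.0):
    """Keep only the columns (data points) whose output value in the
    last row exceeds `threshold`; the feature/output row layout is kept."""
    d = np.asarray(d)
    return d[:, d[-1] > threshold]

filtered = filter_by_output([[0.1, 0.2, 0.3], [-0.1, -0.2, -0.3], [1, 1, -1]])
```

For the sample dataset this drops the third column (its <code>y</code> is <code>-1</code>) and keeps the first two.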
| 3 | 2016-10-03T07:01:59Z | [
"python",
"arrays",
"numpy",
"filter"
]
|
what is means reconstruct object or recreate object in python? | 39,826,016 | <p>I am trying to understand the difference between <code>__str__</code> and <code>__repr__</code>, following <a href="http://stackoverflow.com/questions/1436703/difference-between-str-and-repr-in-python/2626364#2626364">Difference between __str__ and __repr__ in Python</a> </p>
<p>In the answer it says <code>__repr__</code>'s goal is to be unambiguous; I am confused about what that really means. Another answer says <code>__repr__</code> is a representation of a Python object that <code>eval</code> can usually convert back to that object; I have also read that the <code>eval</code> function executes the Python code passed to it.</p>
<p>Can someone please explain these terms in simple language:</p>
<ul>
<li>What is object reconstruction? </li>
<li>What is <code>__repr__</code>? </li>
<li>What is the connection between <code>eval</code> and <code>__repr__</code>?</li>
</ul>
| 2 | 2016-10-03T06:58:17Z | 39,826,109 | <p>Perhaps a simple example will help clarify:</p>
<pre><code>class Object:
def __init__(self, a, b):
self.a = a
self.b = b
def __repr__(self):
return 'Object({0.a!r}, {0.b!r})'.format(self)
</code></pre>
<p>This object has two parameters defined in <code>__init__</code> and a sensible <code>__repr__</code> method.</p>
<pre><code>>>> o = Object(1, 2)
>>> repr(o)
'Object(1, 2)'
</code></pre>
<p>Note that the <code>__repr__</code> of our object now looks exactly like how we created it to begin with; it's legal Python code that supplies the same values. As the <a href="https://docs.python.org/3/reference/datamodel.html#object.__repr__" rel="nofollow">data model documentation</a> puts it:</p>
<blockquote>
<p>If at all possible, [<code>__repr__</code>] should look like a valid Python
expression that could be used to recreate an object with the same
value (given an appropriate environment).</p>
</blockquote>
<p>That's where <a href="https://docs.python.org/3/library/functions.html#eval" rel="nofollow"><code>eval</code></a> comes in:</p>
<pre><code>>>> x = eval(repr(o))
>>> x.a
1
>>> x.b
2
</code></pre>
<p>Because the representation is Python code, we can <code>eval</code>uate it and get an object with the same values. This is <em>"reconstructing"</em> the original object <code>o</code>.</p>
<p>However, as pointed out in the comments, this is just an illustration and doesn't mean that you should rely on <code>eval(repr(x))</code> when trying to create a copy of <code>x</code>.</p>
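<p>The standard library's <code>datetime</code> is a nice illustration of the two goals side by side — <code>str</code> aims for readability, <code>repr</code> for an unambiguous, re-evaluable form:</p>

```python
import datetime

now = datetime.datetime(2016, 10, 3, 7, 0, 0)

print(str(now))   # readable, for end users
print(repr(now))  # unambiguous, for developers

# repr() is valid Python, so eval() reconstructs an equal object:
assert eval(repr(now)) == now
```

Note that <code>repr</code> here is exactly the constructor call you would type to recreate the object.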
| 2 | 2016-10-03T07:04:28Z | [
"python",
"python-3.x"
]
|
How to move file from one folder to another folder same ftp using python | 39,826,120 | <p>How do I move files from one folder to another folder on the same FTP server using Python?
I used this code, but it doesn't work: </p>
<pre><code>ftp=FTP("host")
ftp.login("user name","password")
def downloadFiles(path,destination):
try:
ftp.cwd(path)
#clone path to destination
ftp.dir(destination)
#~ os.mkdir(destination[0:len(destination)-1]+path)
print destination[0:len(destination)-1]+path+" built"
except OSError:
#folder already exists at destination
pass
except ftplib.error_perm:
#invalid entry (ensure input form: "/dir/folder/something/")
print "error: could not change to "+path
sys.exit("ending session")
filelist=ftp.nlst()
for file in filelist:
try:
#this will check if file is folder:
ftp.cwd(path+file+"/")
#if so, explore it:
downloadFiles(path+file+"/",destination)
except ftplib.error_perm:
#not a folder with accessible content
#download & return
#~ os.chdir(destination[0:len(destination)]+path)
#possibly need a permission exception catch:
#~ ftp.retrbinary("RETR "+ file, open(ftp.path.join(destination,file),"wb").write)
ftp.storlines("STOR "+file, open(ftp.dir(destination, file),'r'))
print file + " downloaded"
return
</code></pre>
| -2 | 2016-10-03T07:05:14Z | 39,826,207 | <p>I would suggest using the excellent Python file system abstraction <a href="https://pypi.python.org/pypi/fs" rel="nofollow">pyfs</a>. As you can see from the <a href="http://docs.pyfilesystem.org/en/latest/interface.html" rel="nofollow">documentation</a>, all of its file systems, once mounted, have <code>copy</code>, <code>copydir</code>, <code>move</code> and <code>movedir</code> methods. <em>Personally, I would always copy, verify, then delete for safety, especially on a remote system.</em></p>
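<p>That said, if adding a dependency is not an option: within a single FTP server, a "move" is just the protocol's <code>RNFR</code>/<code>RNTO</code> rename pair, which plain <code>ftplib</code> exposes as <code>FTP.rename</code> — no data is re-downloaded or re-uploaded. A hedged sketch (the already-connected <code>ftp</code> object and the directory paths are assumptions):</p>

```python
def move_all(ftp, src_dir, dst_dir):
    """Move every entry listed in src_dir into dst_dir on the SAME server.

    Relies on FTP's rename command (RNFR/RNTO), which works across
    directories; the destination directory must already exist.
    """
    for name in ftp.nlst(src_dir):
        base = name.rsplit('/', 1)[-1]  # some servers return full paths
        ftp.rename(src_dir.rstrip('/') + '/' + base,
                   dst_dir.rstrip('/') + '/' + base)

# Hypothetical usage:
# from ftplib import FTP
# ftp = FTP('host'); ftp.login('user name', 'password')
# move_all(ftp, '/incoming', '/archive')
```

Note this only renames plain files in one directory level; recursing into subfolders would need the same folder-detection logic as the question's code.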
| 0 | 2016-10-03T07:13:12Z | [
"python",
"ftp"
]
|
SWIG tutorial problems | 39,826,248 | <p>I'm trying to follow the swig tutorial but I've got stuck, right now I'm using:</p>
<ul>
<li>Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32</li>
<li>Vs2015 x64, Microsoft (R) C/C++ Optimizing Compiler Version 19.00.23918 for x64</li>
<li>SWIG Version 3.0.10</li>
</ul>
<p>The contents are:</p>
<p><strong>example.c</strong></p>
<pre><code> #include <time.h>
double My_variable = 3.0;
int fact(int n) {
if (n <= 1) return 1;
else return n*fact(n-1);
}
int my_mod(int x, int y) {
return (x%y);
}
char *get_time()
{
time_t ltime;
time(&ltime);
return ctime(&ltime);
}
</code></pre>
<p><strong>example.i</strong></p>
<pre><code> %module example
%{
/* Put header files here or function declarations like below */
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
%}
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
</code></pre>
<p>Then I do:</p>
<ul>
<li><code>swig -python example.i</code></li>
<li><code>cl /D_USRDLL /D_WINDLL example.c example_wrap.c -Ic:\Python351\include /link /DLL /out:example.pyd /libpath:c:\python351\libs python35.lib</code></li>
</ul>
<p>But when I try <code>python -c "import example"</code> I get:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dynamic module does not define module export function (PyInit_example)
</code></pre>
<p>Question, what's going on and how do i fix it?</p>
| 1 | 2016-10-03T07:15:51Z | 39,827,046 | <p>The name of the dynamically linked module for SWIG should begin with an underscore, in this case <code>_example.pyd</code>. The SWIG-generated Python file is looking for the module named <code>_example</code>; see the beginning of that file:</p>
<pre><code>from sys import version_info
if version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try: # ← SEE HERE
fp, pathname, description = imp.find_module('_example', [dirname(__file__)])
except ImportError:
import _example # ← AND HERE
return _example # ← AND HERE
if fp is not None:
try: # ← AND HERE
_mod = imp.load_module('_example', fp, pathname, description)
finally:
fp.close()
return _mod
_example = swig_import_helper() # ← AND HERE
del swig_import_helper
else: # ← AND HERE
import _example
</code></pre>
<p>It is in fact the name of the compiled extension module that SWIG wraps, so the linker output must be named <code>_example.pyd</code> while the SWIG-generated <code>example.py</code> keeps the plain name.</p>
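<p>Concretely, the only change needed to the build line from the question is the output name (a sketch of the same commands with the underscore added; the include/library paths are the ones from the question):</p>

```shell
swig -python example.i
cl /D_USRDLL /D_WINDLL example.c example_wrap.c -Ic:\Python351\include ^
   /link /DLL /out:_example.pyd /libpath:c:\python351\libs python35.lib
```

<p>After that, <code>python -c "import example"</code> loads the generated <code>example.py</code>, which in turn finds <code>_example.pyd</code>.</p>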
| 1 | 2016-10-03T08:07:12Z | [
"python",
"c++",
"windows",
"visual-studio-2015",
"swig"
]
|
conda stuck on Proceed ([y]/n)? when updating packages etc | 39,826,250 | <p>I just downloaded Anaconda 4.2.0 (with python 3.5.2) for Mac OS X. Whenever I try to update any packages etc, my ipython console presents the package dependencies and displays "Proceed ([y]/n)?" but does not take any inputs. E.g. I press enter, or y-enter etc. and nothing happens. Here's an example:</p>
<pre><code>!conda create -n graphlab-env python=2.7 anaconda
Fetching package metadata .......
Solving package specifications: ..........
Package plan for installation in environment /Users/Abhijit/anaconda/envs/graphlab-env:
The following packages will be downloaded:
package | build
---------------------------|-----------------
python-2.7.12 | 1 9.5 MB
_license-1.1 | py27_1 80 KB
alabaster-0.7.9 | py27_0 11 KB
anaconda-clean-1.0.0 | py27_0 3 KB
.
.
.
nbpresent-3.0.2 | py27_0 463 KB
anaconda-4.2.0 | np111py27_0 6 KB
------------------------------------------------------------
Total: 143.9 MB
The following NEW packages will be INSTALLED:
_license: 1.1-py27_1
_nb_ext_conf: 0.3.0-py27_0
alabaster: 0.7.9-py27_0
anaconda: 4.2.0-np111py27_0
anaconda-clean: 1.0.0-py27_0
.
.
.
yaml: 0.1.6-0
zlib: 1.2.8-3
Proceed ([y]/n)?
</code></pre>
<p>It won't respond after this step. When I enter 'Ctrl-C' it breaks out of this loop. I have tried Shift-Enter, Alt-Enter, Ctrl-Enter, Cmd-Enter etc but no luck. Tearing my hair out over this. Am I missing something?</p>
| 0 | 2016-10-03T07:16:03Z | 39,841,757 | <p>You can launch shell commands with the <code>!</code> operator in ipython, but you can't interact with them after the process has launched.</p>
<p>Therefore, you could:</p>
<ol>
<li>execute your conda command outside of your ipython session (IOW, a normal shell); or</li>
<li>pass the <code>--yes</code> flag. e.g.: </li>
</ol>
<p><code>In[2]: !conda create -n graphlab-env python=2.7 anaconda --yes</code></p>
| 0 | 2016-10-03T23:05:59Z | [
"python",
"ipython",
"anaconda",
"spyder",
"graphlab"
]
|
conda stuck on Proceed ([y]/n)? when updating packages etc | 39,826,250 | <p>I just downloaded Anaconda 4.2.0 (with python 3.5.2) for Mac OS X. Whenever I try to update any packages etc, my ipython console presents the package dependencies and displays "Proceed ([y]/n)?" but does not take any inputs. E.g. I press enter, or y-enter etc. and nothing happens. Here's an example:</p>
<pre><code>!conda create -n graphlab-env python=2.7 anaconda
Fetching package metadata .......
Solving package specifications: ..........
Package plan for installation in environment /Users/Abhijit/anaconda/envs/graphlab-env:
The following packages will be downloaded:
package | build
---------------------------|-----------------
python-2.7.12 | 1 9.5 MB
_license-1.1 | py27_1 80 KB
alabaster-0.7.9 | py27_0 11 KB
anaconda-clean-1.0.0 | py27_0 3 KB
.
.
.
nbpresent-3.0.2 | py27_0 463 KB
anaconda-4.2.0 | np111py27_0 6 KB
------------------------------------------------------------
Total: 143.9 MB
The following NEW packages will be INSTALLED:
_license: 1.1-py27_1
_nb_ext_conf: 0.3.0-py27_0
alabaster: 0.7.9-py27_0
anaconda: 4.2.0-np111py27_0
anaconda-clean: 1.0.0-py27_0
.
.
.
yaml: 0.1.6-0
zlib: 1.2.8-3
Proceed ([y]/n)?
</code></pre>
<p>It won't respond after this step. When I enter 'Ctrl-C' it breaks out of this loop. I have tried Shift-Enter, Alt-Enter, Ctrl-Enter, Cmd-Enter etc but no luck. Tearing my hair out over this. Am I missing something?</p>
| 0 | 2016-10-03T07:16:03Z | 40,022,547 | <p>If you add a '--yes' at the end of the command it works. For example:</p>
<pre><code>>>>!conda install seaborn --yes
</code></pre>
| 0 | 2016-10-13T13:35:27Z | [
"python",
"ipython",
"anaconda",
"spyder",
"graphlab"
]
|
Model in Django 1.9. TypeError: __init__() got multiple values for argument 'verbose_name' | 39,826,485 | <p>I have Python 3.5 and Django 1.9 and am
trying to do the following:</p>
<pre><code>class Question(models.Model):
def __init__(self, *args, question_text=None, pub_date=None, **kwargs):
self.question_text = question_text
self.pub_date = pub_date
question_text = models.CharField(max_length=200, verbose_name="Question")
pub_date = models.DateTimeField('date_published', verbose_name="Date")
def __str__(self):
return self.question_text
def __unicode__(self):
return self.question_text
class Meta:
verbose_name = "Question"
</code></pre>
<p>But got an error</p>
<blockquote>
<p>File "/home/donotyou/PycharmProjects/djangobook/polls/models.py",
line 15, in Question
pub_date = models.DateTimeField('date_published', verbose_name="Date") TypeError: <strong>init</strong>() got multiple values for
argument 'verbose_name'</p>
</blockquote>
<p>Please help</p>
| 0 | 2016-10-03T07:31:42Z | 39,826,574 | <p>You don't need to override <code>__init__</code> in Django. Django is doing everything for you, you just need to define your models and you are fine.</p>
<p>But you are getting the error because in <code>pub_date = models.DateTimeField('date_published', verbose_name="Date")</code> you are setting <code>verbose_name</code> twice: the first positional argument of a django <code>Field</code> is already <code>verbose_name</code>, so you end up passing two values for the same argument to the class.</p>
<p>So basically you need to do is:</p>
<pre><code>class Question(models.Model):
question_text = models.CharField(max_length=200, verbose_name="Question")
pub_date = models.DateTimeField('date_published') # or pub_date = models.DateTimeField("Date")
def __str__(self):
return self.question_text
def __unicode__(self):
return self.question_text
class Meta:
verbose_name = "Question"
</code></pre>
<p><strong>NOTE</strong>: In most cases it's more readable to pass <code>verbose_name</code> as the first positional argument without any <code>verbose_name=</code>, except for relational fields.
From <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#verbose-field-names" rel="nofollow">docs</a>:</p>
<blockquote>
<p>Each field type, except for <code>ForeignKey</code>, <code>ManyToManyField</code> and <code>OneToOneField</code>, takes an optional first positional argument – a verbose name. If the verbose name isn't given, Django will automatically create it using the field's attribute name, converting underscores to spaces.</p>
</blockquote>
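<p>The duplicate-argument error is plain Python behavior, not Django magic — any function whose first positional parameter is named <code>verbose_name</code> fails the same way. The <code>field</code> helper below is a made-up stand-in for illustration, not a Django API:</p>

```python
def field(verbose_name=None, max_length=None):
    # Stand-in for a Django Field constructor whose FIRST positional
    # parameter is verbose_name (hypothetical helper, not real Django).
    return verbose_name

try:
    # Same mistake as DateTimeField('date_published', verbose_name="Date"):
    # "date_published" already binds to verbose_name positionally.
    field("date_published", verbose_name="Date")
except TypeError as e:
    print(e)  # got multiple values for argument 'verbose_name'
```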
| 3 | 2016-10-03T07:37:40Z | [
"python",
"django"
]
|
Model in Django 1.9. TypeError: __init__() got multiple values for argument 'verbose_name' | 39,826,485 | <p>I have Python 3.5 and Django 1.9 and am
trying to do the following:</p>
<pre><code>class Question(models.Model):
def __init__(self, *args, question_text=None, pub_date=None, **kwargs):
self.question_text = question_text
self.pub_date = pub_date
question_text = models.CharField(max_length=200, verbose_name="Question")
pub_date = models.DateTimeField('date_published', verbose_name="Date")
def __str__(self):
return self.question_text
def __unicode__(self):
return self.question_text
class Meta:
verbose_name = "Question"
</code></pre>
<p>But got an error</p>
<blockquote>
<p>File "/home/donotyou/PycharmProjects/djangobook/polls/models.py",
line 15, in Question
pub_date = models.DateTimeField('date_published', verbose_name="Date") TypeError: <strong>init</strong>() got multiple values for
argument 'verbose_name'</p>
</blockquote>
<p>Please help</p>
| 0 | 2016-10-03T07:31:42Z | 39,826,634 | <p>I believe you should not override <code>__init__()</code> here (as @vishes_shell supposed too). Instead of this, if you want to made some customization of instances initialization, you can add classmethod <code>create</code> to the model. Here is documentation: <a href="https://docs.djangoproject.com/en/1.10/ref/models/instances/#creating-objects" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/models/instances/#creating-objects</a> </p>
| 0 | 2016-10-03T07:41:48Z | [
"python",
"django"
]
|
Tensorflow: how it trains the model? | 39,826,514 | <p>Working on Tensorflow, the first step is build a data graph and use session to run it. While, during my practice, such as the <a href="https://www.tensorflow.org/versions/r0.9/tutorials/mnist/beginners/index.html#Training" rel="nofollow">MNIST tutorial</a>. It firstly defines <em>loss</em> function and the <em>optimizer</em>, with the following codes (and the MLP model is defined before that):</p>
<pre><code>cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) #define cross entropy error function
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean') #define loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate) #define optimizer
global_step = tf.Variable(0, name='global_step', trainable=False) #learning rate
train_op = optimizer.minimize(loss, global_step=global_step) #train operation in the graph
</code></pre>
<p><strong>The training process:</strong></p>
<pre><code>train_step =tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>That's how Tensorflow did training in this case. But my question is, how did Tensorflow know which weight it needs to train and update? I mean, in the training codes, we only pass output <code>y</code> to <code>cross_entropy</code>, but for <code>optimizer</code> or <code>loss</code>, we didn't pass any information about the structure <strong>directly</strong>. In addition, we use dictionary to feed batch data to <code>train_step</code>, but <code>train_step</code> didn't directly use the data. <strong>How did Tensorflow know where to use these data as input?</strong></p>
<p>To my question, I thought it might be all those variables or constants are stored in <strong>Tensor</strong>. Operations such as <code>tf.matmul()</code> should a "subclass" of Tensorflow's operation class(I haven't check the code yet). There might be some mechanism for Tensorflow to recognise relations among tensors (<code>tf.Variable()</code>, <code>tf.constant()</code>) and operations (<code>tf.mul()</code>, <code>tf.div()</code>...). I guess, it could check the <code>tf.xxxx()</code>'s super class to find out whether it is a tensor or operation. This assumption raises <strong>my second question</strong>: should I use Tensorflow's 'tf.xxx' function as possible to ensure tensorflow could build correct data flow graph, even sometimes it is more complicated than normal Python methods or some functions are supported better in Numpy than Tensorflow? </p>
<p><strong>My last question</strong> is: Is there any link between Tensorflow and C++? I heard someone said Tensorflow is faster than normal Python since it uses C or C++ as backend. Is there any transform mechanism to transfer Tensorflow Python codes to C/C++?</p>
<p>I'd also be grateful if someone could share some debugging habits for coding with Tensorflow, since currently I just set up some terminals (Ubuntu) to test each part/function of my code.</p>
| 0 | 2016-10-03T07:33:12Z | 39,828,841 | <p>You do pass information about your structure to Tensorflow when you define your loss with:</p>
<pre><code>loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
</code></pre>
<p>Notice that with Tensorflow you build a graph of operations, and every operation you use in your code is a node in the graph.</p>
<p>When you define your <code>loss</code> you are passing the operation stored in <code>cross_entropy</code>, which depends on <code>y_</code> and <code>y</code>. <code>y_</code> is a placeholder for your labels whereas <code>y</code> is the result of <code>y = tf.nn.softmax(tf.matmul(x, W) + b)</code>. See where I am going? The operation <code>loss</code> contains all the information it needs to build the model and process the input, because it depends on the operation <code>cross_entropy</code>, which depends on <code>y_</code> and <code>y</code>, which depends on the input <code>x</code> and the model weights <code>W</code>.</p>
<p>So when you call</p>
<pre><code>sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>Tensorflow knows perfectly which operations should be computed when you run <code>train_step</code>, and it knows exactly where to put in the operations graph the data you are passing through <code>feed_dict</code>.</p>
<p>As for how does Tensorflow know which variables should be trained, the answer is easy. It trains any <code>tf.Variable()</code> in the operations graph which is trainable. Notice how when you define the <code>global_step</code> you set <code>trainable=False</code> because you don't want to compute gradients w.r.t that variable.</p>
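<p>To make the dependency tracking concrete, here is a toy, framework-free sketch of the idea — each op records its inputs, so asking for one node pulls in everything it depends on. This illustrates the principle only; it is not TensorFlow's actual implementation:</p>

```python
class Op(object):
    """A toy graph node: a function plus the nodes/placeholders it reads."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self, feed):
        # Evaluate dependencies first, the way a session resolves the
        # graph when you ask it for a single target op.
        vals = [i.run(feed) if isinstance(i, Op) else feed[i]
                for i in self.inputs]
        return self.fn(*vals)

x, w = "x", "w"                        # stand-ins for placeholder/variable
matmul = Op(lambda a, b: a * b, x, w)  # depends on x and w
loss = Op(lambda m: m + 1, matmul)     # depends on matmul, hence on x and w
print(loss.run({"x": 3, "w": 4}))      # prints 13: asking for loss runs the chain
```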
| 0 | 2016-10-03T09:52:23Z | [
"python",
"tensorflow"
]
|
Cross validation with particular dataset lists with Python | 39,826,538 | <p>I know sklearn has nice method to get cross validation scores:</p>
<pre><code> from sklearn.model_selection import cross_val_score
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
scores
</code></pre>
<p>I'd like to know scores with specific training and test set:</p>
<pre><code>train_list = [train1, train2, train3] # train1,2,3 is the training data sets
test_list = [test1, test2, test3] # # test1,2,3 is the test data sets
clf = svm.SVC(kernel='linear', C=1)
scores = some_nice_method(clf, train_list, test_list)
</code></pre>
<p>Is there such kind of method giving scores of particular separated data set in python?</p>
| 2 | 2016-10-03T07:34:52Z | 39,826,758 | <p>This is exactly two lines of code:</p>
<pre><code>for tr, te in zip(train_list, test_list):
    # fit returns the fitted estimator, so it can be chained with score
    svm.SVC(kernel='linear', C=1).fit(X[tr, :], y[tr]).score(X[te, :], y[te])
</code></pre>
<p>See <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.score" rel="nofollow"><code>sklearn.svm.SVC.score</code></a>:</p>
<pre><code>score(X, y, sample_weight=None)
</code></pre>
<blockquote>
<p>Returns the mean accuracy on the given test data and labels.</p>
</blockquote>
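<p>The one-liner chains only because <code>fit</code> returns the estimator itself; a minimal sketch of that pattern (the <code>MeanModel</code> class is invented for illustration, not part of sklearn):</p>

```python
class MeanModel(object):
    """Toy estimator mimicking sklearn's fit(...).score(...) chaining."""
    def fit(self, X, y):
        self.mean = float(sum(y)) / len(y)
        return self  # returning self is what makes chaining possible

    def score(self, X, y):
        # Negative squared error from the stored mean (a toy metric).
        return -sum((v - self.mean) ** 2 for v in y)

print(MeanModel().fit(None, [1, 2, 3]).score(None, [1, 2, 3]))  # prints -2.0
```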
| 1 | 2016-10-03T07:49:21Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
Cross validation with particular dataset lists with Python | 39,826,538 | <p>I know sklearn has nice method to get cross validation scores:</p>
<pre><code> from sklearn.model_selection import cross_val_score
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
scores
</code></pre>
<p>I'd like to know scores with specific training and test set:</p>
<pre><code>train_list = [train1, train2, train3] # train1,2,3 is the training data sets
test_list = [test1, test2, test3] # # test1,2,3 is the test data sets
clf = svm.SVC(kernel='linear', C=1)
scores = some_nice_method(clf, train_list, test_list)
</code></pre>
<p>Is there such kind of method giving scores of particular separated data set in python?</p>
| 2 | 2016-10-03T07:34:52Z | 39,826,951 | <p>My suggestion is to use <a href="http://scikit-learn.org/0.17/modules/generated/sklearn.cross_validation.KFold.html" rel="nofollow">kfold cross validation</a> like below. In this case, you will get both train, test indices for a particular split along with the accuracy score.</p>
<pre><code>from sklearn import svm
from sklearn import datasets
from sklearn.cross_validation import KFold
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
X = iris.data
y = iris.target
clf = svm.SVC(kernel='linear', C=1)
kf = KFold(len(y), n_folds=5)
for train_index, test_index in kf:
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
score = accuracy_score(y_test, y_pred)
print score
</code></pre>
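<p>For intuition, the index splitting can be sketched with only the standard library — contiguous, unshuffled folds roughly like the old <code>KFold</code> default. This is an illustration of the idea, not sklearn's exact algorithm:</p>

```python
def kfold_indices(n, n_folds):
    # Distribute n samples over n_folds contiguous test folds; any
    # remainder goes to the first folds.
    sizes = [n // n_folds + (1 if i < n % n_folds else 0)
             for i in range(n_folds)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(kfold_indices(10, 5))
print(len(folds), folds[0][1])  # prints: 5 [0, 1]
```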
| 1 | 2016-10-03T08:01:31Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
loop stuck on first page | 39,826,586 | <p>Been using beautiful soup to iterate through pages, but for whatever reason I can't get the loop to advance beyond the first page. it seems like it should be easy because it's a text string, but it seems to loop back, maybe it's my structure not my text string?</p>
<p>Here's what I have:</p>
<pre><code>import csv
import urllib2
from bs4 import BeautifulSoup
f = open('nhlstats.csv', "w")
groups=['points', 'shooting', 'goaltending', 'defensive', 'timeonice', 'faceoffs', 'minor-penalties', 'major-penalties']
year = ["2016", "2015","2014","2013","2012"]
for yr in year:
for gr in groups:
url = "http://www.espn.com/nhl/statistics/player/_/stat/points/year/"+str(yr)
#www.espn.com/nhl/statistics/player/_/stat/points/year/2014/
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
pagecount = soup.findAll(attrs= {"class":"page-numbers"})[0].string
pageliteral = int(pagecount[5:])
for i in range(0,pageliteral):
number = int(((i*40) + 1))
URL = "http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/"+str(yr) + "/count/"+str(number)
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
row =[]
for ob in range(1,15):
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
f.write(str(yr) +","+",".join(row) + "\n")
f.close()
</code></pre>
<p>this gets the same first 40 records over and over. </p>
<p>I tried using <a href="https://automatetheboringstuff.com/chapter11/" rel="nofollow">this solution</a> as an if and did find that doing </p>
<pre><code>prevLink = soup.select('a[rel="nofollow"]')[0]
newurl = "http:" + prevLink.get('href')
</code></pre>
<p>did work better, but I'm not sure how to do the loop in such a way that it advances? possibly just tired but my loop there still just goes to the next set of records and gets stuck on <em>that</em> one. please help me fix my loop</p>
<p><strong>UPDATE</strong></p>
<p>my formatting was lost in the copy paste, my actual code looks like:</p>
<pre><code>import csv
import urllib2
from bs4 import BeautifulSoup
f = open('nhlstats.csv', "w")
groups=['points', 'shooting', 'goaltending', 'defensive', 'timeonice', 'faceoffs', 'minor-penalties', 'major-penalties']
year = ["2016", "2015","2014","2013","2012"]
for yr in year:
for gr in groups:
url = "http://www.espn.com/nhl/statistics/player/_/stat/points/year/"+str(yr)
#www.espn.com/nhl/statistics/player/_/stat/points/year/2014/
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
pagecount = soup.findAll(attrs= {"class":"page-numbers"})[0].string
pageliteral = int(pagecount[5:])
for i in range(0,pageliteral):
number = int(((i*40) + 1))
URL = "http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/"+str(yr) + "/count/"+str(number)
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
row =[]
for ob in range(1,15):
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
f.write(str(yr) +","+",".join(row) + "\n")
f.close()
</code></pre>
| 0 | 2016-10-03T07:38:46Z | 39,827,063 | <p>You are changing the URL many times before you are opening it the first time, due to an indentation error. Try this:</p>
<pre><code>for gr in groups:
    url = "...some_url..."
    page = urllib2.urlopen(url)
    # ...everything else should be indented...
</code></pre>
| 0 | 2016-10-03T08:08:24Z | [
"python",
"python-2.7",
"scripting",
"beautifulsoup"
]
|
loop stuck on first page | 39,826,586 | <p>Been using beautiful soup to iterate through pages, but for whatever reason I can't get the loop to advance beyond the first page. it seems like it should be easy because it's a text string, but it seems to loop back, maybe it's my structure not my text string?</p>
<p>Here's what I have:</p>
<pre><code>import csv
import urllib2
from bs4 import BeautifulSoup
f = open('nhlstats.csv', "w")
groups=['points', 'shooting', 'goaltending', 'defensive', 'timeonice', 'faceoffs', 'minor-penalties', 'major-penalties']
year = ["2016", "2015","2014","2013","2012"]
for yr in year:
for gr in groups:
url = "http://www.espn.com/nhl/statistics/player/_/stat/points/year/"+str(yr)
#www.espn.com/nhl/statistics/player/_/stat/points/year/2014/
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
pagecount = soup.findAll(attrs= {"class":"page-numbers"})[0].string
pageliteral = int(pagecount[5:])
for i in range(0,pageliteral):
number = int(((i*40) + 1))
URL = "http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/"+str(yr) + "/count/"+str(number)
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
row =[]
for ob in range(1,15):
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
f.write(str(yr) +","+",".join(row) + "\n")
f.close()
</code></pre>
<p>this gets the same first 40 records over and over. </p>
<p>I tried using <a href="https://automatetheboringstuff.com/chapter11/" rel="nofollow">this solution</a> as an if and did find that doing </p>
<pre><code>prevLink = soup.select('a[rel="nofollow"]')[0]
newurl = "http:" + prevLink.get('href')
</code></pre>
<p>did work better, but I'm not sure how to do the loop in such a way that it advances? possibly just tired but my loop there still just goes to the next set of records and gets stuck on <em>that</em> one. please help me fix my loop</p>
<p><strong>UPDATE</strong></p>
<p>my formatting was lost in the copy paste, my actual code looks like:</p>
<pre><code>import csv
import urllib2
from bs4 import BeautifulSoup
f = open('nhlstats.csv', "w")
groups=['points', 'shooting', 'goaltending', 'defensive', 'timeonice', 'faceoffs', 'minor-penalties', 'major-penalties']
year = ["2016", "2015","2014","2013","2012"]
for yr in year:
for gr in groups:
url = "http://www.espn.com/nhl/statistics/player/_/stat/points/year/"+str(yr)
#www.espn.com/nhl/statistics/player/_/stat/points/year/2014/
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
pagecount = soup.findAll(attrs= {"class":"page-numbers"})[0].string
pageliteral = int(pagecount[5:])
for i in range(0,pageliteral):
number = int(((i*40) + 1))
URL = "http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/"+str(yr) + "/count/"+str(number)
page = urllib2.urlopen(url)
soup=BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
row =[]
for ob in range(1,15):
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
f.write(str(yr) +","+",".join(row) + "\n")
f.close()
</code></pre>
| 0 | 2016-10-03T07:38:46Z | 39,828,579 | <p>Your code indenting was mostly at fault. Also it would be wise to actually use the CSV library you imported, this will automatically wrap the player names in quotes to avoid any commas inside from ruining the csv structure.</p>
<p>This works by looking for the link to the next page and extracting the starting count. This is then used to build your the next page get. If no next page can be found, it moves to the next year group. Note, the count is not a page count but a starting entry count.</p>
<pre><code>import csv
import urllib2
from bs4 import BeautifulSoup
groups= ['points', 'shooting', 'goaltending', 'defensive', 'timeonice', 'faceoffs', 'minor-penalties', 'major-penalties']
year = ["2016", "2015", "2014", "2013", "2012"]
with open('nhlstats.csv', "wb") as f_output:
csv_output = csv.writer(f_output)
for yr in year:
for gr in groups:
start_count = 1
while True:
#print "{}, {}, {}".format(yr, gr, start_count) # show progress
url = "http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/{}/count/{}".format(yr, start_count)
page = urllib2.urlopen(url)
soup = BeautifulSoup(page, "html.parser")
for tr in soup.select("#my-players-table tr[class*=player]"):
row = [yr]
for ob in range(1, 15):
player_info = tr('td')[ob].get_text(strip=True)
row.append(player_info)
csv_output.writerow(row)
try:
start_count = int(soup.find(attrs= {"class":"page-numbers"}).find_next('a')['href'].rsplit('/', 1)[1])
except:
break
</code></pre>
<p>Using <code>with</code> will also automatically close your file at the end.</p>
<p>This would give you a csv file starting as follows:</p>
<pre><code>2016,"Patrick Kane, RW",CHI,82,46,60,106,17,30,1.29,287,16.0,9,17,20
2016,"Jamie Benn, LW",DAL,82,41,48,89,7,64,1.09,247,16.6,5,17,13
2016,"Sidney Crosby, C",PIT,80,36,49,85,19,42,1.06,248,14.5,9,10,14
2016,"Joe Thornton, C",SJ,82,19,63,82,25,54,1.00,121,15.7,6,8,21
</code></pre>
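<p>The pagination hinges on pulling the next starting count out of the link's <code>href</code> with <code>rsplit</code>; that step in isolation (the URL below is just an example of the shape ESPN's links take):</p>

```python
href = "http://www.espn.com/nhl/statistics/player/_/stat/points/year/2016/count/41"
# rsplit('/', 1) splits once from the right, isolating the trailing count.
start_count = int(href.rsplit('/', 1)[1])
print(start_count)  # prints 41
```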
| 1 | 2016-10-03T09:37:16Z | [
"python",
"python-2.7",
"scripting",
"beautifulsoup"
]
|
calculating delta time between records in dataframe | 39,826,720 | <p>I have an interesting problem, I am trying to calculate the delta time between records done at different locations.</p>
<pre><code>id x y time
1 x1 y1 10
1 x1 y1 12
1 x2 y2 14
2 x4 y4 8
2 x5 y5 12
</code></pre>
<p>I am trying to get something like:</p>
<pre><code>id x y time delta
1 x1 y1 10 4
1 x2 y2 14 0
2 x4 y4 8 4
2 x5 y5 12 0
</code></pre>
<p>I have done this type of processing with HiveQL by using custom UDTF but was thinking how can I achieve this with DataFrame in general (may it be in R, Pandas, PySpark). Ideally, I am trying to find a solution for Python pandas and pyspark.</p>
<p>Any hint is appreciated, thank you for your time !</p>
| 2 | 2016-10-03T07:47:05Z | 39,829,090 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a> with <code>groupby</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.diff.html" rel="nofollow"><code>DataFrameGroupBy.diff</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.Series.shift.html" rel="nofollow"><code>shift</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a>:</p>
<pre><code>df1 = df.drop_duplicates(subset=['id','x','y']).copy()
df1['delta'] = df1.groupby(['id'])['time'].diff().shift(-1).fillna(0)
</code></pre>
<p>Final code:</p>
<pre><code>import pandas as pd

df = pd.read_csv("sampleInput.txt",
                 header=None,
                 usecols=[0,1,2,3],
                 names=['id','x','y','time'],
                 sep="\t")
delta = df.groupby(['id','x','y']).first().reset_index()
delta['delta'] = delta.groupby('id')['time'].diff().shift(-1).fillna(0)
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [111]: %timeit df.groupby(['id','x','y']).first().reset_index()
100 loops, best of 3: 2.42 ms per loop
In [112]: %timeit df.drop_duplicates(subset=['id','x','y']).copy()
1000 loops, best of 3: 658 µs per loop
</code></pre>
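<p>The same keep-first-then-diff idea can be written with only the standard library, for readers without pandas at hand (rows are assumed to be pre-sorted by id and time):</p>

```python
from itertools import groupby

rows = [
    (1, 'x1', 'y1', 10),
    (1, 'x1', 'y1', 12),
    (1, 'x2', 'y2', 14),
    (2, 'x4', 'y4', 8),
    (2, 'x5', 'y5', 12),
]

# Keep only the first record per (id, x, y), like drop_duplicates.
seen, firsts = set(), []
for r in rows:
    if r[:3] not in seen:
        seen.add(r[:3])
        firsts.append(r)

# Per id, delta is the next record's time minus this one's; last gets 0.
result = []
for _id, grp in groupby(firsts, key=lambda r: r[0]):
    grp = list(grp)
    for cur, nxt in zip(grp, grp[1:] + [None]):
        delta = (nxt[3] - cur[3]) if nxt else 0
        result.append(cur + (delta,))

print(result)
# [(1, 'x1', 'y1', 10, 4), (1, 'x2', 'y2', 14, 0),
#  (2, 'x4', 'y4', 8, 4), (2, 'x5', 'y5', 12, 0)]
```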
| 1 | 2016-10-03T10:06:06Z | [
"python",
"pandas",
"dataframe",
"spark-dataframe",
"pyspark-sql"
]
|
calculating delta time between records in dataframe | 39,826,720 | <p>I have an interesting problem, I am trying to calculate the delta time between records done at different locations.</p>
<pre><code>id x y time
1 x1 y1 10
1 x1 y1 12
1 x2 y2 14
2 x4 y4 8
2 x5 y5 12
</code></pre>
<p>I am trying to get something like:</p>
<pre><code>id x y time delta
1 x1 y1 10 4
1 x2 y2 14 0
2 x4 y4 8 4
2 x5 y5 12 0
</code></pre>
<p>I have done this type of processing with HiveQL by using custom UDTF but was thinking how can I achieve this with DataFrame in general (may it be in R, Pandas, PySpark). Ideally, I am trying to find a solution for Python pandas and pyspark.</p>
<p>Any hint is appreciated, thank you for your time !</p>
| 2 | 2016-10-03T07:47:05Z | 39,829,093 | <p>@jezrael thank you for the hints, it was very useful, here is the code </p>
<pre><code>import pandas as pd
df = pd.read_csv("sampleInput.txt", header=None,usecols=[0,1,2,3], names=['id','x','y','time'],sep="\t")
delta = df.groupby(['id','x','y']).first().reset_index()
delta['delta'] = delta.groupby('id')['time'].diff().shift(-1).fillna(0)
</code></pre>
<p>That takes </p>
<pre><code>1 x1 y1 10
1 x1 y1 12
1 x2 y2 14
2 x4 y4 8
2 x5 y5 12
</code></pre>
<p>and gives,</p>
<pre><code> id x y time delta
0 1 x1 y1 10 4
1 1 x2 y2 14 0
2 2 x4 y4 8 4
3 2 x5 y5 12 0
</code></pre>
| 0 | 2016-10-03T10:06:16Z | [
"python",
"pandas",
"dataframe",
"spark-dataframe",
"pyspark-sql"
]
|
Using an existing python3 install with anaconda/miniconda | 39,826,735 | <p>With <code>python3</code> previously installed via <code>homebrew</code> on macOS, I just downloaded <code>miniconda</code> (via <code>homebrew cask</code>), which brought in another full python setup, I believe.</p>
<p>Is it possible to install anaconda/miniconda <strong>without</strong> reinstalling python?
And, if so, would that be a bad idea?</p>
| 0 | 2016-10-03T07:48:01Z | 39,827,107 | <p>Anaconda comes with python for you but do not remove the original python that comes with the system -- many of the operating system's libs depend on it.</p>
<p>Anaconda manages its python executable and packages in its own (conda) directory. It changes the system path so the python inside the conda directory is the one used when you access python.</p>
| 0 | 2016-10-03T08:11:19Z | [
"python",
"homebrew",
"anaconda",
"miniconda",
"homebrew-cask"
]
|
Display image in Grayscale using vispy | 39,826,807 | <p>I'm working with a spatial light modulator (SLM) which is connected as a second monitor. The SLM has to receive 8-bit grayscale images.
I am currently working with vispy to display the images on the SLM, but I'm not sure if they are displayed correctly.
Is there any possibility to display an image in grayscale using vispy?</p>
<p>I display the images using this code:</p>
<pre><code>import sys
from vispy import scene
from vispy import app
import numpy as np
canvas = scene.SceneCanvas(keys='interactive')
canvas.size = 800, 600
canvas.show()
# Set up a viewbox to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()
# Create the image
img_data = *my image*
image = scene.visuals.Image(img_data, parent=view.scene)
# Set 2D camera (the camera will scale to the contents in the scene)
view.camera = scene.PanZoomCamera(aspect=1)
if __name__ == '__main__' and sys.flags.interactive == 0:
app.run()
</code></pre>
<p>from <a href="http://vispy.readthedocs.io/en/stable/examples/basics/scene/image.html" rel="nofollow">http://vispy.readthedocs.io/en/stable/examples/basics/scene/image.html</a></p>
<p>thanks for your help, and sorry for bad english</p>
| 0 | 2016-10-03T07:52:27Z | 39,843,353 | <p>You can transform your picture from RGB to gray (see <a href="http://stackoverflow.com/questions/12201577/how-can-i-convert-an-rgb-image-into-grayscale-in-python" title="rgba2gray">this post</a>) and then use the 'grays' colormap.</p>
<pre><code>import sys
from vispy import scene
from vispy import app
import numpy as np
from vispy.io import load_data_file, read_png
canvas = scene.SceneCanvas(keys='interactive')
canvas.size = 800, 600
canvas.show()
# Set up a viewbox to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()
# Define a function to tranform a picture to gray
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
# Load the image
img_data = read_png(load_data_file('mona_lisa/mona_lisa_sm.png'))
# Apply transformation
img_data = rgb2gray(img_data)
# Image visual
image = scene.visuals.Image(img_data, cmap='grays', parent=view.scene)
# Set 2D camera (the camera will scale to the contents in the scene)
view.camera = scene.PanZoomCamera(aspect=1)
view.camera.set_range()
view.camera.flip = (0, 1, 0)
if __name__ == '__main__' and sys.flags.interactive == 0:
app.run()
</code></pre>
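<p>The <code>rgb2gray</code> weights are the standard ITU-R BT.601 luma coefficients; per pixel the conversion is just a weighted sum, which can be checked without numpy:</p>

```python
def luma(r, g, b):
    # Same weights as the numpy version: np.dot(rgb, [0.299, 0.587, 0.114])
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 255)))  # prints 255: white keeps full intensity
print(round(luma(0, 0, 0)))        # prints 0: black stays at zero
```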
| 0 | 2016-10-04T02:40:42Z | [
"python",
"vispy"
]
|
How to average and compress data using django? | 39,826,859 | <p>This class has volts & frequency that are calculated every minute.
I want to take the average of each value (volts, frequency, etc.) over every 15 minutes of the recorded data and time.</p>
<p>Should I do it in SQL or it can be done by django?</p>
<pre><code>class LogsN (models.Model):
syv = models.ForeignKey (smodel.Syved, related_name='%(class)s')
data = models.ForeignKey (smodel.Data, related_name='%(class)s')
val = models.FloatField (null=True, blank = True)
timestamp = models.DateTimeField ()
objects = AccessManager()
</code></pre>
| 0 | 2016-10-03T07:55:48Z | 39,826,966 | <p>I think you are looking for this
<a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/db/aggregation/</a></p>
<pre><code>LogsN.objects.all().aggregate(Avg('val'))
</code></pre>
| 0 | 2016-10-03T08:02:40Z | [
"python",
"sql",
"django",
"aggregation"
]
|
How to average and compress data using django? | 39,826,859 | <p>This class has volts & frequency that are calculated every minute.
I want to take the average of each value (volts, frequency, etc.) over every 15 minutes of the recorded data and time.</p>
<p>Should I do it in SQL or it can be done by django?</p>
<pre><code>class LogsN (models.Model):
syv = models.ForeignKey (smodel.Syved, related_name='%(class)s')
data = models.ForeignKey (smodel.Data, related_name='%(class)s')
val = models.FloatField (null=True, blank = True)
timestamp = models.DateTimeField ()
objects = AccessManager()
</code></pre>
| 0 | 2016-10-03T07:55:48Z | 39,835,296 | <p>As posted by Sardorbek (I cannot comment yet): according to <a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/db/aggregation/</a> :</p>
<pre><code>LogsN.objects.all().aggregate(Avg('val'))['val__avg']
</code></pre>
<p>Just remember that the aggregate method returns a dictionary; if you want the value you will have to use the key <code>['val__avg']</code>.</p>
| 0 | 2016-10-03T15:35:16Z | [
"python",
"sql",
"django",
"aggregation"
]
|
Use list of integer type in a loop in python | 39,826,961 | <p>I have the following code:</p>
<pre><code>a=[]
b=[]
for s in range(10):
dw = s%5
if dw == 1:
WD = random.randint(60,100)
DD =[int(round(dc*WD,0)) for dc in [.2,.2,.2,.2,.2]]
for k in range(5):
a.append(DD[k])
print a
TCV = DD[dw]
DDPT = [int(round(pt*TCV)) for pt in [.3,.5,.2]]
for i in range(3):
b.append(DDPT[i])
for PT in range(3):
for p in DDPT[PT]:
print 't'
</code></pre>
<p>After running the code I get this error:</p>
<pre><code>for p in DDPT[PT]:
TypeError: 'int' object is not iterable.
</code></pre>
<p>I was wondering if someone can help me in this regard.</p>
<p>Thanks!</p>
| -8 | 2016-10-03T08:02:07Z | 39,827,020 | <p><code>DDPT</code> is an array of integers, as evidenced in this line:</p>
<p><code>DDPT = [int(round(pt*TCV)) for pt in [.3,.5,.2]]</code></p>
<p><code>DDPT[PT]</code> is some integer, and you are trying to iterate through that. Hence the error.</p>
<p>I would recommend giving your variables more descriptive names so these issues are easier to debug.</p>
<p>edit:</p>
<pre><code>for num_iters in DDPT:
    for iteration in range(num_iters):
        do_something()
</code></pre>
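<p>A minimal self-contained check of the difference (the values here are hypothetical stand-ins for the question's data):</p>

```python
DDPT = [3, 5, 2]  # a list of ints, like the one built by the list comprehension

# Iterating over a single element fails, because it is an int:
try:
    for p in DDPT[0]:
        pass
except TypeError as err:
    message = str(err)  # "'int' object is not iterable"

# Wrapping the element in range() iterates that many times instead:
count = 0
for num_iters in DDPT:
    for _ in range(num_iters):
        count += 1
print(count)  # 10 (= 3 + 5 + 2)
```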
| 4 | 2016-10-03T08:05:19Z | [
"python"
]
|
How can I get Spark to see code in a different module? | 39,827,165 | <p>I have a complicated function that I run over a dataset in Spark using the map function. It is in a different Python module. When map is called, the executor nodes do not have that code, and then the map function fails.</p>
<pre><code>s_cobDates = getCobDates() #returns a list of dates
sb_dataset = sc.broadcast(dataset) #fyi - it is not trivial to slice this into chunks per date
def sparkInnerLoop(n_cobDate):
n_dataset = sb_dataset.value
import someOtherModule
return someOtherModule.myComplicatedCalc(n_dataset)
results = s_cobDates.map(sparkInnerLoop).collect()
</code></pre>
<p>Spark then fails as it can't import myOtherModule.</p>
<p>So far I have got round it by creating a python package that contains someOtherModule and deploying that to the cluster in advance of my spark jobs, but that doesn't make for rapid prototyping.</p>
<p>How can I get spark to send the complete code to the executor nodes, without inlining all the code into "sparkInnerLoop"? That code is used elsewhere in my solution and I don't want code duplication.</p>
<p>I'm using an eight node cluster in stand alone mode, v 1.6.2, and the driver is running on my workstation in pycharm.</p>
| 1 | 2016-10-03T08:15:44Z | 39,836,437 | <p>It is possible to dynamically distribute Python modules using <code>SparkContext.addPyFile</code></p>
<pre><code>modules_to_distribute = ["foo.py", "bar.py"]
for module in modules_to_distribute:
sc.addPyFile(module)
</code></pre>
<p>All files distributed this way are placed on the Python path and accessible to the worker interpreters. It is also possible to distribute complete packages using egg files.</p>
<p>In general this method works only with pure Python libraries and cannot be used with C extensions.</p>
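<p>The effect can be simulated locally: Python imports modules straight from a zip archive once the archive is on <code>sys.path</code>, which is essentially what the workers see after the file is distributed. A self-contained sketch (module name and contents are invented for the example):</p>

```python
import importlib
import os
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "deps.zip")

# Pack a throwaway module into a zip archive.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("someothermodule.py", "def my_calc(x):\n    return x * 2\n")

# Putting the archive on sys.path makes its modules importable,
# mirroring what the distributed files provide on each worker.
sys.path.insert(0, archive)
mod = importlib.import_module("someothermodule")
print(mod.my_calc(21))  # 42
```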
| 0 | 2016-10-03T16:42:44Z | [
"python",
"apache-spark",
"pyspark"
]
|
How can I get Spark to see code in a different module? | 39,827,165 | <p>I have a complicated function that I run over a dataset in Spark using the map function. It is in a different Python module. When map is called, the executor nodes do not have that code, and then the map function fails.</p>
<pre><code>s_cobDates = getCobDates() #returns a list of dates
sb_dataset = sc.broadcast(dataset) #fyi - it is not trivial to slice this into chunks per date
def sparkInnerLoop(n_cobDate):
n_dataset = sb_dataset.value
import someOtherModule
return someOtherModule.myComplicatedCalc(n_dataset)
results = s_cobDates.map(sparkInnerLoop).collect()
</code></pre>
<p>Spark then fails as it can't import myOtherModule.</p>
<p>So far I have got round it by creating a python package that contains someOtherModule and deploying that to the cluster in advance of my spark jobs, but that doesn't make for rapid prototyping.</p>
<p>How can I get spark to send the complete code to the executor nodes, without inlining all the code into "sparkInnerLoop"? That code is used elsewhere in my solution and I don't want code duplication.</p>
<p>I'm using an eight node cluster in stand alone mode, v 1.6.2, and the driver is running on my workstation in pycharm.</p>
| 1 | 2016-10-03T08:15:44Z | 39,878,043 | <p>While the above answer works, it falls down if your modules are part of a package. Instead, it's possible to zip your modules and then add the zip file to your Spark context, so that they keep the correct package name.</p>
<pre><code>import os
import pathlib
import uuid
import zipfile

def ziplib():
    libpath = os.path.dirname(__file__)      # this should point to your packages directory
    # the original used an undefined randstr helper; uuid provides a random suffix
    zippath = r'c:\Temp\mylib-' + uuid.uuid4().hex[:6] + '.zip'
    zippath = os.path.abspath(zippath)
    zf = zipfile.PyZipFile(zippath, mode='w')
    try:
        zf.debug = 3                         # making it verbose, good for debugging
        zf.writepy(libpath)
        return zippath                       # return path to generated zip archive
    finally:
        zf.close()

sc = SparkContext(conf=conf)
zip_path = ziplib()                          # generate zip archive containing your lib
zip_path = pathlib.Path(zip_path).as_uri()
sc.addPyFile(zip_path)                       # add the entire archive to SparkContext
</code></pre>
| 0 | 2016-10-05T15:25:16Z | [
"python",
"apache-spark",
"pyspark"
]
|
way to update multiple different sub documents with different values within single document in mongodb using python | 39,827,194 | <p>I am working in Python with pandas dataframes and currently my dataframe is :</p>
<pre><code>product_id mock_test_id q_id q_correct_option language_id is_corr is_p is_m
2790 2999 1 1 1 1 1 1
2790 2999 2 1 1 1 1 1
2790 2999 3 2 1 0 1 1
2790 2999 4 3 1 2 1 1
2790 2999 5 3 1 3 1 1
</code></pre>
<p>and the collection document is like :</p>
<pre><code>{ "_id" : ObjectId("57eea81cd65b43a872024522"),
"langid" : 1,
"mocktest_id" : 2999,
"pid" : 2970,
"userid" : 1223,
"attempt" : {
"1" : { "seqid" : 1,
"o" : NumberLong(1),
"t" : NumberLong(749825),
"is_m" : 1,
"is_p" : 1,
"is_corr" : 0 },
"2" : { "seqid" : 2,
"o" : NumberLong(2),
"t" : NumberLong(749825),
"is_m" : 1,
"is_p" : 1,
"is_corr" : 1 }
}
}
</code></pre>
<p>Now I want to update this document in one go. Within the 'attempt' object the keys are q_id values, and q_id is also present in the dataframe, as can be seen above. For every combination of mock_test_id, product_id and q_id I want to update the is_p, is_m and is_corr fields in a single operation, where the is_p or is_corr field may or may not already be present.</p>
<p>Right now, I am iterating over dataframe and updating row by row : </p>
<pre><code>for index, row in stud_mock_result_df.iterrows():
db.UserAttemptData.update({'userid':user_id,'mocktest_id':mock_test_id,'pid':pid},{'$set':{'attempt.'+str(row['q_id'])+'.is_m':0,'attempt.'+str(row['q_id'])+'.is_p':1,'attempt.'+str(row['q_id'])+'.is_corr':row['is_corr']}})
</code></pre>
<p>How can I update it in one go? Please help, someone.</p>
| 0 | 2016-10-03T08:17:07Z | 39,828,415 | <p>Using one query you can set the same value on multiple objects of an array, but to set different values on different objects you have to issue multiple queries.</p>
<p>If you want to update multiple objects with the same value, use the <strong>{multi: true}</strong> option:</p>
<pre><code> db.UserAttemptData.update(condition, update, {multi: true}).
</code></pre>
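<p>Note that here each q_id needs a different value, so <code>{multi: true}</code> does not apply directly; the per-row updates can, however, be sent in one round trip by batching them (with pymongo, as a list of <code>UpdateOne</code> operations passed to <code>bulk_write</code>). A sketch of building that batch, with plain dicts standing in for the dataframe rows:</p>

```python
# Hypothetical rows mirroring the dataframe columns from the question.
rows = [
    {"q_id": 1, "is_corr": 1},
    {"q_id": 2, "is_corr": 0},
]
user_id, mock_test_id, pid = 1223, 2999, 2970

filt = {"userid": user_id, "mocktest_id": mock_test_id, "pid": pid}
updates = []
for row in rows:
    prefix = "attempt.%s." % row["q_id"]
    updates.append({"$set": {prefix + "is_m": 0,
                             prefix + "is_p": 1,
                             prefix + "is_corr": row["is_corr"]}})

# With pymongo this batch could be sent in one round trip, e.g.:
#   db.UserAttemptData.bulk_write([UpdateOne(filt, u) for u in updates])
print(updates[1]["$set"]["attempt.2.is_corr"])  # 0
```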
| 1 | 2016-10-03T09:28:57Z | [
"python",
"mongodb",
"pandas",
"dataframe"
]
|
Multiple images in a ttk label widget | 39,827,233 | <p>With ttk labels, it is possible to specify multiple images which are displayed according to the label's state. But I can't make it work. Here is the code.</p>
<pre><code>from tkinter import *
from tkinter.ttk import *
BITMAP0 = """
#define zero_width 24
#define zero_height 32
static char zero_bits[] = {
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f,
0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f,
0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00
};
"""
BITMAP1 = """
#define one_width 24
#define one_height 32
static char one_bits[] = {
0x00,0x00,0x00, 0x00,0x00,0x00, 0x00,0x00,0x0f, 0x00,0x00,0x0f,
0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0x00,0x00,0x0f, 0x00,0x00,0x0f,
0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0x00,0x00,0x0f, 0x00,0x00,0x0f,
0x00,0x00,0x0f, 0x00,0x00,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00
};
"""
root = Tk()
img0 = BitmapImage(data=BITMAP0, foreground='lime', background='black')
img1 = BitmapImage(data=BITMAP1, foreground='lime', background='black')
label = Label(root, image=(img0, 'active', img1))
label.pack()
</code></pre>
<p>The label is 'active' when the mouse passes over it. So the displayed digit should switch from a 0 to a 1 when the mouse passes over it. But it doesn't work.
Any help?
Python 3.5.1 / Windows Vista</p>
| 1 | 2016-10-03T08:19:56Z | 39,829,722 | <p>I find the docs a bit confusing but it looks like you want <code>'hover'</code> instead of <code>'active'</code>.</p>
<p>I am not aware of any source explaining which state flags are automatically set in which widgets in which conditions. What I did here was to place the mouse cursor over the label and then query the state by calling <code>label.state()</code>.</p>
| 1 | 2016-10-03T10:43:06Z | [
"python",
"tkinter",
"label",
"ttk"
]
|
I can not import caffe in pycharm but i can import in terminal. Why? | 39,827,242 | <p>I want to import caffe. I can import it in terminal but not in pycharm.</p>
<p>I have tried some suggestions like adding <code>include /usr/local/cuda-7.0/lib64</code> to <code>/user/etc/ld.so.conf</code> file but still it can not import this module. However, I think this is not a good solution as I am using the CPU mode only.</p>
<p><a href="http://i.stack.imgur.com/l2q3S.png" rel="nofollow"><img src="http://i.stack.imgur.com/l2q3S.png" alt="enter image description here"></a></p>
<p>I am using linux mint. </p>
<p>The output for <code>sys.path</code> in pycharm terminal is:</p>
<pre><code>>>> sys.path
['',
'/home/user/anaconda2/lib/python27.zip',
'/home/user/anaconda2/lib/python2.7',
'/home/user/anaconda2/lib/python2.7/plat-linux2',
'/home/user/anaconda2/lib/python2.7/lib-tk',
'/home/user/anaconda2/lib/python2.7/lib-old',
'/home/user/anaconda2/lib/python2.7/lib-dynload',
'/home/user/anaconda2/lib/python2.7/site-packages',
'/home/user/anaconda2/lib/python2.7/site-packages/Sphinx-1.4.1-y2.7.egg',
'/home/user/anaconda2/lib/python2.7/site-packages/setuptools-23.0.0-py2.7.egg']
>>>
</code></pre>
<p>and when I run <code>sys.path</code> in pycharm itself, I get:</p>
<pre><code> ['/opt/pycharm-community-2016.2.3/helpers/pydev',
'/home/user/',
'/opt/pycharm-community-2016.2.3/helpers/pydev',
'/home/user/anaconda2/lib/python27.zip',
'/home/user/anaconda2/lib/python2.7',
'/home/user/anaconda2/lib/python2.7/plat-linux2',
'/home/user/anaconda2/lib/python2.7/lib-tk',
'/home/user/anaconda2/lib/python2.7/lib-old',
'/home/user/anaconda2/lib/python2.7/lib-dynload',
'/home/user/anaconda2/lib/python2.7/site-packages',
'/home/user/anaconda2/lib/python2.7/site-packages/Sphinx-1.4.1-py2.7.egg',
'/home/user/anaconda2/lib/python2.7/site-packages/setuptools-23.0.0-py2.7.egg',
'/home/user/anaconda2/lib/python2.7/site-packages/IPython/extensions',
'/home/user/']
</code></pre>
<p>which is not exactly the same as the time I ran it in terminal.</p>
<p>Moreover, when I run <code>import caffe</code> in PyCharm the error is as below:</p>
<pre><code>/home/user/anaconda2/bin/python /home/user/important_commands.py
Traceback (most recent call last):
File "/home/user/important_commands.py", line 11, in <module>
import caffe
ImportError: No module named caffe
Process finished with exit code 1
</code></pre>
<p>could you please help me?</p>
| 0 | 2016-10-03T08:20:24Z | 39,892,292 | <p>I installed caffe using the PyCharm terminal too, but it did not work. Finally I added <code>sys.path.extend(["/home/user/caffe-master/python"])</code> to the Python console, and meanwhile I wrote the following in my code.</p>
<pre><code> import sys
sys.path.append("/home/user/caffe-master/python/")
import caffe
</code></pre>
<p>and it worked!!!</p>
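<p>The mechanism is easy to verify in isolation: appending a directory to <code>sys.path</code> makes any module inside it importable, regardless of the IDE. A self-contained check with a throwaway module (the names are invented):</p>

```python
import importlib
import os
import sys
import tempfile

moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "fakecaffe.py"), "w") as f:
    f.write("VERSION = '1.0'\n")

# Same idea as sys.path.append("/home/user/caffe-master/python/")
sys.path.append(moddir)
fakecaffe = importlib.import_module("fakecaffe")
print(fakecaffe.VERSION)  # 1.0
```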
| 0 | 2016-10-06T09:25:29Z | [
"python",
"pycharm",
"caffe"
]
|
Problems extending pandas Panels dynamically, DataFrame by DataFrame | 39,827,247 | <p>I want to construct a pandas panel dynamically, using the following - simplified - code:</p>
<pre><code>import pandas as pd
rows=range(0,3)
pl=pd.Panel()
for i in range(0,3):
pl[i]=pd.DataFrame()
for j in rows:
pl[i].set_value(j,'test_value','test')
</code></pre>
<p>Which seems to work fine. But when I then try to index the individual dataframes by </p>
<pre><code>for i in range(0,3):
print pl[i]
</code></pre>
<p>I get the output </p>
<pre><code>Empty DataFrame
Columns: []
Index: []
Empty DataFrame
Columns: []
Index: []
test_value
0 test20
1 test21
2 test22
</code></pre>
<p>Why are the first two frames empty? </p>
| 0 | 2016-10-03T08:20:46Z | 39,827,374 | <p>Use two for loops to solve this problem:</p>
<pre><code>import pandas as pd
rows=range(0,3)
pl=pd.Panel()
#first for loop to create dataframe
for i in range(0,3):
pl[i]=pd.DataFrame()
#Second for loop to assign values
for i in range(0,3):
for j in rows:
pl[i].set_value(j,'test_value','test')
</code></pre>
<p>This time you won't get empty dataframes :)</p>
| 1 | 2016-10-03T08:29:34Z | [
"python",
"pandas"
]
|
Pandas dataframe: how to plot candlesticks | 39,827,273 | <p>I have following data in dataframe to plot candlesticks. </p>
<pre><code> open high close low
date
2013-10-08 3.21 3.28 3.27 3.20
2013-10-09 3.25 3.28 3.26 3.22
2013-10-10 3.26 3.27 3.23 3.21
2013-10-11 3.25 3.28 3.27 3.23
2013-10-14 3.28 3.35 3.31 3.26
</code></pre>
<p>I tried to use candlestick_ohlc function from matplotlib.finance, but it seems that somehow I needed to change the data type of the index 'date'. I am new to python, still trying to figure out a way. </p>
<p>Any help would be appreciated, thank you. </p>
| 0 | 2016-10-03T08:22:44Z | 39,827,331 | <p>If the index is not a <code>datetime</code> (use <code>df.index.dtype</code> to find out which type it is), you can change the type to a datetime by using:</p>
<pre><code>df.index = pd.to_datetime(df.index)
</code></pre>
<p>(assuming your dataframe is called <code>df</code>)</p>
| 0 | 2016-10-03T08:26:40Z | [
"python",
"pandas",
"matplotlib",
"dataframe"
]
|
Read JSON Multiple Values into Bash Variable - not able to use any 3rd party tools like jq etc | 39,827,359 | <p>This has been asked a million times and I know there are a million solutions. However I'm restricted in that I can't install anything on this client server, so I have whatever bash can come up with :)</p>
<p>I'm referencing <a href="http://stackoverflow.com/questions/1955505/parsing-json-with-unix-tools">Parsing JSON with UNIX tools</a> and using this to read data and split into lines. </p>
<pre><code>$ cat demo.json
{"rows":[{"name":"server1.domain.com","Access":"Owner","version":"99","Business":"Owner1","Owner2":"Main_Apprve","Owner1":"","Owner2":"","BUS":"Marketing","type":"data","Egroup":["ALPHA","BETA","GAMA","DELTA"],"Ename":["D","U","G","T","V"],"stage":"TEST"}]}
</code></pre>
<p>However, as you can see, it splits the "Egroup" and other keys with multiple entries into single lines, making it a little bit more difficult.</p>
<pre><code>cat demo.json | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"rows":["name":"server1.domain.com"
"Access":"Owner"
"version":"99"
"Business":"Owner1"
"Owner2":"Main_Apprve"
"Owner1":""
"Owner2":""
"BUS":"Marketing"
"type":"data"
"Egroup":["ALPHA"
"BETA"
"GAMA"
"DELTA"]
"Ename":["D"
"U"
"G"
"T"
"V"]
"stage":"TEST"]
</code></pre>
<p>I'm trying to capture the data so I can list it using a shell script. How would you advise me to capture each variable and then reuse it in reporting in a shell script?</p>
<pre><code> grep -Po '"Egroup":.*?[^\\]",' demo.json
"Egroup":["ALPHA",
</code></pre>
<p>As you can see this wouldn't work for lines with more than 1 entry.</p>
<p>Thoughts appreciated. (By the way, I'm open to Python and Perl options, but without having to install any extra modules to use with JSON.)</p>
| 1 | 2016-10-03T08:28:41Z | 39,827,466 | <p>It's simple using <code>Python</code>.</p>
<p><strong>Example</strong></p>
<pre><code>$ python -c 'import sys, json; print json.load(sys.stdin)["rows"][0]["Egroup"]' <demo.json
[u'ALPHA', u'BETA', u'GAMA', u'DELTA']
</code></pre>
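<p>The same extraction works in a few lines of Python 3 with only the standard library, which avoids the Python-2-only <code>print</code> statement above (the literal below is a trimmed copy of demo.json):</p>

```python
import json

raw = '{"rows":[{"name":"server1.domain.com","Egroup":["ALPHA","BETA","GAMA","DELTA"],"stage":"TEST"}]}'
data = json.loads(raw)
egroup = data["rows"][0]["Egroup"]
print(",".join(egroup))  # ALPHA,BETA,GAMA,DELTA
```

<p>Joining with a known separator makes the list easy to split back apart in the calling shell script.</p>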
| 0 | 2016-10-03T08:35:57Z | [
"python",
"json",
"bash",
"perl",
"shell"
]
|
Read JSON Multiple Values into Bash Variable - not able to use any 3rd party tools like jq etc | 39,827,359 | <p>This has been asked a million times and I know there are a million solutions. However I'm restricted in that I can't install anything on this client server, so I have whatever bash can come up with :)</p>
<p>I'm referencing <a href="http://stackoverflow.com/questions/1955505/parsing-json-with-unix-tools">Parsing JSON with UNIX tools</a> and using this to read data and split into lines. </p>
<pre><code>$ cat demo.json
{"rows":[{"name":"server1.domain.com","Access":"Owner","version":"99","Business":"Owner1","Owner2":"Main_Apprve","Owner1":"","Owner2":"","BUS":"Marketing","type":"data","Egroup":["ALPHA","BETA","GAMA","DELTA"],"Ename":["D","U","G","T","V"],"stage":"TEST"}]}
</code></pre>
<p>However, as you can see, it splits the "Egroup" and other keys with multiple entries into single lines, making it a little bit more difficult.</p>
<pre><code>cat demo.json | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
"rows":["name":"server1.domain.com"
"Access":"Owner"
"version":"99"
"Business":"Owner1"
"Owner2":"Main_Apprve"
"Owner1":""
"Owner2":""
"BUS":"Marketing"
"type":"data"
"Egroup":["ALPHA"
"BETA"
"GAMA"
"DELTA"]
"Ename":["D"
"U"
"G"
"T"
"V"]
"stage":"TEST"]
</code></pre>
<p>I'm trying to capture the data so I can list it using a shell script. How would you advise me to capture each variable and then reuse it in reporting in a shell script?</p>
<pre><code> grep -Po '"Egroup":.*?[^\\]",' demo.json
"Egroup":["ALPHA",
</code></pre>
<p>As you can see this wouldn't work for lines with more than 1 entry.</p>
<p>Thoughts appreciated. (By the way, I'm open to Python and Perl options, but without having to install any extra modules to use with JSON.)</p>
| 1 | 2016-10-03T08:28:41Z | 39,829,199 | <p>Perl 1-liner using JSON module:</p>
<pre><code>perl -lane 'use JSON; my $data = decode_json($_); print join( ",", @{ $data->{rows}->[0]->{Egroup} } )' demo.json
</code></pre>
<p><strong>Output</strong></p>
<pre><code>ALPHA,BETA,GAMA,DELTA
</code></pre>
<p>If you do not have <a href="http://search.cpan.org/~makamaka/JSON-2.90/lib/JSON.pm" rel="nofollow">JSON</a> installed, instead of trying to reinvent a JSON parser, you can copy the source of <a href="http://cpansearch.perl.org/src/MAKAMAKA/JSON-PP-2.27400/lib/JSON/PP.pm" rel="nofollow">JSON::PP</a> (<em>PP means Pure Perl</em>) and put it in your working directory:</p>
<pre><code>/working/demo.json
/working/JSON/PP.pm # <- put it here
perl -lane -I/working 'use JSON::PP; my $data = decode_json($_); print join( ",", @{ $data->{rows}->[0]->{Egroup} } )' demo.json
</code></pre>
| 0 | 2016-10-03T10:12:29Z | [
"python",
"json",
"bash",
"perl",
"shell"
]
|
How to add a form as field attribute in a django ModelForm | 39,827,397 | <p>I have a ModelForm for a Product object set up like this: </p>
<pre><code>class ProductForm(forms.ModelForm):
compositon_choices = ((2L, u'Calcium (100mg)'), (3L, u'Iron (500mg)'))
composition_selection = forms.\
MultipleChoiceField(widget=forms.CheckboxSelectMultiple,
choices=compositon_choices )
class Meta:
model = Product
fields = [
'title', 'title_de', 'title_es', 'upc', 'description',
'description_en_gb', 'description_de',
'description_es', 'is_discountable', 'structure',
'unit_type', 'product_concentration',]
widgets = {
'structure': forms.HiddenInput()
}
</code></pre>
<p>In the example above I extended the ModelForm with a MultipleChoiceField by adding the composition_selection field (this works): </p>
<p>I would like the composoition_selection to be a form itself and not just a MultipleChoiceField: </p>
<pre><code>class ProductComponentForm(forms.Form):
component_amount = forms.IntegerField()
component_name = forms.CharField()
</code></pre>
<p>and then extend the ModelForm with this new form like this:</p>
<pre><code>class ProductForm(forms.ModelForm):
composition_selection = ProductComponentForm()
class Meta:
model = Product
fields = [
'title', 'title_de', 'title_es', 'upc', 'description',
'description_en_gb', 'description_de',
'description_es', 'is_discountable', 'structure',
'unit_type', 'product_concentration',]
widgets = {
'structure': forms.HiddenInput()
}
</code></pre>
<p>But I cannot get this to work. This ProductForm that I want to create never gets rendered,and nothing appears. Am I doing something wrong or missing something? What would be the best way to extend a ModelForm with a SubForm? </p>
| 0 | 2016-10-03T08:30:57Z | 39,835,031 | <p>Finally I understand what I did wrong. To make subforms in Django one needs formsets.
In my case I needed two different types of formsets because I had two different relationships that I wanted to change from one form. </p>
<ul>
<li>one to many relationship</li>
<li>many to many relationship</li>
</ul>
<p>Depending on which side of the relationship and which relationship type one wants to edit from within a single form, there are different approaches: </p>
<p>There is the inlineformset_factory:
<a href="https://docs.djangoproject.com/el/1.10/topics/forms/modelforms/#inline-formsets" rel="nofollow">https://docs.djangoproject.com/el/1.10/topics/forms/modelforms/#inline-formsets</a>. This type of formset is used when one wants to edit the <strong>many</strong> side of a one to many relationship.</p>
<p>If one wants to edit the one side of a one to many relationship the
modelformset_factory gets used:
<a href="https://docs.djangoproject.com/el/1.10/topics/forms/modelforms/#model-formsets" rel="nofollow">https://docs.djangoproject.com/el/1.10/topics/forms/modelforms/#model-formsets</a></p>
<p>One can create a model_formset, and then add this formset to the main formset of the main form. </p>
<p>When one wants to edit a many to many relationship within a single form, an inline_formset_factory can be created with the intermediary table of the many to many relationship. </p>
<p>This formset can then be added to the main form. </p>
<p>For my use case that I described above I ended up using a model_formset_factory and added that to the main form. </p>
| 0 | 2016-10-03T15:20:43Z | [
"python",
"django",
"django-models",
"django-forms"
]
|
Difference between the import(s) in python | 39,827,449 | <p>What is the difference between:</p>
<pre><code>from class_name import function_name
</code></pre>
<p>and:</p>
<pre><code>import class_name
</code></pre>
<p>Please specify if for the first one only the <code>function_name</code> is imported and for the second the full class.</p>
| -1 | 2016-10-03T08:34:29Z | 39,827,609 | <p>You can't import single methods from a class, only classes from modules or functions from modules. Please see <a href="http://stackoverflow.com/questions/8502287/how-to-import-only-the-class-methods-in-python">this</a> question and <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow">this tutorial</a>.</p>
| 0 | 2016-10-03T08:44:35Z | [
"python",
"python-import"
]
|
Send terminal input data when restarting a program (python) | 39,827,526 | <p>My script asks for input from the terminal first thing:</p>
<pre><code>ans = raw_input("Do thing A (1) / Do thing B (2)")
</code></pre>
<p>Then runs the code and restarts itself when an exception arises.</p>
<pre><code>def restart_program():
python = sys.executable
os.execl(python, python, * sys.argv)
</code></pre>
<p>The problem is I would need a human to input the option again so I tried this:</p>
<pre><code>def restart_program():
python = sys.executable
os.execl(python, python, '1', * sys.argv)
</code></pre>
<p>But it didn't work. How can I send the option when restarting?</p>
| 1 | 2016-10-03T08:39:50Z | 39,828,030 | <p>Like others mentioned in the comments, try to run your Python script via a shell script using a while loop, something like this (the first run is without command-line arguments):</p>
<pre><code>arg=''
while true
do
python restart.py $arg
arg='1'
sleep 1800
done
</code></pre>
<p>and in your python code check if argument was provided:</p>
<pre><code>try:
ans = sys.argv[1]
except IndexError:
ans = raw_input("Do thing A (1) / Do thing B (2)")
</code></pre>
<p>You just need to exit your program after the exception you are handling is raised, instead of restarting it from your code.</p>
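<p>The fallback logic can be exercised without a terminal by passing an argv-style list; the helper below is hypothetical, with a callable standing in for <code>raw_input</code>:</p>

```python
def pick_answer(argv, ask):
    # argv mimics sys.argv; argv[0] is the script name.
    try:
        return argv[1]
    except IndexError:
        return ask()  # stands in for raw_input("Do thing A (1) / Do thing B (2)")

print(pick_answer(["restart.py", "1"], lambda: "2"))  # "1": the argument wins
print(pick_answer(["restart.py"], lambda: "2"))       # "2": the fallback is used
```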
| 0 | 2016-10-03T09:09:27Z | [
"python",
"terminal",
"restart"
]
|