title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
Duplicates / common elements between two lists
| 39,614,083 |
<p>I have a stupid question for people who are familiar with lists in Python.
I want to get the common items in two lists. Assuming that I have this list :</p>
<pre><code>dates_list = ['2016-07-08 02:00:02',
'2016-07-08 02:00:17',
'2016-07-08 02:00:03',
'2016-07-08 02:00:20',
'2016-07-08 02:01:08',
'2016-07-08 02:00:09',
'2016-07-08 02:01:22',
'2016-07-08 02:01:33']
</code></pre>
<p>And a list named 'time_by_seconds' which contains lists of all the seconds of a day:</p>
<pre><code>time_by_seconds = [['2016-07-08 02:00:00',
'2016-07-08 02:00:01',
'2016-07-08 02:00:02',
'2016-07-08 02:00:03',
'2016-07-08 02:00:04',
'2016-07-08 02:00:05',
'2016-07-08 02:00:06',
etc ],
['2016-07-08 02:01:00',
'2016-07-08 02:01:01',
'2016-07-08 02:01:02',
'2016-07-08 02:01:03',
'2016-07-08 02:01:04',
etc ]]
</code></pre>
<p>This is my code to print the items if they are in this list:</p>
<pre><code>for item in dates_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>This is the result :</p>
<pre><code>2016-07-08 02:00:02
2016-07-08 02:00:17
2016-07-08 02:00:03
2016-07-08 02:00:20
2016-07-08 02:01:08
2016-07-08 02:00:09
2016-07-08 02:01:22
2016-07-08 02:01:33
</code></pre>
<p>But if I use another list, of length 49, I get duplicates. Concretely, I should have 49 elements in the result because all those dates exist in my time_by_seconds.
This is the list:</p>
<pre><code>beginning_time_list = ['2016-07-08 02:17:42',
'2016-07-08 02:05:35',
'2016-07-08 02:03:22',
'2016-07-08 02:26:33',
'2016-07-08 02:14:54',
'2016-07-08 02:05:13',
'2016-07-08 02:15:30',
'2016-07-08 02:01:53',
'2016-07-08 02:02:31',
'2016-07-08 02:00:08',
'2016-07-08 02:04:16',
'2016-07-08 02:08:44',
'2016-07-08 02:11:17',
'2016-07-08 02:01:40',
'2016-07-08 02:04:23',
'2016-07-08 02:01:34',
'2016-07-08 02:24:31',
'2016-07-08 02:00:27',
'2016-07-08 02:14:35',
'2016-07-08 02:00:57',
'2016-07-08 02:02:24',
'2016-07-08 02:02:46',
'2016-07-08 02:05:04',
'2016-07-08 02:11:26',
'2016-07-08 02:06:24',
'2016-07-08 02:04:32',
'2016-07-08 02:08:50',
'2016-07-08 02:08:27',
'2016-07-08 02:02:30',
'2016-07-08 02:03:59',
'2016-07-08 02:01:19',
'2016-07-08 02:02:09',
'2016-07-08 02:05:47',
'2016-07-08 02:02:36',
'2016-07-08 02:01:02',
'2016-07-08 02:02:58',
'2016-07-08 02:06:19',
'2016-07-08 02:02:34',
'2016-07-08 02:00:17',
'2016-07-08 02:10:03',
'2016-07-08 02:08:20',
'2016-07-08 02:02:36',
'2016-07-08 02:17:25',
'2016-07-08 02:07:19',
'2016-07-08 02:13:07',
'2016-07-08 02:03:51',
'2016-07-08 02:03:35',
'2016-07-08 02:14:53',
'2016-07-08 02:18:36']
</code></pre>
<p>The same code :</p>
<pre><code>for item in beginning_time_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>And this is the result :</p>
<pre><code>2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:05:35
2016-07-08 02:05:35
2016-07-08 02:03:22
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:05:13
2016-07-08 02:05:13
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:01:53
2016-07-08 02:02:31
2016-07-08 02:00:08
2016-07-08 02:04:16
2016-07-08 02:08:44
2016-07-08 02:08:44
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:01:40
2016-07-08 02:04:23
2016-07-08 02:01:34
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:00:27
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:00:57
2016-07-08 02:02:24
2016-07-08 02:02:46
2016-07-08 02:05:04
2016-07-08 02:05:04
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:06:24
2016-07-08 02:06:24
etc
</code></pre>
<p>Sorry there are 95 items !</p>
<p>Does someone know why I have duplicates?
Thanks</p>
| 0 |
2016-09-21T10:32:09Z
| 39,614,264 |
<p>In order to find common elements in two lists, you may use <code>set()</code> as:</p>
<pre><code>>>> a = [1, 2, 3, 4]
>>> b = [3, 4, 5, 6]
>>> list(set(a).intersection(set(b)))
[3, 4]
</code></pre>
<p>In your case, <code>b</code> is a list of lists, so you first need to flatten it. For that, you can use <code>itertools.chain()</code>:</p>
<pre><code>>>> from itertools import chain
>>> a = [1, 2, 3, 4]
>>> b = [[3, 5, 6], [4, 8, 9]]
>>> list(set(a).intersection(set(chain.from_iterable(b))))
[3, 4]
</code></pre>
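<p>Applied to the data in the question (a sketch, assuming the lists shown above), the same pattern collects each matching timestamp exactly once:</p>
<pre><code>from itertools import chain

common_dates = set(beginning_time_list).intersection(chain.from_iterable(time_by_seconds))
print len(common_dates)
</code></pre>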
| 2 |
2016-09-21T10:40:31Z
|
[
"python",
"list",
"python-2.7",
"for-loop"
] |
Duplicates / common elements between two lists
| 39,614,083 |
<p>I have a stupid question for people who are familiar with lists in Python.
I want to get the common items in two lists. Assuming that I have this list :</p>
<pre><code>dates_list = ['2016-07-08 02:00:02',
'2016-07-08 02:00:17',
'2016-07-08 02:00:03',
'2016-07-08 02:00:20',
'2016-07-08 02:01:08',
'2016-07-08 02:00:09',
'2016-07-08 02:01:22',
'2016-07-08 02:01:33']
</code></pre>
<p>And a list named 'time_by_seconds' which contains lists of all the seconds of a day:</p>
<pre><code>time_by_seconds = [['2016-07-08 02:00:00',
'2016-07-08 02:00:01',
'2016-07-08 02:00:02',
'2016-07-08 02:00:03',
'2016-07-08 02:00:04',
'2016-07-08 02:00:05',
'2016-07-08 02:00:06',
etc ],
['2016-07-08 02:01:00',
'2016-07-08 02:01:01',
'2016-07-08 02:01:02',
'2016-07-08 02:01:03',
'2016-07-08 02:01:04',
etc ]]
</code></pre>
<p>This is my code to print the items if they are in this list:</p>
<pre><code>for item in dates_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>This is the result :</p>
<pre><code>2016-07-08 02:00:02
2016-07-08 02:00:17
2016-07-08 02:00:03
2016-07-08 02:00:20
2016-07-08 02:01:08
2016-07-08 02:00:09
2016-07-08 02:01:22
2016-07-08 02:01:33
</code></pre>
<p>But if I use another list, of length 49, I get duplicates. Concretely, I should have 49 elements in the result because all those dates exist in my time_by_seconds.
This is the list:</p>
<pre><code>beginning_time_list = ['2016-07-08 02:17:42',
'2016-07-08 02:05:35',
'2016-07-08 02:03:22',
'2016-07-08 02:26:33',
'2016-07-08 02:14:54',
'2016-07-08 02:05:13',
'2016-07-08 02:15:30',
'2016-07-08 02:01:53',
'2016-07-08 02:02:31',
'2016-07-08 02:00:08',
'2016-07-08 02:04:16',
'2016-07-08 02:08:44',
'2016-07-08 02:11:17',
'2016-07-08 02:01:40',
'2016-07-08 02:04:23',
'2016-07-08 02:01:34',
'2016-07-08 02:24:31',
'2016-07-08 02:00:27',
'2016-07-08 02:14:35',
'2016-07-08 02:00:57',
'2016-07-08 02:02:24',
'2016-07-08 02:02:46',
'2016-07-08 02:05:04',
'2016-07-08 02:11:26',
'2016-07-08 02:06:24',
'2016-07-08 02:04:32',
'2016-07-08 02:08:50',
'2016-07-08 02:08:27',
'2016-07-08 02:02:30',
'2016-07-08 02:03:59',
'2016-07-08 02:01:19',
'2016-07-08 02:02:09',
'2016-07-08 02:05:47',
'2016-07-08 02:02:36',
'2016-07-08 02:01:02',
'2016-07-08 02:02:58',
'2016-07-08 02:06:19',
'2016-07-08 02:02:34',
'2016-07-08 02:00:17',
'2016-07-08 02:10:03',
'2016-07-08 02:08:20',
'2016-07-08 02:02:36',
'2016-07-08 02:17:25',
'2016-07-08 02:07:19',
'2016-07-08 02:13:07',
'2016-07-08 02:03:51',
'2016-07-08 02:03:35',
'2016-07-08 02:14:53',
'2016-07-08 02:18:36']
</code></pre>
<p>The same code :</p>
<pre><code>for item in beginning_time_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>And this is the result :</p>
<pre><code>2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:05:35
2016-07-08 02:05:35
2016-07-08 02:03:22
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:05:13
2016-07-08 02:05:13
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:01:53
2016-07-08 02:02:31
2016-07-08 02:00:08
2016-07-08 02:04:16
2016-07-08 02:08:44
2016-07-08 02:08:44
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:01:40
2016-07-08 02:04:23
2016-07-08 02:01:34
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:00:27
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:00:57
2016-07-08 02:02:24
2016-07-08 02:02:46
2016-07-08 02:05:04
2016-07-08 02:05:04
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:06:24
2016-07-08 02:06:24
etc
</code></pre>
<p>Sorry there are 95 items !</p>
<p>Does someone know why I have duplicates?
Thanks</p>
| 0 |
2016-09-21T10:32:09Z
| 39,616,046 |
<pre><code>import collections

def flatten(iterable):
    for item in iterable:
        if isinstance(item, (str, bytes)):
            # yield strings/bytes whole instead of recursing into their characters
            yield item
        elif isinstance(item, collections.Sequence):
            # recurse into nested lists/tuples ("yield from" needs Python 3.3+)
            yield from flatten(item)
        else:
            yield item

a = [1, 6, 10]
b = [[0, 1, 2], 3, [4], [5, (6, 7), 8], 9]
common_items = set(a) & set(flatten(b))
</code></pre>
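<p>The <code>elif</code> is important: without it a string would also match the <code>Sequence</code> branch and the generator would recurse into it without end. Applied to the lists from the question (a sketch), the same idea reads:</p>
<pre><code>common_dates = set(beginning_time_list) & set(flatten(time_by_seconds))
</code></pre>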
| 0 |
2016-09-21T12:02:06Z
|
[
"python",
"list",
"python-2.7",
"for-loop"
] |
Assigning Group ID to components in networkx
| 39,614,149 |
<p>I have a graph which consists of nodes having "parentid" of hotels and "phone_search" stored in them.
My main aim in building this graph was to connect all "parentid" values which share a "phone_search" (recursively): e.g., if parentid A has phone_search 1,2; B has 2,3; C has 3,4; D has 5,6 and E has 6,7, then A, B and C will be grouped in one cluster and D and E in another cluster.</p>
<p>This is my code to build the network:</p>
<pre><code>from pymongo import MongoClient # To import client for MongoDB
import networkx as nx
import pickle
G = nx.Graph()
#Defining variables
hotels = []
phones = []
allResult = []
finalResult = []
#dictNx = {}
# Initializing MongoDB client
client = MongoClient()
# Connection
db = client.hotel
collection = db.hotelData
for post in collection.find():
hotels.append(post)
for hotel in hotels:
try:
phones = hotel["phone_search"].split("|")
for phone in phones:
if phone == '':
pass
else:
G.add_edge(hotel["parentid"],phone)
except:
phones = hotel["phone_search"]
if phone == '':
pass
else:
G.add_edge(hotel["parentid"],phone)
# nx.write_gml(G,"export.gml")
pickle.dump(G, open('/home/justdial/newHotel/graph.txt', 'w'))
</code></pre>
<p><strong>What I want to do</strong>: I want to assign a group ID to each component and store it into a dictionary so that I can access them with ease every time directly from the dictionary. </p>
<p><strong>Example</strong> : Gid 1 will contain some parentids and phone_searches which are in the same cluster. Similarly Gid 2 will contain nodes from another cluster and so on... </p>
<p>I have one more doubt. Is accessing the nodes from dictionary using group ID faster than performing a bfs on networkx graph?</p>
| 1 |
2016-09-21T10:35:16Z
| 39,614,615 |
<p>You basically want the lists of nodes grouped by their component (not cluster), which is fairly straightforward. You need <a href="http://networkx.readthedocs.io/en/stable/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html" rel="nofollow"><code>connected_component_subgraphs()</code></a>.</p>
<pre><code>G = nx.caveman_graph(3, 4) # generate example with 3 components of four members each
components = nx.connected_component_subgraphs(G)
comp_dict = {idx: comp.nodes() for idx, comp in enumerate(components)}
print comp_dict
# {0: [0, 1, 2, 3], 1: [4, 5, 6, 7], 2: [8, 9, 10, 11]}
</code></pre>
<p>In case you want the component IDs as node attributes:</p>
<pre><code>attr = {n: comp_id for comp_id, nodes in comp_dict.items() for n in nodes}
nx.set_node_attributes(G, "component", attr)
print G.nodes(data=True)
# [(0, {'component': 0}), (1, {'component': 0}), (2, {'component': 0}), (3, {'component': 0}), (4, {'component': 1}), (5, {'component': 1}), (6, {'component': 1}), (7, {'component': 1}), (8, {'component': 2}), (9, {'component': 2}), (10, {'component': 2}), (11, {'component': 2})]
</code></pre>
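<p>Looking up the members of a component by its ID is then a plain dictionary access (a quick usage sketch based on the <code>comp_dict</code> built above):</p>
<pre><code>print comp_dict[1]
# [4, 5, 6, 7]
</code></pre>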
| 1 |
2016-09-21T10:57:03Z
|
[
"python",
"dictionary",
"grouping",
"networkx"
] |
How to work with cookies with httplib in python
| 39,614,307 |
<p>I am sending soap request using httplib but when I send it I am getting the following issue:</p>
<pre><code>INFO:root:Response:<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Authorization Required</title>
</head><body>
<h1>Authorization Required</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
</body></html>
</code></pre>
<p>I am already setting these headers:</p>
<pre><code>auth = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
headers = {"Content-type": "text/xml;charset=utf-8","SOAPAction":"\"\"","Authorization": "Basic %s" % auth}
</code></pre>
<p>If I send the request with SoapUI instead of Python, it seems the server sends back a cookie and SoapUI then resends the request with that cookie.</p>
| 1 |
2016-09-21T10:42:53Z
| 39,633,251 |
<p>Instead of httplib, I used the <code>urllib2</code> module. I did not need to set a cookie; I changed the code to use Digest authentication instead of Basic authentication:</p>
<pre><code>import urllib2

# uri / webServiceUrl2 / data are the service URL and SOAP payload used in the original request
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, uri=uri, user=username, passwd=password)
authHandler = urllib2.HTTPDigestAuthHandler(password_mgr)
opener = urllib2.build_opener(authHandler)
response = opener.open(webServiceUrl2, data)
res = response.read()
</code></pre>
| 1 |
2016-09-22T07:49:26Z
|
[
"python",
"python-2.7"
] |
how __name__ changes in a python decorator
| 39,614,346 |
<p>Recently I'm learning Python decorator and the use of <code>functools.wraps</code>.</p>
<pre><code>def a():
def b():
def c():
print('hello')
return c
return b
print a.__name__
#output:a
</code></pre>
<p>I understand why the output is a. But I don't know how <code>__name__</code> changes in the following code.</p>
<pre><code>def log(text):
def decorator(func):
def wrapper(*args, **kw):
print('%s %s():' % (text, func.__name__))
return func(*args, **kw)
return wrapper
return decorator
@log('...')
def simple():
print('*' * 20)
print simple.__name__
#output:wrapper
</code></pre>
<p>Why is the output 'wrapper' rather than 'decorator' or 'log'?</p>
| 0 |
2016-09-21T10:44:37Z
| 39,614,464 |
<p>The point of decorators is to <em>replace</em> a function or class with what is returned by the decorator, when called with that function/class as argument. Decorators with arguments are a bit more convoluted, as you first call the outer method (<code>log</code>) to obtain a "parameterized" decorator (<code>decorator</code>), then you call that one and obtain the final function (<code>wrapper</code>), which will replace the decorated function (<code>simple</code>).</p>
<p>So, to give it some structure,</p>
<ol>
<li>call <code>log</code> with <code>'...'</code> as argument and obtain <code>decorator</code></li>
<li>call <code>decorator</code> with <code>simple</code> as argument and obtain <code>wrapper</code></li>
<li>replace <code>simple</code> with <code>wrapper</code></li>
</ol>
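<p>This is also why <code>functools.wraps</code> exists: it copies metadata such as <code>__name__</code> from the decorated function onto the wrapper. A minimal sketch (an illustration, not part of the question's code):</p>
<pre><code>import functools

def log(text):
    def decorator(func):
        @functools.wraps(func)          # copy func.__name__, __doc__, ... onto wrapper
        def wrapper(*args, **kw):
            print('%s %s():' % (text, func.__name__))
            return func(*args, **kw)
        return wrapper
    return decorator

@log('...')
def simple():
    print('*' * 20)

print simple.__name__
#output:simple
</code></pre>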
| 0 |
2016-09-21T10:50:01Z
|
[
"python"
] |
how __name__ changes in a python decorator
| 39,614,346 |
<p>Recently I'm learning Python decorator and the use of <code>functools.wraps</code>.</p>
<pre><code>def a():
def b():
def c():
print('hello')
return c
return b
print a.__name__
#output:a
</code></pre>
<p>I understand why the output is a. But I don't know how <code>__name__</code> changes in the following code.</p>
<pre><code>def log(text):
def decorator(func):
def wrapper(*args, **kw):
print('%s %s():' % (text, func.__name__))
return func(*args, **kw)
return wrapper
return decorator
@log('...')
def simple():
print('*' * 20)
print simple.__name__
#output:wrapper
</code></pre>
<p>Why is the output 'wrapper' rather than 'decorator' or 'log'?</p>
| 0 |
2016-09-21T10:44:37Z
| 39,614,583 |
<pre><code>@log('...')
def simple(...
</code></pre>
<p>is equivalent to</p>
<pre><code>def simple(...
simple = log('...')(simple)
</code></pre>
<p>so <code>log</code> is actually called, returning <code>decorator</code>, which is called with <code>simple</code> as argument which is then replaced by <code>decorator</code>'s return value, which is the function <code>wrapper</code>, thus its <code>__name__</code> is <code>wrapper</code>.</p>
| 0 |
2016-09-21T10:55:36Z
|
[
"python"
] |
how __name__ changes in a python decorator
| 39,614,346 |
<p>Recently I'm learning Python decorator and the use of <code>functools.wraps</code>.</p>
<pre><code>def a():
def b():
def c():
print('hello')
return c
return b
print a.__name__
#output:a
</code></pre>
<p>I understand why the output is a. But I don't know how <code>__name__</code> changes in the following code.</p>
<pre><code>def log(text):
def decorator(func):
def wrapper(*args, **kw):
print('%s %s():' % (text, func.__name__))
return func(*args, **kw)
return wrapper
return decorator
@log('...')
def simple():
print('*' * 20)
print simple.__name__
#output:wrapper
</code></pre>
<p>Why is the output 'wrapper' rather than 'decorator' or 'log'?</p>
| 0 |
2016-09-21T10:44:37Z
| 39,614,587 |
<p>Some basics:</p>
<pre><code>@decorator
def f():
pass
</code></pre>
<p>is equivalent to:</p>
<pre><code>def f():
pass
f = decorator(f)
</code></pre>
<p>Decorator with args:</p>
<pre><code>@decorator(*args, **kwargs)
def f():
pass
</code></pre>
<p>is equivalent to:</p>
<pre><code>def f():
pass
decorator_instance = decorator(*args, **kwargs)
f = decorator_instance(f)
</code></pre>
<p>With knowing so, we may rewrite your example to:</p>
<pre><code>def simple():
print('*' * 20)
log_instance = log('...')
simple = log_instance(simple)
</code></pre>
<p>Let's analyze what happens in last two lines:</p>
<ul>
<li><code>log_instance</code> is a <code>decorator</code> function, and text variable within it is equal to '...'</li>
<li>Since <code>decorator</code> (regardless of <code>text</code> value) returns function named wrapper, <code>simple</code> is replaced with function named <code>wrapper</code></li>
</ul>
| 1 |
2016-09-21T10:55:51Z
|
[
"python"
] |
Using an IF THEN loop with nested JSON files in Python
| 39,614,449 |
<p>I am currently writing a program which uses the CompaniesHouse API to return a json file containing information about a certain company.</p>
<p>I am able to retrieve the data easily using the following commands:</p>
<pre><code>r = requests.get('https://api.companieshouse.gov.uk/company/COMPANY-NO/filing-history', auth=('API-KEY', ''))
data = r.json()
</code></pre>
<p>With that information I can do an awful lot, however I've run into a problem which I was hoping you could possibly help me with. What I aim to do is go through every nested entry in the json file and check if the values of certain keys match certain criteria; if the values of 2 keys match, then other code is executed.</p>
<p>One of the keys is the date of an entry, and I would like to ignore results that are older than a certain date, I have attempted to do this with the following:</p>
<pre><code>date_threshold = datetime.date.today() - datetime.timedelta(days=30)
for each in data["items"]:
date = ['date']
type = ['type']
if date < date_threshold and type is "RM01":
print("wwwwww")
</code></pre>
<p>In case it isn't clear, what I'm attempting to do (albeit very badly) is assign each of the entries to a variable, which then gets tested against certain criteria.</p>
<p>Although this doesn't work, python spits out a variable mismatch error:</p>
<pre><code>TypeError: unorderable types: list() < datetime.date()
</code></pre>
<p>Which makes me think the date is being stored as a string, and so I can't compare it to the datetime value set earlier, but when I check the API documentation (<a href="https://developer.companieshouse.gov.uk/api/docs/company/company_number/filing-history/filingHistoryItem-resource.html" rel="nofollow">https://developer.companieshouse.gov.uk/api/docs/company/company_number/filing-history/filingHistoryItem-resource.html</a>), it says clearly that the 'date' entry is returned as a date type.</p>
<p>What am I doing wrong? It's very clear that I'm extremely new to Python given what I presume is the atrocity of my code, but in my head it seems to make at least a little sense. In case none of this is clear, I basically want to go through all the entries in the json file, and if the date and type match a certain description, then other code can be executed (in this case I have just used random text).</p>
<p>Any help is greatly appreciated! Let me know if you need anything cleared up.</p>
<p>:)</p>
<p>EDIT</p>
<p>After tweaking my code to the below:</p>
<pre><code>for each in data["items"]:
date = each['date']
type = each['type']
if date is '2016-09-15' and type is "RM01":
print("wwwwww")
</code></pre>
<p>The code executes without any errors, but the words aren't printed, even though I know there is an entry in the json file with that exact date, and that exact type, any thoughts?</p>
<p>SOLUTION:</p>
<p>Thanks to everyone for helping me out. I had made a couple of very basic errors; the code that works as expected is below:</p>
<pre><code>for each in data["items"]:
date = each['date']
typevariable = each['type']
if date == '2016-09-15' and typevariable == "RM01":
print("wwwwww")
</code></pre>
<p>This prints the word "wwwwww" 3 times, which is correct seeing as there are 3 entries in the JSON that fulfil those criteria.</p>
| 0 |
2016-09-21T10:49:05Z
| 39,614,543 |
<p>You need to first convert your date variable to a datetime type using <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime" rel="nofollow">datetime.strptime()</a></p>
<p>You are comparing a list type variable <code>date</code> with datetime type variable <code>date_threshold</code>.</p>
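<p>A minimal sketch of the corrected loop (assuming the API returns the date as an ISO string such as '2016-09-15', and keeping the question's variable names):</p>
<pre><code>import datetime

date_threshold = datetime.date.today() - datetime.timedelta(days=30)

for each in data["items"]:
    entry_date = datetime.datetime.strptime(each['date'], '%Y-%m-%d').date()
    if entry_date >= date_threshold and each['type'] == "RM01":   # entries from the last 30 days
        print("wwwwww")
</code></pre>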
| 0 |
2016-09-21T10:53:37Z
|
[
"python",
"json",
"python-3.x"
] |
Using a numpy array to assign values to another array
| 39,614,516 |
<p>I have the following numpy array <code>matrix</code> ,</p>
<pre><code>matrix = np.zeros((3,5), dtype = int)
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
</code></pre>
<p>Suppose I have this numpy array <code>indices</code> as well</p>
<pre><code>indices = np.array([[1,3], [2,4], [0,4]])
array([[1, 3],
[2, 4],
[0, 4]])
</code></pre>
<p><strong>Question:</strong> How can I assign <code>1</code>s to the elements in the <code>matrix</code> where their indices are specified by the <code>indices</code> array. A vectorized implementation is expected.</p>
<p>For more clarity, the output should look like:</p>
<pre><code> array([[0, 1, 0, 1, 0], #[1,3] elements are changed
[0, 0, 1, 0, 1], #[2,4] elements are changed
[1, 0, 0, 0, 1]]) #[0,4] elements are changed
</code></pre>
| 2 |
2016-09-21T10:52:08Z
| 39,614,559 |
<p>Here's one approach using <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow"><code>NumPy's fancy-indexing</code></a> -</p>
<pre><code>matrix[np.arange(matrix.shape[0])[:,None],indices] = 1
</code></pre>
<p><strong>Explanation</strong></p>
<p>We create the row indices with <code>np.arange(matrix.shape[0])</code> -</p>
<pre><code>In [16]: idx = np.arange(matrix.shape[0])
In [17]: idx
Out[17]: array([0, 1, 2])
In [18]: idx.shape
Out[18]: (3,)
</code></pre>
<p>The column indices are already given as <code>indices</code> -</p>
<pre><code>In [19]: indices
Out[19]:
array([[1, 3],
[2, 4],
[0, 4]])
In [20]: indices.shape
Out[20]: (3, 2)
</code></pre>
<p>Let's make a schematic diagram of the shapes of row and column indices, <code>idx</code> and <code>indices</code> -</p>
<pre><code>idx (row) : 3
indices (col) : 3 x 2
</code></pre>
<p>For using the row and column indices for indexing into input array <code>matrix</code>, we need to make them broadcastable against each other. One way would be to introduce a new axis into <code>idx</code>, making it <code>2D</code> by pushing the elements into the first axis and allowing a singleton dim as the last axis with <code>idx[:,None]</code>, as shown below -</p>
<pre><code>idx (row) : 3 x 1
indices (col) : 3 x 2
</code></pre>
<p>Internally, <code>idx</code> would be broadcasted, like so -</p>
<pre><code>In [22]: idx[:,None]
Out[22]:
array([[0],
[1],
[2]])
In [23]: indices
Out[23]:
array([[1, 3],
[2, 4],
[0, 4]])
In [24]: np.repeat(idx[:,None],2,axis=1) # indices has length of 2 along cols
Out[24]:
array([[0, 0], # Internally broadcasting would be like this
[1, 1],
[2, 2]])
</code></pre>
<p>Thus, the broadcasted elements from <code>idx</code> would be used as row indices and column indices from <code>indices</code> for indexing into <code>matrix</code> for setting elements in it. Since, we had -</p>
<p><code>idx = np.arange(matrix.shape[0])</code>, </p>
<p>Thus, we would end up with -</p>
<p><code>matrix[np.arange(matrix.shape[0])[:,None],indices]</code> for setting elements.</p>
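<p>Putting it all together with the arrays from the question (a quick, runnable recap of the one-liner above):</p>
<pre><code>import numpy as np

matrix = np.zeros((3,5), dtype = int)
indices = np.array([[1,3], [2,4], [0,4]])

matrix[np.arange(matrix.shape[0])[:,None],indices] = 1
print(matrix)
# [[0 1 0 1 0]
#  [0 0 1 0 1]
#  [1 0 0 0 1]]
</code></pre>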
| 5 |
2016-09-21T10:54:09Z
|
[
"python",
"arrays",
"numpy",
"indexing",
"vectorization"
] |
Using a numpy array to assign values to another array
| 39,614,516 |
<p>I have the following numpy array <code>matrix</code> ,</p>
<pre><code>matrix = np.zeros((3,5), dtype = int)
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
</code></pre>
<p>Suppose I have this numpy array <code>indices</code> as well</p>
<pre><code>indices = np.array([[1,3], [2,4], [0,4]])
array([[1, 3],
[2, 4],
[0, 4]])
</code></pre>
<p><strong>Question:</strong> How can I assign <code>1</code>s to the elements in the <code>matrix</code> where their indices are specified by the <code>indices</code> array. A vectorized implementation is expected.</p>
<p>For more clarity, the output should look like:</p>
<pre><code> array([[0, 1, 0, 1, 0], #[1,3] elements are changed
[0, 0, 1, 0, 1], #[2,4] elements are changed
[1, 0, 0, 0, 1]]) #[0,4] elements are changed
</code></pre>
| 2 |
2016-09-21T10:52:08Z
| 39,647,476 |
<p>This involves a loop and hence may not be very efficient for large arrays:</p>
<pre><code>for i in range(len(indices)):
matrix[i,indices[i]] = 1
> matrix
Out[73]:
array([[0, 1, 0, 1, 0],
[0, 0, 1, 0, 1],
[1, 0, 0, 0, 1]])
</code></pre>
| 0 |
2016-09-22T19:33:13Z
|
[
"python",
"arrays",
"numpy",
"indexing",
"vectorization"
] |
sending raw data in python requests
| 39,614,675 |
<p>I'm trying to send a POST request with python requests, containing the following data: </p>
<blockquote>
<p>__VIEWSTATE=%2FwEPDwUJODgwODc4MzI2D2QWBAIEDxYCHgdWaXNpYmxlaGQCBg8WAh8AZxYCZg9kFhBmDw8WAh4EVGV4dAUl16jXmdep15XXnSDXntep16rXntepINeX15PXqSDXnNeQ16rXqGRkAgEPFgIeBWNsYXNzBSNmb3JtLWdyb3VwIGhhcy1mZWVkYmFjayBoYXMtc3VjY2VzcxYIAgEPDxYCHwEFLSog16nXnSDXntep16rXntepICjXkdeZ158gNCDXnCAxMiDXqteV15nXnSkgOmRkAgUPDxYCHwBoZGQCBw8PFgQeCENzc0NsYXNzZR4EXyFTQgICFgIfAgUwZ2x5cGhpY29uIGZvcm0tY29udHJvbC1mZWVkYmFjayBnbHlwaGljb24tcmVtb3ZlZAIJDxYGHwIFE2FsZXJ0IGFsZXJ0LXN1Y2Nlc3MeBXN0eWxlBQ5kaXNwbGF5OmJsb2NrOx4JaW5uZXJodG1sBRjXqdedINee16nXqtee16kg16TXoNeV15lkAgIPFgIfAGgWAmYPFgIfAgUXZm9ybS1ncm91cCBoYXMtZmVlZGJhY2sWBAIDDw8WBB8DZR8EAgJkZAIFDxYCHwUFDWRpc3BsYXk6bm9uZTtkAgMPFgIfAgUXZm9ybS1ncm91cCBoYXMtZmVlZGJhY2sWBgIBDw8WAh8AaGRkAgUPDxYEHwNlHwQCAmRkAgcPFgIfBQUNZGlzcGxheTpub25lO2QCBA8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgMPDxYEHwNlHwQCAmRkAgUPFgIfBQUNZGlzcGxheTpub25lO2QCBQ8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgMPDxYEHwNlHwQCAmRkAgUPFgIfBQUNZGlzcGxheTpub25lO2QCEw8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgUPD2QWAh8FBQ1kaXNwbGF5Om5vbmU7ZAIHDw8WBB8DZR8EAgJkZAIXDxYGHwIFEmFsZXJ0IGFsZXJ0LWRhbmdlch8FBQ5kaXNwbGF5OmJsb2NrOx8GBSjXl9eV15HXlCDXnNeQ16nXqCDXkNeqINeq16DXkNeZINeU15DXqteoZBgBBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WAwUJYWxsb3dtYWlsBQhTZW5kTmV3cwUIY2hrdGVybXNHWETH5Z00Bw%2FRQ%2BPP7XKuKE1Yc7MfMt6j3NmJGEldOg%3D%3D&__VIEWSTATEGENERATOR=98F5786E&__EVENTVALIDATION=%2FwEWPwLc4fuhDgLPv6LBCALyveCRDwKt9JiEDQKyzcaDDQLQzfKVCgLi0PKVCgKwgbuWDQK4qIuIDAKDhrjXCAKT%2B%2B00AqGSlqELAoPh28wDAvSit9QCAsvep4YKAtj71VwC9pD07goC8rfjvgoCoryYqAECv6uC5A4C2dmTnQUCrdnKtgICyJiTrwsCjLGlBQKNsaUFAo6xpQUCj7GlBQKIsaUFAomxpQUCirGlBQKbsaUFApSxpQUCjLHlBgKMsekGAoyx7QYCjLHRBgKMsdUGAoyx2QYCjLHdBgKMscEGAoyxhQUCjLGJBQKNseUGAo2x6QYCjbHtBgKNsdEGAo2x1QYCjbHZBgKNsd0GAo2xwQYCjbGFBQKNsYkFAo6x5QYCjrHpBgLopM%2F9CwLsyZauAQL4oO3lBAK8yuqBCwL1hriZBwLUpvv7CALsqKmSDgLqoY%2BHAwLCi9reA5HII3R9bARNVKmrB9WBnfeJepHFJrdPAtcLnXlE%2BdKP&username2=gfk7ljlyks&password=boolbool1&password2=boolbool1&email=myeail%40gf.com&fname=&lname=&phone=&street=&city=&BYear=&Bmonth=%D7%99%D7%A0%D7%95%D7%90%D7%A8&Bday=1&career=&signature=&homepage=&icq=&Morehobbies=&allowmail=on&SendNews=on&chkterms=on&btnSubmit=%D7%9C%D7%97%D7%A5+%D7%9C%D7%A1%D7%99%D7%95%D7%9D+%D7%94%D7%94%D7%A8%D7%A9%D7%9E%D7%94+%D7%9C%D7%90%D7%AA%D7%A8</p>
</blockquote>
<p>but as I see in the documentation, the only option is to add the data as a dict (which wouldn't work with that data for some reason)</p>
<p>Anyone has an idea how to send this data properly? (should result a 302)
or even better, anyone knows how to turn this data in to a dict?</p>
<p>Thanks a lot :)</p>
| 0 |
2016-09-21T10:59:54Z
| 39,615,067 |
<p>There's no issue sending raw post data:</p>
<pre><code>raw_data = '__VIEWSTATE=%2FwEPDwUJODgwODc4MzI2D2QWBAIEDxYCHgdWaXNpYmxlaGQCBg8WAh8AZxYCZg9kFhBmDw8WAh4EVGV4dAUl16jXmdep15XXnSDXntep16rXntepINeX15PXqSDXnNeQ16rXqGRkAgEPFgIeBWNsYXNzBSNmb3JtLWdyb3VwIGhhcy1mZWVkYmFjayBoYXMtc3VjY2VzcxYIAgEPDxYCHwEFLSog16nXnSDXntep16rXntepICjXkdeZ158gNCDXnCAxMiDXqteV15nXnSkgOmRkAgUPDxYCHwBoZGQCBw8PFgQeCENzc0NsYXNzZR4EXyFTQgICFgIfAgUwZ2x5cGhpY29uIGZvcm0tY29udHJvbC1mZWVkYmFjayBnbHlwaGljb24tcmVtb3ZlZAIJDxYGHwIFE2FsZXJ0IGFsZXJ0LXN1Y2Nlc3MeBXN0eWxlBQ5kaXNwbGF5OmJsb2NrOx4JaW5uZXJodG1sBRjXqdedINee16nXqtee16kg16TXoNeV15lkAgIPFgIfAGgWAmYPFgIfAgUXZm9ybS1ncm91cCBoYXMtZmVlZGJhY2sWBAIDDw8WBB8DZR8EAgJkZAIFDxYCHwUFDWRpc3BsYXk6bm9uZTtkAgMPFgIfAgUXZm9ybS1ncm91cCBoYXMtZmVlZGJhY2sWBgIBDw8WAh8AaGRkAgUPDxYEHwNlHwQCAmRkAgcPFgIfBQUNZGlzcGxheTpub25lO2QCBA8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgMPDxYEHwNlHwQCAmRkAgUPFgIfBQUNZGlzcGxheTpub25lO2QCBQ8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgMPDxYEHwNlHwQCAmRkAgUPFgIfBQUNZGlzcGxheTpub25lO2QCEw8WAh8CBRdmb3JtLWdyb3VwIGhhcy1mZWVkYmFjaxYEAgUPD2QWAh8FBQ1kaXNwbGF5Om5vbmU7ZAIHDw8WBB8DZR8EAgJkZAIXDxYGHwIFEmFsZXJ0IGFsZXJ0LWRhbmdlch8FBQ5kaXNwbGF5OmJsb2NrOx8GBSjXl9eV15HXlCDXnNeQ16nXqCDXkNeqINeq16DXkNeZINeU15DXqteoZBgBBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WAwUJYWxsb3dtYWlsBQhTZW5kTmV3cwUIY2hrdGVybXNHWETH5Z00Bw%2FRQ%2BPP7XKuKE1Yc7MfMt6j3NmJGEldOg%3D%3D&__VIEWSTATEGENERATOR=98F5786E&__EVENTVALIDATION=%2FwEWPwLc4fuhDgLPv6LBCALyveCRDwKt9JiEDQKyzcaDDQLQzfKVCgLi0PKVCgKwgbuWDQK4qIuIDAKDhrjXCAKT%2B%2B00AqGSlqELAoPh28wDAvSit9QCAsvep4YKAtj71VwC9pD07goC8rfjvgoCoryYqAECv6uC5A4C2dmTnQUCrdnKtgICyJiTrwsCjLGlBQKNsaUFAo6xpQUCj7GlBQKIsaUFAomxpQUCirGlBQKbsaUFApSxpQUCjLHlBgKMsekGAoyx7QYCjLHRBgKMsdUGAoyx2QYCjLHdBgKMscEGAoyxhQUCjLGJBQKNseUGAo2x6QYCjbHtBgKNsdEGAo2x1QYCjbHZBgKNsd0GAo2xwQYCjbGFBQKNsYkFAo6x5QYCjrHpBgLopM%2F9CwLsyZauAQL4oO3lBAK8yuqBCwL1hriZBwLUpvv7CALsqKmSDgLqoY%2BHAwLCi9reA5HII3R9bARNVKmrB9WBnfeJepHFJrdPAtcLnXlE%2BdKP&username2=gfk7ljlyks&password=boolbool1&password2=boolbool1&email=myeail%40gf.com&fname=&lname=&phone=&street=&city=&BYear=&Bmonth=%D7%99%D7%A0%D7%95%D7%90%D7%A8&Bday=1&career=&signature=&homepage=&icq=&Morehobbies=&allowmail=on&SendNews=on&chkterms=on&btnSubmit=%D7%9C%D7%97%D7%A5+%D7%9C%D7%A1%D7%99%D7%95%D7%9D+%D7%94%D7%94%D7%A8%D7%A9%D7%9E%D7%94+%D7%9C%D7%90%D7%AA%D7%A8'
requests.post(url, data=raw_data)
</code></pre>
<p>A minor bonus is that your data is already percent encoded.</p>
<p>From the doc string:</p>
<pre><code>post(url, data=None, json=None, **kwargs)
Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
</code></pre>
<p>So <code>data</code> can be a dictionary, string/bytes, or a file-like object.</p>
<p>Perhaps you need to specify the <code>Content-Type</code> header like this:</p>
<pre><code>requests.post(url, data=raw_data, headers={'Content-Type': 'application/x-www-form-urlencoded'})
</code></pre>
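<p>If you do want the payload as a dictionary (the question also asks how to turn the data into one), the URL-encoded string can be decoded first — a sketch, assuming Python 2 (on Python 3 the function lives in <code>urllib.parse</code>):</p>
<pre><code>from urlparse import parse_qsl

form_dict = dict(parse_qsl(raw_data, keep_blank_values=True))
requests.post(url, data=form_dict)   # requests re-encodes the dict as form data
</code></pre>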
| 2 |
2016-09-21T11:17:37Z
|
[
"python",
"http",
"python-requests"
] |
Parsing and tabulating results
| 39,614,689 |
<p>What is a simple and flexible way to parse and tabulate output like the one below on a Unix system? </p>
<p>The output has multiple entries in the following format: </p>
<pre><code>=====================================================
====== SOLVING WITH MATRIX small_SPD ====
===================================================
sizes: 5,5,8
Solving with Sparse LU AND COLAMD ...
COMPUTE TIME : 8.9287e-05
SOLVE TIME : 1.0663e-05
TOTAL TIME : 9.995e-05
REL. ERROR : 2.30263e-18
Solving with BiCGSTAB ...
COMPUTE TIME : 4.113e-06
SOLVE TIME : 1.853e-05
TOTAL TIME : 2.2643e-05
REL. ERROR : 1.34364e-10
ITERATIONS : 2
</code></pre>
<p>This should be tabulated as (or similar):</p>
<pre><code>Matrix Sizes Solver Compute Solve Total Rel Error
small_SPD 5,5,8 Sparse LU AND COLAMD 8.9287e-05 1.0663e-05 9.995e-05 2.30263e-18
small_SPD 5,5,8 BiCGSTAB 4.113e-06 1.853e-05 2.2643e-05 1.34364e-10
</code></pre>
| -3 |
2016-09-21T11:00:31Z
| 39,615,580 |
<p>If you're just parsing an output, I'd tackle it like this:</p>
<pre><code>#!/usr/bin/env perl
use strict;
use warnings;
#set paragraph mode - look for empty lines between records.
local $/ = '';
#init the matrix/size vars.
my $matrix;
my $sizes;
#output order
my @columns = ( "COMPUTE TIME", "SOLVE TIME", "TOTAL TIME", "REL. ERROR" );
#Column headings.
print join "\t", "matrix", "sizes", "solver", @columns,"\n";
#iterate the data.
#note - <> is a magic file handle that reads STDIN or 'files specified on command line'
#that's just like how sed/grep/awk do it.
while (<>) {
#find and set the matrix name
#note conditional - this only appears in the 'first' record.
if (m/MATRIX (\w+)/) {
$matrix = $1;
}
#find and set the sizes.
if (m/sizes: ([\d\,]+)/) {
$sizes = $1;
}
#multi-line pattern match to grab the full keys (e.g. "COMPUTE TIME") and values.
#this then maps neatly into a hash keyed by the same names used in @columns.
my %result_set = m/^\s*([A-Z][A-Z. ]*[A-Z])\s*: ([-\d.e]+)/gm;
#add the solver to the 'set':
#and use this test to check if this 'record' is of interest.
#skipping the "ITERATIONS" line.
my ( $solver ) = m/Solving with (.*) .../ or next;
#print it tab separated.
print join "\t", $matrix, $sizes, $solver, @result_set{@columns}, "\n";
}
</code></pre>
<p>Output:</p>
<pre><code>matrix sizes solver Compute Solve Total Rel Error
small_SPD 5,5,8 Sparse LU AND COLAMD 8.9287e-05 1.0663e-05 9.995e-05 2.30263e-18
small_SPD 5,5,8 BiCGSTAB 4.113e-06 1.853e-05 2.2643e-05 1.34364e-10
</code></pre>
<p>Tab separated, which is probably useful for some applications - but you might want to <a href="http://perldoc.perl.org/functions/printf.html" rel="nofollow">printf</a> or <a href="http://perldoc.perl.org/functions/format.html" rel="nofollow">format</a> instead. </p>
| 1 |
2016-09-21T11:40:34Z
|
[
"python",
"perl",
"parsing"
] |
Why isn't my monkey patching working?
| 39,614,766 |
<p>I am trying to add a method to a class that I import.</p>
<p>This is my code : </p>
<pre><code>from pyrser.parsing import node
def to_dxml(self):
return "test"
node.Node().to_dxml = to_dxml
tree = node.Node()
tree.ls = [1, 2.0, "titi", True, [2, 3, 4, [3, [3, 4]], 5]]
tree.dct = {"g":1, "y":2, "koko":{'D', 'T', 'C'}}
tree.aset = {'Z', 'X', 'T', 'U'}
tree.ablob = b'\xFF\xaa\x06Th -}'
print(tree.to_dxml())
</code></pre>
<p>But when I run it I get</p>
<pre><code>AttributeError: 'Node' object has no attribute 'to_dxml'
</code></pre>
<p>Any idea why it isn't working ? </p>
| -1 |
2016-09-21T11:03:28Z
| 39,614,819 |
<p>You need to add the attribute to the class, not to an instance.</p>
<pre><code>node.Node().to_dxml = to_dxml
</code></pre>
<p>Should be</p>
<pre><code>node.Node.to_dxml = to_dxml
</code></pre>
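<p>A quick check that the patch then works as intended (a sketch reusing the question's <code>to_dxml</code>):</p>
<pre><code>node.Node.to_dxml = to_dxml

tree = node.Node()
print(tree.to_dxml())   # -> "test", resolved through the class
</code></pre>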
| 6 |
2016-09-21T11:05:50Z
|
[
"python"
] |
Why is my condition in python not being met
| 39,614,869 |
<p>I have a list of filenames sorted by creation date. These files contain a datetime in the filename for their creation date time. I am attempting to create a sub list for all files after a certain time. </p>
<p>Full list of files -</p>
<pre><code>Allfilenames = ['CCN-200 data 130321055347.csv',
'CCN-200 data 130321060000.csv',
'CCN-200 data 130321063235.csv',
'CCN-200 data 130321070000.csv',
'CCN-200 data 130321080000.csv',
'CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p><code>positions [19:24]</code> give the time in format hhmmss. I am using</p>
<pre><code>filenames = [s for s in Allfilenames if os.path.basename(s)[19:24] >= TOffRound]
TOffRound = "080000"
</code></pre>
<p>The result should be a list of all filenames created on or after or 08:00:00, however the resulting list is missing the "080000" file.</p>
<pre><code>filenames = ['CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p>Why is the conditional not returning true on the = part of the condition and returning 'CCN-200 data 130321080000.csv' in my list? Please note I have only shown the basename here for clarity. </p>
| 0 |
2016-09-21T11:08:44Z
| 39,615,114 |
<p>In your filenames, <code>hhmmss</code> occupies indices 19 through 24, and the stop index of a slice is exclusive, so the slice must be <code>[19:25]</code> rather than <code>[19:24]</code>. The correct statement to get <code>hhmmss</code> from the filename is:</p>
<pre><code>filenames = [s for s in Allfilenames if os.path.basename(s)[19:25] >= TOffRound]
</code></pre>
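<p>A quick check with one of the names from the question shows the difference:</p>
<pre><code>s = 'CCN-200 data 130321080000.csv'
print(s[19:25])   # '080000' - full hhmmss
print(s[19:24])   # '08000'  - one digit short, which compares as smaller than "080000"
</code></pre>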
| 0 |
2016-09-21T11:19:39Z
|
[
"python"
] |
Why is my condition in python not being met
| 39,614,869 |
<p>I have a list of filenames sorted by creation date. These files contain a datetime in the filename for their creation date time. I am attempting to create a sub list for all files after a certain time. </p>
<p>Full list of files -</p>
<pre><code>Allfilenames = ['CCN-200 data 130321055347.csv',
'CCN-200 data 130321060000.csv',
'CCN-200 data 130321063235.csv',
'CCN-200 data 130321070000.csv',
'CCN-200 data 130321080000.csv',
'CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p><code>positions [19:24]</code> give the time in format hhmmss. I am using</p>
<pre><code>filenames = [s for s in Allfilenames if os.path.basename(s)[19:24] >= TOffRound]
TOffRound = "080000"
</code></pre>
<p>The result should be a list of all filenames created on or after or 08:00:00, however the resulting list is missing the "080000" file.</p>
<pre><code>filenames = ['CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p>Why is the conditional not returning true on the = part of the condition and returning 'CCN-200 data 130321080000.csv' in my list? Please note I have only shown the basename here for clarity. </p>
| 0 |
2016-09-21T11:08:44Z
| 39,615,387 |
<p>Instead of checking the time part as a string, I would suggest a stronger method to test the time part of your filename. This includes extracting the date part of the filename, retrieving the time value and comparing it on your specified time as a time object.</p>
<pre><code>import re
import datetime
TOffRound = datetime.time(8, 0)
filenames = []
for s in Allfilenames:
datestr = re.search("[\d]{12}", s).group(0)
dateobj = datetime.datetime.strptime(datestr,"%y%m%d%H%M%S")
timeobj = dateobj.time()
if timeobj >= TOffRound:
filenames.append(s)
</code></pre>
| 1 |
2016-09-21T11:31:33Z
|
[
"python"
] |
Why is my condition in python not being met
| 39,614,869 |
<p>I have a list of filenames sorted by creation date. These files contain a datetime in the filename for their creation date time. I am attempting to create a sub list for all files after a certain time. </p>
<p>Full list of files -</p>
<pre><code>Allfilenames = ['CCN-200 data 130321055347.csv',
'CCN-200 data 130321060000.csv',
'CCN-200 data 130321063235.csv',
'CCN-200 data 130321070000.csv',
'CCN-200 data 130321080000.csv',
'CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p><code>positions [19:24]</code> give the time in format hhmmss. I am using</p>
<pre><code>filenames = [s for s in Allfilenames if os.path.basename(s)[19:24] >= TOffRound]
TOffRound = "080000"
</code></pre>
<p>The result should be a list of all filenames created on or after or 08:00:00, however the resulting list is missing the "080000" file.</p>
<pre><code>filenames = ['CCN-200 data 130321090000.csv',
'CCN-200 data 130321100000.csv',
'CCN-200 data 130321110000.csv',
'CCN-200 data 130321120000.csv',
'CCN-200 data 130321130000.csv',
'CCN-200 data 130321140000.csv',
'CCN-200 data 130321150000.csv']
</code></pre>
<p>Why is the conditional not returning true on the = part of the condition and returning 'CCN-200 data 130321080000.csv' in my list? Please note I have only shown the basename here for clarity. </p>
| 0 |
2016-09-21T11:08:44Z
| 39,617,662 |
<p>The problem with the code given, as suggested by others, is that you are missing the last digit. When slicing, the "stop" index given after the <code>:</code> is excluded from the result.</p>
<pre><code>(eg):
>> a = "hello world"
>> print a[0:4]
hell
>> print a[0:5]
hello
</code></pre>
<p>So, change this line in your code and you are good to go:</p>
<pre><code>filenames = [s for s in Allfilenames if os.path.basename(s)[19:25] >= TOffRound]
</code></pre>
<p>However, what you are doing does not scale at all. It is not easy to maintain, nor will it work with any file that is named even slightly differently. The code can be transformed like this:</p>
<pre><code>import os

def filter_files(file_list, TOffRound):
    text_length = len(TOffRound)
    # compare against the time digits just before the file extension
    return [file_name for file_name in file_list
            if os.path.splitext(file_name)[0][-text_length:] >= TOffRound]
</code></pre>
<p>This will work, irrespective of the size of the file name.</p>
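<p>Usage with the list from the question would then be (a sketch):</p>
<pre><code>filenames = filter_files(Allfilenames, "080000")
</code></pre>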
<p>I would also suggest selecting the files based on their modification time, which can be obtained using <code>os.stat</code> or <code>os.path.getmtime</code>, rather than relying on the file name. The file name is a string: you convert a timestamp into a string when naming the file, and then convert that string back into a timestamp to compare it. If you use the file modification time instead, you can work with date and time values directly and skip those conversions. This has a few advantages:</p>
<ul>
<li>File name or any explicit parameter can change over time but you need not change the logic again and again</li>
<li>File-based time stamps exist for exactly this kind of purpose, so they give you more control. For instance, selecting files created or modified within a specific time period is easy to do with file time stamps.</li>
<li>This splits the time logic from the file names and thus you can name them more meaningfully regarding their purposes thereby simplifying the maintenance of the code over a period of time.</li>
</ul>
| 0 |
2016-09-21T13:16:12Z
|
[
"python"
] |
How to create a prescription pill count like pain management facilities use?
| 39,614,930 |
<p>I don't understand why this code won't work. I want to create some code to help me know exactly how many pills need to be taken back to pain management. If you don't take the right amount back, then you get kicked out of pain management. So I'm just wanting to create a script that will help me so I don't take too few back.</p>
<p>As anyone can tell, I don't have any experience with Python. I just installed it and tried using the documentation to aid in completing what I thought would be a trivial script.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\Scripts\pill_count.py", line 17, in <module>
date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 510, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 346, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: -1
</code></pre>
<pre><code>How many pills did you have left? 12
How many pills did you get? 90
How many pills do you take? 6
Starting Month, Type 1 for January, 2 for February, etc.9
Starting Day; Type 1-311
Ending Month, Type 1 for January, 2 for February, etc.10
Starting Day; Type 1-3131
Taking 6 a day, you should have 102 left.
# dates are easily constructed and formatted
#from datetime import datetime, timedelta
from datetime import datetime
year = 2016
left_over_pill_count = input('How many pills did you have left? ')
new_prescription = input('How many pills did you get? ')
total_pills = int(left_over_pill_count) + int(new_prescription)
daily_pill_intake = input('How many pills do you take? ')
starting_Month = input('Starting Month, Type 1 for January, 2 for February, etc.')
starting_Day = input('Starting Day; Type 1-31')
ending_Month = input('Ending Month, Type 1 for January, 2 for February, etc.')
ending_Day = input('Starting Day; Type 1-31')
# count number of days until next doctors appointment
date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
date2 = datetime.date(datetime.strptime((str(year) + "-" + str(ending_Month) + "-" + str(ending_Day)), '%Y-%m-%d'))
#date_count = (date2 - date1)
#total_days = date_count
# fmt = '%Y-%m-%d %H:%M:%S'
#fmt = '%d'
#d1 = datetime.strptime(date1, fmt)
#d2 = datetime.strptime(date2, fmt)
# print (d2-d1).days * 24 * 60
for i in range(1, (date1-date2).days):
total_pills = total_pills - int(daily_pill_intake)
print(total_pills)
print("Taking " + str(daily_pill_intake) + " a day, you should have " + str(total_pills) + " left.")
</code></pre>
| 0 |
2016-09-21T11:11:06Z
| 39,625,271 |
<p>In this line:</p>
<pre><code>date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
</code></pre>
<p>You're telling <code>datetime.strptime</code> to parse a string of the form "year-month-day", but the string you give it is of the form "year-month-month-day"; you included the month twice! The same problem applies to the next line as well.</p>
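<p>A minimal sketch of the corrected lines (based on the variables defined in the question):</p>
<pre><code>date1 = datetime.date(datetime.strptime(str(year) + "-" + str(starting_Month) + "-" + str(starting_Day), '%Y-%m-%d'))
date2 = datetime.date(datetime.strptime(str(year) + "-" + str(ending_Month) + "-" + str(ending_Day), '%Y-%m-%d'))
</code></pre>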
| 0 |
2016-09-21T19:44:23Z
|
[
"python"
] |
Programming a function that saves and returns values in python
| 39,614,966 |
<p>I am currently experimenting with Python and programming a little text-adventure. In my game the player has certain properties like hp, attack damage and inventory slots for items.
I want to be able to call these properties from everywhere in my code. For that I created a function that receives three values: </p>
<p>"edit": to specify if a variable should be edited</p>
<p>"info_id": to specify which variable should be accessed</p>
<p>"value": the new value for the variable</p>
<p>This is what it looks like in my code:</p>
<pre><code>def player_info(edit, info_id, value):
if edit == 1:
##function wants to edit value
if info_id == 1:
player_hp = value
print ("Assigned hp to: ", player_hp)
##the "prints" are just to check if the asignments work -> they do
return player_hp
elif info_id == 2:
player_attack = value
print ("Assigned attack to: ", player_attack)
return player_attack
elif info_id == 3:
item_1 = value
return item_1
elif info_id == 4:
item_2 = value
return item_2
elif info_id == 5:
item_3 = value
elif edit == 0:
##function wants to retrieve value
if info_id == 1:
return player_hp
elif info_id == 2:
return player_attack
elif info_id == 3:
return item_1
elif info_id == 4:
return item_2
elif info_id == 5:
return item_3
</code></pre>
<p>There are actually 10 item slots (going up to info_id==13) but they are all the same anyway.</p>
<p>I define all variables at the beginning of my code:</p>
<pre><code> player_info(1,1,20)
player_info(1,2,5)
n=3
while n<=13:
player_info(1,n,0)
n=n+1
##items are not fully implemented yet so I define the item slots as 0
</code></pre>
<p>The definition works, I can tell because of the control "print" I implemented in the code. Still when I call a variable, e.g. the health like this:</p>
<pre><code>player_info(0,1,0)
</code></pre>
<p>I get an error:</p>
<pre><code>local variable 'player_hp' referenced before assignment
</code></pre>
<p>Does the function not save the variable properly? Or what is the problem?</p>
<p><strong>Is there a better way to save variables? Are global variables the way to go in this case?</strong></p>
<p>Thanks for the help!</p>
| 1 |
2016-09-21T11:12:50Z
| 39,615,607 |
<p>First of all, your error is caused by retrieving a variable that was never assigned - that just doesn't work. When you edit <code>player_hp</code>, it is not stored anywhere; you return it to the function that called it without assigning it to anything, so it just gets lost.</p>
<p>Second of all, you should really indent with 4 spaces (or tabs) - it's much more readable than 2 spaces. Not only for you, but for anyone trying to help too.</p>
<p>And lastly, the proper way to go about this would be to learn about classes. Global variables should never be used in python, only in special cases, or when you are learning, but just skip ahead to the class.</p>
<p>You should create something like</p>
<pre><code>class Player:
def __init__(self):
self.hp = 20 # or another starting hp
self.attack = 3 # or another starting attack
self.inventory = []
</code></pre>
<p>Then you can just create an instance of Player class and pass it to the functions where it's relevant</p>
<pre><code>player1 = Player()
print(player1.hp) # Prints out player's hp
player1.hp -= 5 # Remove 5 hp from the player. Tip: Use method to do this so that it can check if it reaches 0 or max etc.
player1.inventory.append("axe")
print(player1.inventory[0]) # Prints out axe, learn about lists, or use dictionary, or another class if you want this not to be indexed like a list
</code></pre>
| 1 |
2016-09-21T11:41:47Z
|
[
"python",
"function",
"variables",
"save",
"call"
] |
Programming a function that saves and returns values in python
| 39,614,966 |
<p>I am currently experimenting with Python and programming a little text-adventure. In my game the player has certain properties like hp, attack damage and inventory slots for items.
I want to be able to call these properties from everywhere in my code. For that I created a function that receives three values: </p>
<p>"edit": to specify if a variable should be edited</p>
<p>"info_id": to specify which variable should be accessed</p>
<p>"value": the new value for the variable</p>
<p>This is what it looks like in my code:</p>
<pre><code>def player_info(edit, info_id, value):
if edit == 1:
##function wants to edit value
if info_id == 1:
player_hp = value
print ("Assigned hp to: ", player_hp)
##the "prints" are just to check if the asignments work -> they do
return player_hp
elif info_id == 2:
player_attack = value
print ("Assigned attack to: ", player_attack)
return player_attack
elif info_id == 3:
item_1 = value
return item_1
elif info_id == 4:
item_2 = value
return item_2
elif info_id == 5:
item_3 = value
elif edit == 0:
##function wants to retrieve value
if info_id == 1:
return player_hp
elif info_id == 2:
return player_attack
elif info_id == 3:
return item_1
elif info_id == 4:
return item_2
elif info_id == 5:
return item_3
</code></pre>
<p>There are actually 10 item slots (going up to info_id==13) but they are all the same anyway.</p>
<p>I define all variables at the beginning of my code:</p>
<pre><code> player_info(1,1,20)
player_info(1,2,5)
n=3
while n<=13:
player_info(1,n,0)
n=n+1
##items are not fully implemented yet so I define the item slots as 0
</code></pre>
<p>The definition works, I can tell because of the control "print" I implemented in the code. Still when I call a variable, e.g. the health like this:</p>
<pre><code>player_info(0,1,0)
</code></pre>
<p>I get an error:</p>
<pre><code>local variable 'player_hp' referenced before assignment
</code></pre>
<p>Does the function not save the variable properly? Or what is the problem?</p>
<p><strong>Is there a better way to save variables? Are global variables the way to go in this case?</strong></p>
<p>Thanks for the help!</p>
| 1 |
2016-09-21T11:12:50Z
| 39,617,454 |
<p>You asked, "<strong>Does the function not save the variable properly?</strong>"</p>
<p>In general, <strong>Python functions do not save their state</strong>. The exception is functions that use the <code>yield</code> statement. If you write a function like this</p>
<pre><code>def save_data(data):
storage = data
</code></pre>
<p>and call it like this</p>
<pre><code>save_data(10)
</code></pre>
<p>you will not be able to get the value of <code>storage</code> later. In Python, if you need to save data and retrieve it later, you would normally use <a href="https://docs.python.org/2.7/tutorial/classes.html" rel="nofollow"><code>classes</code></a>.</p>
<p>Python <code>classes</code> allow you do do things like this:</p>
<pre><code>class PlayerData(object):
def __init__(self, hp=0, damage=0):
self.hp = hp
self.damage = damage
self.inventory = list()
self.max_inventory = 10
def add_item(self, item):
if len(self.inventory) < self.max_inventory:
self.inventory.append(item)
def hit(self, damage):
self.hp -= damage
if self.hp < 0:
self.hp = 0
def attack(self, other):
other.hit(self.damage)
if __name__ == '__main__':
player1 = PlayerData(20, 5)
player2 = PlayerData(20, 5)
player1.attack(player2)
print player2.hp
player1.add_item('sword')
player1.add_item('shield')
print player1.inventory
</code></pre>
<p><strong>Output</strong></p>
<pre><code>15
['sword', 'shield']
</code></pre>
<p>This really only scratches the surface of how to use <code>classes</code>. In a more complete implementation, you might have an <code>Item</code> base class. Then you might create <code>Sword</code> and <code>Shield</code> classes that inherit from <code>Item</code>.</p>
| 0 |
2016-09-21T13:07:20Z
|
[
"python",
"function",
"variables",
"save",
"call"
] |
Convert date string to UTC datetime
| 39,615,093 |
<p>I returned dates between two given dates:</p>
<pre><code>for date in rrule(DAILY, dtstart = date1, until = date2):
print date_item.strftime("%Y-%m-%d")
</code></pre>
<p>How to convert the <code>date_item</code> e.g. <code>2016-01-01</code> to ISO 8601 format like:<code>2016-01-01T18:10:18.000Z</code> in python?</p>
| 1 |
2016-09-21T11:18:45Z
| 39,615,433 |
<p>I created a sample datetime object like shown below:</p>
<pre><code>from datetime import datetime
now = datetime.now()
print now
2016-09-21 16:59:18.175038
</code></pre>
<p>Here is the output, formatted to your requirement:</p>
<pre><code>print datetime.strftime(now, "%Y-%m-%dT%H:%M:%S.000Z")
'2016-09-21T16:59:18.000Z'
</code></pre>
<p>Here is a good site for looking up the various format codes; I personally find it easier to use than Python's official docs:
<a href="http://strftime.org" rel="nofollow">http://strftime.org</a></p>
| 0 |
2016-09-21T11:33:37Z
|
[
"python"
] |
Convert date string to UTC datetime
| 39,615,093 |
<p>I returned dates between two given dates:</p>
<pre><code>for date in rrule(DAILY, dtstart = date1, until = date2):
print date_item.strftime("%Y-%m-%d")
</code></pre>
<p>How to convert the <code>date_item</code> e.g. <code>2016-01-01</code> to ISO 8601 format like:<code>2016-01-01T18:10:18.000Z</code> in python?</p>
| 1 |
2016-09-21T11:18:45Z
| 39,615,551 |
<p>You can use <code>strftime</code> to convert <code>date_item</code> into any format you need; see the example below.</p>
<pre><code>from time import gmtime, strftime

current_time = strftime("%Y-%m-%dT%H:%M:%SZ", gmtime())
print(current_time)
</code></pre>
<p>Output: </p>
<pre><code>'2016-09-21T11:30:04Z'
</code></pre>
<p>So you can use it like this: </p>
<pre><code>from time import gmtime,strftime
import datetime
date_item = datetime.datetime.now()
print date_item.strftime("%Y-%m-%dT%H:%M:%SZ")
</code></pre>
| 1 |
2016-09-21T11:39:22Z
|
[
"python"
] |
fill NaN with another lookup table
| 39,615,199 |
<p>Is there a way to fill the <code>NaN</code> with value for <code>test=default</code> by matching name, reticle and cell rev?</p>
<p><a href="http://i.stack.imgur.com/8gZ38.png" rel="nofollow"><img src="http://i.stack.imgur.com/8gZ38.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/A71KG.png" rel="nofollow"><img src="http://i.stack.imgur.com/A71KG.png" alt="enter image description here"></a></p>
<p>with the few variables in "test" column:
<a href="http://i.stack.imgur.com/MYDOO.png" rel="nofollow"><img src="http://i.stack.imgur.com/MYDOO.png" alt="enter image description here"></a></p>
<p>Is there a way to update the values from the other rows? The datatype "do" should take precedence over int, and the "do" data row should then be dropped.</p>
<p>data:</p>
<pre><code>test datatype name value reticle cell_rev
default int s 0x45 CR1
default int s 0xCB CR3
default do s 0.68 CR1
</code></pre>
<p>I'd like to get: </p>
<pre><code>test datatype name value reticle cell_rev
default int s 0.68 CR1
default int s 0xCB CR3
</code></pre>
| 1 |
2016-09-21T11:23:11Z
| 39,615,395 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> for reshaping, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ffill.html" rel="nofollow"><code>ffill</code></a> to fill the missing values, and finally reshape back to the original form with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a>:</p>
<pre><code>df = (df.set_index(['name','value_old','reticle','test','cell_rev'])
        .unstack()
        .ffill()
        .stack()
        .reset_index())
print (df)
name value_old reticle test cell_rev value_new
0 s 0x8E A28 default CR1 0x8C
1 s 0x8E A28 default CR3 0x8E
2 s 0x8E A28 etlc CR1 0x8C
3 s 0x8E A28 etlc CR3 0x8E
</code></pre>
<p>EDIT by comment:</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> by subset <code>df1</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> and then fill <code>NaN</code> values by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.combine_first.html" rel="nofollow"><code>combine_first</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a>:</p>
<pre><code>df1 = df.ix[df.test == 'default']
print (df1)
test name value_old reticle cell_rev value_new
0 default s 0x8E A28 CR1 0x8E
1 default s 0x8E A28 CR3 0x8C
df2 = pd.merge(df, df1, how='left', on=['name','reticle','cell_rev'], suffixes=('','1'))
print (df2)
test name value_old reticle cell_rev value_new test1 value_old1 \
0 default s 0x8E A28 CR1 0x8E default 0x8E
1 default s 0x8E A28 CR3 0x8C default 0x8E
2 etlc s 0x8E A28 CR1 0x44 default 0x8E
3 etlc s 0x8E A28 CR3 0x44 default 0x8E
4 mlc s 0x1E A28 CR1 NaN default 0x8E
5 mlc s 0x1E A28 CR3 NaN default 0x8E
6 slc s 0x2E A28 CR1 NaN default 0x8E
7 slc s 0x2E A28 CR3 NaN default 0x8E
value_new1
0 0x8E
1 0x8C
2 0x8E
3 0x8C
4 0x8E
5 0x8C
6 0x8E
7 0x8C
</code></pre>
<pre><code>df['value_new'] = df2['value_new'].combine_first(df2['value_new1'])
#df['value_new'] = df2['value_new'].fillna(df2['value_new1'])
print (df)
test name value_old reticle cell_rev value_new
0 default s 0x8E A28 CR1 0x8E
1 default s 0x8E A28 CR3 0x8C
2 etlc s 0x8E A28 CR1 0x44
3 etlc s 0x8E A28 CR3 0x44
4 mlc s 0x1E A28 CR1 0x8E
5 mlc s 0x1E A28 CR3 0x8C
6 slc s 0x2E A28 CR1 0x8E
7 slc s 0x2E A28 CR3 0x8C
</code></pre>
| 2 |
2016-09-21T11:31:52Z
|
[
"python",
"pandas",
"dataframe",
"multiple-columns",
null
] |
fill NaN with another lookup table
| 39,615,199 |
<p>Is there a way to fill the <code>NaN</code> with value for <code>test=default</code> by matching name, reticle and cell rev?</p>
<p><a href="http://i.stack.imgur.com/8gZ38.png" rel="nofollow"><img src="http://i.stack.imgur.com/8gZ38.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/A71KG.png" rel="nofollow"><img src="http://i.stack.imgur.com/A71KG.png" alt="enter image description here"></a></p>
<p>with the few variables in "test" column:
<a href="http://i.stack.imgur.com/MYDOO.png" rel="nofollow"><img src="http://i.stack.imgur.com/MYDOO.png" alt="enter image description here"></a></p>
<p>Is there a way to update the values from the other rows? The datatype "do" should take precedence over int, and the "do" data row should then be dropped.</p>
<p>data:</p>
<pre><code>test datatype name value reticle cell_rev
default int s 0x45 CR1
default int s 0xCB CR3
default do s 0.68 CR1
</code></pre>
<p>I'd like to get: </p>
<pre><code>test datatype name value reticle cell_rev
default int s 0.68 CR1
default int s 0xCB CR3
</code></pre>
| 1 |
2016-09-21T11:23:11Z
| 39,620,971 |
<pre><code>for i in range(len(df)):
    # NaN is the only value for which x != x is True, so this detects the missing values
    if df.loc[i, 'value_new'] != df.loc[i, 'value_new']:
df.loc[i, 'value_new'] = df.loc[(df.test == 'default') &
(df.name == df.loc[i, 'name']) &
(df.reticle == df.loc[i, 'reticle']) &
(df.cell_rev == df.loc[i, 'cell_rev']),
'value_new']
</code></pre>
<p>I think there's a more efficient solution, but this should work.</p>
| 0 |
2016-09-21T15:37:47Z
|
[
"python",
"pandas",
"dataframe",
"multiple-columns",
null
] |
Python returns wrong truth table for logical implication
| 39,615,368 |
<p><a href="http://i.stack.imgur.com/CChne.png" rel="nofollow"><img src="http://i.stack.imgur.com/CChne.png" alt="enter image description here"></a></p>
<p>I have implemented the above implication in Python but it does not return the expected results:</p>
<pre><code> True True None
True False None
False True True
False False None
</code></pre>
<p>My python code is:</p>
<pre><code>def implies(a,b):
if a:
return b
else:True
return
for p in (True, False):
for q in (True, False):
print("%10s %10s %s" %(p,q,implies((p or q) and (not p), q)))
</code></pre>
<p>I don't understand the contradiction here. None implies False doesn't it? And why not print True like it should?</p>
| -3 |
2016-09-21T11:30:41Z
| 39,615,430 |
<pre><code>def implies(a,b):
if a:
return b
else:True
return
</code></pre>
<p>Your error is in the last two lines: when <code>a</code> is false, you aren't returning a specific value, so the result is <code>None</code>.
You want:</p>
<pre><code>def implies(a,b):
if a:
return b
else:
return True
</code></pre>
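<p>As a side note, the implication can also be written as a single boolean expression, which avoids the indentation/return pitfall entirely:</p>
<pre><code>def implies(a, b):
    # "a implies b" is logically equivalent to "(not a) or b"
    return (not a) or b
</code></pre>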
| 2 |
2016-09-21T11:33:23Z
|
[
"python",
"boolean-logic",
"discrete-mathematics"
] |
doc2vec - Input Format for doc2vec training and infer_vector() in python
| 39,615,420 |
<p>In gensim, when I give a string as input for training doc2vec model, I get this error : </p>
<blockquote>
<p>TypeError('don\'t know how to handle uri %s' % repr(uri))</p>
</blockquote>
<p>I referred to this question <a href="https://stackoverflow.com/questions/36780138/doc2vec-taggedlinedocument">Doc2vec : TaggedLineDocument()</a>
but still have a doubt about the input format. </p>
<p><code>documents = TaggedLineDocument('myfile.txt')</code></p>
<p>Should the myFile.txt have tokens as list of lists or separate list in each line for each document or a string? </p>
<p>For eg - I have 2 documents.</p>
<p>Doc 1 : Machine learning is a subfield of computer science that evolved from the study of pattern recognition.</p>
<p>Doc 2 : Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn".</p>
<p>So, what should the myFile.txt look like?</p>
<p>Case 1 : simple text of each document in each line</p>
<p>Machine learning is a subfield of computer science that evolved from the study of pattern recognition</p>
<p>Arthur Samuel defined machine learning as a Field of study that gives computers the ability to learn</p>
<p>Case 2 : a list of lists having tokens of each document</p>
<p>[ ["Machine", "learning", "is", "a", "subfield", "of", "computer", "science", "that", "evolved", "from", "the", "study", "of", "pattern", "recognition"],</p>
<p>["Arthur", "Samuel", "defined", "machine", "learning", "as", "a", "Field", "of", "study", "that", "gives", "computers" ,"the", "ability", "to", "learn"] ]</p>
<p>Case 3 : list of tokens of each document in a separate line</p>
<p>["Machine", "learning", "is", "a", "subfield", "of", "computer", "science", "that", "evolved", "from", "the", "study", "of", "pattern", "recognition"]</p>
<p>["Arthur", "Samuel", "defined", "machine", "learning", "as", "a", "Field", "of", "study", "that", "gives", "computers" ,"the", "ability", "to", "learn"]</p>
<p>And when I am running it on the test data, what should be the format of the sentence which i want to predict the doc vector for? Should it be like case 1 or case 2 below or something else?</p>
<p><code>model.infer_vector(testSentence, alpha=start_alpha, steps=infer_epoch)</code></p>
<p>Should the testSentence be :</p>
<p>Case 1 : string</p>
<p>testSentence = "Machine learning is an evolving field"</p>
<p>Case 2 : list of tokens</p>
<p>testSentence = ["Machine", "learning", "is", "an", "evolving", "field"]</p>
| 1 |
2016-09-21T11:33:05Z
| 39,715,845 |
<p><code>TaggedLineDocument</code> is a convenience class that expects its source file (or file-like object) to be space-delimited tokens, one per line. (That is, what you refer to as 'Case 1' in your 1st question.)</p>
<p>But you can write your own iterable object to feed to gensim <code>Doc2Vec</code> as the <code>documents</code> corpus, as long as this corpus (1) iterably-returns <code>next()</code> objects that, like TaggedDocument, have <code>words</code> and <code>tags</code> lists; and (2) can be iterated over multiple times, for the multiple passes <code>Doc2Vec</code> requires for both the initial vocabulary-survey and then <code>iter</code> training passes. </p>
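<p>A rough sketch of such an iterable (the class name and the <code>docs</code> variable below are made up for illustration, and it assumes you already have tokenized documents in memory):</p>
<pre><code>from gensim.models.doc2vec import TaggedDocument

class MyCorpus(object):
    def __init__(self, tokenized_docs):
        self.tokenized_docs = tokenized_docs          # list of lists of tokens
    def __iter__(self):
        # a fresh generator on every pass, so Doc2Vec can iterate multiple times
        for i, tokens in enumerate(self.tokenized_docs):
            yield TaggedDocument(words=tokens, tags=[i])

docs = [["machine", "learning", "is", "a", "subfield", "of", "computer", "science"],
        ["arthur", "samuel", "defined", "machine", "learning"]]
documents = MyCorpus(docs)
</code></pre>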
<p>The <code>infer_vector()</code> method takes lists-of-tokens, similar to the <code>words</code> attribute of individual <code>TaggedDocument</code>-like objects. (That is, what you refer to as 'Case 2' in your 2nd question.)</p>
| 0 |
2016-09-27T04:13:50Z
|
[
"python",
"gensim",
"word2vec",
"doc2vec"
] |
Fails to fix the seed value in LDA model in gensim
| 39,615,436 |
<p>When using LDA model, I get different topics each time and I want to replicate the same set. I have searched for the similar question in Google such as <a href="https://groups.google.com/forum/#!topic/gensim/s1EiOUsqT8s" rel="nofollow">this</a>.</p>
<p>I fix the seed as shown in the article by <code>num.random.seed(1000)</code> but it doesn't work. I read the <code>ldamodel.py</code> and find the code below:</p>
<pre><code>def get_random_state(seed):
"""
Turn seed into a np.random.RandomState instance.
Method originally from maciejkula/glove-python, and written by @joshloyal
"""
if seed is None or seed is numpy.random:
return numpy.random.mtrand._rand
if isinstance(seed, (numbers.Integral, numpy.integer)):
return numpy.random.RandomState(seed)
if isinstance(seed, numpy.random.RandomState):
return seed
raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
' instance' % seed)
</code></pre>
<p>So I use the code:</p>
<pre><code>lda = models.LdaModel(
corpus_tfidf,
id2word=dic,
num_topics=2,
random_state=numpy.random.RandomState(10)
)
</code></pre>
<p>But it's still not working.</p>
| 0 |
2016-09-21T11:33:45Z
| 39,651,384 |
<p>The dictionary generated by <code>corpora.Dictionary</code> may differ between runs even for the same corpus (for example, the same words but in a different order). So one should fix the dictionary as well as the seed to get the same topics each time. The code below may help to fix the dictionary:</p>
<pre><code>dic = corpora.Dictionary(corpus)
dic.save("filename")
dic=corpora.Dictionary.load("filename")
</code></pre>
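<p>Putting the two together (loading the saved dictionary and passing a fixed seed) might look like this; it is only a sketch and assumes <code>corpus_tfidf</code> is built from the same fixed dictionary:</p>
<pre><code>from gensim import corpora, models

dic = corpora.Dictionary.load("filename")   # reuse the saved dictionary
lda = models.LdaModel(corpus_tfidf,
                      id2word=dic,
                      num_topics=2,
                      random_state=10)      # a plain int is also accepted as the seed
</code></pre>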
| 0 |
2016-09-23T01:55:03Z
|
[
"python",
"numpy",
"gensim"
] |
Script to create multiple socket connections to multiple servers fast
| 39,615,439 |
<p>I have a list of server address in a file as below:</p>
<pre><code>192.168.1.100
192.168.1.101
192.168.1.102
...
192.168.1.200
</code></pre>
<p>I want to write a program which creates multiple socket connections from one PC client to all these servers (using the same source IP, source port and destination port) in order to fill my modem's NAT table. </p>
<p>Could anyone suggest the most efficient way to do that? If I have a list of 7K server IP addresses, I expect the number of socket connections to grow to 7K quickly, for example within 5 minutes (I just want to simulate a TCP attack). I wrote a Python script but it's very slow compared to my expectation.</p>
| -1 |
2016-09-21T11:33:54Z
| 39,626,566 |
<p>You should be able to issue the 7K connects in a non-blocking fashion, then wait on them. Assuming they all succeed, the wait time of all of them will be overlapped. That should result in a much smaller overall delay.</p>
<p>In other words, try something like this:</p>
<pre><code>for (i = 0; i < 7000; ++i) {
// Create socket
sock_array[i] = socket(PF_INET, SOCK_STREAM, 0);
// Set socket non-blocking
flags = fcntl(sock_array[i], F_GETFL, 0);
fcntl(sock_array[i], F_SETFL, flags | O_NONBLOCK);
// Do the connect
connect(sock_array[i], &sock_addr[i], sizeof sock_addr[i]);
}
for (i = 0; i < 7000; ++i) {
// Find out if connect completed.
    getsockopt(sock_array[i], SOL_SOCKET, SO_ERROR, &err, &len);
// Assuming err == 0, your connect to i-th host is done
}
</code></pre>
<p>In actuality, during the first loop, you would add each socket to an <code>fd_set</code> and use <code>select</code> to determine when each <code>connect</code> had completed (either successfully or unsuccessfully). The time required to create all the connections should be bounded by the connection that takes the longest to establish. (Add error handling, etc. to taste.)</p>
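<p>Since the question mentions a Python script, here is a rough Python sketch of the same idea: issue every connect non-blocking first, then wait for all of them together. The port number and the <code>server_ips</code> file are assumptions for illustration.</p>
<pre><code>import socket
import select

server_ips = open('servers.txt').read().split()   # assumed file with one address per line
port = 80                                         # assumed target port

socks = []
for ip in server_ips:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    s.connect_ex((ip, port))        # returns immediately; EINPROGRESS is expected
    socks.append(s)

# wait until the sockets become writable, i.e. each connect has completed or failed
# (for thousands of sockets you may need poll/epoll instead of plain select)
pending = list(socks)
while pending:
    _, writable, _ = select.select([], pending, [], 5)
    if not writable:
        break                        # timed out
    for s in writable:
        pending.remove(s)
</code></pre>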
| 0 |
2016-09-21T21:04:13Z
|
[
"java",
"python",
"shell",
"tcp",
"network-programming"
] |
What can cause this unicode object is not callable error in nosetests lib?
| 39,615,464 |
<p>I have a test case that tests some flow in an API (uses <code>requests.Session()</code> and makes multiple calls to our backend.)</p>
<p>This test case passes on my Mac and on other people's Macs. But when it's executed in Jenkins I get an error. There are other similar test cases like this that pass without issues in Jenkins. Unfortunately I cannot share the test code itself. </p>
<p>Jenkins is running on Ubuntu 14.04</p>
<p>But here are first lines of the test code if it helps.</p>
<pre><code># filename: test_payment_visa.py
import unittest
from tests.utils import WWHTTPClient
import math
from nose.plugins.attrib import attr
class TestPaymentWorkflow(unittest.TestCase):
def setUp(self):
self.ww_api = WWHTTPClient()
def test_payment_visa(self):
"""Test for Payment Workflow via VISA"""
</code></pre>
<p>Does anyone have an idea what this can be related to?</p>
<pre><code>Traceback (most recent call last):
File "/var/lib/jenkins/.virtualenvs/api-tests/bin/nosetests", line 11, in <module>
sys.exit(run_exit())
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/core.py", line 121, in __init__
**extra_args)
File "/usr/lib/python2.7/unittest/main.py", line 95, in __init__
self.runTests()
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/core.py", line 207, in runTests
result = self.testRunner.run(self.test)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/core.py", line 62, in run
test(result)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 177, in __call__
return self.run(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 224, in run
test(orig)
File "/usr/lib/python2.7/unittest/suite.py", line 70, in __call__
return self.run(*args, **kwds)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 75, in run
test(result)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 177, in __call__
return self.run(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 224, in run
test(orig)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 177, in __call__
return self.run(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 224, in run
test(orig)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 177, in __call__
return self.run(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/suite.py", line 224, in run
test(orig)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/case.py", line 45, in __call__
return self.run(*arg, **kwarg)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/case.py", line 138, in run
result.addError(self, err)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/proxy.py", line 131, in addError
plugins.addError(self.test, err)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/plugins/manager.py", line 99, in __call__
return self.call(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/plugins/manager.py", line 167, in simple
result = meth(*arg, **kw)
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/plugins/xunit.py", line 288, in addError
id = test.id()
File "/var/lib/jenkins/.virtualenvs/api-tests/local/lib/python2.7/site-packages/nose/case.py", line 85, in id
return self.test.id()
TypeError: 'unicode' object is not callable
</code></pre>
| 0 |
2016-09-21T11:34:56Z
| 39,617,973 |
<p>So there was a problem with my code.</p>
<p>I used a statement <code>self.id = r.json()["orders"][0]["id"]</code></p>
<pre><code># filename: test_payment_visa.py
import unittest
from tests.utils import WWHTTPClient
import math
from nose.plugins.attrib import attr
class TestPaymentWorkflow(unittest.TestCase):
def setUp(self):
self.ww_api = WWHTTPClient()
def test_payment_visa(self):
"""Test for Payment Workflow via VISA"""
...
...
self.id = r.json()["orders"][0]["id"]
...
...
</code></pre>
<p>The problem is that the <code>unittest.TestCase</code> class has a method called <code>id</code>.</p>
<pre><code>def id(self):
"""Get a short(er) description of the test
"""
return self.test.id()
</code></pre>
<p>So during the test run I set <code>self.id</code> to a unicode string, and then <code>nosetests</code> tries to call the <code>id</code> method and gets the <code>TypeError: 'unicode' object is not callable</code> exception.</p>
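<p>A straightforward fix is to store the value under a different attribute name so that <code>TestCase.id()</code> is not shadowed (the name <code>order_id</code> below is just an example):</p>
<pre><code>self.order_id = r.json()["orders"][0]["id"]   # instead of self.id, which shadows TestCase.id()
</code></pre>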
| 0 |
2016-09-21T13:28:32Z
|
[
"python",
"jenkins",
"nose"
] |
converting dataframe to list of tuples on condition
| 39,615,476 |
<p>I have following df:</p>
<pre><code> 1 2 3 4
1 NaN 0.000000 0.000000 0.000000
2 NaN 0.027273 0.000000 0.000000
3 NaN 0.000000 0.101449 0.000000
4 NaN 0.000000 0.000000 0.194245
5 NaN 0.000000 0.000000 0.000000
6 NaN 0.000000 0.000000 0.000000
7 NaN 0.000000 0.000000 0.000000
8 NaN 0.000000 0.000000 0.000000
13 NaN 0.000000 0.000000 0.000000
14 NaN 0.000000 5 0.000000
</code></pre>
<p>How can I convert it to a list of tuples <code>[(column, row, data)]</code>, taking only values that are greater than <code>zero</code>?</p>
<p>For example, I want to have the following values:</p>
<pre><code>[(2,2,0.027273), (3,3,0.101449 ), (3,14,5),(4,4,0.194245)]
</code></pre>
| 1 |
2016-09-21T11:35:09Z
| 39,615,610 |
<p>You can first cast the columns to <code>int</code> (if necessary), <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> and use a list comprehension, where it is necessary to convert the first and second values in each <code>tuple</code> to <code>int</code> (the default is <code>float</code>):</p>
<pre><code>df.columns = df.columns.astype(int)
s = df.unstack()
tuples = [tuple((int(x[0]),int(x[1]),x[2])) for x in s[s>0].reset_index().values]
print (tuples)
[(2, 2, 0.027273000000000002), (3, 3, 0.101449), (3, 14, 5.0), (4, 4, 0.194245)]
</code></pre>
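<p>An equivalent sketch without reshaping, iterating column by column (this assumes the columns have already been cast to <code>int</code> as above, so no extra conversion is needed):</p>
<pre><code>tuples = [(col, idx, val)
          for col, series in df.iteritems()
          for idx, val in series.iteritems()
          if val > 0]
</code></pre>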
| 1 |
2016-09-21T11:41:54Z
|
[
"python",
"list",
"pandas",
"dataframe",
"tuples"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,615,688 |
<p>Let's say that your new point (x, y) is on a circle of radius 10 and center (x0, y0). The random component is the angle.</p>
<pre><code>import math as m
import random
# radius of the circle
r = 10
# create random angle and compute coordinates of the new point
theta = 2*m.pi*random.random()
x = x0 + r*m.cos(theta)
y = y0 + r*m.sin(theta)
# test if the point created is in the domain [[0,50], [0, 50]] (see comments of PM2Ring)
while not ( 0<=x<=50 and 0<=y<=50 ) :
# update theta: add pi/2 until the new point is in the domain (see HumanCatfood's comment)
theta += 0.5*m.pi
x = x0 + r*m.cos(theta)
y = y0 + r*m.sin(theta)
</code></pre>
| 1 |
2016-09-21T11:45:59Z
|
[
"python",
"random"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,615,862 |
<p>Let's say you have a point <code>(x, y)</code></p>
<ol>
<li><p>create another random point <em>anywhere</em> on the plane: <code>(x1, y1) = (random(), random())</code></p></li>
<li><p>take the vector from your point to the new point: <code>(vx, vy) = (x1-x, y1-y)</code></p></li>
<li><p>get the length <code>l</code> of the vector: <code>l = sqrt(vx * vx + vy * vy)</code></p></li>
<li><p>use <code>l</code> to normalise the vector (so it has a length of 1): <code>(vx, vy) = (vx / l, vy / l)</code></p></li>
<li><p>make the vector 10 steps long: <code>(vx, vy) = (vx * 10, vy * 10)</code></p></li>
<li><p>add it to your original point to get to the desired point: <code>(x1, y1) = (x + vx, y + vy)</code></p></li>
</ol>
<p>voilà :)</p>
| 1 |
2016-09-21T11:54:34Z
|
[
"python",
"random"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,616,057 |
<pre><code>from random import random
from math import sqrt
# Deviation
dev = 50
# Required distance between points
l = 10
if __name__ == '__main__':
# First random point
x0, y0 = dev*random(), dev*random()
    # Second point: keep x1 within l of x0 so the square root stays real
    x1 = x0 + (2*random() - 1)*l
    y1 = y0 + sqrt(l**2 - (x1 - x0)**2)
# Output
print "First point (%s, %s)" % (x0, y0)
print "Second point (%s, %s)" % (x1, y1)
print "Distance: %s" % (sqrt((x1 - x0)**2 + (y1 - y0)**2))
</code></pre>
| 1 |
2016-09-21T12:02:57Z
|
[
"python",
"random"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,616,446 |
<p>So, you got the formula <code>d=d1+d2=|x-x0|+|y-y0| , for d=10</code></p>
<p>Let's examine what's going on with this formula:</p>
<ul>
<li>Let's say we generate a random point P at (0,0) </li>
<li>Let's say we generate <code>y=random.randint(0,50)</code> and let's imagine the value is 50. </li>
</ul>
<p>What does this mean? </p>
<p><code>d1=|x-p[0]|=50</code> and your original formula is <code>d=d1+d2=|x-x0|+|y-y0|</code>, so
that means <code>d2=|y-y0|=10-50</code> and <code>d2=|y-y0|=-40</code>. Is this possible? Absolutely not! An absolute value |y-y0| will always be positive; that's why your formula won't work for certain random points. You need to make sure (d-d1)>0, otherwise your equation won't have a solution.</p>
<hr>
<p>If you wanted to consider Euclidean distance you just need to generate random points in a circle where your original point will be the center, something like this will do:</p>
<pre><code>import random
import math
def random_point(p, r=10):
theta = 2 * math.pi * random.random()
return (p[0] + r * math.cos(theta), p[1] + r * math.sin(theta))
</code></pre>
<p>If you draw a few random points you'll see more and more how the circle shape is created, let's try with N=10, N=50, N=1000:</p>
<p><a href="http://i.stack.imgur.com/CnY7e.png" rel="nofollow"><img src="http://i.stack.imgur.com/CnY7e.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/KrVxJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/KrVxJ.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/9L9pI.png" rel="nofollow"><img src="http://i.stack.imgur.com/9L9pI.png" alt="enter image description here"></a></p>
<p>Now, it seems you need the generated circle to be constrained to a certain region. One possible choice (though not the most optimal) would be to generate random points until they meet those constraints; something like this would do:</p>
<pre><code>def random_constrained_point(p, r=10, x_limit=50, y_limit=50):
i = 0
MAX_ITERATIONS = 100
while True:
x0, y0 = random_point(p, r)
if (0 <= x0 <= x_limit and 0 <= y0 <= y_limit):
return (x0, y0)
if i == MAX_ITERATIONS:
return p
i += 1
</code></pre>
<p>Once you have this, it's interesting to check what shape is created as you increase the circle radius (10, 20, 50):</p>
<p><a href="http://i.stack.imgur.com/XHgVs.png" rel="nofollow"><img src="http://i.stack.imgur.com/XHgVs.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/hUZDr.png" rel="nofollow"><img src="http://i.stack.imgur.com/hUZDr.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/pXYdB.png" rel="nofollow"><img src="http://i.stack.imgur.com/pXYdB.png" alt="enter image description here"></a></p>
<p>As you can see, your generated random constrained points will form a well-defined subarc.</p>
| 0 |
2016-09-21T12:22:26Z
|
[
"python",
"random"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,645,657 |
<p>This code generates a random point in the xy-plane named <code>y0</code>, then generates another point <code>x0</code> 10 steps apart from <code>y0</code> in taxicab distance.</p>
<pre><code>import random

y0 = (random.randint(0, 50), random.randint(0, 50))

while True:
    y = random.randint(0, 50)
    x = (10 - abs(y - y0[1])) + y0[0]
    if (abs(x - y0[0]) + abs(y - y0[1])) == 10:
        x0 = (x, y)
        break
</code></pre>
| 0 |
2016-09-22T17:46:53Z
|
[
"python",
"random"
] |
How can I generate a random point (x, y) 10 steps apart from y0(a, b) in xy-plane?
| 39,615,495 |
<p>I have generated a random point named <code>y0=(a,b)</code> in the xy-plane. How can I generate another random point <code>(x,y)</code> 10 steps apart from <code>y0</code>?</p>
<p>Note: by 10 steps apart from the first point I don't mean the Euclidean distance. I mean the number of steps on the lattice between the two points (a,b) and (x,y), which is given by |x-a|+|y-b|=10.</p>
<p>My attempt (sometimes gives a wrong result):</p>
<pre><code>import random
y0=(random.randint(0,50),random.randint(0,50))# here I generated the first point.
y=random.randint(0,50)
# I used the formula |x-a|+|y-b|=10.
x=(10 -abs(y-y0[1]))+y0[0] or x=-(10 -abs(y-y0[1]))+y0[0]
x0=(x,y)
</code></pre>
| 1 |
2016-09-21T11:36:02Z
| 39,737,936 |
<p><code>abs(x)+abs(y)=10</code> defines a <a href="http://www.wolframalpha.com/input/?i=plot+abs(x)%2Babs(y)+%3D+10" rel="nofollow">square</a>, so all you need to do is pick a random value along the perimeter of the square (40 units long), and map that random distance back to your x,y coordinate pair.</p>
<p>Something like (untested):</p>
<pre><code>x = random.randint(-10,9)
y = 10 - abs(x)
if (random.randint(0,1) == 0):
x = -x
y = -y
x = x + y0[0]
y = y + y0[1]
x0=(x,y)
</code></pre>
<p>Clipping the <code>x</code> range that way ensures that all points are picked uniformly. Otherwise you can end up with (-10,0) and (10,0) having twice the chance of being picked compared to any other coordinate.</p>
| 0 |
2016-09-28T04:14:08Z
|
[
"python",
"random"
] |
Django Login Form Returning false on is_valid() if username already exists
| 39,615,504 |
<p>I have a login form I created from Django's <code>User</code> model:</p>
<p><strong>forms.py</strong>:</p>
<pre><code>class LoginForm(ModelForm):
class Meta:
model = User
fields = ['username', 'password']
widgets = {
'username': forms.TextInput(attrs={'placeholder': '@username'}),
'password': forms.PasswordInput(attrs={'placeholder': 'Password'})
}
</code></pre>
<p><strong>views.py</strong>:</p>
<pre><code> reg = LoginForm(request.POST or None)
if reg.is_valid():
return HttpResponse('Success: valid form')
else:
return HttpResponse('Error: invalid form')
</code></pre>
<p>Now if I try logging in with a <em>username</em> that is not registered it returns <strong>Success: valid form</strong>, but if it is a
username that already exists, it says <strong>Error: invalid form</strong>.</p>
<p>I tried doing this on the command line and below is what I get (I first tried it with a username that is registered, <strong>yax</strong>):</p>
<pre><code>In [2]: data = {'username':'yax', 'password':'wrong_password'}
In [3]: form = LoginForm(data)
In [4]: form.is_valid()
Out[5]: False
In [6]: form.errors
Out[7]: {'username': [u'A user with that username already exists.']}
In [8]: data = {'username':'wrong_name', 'password':'wrong_password'}
In [9]: form = LoginForm(data)
In [10]: form.is_valid()
Out[11]: True
In [12]: form.errors
Out[13]: {}
</code></pre>
<p>I would be glad to know why I am getting this error and how to solve it. </p>
| -1 |
2016-09-21T11:36:42Z
| 39,615,579 |
<p>A login form should not be a ModelForm. That's for creating or editing model instances - in this case, since you don't supply an <code>instance</code> parameter, Django assumes you want to create a new user.</p>
<p>Just use a standard Form and define the username and password fields explicitly.</p>
<p>Alternatively, use the <a href="https://docs.djangoproject.com/en/1.10/topics/auth/default/#django.contrib.auth.forms.AuthenticationForm" rel="nofollow">AuthenticationForm</a> supplied in django.contrib.auth.forms, which takes care of the entire authentication/login process for you.</p>
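<p>A minimal sketch of the plain-<code>Form</code> approach (the widgets are copied from your ModelForm; the <code>authenticate</code>/<code>login</code> wiring is the usual Django pattern, adjust names to your project):</p>
<pre><code>from django import forms
from django.contrib.auth import authenticate, login
from django.http import HttpResponse

class LoginForm(forms.Form):
    username = forms.CharField(widget=forms.TextInput(attrs={'placeholder': '@username'}))
    password = forms.CharField(widget=forms.PasswordInput(attrs={'placeholder': 'Password'}))

def login_view(request):
    form = LoginForm(request.POST or None)
    if form.is_valid():
        user = authenticate(username=form.cleaned_data['username'],
                            password=form.cleaned_data['password'])
        if user is not None:
            login(request, user)
            return HttpResponse('Success: valid credentials')
    return HttpResponse('Error: invalid form or credentials')
</code></pre>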
| 1 |
2016-09-21T11:40:30Z
|
[
"python",
"django",
"forms"
] |
Pandas: write condition to filter in dataframe
| 39,615,506 |
<p>I have a dataframe:</p>
<pre><code>member_id,event_time,event_path,event_duration
19440,"2016-08-09 08:26:48",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,0
19440,"2016-08-09 08:27:04",ebesucher.ru/surfbar/Ochotona,25
19440,"2016-08-09 08:27:53",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,0
19440,"2016-08-09 08:27:53",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,2
19441,"2016-08-09 08:27:55",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#password,1
19441,"2016-08-09 08:27:58",neobux.com/m/l/,0
19441,"2016-08-09 08:27:59",neobux.com/m/l/,0
19441,"2016-08-09 08:28:01",http://new.enjoysurvey.com/ru/survey/649/index/m_e48f6e46bf0d222e2be70bc9067730c423423,11
19441,"2016-08-09 08:28:12",echo.msk.ru ,1
19441,"2016-08-09 08:28:15",neobux.com/m/l/?vl=A206591715C607425417A51CDE023499,2
</code></pre>
<p>I need to create a new column <code>visiting</code>: if <code>new.enjoysurvey.com/ru/survey/649/index/m_e48f6e46bf0d222e2be70bc9067730c4</code> is contained in <code>event_path</code> and the next <code>event_path</code> contains one of <code>['echo.msk.ru', 'edimdoma.ru', 'glaz.tv', 'vesti.ru']</code>, then <code>visiting == 1</code>, else <code>2</code>.
If this condition is met for a <code>member_id</code>, set visiting=1 for all rows of that member_id.
Desired output:</p>
<pre><code>member_id,event_time,event_path,event_duration, visiting
19440,"2016-08-09 08:26:48",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,0,2
19440,"2016-08-09 08:27:04",n,25,2
19440,"2016-08-09 08:27:53",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,0,2
19440,"2016-08-09 08:27:53",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#identifier,2,2
19441,"2016-08-09 08:27:55",accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1&osid=1#password,1,1
19441,"2016-08-09 08:27:58",neobux.com/m/l/,0,1
19441,"2016-08-09 08:27:59",neobux.com/m/l/,0,1
19441,"2016-08-09 08:28:01",http://new.enjoysurvey.com/ru/survey/649/index/m_e48f6e46bf0d222e2be70bc9067730c423423,11,1
19441,"2016-08-09 08:28:12",echo.msk.ru ,1,1
19441,"2016-08-09 08:28:15",neobux.com/m/l/?vl=A206591715C607425417A51CDE023499,2,1
</code></pre>
<p>I tried:</p>
<pre><code>df['visiting'] = df.groupby("member_id").event_path.transform(lambda g: (g.isin(["new.enjoysurvey.com/ru/survey/649/index/m_e48f6e46bf0d222e2be70bc9067730c4", 'echo.msk.ru', 'edimdoma.ru', 'glaz.tv', 'vesti.ru']).sum() > 1).astype(int)).replace(0, 2)
</code></pre>
<p>But it only checks the number of matching <code>event_path</code> values; I need to take the sequence into account, and I don't know how to do that.</p>
| 0 |
2016-09-21T11:36:48Z
| 39,627,483 |
<p>Consider a <code>groupby.apply()</code> that loops through the <code>event_path</code> strings. With a loop you can inspect adjacent elements by list index:</p>
<pre><code>def findevent(row):
event_paths = row['event_path'].tolist()
row['visiting'] = 2
    for i in range(len(event_paths) - 1):   # -1: the last path has no following path to compare
if 'new.enjoysurvey.com/ru/survey/649/index/m_e48f6e46bf0d222e2be70bc9067730c423423' in event_paths[i] and \
event_paths[i+1] in ['echo.msk.ru', 'edimdoma.ru', 'glaz.tv', 'vesti.ru']:
row['visiting'] = 1
break
return(row)
df = df.groupby(['member_id']).apply(findevent)
print(df)
# member_id event_time event_path event_duration visiting
# 0 19440 2016-08-09 08:26:48 accounts.google.com/ServiceLogin?service=mail&... 0 2
# 1 19440 2016-08-09 08:27:04 ebesucher.ru/surfbar/Ochotona 25 2
# 2 19440 2016-08-09 08:27:53 accounts.google.com/ServiceLogin?service=mail&... 0 2
# 3 19440 2016-08-09 08:27:53 accounts.google.com/ServiceLogin?service=mail&... 2 2
# 4 19441 2016-08-09 08:27:55 accounts.google.com/ServiceLogin?service=mail&... 1 1
# 5 19441 2016-08-09 08:27:58 neobux.com/m/l/ 0 1
# 6 19441 2016-08-09 08:27:59 neobux.com/m/l/ 0 1
# 7 19441 2016-08-09 08:28:01 http://new.enjoysurvey.com/ru/survey/649/index... 11 1
# 8 19441 2016-08-09 08:28:12 echo.msk.ru 1 1
# 9 19441 2016-08-09 08:28:15 neobux.com/m/l/?vl=A206591715C607425417A51CDE0... 2 1
</code></pre>
<p><strong>Note:</strong> your first URL search string, <code>new.enjoysurvey.com/...</code>, is not contained in your posted data. The code above uses the URL that actually appears in the data for demonstration purposes.</p>
| 1 |
2016-09-21T22:25:12Z
|
[
"python",
"pandas"
] |
Pandas: create word cloud from a column with strings
| 39,615,520 |
<p>I have the following <code>dataframe</code> with <code>string</code> values:</p>
<pre><code> text
0 match of the day
1 euro 2016
2 wimbledon
3 euro 2016
</code></pre>
<p>How can I create a <code>word cloud</code> from this column?</p>
| 1 |
2016-09-21T11:37:14Z
| 39,616,033 |
<p>I think you need a <a href="http://stackoverflow.com/a/39172275/2901002">tuple of tuples</a> with frequencies, so use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a> with a <code>list comprehension</code>:</p>
<pre><code>tuples = tuple([tuple(x) for x in df.text.value_counts().reset_index().values])
print (tuples)
(('euro 2016', 2), ('wimbledon', 1), ('match of the day', 1))
#http://stackoverflow.com/q/38247648/2901002
cloud.generate_from_frequencies(tuples)
</code></pre>
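<p>A small end-to-end sketch, assuming the <code>wordcloud</code> package is installed (note: depending on the version, <code>generate_from_frequencies</code> expects either tuples as above or a plain dict):</p>
<pre><code>from wordcloud import WordCloud

freqs = df['text'].value_counts().to_dict()   # e.g. {'euro 2016': 2, 'wimbledon': 1, ...}
wc = WordCloud(width=400, height=200).generate_from_frequencies(freqs)
wc.to_file('cloud.png')
</code></pre>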
| 1 |
2016-09-21T12:01:30Z
|
[
"python",
"string",
"python-2.7",
"pandas",
"word-cloud"
] |
NumPy genfromxt TypeError: data type not understood error
| 39,615,628 |
<p>I would like to read in this file (test.txt)</p>
<pre><code>01.06.2015;00:00:00;0.000;0;-9.999;0;8;0.00;18951;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;00:01:00;0.000;0;-9.999;0;8;0.00;18954;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;00:02:00;0.000;0;-9.999;0;8;0.00;18960;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;09:23:00;0.327;61;25.831;39;29;0.18;19006;01.06.2015;09:23:00;0.327;61;25.831;39;29;0.18;19006;(SPECTRUM);;;;;;;;;;;;;;1;1;;;1;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;1;;;;;;;;;;;;(/SPECTRUM)
01.06.2015;09:24:00;0.000;0;-9.999;0;29;0.00;19010;(SPECTRUM)ZERO(/SPECTRUM)
</code></pre>
<p>...I tried it with the numpy function genfromtxt() (see below in the code excerpt).</p>
<pre><code>import numpy as np
col_names = ["date", "time", "rain_intensity", "weather_code_1", "radar_ref", "weather_code_2", "val6", "rain_accum", "val8", "val9"]
types = ["object", "object", "float", "uint8", "float", "uint8", "uint8", "float", "uint8","|S10"]
# Read in the file with np.genfromtxt
mydata = np.genfromtxt("test.txt", delimiter=";", names=col_names, dtype=types)
</code></pre>
<p>Now when I execute the code I get the following error --></p>
<pre><code>raise ValueError(errmsg)ValueError: Some errors were detected !
Line #4 (got 79 columns instead of 10)
</code></pre>
<p>Now I think that the difficulties come from the last column (val9) with the many <code>;;;;;;;</code><br>
It is obvious that the delimiter and the character used in the last column (<code>;</code>) are the same!</p>
<p>How can I read in the file without an error, maybe there is a possibility to skip the last column, or to replace the <code>;</code> only in the last column?</p>
| 0 |
2016-09-21T11:42:52Z
| 39,616,162 |
<p>From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">numpy documentation</a></p>
<blockquote>
<p><strong>invalid_raise</strong> : bool, optional<br>
If True, an exception is raised if an inconsistency is detected in the
number of columns. If False, a warning is emitted and the offending
lines are skipped.</p>
</blockquote>
<pre><code>mydata = np.genfromtxt("test.txt", delimiter=";", names=col_names, dtype=types, invalid_raise = False)
</code></pre>
<p>Note that there were errors in your code which I have corrected (delimiter spelled incorrectly, and <code>types</code> list referred to as <code>dtypes</code> in function call)</p>
<p><strong>Edit</strong>: From your comment, I see I slightly misunderstood. You meant that you want to skip the last <em>column</em> not the last <em>row</em>.</p>
<p>Take a look at the following code. I have defined a generator that only returns the first ten elements of each row. This will allow <code>genfromtxt()</code> to complete without error and you now get column #3 from all rows.</p>
<p>Note though, that you are still going to lose some data, as if you look carefully you will see that the problem line is actually two lines concatenated together with garbage where the other lines have <code>ZERO</code>. So you are still going to lose this second line. You could maybe modify the generator to parse each line and deal with this differently, but I'll leave that as a fun exercise :)</p>
<pre><code>import numpy as np
def filegen(filename):
with open(filename, 'r') as infile:
for line in infile:
yield ';'.join(line.split(';')[:10])
col_names = ["date", "time", "rain_intensity", "weather_code_1", "radar_ref", "weather_code_2", "val6", "rain_accum", "val8", "val9"]
dtypes = ["object", "object", "float", "uint8", "float", "uint8", "uint8", "float", "uint8","|S10"]
# Read in the file with np.genfromtxt
mydata = np.genfromtxt(filegen('temp.txt'), delimiter=";", names=col_names, dtype = dtypes)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[('01.06.2015', '00:00:00', 0.0, 0, -9.999, 0, 8, 0.0, 7, '(SPECTRUM)')
('01.06.2015', '00:01:00', 0.0, 0, -9.999, 0, 8, 0.0, 10, '(SPECTRUM)')
('01.06.2015', '00:02:00', 0.0, 0, -9.999, 0, 8, 0.0, 16, '(SPECTRUM)')
('01.06.2015', '09:23:00', 0.327, 61, 25.831, 39, 29, 0.18, 62, '01.06.2015')
('01.06.2015', '09:24:00', 0.0, 0, -9.999, 0, 29, 0.0, 66, '(SPECTRUM)')]
</code></pre>
| 2 |
2016-09-21T12:07:25Z
|
[
"python",
"numpy",
"genfromtxt"
] |
NumPy genfromxt TypeError: data type not understood error
| 39,615,628 |
<p>I would like to read in this file (test.txt)</p>
<pre><code>01.06.2015;00:00:00;0.000;0;-9.999;0;8;0.00;18951;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;00:01:00;0.000;0;-9.999;0;8;0.00;18954;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;00:02:00;0.000;0;-9.999;0;8;0.00;18960;(SPECTRUM)ZERO(/SPECTRUM)
01.06.2015;09:23:00;0.327;61;25.831;39;29;0.18;19006;01.06.2015;09:23:00;0.327;61;25.831;39;29;0.18;19006;(SPECTRUM);;;;;;;;;;;;;;1;1;;;1;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;1;;;;;;;;;;;;(/SPECTRUM)
01.06.2015;09:24:00;0.000;0;-9.999;0;29;0.00;19010;(SPECTRUM)ZERO(/SPECTRUM)
</code></pre>
<p>...I tried it with the numpy function genfromtxt() (see below in the code excerpt).</p>
<pre><code>import numpy as np
col_names = ["date", "time", "rain_intensity", "weather_code_1", "radar_ref", "weather_code_2", "val6", "rain_accum", "val8", "val9"]
types = ["object", "object", "float", "uint8", "float", "uint8", "uint8", "float", "uint8","|S10"]
# Read in the file with np.genfromtxt
mydata = np.genfromtxt("test.txt", delimiter=";", names=col_names, dtype=types)
</code></pre>
<p>Now when I execute the code I get the following error --></p>
<pre><code>raise ValueError(errmsg)ValueError: Some errors were detected !
Line #4 (got 79 columns instead of 10)
</code></pre>
<p>Now I think that the difficulties come from the last column (val9) with the many <code>;;;;;;;</code><br>
It is obvious that the delimiter and the character used in the last column (<code>;</code>) are the same!</p>
<p>How can I read in the file without an error, maybe there is a possibility to skip the last column, or to replace the <code>;</code> only in the last column?</p>
| 0 |
2016-09-21T11:42:52Z
| 39,621,491 |
<p><code>usecols</code> can be used to ignore excess delimiters, e.g.</p>
<pre><code>In [546]: np.genfromtxt([b'1,2,3',b'1,2,3,,,,,,'], dtype=None,
delimiter=',', usecols=np.arange(3))
Out[546]:
array([[1, 2, 3],
[1, 2, 3]])
</code></pre>
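<p>Applied to the file from the question, the same idea would look something like this (a sketch, untested against the exact data, combined with <code>invalid_raise=False</code> from the other answer as a safety net):</p>
<pre><code>mydata = np.genfromtxt("test.txt", delimiter=";", names=col_names,
                       dtype=types, usecols=range(10), invalid_raise=False)
</code></pre>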
| 0 |
2016-09-21T16:05:42Z
|
[
"python",
"numpy",
"genfromtxt"
] |
send_mail is clearly sending email but no email is showing up in inbox
| 39,615,860 |
<p>I have these email settings in my <code>settings.py</code></p>
<pre><code>EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'email@myemail.com'
EMAIL_HOST_PASSWORD = 'password'
EMAIL_PORT = 587
</code></pre>
<p>and am using this function to send email to recipients.</p>
<pre><code>def send_email(subject, body, recipients, agent_email, bcc=[], attachments=[]):
recipient_list = []
if isinstance(recipients, (str, unicode,)):
recipient_list.append(recipients)
else:
recipient_list = recipients
recipient_list = recipient_list + bcc
send_mail(subject, body, settings.EMAIL_FROM, recipient_list)
</code></pre>
<p>While it looks quite clear from the <code>django-admin</code> site that the email was indeed sent, and no error messages show in the log files whatsoever, when I check the inbox of the address it was sent to, nothing shows up. I would expect to see the email there, especially given that it shows as sent in <code>django-admin</code>. Have I misunderstood something about how email is sent from the system?</p>
<p><b>EDIT</b></p>
<p>I also checked my spam folder and added</p>
<pre><code>EMAIL_FROM = 'email@myemail.com'
</code></pre>
<p>because I noticed it wasn't there before. Same results, though. Email appears sent according to <code>django-admin</code> but no email in my inbox.</p>
| 1 |
2016-09-21T11:54:14Z
| 39,620,478 |
<p>Try to add the following in settings.py</p>
<pre><code>EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
</code></pre>
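<p>For local debugging it can also help to rule out the SMTP connection entirely: Django's console backend simply prints every outgoing message to the runserver console (a quick sketch, not for production):</p>
<pre><code># settings.py -- temporary, for local debugging only
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
</code></pre>
<p>If the message shows up on the console but never in the real inbox, the problem is likely on the SMTP/Gmail side (credentials, "less secure apps" settings, spam filtering) rather than in your Django code.</p>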
| 0 |
2016-09-21T15:16:17Z
|
[
"python",
"django",
"python-2.7",
"email",
"webserver"
] |
Google App Engine Memcache Python
| 39,615,861 |
<p>Using Google App Engine <strong>Memcache</strong>... Can more than one user access the same key-value pair?
or in other words.. Is there a Memcache created per user or is it shared across multiple users?</p>
| 0 |
2016-09-21T11:54:33Z
| 39,623,880 |
<p>Memcache is shared across users. It is not a cookie, but exists in RAM on the server for all pertinent requests to access.</p>
| 0 |
2016-09-21T18:22:18Z
|
[
"python",
"google-app-engine",
"memcached"
] |
Google App Engine Memcache Python
| 39,615,861 |
<p>Using Google App Engine <strong>Memcache</strong>... Can more than one user access the same key-value pair?
or in other words.. Is there a Memcache created per user or is it shared across multiple users?</p>
| 0 |
2016-09-21T11:54:33Z
| 40,032,242 |
<p>It's not shared between multiple applications if that's what you asked. Each of your instances on a single application will see a consistent view of the cache. </p>
<p>From <a href="https://cloud.google.com/appengine/docs/java/memcache/?csw=1" rel="nofollow">documentation</a>: The cache is global and is shared across the application's frontend, backend, and all of its services and versions.</p>
| 2 |
2016-10-13T22:46:49Z
|
[
"python",
"google-app-engine",
"memcached"
] |
Function to calculate value from first row in Python Pandas
| 39,615,871 |
<p>Is there any function in pandas to simulate an Excel formula like '=SUM($A$1:A10)' (for the 10th row), i.e. the formula should take data cumulatively from the 1st row.</p>
<p>Pandas' rolling function needs an integer value as the window argument.</p>
| 0 |
2016-09-21T11:55:00Z
| 39,616,114 |
<p>The equivalent of <code>=SUM($A$1:A1)</code> in pandas is <code>.expanding().sum()</code> (requires pandas 0.18.0):</p>
<pre><code>ser = pd.Series([1, 2, 3, 4])
ser
Out[3]:
0 1
1 2
2 3
3 4
dtype: int64
ser.expanding().sum()
Out[4]:
0 1.0
1 3.0
2 6.0
3 10.0
</code></pre>
<p>You can also apply a generic function via apply:</p>
<pre><code>ser.expanding().apply(lambda x: np.percentile(x, 90))
Out:
0 1.0
1 1.9
2 2.8
3 3.7
dtype: float64
</code></pre>
<p>Or directly with quantile:</p>
<pre><code>ser.expanding().quantile(0.9)
Out[15]:
0 1.0
1 1.0
2 2.0
3 3.0
dtype: float64
</code></pre>
<p>Note that 90th percentile is equal to 0.9th quantile. However, Series.quantile and Series.expanding.quantile are returning different results which is probably <a href="https://github.com/pydata/pandas/issues/8084" rel="nofollow">a bug</a>.</p>
<p>np.percentile returns the same results as Excel's PERCENTILE.INC. For PERCENTILE.EXC, I've previously wrote a small function <a href="http://stackoverflow.com/a/38597798/2285236">here</a>.</p>
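<p>For a plain running total with no missing values, <code>ser.cumsum()</code> gives the same result as <code>ser.expanding().sum()</code> and is typically faster:</p>
<pre><code>ser.cumsum()
Out:
0     1
1     3
2     6
3    10
dtype: int64
</code></pre>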
| 2 |
2016-09-21T12:05:31Z
|
[
"python",
"pandas",
"numpy"
] |
Python for loop over list with directories does not find every value
| 39,616,041 |
<p>I have a list of directories. I am trying to keep only those that are named by a number and not those with a string name, e.g. "lastStableBuild". The following code removes every non-digit-named directory except "lastUnsuccessfulBuild".</p>
<pre><code>for dir in listdir:
print(dir, " ", end="")
if not dir.isdigit():
listdir.remove(dir)
</code></pre>
<p>While debugging I tried to print the whole list. There were several directories with "string names" listed, but not "lastUnsuccessfulBuild". Then I printed the whole list again, and now it was listed while the others were removed. After executing the removal code again, "lastUnsuccessfulBuild" was removed, too. I am really confused.
This is the complete code:</p>
<pre><code>listdir = os.listdir(rootdir)
for dir in listdir:
print(dir, " ", end="")
if not dir.isdigit():
listdir.remove(dir)
print("")
print("________________After first removal_____________________________")
for dir in listdir:
print(dir, " ", end="")
for dir in listdir:
if not dir.isdigit():
listdir.remove(dir)
print("")
print("________________After second removal_____________________________")
for dir in listdir:
print(dir, " ", end="")
</code></pre>
<p>Which produces the following output:</p>
<pre><code>> python listdir.py
> 778 761 794 792 885 877 811 873 871 679
> 726 771 837 691 783 751 813 780 852 723 721 801 826
> 784 757 846 812 782 724 855 804 847 831 874 718 741 703
> 789 756 688 825 824 748 875 697 676 798 747 705 736 765
> 858 717 745 863 876 823 865 704 719 732 800 880 767 759
> 842 815 753 779 680 833 752 734 716 696 851 834 682 708
> 844 758 772 884 828 737 795 754 829 797 827 lastStableBuild
> 850 714 886 774 887 762 883 860 707 687 739 861 805 722
> 763 859 845 817 822 864 821 749 699 746 802 711 857 867
> 740 684 698 692 760 832 693 830 839 806 750 738 728 678
> 818 710 727 701 888 849 729 843 841 744 764 814 872 766
> 808 712 854 695 810 881 731 862 878 713 816 848 720 777
> 700 lastUnstableBuild 775 769 742 791 866 694 725 796 770
> 773 879 685 787 809 legacyIds 690 856 799 838 768 730 803
> 793 677 686 683 807 743 lastFailedBuild 702 870 735 715 820
> lastSuccessfulBuild 835 836 785 733 776 706 786 781 788 709
> 790 868 689 882 869 840 755
> ________________After first removal_____________________________
> 778 761 794 792 885 877 811 873 871 679 726 771 837 691 783
> 751 813 780 852 723 721 801 826 784 757 846 812 782 724
> 855 804 847 831 874 718 741 703 789 756 688 825 824 748
> 875 697 676 798 747 705 736 765 858 717 745 863 876 823
> 865 704 719 732 800 880 767 759 842 815 753 779 680 833
> 752 734 716 696 851 834 682 708 844 758 772 884 828 737
> 795 754 829 797 827 675 850 714 886 774 887 762 883 860
> 707 687 739 861 805 722 763 859 845 817 822 864 821 749
> 699 746 802 711 857 867 740 684 698 692 760 832 693 830
> 839 806 750 738 728 678 818 710 727 701 888 849 729 843
> 841 744 764 814 872 766 808 712 854 695 810 881 731 862
> 878 713 816 848 720 777 700 819 775 769 742 791 866 694
> 725 796 770 773 879 685 787 809 681 690 856 799 838 768
> 730 803 793 677 686 683 807 743 853 702 870 735 715 820
> lastUnsuccessfulBuild 835 836 785 733 776 706 786 781 788
> 709 790 868 689 882 869 840 755
> ________________After second removal_____________________________
> 778 761 794 792 885 877 811 873 871 679 726 771 837 691 783
> 751 813 780 852 723 721 801 826 784 757 846 812 782 724
> 855 804 847 831 874 718 741 703 789 756 688 825 824 748
> 875 697 676 798 747 705 736 765 858 717 745 863 876 823
> 865 704 719 732 800 880 767 759 842 815 753 779 680 833
> 752 734 716 696 851 834 682 708 844 758 772 884 828 737
> 795 754 829 797 827 675 850 714 886 774 887 762 883 860
> 707 687 739 861 805 722 763 859 845 817 822 864 821 749
> 699 746 802 711 857 867 740 684 698 692 760 832 693 830
> 839 806 750 738 728 678 818 710 727 701 888 849 729 843
> 841 744 764 814 872 766 808 712 854 695 810 881 731 862
> 878 713 816 848 720 777 700 819 775 769 742 791 866 694
> 725 796 770 773 879 685 787 809 681 690 856 799 838 768
> 730 803 793 677 686 683 807 743 853 702 870 735 715 820
> 835 836 785 733 776 706 786 781 788 709 790 868 689 882
> 869 840 755
</code></pre>
<p>The only thing that comes to my mind is that the for loop has a maximum number of iterations, but I could not find anything about that.</p>
<p>What is the explanation of this behavior? And is there a better solution to get just the directories with "number names"? (I can't determine the number of directories in advance.)</p>
| 0 |
2016-09-21T12:01:58Z
| 39,616,149 |
<p>Well, <a href="http://stackoverflow.com/questions/1207406/remove-items-from-a-list-while-iterating-in-python">don't change a list while iterating over it</a>.</p>
<p>Use list comprehension:</p>
<pre><code>listdir = [dir for dir in listdir if dir.isdigit()]
</code></pre>
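<p>If the listing can also contain files (not only directories), a slightly stricter sketch keeps only entries that are both digit-named and actual directories:</p>
<pre><code>import os

listdir = [d for d in os.listdir(rootdir)
           if d.isdigit() and os.path.isdir(os.path.join(rootdir, d))]
</code></pre>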
| 1 |
2016-09-21T12:07:02Z
|
[
"python",
"for-loop"
] |
Is there any difference between closing a cursor or a connection in SQLite?
| 39,616,077 |
<p>I have always been using the command <code>cur.close()</code> once I'm done with the database:</p>
<pre><code>import sqlite3
conn = sqlite3.connect('mydb')
cur = conn.cursor()
# whatever actions in the database
cur.close()
</code></pre>
<p>However, I just saw in some cases the following approach:</p>
<pre><code>import sqlite3
conn = sqlite3.connect('mydb')
cur = conn.cursor()
# whatever actions in the database
cur.close()
conn.close()
</code></pre>
<p>And in the official <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">documentation</a> sometimes the cursor is closed, sometimes the connection and sometimes both.</p>
<p>My questions are: </p>
<ol>
<li>Is there any difference between <code>cur.close()</code> and <code>conn.close()</code>?</li>
<li>Is it enough closing one of both once I am done? If so, which one is preferable?</li>
</ol>
| 2 |
2016-09-21T12:03:35Z
| 39,616,258 |
<p><em>[On closing cursors]</em></p>
<p>If you close the cursor, you are simply flagging it as invalid to process further requests ("I am done with this"). </p>
<p>So, at the end of a function/transaction, you should keep closing the cursor, giving a hint to the database that the transaction is finished.</p>
<p>A good pattern is to make cursors short-lived: you get one from the connection object, do what you need, and then discard it. So closing makes sense, and you should keep calling <code>cursor.close()</code> at the end of the code section that makes use of it.</p>
<p>I <strong>believe</strong> (couldn't find any references) that if you just let the cursor fall out of scope (end of function, or simply 'del cursor') you should get the same behavior. But for the sake of good coding practices you should explicitly close it.</p>
<p>[Connection Objects]</p>
<p>When you are actually done <em>with the database</em>, you should close your <strong>connection</strong> to it. That means <code>connection.close()</code>.</p>
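<p>Put together, a minimal sketch of that pattern (cursor closed as soon as the unit of work is done, connection closed only when you are finished with the database):</p>
<pre><code>import sqlite3

conn = sqlite3.connect('mydb')
try:
    cur = conn.cursor()
    # whatever actions in the database
    cur.execute('SELECT 1')
    conn.commit()
    cur.close()    # done with this piece of work
finally:
    conn.close()   # done with the database itself
</code></pre>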
| 1 |
2016-09-21T12:13:29Z
|
[
"python",
"sqlite",
"sqlite3",
"cursor"
] |
UPDATE MySQL with 'onclick' button command in Django Project
| 39,616,301 |
<p>I am extremely new to django and web dev on the whole, so please bear with me.</p>
<p>I have created a simple site with a MySQL backend for my local football team and want to create a page to update the score of a game (stored on a table) simply by clicking a button (increment the current score by + 1).
I have no doubt in my mind that this is super simple, but after WEEKS of trawling through similar posts, nothing seemed to work for me (nothing that I could understand at least).</p>
<p>I have a template that creates a button that contains the ID for the record that needs to be updated:</p>
<pre><code><a href="{% url livegame_update %}?value={{stat.id}}?updatetype=goals" class="btn btn-success" role="button" onclick="alert({{stat.id}})" id={{stat.id}}>Goal</a>
</code></pre>
<p>This points to this URL:</p>
<pre><code>url(r'^livegame_update', 'steelers_fc.players.views.livegame_update', name='livegame_update'),
</code></pre>
<p>Which in turn executes this function the Views;</p>
<pre><code>def livegame_update(request):
StatID = request.GET.get('value','1')
StatType = request.GET.get('updatetype','1')
SQL = "update players_statistics set " + StatType + " = " + StatType + " + 1 where id = " + StatID + ";"
#stat_edit = statistics.objects.get(id=StatID)
#stat_edit.goals = stat_edit.goals + 1
#stat_edit.save() # save object
cursor = connection.cursor ()
cursor.execute (SQL)
connection.commit()
cursor.close ()
connection.close ()
return render_to_response
</code></pre>
<p>Ultimately, the SQL above would end up being:
"<em>update players_statistics set goals = goals + 1 where id = 99</em>"</p>
<p>I have tried several things:</p>
<ul>
<li>Execute RAW SQL (as per this example)</li>
<li>Update Django Model (as per the commented out section)</li>
<li>Executed an external .py script (which works perfectly well from a Bash command)</li>
<li>Looked at using 'forms'..but never really understood.</li>
</ul>
<p>Ultimately, I feel the issue is around the URL and the passing of parameters between the request and the view?!?!</p>
<p>Any help would be greatly appreciated and some simple examples would be amazing.</p>
<p>Thanks</p>
| -3 |
2016-09-21T12:15:31Z
| 39,616,893 |
<p>Well... First of all I do not understand why you are creating cursors and stuff.</p>
<p>I'll try to help you out here in the easiest way I can imagine.</p>
<p><strong>STEP 1 (if you do not have model)</strong></p>
<p>Create model</p>
<pre><code>class Match(models.Model):
    goals = models.IntegerField(default=0)
    # Add as many fields as you need, e.g. Teams / Scorer / and other stuff

    def __str__(self):
        return str(self.goals)
</code></pre>
<p><strong>STEP 2 - urls.py / you have, but some parts are missing</strong></p>
<pre><code>url(r'^livegame_update/(?P<match_id>[\d]+)/$', 'steelers_fc.players.views.livegame_update', name='livegame_update'),
</code></pre>
<p>Explanation: as you can see, your URL now takes <code>match_id</code> as a parameter passed to your view. The <strong>[\d]+</strong> part says it has to consist of digits; the plus means one or more of them.</p>
<p><strong>STEP 3 - Your view (update)</strong></p>
<pre><code>def livegame_update(request, match_id):
try:
match = Match.objects.get(id=match_id)
    except Match.DoesNotExist:
        return render(request, 'your_template.html', {'error': 'match with given id does not exist'})
match.goals += 1
match.save()
return render(request, 'your_template.html', {'msg': 'incremented score'})
</code></pre>
<p>So, simply, what this does is: find the Match object with the id given in the parameter (<strong>match_id</strong>). If it is not found, return to your template and pass a dict with an error. If it exists, increment your <strong>goals</strong> field by one, then save your object. Finally, return to your template with the buttons and the rest.</p>
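<p>For completeness, the button in your template would then point at this view by putting the id in the URL path rather than in the query string; a sketch (whether the url tag needs quotes around the name depends on your Django version, and <code>stat.id</code> is kept from your original template):</p>
<pre><code><a href="{% url 'livegame_update' stat.id %}" class="btn btn-success" role="button">Goal</a>
</code></pre>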
<p>Is it clear? Do you need me to explain anything in the above code?</p>
| 0 |
2016-09-21T12:43:43Z
|
[
"javascript",
"python",
"html",
"mysql",
"django"
] |
Open/edit utf8 fits header in python (pyfits)
| 39,616,304 |
<p>I have to deal with some fits files which contain utf8 text in their header. This means basically all functions of the pyfits package do not work. Also <strong>.decode</strong> does not work as the fits header is a class not a list. Does someone know how to decode the header so I can process the data? The actual content is not so important so something like ignoring the letters is fine. My current code looks like this:</p>
<pre><code>hdulist = fits.open('Jupiter.FIT')
hdu = hdulist[0].header
hdu.decode('ascii', errors='ignore')
</code></pre>
<p>And I get:
AttributeError: 'Header' object has no attribute 'decode'</p>
<p>Functions like:</p>
<pre><code>print (hdu)
</code></pre>
<p>return:</p>
<pre><code>ValueError: FITS header values must contain standard printable ASCII characters; "'Uni G\xf6ttingen, Institut f\xfcr Astrophysik'" contains characters/bytes that do not represent printable characters in ASCII.
</code></pre>
<p>I thought about writing something into the entry so I don't need to care about it. However, I can't even retrieve which entry contains the bad characters, and I would like to have a batch solution as I have some hundred files.</p>
| 0 |
2016-09-21T12:15:34Z
| 39,616,496 |
<p>Looks like <code>PyFITS</code> just doesn't support it (yet?)</p>
<p>From <a href="https://github.com/astropy/astropy/issues/3497" rel="nofollow">https://github.com/astropy/astropy/issues/3497</a>:</p>
<blockquote>
<p>FITS predates unicode and has never been updated to support anything beyond the ASCII printable characters for data. It is impossible to encode non-ASCII characters in FITS headers.</p>
</blockquote>
| 0 |
2016-09-21T12:25:21Z
|
[
"python",
"decode",
"fits",
"pyfits"
] |
Open/edit utf8 fits header in python (pyfits)
| 39,616,304 |
<p>I have to deal with some fits files which contain utf8 text in their header. This means basically all functions of the pyfits package do not work. Also <strong>.decode</strong> does not work as the fits header is a class not a list. Does someone know how to decode the header so I can process the data? The actual content is not so important so something like ignoring the letters is fine. My current code looks like this:</p>
<pre><code>hdulist = fits.open('Jupiter.FIT')
hdu = hdulist[0].header
hdu.decode('ascii', errors='ignore')
</code></pre>
<p>And I get:
AttributeError: 'Header' object has no attribute 'decode'</p>
<p>Functions like:</p>
<pre><code>print (hdu)
</code></pre>
<p>return:</p>
<pre><code>ValueError: FITS header values must contain standard printable ASCII characters; "'Uni G\xf6ttingen, Institut f\xfcr Astrophysik'" contains characters/bytes that do not represent printable characters in ASCII.
</code></pre>
<p>I thought about writing something into the entry so I don't need to care about it. However, I can't even retrieve which entry contains the bad characters, and I would like to have a batch solution as I have some hundred files.</p>
| 0 |
2016-09-21T12:15:34Z
| 39,637,762 |
<p>As anatoly techtonik <a href="http://stackoverflow.com/a/39616496/982257">pointed out</a> non-ASCII characters in FITS headers are outright invalid, and make invalid FITS files. That said, it would be nice if <code>astropy.io.fits</code> could at least read the invalid entries. Support for that is currently broken and needs a champion to fix it, but nobody has because it's an infrequent enough problem, and most people encounter it in one or two files, fix those files, and move on. Would love for someone to tackle the problem though.</p>
<p>In the meantime, since you know exactly what string this file is hiccupping on, I would just open the file in raw binary mode and replace the string. If the FITS file is very large, you could read it a block at a time and do the replacement on those blocks. FITS files (especially headers) are written in 2880 byte blocks, so you know that anywhere that string appears will be aligned to such a block, and you don't have to do any parsing of the header format beyond that. Just make sure that the string you replace it with is no longer than the original string, and that if it's shorter it is right-padded with spaces, because FITS headers are a fixed-width format and anything that changes the length of a header will corrupt the entire file. For this particular case then, I would try something like this:</p>
<pre><code>bad_str = 'Uni Göttingen, Institut für Astrophysik'.encode('latin1')
good_str = 'Uni Gottingen, Institut fur Astrophysik'.encode('ascii')
# In this case I already know the replacement is the same length so I'm not worried about it
# A more general solution would require fixing the header parser to deal with non-ASCII bytes
# in some consistent manner; I'm also looking for the full string instead of the individual
# characters so that I don't corrupt binary data in the non-header blocks
in_filename = 'Jupiter.FIT'
out_filename = 'Jupiter-fixed.fits'
with open(in_filename, 'rb') as inf, open(out_filename, 'wb') as outf:
while True:
block = inf.read(2880)
if not block:
break
block = block.replace(bad_str, good_str)
outf.write(block)
</code></pre>
<p>This is ugly, and for a very large file might be slow, but it's a start. I can think of better solutions, but that are harder to understand and probably not worth taking the time on if you just have a handful of files to fix.</p>
<p>Once that's done, please give the originator of the file a stern talking to--they should not be publishing corrupt FITS files.</p>
| 0 |
2016-09-22T11:28:03Z
|
[
"python",
"decode",
"fits",
"pyfits"
] |
Alexa lambda_handler not creating event session
| 39,616,470 |
<p>I am having a problem getting my python lambda function to work. I get an invalid key for the event array that should be created when the skill is invoked. The error I get is:</p>
<pre><code>{
"stackTrace": [
[
"/var/task/lambda_function.py",
163,
"lambda_handler",
"app_id = event['session']['application']['applicationId']"
]
],
"errorType": "KeyError",
"errorMessage": "'session'"
}
</code></pre>
<p>and here is my code</p>
<pre><code>def lambda_handler(event, context):
"""Lambda function entrypoint."""
# print("event.session.application.applicationId={}".format(
# event['session']['application']['applicationId']))
# Prevent unwanted access to this Lambda function.
app_id = event['session']['application']['applicationId']
if app_id != "amzn1.ask.skill.yyyyyyyy-xxx":
raise ValueError("Invalid Application ID: {}".format(app_id))
request = event['request']
if event['session']['new']:
on_session_started(
{'requestId': request['requestId']}, event['session'])
func_map = {
"LaunchRequest": on_launch,
"IntentRequest": on_intent,
"SessionEndedRequest": on_session_ended,
}
return func_map[request['type']](event['request'], event['session'])
</code></pre>
| 1 |
2016-09-21T12:24:01Z
| 39,617,854 |
<p>The problem was I had configured the wrong test in the Lambda Function dashboard. When I changed it to an Alexa Start Session, the event object got created. :)</p>
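<p>For reference, the "Alexa Start Session" test event carries exactly the keys the handler reads; a stripped-down sketch of its shape as a Python dict (real console-generated events contain more fields, and the ids below are placeholders):</p>
<pre><code>event = {
    "session": {
        "new": True,
        "application": {"applicationId": "amzn1.ask.skill.yyyyyyyy-xxx"},
    },
    "request": {
        "type": "LaunchRequest",
        "requestId": "amzn1.echo-api.request.placeholder",
    },
}
</code></pre>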
| 1 |
2016-09-21T13:23:32Z
|
[
"python",
"aws-lambda",
"alexa",
"alexa-skills-kit"
] |
Alexa lambda_handler not creating event session
| 39,616,470 |
<p>I am having a problem getting my python lambda function to work. I get an invalid key for the event array that should be created when the skill is invoked. The error I get is:</p>
<pre><code>{
"stackTrace": [
[
"/var/task/lambda_function.py",
163,
"lambda_handler",
"app_id = event['session']['application']['applicationId']"
]
],
"errorType": "KeyError",
"errorMessage": "'session'"
}
</code></pre>
<p>and here is my code</p>
<pre><code>def lambda_handler(event, context):
"""Lambda function entrypoint."""
# print("event.session.application.applicationId={}".format(
# event['session']['application']['applicationId']))
# Prevent unwanted access to this Lambda function.
app_id = event['session']['application']['applicationId']
if app_id != "amzn1.ask.skill.yyyyyyyy-xxx":
raise ValueError("Invalid Application ID: {}".format(app_id))
request = event['request']
if event['session']['new']:
on_session_started(
{'requestId': request['requestId']}, event['session'])
func_map = {
"LaunchRequest": on_launch,
"IntentRequest": on_intent,
"SessionEndedRequest": on_session_ended,
}
return func_map[request['type']](event['request'], event['session'])
</code></pre>
| 1 |
2016-09-21T12:24:01Z
| 39,797,358 |
<p>We just started a project <a href="https://github.com/bespoken/bstpy" rel="nofollow">bstpy</a> to expose a Python Lambda as an HTTP service. You may find it useful for testing. You can throw JSON payloads at it with curl or Postman. If you try it with the other <a href="https://github.com/bespoken/bst" rel="nofollow">Bespoken Tools</a> you can have a very nice development environment.</p>
| 0 |
2016-09-30T17:55:46Z
|
[
"python",
"aws-lambda",
"alexa",
"alexa-skills-kit"
] |
Dictionaries and unicode
| 39,616,578 |
<p>I have recently started learning Python and have got stuck in a small project I was trying.
I have an array which contains data for my project, I wanted to link this by using the code.</p>
<pre><code> >>> keys = ['a', 'b', 'c']
>>> values = [1, 2, 3]
>>> dictionary = dict(zip(keys, values))
</code></pre>
<p>But for my project I need Japanese characters in my values array. Is there a method where I can have my Japanese characters in the Array? Also if I were to type the words in using unicode, how would I be able to display the words equivalent to the unicode in the dictionary
For example:</p>
<pre><code> print(u'\4096')
</code></pre>
<p>Would work but if I were to print the entire dictionary as </p>
<pre><code> print (dictionary)
</code></pre>
<p>It wouldn't display my Japanese Characters. How would I be able to get around this problem?</p>
<p>Extra: </p>
<p>Another Problem is that I have my first array instead as a list as it was required to store information put together, is there an alternative solution?</p>
<pre><code> dictionary = dict(zip(file_content,japanese))
TypeError: unhashable type: 'list'
</code></pre>
| 0 |
2016-09-21T12:29:04Z
| 39,616,916 |
<p>There are two major flavours of Python - Python2 and Python3, and one of the differences between them is how they treat Unicode. As a rule-of-thumb, you should be using Python3, because it is a much better language and getting better over time. Most of the bigger libraries support it.</p>
<p>In Python3, this should just work:</p>
<pre><code>keys = ['a', 'b', 'c']
values = ['á', 'b', 'ç']
dictionary = dict(zip(keys, values))
print(dictionary) # {'a': 'á', 'b': 'b', 'c': 'ç'}
</code></pre>
<p>In Python2, the same thing should work, but you should be using "Unicode" strings, something like this:</p>
<pre><code>keys = ['a', 'b', 'c']
values = [u'á', u'b', u'ç']
dictionary = dict(zip(keys, values))
print(dictionary) # {'a': u'\xe1', 'c': u'\xe7', 'b': u'b'}
</code></pre>
<p>The strings properly store the Unicode contents you want; they are just not printed nicely, due to the naivety of the <code>print</code> function. But if you print a key, for example, it will figure it out and do the right thing.</p>
<p>You will always have to take care of properly encoding the strings when you're writing output to files or other types of streams.</p>
<p><a href="http://www.diveintopython3.net/strings.html" rel="nofollow">This</a> is a pretty good introduction to the topic of Python and strings.</p>
| 0 |
2016-09-21T12:44:14Z
|
[
"python",
"dictionary",
"unicode"
] |
Converting to/getting original list object from string representation of original list object in python
| 39,616,631 |
<p>I want to convert string representation of arbitrary list back to the original-like list object as explained in the code below:</p>
<pre><code>list1 = [1,2,3,4]
list1str = str(list1)
str2list1 = list1str[1:-1].split(",") #stripping off square brackets from [1,2,3,4] and then splitting
print(list1) #[1,2,3,4]
print(str(list1)) #[1,2,3,4]
print(str2list1) #['1', ' 2', ' 3', ' 4']
#Notice that the elements are of string types.
#Also there is a prefix empty space in ' 2' (and also in ' 3' and ' 4')
# --> I will like str2list1 to be ['1', ' 2', ' 3', ' 4']
</code></pre>
<p>Also if we are dealing with nested lists:</p>
<pre><code>list2 = [1,2,['a','b'],4]
list2str = str(list2)
str2list2 = list2str[1:-1].split(",")
print(list2) #[1, 2, ['a', 'b'], 4]
print(str(list2)) #[1, 2, ['a', 'b'], 4]
print(str2list2) #['1', ' 2', " ['a'", " 'b']", ' 4']
# --> I will like str2list2 to be [1, 2, ['a', 'b'], 4]
</code></pre>
<p>How can I get exact the original list from its string representation?</p>
| 1 |
2016-09-21T12:31:45Z
| 39,616,906 |
<p>Easiest way, use <em>eval()</em>:</p>
<pre><code>lst = "[1, 2, 3, 4, ['a', 'b'], 5, 6]"
newlst = eval(lst)
</code></pre>
<p>so the newlst will be the exact python list you want</p>
<pre><code>print (newlst)
# [1, 2, 3, 4, ['a', 'b'], 5, 6]
</code></pre>
<p>There's one more way to do what you want with <em>exec()</em>:</p>
<pre><code>exec("newlst = {}".format(lst))
</code></pre>
<p>exec will actually treat the string as a python command/syntax.</p>
| 2 |
2016-09-21T12:43:58Z
|
[
"python"
] |
Converting to/getting original list object from string representation of original list object in python
| 39,616,631 |
<p>I want to convert string representation of arbitrary list back to the original-like list object as explained in the code below:</p>
<pre><code>list1 = [1,2,3,4]
list1str = str(list1)
str2list1 = list1str[1:-1].split(",") #stripping off square brackets from [1,2,3,4] and then splitting
print(list1) #[1,2,3,4]
print(str(list1)) #[1,2,3,4]
print(str2list1) #['1', ' 2', ' 3', ' 4']
#Notice that the elements are of string types.
#Also there is a prefix empty space in ' 2' (and also in ' 3' and ' 4')
# --> I will like str2list1 to be ['1', ' 2', ' 3', ' 4']
</code></pre>
<p>Also if we are dealing with nested lists:</p>
<pre><code>list2 = [1,2,['a','b'],4]
list2str = str(list2)
str2list2 = list2str[1:-1].split(",")
print(list2) #[1, 2, ['a', 'b'], 4]
print(str(list2)) #[1, 2, ['a', 'b'], 4]
print(str2list2) #['1', ' 2', " ['a'", " 'b']", ' 4']
# --> I will like str2list2 to be [1, 2, ['a', 'b'], 4]
</code></pre>
<p>How can I get exact the original list from its string representation?</p>
| 1 |
2016-09-21T12:31:45Z
| 39,617,082 |
<p>You can try:</p>
<pre><code>>>> import ast
>>> ast.literal_eval("[1,2,['a','b'],4]")
[1, 2, ['a', 'b'], 4]
</code></pre>
| 3 |
2016-09-21T12:50:49Z
|
[
"python"
] |
pack and unpack at the right format in python
| 39,616,638 |
<p>I'm looking to unpack from a buffer a string and its length.</p>
<blockquote>
<p>For example to obtain (4, 'Gégé') from this buffer :
b'\x00\x04G\xE9g\xe9'</p>
</blockquote>
<p>Does someone know how to do this?</p>
| 0 |
2016-09-21T12:32:15Z
| 39,617,185 |
<p>The length data looks like a big-endian unsigned 16 bit integer, and the string data looks like it's using the Latin1 encoding. If that's correct, you can extract it like this:</p>
<pre><code>from struct import unpack
def extract(buff):
return unpack(b'>H', buff[:2])[0], buff[2:].decode('latin1')
buff = b'\x00\x04G\xE9g\xe9'
print(extract(buff))
</code></pre>
<p><strong>output</strong></p>
<pre><code>(4, 'Gégé')
</code></pre>
| 1 |
2016-09-21T12:55:20Z
|
[
"python",
"python-3.x",
"pack",
"unpack"
] |
pass python lists to methods
| 39,616,639 |
<p>I want to read a file, create a list from one of its columns with the split() method, and pass this list on to another method. Can someone explain what is the most Pythonic way to achieve that?</p>
<pre><code>def t(fname):
k = []
with open(fname, 'rU') as tx:
for line in tx:
lin = line.split()
k.append(lin[1])
res = anno(k)
for id in res.items():
if i > 0.05:
print(i)
</code></pre>
<p>I want to pass the elements of 'k' as one list to the anno() method. But this way, I get a number of lists instead of the single one that is required.</p>
| 0 |
2016-09-21T12:32:15Z
| 39,616,757 |
<p>When you want to create a new list, list comprehensions are the preferred way:</p>
<pre><code>def t(fname):
with open(fname, 'rU') as tx:
k = [(line.split())[1] for line in tx]
res = anno(k)
for i in res.items():
if i > 0.05:
print(i)
</code></pre>
| 0 |
2016-09-21T12:36:41Z
|
[
"python",
"list",
"methods"
] |
pass python lists to methods
| 39,616,639 |
<p>I want to read a file, create a list from one of its columns with the split() method, and pass this list on to another method. Can someone explain what is the most Pythonic way to achieve that?</p>
<pre><code>def t(fname):
k = []
with open(fname, 'rU') as tx:
for line in tx:
lin = line.split()
k.append(lin[1])
res = anno(k)
for id in res.items():
if i > 0.05:
print(i)
</code></pre>
<p>I want to pass the elements of 'k' as one list to the anno() method. But this way, I get a number of lists instead of the single one that is required.</p>
| 0 |
2016-09-21T12:32:15Z
| 39,616,943 |
<p>Instead of appending to that list one item at a time, why don't you just build it with a single statement like <code>k = [(line.split())[1] for line in tx]</code>?</p>
<p>And instead of using <code>with open(file) as:</code> I have used <code>tx = open(file)</code>, so you can use it whenever you need it and close it with <code>tx.close()</code>; it eliminates that extra level of indentation.</p>
<pre><code>def t(fname):
k = []
tx = open(fname, 'rU')
k = [(line.split())[1] for line in tx]
tx.close()
res = anno(k)
for i in res.items():
if i > 0.05:print(i)
</code></pre>
| 1 |
2016-09-21T12:45:11Z
|
[
"python",
"list",
"methods"
] |
pass python lists to methods
| 39,616,639 |
<p>I want to read a file, create a list from one of its columns with the split() method, and pass this list on to another method. Can someone explain what is the most Pythonic way to achieve that?</p>
<pre><code>def t(fname):
k = []
with open(fname, 'rU') as tx:
for line in tx:
lin = line.split()
k.append(lin[1])
res = anno(k)
for id in res.items():
if i > 0.05:
print(i)
</code></pre>
<p>I want to pass the elements of 'k' as one list to the anno() method. But this way, I get a number of lists instead of the single one that is required.</p>
| 0 |
2016-09-21T12:32:15Z
| 39,617,118 |
<p>I think you just made a mistake with the nesting. You have to call <code>anno()</code> after you've built the list, outside the for loop.</p>
<pre><code>def t(fname):
k = []
    for line in open(fname):
k.append(line.split()[1])
res = anno(k)
</code></pre>
| 0 |
2016-09-21T12:52:09Z
|
[
"python",
"list",
"methods"
] |
NameError: global name 'query' is not defined
| 39,616,647 |
<p>I have a small Django project and I'm trying to pass a variable from my views.py into tasks.py and run a task using the variable, but I am getting a "name is not defined" error. I've tried many solutions I've seen in other questions, but I cannot seem to get it to work.</p>
<p>here is my views.py</p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.shortcuts import render, loader
from django.template import Context
from django.http import HttpResponse
import json
import requests
from tasks import rti
def index(request):
return render(request, 'bus/index.html')
def search(request):
query = request.GET.get('q')
t = loader.get_template('bus/search.html')
c = Context({ 'query': query,})
rti()
return HttpResponse(t.render(c))
</code></pre>
<p>here is my tasks.py</p>
<pre><code>from background_task import background
import time
@background(schedule=1)
def rti():
timeout = time.time() + 60 * 15
while time.time() < timeout:
from views import search
dblink = '*apiurl*' + str(query) + '&format=json'
savelink = 'bus/static/bus/stop' + str(query)+ '.json'
r = requests.get(dblink)
jsondata = json.loads(r.text)
with open(savelink, 'w') as f:
json.dump(jsondata, f)
</code></pre>
<p>here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Users/dylankilkenny/dev/python/test2/lib/python2.7/site-packages/background_task/tasks.py", line 49, in bg_runner
func(*args, **kwargs)
File "/Users/dylankilkenny/dev/python/test2/mysite/bus/tasks.py", line 9, in rti
from views import search
NameError: global name 'query' is not defined
</code></pre>
| 0 |
2016-09-21T12:32:27Z
| 39,616,731 |
<p>You have to change the definition of your method to <code>def rti(query):</code> and call it in the view as <code>rti(query)</code>, because your background task doesn't know anything about your <code>query</code> variable otherwise.</p>
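<p>In other words, a minimal sketch of that change (the rest of the function body stays as in the question):</p>
<pre><code># tasks.py
@background(schedule=1)
def rti(query):
    ...

# views.py, inside search()
rti(query)
</code></pre>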
| 0 |
2016-09-21T12:35:35Z
|
[
"python",
"django"
] |
NameError: global name 'query' is not defined
| 39,616,647 |
<p>I have a small Django project and I'm trying to pass a variable from my views.py into tasks.py and run a task using the variable, but I am getting a "name is not defined" error. I've tried many solutions I've seen in other questions, but I cannot seem to get it to work.</p>
<p>here is my views.py</p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.shortcuts import render, loader
from django.template import Context
from django.http import HttpResponse
import json
import requests
from tasks import rti
def index(request):
return render(request, 'bus/index.html')
def search(request):
query = request.GET.get('q')
t = loader.get_template('bus/search.html')
c = Context({ 'query': query,})
rti()
return HttpResponse(t.render(c))
</code></pre>
<p>here is my tasks.py</p>
<pre><code>from background_task import background
import time
@background(schedule=1)
def rti():
timeout = time.time() + 60 * 15
while time.time() < timeout:
from views import search
dblink = '*apiurl*' + str(query) + '&format=json'
savelink = 'bus/static/bus/stop' + str(query)+ '.json'
r = requests.get(dblink)
jsondata = json.loads(r.text)
with open(savelink, 'w') as f:
json.dump(jsondata, f)
</code></pre>
<p>here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Users/dylankilkenny/dev/python/test2/lib/python2.7/site-packages/background_task/tasks.py", line 49, in bg_runner
func(*args, **kwargs)
File "/Users/dylankilkenny/dev/python/test2/mysite/bus/tasks.py", line 9, in rti
from views import search
NameError: global name 'query' is not defined
</code></pre>
| 0 |
2016-09-21T12:32:27Z
| 39,616,739 |
<p>You need to modify your task so that it takes the query as an argument.</p>
<pre><code>@background(schedule=1)
def rti(query):
...
</code></pre>
<p>Then pass the query when you call the task in your view</p>
<pre><code>rti(query)
</code></pre>
| 0 |
2016-09-21T12:35:43Z
|
[
"python",
"django"
] |
NameError: global name 'query' is not defined
| 39,616,647 |
<p>I have a small Django project and I'm trying to pass a variable from my views.py into tasks.py and run a task using the variable, but I am getting a "name is not defined" error. I've tried many solutions I've seen in other questions, but I cannot seem to get it to work.</p>
<p>here is my views.py</p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.shortcuts import render, loader
from django.template import Context
from django.http import HttpResponse
import json
import requests
from tasks import rti
def index(request):
return render(request, 'bus/index.html')
def search(request):
query = request.GET.get('q')
t = loader.get_template('bus/search.html')
c = Context({ 'query': query,})
rti()
return HttpResponse(t.render(c))
</code></pre>
<p>here is my tasks.py</p>
<pre><code>from background_task import background
import time
@background(schedule=1)
def rti():
timeout = time.time() + 60 * 15
while time.time() < timeout:
from views import search
dblink = '*apiurl*' + str(query) + '&format=json'
savelink = 'bus/static/bus/stop' + str(query)+ '.json'
r = requests.get(dblink)
jsondata = json.loads(r.text)
with open(savelink, 'w') as f:
json.dump(jsondata, f)
</code></pre>
<p>here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Users/dylankilkenny/dev/python/test2/lib/python2.7/site-packages/background_task/tasks.py", line 49, in bg_runner
func(*args, **kwargs)
File "/Users/dylankilkenny/dev/python/test2/mysite/bus/tasks.py", line 9, in rti
from views import search
NameError: global name 'query' is not defined
</code></pre>
| 0 |
2016-09-21T12:32:27Z
| 39,616,907 |
<p>You have not passed any argument to the method <code>rti()</code> that you call inside <code>views.py</code>. To fix that, the method <code>rti()</code> defined inside <code>tasks.py</code> should take an argument such as <code>query</code>. After that you will be able to use <code>query</code> inside <code>rti()</code>.</p>
<p>Please follow these:</p>
<p>tasks.py:</p>
<pre><code>@background(schedule=1)
def rti(query):
{...your code}
</code></pre>
<p>views.py:</p>
<pre><code>def search(request):
query = request.GET.get('q')
t = loader.get_template('bus/search.html')
c = Context({ 'query': query,})
rti(query) #calling rti from tasks.py passing the argument
return HttpResponse(t.render(c))
</code></pre>
| 0 |
2016-09-21T12:43:59Z
|
[
"python",
"django"
] |
How can I tell SQLAlchemy to use a different identity rule for Session.merge (instead of the PK)?
| 39,616,663 |
<p>I have a legacy DB which was blindly created with auto-increment IDs even though there's a perfectly valid natural key in the table.</p>
<p>This ends up with code littered with code along the lines:</p>
<pre><code>Fetch row with natural key 'x'
if exists:
update row with NK 'x'
else:
insert row with NK 'x'
</code></pre>
<p>Essentially an upsert.</p>
<p>This use-case (upsert) is covered by <a href="http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.merge" rel="nofollow">Session.merge()</a> from SQLAlchemy. But SA will only look at the primary key of the table to reconcile whether it has to do an insert or update. In the existing DB, the PK does however - contrary to what it <em>should</em> do - not represent the true identity of the row. So the same identity can appear with multiple auto-increment IDs. There are some other business rules in place to ensure uniqueness. But the ID <code>1</code> of today can be ID <code>3246</code> tomorrow!</p>
<p>There is currently no good way to modify the DB in a sensible manner as too many legacy applications are dependent on the structure as it is.</p>
<p>For the sake of a tangible example, assume we have network devices in the table, and take their hostname as natural key. The <em>current</em> DB would look something like this:</p>
<pre><code>CREATE TABLE device (
id SERIAL PRIMARY KEY,
hostname TEXT UNIQUE,
some_other_column TEXT
)
</code></pre>
<p>The corresponding SA model:</p>
<pre><code>class Device(Base):
id = Column(Integer, primary_key=True)
hostname = Column(String(256))
some_other_column = Column(String(20))
</code></pre>
<p>I would like to be able to do the following:</p>
<pre><code>mydevice = Device(hostname='hello-world', some_other_column='foo')
merged_device = session.merge(mydevice)
session.commit()
</code></pre>
<p>In this example, I would like SA to do an "insert or update". But with the current model, this would actually result in an error (due to the unique hostname constraint).</p>
<p>I <em>could</em> specify the <code>hostname</code> column as primary key in the SA model (and leave the PK in the DB as-is). But that looks a bit hacky. Is there not a more explicit and understandable way to tell SQLAlchemy that it should use "hostname" as identity? And if yes, how?</p>
| 0 |
2016-09-21T12:32:57Z
| 39,619,273 |
<p>In situations like this, I find it best to <em>lie</em> to SQLAlchemy: tell it that the natural key is the primary key.</p>
<pre><code>class Device(Base):
hostname = Column(String(256), primary_key=True)
some_other_column = Column(String(20))
</code></pre>
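<p>With the natural key declared as the primary key, the upsert from the question becomes a plain merge; a sketch (same snippet as in the question, now reconciled on <code>hostname</code>):</p>
<pre><code>mydevice = Device(hostname='hello-world', some_other_column='foo')
merged_device = session.merge(mydevice)  # insert-or-update keyed on hostname
session.commit()
</code></pre>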
| 0 |
2016-09-21T14:20:38Z
|
[
"python",
"sqlalchemy"
] |
What does "1B63" mean in bash?
| 39,616,698 |
<p>When I print the string value of <code>0x1b63</code> in bash, the screen clears (exactly like the <code>tput reset</code> result):
<a href="http://i.stack.imgur.com/vaRX1.png" rel="nofollow"><img src="http://i.stack.imgur.com/vaRX1.png" alt="enter image description here"></a></p>
<p>After pressing <code>Enter</code> button we have:</p>
<p><a href="http://i.stack.imgur.com/f0zy6.png" rel="nofollow"><img src="http://i.stack.imgur.com/f0zy6.png" alt="enter image description here"></a></p>
<p>What is going on?</p>
| 0 |
2016-09-21T12:34:22Z
| 39,617,222 |
<p>These are ANSI escape sequences. There's a list of them on <a href="https://en.wikipedia.org/wiki/ANSI_escape_code" rel="nofollow">Wikipedia</a>.</p>
<p><code>\x1b</code> means <code>ESC</code>, and
<code>\x63</code> is a lowercase <code>c</code>.</p>
<p>On that page <code>ESC</code> <code>c</code> is shown as </p>
<blockquote>
<p>RIS - Reset to Initial State: Resets the device to its original state. This may include (if applicable): reset graphic rendition, clear tabulation stops, reset to default font, and more.</p>
</blockquote>
<p>so the terminal will clear. This isn't related to bash or Python, but to the terminal that you're running in.</p>
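<p>You can reproduce it directly; for example, from Python this writes ESC (0x1b) followed by <code>c</code> (0x63) to the terminal and resets it:</p>
<pre><code>print('\x1b\x63', end='')  # ESC c -- the RIS "reset to initial state" sequence
</code></pre>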
| 2 |
2016-09-21T12:56:37Z
|
[
"python",
"xterm",
"ansi-escape"
] |
How to configure celery-redis in django project on microsoft azure?
| 39,616,701 |
<p>I have this Django locator project deployed in Azure. My Redis cache host name (DNS) is mycompany.azure.microsoft.net. I created it in Azure, but I am not sure where I can find the password for the Redis server. I have got this as my configuration in my settings.py. I am using Redis as the broker for the Celery setup in my project.</p>
<pre><code>BROKER_URL = 'redis://:passwordAzureAccessKey=@mycompany.redis.cache.windows.net:6380/0'
</code></pre>
<p>I could not connect. Is there anyplace different I need to put the password or username to connect to the above server? Also, where can I find the password in Azure? Or is it due to the fact that I am trying to contact the Azure Redis from localhost?</p>
| 0 |
2016-09-21T12:34:27Z
| 39,762,239 |
<p>You can find your Redis service's keys in the Azure portal: click <strong>Settings</strong>=><strong>Access keys</strong>, and you can select either the primary or the secondary key as the password in your Redis connection string.<br>
<a href="http://i.stack.imgur.com/UKST2.png" rel="nofollow"><img src="http://i.stack.imgur.com/UKST2.png" alt="enter image description here"></a></p>
<p>Additionally, you can try to enable the non-SSL endpoint of your Redis service, as mentioned at <a href="https://azure.microsoft.com/en-us/documentation/articles/cache-python-get-started/#enable-the-non-ssl-endpoint" rel="nofollow">https://azure.microsoft.com/en-us/documentation/articles/cache-python-get-started/#enable-the-non-ssl-endpoint</a>. </p>
<p>It seems that you are using <code>celery</code>; you can use the celery CLI to test your Redis service, e.g.</p>
<p><code>celery inspect ping -b redis://:{password}@{redis_service_name}.redis.cache.windows.net:6379/0</code></p>
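<p>If you go the non-SSL route, the broker URL from your settings would then look something like the sketch below (the placeholder stands for one of the access keys from the portal; the hostname is kept from the question):</p>
<pre><code># settings.py -- the non-SSL endpoint listens on port 6379; an access key is the password
BROKER_URL = 'redis://:{password}@mycompany.redis.cache.windows.net:6379/0'
</code></pre>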
| 1 |
2016-09-29T05:41:17Z
|
[
"python",
"django",
"azure",
"redis",
"celery"
] |
BeautifulSoup: How to extract content?
| 39,616,753 |
<p>On the website that I'm trying to parse there are tags like:</p>
<pre><code><a class="sku" href="http://pl.farnell.com/tdk/c3225x6s0j107m250ac/capacitor-mlcc-x6s-100uf-6-3v/dp/2526286" title="2526286">2526286</a>
</code></pre>
<p>I would like to get a list of their contents (here it is the value 2526286). How can I do that? I tried with</p>
<pre><code>for node in soup.find_all('a', {'class': 'sku'}):
print(node.content)
</code></pre>
<p>but it returns 'None' for each tag found.</p>
| 1 |
2016-09-21T12:36:24Z
| 39,616,768 |
<p>You can use:</p>
<pre><code>for node in soup.find_all('a', {'class': 'sku'}):
print(node.string)
</code></pre>
<p>As whole code:</p>
<pre><code>from bs4 import BeautifulSoup
string = """
<div>
<a class="sku" href="http://pl.farnell.com/tdk/c3225x6s0j107m250ac/capacitor-mlcc-x6s-100uf-6-3v/dp/2526286" title="2526286">2526286</a>
</div>
"""
soup = BeautifulSoup(string, "lxml")
for node in soup.find_all('a', {'class': 'sku'}):
print(node.string)
</code></pre>
| 2 |
2016-09-21T12:37:17Z
|
[
"python",
"css-selectors",
"beautifulsoup",
"html-parsing"
] |
Pandas per group imputation of missing values
| 39,616,764 |
<h2>How can I achieve such a per-country imputation for each indicator in pandas?</h2>
<p>I want to impute the missing values per group</p>
<ul>
<li><em>no-A-state</em> should get <code>np.min</code> per indicatorKPI </li>
<li><em>no-ISO-state</em> should get the <code>np.mean</code> per indicatorKPI</li>
<li><p>for states with missing values, I want to impute with the per <code>indicatorKPI</code> mean. Here, this would mean to impute the missing values for Serbia</p>
<p>mydf = pd.DataFrame({'Country':['no-A-state','no-ISO-state','germany','serbia', 'austria', 'germany','serbia', 'austria',], 'indicatorKPI':[np.nan,np.nan,'SP.DYN.LE00.IN','NY.GDP.MKTP.CD','NY.GDP.MKTP.CD', 'SP.DYN.LE00.IN','NY.GDP.MKTP.CD', 'SP.DYN.LE00.IN'], 'value':[np.nan,np.nan,0.9,np.nan,0.7, 0.2, 0.3, 0.6]})
<a href="http://i.stack.imgur.com/axpWR.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/axpWR.jpg" alt="enter image description here"></a></p></li>
</ul>
<h1>edit</h1>
<p>The desired output should be similar to</p>
<pre><code>mydf = pd.DataFrame({'Country':['no-A-state','no-ISO-state', 'no-A-state','no-ISO-state',
'germany','serbia','serbia', 'austria',
'germany','serbia', 'austria',],
'indicatorKPI':['SP.DYN.LE00.IN','NY.GDP.MKTP.CD', 'SP.DYN.LE00.IN',
'SP.DYN.LE00.IN','NY.GDP.MKTP.CD','SP.DYN.LE00.IN','NY.GDP.MKTP.CD','NY.GDP.MKTP.CD', 'SP.DYN.LE00.IN','NY.GDP.MKTP.CD', 'SP.DYN.LE00.IN'],
'value':['MIN of all for this indicator', 'MEAN of all for this indicator','MIN of all for this indicator','MEAN of all for this indicator', 0.9,'MEAN of all for SP.DYN.LE00.IN indicator',0.7, 'MEAN of all for NY.GDP.MKTP.CD indicator',0.2, 0.3, 0.6]
})
</code></pre>
<p><a href="http://i.stack.imgur.com/pvHoo.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/pvHoo.jpg" alt="enter image description here"></a></p>
| 1 |
2016-09-21T12:37:11Z
| 39,617,769 |
<p>Based on your new example df the following works for me:</p>
<pre><code>In [185]:
mydf.loc[mydf['Country'] == 'no-A-state', 'value'] = mydf['value'].min()
mydf.loc[mydf['Country'] == 'no-ISO-state', 'value'] = mydf['value'].mean()
mydf.loc[mydf['value'].isnull(), 'value'] = mydf['indicatorKPI'].map(mydf.groupby('indicatorKPI')['value'].mean())
mydf
Out[185]:
Country indicatorKPI value
0 no-A-state SP.DYN.LE00.IN 0.200000
1 no-ISO-state NY.GDP.MKTP.CD 0.442857
2 no-A-state SP.DYN.LE00.IN 0.200000
3 no-ISO-state SP.DYN.LE00.IN 0.442857
4 germany NY.GDP.MKTP.CD 0.900000
5 serbia SP.DYN.LE00.IN 0.328571
6 serbia NY.GDP.MKTP.CD 0.700000
7 austria NY.GDP.MKTP.CD 0.585714
8 germany SP.DYN.LE00.IN 0.200000
9 serbia NY.GDP.MKTP.CD 0.300000
10 austria SP.DYN.LE00.IN 0.600000
</code></pre>
<p>Basically what this does is fill the missing values for each condition: we set the min for the 'no-A-state' countries, then the mean for the 'no-ISO-state' countries. We then group by 'indicatorKPI', calculate the mean for each group, and assign the respective group's mean to the remaining null-value rows using <code>map</code>, which performs a lookup.</p>
<p>Here are the steps broken down:</p>
<pre><code>In [187]:
mydf.groupby('indicatorKPI')['value'].mean()
Out[187]:
indicatorKPI
NY.GDP.MKTP.CD 0.633333
SP.DYN.LE00.IN 0.400000
Name: value, dtype: float64
In [188]:
mydf['indicatorKPI'].map(mydf.groupby('indicatorKPI')['value'].mean())
Out[188]:
0 0.400000
1 0.633333
2 0.400000
3 0.400000
4 0.633333
5 0.400000
6 0.633333
7 0.633333
8 0.400000
9 0.633333
10 0.400000
Name: indicatorKPI, dtype: float64
</code></pre>
| 1 |
2016-09-21T13:20:14Z
|
[
"python",
"pandas",
"group-by",
"missing-data",
"imputation"
] |
Write from a query to table in BigQuery only if query is not empty
| 39,616,849 |
<p>In BigQuery it's possible to write to a new table the results of a query. I'd like the table to be created only whenever the query returns at least one row. Basically I don't want to end up creating empty table. I can't find an option to do that. (I am using the Python library, but I suppose the same applies to the raw API)</p>
| 0 |
2016-09-21T12:41:28Z
| 39,616,971 |
<p>Since you have to specify the destination in the query definition, and you don't know what the query will return before you run it, can you tack a <code>LIMIT 1</code> onto the end?</p>
<p>You can check the row count in the <a href="https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.jobs.html#query" rel="nofollow">job result object</a> and then, if there are results, re-run the query without the limiter into your new table.</p>
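<p>A rough sketch of that flow with the current google-cloud-bigquery client (not necessarily the library version used here; the query and the table IDs are placeholders):</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client()
sql = 'SELECT ... FROM `project.dataset.source`'   # placeholder query

# Cheap emptiness probe first.
probe = list(client.query(sql + ' LIMIT 1').result())

if probe:  # only materialise the destination table when at least one row came back
    config = bigquery.QueryJobConfig(destination='project.dataset.target')
    client.query(sql, job_config=config).result()
</code></pre>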
| 1 |
2016-09-21T12:46:38Z
|
[
"python",
"google-bigquery"
] |
Write from a query to table in BigQuery only if query is not empty
| 39,616,849 |
<p>In BigQuery it's possible to write to a new table the results of a query. I'd like the table to be created only whenever the query returns at least one row. Basically I don't want to end up creating empty table. I can't find an option to do that. (I am using the Python library, but I suppose the same applies to the raw API)</p>
| 0 |
2016-09-21T12:41:28Z
| 39,632,118 |
<p>There's no option to do this in one step. I'd recommend running the query, inspecting the results, and then performing a table copy with WRITE_TRUNCATE to commit the results to the final location if the intermediate output contains at least one row.</p>
| 1 |
2016-09-22T06:50:36Z
|
[
"python",
"google-bigquery"
] |
Array operations using multiple indices of same array
| 39,616,919 |
<p>I am very new to Python, and I am trying to get used to performing Python's array operations rather than looping through arrays. Below is an example of the kind of looping operation I am doing, but am unable to work out a suitable pure array operation that does not rely on loops:</p>
<pre><code>import numpy as np
def f(arg1, arg2):
# an arbitrary function
def myFunction(a1DNumpyArray):
A = a1DNumpyArray
# Create a square array with each dimension the size of the argument array.
B = np.zeros((A.size, A.size))
# Function f is a function of two elements of the 1D array. For each
# element, i, I want to perform the function on it and every element
# before it, and store the result in the square array, multiplied by
# the difference between the ith and (i-1)th element.
for i in range(A.size):
B[i,:i] = f(A[i], A[:i])*(A[i]-A[i-1])
# Sum through j and return full sums as 1D array.
return np.sum(B, axis=0)
</code></pre>
<p>In short, I am integrating a function which takes two elements of the same array as arguments, returning an array of results of the integral. </p>
<p>Is there a more compact way to do this, without using loops? </p>
| 1 |
2016-09-21T12:44:23Z
| 39,623,009 |
<p>The use of an arbitrary <code>f</code> function, and this <code>[i, :i]</code> business, complicates bypassing a loop.</p>
<p>Most of the fast compiled <code>numpy</code> operations work on the whole array, or whole rows and/or columns, and effectively do so in parallel. Loops that are inherently sequential (value from one loop depends on the previous) don't fit well. And different size lists or arrays in each loop are also a good indicator that 'vectorizing' will be difficult.</p>
<pre><code>for i in range(A.size):
B[i,:i] = f(A[i], A[:i])*(A[i]-A[i-1])
</code></pre>
<p>With a sample <code>A</code> and known <code>f</code> (as simple as <code>arg1*arg2</code>), I'd generate a <code>B</code> array, and look for patterns that treat <code>B</code> as a whole. At first glance it looks like your <code>B</code> is a lower triangle. There are functions to help index those. But that final sum might change the picture.</p>
<p>Sometimes I tackle these problems with a bottom up approach, trying to remove inner loops first. But in this case, I think some sort of big-picture approach is needed.</p>
| 0 |
2016-09-21T17:30:49Z
|
[
"python",
"arrays",
"numpy"
] |
How to create rows and columns in a .csv file from .log file
| 39,616,931 |
<p>I am trying to parse a <code>.log</code>-file from MTurk in to a <code>.csv</code>-file with rows and columns using Python. My data looks like:</p>
<blockquote>
<p>P:,14142,GREEN,800,9;R:,14597,7,y,NaN,Correct;P:,15605,#E5DC22,800,9;R:,16108,7,f,NaN,Correct;P:,17115,GREEN,100,9;R:,17548,7,y,NaN,Correct;P:,18552,#E5DC22,100,9;R:,18972,7,f,NaN,Correct;P:,19979,GREEN,800,9;R:,20379,7,y,NaN,Correct;P:,21387,#E5DC22,800,9;R:,21733,7,f,NaN,Correct;P:,22740,RED,100,9;R:,23139,7,y,NaN,False;P:,24147,BLUE,100,9;R:,24547,7,f,NaN,False;P:,25555,RED,800,9;R:,26043,7,b,NaN,Correct;P:,27051,BLUE,800,9;</p>
</blockquote>
<p>Currently, I have this, which puts everything in to columns:</p>
<pre><code>import pandas as pd
from pandas import read_table
log_file = '3BF51CHDTWYBE3LE8DZRA0R5AFGH0H.log'
df = read_table(log_file, sep=';|,', header=None, engine='python')
</code></pre>
<p>Like this:</p>
<blockquote>
<p>P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct|P|15605|#E5DC22|800|9|R|16108</p>
</blockquote>
<p>However, I cannot seem to be able to break this in to multiple rows, so that it would look more like this:<br /></p>
<blockquote>
<p>P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct|<br />
|P|15605|#E5DC22|800|9|R|16108</p>
</blockquote>
<p>i.e. where all the "P"s would be in one column, where all the colors would be in another, the "r"s, etc..</p>
| 2 |
2016-09-21T12:44:43Z
| 39,617,086 |
<p>You can use</p>
<pre><code>In [16]: df = pd.read_csv('log.txt', lineterminator=';', sep=':', header=None)
</code></pre>
<p>to read the file (say, <code>'log.txt'</code>) assuming that the lines are terminated by <code>';'</code>, and the separators within lines are <code>':'</code>.</p>
<p>Unfortunately, your second column will now contain commas, which you'd like to logically separate. You can split the commas along the lines, and concatenate the result to the first column:</p>
<pre><code>In [17]: pd.concat([df[[0]], df[1].str.split(',').apply(pd.Series).iloc[:, 1: 6]], axis=1)
Out[17]:
0 1 2 3 4 5
0 P 14142 GREEN 800 9 NaN
1 R 14597 7 y NaN Correct
2 P 15605 #E5DC22 800 9 NaN
3 R 16108 7 f NaN Correct
4 P 17115 GREEN 100 9 NaN
5 R 17548 7 y NaN Correct
6 P 18552 #E5DC22 100 9 NaN
7 R 18972 7 f NaN Correct
8 P 19979 GREEN 800 9 NaN
9 R 20379 7 y NaN Correct
10 P 21387 #E5DC22 800 9 NaN
11 R 21733 7 f NaN Correct
12 P 22740 RED 100 9 NaN
13 R 23139 7 y NaN False
14 P 24147 BLUE 100 9 NaN
15 R 24547 7 f NaN False
16 P 25555 RED 800 9 NaN
17 R 26043 7 b NaN Correct
18 P 27051 BLUE 800 9 NaN
19 \n\n NaN NaN NaN NaN NaN
</code></pre>
| 1 |
2016-09-21T12:51:02Z
|
[
"python",
"csv",
"pandas",
"numpy"
] |
How to create rows and columns in a .csv file from .log file
| 39,616,931 |
<p>I am trying to parse a <code>.log</code>-file from MTurk in to a <code>.csv</code>-file with rows and columns using Python. My data looks like:</p>
<blockquote>
<p>P:,14142,GREEN,800,9;R:,14597,7,y,NaN,Correct;P:,15605,#E5DC22,800,9;R:,16108,7,f,NaN,Correct;P:,17115,GREEN,100,9;R:,17548,7,y,NaN,Correct;P:,18552,#E5DC22,100,9;R:,18972,7,f,NaN,Correct;P:,19979,GREEN,800,9;R:,20379,7,y,NaN,Correct;P:,21387,#E5DC22,800,9;R:,21733,7,f,NaN,Correct;P:,22740,RED,100,9;R:,23139,7,y,NaN,False;P:,24147,BLUE,100,9;R:,24547,7,f,NaN,False;P:,25555,RED,800,9;R:,26043,7,b,NaN,Correct;P:,27051,BLUE,800,9;</p>
</blockquote>
<p>Currently, I have this, which puts everything in to columns:</p>
<pre><code>import pandas as pd
from pandas import read_table
log_file = '3BF51CHDTWYBE3LE8DZRA0R5AFGH0H.log'
df = read_table(log_file, sep=';|,', header=None, engine='python')
</code></pre>
<p>Like this:</p>
<blockquote>
<p>P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct|P|15605|#E5DC22|800|9|R|16108</p>
</blockquote>
<p>However, I cannot seem to be able to break this in to multiple rows, so that it would look more like this:<br /></p>
<blockquote>
<p>P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct|<br />
|P|15605|#E5DC22|800|9|R|16108</p>
</blockquote>
<p>i.e. where all the "P"s would be in one column, where all the colors would be in another, the "r"s, etc..</p>
| 2 |
2016-09-21T12:44:43Z
| 39,617,237 |
<p>Another faster solution:</p>
<pre><code>import pandas as pd
import numpy as np
import io
temp=u"""P:,14142,GREEN,800,9;R:,14597,7,y,NaN,Correct;P:,15605,#E5DC22,800,9;R:,16108,7,f,NaN,Correct;P:,17115,GREEN,100,9;R:,17548,7,y,NaN,Correct;P:,18552,#E5DC22,100,9;R:,18972,7,f,NaN,Correct;P:,19979,GREEN,800,9;R:,20379,7,y,NaN,Correct;P:,21387,#E5DC22,800,9;R:,21733,7,f,NaN,Correct;P:,22740,RED,100,9;R:,23139,7,y,NaN,False;P:,24147,BLUE,100,9;R:,24547,7,f,NaN,False;P:,25555,RED,800,9;R:,26043,7,b,NaN,Correct;P:,27051,BLUE,800,9;"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep=':', header=None, lineterminator=';')
print (df)
0 1
0 P ,14142,GREEN,800,9
1 R ,14597,7,y,NaN,Correct
2 P ,15605,#E5DC22,800,9
3 R ,16108,7,f,NaN,Correct
4 P ,17115,GREEN,100,9
5 R ,17548,7,y,NaN,Correct
6 P ,18552,#E5DC22,100,9
7 R ,18972,7,f,NaN,Correct
8 P ,19979,GREEN,800,9
9 R ,20379,7,y,NaN,Correct
10 P ,21387,#E5DC22,800,9
11 R ,21733,7,f,NaN,Correct
12 P ,22740,RED,100,9
13 R ,23139,7,y,NaN,False
14 P ,24147,BLUE,100,9
15 R ,24547,7,f,NaN,False
16 P ,25555,RED,800,9
17 R ,26043,7,b,NaN,Correct
18 P ,27051,BLUE,800,9
</code></pre>
<p>First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> with the first column, then remove the trailing <code>,</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>strip</code></a> and create a <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a>. Lastly, adjust the column names and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>df1 = df.set_index(0)[1].str.strip(',').str.split(',', expand=True)
df1.columns = df1.columns + 1
df1.reset_index(inplace=True)
print (df1)
0 1 2 3 4 5
0 P 14142 GREEN 800 9 None
1 R 14597 7 y NaN Correct
2 P 15605 #E5DC22 800 9 None
3 R 16108 7 f NaN Correct
4 P 17115 GREEN 100 9 None
5 R 17548 7 y NaN Correct
6 P 18552 #E5DC22 100 9 None
7 R 18972 7 f NaN Correct
8 P 19979 GREEN 800 9 None
9 R 20379 7 y NaN Correct
10 P 21387 #E5DC22 800 9 None
11 R 21733 7 f NaN Correct
12 P 22740 RED 100 9 None
13 R 23139 7 y NaN False
14 P 24147 BLUE 100 9 None
15 R 24547 7 f NaN False
16 P 25555 RED 800 9 None
17 R 26043 7 b NaN Correct
18 P 27051 BLUE 800 9 None
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>def jez(df):
df1 = df.set_index(0)[1].str.strip(',').str.split(',', expand=True)
df1.columns = df1.columns + 1
df1.reset_index(inplace=True)
return (df1)
print (jez(df))
In [310]: %timeit (pd.concat([df[[0]], df[1].str.split(',').apply(pd.Series).iloc[:, 1: 6]], axis=1))
100 loops, best of 3: 4.85 ms per loop
In [311]: %timeit (jez(df))
1000 loops, best of 3: 1.61 ms per loop
</code></pre>
| 0 |
2016-09-21T12:57:17Z
|
[
"python",
"csv",
"pandas",
"numpy"
] |
Else statement in Python 3 always runs
| 39,616,937 |
<p>I've been making a basic calculator with Python and I have come across this issue. After the calculations are made "Invalid Number" always prints and then the pause happens. I think it has something to do with the newline breaking the <strong>if</strong> block but I'm not sure.</p>
<p>Any help will be appreciated.
Thanks in advance.</p>
<pre><code>def badnum():
print("Invalid Number")
print("Press enter to continue")
input("")
def main():
print("Select an action ")
print("1.) Add")
print("2.) Subtract")
print("3.) Multiply")
print("4.) Divide")
ac = int(input(">>>"))
if ac == 1:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn+sn
print(a)
if ac == 2:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn-sn
print(a)
if ac == 3:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn*sn
print(a)
if ac == 4:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn/sn
print(a)
else:
badnum()
print("\n"*100)
while True:
try:
main()
except ValueError:
badnum()
except ZeroDivisionError:
print("Infinity")
print("\n"*100)
</code></pre>
| 0 |
2016-09-21T12:44:57Z
| 39,616,983 |
<p>No, it has got something to do with how you have written your code, consider this with <code>if...elif</code>:</p>
<pre><code>ac = int(input(">>>"))
if ac == 1:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn+sn
print(a)
elif ac == 2:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn-sn
print(a)
elif ac == 3:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn*sn
print(a)
elif ac == 4:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn/sn
print(a)
else:
badnum()
</code></pre>
<p><hr>
<strong>Explanation:</strong> Before, you were checking for <code>ac == 1</code> <strong>and</strong> <code>ac == 4</code> which cannot both be true, so the second <code>else</code> statement was executed as well. This can be omitted with the <code>if..elif</code> construction: once, one of the earlier comparisons become true, the rest is not executed anymore.</p>
| 2 |
2016-09-21T12:47:01Z
|
[
"python",
"python-3.x",
"if-statement"
] |
Else statement in Python 3 always runs
| 39,616,937 |
<p>I've been making a basic calculator with Python and I have come across this issue. After the calculations are made "Invalid Number" always prints and then the pause happens. I think it has something to do with the newline breaking the <strong>if</strong> block but I'm not sure.</p>
<p>Any help will be appreciated.
Thanks in advance.</p>
<pre><code>def badnum():
print("Invalid Number")
print("Press enter to continue")
input("")
def main():
print("Select an action ")
print("1.) Add")
print("2.) Subtract")
print("3.) Multiply")
print("4.) Divide")
ac = int(input(">>>"))
if ac == 1:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn+sn
print(a)
if ac == 2:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn-sn
print(a)
if ac == 3:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn*sn
print(a)
if ac == 4:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn/sn
print(a)
else:
badnum()
print("\n"*100)
while True:
try:
main()
except ValueError:
badnum()
except ZeroDivisionError:
print("Infinity")
print("\n"*100)
</code></pre>
| 0 |
2016-09-21T12:44:57Z
| 39,616,989 |
<p>You shoud use <code>elif</code>:</p>
<pre><code>if ac == 1:
...
elif ac == 2:
...
elif ac == 3:
...
elif ac == 4:
...
else:
...
</code></pre>
| 2 |
2016-09-21T12:47:19Z
|
[
"python",
"python-3.x",
"if-statement"
] |
Else statement in Python 3 always runs
| 39,616,937 |
<p>I've been making a basic calculator with Python and I have come across this issue. After the calculations are made "Invalid Number" always prints and then the pause happens. I think it has something to do with the newline breaking the <strong>if</strong> block but I'm not sure.</p>
<p>Any help will be appreciated.
Thanks in advance.</p>
<pre><code>def badnum():
print("Invalid Number")
print("Press enter to continue")
input("")
def main():
print("Select an action ")
print("1.) Add")
print("2.) Subtract")
print("3.) Multiply")
print("4.) Divide")
ac = int(input(">>>"))
if ac == 1:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn+sn
print(a)
if ac == 2:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn-sn
print(a)
if ac == 3:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn*sn
print(a)
if ac == 4:
print("First number :")
fn = float(input(">>>"))
print("Second number :")
sn = float(input(">>>"))
a = fn/sn
print(a)
else:
badnum()
print("\n"*100)
while True:
try:
main()
except ValueError:
badnum()
except ZeroDivisionError:
print("Infinity")
print("\n"*100)
</code></pre>
| 0 |
2016-09-21T12:44:57Z
| 39,617,163 |
<p>If I understand you correctly, you just need to replace the second and subsequent <code>if</code> statements with <code>elif</code>:</p>
<pre><code>if ac == 1:
...
elif ac == 2:
...
elif ac == 3:
    ...
elif ac == 4:
...
else:
...
</code></pre>
<p>And "Invalid Number" will not be printed after each calculation.</p>
| 0 |
2016-09-21T12:54:19Z
|
[
"python",
"python-3.x",
"if-statement"
] |
Django using functions from a python class from frontend?
| 39,616,960 |
<p>I am new to Django, but I managed to create the back end and front end for my website. In the front end I am connecting to an external socket and getting data on the fly, and I implemented a class with the function <code>add_data2GraphDB(Data)</code> that adds the element to my graph database.</p>
<p>How can I call this function from the front end so it is applied in the back end without disturbing the rendering of the website?</p>
<p>This is the JS code on the front-end page:</p>
<pre><code><script>
eventToListenTo = 'tx'
room = 'inv'
var socket = io("https://blockexplorer.com/");
socket.on('connect', function() {
// Join the room.
socket.emit('subscribe', room);
})
socket.on(eventToListenTo, function(data) {
***add_data2GraphDB(Data)***;
})
</script>
</code></pre>
<p>Also, after getting the data I am displaying it to the user with the ability to check its details, so the data should be added to the graph before it is viewed.</p>
| 1 |
2016-09-21T12:45:52Z
| 39,617,480 |
<p>You can start by providing an API endpoint in your Django app that takes parameters from the request body (and/or query parameters) and calls your function. So create a URL like <code>/api/add2graph</code> and call it from the front end with a classic async call.</p>
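<p>For illustration, a minimal sketch of such an endpoint (the view name and URL are assumptions on my part, not a definitive implementation; you will also need to handle CSRF for POSTs from JavaScript):</p>
<pre><code># views.py -- hypothetical names, adapt to your project
from django.http import JsonResponse
from django.views.decorators.http import require_POST

@require_POST
def add2graph_view(request):
    data = request.POST.get('data')         # whatever the front end sends
    add_data2GraphDB(data)                  # your existing function, imported from wherever it lives
    return JsonResponse({'status': 'ok'})

# urls.py
# url(r'^api/add2graph/$', add2graph_view, name='add2graph'),
</code></pre>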
<p>Now if your function takes a long time, you may want to start using a task queue, so that your API view returns immediately while <code>add_data2GraphDB</code> runs in the background. Good and simple solutions are <a href="https://huey.readthedocs.io/en/latest/django.html" rel="nofollow">Huey</a>, <a href="https://django-q.readthedocs.io" rel="nofollow">Django-q</a> or django-rq (they're simpler than Celery).</p>
<p>Does that answer your question?</p>
| 1 |
2016-09-21T13:08:28Z
|
[
"javascript",
"python",
"html",
"django",
"neo4j"
] |
Django using functions from a python class from frontend?
| 39,616,960 |
<p>I am new to Django, but I managed to create the back end and front end for my website. In the front end I am connecting to an external socket and getting data on the fly, and I implemented a class with the function <code>add_data2GraphDB(Data)</code> that adds the element to my graph database.</p>
<p>How can I call this function from the front end so it is applied in the back end without disturbing the rendering of the website?</p>
<p>This is the JS code on the front-end page:</p>
<pre><code><script>
eventToListenTo = 'tx'
room = 'inv'
var socket = io("https://blockexplorer.com/");
socket.on('connect', function() {
// Join the room.
socket.emit('subscribe', room);
})
socket.on(eventToListenTo, function(data) {
***add_data2GraphDB(Data)***;
})
</script>
</code></pre>
<p>Also, after getting the data I am displaying it to the user with the ability to check its details, so the data should be added to the graph before it is viewed.</p>
| 1 |
2016-09-21T12:45:52Z
| 39,628,690 |
<p>I solved the issue by using Ajax; @Ehvince also helped me with the concept of the API.
Basically, in the front end I used:</p>
<pre><code>$.ajax({
type:'POST',
url:'/app/add2Graph/',
data:{
tx:data.txid,
csrfmiddlewaretoken:$('input[name=csrfmiddlewaretoken]').val()
},
success:function(result){
console.log(result)
}
});
</code></pre>
<p>and in the back-end I added the URL </p>
<pre><code>url(r'^app/add2Graph/$',addTx2graph, name='add2graph'),
</code></pre>
<p>and in the api I added:</p>
<pre><code>def addTx2graph(request):
transactionInfo="Unknown transaction"
if request.method=='POST':
tx=request.POST['tx']
transactionInfo=addTransaction(tx)
return HttpResponse("success")
</code></pre>
| 0 |
2016-09-22T01:02:02Z
|
[
"javascript",
"python",
"html",
"django",
"neo4j"
] |
Adding custom user registration fields in django
| 39,617,102 |
<p>I couldn't find much information/am having trouble adding custom user fields to the django create_user function. I have quite a few fields and am not sure how to get them into the database, as currently this function only allows username, password, first name and last name. My views/form/models are:</p>
<p>views:</p>
<pre><code>def create_account(request):
form = CreateAccountForm(request.POST)
if form.is_valid():
username = form.cleaned_data['username']
password = form.cleaned_data['password']
password2 = form.cleaned_data['password2']
first_name = form.cleaned_data['first_name']
last_name = form.cleaned_data['last_name']
gender = form.cleaned_data['gender']
medication = form.cleaned_data['medication']
medical_history = form.cleaned_data['medical_history']
DOB = form.cleaned_data['DOB']
email = form.cleaned_data['email']
telephone = form.cleaned_data['telephone']
address = form.cleaned_data['address']
city = form.cleaned_data['city']
state = form.cleaned_data['state']
postcode = form.cleaned_data['postcode']
if password == password2:
if (password_verification(password)) == 3:
if (username_verification(username)) == False:
user = User.objects.create_user(username, email, password)
user.last_name = last_name
user.first_name = first_name
user.save()
return HttpResponseRedirect('/login')
else:
return HttpResponseRedirect('/create_account')
else:
return HttpResponseRedirect('/create_account')
else:
return HttpResponseRedirect('/create_account')
return render(request, "create_account.html", {'form': form})
</code></pre>
<p>Models:</p>
<pre><code>class user(models.Model):
username = models.CharField(max_length=20)
password = models.CharField(max_length=15)
password2 = models.CharField(max_length=20)
first_name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
gender = models.CharField(max_length=20)
medication = models.CharField(max_length=50, blank=True)
medical_history = models.CharField(max_length=50,blank=True)
DOB = models.CharField(max_length=20)
email = models.EmailField(max_length=30)
telephone = models.CharField(max_length=20)
address = models.CharField(max_length=30)
city = models.CharField(max_length=20)
state = models.CharField(max_length=20)
postcode = models.CharField(max_length=30)
</code></pre>
<p>Forms:</p>
<pre><code>class CreateAccountForm(forms.Form):
username = forms.CharField()
password = forms.CharField(widget=forms.PasswordInput)
password2 = forms.CharField(widget=forms.PasswordInput)
first_name = forms.CharField()
last_name = forms.CharField()
gender = forms.CharField()
medication = forms.CharField()
medical_history = forms.CharField()
DOB = forms.CharField()
email = forms.CharField()
telephone = forms.CharField()
address = forms.CharField()
city = forms.CharField()
state = forms.CharField()
postcode = forms.CharField()
</code></pre>
<p>If anyone knows how to add the extra fields into the database I would be greatly appreciative! </p>
| 0 |
2016-09-21T12:51:31Z
| 39,617,301 |
<p>If you want to extend your <code>User</code>, you cannot just create a separate model with <code>username</code> as a char field. Just follow the <a href="https://docs.djangoproject.com/en/dev/topics/auth/customizing/#extending-the-existing-user-model" rel="nofollow">Django Docs</a> on extending the existing user model.</p>
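<p>For illustration, a minimal sketch of the approach described there: keep the built-in <code>User</code> for authentication and put your extra fields on a profile model linked with a <code>OneToOneField</code> (field names copied from your model, adapt as needed):</p>
<pre><code>from django.contrib.auth.models import User
from django.db import models

class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)  # User keeps username/password/email/names
    gender = models.CharField(max_length=20)
    medication = models.CharField(max_length=50, blank=True)
    medical_history = models.CharField(max_length=50, blank=True)
    DOB = models.CharField(max_length=20)
    telephone = models.CharField(max_length=20)
    # ...the rest of your extra fields

# in the view, after user = User.objects.create_user(username, email, password):
# Profile.objects.create(user=user, gender=gender, medication=medication, ...)
</code></pre>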
| 1 |
2016-09-21T13:00:40Z
|
[
"python",
"django",
"forms",
"user",
"registration"
] |
Convert string to list without elements being individual characters?
| 39,617,157 |
<p>Suppose I have a function called <code>support</code> that counts the number of times passed items occur in elements in a list:</p>
<pre><code>>>> rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
>>> def support(item, rows):
return float(sum([1 for row in rows if item in row]))
>>> print(support('apple', rows))
5.0
</code></pre>
<p>That works well, but eventually I'll need to measure how frequently <strong>two</strong> items occur together in the data. I could define something like this:</p>
<pre><code>>>> def joint_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items)]))
</code></pre>
<p>I'd rather not define two functions that effectively do the same thing. Whether the user passes one or two elements in <code>items</code>, I'd like for the function to count the occurrence of those items, either jointly or separately, in the data. Without using an <code>if</code> statement to measure the length of <code>items</code> (i.e. using a list comprehension), how can I make sure that, if the <code>items</code> parameter is just one string, that the function does not search for joint occurrence of each individual letter?</p>
<p>This is what I have so far:</p>
<pre><code>>>> def master_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items if type(items) is not str) else 1 if items in row.split()]))
</code></pre>
<p>Effectively, I think I'm asking how I can automatically convert <code>str</code> to <code>list</code> without the elements of the list being individual characters.</p>
| 0 |
2016-09-21T12:53:54Z
| 39,617,426 |
<p>You've actually already figured out how to convert a <code>str</code> to a <code>list</code> without the elements being individual characters: <code>row.split()</code>. Your problem is that this leaves you with a bunch of small lists (like <code>['candy', 'apple', 'banana', 'cookie']</code>) rather than flattening them all into one long list that is easy to count. For that, you can use <code>itertools.chain()</code> as I do here:</p>
<pre><code>>>> from collections import Counter
>>> import itertools
>>>
>>> rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
>>> words_list = list(itertools.chain(*[phrase.split() for phrase in rows]))
>>> word_counts = Counter(words_list)
>>> print(words_list)
['candy', 'apple', 'banana', 'cookie', 'candy', 'apple', 'banana', 'candy', 'apple', 'apple', 'banana', 'candy', 'candy', 'apple', 'banana']
>>> print(word_counts)
Counter({'apple': 5, 'candy': 5, 'banana': 4, 'cookie': 1})
</code></pre>
| 1 |
2016-09-21T13:06:02Z
|
[
"python",
"string"
] |
Convert string to list without elements being individual characters?
| 39,617,157 |
<p>Suppose I have a function called <code>support</code> that counts the number of times passed items occur in elements in a list:</p>
<pre><code>>>> rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
>>> def support(item, rows):
return float(sum([1 for row in rows if item in row]))
>>> print(support('apple', rows))
5.0
</code></pre>
<p>That works well, but eventually I'll need to measure how frequently <strong>two</strong> items occur together in the data. I could define something like this:</p>
<pre><code>>>> def joint_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items)]))
</code></pre>
<p>I'd rather not define two functions that effectively do the same thing. Whether the user passes one or two elements in <code>items</code>, I'd like for the function to count the occurrence of those items, either jointly or separately, in the data. Without using an <code>if</code> statement to measure the length of <code>items</code> (i.e. using a list comprehension), how can I make sure that, if the <code>items</code> parameter is just one string, that the function does not search for joint occurrence of each individual letter?</p>
<p>This is what I have so far:</p>
<pre><code>>>> def master_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items if type(items) is not str) else 1 if items in row.split()]))
</code></pre>
<p>Effectively, I think I'm asking how I can automatically convert <code>str</code> to <code>list</code> without the elements of the list being individual characters.</p>
| 0 |
2016-09-21T12:53:54Z
| 39,617,634 |
<p>If I understand you right you are searching for something like this</p>
<pre><code>def joint_support(items, rows):
return sum([1 for row in rows if set(items).issubset(set(row.split()))])
</code></pre>
<p>The second <code>set</code> is optional</p>
<pre><code>rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
rows2 = ['candy apple banana cookie']
items = ['apple', 'banana']
joint_support(items, rows)
joint_support(items, rows2)
</code></pre>
| 1 |
2016-09-21T13:15:00Z
|
[
"python",
"string"
] |
Convert string to list without elements being individual characters?
| 39,617,157 |
<p>Suppose I have a function called <code>support</code> that counts the number of times passed items occur in elements in a list:</p>
<pre><code>>>> rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
>>> def support(item, rows):
return float(sum([1 for row in rows if item in row]))
>>> print(support('apple', rows))
5.0
</code></pre>
<p>That works well, but eventually I'll need to measure how frequently <strong>two</strong> items occur together in the data. I could define something like this:</p>
<pre><code>>>> def joint_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items)]))
</code></pre>
<p>I'd rather not define two functions that effectively do the same thing. Whether the user passes one or two elements in <code>items</code>, I'd like for the function to count the occurrence of those items, either jointly or separately, in the data. Without using an <code>if</code> statement to measure the length of <code>items</code> (i.e. using a list comprehension), how can I make sure that, if the <code>items</code> parameter is just one string, that the function does not search for joint occurrence of each individual letter?</p>
<p>This is what I have so far:</p>
<pre><code>>>> def master_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items if type(items) is not str) else 1 if items in row.split()]))
</code></pre>
<p>Effectively, I think I'm asking how I can automatically convert <code>str</code> to <code>list</code> without the elements of the list being individual characters.</p>
| 0 |
2016-09-21T12:53:54Z
| 39,617,688 |
<p>When passing a list of items, add a leading <em>asterisk</em> in the call so the list is unpacked into separate arguments (which <code>*items</code> in the signature collects again):</p>
<pre><code>def joint_support(rows, *items):
if len(items) == 1:
return float(sum(items[0] in row for row in rows))
elif len(items) > 1:
return float(sum(any(r in row for r in items) for row in rows))
rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
print(joint_support(rows, 'apple')) # 5.0
# add a leading asterisk
print(joint_support(rows, *['apple', 'boy', 'banana'])) # 6.0
</code></pre>
<p>To count containment of all the joint items instead of <em>any</em> of them, replace <code>any</code> with <code>all</code> in the <code>elif</code> block.</p>
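<p>For example, a minimal sketch of the <code>all</code> variant under the same assumptions:</p>
<pre><code>def joint_support_all(rows, *items):
    return float(sum(all(r in row for r in items) for row in rows))

print(joint_support_all(rows, 'apple', 'banana'))  # 3.0 with the rows above
</code></pre>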
| 1 |
2016-09-21T13:17:14Z
|
[
"python",
"string"
] |
Convert string to list without elements being individual characters?
| 39,617,157 |
<p>Suppose I have a function called <code>support</code> that counts the number of times passed items occur in elements in a list:</p>
<pre><code>>>> rows = ['candy apple banana cookie', 'candy apple banana', 'candy', 'apple', 'apple banana candy', 'candy apple', 'banana']
>>> def support(item, rows):
return float(sum([1 for row in rows if item in row]))
>>> print(support('apple', rows))
5.0
</code></pre>
<p>That works well, but eventually I'll need to measure how frequently <strong>two</strong> items occur together in the data. I could define something like this:</p>
<pre><code>>>> def joint_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items)]))
</code></pre>
<p>I'd rather not define two functions that effectively do the same thing. Whether the user passes one or two elements in <code>items</code>, I'd like for the function to count the occurrence of those items, either jointly or separately, in the data. Without using an <code>if</code> statement to measure the length of <code>items</code> (i.e. using a list comprehension), how can I make sure that, if the <code>items</code> parameter is just one string, that the function does not search for joint occurrence of each individual letter?</p>
<p>This is what I have so far:</p>
<pre><code>>>> def master_support(items, rows):
return float(sum([1 for row in rows if all(item in row.split() for item in items if type(items) is not str) else 1 if items in row.split()]))
</code></pre>
<p>Effectively, I think I'm asking how I can automatically convert <code>str</code> to <code>list</code> without the elements of the list being individual characters.</p>
| 0 |
2016-09-21T12:53:54Z
| 39,618,705 |
<p>If you only need to check whether all the items exist in a row, you can use <code>set</code> subtraction:</p>
<pre><code>def joint_support(item, rows):
if isinstance(item, str):
item = (item,)
    return float(sum(1 for row in rows if not set(item) - set(row.split(" "))))
</code></pre>
| 0 |
2016-09-21T13:57:46Z
|
[
"python",
"string"
] |
Reverse the list while creation
| 39,617,160 |
<p>I have this code:</p>
<pre><code>def iterate_through_list_1(arr):
lala = None
for i in range(len(arr))[::-1]:
lala = i
def iterate_through_list_2(arr):
lala = None
for i in range(len(arr), 0, -1):
lala = i
</code></pre>
<p>Logically, iterating over a reversed <code>range()</code> directly should be more efficient than creating a list with <code>range()</code> and reversing it afterwards with <code>[::-1]</code>. But <em>cProfile</em> tells me that the <code>iterate_through_list_1</code> function works faster.</p>
<p>I used Python 3. Here you can see the profiling output for two identical arrays with 100000000 elements in them.</p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
1 5.029 5.029 5.029 5.029 bs.py:24(iterate_throgh_list_2)
1 4.842 4.842 4.842 4.842 bs.py:19(iterate_throgh_list_1)
</code></pre>
<p>What happened underneath Python slices while list creation? </p>
| -2 |
2016-09-21T12:54:01Z
| 39,617,655 |
<p>A well-designed test shows that the first function is slowest on Python 2.x (mostly because two lists have to be created: first an increasing range, then a reversed copy of it). I also included a demo using <code>reversed</code>.</p>
<pre><code>from __future__ import print_function
import sys
import timeit
def iterate_through_list_1(arr):
lala = None
for i in range(len(arr))[::-1]:
lala = i
def iterate_through_list_2(arr):
lala = None
for i in range(len(arr), 0, -1):
lala = i
def correct_iterate_reversed(arr):
lala = None
for obj in reversed(arr):
lala = obj
print(sys.version)
print('iterate_through_list_1', timeit.timeit('iterate_through_list_1(seq)',
setup='from __main__ import iterate_through_list_1\nseq = range(0, 10000)',
number=10000))
print('iterate_through_list_2', timeit.timeit('iterate_through_list_2(seq)',
setup='from __main__ import iterate_through_list_2\nseq = range(0, 10000)',
number=10000))
print('correct_iterate_reversed', timeit.timeit('correct_iterate_reversed(seq)',
setup='from __main__ import correct_iterate_reversed\nseq = range(0, 10000)',
number=10000))
</code></pre>
<p>Results:</p>
<pre><code>2.7.12 (default, Jun 29 2016, 14:05:02)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
iterate_through_list_1 3.87919592857
iterate_through_list_2 3.38339591026
correct_iterate_reversed 2.78083491325
</code></pre>
<p>Differences in 3.x are all negligible, because in each case the objects iterated over are lazy.</p>
<pre><code>3.5.2 (default, Jul 28 2016, 21:28:00)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
iterate_through_list_1 2.986786328998278
iterate_through_list_2 2.9836046030031866
correct_iterate_reversed 2.9411962590020266
</code></pre>
| 1 |
2016-09-21T13:15:51Z
|
[
"python",
"performance",
"list",
"memory"
] |
Nesting mpi calls with mpi4py
| 39,617,250 |
<p>I am trying to use mpi4py to call a second instance of an mpi executable.</p>
<p>I am getting the error:</p>
<pre><code>Open MPI does not support recursive calls of mpirun
</code></pre>
<p>But I was under the impression that is exactly what Spawn is supposed to be able to handle - i.e. setting up a new communicator within which another mpi command could be launched.</p>
<p>The test code:</p>
<p>parent.py:</p>
<pre><code>#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys
rank = MPI.COMM_WORLD.Get_rank()
new_comm = MPI.COMM_WORLD.Split(color=rank, key=rank)
print(new_comm.Get_rank())
new_comm.Spawn(sys.executable,
args=['test.py'],
maxprocs=4)
</code></pre>
<p>which calls test.py:</p>
<pre><code>#!/usr/bin/env python
from mpi4py import MPI
import numpy
import os
import sys
comm = MPI.Comm.Get_parent()
rank = comm.Get_rank()
cwd=os.getcwd()
directory=os.path.join(cwd,str(rank))
os.chdir(directory)
os.system('{}'.format('mpirun -np 4 SOME_MPI_EXECUTABLE_HERE'))
print("Finished in "+directory)
os.chdir(cwd)
comm.Disconnect()
</code></pre>
<p>I'm running with:</p>
<pre><code>mpirun --oversubscribe -np 1 parent.py
</code></pre>
<p>Using openmpi 2.0.0 with gcc, and python/3.4.2</p>
<p>Anyone have any bright ideas as to why this is happening.....</p>
<p>Thanks!</p>
| 0 |
2016-09-21T12:58:10Z
| 39,637,553 |
<p>The following code seems to perform the way I wanted.</p>
<pre><code>#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys
import os
rank = MPI.COMM_WORLD.Get_rank()
new_comm = MPI.COMM_WORLD.Split(color=rank, key=rank)
print(new_comm.Get_rank())
cwd=os.getcwd()
os.mkdir(str(rank))
directory=os.path.join(cwd,str(rank))
print(rank,directory)
os.chdir(directory)
new_comm.Spawn("SOME_MPI_EXECUTABLE_HERE",
args=[""],
maxprocs=4)
</code></pre>
<p>run with:</p>
<pre><code>mpirun --oversubscribe -np 4 parent.py
</code></pre>
<p>Seems to start 4 instances of SOME_MPI_EXECUTABLE each running on 4 cores.</p>
<p>(Thanks to Zulan)</p>
| 1 |
2016-09-22T11:17:36Z
|
[
"python",
"parallel-processing",
"mpi",
"python-3.4",
"mpi4py"
] |
Make a table from 2 columns
| 39,617,298 |
<p>I'm fairly new on Python.</p>
<p>I have 2 columns on a dataframe, columns are something like: </p>
<pre><code>db = pd.read_excel(path_to_file/file.xlsx)
db = db.loc[:,['col1','col2']]
col1 col2
C 4
C 5
A 1
B 6
B 1
A 2
C 4
</code></pre>
<p>I need them to be like this:</p>
<pre><code> 1 2 3 4 5 6
A 1 1 0 0 0 0
B 1 0 0 0 0 1
C 0 0 0 2 1 0
</code></pre>
<p>so they act like rows and columns and values refer to the number of coincidences.</p>
| 2 |
2016-09-21T13:00:25Z
| 39,617,447 |
<p>I think you need to aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a> and add the missing columns with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.reindex.html" rel="nofollow"><code>reindex</code></a>:</p>
<pre><code>print (df)
a b
0 C 4
1 C 5
2 A 1
3 B 6
4 B 1
5 A 2
6 C 4
df1 = (df.b.groupby([df.a, df.b])
           .size()
           .unstack()
           .reindex(columns=range(1, df.b.max() + 1))
           .fillna(0)
           .astype(int))
df1.index.name = None
df1.columns.name = None
print (df1)
1 2 3 4 5 6
A 1 1 0 0 0 0
B 1 0 0 0 0 1
C 0 0 0 2 1 0
</code></pre>
<p>Instead of <code>size</code> you can use <code>count</code>; the differences are explained in the <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1822/grouping-data/6874/aggregating-by-size-and-count#t=201609211337167690224">pandas documentation</a>.</p>
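<p>For example, the same pipeline with <code>count</code> instead of <code>size</code> (a sketch on the same <code>df</code>; the <code>reindex</code> is still needed for the missing column <code>3</code>):</p>
<pre><code>df2 = (df.groupby(['a', 'b'])['b'].count()
         .unstack()
         .reindex(columns=range(1, df.b.max() + 1))
         .fillna(0)
         .astype(int))
</code></pre>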
| 1 |
2016-09-21T13:07:01Z
|
[
"python",
"pandas",
"group-by",
"aggregate",
"multiple-columns"
] |
Make a table from 2 columns
| 39,617,298 |
<p>I'm fairly new on Python.</p>
<p>I have 2 columns on a dataframe, columns are something like: </p>
<pre><code>db = pd.read_excel(path_to_file/file.xlsx)
db = db.loc[:,['col1','col2']]
col1 col2
C 4
C 5
A 1
B 6
B 1
A 2
C 4
</code></pre>
<p>I need them to be like this:</p>
<pre><code> 1 2 3 4 5 6
A 1 1 0 0 0 0
B 1 0 0 0 0 1
C 0 0 0 2 1 0
</code></pre>
<p>so they act like rows and columns and values refer to the number of coincidences.</p>
| 2 |
2016-09-21T13:00:25Z
| 39,617,474 |
<p>Say your columns are called <code>cat</code> and <code>val</code>:</p>
<pre><code>In [26]: df = pd.DataFrame({'cat': ['C', 'C', 'A', 'B', 'B', 'A', 'C'], 'val': [4, 5, 1, 6, 1, 2, 4]})
In [27]: df
Out[27]:
cat val
0 C 4
1 C 5
2 A 1
3 B 6
4 B 1
5 A 2
6 C 4
</code></pre>
<p>Then you can <code>groupby</code> the table hierarchicaly, then unstack it:</p>
<pre><code>In [28]: df.val.groupby([df.cat, df.val]).sum().unstack().fillna(0).astype(int)
Out[28]:
val 1 2 4 5 6
cat
A 1 2 0 0 0
B 1 0 0 0 6
C 0 0 8 5 0
</code></pre>
<p><strong>Edit</strong></p>
<p>As IanS pointed out, 3 is missing here (thanks!). If there's a range of columns you must have, then you can use</p>
<pre><code>r = df.val.groupby([df.cat, df.val]).sum().unstack().fillna(0).astype(int)
for c in set(range(1, 7)) - set(df.val.unique()):
r[c] = 0
</code></pre>
| 2 |
2016-09-21T13:08:12Z
|
[
"python",
"pandas",
"group-by",
"aggregate",
"multiple-columns"
] |
Python: variables shared betwen modules and namespaces
| 39,617,347 |
<p>After reading at
<a href="http://stackoverflow.com/questions/15959534/python-visibility-of-global-variables-in-imported-modules">Python - Visibility of global variables in imported modules</a></p>
<p>I was curious about this example:</p>
<pre><code>import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
</code></pre>
<p>If there are no other variables "a" anywhere else, why is the following one not equivalent?</p>
<pre><code>from shared_stuff import *
import module1
a = 3
module1.f()
</code></pre>
<p>We leave out "explicit is better than implicit": I am asking out of curiosity, as I prefer the first syntax anyway.
I come from C and it looks like I didn't fully grasp Python's namespace subtleties.
Even a link to the docs where this namespace behaviour is explained would be enough.</p>
| 0 |
2016-09-21T13:02:29Z
| 39,617,448 |
<p>Importing <code>*</code> copies all the references from the module into the current scope; there is no connection to the original module at all.</p>
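<p>A small sketch of what that means in practice (assuming <code>shared_stuff</code> just defines <code>a = 0</code>):</p>
<pre><code># shared_stuff.py
a = 0

# main.py
from shared_stuff import *    # copies the current value of shared_stuff.a into a new local name "a"
import shared_stuff

a = 3                         # rebinds only the local name
print(shared_stuff.a)         # still 0 -- module1.f(), which reads shared_stuff.a, sees 0 as well
</code></pre>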
| 1 |
2016-09-21T13:07:02Z
|
[
"python",
"module",
"global-variables"
] |
Do AND, OR strings have special meaning in PLY?
| 39,617,450 |
<p>When using PLY (<a href="http://www.dabeaz.com/ply/" rel="nofollow">http://www.dabeaz.com/ply/</a>) I've noticed what seems to be a very strange problem: when I'm using tokens like <code>&</code> for conjunction, the program below works, but when I use <code>AND</code> in the same place, PLY claims syntax error.</p>
<p>Program:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os
from ply import lex
import ply.yacc as yacc
parser = None
lexer = None
def flatten_list(lst):
flat = []
for x in lst:
if isinstance(x, list):
flat.extend(flatten_list(x))
else:
flat.append(x)
return flat
############## Tokenization ##############
tokens = (
'number',
'lparen',
'rparen',
'textw',
'titlew',
'qword',
'AND'
)
t_lparen = r'\('
t_rparen = r'\)'
t_textw = r'TEXTW:'
t_titlew = r'TITLEW:'
t_qword = r'\w+'
t_AND = r'AND'
def t_number(t):
r'\d+'
t.value = int(t.value)
return t
t_ignore = ' \t'
def t_error(t):
raise ValueError(
'Illegal character "{}" at position {}, query text: {}'.format(t.value[0], t.lexpos, t.lexer.lexdata))
lexer = lex.lex()
################# Parsing #################
def p_querylist_boolop(p):
"""querylist : subquery AND subquery"""
print >> sys.stderr, 'p_querylist', list(p)
p[0] = []
p[0].append(p[1])
p[0].append(p[3])
def p_subquery(p):
"""subquery : lparen querykw qwordseq rparen"""
print >> sys.stderr, 'p_subquery', list(p)
p[0] = flatten_list(p[3])
def p_querykw(p):
"""querykw : textw
| titlew"""
print >> sys.stderr, 'p_querykw', list(p)
p[0] = p[1]
def p_qwordseq(p):
"""qwordseq : qwordseq qword
| qwordseq number
| qword
| number"""
print >> sys.stderr, 'p_qwordseq', list(p)
if p[0]:
p[0].extend(p[1:])
else:
p[0] = p[1:]
def p_error(p):
global parser
if p:
tok = parser.token()
if tok:
msg = 'Syntax error in input, token "{}" at position {}, query text: {}'.format(tok.value, tok.lexpos,
lexer.lexdata)
raise ValueError(msg)
msg = 'Syntax error at the end of input, query text: {}'.format(lexer.lexdata)
raise ValueError(msg)
parser = yacc.yacc()
# parser = yacc.yacc(debug=0, write_tables=0)
def parse_query(q):
return parser.parse(q)
if __name__ == '__main__':
query_texts = ["""(TEXTW: one article) AND (TEXTW: two books)"""]
for qt in query_texts:
res = parse_query(qt)
print '***', res
</code></pre>
<p>This produces:</p>
<pre><code>ValueError: Syntax error in input, token "(" at position 19, query text: ( TEXTW: abc ) AND ( TEXTW: aaa )
</code></pre>
<p>However, when I change the following to:</p>
<pre><code>t_AND = r'&'
query_texts = ["""(TEXTW: one article) & (TEXTW: two books)"""]
</code></pre>
<p>..it works just fine:</p>
<pre><code>*** [['one', 'article'], ['two', 'books']]
</code></pre>
| 0 |
2016-09-21T13:07:09Z
| 39,628,385 |
<p>Ply has a slightly eccentric approach to ordering token regular expressions, in part because it depends on the underlying Python regular expression library. Tokens defined with functions, such as your <code>number</code> token, are recognized in the order they appear, and unlike many lexical scanner generators, Ply makes no attempt to perform a longest match. Tokens defined by assignment -- all your other token types -- have lower priority than functions, and are ordered by decreasing length of the regular expression.</p>
<p>The Ply manual (section 4.3) strongly suggests not using the variable assignment style for keyword tokens such as <code>AND</code>, because a pattern like <code>r'AND'</code> will also recognize the first three characters of, say, <code>ANDROGYNOUS</code>, which you would probably expect to be a variable.
Instead, it recommends using a function with a simple pattern to first recognize all keywords and variables as simple words, and then use a dictionary to recognize the specific keywords. Sample code and a less telegraphic explanation are in the Ply manual (in the section I cited above).</p>
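<p>For illustration, a minimal sketch of that keyword-dictionary pattern adapted to the token names above (not a drop-in fix: function rules are tried before string rules, so the <code>number</code> and <code>TEXTW:</code>/<code>TITLEW:</code> rules would need the same treatment or careful ordering):</p>
<pre><code>reserved = {
    'AND': 'AND',
}

tokens = ('number', 'lparen', 'rparen', 'textw', 'titlew', 'qword') + tuple(reserved.values())

def t_qword(t):
    r'\w+'
    t.type = reserved.get(t.value, 'qword')   # a word that matches a keyword becomes that keyword's token
    return t
</code></pre>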
| 0 |
2016-09-22T00:20:12Z
|
[
"python",
"parsing",
"ply"
] |
Listing all directories recursively within the zipfile without extracting in python
| 39,617,494 |
<p>In Python we can get the list of all files within a zipfile without extracting the zip file using the below code.</p>
<pre><code>import zipfile
zip_ref = zipfile.ZipFile(zipfilepath, 'r')
for file in zip_ref.namelist():
print file
</code></pre>
<p>Similarly is there a way to fetch the list of all directories and sub directories within the zipfile without extracting the zipfile?</p>
| 1 |
2016-09-21T13:09:20Z
| 39,618,800 |
<pre><code>import zipfile
with zipfile.ZipFile(zipfilepath, 'r') as myzip:
    myzip.printdir()   # printdir() prints the listing itself (it returns None)
</code></pre>
| 1 |
2016-09-21T14:01:48Z
|
[
"python",
"file",
"directory",
"zipfile",
"filestructure"
] |
Listing all directories recursively within the zipfile without extracting in python
| 39,617,494 |
<p>In Python we can get the list of all files within a zipfile without extracting the zip file using the below code.</p>
<pre><code>import zipfile
zip_ref = zipfile.ZipFile(zipfilepath, 'r')
for file in zip_ref.namelist():
print file
</code></pre>
<p>Similarly is there a way to fetch the list of all directories and sub directories within the zipfile without extracting the zipfile?</p>
| 1 |
2016-09-21T13:09:20Z
| 39,632,125 |
<p>Thanks everyone for your help.</p>
<pre><code>import zipfile
subdirs_list = []
zip_ref = zipfile.ZipFile('C:/Download/sample.zip', 'r')
for dir in zip_ref.namelist():
if dir.endswith('/'):
subdirs_list.append(os.path.basename(os.path.normpath(dir)))
print subdirs_list
</code></pre>
<p>With the above code, I would be able to get a list of all directories and subdirectoies within my zipfile without extracting the sample.zip.</p>
| 0 |
2016-09-22T06:51:01Z
|
[
"python",
"file",
"directory",
"zipfile",
"filestructure"
] |
Unable to access docker SimpleHTTPServer container
| 39,617,497 |
<p>This question has been asked before, but I have not found a solution to my issue. I have some static files that I want to serve using Python's <code>SimpleHTTPServer</code> module. I have successfully built the image and run it, but I am unable to access the files from the browser.
Here is my DockerFile:</p>
<pre><code>FROM python:2.7
WORKDIR /test/
EXPOSE 8080
CMD python -m SimpleHTTPServer 8080
</code></pre>
<p>I am using the following commands: </p>
<pre><code>docker build -t gwidgets/client-python-server .
docker run --rm -ti -p 8080:8080 -v /gwt-polymer-starter/:/test/ --name client2 gwidgets/client-python-server
</code></pre>
<p>The image runs fine, but when I go to <code>http://{container'sIP}:8080/Test.html</code>, nothing happens. The server does not even get the request. </p>
<p>I am running docker on Windows 10. </p>
<p>Any ideas what the issue might be?</p>
| 0 |
2016-09-21T13:09:25Z
| 39,617,880 |
<p>The <code>-p</code> option publishes the container's ports to the host. You may need to use <a href="http://localhost:8080/Test.html" rel="nofollow">http://localhost:8080/Test.html</a> if you run the container with <code>-p 8080:8080</code> specified.</p>
| 1 |
2016-09-21T13:24:22Z
|
[
"python",
"windows",
"docker",
"simplehttpserver"
] |
Tensorflow Data Input Toggle: Train/Validation
| 39,617,686 |
<p>I have data that comes into my graph through queue runners, after I switched from the handy but speed-inferior placeholders.</p>
<p>After each training epoch, I wish to run a validation pass. Unlike the training pass, the validation pass uses different data, no augmentation and no shuffling.</p>
<p>The question is simple: how do I toggle these things?</p>
<p>A few observations:</p>
<ul>
<li>I cannot toggle the <code>shuffle</code> option in the <code>string_input_producer</code> through a <code>tf.placeholder</code> boolean.</li>
<li>The only examples online that I have found use the <code>placeholder</code> to separate the training from the validation data. These, in turn, do not use the superior queue runners.</li>
<li>I did manage to do the above with a <code>tf.cond()</code>: here I would test for an <code>is_training</code> <code>tf.placeholder</code> boolean that I pass through the <code>feed_dict</code>. Is this solution optimal? How expensive is this <code>tf.cond()</code> method?</li>
</ul>
| 0 |
2016-09-21T13:17:10Z
| 39,620,485 |
<p>The method that works well for me is to use <code>tf.placeholder_with_default</code>:</p>
<blockquote>
<pre><code>images_train, labels_train = train_data_pipeline(fnlist_train, ref_grid)
images_val, labels_val = val_data_pipeline(fnlist_val, ref_grid)
images = tf.placeholder_with_default(images_train, shape=[None, FLAGS.nx_image, FLAGS.ny_image, FLAGS.nz_image])
labels = tf.placeholder_with_default(labels_train, shape=[None, label_length])
</code></pre>
</blockquote>
<p>During training, <code>images</code> and <code>labels</code> come directly from the training queue. For the intermittent validation steps I feed <code>images</code> and <code>labels</code> through a feed_dict in a call to <code>sess.run()</code>. The only slight hack is that the validation data are also tensors from a queue and feed_dict doesn't accept tensors, so I call <code>sess.run([images_val, labels_val])</code> first to get numpy values and then use them in the feed_dict. Seems to work well and there is minimal delay from the tensor==>numpy==>tensor conversion, which only occurs during validation anyway.</p>
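<p>For illustration, a validation step then looks roughly like this (a sketch; <code>loss</code> stands for whatever op you want to evaluate):</p>
<pre><code># pull one validation batch out of its queue as numpy arrays ...
val_images, val_labels = sess.run([images_val, labels_val])
# ... and feed it through the placeholders, overriding the training-queue defaults
val_loss = sess.run(loss, feed_dict={images: val_images, labels: val_labels})
</code></pre>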
<p>And for your case where the validation data have separate processing requirements, this can be handled when you set up the separate validation queue and processing flow to it.</p>
| 1 |
2016-09-21T15:16:36Z
|
[
"python",
"tensorflow"
] |
Tensorflow Data Input Toggle: Train/Validation
| 39,617,686 |
<p>I have data that comes into my graph through queue runners, after I switched from the handy but speed-inferior placeholders.</p>
<p>After each training epoch, I wish to run a validation pass. Unlike the training pass, the validation pass uses different data, no augmentation and no shuffling.</p>
<p>The question is simple: how do I toggle these things?</p>
<p>A few observations:</p>
<ul>
<li>I cannot toggle the <code>shuffle</code> option in the <code>string_input_producer</code> through a <code>tf.placeholder</code> boolean.</li>
<li>The only examples online that I have found use the <code>placeholder</code> to separate the training from the validation data. These, in turn, do not use the superior queue runners.</li>
<li>I did manage to do the above with a <code>tf.cond()</code>: here I would test for an <code>is_training</code> <code>tf.placeholder</code> boolean that I pass through the <code>feed_dict</code>. Is this solution optimal? How expensive is this <code>tf.cond()</code> method?</li>
</ul>
| 0 |
2016-09-21T13:17:10Z
| 39,622,064 |
<p>One probable answer is to use <code>make_template</code>.
This is outlined in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/template_test.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/template_test.py</a>; it basically shows one could do this:</p>
<pre class="lang-py prettyprint-override"><code>training_input, training_output = ([1., 2., 3., 4.], [2.8, 5.1, 7.2, 8.7])
test_input, test_output = ([5., 6., 7., 8.], [11, 13, 15, 17])
tf.set_random_seed(1234)
def test_line(x):
m = tf.get_variable("w", shape=[],
initializer=tf.truncated_normal_initializer())
b = tf.get_variable("b", shape=[],
initializer=tf.truncated_normal_initializer())
return x * m + b
line_template = template.make_template("line", test_line)
train_prediction = line_template(training_input)
test_prediction = line_template(test_input)
train_loss = tf.reduce_mean(tf.square(train_prediction - training_output))
test_loss = tf.reduce_mean(tf.square(test_prediction - test_output))
optimizer = tf.train.GradientDescentOptimizer(0.1)
train_op = optimizer.minimize(train_loss)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
initial_test_loss = sess.run(test_loss)
sess.run(train_op)
final_test_loss = sess.run(test_loss)
# Parameters are tied, so the loss should have gone down when we trained it.
self.assertLess(final_test_loss, initial_test_loss)
</code></pre>
| 0 |
2016-09-21T16:38:09Z
|
[
"python",
"tensorflow"
] |
Semantic error during if-statement check in program
| 39,617,727 |
<p>So I have a form that requires the user to put in all info. Only by entering all the info does the information get saved. However, a weird little bug has come about. After submitting with no checkbox selected and then switching to yes afterwards (yes, this is intended), data is saved as long as a radio button is selected. I don't wish to change the properties of the form; my only wish is to fix the semantic error in the checks inside the if-statements.</p>
<pre><code>#Import tkinter to make gui
from tkinter import *
from tkinter import ttk
from tkinter import messagebox
#Sets title and creates gui
root = Tk()
root.title("Entry Form")
def changed1(*args):
if yes.get()=="1":
no.set('0')
def changed2(*args):
if no.get()=="1":
yes.set('0')
#After sumbitting with no selected and then switching to yes afterwards as long as a radiobutton is seletected, data is saved?
def submit(*args): #Realign?
file = open("data.csv", "a")
if first.get() != "" and last.get() != "" and option.get() == 'Business' or option.get() == 'Residence' or option.get() == 'Other' and state.get() != "" and yes.get() == '1' or no.get() == '1':
if yes.get()=='1':
file.write(last.get().title() + "," + first.get().title() + "," + option.get() + "," + state.get() + '\n')
printer.set("Data Saved!")
else:
messagebox.showinfo("Unauthorized", "You must accept terms to continue.")
else:
messagebox.showinfo("Incomplete Information", "Please fill out all parts of the form")
#Configures column and row settings and sets padding
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
first = StringVar()
last = StringVar()
option = StringVar()
statevar = StringVar()
printer = StringVar()
yes = StringVar()
no = StringVar()
#Widgets to put in name
firstvar = ttk.Entry(mainframe, width=15, textvariable=first)
firstvar.grid(column=2, row=1, sticky=(N, W))
lastvar = ttk.Entry(mainframe, width=15, textvariable=last)
lastvar.grid(column=2, row=2, sticky=(N, W))
ttk.Label(mainframe, text="First Name").grid(column=1, row=1, sticky=(W))
ttk.Label(mainframe, text="Last Name").grid(column=1, row=2, sticky=(W))
business = ttk.Radiobutton(mainframe, text='Business', variable=option, value='Business')
residence = ttk.Radiobutton(mainframe, text='Residence', variable=option, value='Residence')
other = ttk.Radiobutton(mainframe, text='Other', variable=option, value='Other')
business.grid(column=1, row=3, sticky=(W, E))
residence.grid(column=2, row=3, sticky=(W, E))
other.grid(column=3, row=3, sticky=(W, E))
state = ttk.Combobox(mainframe, textvariable=statevar, state='readonly')
state.grid(column=2, row=4, sticky=(W))
ttk.Label(mainframe, text="State").grid(column=1, row=4, sticky=W)
state['values'] = ('Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado', 'Connecticut', 'Delaware', 'Florida', 'Georgia', 'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Maine', 'Maryland', 'Massachussetts', 'Michigan', 'Minnesota', 'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey', 'New Mexico', 'New York', 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon', 'Pennsylvannia', 'Rhode Island', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming')
#Creates no checkbutton
yesvar = ttk.Checkbutton(mainframe, text='Yes', command=changed1, variable=yes)
yesvar.grid(column=2, row=5, sticky=(W, E))
#Creates yes check button
novar = ttk.Checkbutton(mainframe, text='No', command=changed2, variable=no)
novar.grid(column=3, row=5, sticky=(W, E))
ttk.Label(mainframe, text="Accept Policy").grid(column=1, row=5, sticky=(W))
#Adds a calculate button
ttk.Button(mainframe, text="Submit", command=submit).grid(column=3, row=9, sticky=W)
ttk.Label(mainframe, textvariable=printer).grid(column=2, row=9, sticky=(E))
root.bind('<Return>', submit)
#Keeps the gui running
root.mainloop()
</code></pre>
| 0 |
2016-09-21T13:18:33Z
| 39,619,041 |
<p>You do not need to test <code>option</code> more than once.<br>
Use <code>if first.get():</code> instead of <code>if first.get() != ""</code>.<br>
You are already testing <code>yes</code> in the second <code>if</code> statement, so remove it from the first one (it is kept separate so you can show a different messagebox).</p>
<pre><code># Check required fields
if first.get() and last.get() and option.get() and state.get():
# Check Constraints before saving
if yes.get() == '1':
</code></pre>
| 0 |
2016-09-21T14:11:12Z
|
[
"python",
"if-statement",
"tkinter"
] |
Django Rest Framework return True if relation exist
| 39,617,817 |
<p>I have two questions. How can I just return <code>True</code> if a relation exists? For example, I have a post model and a comment model, and comment has a ForeignKey to post. Now after post serialization I want to have something like this:</p>
<pre><code>{
id: 2
content: "My first post!"
has-comments: True
}
</code></pre>
<p>And my second question is: how do I rename a field in a model relation? Again we have post and comment. In the comment model I have a foreign key to post, like</p>
<pre><code>post = models.ForeignKey(Post)
</code></pre>
<p>Now when I add a new comment I send JSON data with <code>{post: postIdHere}</code>. Is it possible to change <code>post</code> to <code>postId</code> only in DRF, not in the Django model?</p>
<p>I hope you understand me :)
Best regards,
Sierran.</p>
| 0 |
2016-09-21T13:22:15Z
| 39,618,197 |
<p>The closest thing I can come up with is a custom <code>has_comments</code> field (rather than <code>has-comments</code>) with this in the serializer:</p>
<pre><code>from rest_framework import serializers
class YourSerializer(Either Serializer or ModelSerializer...):
has_comments = serializers.SerializerMethodField()
@staticmethod
def get_has_comments(instance):
# Choose whichever one works for you.
# You did not specify some model names, so I am just making stuff up.
return instance.post_set.exists()
return Comment.objects.filter(post_id=instance.id).exists()
</code></pre>
<p>You may also have to specify the field in the serializer's <code>Meta</code> class. When first run, the framework will tell you exactly how.</p>
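<p>For example, a complete minimal version might look like this (model and related names are assumptions on my part):</p>
<pre><code>from rest_framework import serializers
# from .models import Post   # assumed model

class PostSerializer(serializers.ModelSerializer):
    has_comments = serializers.SerializerMethodField()

    class Meta:
        model = Post                                # assumed model name
        fields = ('id', 'content', 'has_comments')

    def get_has_comments(self, instance):
        return instance.comment_set.exists()        # assumes Comment has a plain ForeignKey(Post)
</code></pre>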
| 1 |
2016-09-21T13:37:55Z
|
[
"python",
"django",
"django-rest-framework"
] |
Python - No valuerror on int with isalpha
| 39,617,822 |
<p>Why is no ValueError raised on this try / except when isalpha should fail.</p>
<p>I know that isalpha returns false if given a number</p>
<pre><code>In [9]: ans = input("Enter a Letter")
Enter a Letter4
In [10]: ans.isalpha()
Out[10]: False
</code></pre>
<p>How do I get the value error if they supply a number instead of a y or n? Because if the try is false shouldn't it stop being true and not print my trajectory?</p>
<pre><code>import sys
v0 = float(input("What velocity would you like? "))
g = float(input("What gravity would you like? "))
t = float(input("What time decimal would you like? "))
print("""
We have the following inputs.
v0 is %d
g is %d
t is %d
Is this correct? [Y/n]
""" % (v0, g, t))
while True:
try:
answer = input("\t >> ").isalpha()
print(v0 * t - 0.5 * g * t ** 2)
except ValueError as err:
print("Not a valid entry", err.answer)
sys.exit()
finally:
print("would you like another?")
break
</code></pre>
<p>For example if the user types 5 not a y or n still gets an answer </p>
<pre><code>$ python3 ball.py
What velocity would you like? 2
What gravity would you like? 3
What time decimal would you like? 4
We have the following inputs.
v0 is 2
g is 3
t is 4
Is this correct? [Y/n]
>> 5
-16.0
would you like another?
</code></pre>
| -1 |
2016-09-21T13:22:20Z
| 39,617,925 |
<p>You need to raise the error yourself. There is no exception raised by typing in something that you don't prefer:</p>
<pre><code>try:
answer = input("\t >> ").isalpha()
if not answer:
raise ValueError
print(v0 * t - 0.5 * g * t ** 2)
except ValueError as err:
print("Not a valid entry", err.answer)
sys.exit()
</code></pre>
| 1 |
2016-09-21T13:26:11Z
|
[
"python",
"python-3.x"
] |
Python - No valuerror on int with isalpha
| 39,617,822 |
<p>Why is no ValueError raised on this try / except when isalpha should fail.</p>
<p>I know that isalpha returns false if given a number</p>
<pre><code>In [9]: ans = input("Enter a Letter")
Enter a Letter4
In [10]: ans.isalpha()
Out[10]: False
</code></pre>
<p>How do I get the value error if they supply a number instead of a y or n? Because if the try is false shouldn't it stop being true and not print my trajectory?</p>
<pre><code>import sys
v0 = float(input("What velocity would you like? "))
g = float(input("What gravity would you like? "))
t = float(input("What time decimal would you like? "))
print("""
We have the following inputs.
v0 is %d
g is %d
t is %d
Is this correct? [Y/n]
""" % (v0, g, t))
while True:
try:
answer = input("\t >> ").isalpha()
print(v0 * t - 0.5 * g * t ** 2)
except ValueError as err:
print("Not a valid entry", err.answer)
sys.exit()
finally:
print("would you like another?")
break
</code></pre>
<p>For example if the user types 5 not a y or n still gets an answer </p>
<pre><code>$ python3 ball.py
What velocity would you like? 2
What gravity would you like? 3
What time decimal would you like? 4
We have the following inputs.
v0 is 2
g is 3
t is 4
Is this correct? [Y/n]
>> 5
-16.0
would you like another?
</code></pre>
| -1 |
2016-09-21T13:22:20Z
| 39,617,941 |
<p><code>except ValueError as err:</code> only happens when there is a ValueError thrown. The value of <code>answer</code> is <code>False</code>, but that is just an arbitrary boolean value, not an error.</p>
<p>See <a href="https://docs.python.org/2/library/exceptions.html#exceptions.ValueError" rel="nofollow"><code>ValueError</code> documentation</a> for examples of things that are errors.</p>
<p>In your case, simply test:</p>
<pre><code>answer = input("\t >> ")
if answer.isalpha():
print(v0 * t - 0.5 * g * t ** 2)
break
</code></pre>
| 3 |
2016-09-21T13:27:11Z
|
[
"python",
"python-3.x"
] |
Python - No valuerror on int with isalpha
| 39,617,822 |
<p>Why is no ValueError raised on this try / except when isalpha should fail.</p>
<p>I know that isalpha returns false if given a number</p>
<pre><code>In [9]: ans = input("Enter a Letter")
Enter a Letter4
In [10]: ans.isalpha()
Out[10]: False
</code></pre>
<p>How do I get the value error if they supply a number instead of a y or n? Because if the try is false shouldn't it stop being true and not print my trajectory?</p>
<pre><code>import sys
v0 = float(input("What velocity would you like? "))
g = float(input("What gravity would you like? "))
t = float(input("What time decimal would you like? "))
print("""
We have the following inputs.
v0 is %d
g is %d
t is %d
Is this correct? [Y/n]
""" % (v0, g, t))
while True:
try:
answer = input("\t >> ").isalpha()
print(v0 * t - 0.5 * g * t ** 2)
except ValueError as err:
print("Not a valid entry", err.answer)
sys.exit()
finally:
print("would you like another?")
break
</code></pre>
<p>For example if the user types 5 not a y or n still gets an answer </p>
<pre><code>$ python3 ball.py
What velocity would you like? 2
What gravity would you like? 3
What time decimal would you like? 4
We have the following inputs.
v0 is 2
g is 3
t is 4
Is this correct? [Y/n]
>> 5
-16.0
would you like another?
</code></pre>
| -1 |
2016-09-21T13:22:20Z
| 39,617,978 |
<p>In general you should prefer to use normal control flow logic to handle a range of user input rather than raising/catching exceptions.</p>
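<p>For example, a small sketch of that idea applied to your prompt (reusing <code>v0</code>, <code>g</code> and <code>t</code> from your script):</p>
<pre><code>while True:
    answer = input("Is this correct? [Y/n] ").strip().lower()
    if answer in ("y", "yes", ""):
        print(v0 * t - 0.5 * g * t ** 2)
        break
    elif answer in ("n", "no"):
        print("OK, please amend your entries")
        break
    else:
        print("Please answer y or n")
</code></pre>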
| 2 |
2016-09-21T13:28:35Z
|
[
"python",
"python-3.x"
] |
Python - No valuerror on int with isalpha
| 39,617,822 |
<p>Why is no ValueError raised on this try / except when isalpha should fail.</p>
<p>I know that isalpha returns false if given a number</p>
<pre><code>In [9]: ans = input("Enter a Letter")
Enter a Letter4
In [10]: ans.isalpha()
Out[10]: False
</code></pre>
<p>How do I get the value error if they supply a number instead of a y or n? Because if the try is false shouldn't it stop being true and not print my trajectory?</p>
<pre><code>import sys
v0 = float(input("What velocity would you like? "))
g = float(input("What gravity would you like? "))
t = float(input("What time decimal would you like? "))
print("""
We have the following inputs.
v0 is %d
g is %d
t is %d
Is this correct? [Y/n]
""" % (v0, g, t))
while True:
try:
answer = input("\t >> ").isalpha()
print(v0 * t - 0.5 * g * t ** 2)
except ValueError as err:
print("Not a valid entry", err.answer)
sys.exit()
finally:
print("would you like another?")
break
</code></pre>
<p>For example if the user types 5 not a y or n still gets an answer </p>
<pre><code>$ python3 ball.py
What velocity would you like? 2
What gravity would you like? 3
What time decimal would you like? 4
We have the following inputs.
v0 is 2
g is 3
t is 4
Is this correct? [Y/n]
>> 5
-16.0
would you like another?
</code></pre>
| -1 |
2016-09-21T13:22:20Z
| 39,630,518 |
<p>Posting an answer for clarity, trying to provide a way to be more consistent and explicit in the treatment of <em>strings</em> and <em>ints</em>. By using <em>isinstance</em> I explicitly declare to a person reading my code what my values are meant to be, hopefully improving readability.</p>
<pre><code>answer = input("\t >> ")
if isinstance(int(answer), int) is True:
raise ValueError("Ints aren't valid input")
sys.exit()
elif isinstance(answer, str) is True:
print(v0 * t - 0.5 * g * t ** 2)
else:
print("Ok please ammend your entries")
</code></pre>
<p>Should I have differing requirements later this could easily be abstracted into a function, which because isinstance allows checking against multiple types increases flexibility.</p>
<p><strong>Reference</strong> <a href="http://stackoverflow.com/a/11204870/461887">How to properly use python's isinstance() to check if a variable is a number?</a></p>
<pre><code>def testSomething(arg1, **types):
if isintance(arg1, [types]):
do_something
</code></pre>
| 0 |
2016-09-22T04:52:30Z
|
[
"python",
"python-3.x"
] |
Python Slice a List Using a List of Multiple Tuples
| 39,617,956 |
<p>I have a list of numbers that I would like to slice the range of numbers that is given from a list of multiple tuples. For example, I have a list that looks like:</p>
<pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0,]
</code></pre>
<p>I also have a list of tuples that are the indicies of values that that I want that looks like:</p>
<pre><code>my_tups = [(5,9), (14,18)]
</code></pre>
<p>How would I return only the values of my_list using my_tups as an index? </p>
| 1 |
2016-09-21T13:27:48Z
| 39,618,075 |
<p>A possibility:</p>
<pre><code>from itertools import chain
my_iter = chain(*[my_list[start:end] for start, end in my_tups])
[l for l in my_iter]
</code></pre>
<p>gives </p>
<pre><code>[1, 3, 4, 8, 21, 34, 25, 91]
</code></pre>
| 0 |
2016-09-21T13:32:49Z
|
[
"python",
"list",
"tuples",
"slice"
] |
Python Slice a List Using a List of Multiple Tuples
| 39,617,956 |
<p>I have a list of numbers that I would like to slice the range of numbers that is given from a list of multiple tuples. For example, I have a list that looks like:</p>
<pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0,]
</code></pre>
<p>I also have a list of tuples that are the indicies of values that that I want that looks like:</p>
<pre><code>my_tups = [(5,9), (14,18)]
</code></pre>
<p>How would I return only the values of my_list using my_tups as an index? </p>
| 1 |
2016-09-21T13:27:48Z
| 39,618,085 |
<p>If I understand the question correctly, you want to return the values from <code>my_list</code> in the ranges <code>5:9</code> and <code>14:18</code>. The following code should do it</p>
<pre><code>my_list = [ 5, 8, 3, 0, 0, 1, 3, 4, 8, 13, 0, 0, 0, 0, 21, 34, 25, 91, 61, 0, 0]
my_tups = [(5,9), (14,18)]
def flatten(lists):
return sum(lists, [])
flatten([my_list[lo:hi] for (lo, hi) in my_tups])
# gives [1, 3, 4, 8, 21, 34, 25, 91]
</code></pre>
| 0 |
2016-09-21T13:33:03Z
|
[
"python",
"list",
"tuples",
"slice"
] |