title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Python dictionary comparison | 39,736,128 | <p>Problem:<br>
I am trying to implement a distance vector routing protocol, so I need to track the distances of nodes/routers (A, B, C) and the distances from their neighbors (1, 2, 3), and update the best path (source router to destination router) if one of the routers learns about a better path from its neighbors by processing and sending updated distance vectors. More details here: en.wikipedia.org/wiki/Distance-vector_routing_protocol</p>
<p>I am trying to compare two dictionaries (<code>a</code> & <code>b</code>), and if I find any of the keys of <code>b</code> (i.e. <code>'B'</code>) present in <code>a</code>, then I want to add the value of <code>'B'</code> (i.e. <code>1</code>) from <code>a</code> to the value of <code>'C'</code> (i.e. <code>2</code>) from <code>b</code>, so the output looks similar to the following:</p>
<pre><code>a = {'A': {'B': 1}}
b = {'B': {'C': 2}}
</code></pre>
<p>Final Output:</p>
<pre><code>a = {'A': {'B': 1, 'C': 3}}
</code></pre>
| -4 | 2016-09-28T00:10:49Z | 39,736,177 | <p>This is not directly possible with dictionaries, but it is with sets. Note that values views are not set-like (only keys and items views support set operations), so they have to be converted to sets first. For example,</p>
<pre><code>s = {'K':'L', 'L':'K', 'Q':'P'}
p = {'K':'L', 'Q':'P'}
# values views don't support set operations directly,
# so build sets from them (works on Python 2 and 3)
k = set(s.values()) & set(p.values())  # k is now {'L', 'P'}
</code></pre>
<p>Look at set documentation <a href="https://docs.python.org/2/library/stdtypes.html#set" rel="nofollow">here</a></p>
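As an aside, the nested-dictionary merge the question actually describes can be done directly, without sets. A minimal sketch (the `relax` helper name is mine, not from the question):

```python
def relax(a, b):
    """For each neighbor in a's vectors that is also a top-level key of b,
    add the costs and record the cheaper combined route."""
    for vec in a.values():
        for via, cost in list(vec.items()):
            if via in b:  # e.g. 'B' is a key of b
                for dest, extra in b[via].items():
                    new_cost = cost + extra  # 1 + 2 == 3
                    if dest not in vec or new_cost < vec[dest]:
                        vec[dest] = new_cost
    return a

a = {'A': {'B': 1}}
b = {'B': {'C': 2}}
print(relax(a, b))  # {'A': {'B': 1, 'C': 3}}
```

This matches the question's desired output while keeping the data as plain dictionaries.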
| 0 | 2016-09-28T00:16:54Z | [
"python"
]
|
Load JSON object including escaped json string | 39,736,140 | <p>I'm trying to load a JSON object from a string (via Python). This object has a single key mapped to an array. The array includes a single value which is another serialized JSON object. I have tried a few online JSON parsers / validators, but can't seem to identify what the issue with loading this object is.</p>
<p>JSON Data:</p>
<pre><code>{
    "parent": [
        "{\"key\":\"value\"}"
    ]
}
</code></pre>
<p>Trying to load from Python:</p>
<pre><code>>>> import json
>>> test_string = '{"parent":["{\"key\":\"value\"}"]}'
>>> json.loads(test_string)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 1 column 15 (char 14)
</code></pre>
| 0 | 2016-09-28T00:11:53Z | 39,736,166 | <p>If you try out your string in the REPL, you'll see pretty quickly why it doesn't work:</p>
<pre><code>>>> '{"parent":["{\"key\":\"value\"}"]}'
'{"parent":["{"key":"value"}"]}'
</code></pre>
<p>Notice that the <code>\</code> characters have gone away, because Python is treating them as escape sequences ...</p>
<p>One easy fix is to use a raw string:</p>
<pre><code>>>> r'{"parent":["{\"key\":\"value\"}"]}'
'{"parent":["{\\"key\\":\\"value\\"}"]}'
</code></pre>
<p>e.g.</p>
<pre><code>>>> import json
>>> test_string = r'{"parent":["{\"key\":\"value\"}"]}'
>>> json.loads(test_string)
{u'parent': [u'{"key":"value"}']}
</code></pre>
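If the goal is to get at the nested object itself rather than keep it as a string, it can be decoded in a second pass (a small sketch continuing from the answer's raw string):

```python
import json

test_string = r'{"parent":["{\"key\":\"value\"}"]}'
outer = json.loads(test_string)           # {'parent': ['{"key":"value"}']}
inner = json.loads(outer["parent"][0])    # the array element is itself JSON
print(inner)  # {'key': 'value'}
```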
| 0 | 2016-09-28T00:15:18Z | [
"python",
"json"
]
|
Pandas Dataframe performance vs list performance | 39,736,195 | <p>I'm comparing two dataframes to determine if rows in df1 begin any row in df2. df1 is on the order of a thousand entries, df2 is in the millions.</p>
<p>This does the job but is rather slow. </p>
<pre><code>df1['name'].map(lambda x: any(df2['name'].str.startswith(x)))
</code></pre>
<p>When run on a subset of df1 (10 items), this is the result:</p>
<pre><code>35243 True
39980 False
40641 False
45974 False
53788 False
59895 True
61856 False
81083 True
83054 True
87717 False
Name: name, dtype: bool
Time: 57.8873581886 secs
</code></pre>
<p>When I converted df2 to a list, it runs much faster:</p>
<pre><code>df2_list = df2['name'].tolist()
df1['name'].map(lambda x: any(item.startswith(x + ' ') for item in df2_list))
35243 True
39980 False
40641 False
45974 False
53788 False
59895 True
61856 False
81083 True
83054 True
87717 False
Name: name, dtype: bool
Time: 33.0746209621 secs
</code></pre>
<p>Why is it quicker to iterate through a list than a Series?</p>
| 1 | 2016-09-28T00:19:51Z | 39,744,808 | <p><code>any()</code> returns early as soon as it gets a <code>True</code> value, so it makes fewer <code>startswith()</code> calls than the <code>DataFrame</code> version. </p>
<p>Here is a method that uses <code>searchsorted()</code>:</p>
<pre><code>import random, string
import pandas as pd
import numpy as np

def randomword(length):
    return ''.join(random.choice(string.ascii_lowercase) for i in range(length))

xs = pd.Series([randomword(3) for _ in range(1000)])
ys = pd.Series([randomword(10) for _ in range(10000)])

def is_any_prefix1(xs, ys):
    yo = ys.sort_values().reset_index(drop=True)
    y2 = yo[yo.searchsorted(xs)]
    return np.fromiter(map(str.startswith, y2, xs), dtype=bool)

def is_any_prefix2(xs, ys):
    x = xs.tolist()
    y = ys.tolist()
    return np.fromiter((any(yi.startswith(xi) for yi in y) for xi in x), dtype=bool)

res1 = is_any_prefix1(xs, ys)
res2 = is_any_prefix2(xs, ys)
print(np.all(res1 == res2))

%timeit is_any_prefix1(xs, ys)
%timeit is_any_prefix2(xs, ys)
</code></pre>
<p>output:</p>
<pre><code>True
100 loops, best of 3: 17.8 ms per loop
1 loop, best of 3: 2.35 s per loop
</code></pre>
<p>It's 100x faster.</p>
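The short-circuiting that the answer describes is easy to see in isolation. A contrived sketch (not the original data) that counts how many `startswith()` checks actually run:

```python
calls = 0

def probe(s, prefix):
    # counts how many startswith() checks actually execute
    global calls
    calls += 1
    return s.startswith(prefix)

names = ['apple', 'apricot', 'banana']
found = any(probe(s, 'ap') for s in names)
print(found, calls)  # True 1 -- any() stopped after the first match
```

A vectorized call like `Series.str.startswith` always evaluates the whole column before `any()` ever sees a result, so it cannot benefit from this early exit.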
| 0 | 2016-09-28T10:25:27Z | [
"python",
"pandas"
]
|
How to get alert from a diff output using python | 39,736,205 | <p>I am writing a python script to monitor changes in webpage. I have the diff command implemented in python and I have the diff output files in a folder. </p>
<p>I have 260 diff output files. I obviously cannot check all 260 by hand to find out which files have changes. </p>
<p>Is there a Python solution to read all the diff files and alert me with the filenames that have changes?</p>
<pre><code>sample filename in my diff output folder: ['4streaming', 'net-log-2016-09-26-12:29:32']-diff-output-2016-09-27-13:07:32.html
Required output: 4streaming has changed
</code></pre>
<p>Forgive me if my way of asking the question is wrong. I am new to asking questions on Stack Overflow.</p>
| 1 | 2016-09-28T00:21:27Z | 39,736,381 | <p>To check if two files have the same content you can use the <a href="https://docs.python.org/2/library/filecmp.html" rel="nofollow">filecmp</a> module:</p>
<pre><code>>>> import filecmp
>>> filecmp.cmp('a_file.txt', 'another_file.txt')
True
</code></pre>
<p>So in your case, where you have a lot of files, you could store their names in a list (e.g. <code>File_list</code>) and, using <code>itertools</code>, compare each file in the list with every other file exactly once:</p>
<pre><code>import itertools

for i, j in itertools.combinations(File_list, 2):
    filecmp.cmp(i, j)  # where i, j are actual file names
    # do something based on the result
</code></pre>
<p>*To get a list with all the file names in a directory take a look at <a href="http://stackoverflow.com/questions/3207219/how-to-list-all-files-of-a-directory-in-python">this</a> post.</p>
<p>Another way would be by hashing them and comparing the hashes.</p>
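A sketch of that hashing approach (the file names below are placeholders, not from the question):

```python
import hashlib

def file_hash(path, chunk_size=8192):
    """Return the SHA-256 hex digest of a file, read in chunks
    so large files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Equal digests mean (for all practical purposes) equal content:
# if file_hash('a_file.txt') == file_hash('another_file.txt'): ...
```

Hashing each file once and comparing digests avoids re-reading files for every pairwise comparison.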
| 4 | 2016-09-28T00:48:49Z | [
"python",
"diff"
]
|
How to set time zone in InfluxDB using python client | 39,736,238 | <p>I am using the Python client for InfluxDB to write into the database as shown below. </p>
<pre><code> import pytz
tz = pytz.timezone('US/Pacific')
client.write_points([{"measurement": system_id[1], "tags": tagdic, "time":ts, "fields": fielddic}])
</code></pre>
<p>When I run any query in the DB I get the time information in UTC.</p>
<pre><code>time
2016-09-19T18:01:36.001482473Z
2016-09-19T18:01:36.007748467Z
2016-09-19T18:01:36.012061884Z
</code></pre>
<p>How can I get the time info in my local time zone, considering it is US/Pacific?</p>
| 0 | 2016-09-28T00:25:17Z | 39,787,332 | <p>You can't do that at the moment. There are several GitHub issues about this topic, but the feature has not been implemented yet. Here are some of them: <a href="https://github.com/influxdata/influxdb/issues/6541" rel="nofollow">Link1</a>, <a href="https://github.com/influxdata/influxdb/issues/2074" rel="nofollow">Link2</a>, <a href="https://github.com/influxdata/influxdb/issues/6542" rel="nofollow">Link3</a></p>
<p>But you can of course convert the time to your timezone after you have selected your data. </p>
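A sketch of that post-query conversion. This uses the stdlib `zoneinfo` module (Python 3.9+) rather than the `pytz` import from the question, and assumes the nanosecond timestamps InfluxDB returns have been trimmed to microseconds first (`%f` only parses six digits):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ts = "2016-09-19T18:01:36.001482Z"  # InfluxDB timestamp, already in UTC
utc_dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
local_dt = utc_dt.astimezone(ZoneInfo("US/Pacific"))
print(local_dt.isoformat())  # 2016-09-19T11:01:36.001482-07:00
```

In September US/Pacific is on daylight time (UTC-7), so 18:01 UTC becomes 11:01 local.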
| 1 | 2016-09-30T08:47:34Z | [
"python",
"influxdb"
]
|
Regex add quotes in Python so I can return Python Dictionary | 39,736,249 | <p>Using Python and regex, I would like to add quotes to each word. Currently, I am only able to add quotes at the first index. When I loop through my end results, I get a string. Instead, I would like a Python dictionary. To solve this problem, I think adding quotes will help me get a dictionary instead of a string. Can someone please guide me? </p>
<p><strong>Code</strong></p>
<pre><code>raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
line_re = re.compile(r'\{[^\}]+\}')
records = line_re.findall(raw)
record_re = re.compile(
r"""
id:\s*\'(?P<id>[^']+)\'\s*
startdate:\s*(?P<startdate>\d+)\s*
numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
datelist:\s*(?P<datelist>\d+)\s*
""",
re.X
)
record_parsed = record_re.search(line_re.findall(raw)[0])
record_parsed.groupdict()
# {'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'datelist': '42521', 'id': 'A(long) 11A'}
for record in records:
record_parsed = record_re.search(record)
print type(record)
</code></pre>
<p><strong>Current Output</strong></p>
<pre><code>{id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}
{id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }
{id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}
</code></pre>
<p><strong>Desired Output</strong> Everything in quotes</p>
<pre><code>{'id': 'A(long) 11A' 'startdate': '42521' 'numvaluelist': '0.1065599566767107' 'datelist': '42521'}
{'id': 'A(short) 11B' 'startdate': '42521' 'numvaluelist': '0.0038113334533441123' 'datelist': '42521' }
{'id': 'B(long) 11C' 'startdate': '42521' 'numvaluelist': '20.061623176440904' 'datelist': '42521'}
</code></pre>
| 1 | 2016-09-28T00:27:06Z | 39,736,570 | <p>This looks like an <a href="http://xyproblem.info/" rel="nofollow">XY Problem</a>. Your end goal is to parse that textual data into Python dictionaries. Adding quotes is the way you've come up with to do that (presumably you're then planning to use <code>eval()</code> to parse it), but it's the long way around.</p>
<p>Instead, parse it directly. You don't even need a regex, and it is much clearer what you're doing. Here's a quick and dirty attempt.</p>
<pre><code>from collections import OrderedDict

raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"

record = OrderedDict()
records = []
tokens = iter(raw.split())
previous_token = ""
for token in tokens:
    if previous_token == "{id:":
        record["id"] = token.lstrip("'")
        # get the rest of the ID up to the closing quote
        for token in tokens:
            record["id"] += " " + token
            if token.endswith("'"):
                record["id"] = record["id"].rstrip("'")
                break
    elif previous_token == "startdate:":
        record["startdate"] = token
    elif previous_token == "numvaluelist:":
        record["numvaluelist"] = token
    elif previous_token == "datelist:":
        record["datelist"] = token.partition("}")[0]
        # record is complete; start new one
        records.append(record)
        record = OrderedDict()
    previous_token = token
</code></pre>
<p>Once you have it as Python data, you can of course print it any way you like... including, just for fun, the format you asked for:</p>
<pre><code>for record in records:
    print("{%s}" % ", ".join(repr(k) + ": " + repr(record[k]) for k in record))
</code></pre>
| 1 | 2016-09-28T01:17:09Z | [
"python",
"regex",
"for-loop",
"dictionary"
]
|
Regex add quotes in Python so I can return Python Dictionary | 39,736,249 | <p>Using Python and regex, I would like to add quotes to each word. Currently, I am only able to add quotes at the first index. When I loop through my end results, I get a string. Instead, I would like a Python dictionary. To solve this problem, I think adding quotes will help me get a dictionary instead of a string. Can someone please guide me? </p>
<p><strong>Code</strong></p>
<pre><code>raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
line_re = re.compile(r'\{[^\}]+\}')
records = line_re.findall(raw)
record_re = re.compile(
r"""
id:\s*\'(?P<id>[^']+)\'\s*
startdate:\s*(?P<startdate>\d+)\s*
numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
datelist:\s*(?P<datelist>\d+)\s*
""",
re.X
)
record_parsed = record_re.search(line_re.findall(raw)[0])
record_parsed.groupdict()
# {'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'datelist': '42521', 'id': 'A(long) 11A'}
for record in records:
record_parsed = record_re.search(record)
print type(record)
</code></pre>
<p><strong>Current Output</strong></p>
<pre><code>{id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}
{id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }
{id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}
</code></pre>
<p><strong>Desired Output</strong> Everything in quotes</p>
<pre><code>{'id': 'A(long) 11A' 'startdate': '42521' 'numvaluelist': '0.1065599566767107' 'datelist': '42521'}
{'id': 'A(short) 11B' 'startdate': '42521' 'numvaluelist': '0.0038113334533441123' 'datelist': '42521' }
{'id': 'B(long) 11C' 'startdate': '42521' 'numvaluelist': '20.061623176440904' 'datelist': '42521'}
</code></pre>
| 1 | 2016-09-28T00:27:06Z | 39,736,588 | <p>Here's a way to parse it using regular expressions. Rather than matching each part as exactly as your original does, this takes a somewhat more generic approach.</p>
<pre><code>import re

raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"

line_re = re.compile(r'\{[^\}]+\}')
value_re = re.compile(r"(\w+): ('[^']*'|\S+)")

data = []
lines = line_re.findall(raw)
for line in lines:
    data_line = dict()
    values = re.findall(value_re, line)
    for (name, value) in values:
        if value[-1] == '}':
            value = value[:-1]  # to handle "foo}" without a space
        if value[:1] == "'":
            value = value[1:-1]  # strip quotes
        data_line[name] = value
    data.append(data_line)
print data
</code></pre>
| 1 | 2016-09-28T01:20:19Z | [
"python",
"regex",
"for-loop",
"dictionary"
]
|
Cleaning up broken XML in Python | 39,736,276 | <p>A server I don't control sends broken XML with characters such as '>', '&', '<' etc. in attributes and text.</p>
<p>A small sample:</p>
<pre><code><StockFormula Description="" Name="F_ÃâTURN" RankType="Higher" Scope="Universe" Weight="10.86%">
<Formula>AstTurnTTM>AstTurnPTM</Formula>
</StockFormula>
<Composite Name="Piotroski & Trends - <11@4w600k 70b" Weight="0%" RankType="Higher">
</Composite>
</code></pre>
<p>I settled on using the lxml module because it is case sensitive, is very fast and gets the job done. </p>
<p>How would I go about fixing up this type of XML? Basically, I am trying to replace all occurrences of the invalid characters with the proper escape sequences.</p>
<pre><code>import re
broken = '<StockFormula Description="" Name="F_ÃâTURN" RankType="Higher" Scope="Universe" Weight="10.86%">\n<Formula>AstTurnTTM>AstTurnPTM</Formula>\n<Composite Name="Piotroski & Trends - <11@4w600k 70b" Weight="0%" RankType="Higher">\n</Composite>'
print re.sub(r'(.*Name=".*)&(")', r'\g<1>&gt;\g<2>', broken)
</code></pre>
<p>Output:</p>
<pre><code><StockFormula Description="" Name="F_ÃŽââ¬TURN" RankType="Higher" Scope="Universe" Weight="10.86%">
<Formula>AstTurnTTM>AstTurnPTM</Formula>
</StockFormula>
<Composite Name="Piotroski & Trends - <11@4w600k 70b" Weight="0%" RankType="Higher">
</Composite>
</code></pre>
| 1 | 2016-09-28T00:32:32Z | 39,736,494 | <p>First, realize no XML parser can help you with "broken XML." XML parsers only operate over XML, which by definition must be <strong><a href="http://stackoverflow.com/a/25830482/290085">well-formed</a></strong>.</p>
<p>Second, it is not possible to repair "broken XML" in the general case. There are no rules governing "broken XML." Without a clear definition of "broken XML," you cannot be guaranteed to be able to process it and convert it to real XML.</p>
<p>That said, <a href="http://www.html-tidy.org/" rel="nofollow">HTML Tidy</a> does a decent job of repairing (X)HTML, and it has limited capabilities for repairing XML as well. It's your best bet for automated repair of "broken XML." There is a Python package, <a href="http://countergram.com/open-source/pytidylib/docs/index.html" rel="nofollow">PyTidyLib</a>, which wraps the HTML Tidy library. </p>
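That said, if the breakage really is limited to bare `&` and `<` inside quoted attribute values (as in the sample), a narrow pre-processing heuristic can sometimes rescue the input before parsing. This is only a sketch: it assumes attribute values never contain a literal `"`, it will double-escape entities that were already escaped, and note that a bare `>` in element text is actually well-formed XML and needs no fixing at all:

```python
import re

broken = '<Composite Name="Piotroski & Trends - <11@4w600k 70b" Weight="0%">'

def escape_attr(match):
    # escape bare & and < inside one quoted attribute value
    value = match.group(2).replace('&', '&amp;').replace('<', '&lt;')
    return match.group(1) + value + '"'

fixed = re.sub(r'(\w+=")([^"]*)"', escape_attr, broken)
print(fixed)
# <Composite Name="Piotroski &amp; Trends - &lt;11@4w600k 70b" Weight="0%">
```

Anything beyond this kind of narrowly-scoped repair is better left to a real repair tool like HTML Tidy.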
| 3 | 2016-09-28T01:06:18Z | [
"python",
"xml",
"lxml"
]
|
Have to display coins used in makeChange function? | 39,736,280 | <p>I am creating a makeChange function using a useIt or loseIt recursive call. I was able to determine the least amount of coins that's needed to create the amount; however, I am unsure as to how to display the actual coins used.</p>
<pre><code>def change(amount, coins):
    '''takes the amount and makes it out of the given coins in the list
    in order to return the least number of coins that's needed to create
    the value and the coins used'''
    if amount == 0:
        return 0
    if coins == [] or amount < 0:
        return float('inf')
    else:
        useIt = change(amount, coins[1:])
        loseIt = change(amount - coins[0], coins) + 1
        return min(useIt, loseIt)
</code></pre>
| 0 | 2016-09-28T00:33:15Z | 39,736,596 | <p>You want something like:</p>
<pre><code>def change(amount, coins):
    '''
    >>> change(10, [1, 5, 25])
    [5, 5]
    '''
    if not coins or amount <= 0:
        return []
    else:
        return min((change(amount - coin, coins) + [coin]
                    for coin in coins if amount >= coin), key=len)
</code></pre>
<p>As an aside, this is not a dynamic programming solution. This is a simple recursive algorithm which will not scale for very large inputs.</p>
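A memoized variant is one way to make it scale; this is a sketch (the names `change_memo` and `go` are mine), which relaxes the use-it-or-lose-it structure into a plain "minimum over all usable coins" recursion and caches subresults by amount:

```python
from functools import lru_cache

def change_memo(amount, coins):
    """Fewest coins (unlimited supply) summing to `amount`, or None
    if no combination works. Caching makes repeated sub-amounts cheap."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def go(amt):
        if amt == 0:
            return ()            # empty tuple: zero coins needed
        best = None
        for coin in coins:
            if coin <= amt:
                sub = go(amt - coin)
                if sub is not None and (best is None or len(sub) + 1 < len(best)):
                    best = sub + (coin,)
        return best              # None means amt is unreachable

    result = go(amount)
    return None if result is None else list(result)

print(change_memo(10, [1, 5, 25]))  # [5, 5]
```

Each distinct amount is solved once, so the recursion is polynomial in `amount * len(coins)` instead of exponential.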
| 0 | 2016-09-28T01:21:50Z | [
"python",
"recursion",
"dynamic-programming"
]
|
Have to display coins used in makeChange function? | 39,736,280 | <p>I am creating a makeChange function using a useIt or loseIt recursive call. I was able to determine the least amount of coins that's needed to create the amount; however, I am unsure as to how to display the actual coins used.</p>
<pre><code>def change(amount, coins):
    '''takes the amount and makes it out of the given coins in the list
    in order to return the least number of coins that's needed to create
    the value and the coins used'''
    if amount == 0:
        return 0
    if coins == [] or amount < 0:
        return float('inf')
    else:
        useIt = change(amount, coins[1:])
        loseIt = change(amount - coins[0], coins) + 1
        return min(useIt, loseIt)
</code></pre>
| 0 | 2016-09-28T00:33:15Z | 39,737,314 | <p>First of all the semantics of your names for the use-it-or-lose-it strategy are backwards. "Use it" means the element in question is being factored into your total result. "Lose it" means you're simply recursing on the rest of the list without considering the current element. So I'd start by switching those names around. And also you should be discarding the current element in the recursive call, whether or not you use it:</p>
<pre><code>loseIt = change(amount,coins[1:])
useIt = change(amount-coins[0], coins[1:]) + 1
</code></pre>
<p>Now that we've fixed that, on to your question about passing the result down. You can take the <code>min</code> of tuples or lists, and it will start by comparing the first elements of each, and continue on successive elements if no definite <code>min</code> is yet found. For example if you pass around a list with an integer and a list:</p>
<pre><code>>>> min([2, [1,2]], [3, [0,2]])
[2, [1, 2]]
</code></pre>
<p>You can also combine any number of elements in this form with an expression like this:</p>
<pre><code>>>> [x+y for x,y in zip([2, [1,2]], [3, [0,2]])]
[5, [1, 2, 0, 2]]
</code></pre>
<p>but your case is easier since you always are just combining 2 of these, so it can be written more like: </p>
<pre><code>[useIt[0] + 1, useIt[1] + [coins[0]]]
</code></pre>
<p>Note that the first element is an integer and the second a list.</p>
<p>in your case the implementation of this idea would look something like:</p>
<pre><code>def change(amount, coins):
    if amount == 0:
        return [0, []]
    if coins == [] or amount < 0:
        return [float('inf'), []]
    else:
        loseIt = change(amount, coins[1:])
        useIt = change(amount - coins[0], coins[1:])
        # Here we add the current coin, indicating that we "use" it
        useIt = [useIt[0] + 1, useIt[1] + [coins[0]]]
        return min(useIt, loseIt)
</code></pre>
| 0 | 2016-09-28T02:57:15Z | [
"python",
"recursion",
"dynamic-programming"
]
|
Issue with registered ec2 creation variable | 39,736,288 | <p>I have a playbook that creates an ec2 instance and from that creation I want to register the output as a var. From there I would use the registered var to create and attach an ebs volume. However I'm running into the following error. </p>
<pre><code>- name: Create EC2 instance for zone A
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_groups }}"
    image: "{{ ami }}"
    instance_type: "{{ instance_type }}"
    wait: true
    region: "{{ ec2_region }}"
    vpc_subnet_id: "{{ subneta }}"
    assign_public_ip: "{{ public_choice }}"
    zone: "{{ zonea }}"
    count: 1
    instance_tags:
      Name: "db{{ item }}a.{{ env }}"
      envtype: "{{ envtype }}"
  register: ec2
  with_sequence: "start=1 end={{ num }}"

- debug:
    msg: "{{ ec2 }}"

- debug:
    msg: "{{ ec2.instance_ids }}"
<p>With this I get the following output</p>
<pre><code>dumbledore@ansible1a:/etc/ansible/roles/db_ec2/tasks > ansible-playbook db_ec2.yml -e 'env=qa num=1 ebs=true'
______
< PLAY >
------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
______________________
< TASK [db_ec2 : fail] >
----------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
_______________________
< TASK [db_ec2 : debug] >
-----------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
ok: [localhost] => {
"msg": "If you need to change any default variables for this playbook edit vars/qa.yml and vars/ebs.yml for ebs configs"
}
_________________________
< TASK [db_ec2 : include] >
-------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
included: /etc/ansible/roles/db_ec2/tasks/./db_create.yml for localhost
________________________________________________
< TASK [db_ec2 : Create EC2 instance for zone A] >
------------------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
changed: [localhost] => (item=1)
_______________________
< TASK [db_ec2 : debug] >
-----------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
ok: [localhost] => {
"msg": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_no_log": false,
"changed": true,
"instance_ids": [
"i-1046c108"
],
"instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": {
"/dev/sda1": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-55812edd"
}
},
"dns_name": "",
"ebs_optimized": false,
"groups": {
"sg-749f3c0d": "qa-ssh",
"sg-8f983bf6": "qa-db"
},
"hypervisor": "xen",
"id": "i-1046c108",
"image_id": "ami-55e31a35",
"instance_type": "m4.xlarge",
"kernel": null,
"key_name": "ccpkey",
"launch_time": "2016-09-28T00:28:00.000Z",
"placement": "us-west-2a",
"private_dns_name": "ip-10-50-36-201.us-west-2.compute.internal",
"private_ip": "10.50.36.201",
"public_dns_name": "",
"public_ip": null,
"ramdisk": null,
"region": "us-west-2",
"root_device_name": "/dev/sda1",
"root_device_type": "ebs",
"state": "running",
"state_code": 16,
"tags": {
"Name": "db1a.qa",
"envtype": "qa-db"
},
"tenancy": "default",
"virtualization_type": "hvm"
}
],
"invocation": {
"module_args": {
"assign_public_ip": false,
"aws_access_key": null,
"aws_secret_key": null,
"count": 1,
"count_tag": null,
"ebs_optimized": false,
"ec2_url": null,
"exact_count": null,
"group": [
"qa-db",
"qa-ssh"
],
"group_id": null,
"id": null,
"image": "ami-55e31a35",
"instance_ids": null,
"instance_profile_name": null,
"instance_tags": {
"Name": "db1a.qa",
"envtype": "qa-db"
},
"instance_type": "m4.xlarge",
"kernel": null,
"key_name": "ccpkey",
"monitoring": false,
"network_interfaces": null,
"placement_group": null,
"private_ip": null,
"profile": null,
"ramdisk": null,
"region": "us-west-2",
"security_token": null,
"source_dest_check": true,
"spot_price": null,
"spot_type": "one-time",
"spot_wait_timeout": 600,
"state": "present",
"tenancy": "default",
"termination_protection": false,
"user_data": null,
"validate_certs": true,
"volumes": null,
"vpc_subnet_id": "subnet-5d625c39",
"wait": true,
"wait_timeout": 300,
"zone": "us-west-2a"
},
"module_name": "ec2"
},
"item": "1",
"tagged_instances": []
}
]
}
}
_______________________
< TASK [db_ec2 : debug] >
-----------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
fatal: [localhost]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'instance_ids'"}
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
localhost : ok=4 changed=1 unreachable=0 failed=1
</code></pre>
<p>So I can see that the variable that should exist does in the output of the full ec2 variable. But when trying to use the key specific variable of ec2.instance_ids it fails...</p>
<p>EDIT:</p>
<p>This appears to be specific to with_sequence. If I remove the with sequence and change it up to something like this. </p>
<pre><code>- name: Create EC2 instance for zone A
  ec2:
    key_name: "{{ keypair }}"
    group: "{{ security_groups }}"
    image: "{{ ami }}"
    instance_type: "{{ instance_type }}"
    region: "{{ ec2_region }}"
    vpc_subnet_id: "{{ subneta }}"
    assign_public_ip: "{{ public_choice }}"
    zone: "{{ zonea }}"
    count: "{{ num }}"
    wait: true
    instance_tags:
      envtype: "{{ envtype }}"
  register: ec2

- debug:
    msg: "{{ item.id }}"
  with_items: "{{ ec2.instances }}"
<p>I no longer get any errors and it prints the instance ID without issue. </p>
| 0 | 2016-09-28T00:34:31Z | 39,738,088 | <p>I'm not sure how <code>with_sequence</code> affects things here, but I see at least one basic problem.</p>
<p>When you register the output of ec2 task in a variable named "ec2", you can refer to instance_ids with the following -</p>
<pre><code>ec2.results[0].instance_ids
</code></pre>
<p>The Ansible task is complaining because there is no top-level key named "instance_ids" within "ec2"; rather, it is present inside the "results" key within "ec2".</p>
<p>Hope this makes sense unless I am interpreting your question incorrectly.</p>
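The shape difference is easy to see if you mock the registered variable up as plain Python data. A sketch, showing only the relevant keys (values mirror the output above):

```python
# When the ec2 task runs under a loop (with_sequence), each iteration's
# output is wrapped one level deeper, in a "results" list:
looped = {
    "changed": True,
    "results": [
        {"instance_ids": ["i-1046c108"], "item": "1"},
    ],
}

# Without a loop, instance_ids sits at the top level of the variable:
unlooped = {"changed": True, "instance_ids": ["i-1046c108"]}

print(looped["results"][0]["instance_ids"])  # ['i-1046c108']
print(unlooped["instance_ids"])              # ['i-1046c108']
```

That is exactly why `ec2.instance_ids` works in the loop-free playbook but fails with `with_sequence`.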
| 1 | 2016-09-28T04:31:45Z | [
"python",
"ansible",
"ansible-playbook"
]
|
Cannot access a simple database in Django | 39,736,293 | <p>I went through the Django docs' polls example. Now I have a database <code>name.info.db</code> available. I put it in the same directory as manage.py, which is also where db.sqlite3 is.
I changed <code>settings.py</code> to</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'name.info.db'),
    }
}
</code></pre>
<p>My models.py is</p>
<pre><code>class PItable(models.Model):
    pid_text = models.CharField(max_length=200)
    lname_text = models.CharField(max_length=200, null=True)
    fname_text = models.CharField(max_length=200, null=True)
    affs_text = models.CharField(max_length=2000, null=True)
    pmidlist_text = models.CharField(max_length=2000, null=True)
    clustering_text = models.CharField(max_length=2000, null=True)

    def __str__(self):
        return self.fname_text
</code></pre>
<p>I deleted Question and Choice (from the polls example) from views and models.
I did</p>
<pre><code>python manage.py makemigrations pidb
python manage.py migrate
</code></pre>
<p>I got the following message, which didn't show my database and still showed the deleted models (Question, Choice). What did I do wrong here? When I run <code>PItable.objects.filter(id=1)</code> from the Python shell, it returns an empty result []. Thanks for any help!!</p>
<pre><code>$ python manage.py sqlmigrate pidb 0001
BEGIN;
--
-- Create model Choice
--
CREATE TABLE "pidb_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "votes" integer NOT NULL);
--
-- Create model Question
--
CREATE TABLE "pidb_question" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "question_text" varchar(200) NOT NULL, "pub_date" datetime NOT NULL);
--
-- Add field question to choice
--
ALTER TABLE "pidb_choice" RENAME TO "pidb_choice__old";
CREATE TABLE "pidb_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "votes" integer NOT NULL, "question_id" integer NOT NULL REFERENCES "pidb_question" ("id"));
INSERT INTO "pidb_choice" ("id", "choice_text", "question_id", "votes") SELECT "id", "choice_text", NULL, "votes" FROM "pidb_choice__old";
DROP TABLE "pidb_choice__old";
CREATE INDEX "pidb_choice_7aa0f6ee" ON "pidb_choice" ("question_id");
COMMIT;
</code></pre>
| 1 | 2016-09-28T00:35:10Z | 39,738,384 | <p>Read about the <a href="https://docs.djangoproject.com/en/1.10/ref/models/options/#managed" rel="nofollow">managed option</a>. If you set it to <code>False</code>, no database table creation or deletion operations will be performed for this model. This is useful if the model represents an existing table or a database view that has been created by some other means.</p>
<pre><code>class PItable(models.Model):
    pid_text = models.CharField(max_length=200)
    lname_text = models.CharField(max_length=200, null=True)
    fname_text = models.CharField(max_length=200, null=True)
    affs_text = models.CharField(max_length=2000, null=True)
    pmidlist_text = models.CharField(max_length=2000, null=True)
    clustering_text = models.CharField(max_length=2000, null=True)

    class Meta(object):
        managed = False

    def __str__(self):
        return self.fname_text
</code></pre>
| 0 | 2016-09-28T04:59:25Z | [
"python",
"django",
"database"
]
|
`manage.py runserver` and Ctrl+C (Django) | 39,736,330 | <p>When I quit Django's <code>manage.py runserver</code> with Ctrl+C, do the threads running HTTP requests finish properly, or are they interrupted in the middle?</p>
| 2 | 2016-09-28T00:40:47Z | 39,741,069 | <p><strong>TL;DR:</strong> running HTTP requests are stopped when Ctrl+C is hit on the Django dev server</p>
<p>I thought your question is really interesting and investigated:</p>
<p>I made a view that takes 10 seconds to execute and after that sends a response.
To test for your behaviour I stopped the development-server <code>manage.py runserver</code> using Ctrl+C and checked for the results.</p>
<p>My base test:</p>
<pre><code>class TestView ( generic.View ):
def get ( self, request ):
import time
time.sleep(10)
response = HttpResponse('Done.')
return response
</code></pre>
<ul>
<li>Normal execute (10s runtime): Displays the msg <code>Done.</code></li>
<li>interrupted execute (Ctrl+C while the request is running): Browser error, the host cannot be reached</li>
</ul>
<p>So far everything is as expected. But I played around a little bit, because Ctrl+C in Python is <strong>not a full stop</strong>, but actually handled rather conveniently: as soon as Ctrl+C is hit, a <code>KeyboardInterrupt</code> aka <strong>an Exception is raised</strong> (equivalent to this):</p>
<pre><code>raise KeyboardInterrupt()
</code></pre>
<p>So in your command-line based program you can put the following:</p>
<pre><code>try:
some_action_that_takes_a_while()
except KeyboardInterrupt:
    print('The user stopped the program.')
</code></pre>
<p>Ported to Django, the new view looks like this:</p>
<pre><code>def get ( self, request ):
import time
slept_for = 0
try:
for i in range( 100 ):
slept_for += 0.1
time.sleep( 0.1 )
except KeyboardInterrupt:
pass
response = HttpResponse( 'Slept for: ' + str( slept_for ) + 's' )
return response
</code></pre>
<ul>
<li>Normal execute (10s runtime): Displays the msg <code>Slept for: 10s</code></li>
<li>interrupted execute (Ctrl+C while the request is running): Browser error, the host cannot be reached</li>
</ul>
<p>So no change in behaviour here. Out of interest I changed one line, but the result didn't change; I used</p>
<pre><code>slept_for = 1000*1000
</code></pre>
<p>instead of </p>
<pre><code>time.sleep( 0.1 )
</code></pre>
<p>So to finally answer your question: on Ctrl+C the dev server shuts down immediately and <em>running HTTP requests</em> are <strong>not finished</strong>.</p>
| 2 | 2016-09-28T07:42:37Z | [
"python",
"django",
"signals"
]
|
hash table: nonempty slot already contains the key, the old data value is replaced with the new data value | 39,736,464 | <p>I am learning hash tables in Python and one question has come up.</p>
<p>When the hash function begins, I should generate an empty "map" list. But why is it that if a "nonempty slot already contains the key, the old data value is replaced with the new data value"? Shouldn't it find the next empty slot and store the value there instead? Why replace?</p>
<p><a href="https://interactivepython.org/runestone/static/pythonds/SortSearch/Hashing.html" rel="nofollow">https://interactivepython.org/runestone/static/pythonds/SortSearch/Hashing.html</a></p>
<blockquote>
<p>hashfunction implements the simple remainder method. The collision resolution technique is linear probing with a "plus 1" rehash function. The put function (see Listing 3) assumes that there will eventually be an empty slot unless the key is already present in the self.slots. It computes the original hash value and if that slot is not empty, iterates the rehash function until an empty slot occurs. If a nonempty slot already contains the key, the old data value is replaced with the new data value. Dealing with the situation where there are no empty slots left is an exercise.</p>
</blockquote>
| 1 | 2016-09-28T01:00:14Z | 39,749,215 | <p>First, we need to differentiate between the <code>hash value</code>, <code>key</code> and <code>value</code>.</p>
<p><code>Hash value</code> is the ID of the slot in the hash table and it is generated by the hash function.</p>
<p><code>Key</code> is the data that you want to map from.</p>
<p><code>value</code> is the data that you want to map to.</p>
<p>So, Mapping means that you have a unique key that refers to some value, and the values are not necessarily distinct.</p>
<p>When you try to add a new slot with <code>key</code> and <code>value</code> the put function will do the following:</p>
<p>It hashes the key to get the ID of the slot in the list and if the slot with this ID is empty, the insertion is done successfully, otherwise there are two paths:</p>
<p>1- The found slot is not empty but its key doesn't equal the new key, this means that 2 different keys had the same <code>hash value</code>, so this is considered collision and it is resolved by the ways mentioned in the link you sent.</p>
<p>2- The found slot is not empty but its key equals the new key, this means that you are updating this slot, so it will replace the old <code>value</code> with the new one.</p>
<p>Example:
If the hash contains these slots:</p>
<pre><code>"foo": 5
"bar": 7
</code></pre>
<p>Then you tried to insert <code>hash["foo"] = 10</code>, this will hash the key (<code>foo</code>) and find its slot ID in the hash table(list), and it will also find that there exists a slot with key <code>foo</code>, so it will replace the value and the hash table will become like this:</p>
<pre><code>"foo": 10
"bar": 7
</code></pre>
<p>But if you tried to insert <code>hash["abc"] = 7</code>, this will hash the key <code>abc</code>, and if the <code>hash value</code> is mapping to a non-empty slot with a key different from <code>abc</code>, in this case the put function will consider this a collision and will try to find another empty slot.</p>
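<p>The put logic described above can be sketched in a few lines (a hedged illustration with made-up helper names, not the textbook's exact code):</p>

```python
def hash_func(key, size):
    # simple remainder method on the sum of the character codes
    return sum(ord(c) for c in str(key)) % size

def put(slots, data, key, value):
    size = len(slots)
    i = hash_func(key, size)
    # linear probing with a "plus 1" rehash until we find an empty
    # slot or the slot that already holds this key
    while slots[i] is not None and slots[i] != key:
        i = (i + 1) % size
    slots[i] = key
    data[i] = value            # same key: the old value is replaced

slots = [None] * 11
data = [None] * 11
put(slots, data, "foo", 5)
put(slots, data, "bar", 7)
put(slots, data, "foo", 10)    # update, not a collision
print(data[slots.index("foo")])   # 10
```

<p>A full implementation would also handle a completely full table, which the tutorial leaves as an exercise.</p>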
| 1 | 2016-09-28T13:38:35Z | [
"python",
"data-structures",
"hash",
"hashmap",
"hashtable"
]
|
Is it possible to overload the ~ operator on strings? | 39,736,575 | <pre><code>>>> a = 55
>>> b = "hello"
>>> ~a # this will work
>>> ~b # this will fail
</code></pre>
<p>No real surprise for the failure above, but suppose I wanted to overload ~ operator to work on strings. I'm fairly new to Python, so I did some digging on this and found a few tantalizing suggestions that I just couldn't get working. I know I can create some kind of new class, but I'd like the following to work as well:</p>
<pre><code>>>> ~"alpha bravo"
</code></pre>
<p>Is this possible? If so, how? How does one do this kind of overload?</p>
| 3 | 2016-09-28T01:17:56Z | 39,736,587 | <p>No, this is not possible in Python. You can not add new methods to built-in types in a way that will work reliably.</p>
<p>One thing you could do is subclass string, and define the magic method <code>__invert__</code>. But it would not work on string literals, only on instances of your subclass. </p>
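<p>A quick sketch of that subclassing approach (the reversed-string behaviour below is just an arbitrary choice for demonstration):</p>

```python
class MyStr(str):
    def __invert__(self):
        # give ~ whatever meaning you like; here it reverses the string
        return MyStr(self[::-1])

s = MyStr("alpha bravo")
print(~s)   # ovarb ahpla
```

<p>A bare literal like <code>~"alpha bravo"</code> still fails, because literals are plain <code>str</code> instances.</p>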
| 1 | 2016-09-28T01:20:08Z | [
"python",
"string",
"overloading",
"operator-keyword"
]
|
List appended in function doesn't work | 39,736,716 | <p>This is my class:</p>
<pre><code>from Student import Student
class Class:
stulist=[]
def __init__ (self, classname, numstudents):
self.classname=classname
self.numstudents=numstudents
def addStudent(self, stuNum, stuName, stuGrades):
Class.stulist.append(Student(stuName, stuGrades))
def getPlace(self):
print (Class.stulist[0].printLn()) #printLn is function in Student
print (Class.stulist[1].printLn())
print (Class.stulist[2].printLn())
</code></pre>
<p>This is my runner:</p>
<pre><code>from Class import Class
class ClassRunner():
def main():
test=Class("Comp sci 1", 3)
test.addStudent(0, "Jimmy","4 - 100 90 80 60")
test.addStudent(1, "Sandy","4 - 100 100 80 70")
test.addStudent(2,"Fred","4 - 50 50 70 68")
test.getPlace()
main()
</code></pre>
<p>my output shows:</p>
<p>Fred = 50 50 70 68</p>
<p>Fred = 50 50 70 68</p>
<p>Fred = 50 50 70 68</p>
<p>But i want it to show:</p>
<p>Jimmy = 100 90 80 60</p>
<p>Sandy = 100 100 80 70</p>
<p>Fred = 50 50 70 68</p>
<p>what am I doing wrong? Thank you!</p>
| 2 | 2016-09-28T01:40:38Z | 39,736,758 | <p>In your <code>Class</code> class, use <code>self.stulist</code> instead of <code>Class.stulist</code>, and create the list inside <code>__init__</code> so each instance gets its own. </p>
<p>You are modifying the class itself, not the instance variable.</p>
<pre><code>class Class:
    def __init__ (self, classname, numstudents):
        self.classname = classname
        self.numstudents = numstudents
        self.stulist = []   # instance attribute: a fresh list per instance
    def addStudent(self, stuNum, stuName, stuGrades):
        self.stulist.append(Student(stuName, stuGrades))
    def getPlace(self):
        print (self.stulist[0].printLn()) #printLn is function in Student
        print (self.stulist[1].printLn())
        print (self.stulist[2].printLn())
</code></pre>
<p>References to <code>Class</code> are going to modify the actual <code>Class</code> object itself.</p>
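<p>Note that appending through <code>self</code> still mutates a list defined in the class body, because all instances share that one list (a small illustration, not from the original answer):</p>

```python
class C(object):
    shared = []            # class attribute: one list shared by all instances

a = C()
b = C()
a.shared.append(1)         # looks like an instance operation...
print(b.shared)            # [1] -- but b sees the same list
```

<p>That is why the safest fix is to create the list inside <code>__init__</code> (<code>self.stulist = []</code>), so every instance gets its own fresh list.</p>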
| 0 | 2016-09-28T01:46:21Z | [
"python"
]
|
Python2.7(on Windows) Need to capture serial port output into log files during Python/Robot script run | 39,736,740 | <p>We are testing networking devices, and test interaction with them is done over serial ports. Python 2.7 on Windows is used to achieve this via the PySerial module.
The scripts are run using Robot framework.</p>
<p>We observe that the Robot logs do not contain the serial device interaction dialogues. </p>
<p>We tried checking on Robot framework forums and it is unlikely that such support exists at Robot framework level.</p>
<p>We need to implement this in Python.
How can the following be achieved:
I) Basic requirement: All script interaction with the (multiple) test devices on serial port needs to be captured into a log file
II) Advanced requirement: while the script is not actively interacting with the test device there has to be continuous background monitoring of the device under test over serial ports for any errors/crashes</p>
<p>Thanks!</p>
| 0 | 2016-09-28T01:43:45Z | 39,766,125 | <p>I may be incorrect, but perhaps you want to capture the data sent/received between the computer and the device through the serial port. If that is the case, a serial port sniffer will be required. Linux and Mac OS X do not support sniffing; however, sniffing tools are available for Windows.</p>
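<p>For requirement I (logging what the script itself sends and receives), one possible sketch is to wrap the PySerial port object so every read/write is mirrored into a log file; the class and names below are illustrative, not an established API:</p>

```python
import io
import logging

class LoggingPort(object):
    """Wrap a pyserial-like object and log all traffic on it."""
    def __init__(self, port, logger):
        self._port = port
        self._log = logger

    def read(self, size=1):
        data = self._port.read(size)
        self._log.debug('RX %r', data)
        return data

    def write(self, data):
        self._log.debug('TX %r', data)
        return self._port.write(data)

    def __getattr__(self, name):
        # delegate everything else (baudrate, timeout, close, ...)
        return getattr(self._port, name)

logging.basicConfig(filename='serial.log', level=logging.DEBUG)
# real usage would wrap an actual port, e.g.
#   port = LoggingPort(serial.Serial('COM3'), logging.getLogger('uart'))
# demo with an in-memory stand-in for the device:
port = LoggingPort(io.BytesIO(b'OK\r\n'), logging.getLogger('uart'))
print(port.read(2))   # b'OK' (and the same bytes land in serial.log)
```

<p>Requirement II would additionally need a background thread polling the port while the test script is idle.</p>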
| 0 | 2016-09-29T09:11:50Z | [
"python",
"python-2.7",
"testing",
"automated-tests",
"robotframework"
]
|
Installing pygame on ubuntu | 39,736,745 | <p>I am trying to install pygame on an Ubuntu 16.04 system. My default python is 2.7.12. I opened terminal and tried:</p>
<pre><code>sudo apt-get install python-pygame
</code></pre>
<p>I got this message:</p>
<pre><code>Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python-pygame : Depends: python-numpy (>= 1:1.8.0) but it is not going to be installed
Depends: python-numpy-abi9
E: Unable to correct problems, you have held broken packages.
</code></pre>
<p>I then tried to install numpy and got the same message except:</p>
<pre><code>The following packages have unmet dependencies:
python-numpy : Depends: python:any (>= 2.7.5-5~)
</code></pre>
<p>What should I do?</p>
| 0 | 2016-09-28T01:44:17Z | 39,781,796 | <p>Figured it out. Found a terminal command to reinstall python:</p>
<pre><code>sudo apt-get purge python && sudo apt-get install python2.7
</code></pre>
| 0 | 2016-09-30T00:06:13Z | [
"python",
"linux",
"ubuntu",
"numpy",
"pygame"
]
|
In python, how to restore the function of os.chdir() if os.chdir = 'path' has been implemented | 39,736,773 | <p>In path setup, I wrongly wrote the code: os.chdir = '\some path', which turns the function os.chdir() into a string. Is there any quick way to restore the function without restarting the software? Thanks!</p>
| 2 | 2016-09-28T01:48:32Z | 39,736,831 | <p>Kicking <code>os</code> out of the modules cache can make it freshly importable again:</p>
<pre><code>>>> import sys, os
>>> os.chdir = "d'oh!"
>>> os.chdir()
TypeError: 'str' object is not callable
>>> del sys.modules['os']
>>> import os
>>> os.chdir
<function posix.chdir>
</code></pre>
| 2 | 2016-09-28T01:53:55Z | [
"python"
]
|
In python, how to restore the function of os.chdir() if os.chdir = 'path' has been implemented | 39,736,773 | <p>In path setup, I wrongly wrote the code: os.chdir = '\some path', which turns the function os.chdir() into a string. Is there any quick way to restore the function without restarting the software? Thanks!</p>
| 2 | 2016-09-28T01:48:32Z | 39,736,839 | <pre><code>>>> import os
</code></pre>
<p>Assign to the <code>chdir</code> method a string value:</p>
<pre><code>>>> os.chdir = '\some path'
>>> os.chdir
'\some path'
</code></pre>
<p>Use <code>reload</code> to, well, reload the module. <code>reload</code> will reload a <em>previously imported</em> module. (In Python 3, <code>reload</code> is no longer a builtin; import it with <code>from importlib import reload</code>.)</p>
<pre><code>>>> reload(os)
>>> os.chdir
<built-in function chdir>
</code></pre>
| 1 | 2016-09-28T01:55:44Z | [
"python"
]
|
Yum is not working anymore in scientific linux after python update | 39,736,788 | <p>I'm using Scientific Linux on a remote machine. I tried to install Python 2.7 on it. After that, yum and some other Python packages are not working (it says "no module named yum"). I searched online and it seems I should not have touched the system Python, as that breaks some of the system tools. Is there a way to reinstall the previous Python (which was 2.6)? I already tried to install Python 2.6 by downloading the package, but yum is still not working.</p>
| 0 | 2016-09-28T01:49:30Z | 39,917,999 | <p>Once you get your system back to normal, add Python 2.7 as a Software Collection - it installs "alongside" the original Python 2.6 rather than replacing it, so both are available without collision. Get 2.7 and others from softwarecollections.org.</p>
| 0 | 2016-10-07T13:03:51Z | [
"python",
"linux",
"redhat",
"yum"
]
|
Can generatorizing a list improve Python performance? | 39,736,863 | <pre><code>def numbers(mi, ma):
return [n for n in range(mi, ma + 1)]
def gen(xs):
return (x for x in xs)
example = gen(numbers(10, 20))
</code></pre>
<p>In this example, can <code>gen</code> improve iteration performance of <code>numbers</code>? Why (not)?</p>
<pre><code>def numbersGen(mi, ma):
return gen([n for n in range(mi, ma + 1)]) # Generator from list comprehension?
</code></pre>
<p>Can Python get as lazy as Haskell?</p>
| 0 | 2016-09-28T01:58:58Z | 39,736,913 | <p>In this particular case, <code>range</code> is already about as lazy as you can hope for. There's no reason to further wrap its output in layers of indirection. Of course, the usual caveats of Python generators vs. Haskell laziness apply; namely, iterating through multiple times, or continuing from a specific point multiple times, is painful in Python, but the memory usage is generally more predictable as a result.</p>
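<p>A rough way to see the trade-off is to compare memory footprints (exact byte counts vary by Python version):</p>

```python
import sys

xs = [n for n in range(10**6)]   # list: every element materialized up front
g = (x for x in xs)              # generator: a constant-size wrapper

print(sys.getsizeof(xs))   # several megabytes
print(sys.getsizeof(g))    # on the order of 100 bytes, regardless of length
```

<p>Iteration itself is not faster through the extra generator layer -- if anything it adds a small per-item overhead -- but memory stays flat until you actually consume the values.</p>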
| 0 | 2016-09-28T02:05:12Z | [
"python",
"performance",
"python-3.x",
"generator"
]
|
Use Python to calculate P**n while n approach +oo and P is a matrix | 39,736,887 | <p>The code is </p>
<pre><code>import sympy as sm
import numpy as np
k=sm.Symbol('k')
p=np.matrix([[1./2,1./4,1./4],[1./2,0,1./2],[1./4,0,3./4]])
sm.limit(p**k,k,sm.oo)
</code></pre>
<p>It raises 'TypeError: exponent must be an integer'. But if I change the matrix to a constant, like this:
<code>sm.limit(2**k,k,sm.oo)</code>
, it prints the correct answer. So how can I deal with this problem? Thanks for your help!</p>
| 2 | 2016-09-28T02:01:57Z | 39,752,785 | <p>Use SymPy matrices:</p>
<pre><code>In [5]: p = sm.Matrix(p)
</code></pre>
<p>Then exponentiation by a symbol will work:</p>
<pre><code>In [6]: p**k
Out[6]:
[pretty-printed 3x3 matrix omitted -- it is too wide to reproduce here and was
 garbled in extraction.  Every entry is a linear combination of the three
 eigenvalue powers (-0.154508497187474)**k, 0.404508497187474**k and 1.0**k;
 the (1, 1) entry, for example, is

   0.216542364659101*(-0.154508497187474)**k
 + 0.419821271704536*0.404508497187474**k
 + 0.363636363636364*1.0**k ]
</code></pre>
<p>At this point you could apply the limit to the single components:</p>
<pre><code>In [9]: (p**k).applyfunc(lambda x: limit(x, k, sm.oo))
Out[9]:
⎡∞  ∞  ∞⎤
⎢        ⎥
⎢∞  ∞  ∞⎥
⎢        ⎥
⎣∞  ∞  ∞⎦
</code></pre>
<p>This could be either a floating-point approximation error or a bug. Trying another way:</p>
<pre><code>In [15]: p**10000000000000
Out[15]:
⎡0.363636363636364  0.0909090909090909  0.545454545454545⎤
⎢                                                         ⎥
⎢0.363636363636364  0.0909090909090909  0.545454545454545⎥
⎢                                                         ⎥
⎣0.363636363636364  0.0909090909090909  0.545454545454546⎦
</code></pre>
<p>I strongly recommend avoiding such operations with floating-point numbers; better to define a Matrix of exact SymPy numbers:</p>
<pre><code>In [19]: p = Matrix([[sm.S.One/2, sm.S.One/4, sm.S.One/4], [sm.S.One/2, 0, sm.S.One/2], [sm.S.One/4, 0, sm.S.One*3/4]])
In [20]: p
Out[20]:
⎡1/2  1/4  1/4⎤
⎢             ⎥
⎢1/2   0   1/2⎥
⎢             ⎥
⎣1/4   0   3/4⎦
</code></pre>
<p>Now if you try <code>simplify(p**k)</code> you'll get the closed-form expressions for your matrix power:</p>
<pre><code>In [27]: simplify(p**k)
Out[27]:
[closed-form 3x3 matrix omitted -- the pretty-printed output is several
 screens wide and was garbled in extraction.  Each entry is an exact
 expression in sqrt(5): a combination of the powers 2**(-k), (1 + sqrt(5))**k,
 (-sqrt(5) + 1)**k and 64**k with rational coefficients over denominators
 such as 11*(-25 + 13*sqrt(5))]
</code></pre>
<p>Unfortunately there are some bugs in SymPy that prevent the calculation of this limit. I don't know if this expression will be useful to you.</p>
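<p>As a plain-Python cross-check of the limit (independent of SymPy, added here as an illustration): raising <code>p</code> to a high power numerically reproduces the stationary rows <code>[4/11, 1/11, 6/11]</code> seen in <code>Out[15]</code> above.</p>

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][q] * b[q][j] for q in range(n)) for j in range(n)]
            for i in range(n)]

p = [[0.5, 0.25, 0.25],
     [0.5, 0.0, 0.5],
     [0.25, 0.0, 0.75]]

result = p
for _ in range(199):   # p**200 is numerically indistinguishable from the limit
    result = matmul(result, p)

print([round(x, 6) for x in result[0]])   # [0.363636, 0.090909, 0.545455]
```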
| 1 | 2016-09-28T16:18:45Z | [
"python",
"sympy"
]
|
chcp 65001 codepage results in program termination without any error | 39,736,901 | <p><strong>Problem</strong><br>
The problem arises when I want to <strong>input</strong> a Unicode character in the Python interpreter (for simplicity I have used a-umlaut in the example, but I first encountered this with Farsi characters). Whenever I use Python with the <code>chcp 65001</code> code page and then try to input even one Unicode character, Python exits without any error.</p>
<p>I have spent days trying to solve this problem to no avail. But today, I found a thread on <a href="https://bugs.python.org/issue1602" rel="nofollow">python website</a>, another on <a href="https://bugs.mysql.com/bug.php?id=66682" rel="nofollow">MySQL</a> and another on Lua-users which issues were raised regarding this sudden exit, although without any solution and some saying that <code>chcp 65001</code> is inherently broken.</p>
<p>It would be good to know once and for all whether this problem is chcp-design-related or there is a possible workaround.</p>
<p><strong>Reproduce Error</strong></p>
<p><code>chcp 65001</code></p>
<blockquote>
<p>Python 3.X:</p>
</blockquote>
<p>Python shell</p>
<p><code>print('ä')</code></p>
<p>result: it just exits the shell</p>
<p><strong>however</strong>, this works: <code>python.exe -c "print('ä')"</code>
and also this: <code>print('\u00e4')</code></p>
<p>result: ä</p>
<blockquote>
<p>in Luajit2.0.4</p>
</blockquote>
<p><code>print('ä')</code></p>
<p>result: it just exits the shell</p>
<p>however this works: <code>print('\xc3\xa4')</code></p>
<p><strong>I have come up with this observation so far:</strong></p>
<ol>
<li>direct output with the command prompt works.</li>
<li>Unicode-based, hex-based equivalent of the character works.</li>
</ol>
<p><strong>So</strong>
this is not a Python bug, and we can't use a Unicode character directly in CLI programs in the Windows command prompt or any of its wrappers like ConEmu or Cmder (I am using Cmder to be able to see and use Unicode characters in the Windows shell, and I have done so without any problem). Is this correct?</p>
| 3 | 2016-09-28T02:03:52Z | 39,745,938 | <p>To use Unicode in the Windows console for Python 2.7 and 3.x (prior to 3.6), install and enable <a href="https://pypi.python.org/pypi/win_unicode_console">win_unicode_console</a>. This uses the wide-character functions <a href="https://msdn.microsoft.com/en-us/library/ms684958"><code>ReadConsoleW</code></a> and <a href="https://msdn.microsoft.com/en-us/library/ms687401"><code>WriteConsoleW</code></a>, just like other Unicode-aware console programs such as cmd.exe and powershell.exe. For Python 3.6, a new <code>io._WindowsConsoleIO</code> raw I/O class has been added. It reads and writes UTF-8 encoded text (for cross-platform compatibility with Unix -- "get a byte" -- programs), but internally it uses the wide-character API by transcoding to and from UTF-16LE. </p>
<p>The problem you're experiencing with non-ASCII input is reproducible in the console for all Windows versions up to and including Windows 10. The console host process, i.e. conhost.exe, wasn't designed for UTF-8 (codepage 65001) and hasn't been updated to support it consistently. In particular, non-ASCII input causes an empty read. This in turn causes Python's REPL to exit and built-in <code>input</code> to raise <code>EOFError</code>. </p>
<p>The problem is that conhost encodes its UTF-16 input buffer assuming a single-byte codepage, such as the OEM and ANSI codepages in Western locales (e.g. 437, 850, 1252). UTF-8 is a multibyte encoding in which non-ASCII characters are encoded as 2 to 4 bytes. To handle UTF-8 it would need to encode in multiple iterations of <code>M / 4</code> characters, where M is the remaining bytes available from the N-byte buffer. Instead it assumes a request to read N bytes is a request to read N characters. Then if the input has one or more non-ASCII characters, the internal <code>WideCharToMultiByte</code> call fails due to an undersized buffer, and the console returns a 'successful' read of 0 bytes.</p>
<p>You may not observe exactly this problem in Python 3.5 if the pyreadline module is installed. Python 3.5 automatically tries to import <code>readline</code>. In the case of pyreadline, input is read via the wide-character function <a href="https://msdn.microsoft.com/en-us/library/ms684961"><code>ReadConsoleInputW</code></a>. This is a low-level function to read console input records. In principle it should work, but in practice entering <code>print('ä')</code> gets read by the REPL as <code>print('')</code>. For a non-ASCII character, <code>ReadConsoleInputW</code> returns a sequence of Alt+Numpad <code>KEY_EVENT</code> records. The sequence is a lossy OEM encoding, which can be ignored except for the last record, which has the input character in the <code>UnicodeChar</code> field. Apparently pyreadline ignores the entire sequence.</p>
<p>Prior to Windows 8, output using codepage 65001 is also broken. It prints a trail of garbage text in proportion to the number of non-ASCII characters. In this case the problem is that <code>WriteFile</code> and <code>WriteConsoleA</code> incorrectly return the number of UTF-16 codes written to the screen buffer instead of the number of UTF-8 bytes. This confuses Python's buffered writer, leading to repeated writes of what it thinks are the remaining unwritten bytes. This problem was fixed in Windows 8 as part of rewriting the internal console API to use the ConDrv device instead of an LPC port. Older versions of Windows can use ConEmu or ANSICON to work around this bug.</p>
| 5 | 2016-09-28T11:17:58Z | [
"python",
"windows",
"unicode",
"cmd",
"codepages"
]
|
Python: range doesn't increase | 39,736,960 | <p>I have to write a program that shows a column of kilograms and a column of pounds, starting at 1 kilogram and ending at 99, increasing by 2 at every step.</p>
<p>I have the following code, and the range() works for the pounds part, but not for the kilograms part. It stays always 1 for the kilograms.</p>
<pre><code>for k in range(1,3,1):
print("Kilograms","Pounds",sep=" ")
for i in range(1,101,2):
for j in range(2,223,3):
print(i,"",j,sep=" ")
break
break
</code></pre>
<p>Also, why can't I use floats in the range, because officially it is 2.2 pounds.</p>
<p>Thanks in advance!</p>
| -2 | 2016-09-28T02:10:54Z | 39,736,991 | <p>Because you use <code>break</code> within your loops.</p>
<p>In Python you don't end a loop with anything but decreased indentation. Remove your <code>break</code> statements and try again.</p>
<p>The <code>break</code> statement ends the current loop unconditionally. For example,</p>
<pre><code>s = 0
for i in range(1, 101):
s = s + i
</code></pre>
<p>will make <code>s</code> equal to 5050. However, if you break it somewhere, like</p>
<pre><code>s = 0
for i in range(1, 101):
s = s + i
if i == 5:
break
</code></pre>
<p><code>s</code> will stop increasing at 15.</p>
<p>As commenters say, you should learn the basics of Python from a tutorial. There are many free tutorials on the internet. Don't rush.</p>
<p>Besides, if you wanna use float steps in ranges, take a look at <a href="http://stackoverflow.com/questions/477486/python-decimal-range-step-value">this answer</a>; or rather, see comments below for a simple answer.</p>
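<p>If you do want float steps, a minimal generator in the spirit of that linked answer could look like this (illustrative sketch; beware of floating-point rounding with steps that are not exact binary fractions):</p>

```python
def frange(start, stop, step):
    # float analogue of range(); yields start, start+step, ... while < stop
    x = start
    while x < stop:
        yield x
        x += step

print(list(frange(0.0, 1.0, 0.25)))   # [0.0, 0.25, 0.5, 0.75]
```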
| 1 | 2016-09-28T02:16:04Z | [
"python",
"for-loop"
]
|
Python: range doesn't increase | 39,736,960 | <p>I have to write a program that shows a column of kilograms and a column of pounds, starting at 1 kilogram and ending at 99, increasing by 2 at every step.</p>
<p>I have the following code, and the range() works for the pounds part, but not for the kilograms part. It stays always 1 for the kilograms.</p>
<pre><code>for k in range(1,3,1):
print("Kilograms","Pounds",sep=" ")
for i in range(1,101,2):
for j in range(2,223,3):
print(i,"",j,sep=" ")
break
break
</code></pre>
<p>Also, why can't I use floats in the range, because officially it is 2.2 pounds.</p>
<p>Thanks in advance!</p>
| -2 | 2016-09-28T02:10:54Z | 39,737,028 | <p>Try this</p>
<pre><code>import numpy as np
KG_TO_LBS = 2.20462
KG = np.arange(1, 100, 2)    # kilograms: 1, 3, ..., 99
print("Kilograms", "Pounds")
for kg in KG:
    print(kg, kg * KG_TO_LBS)    # pounds = kilograms * 2.20462
</code></pre>
<p>you may have to change the printout to the format you like.</p>
| 0 | 2016-09-28T02:21:43Z | [
"python",
"for-loop"
]
|
Python: range doesn't increase | 39,736,960 | <p>I have to write a program that shows a column of kilograms and a column of pounds, starting at 1 kilogram and ending at 99, increasing by 2 at every step.</p>
<p>I have the following code, and the range() works for the pounds part, but not for the kilograms part. It stays always 1 for the kilograms.</p>
<pre><code>for k in range(1,3,1):
print("Kilograms","Pounds",sep=" ")
for i in range(1,101,2):
for j in range(2,223,3):
print(i,"",j,sep=" ")
break
break
</code></pre>
<p>Also, why can't I use floats in the range, because officially it is 2.2 pounds.</p>
<p>Thanks in advance!</p>
| -2 | 2016-09-28T02:10:54Z | 39,737,101 | <p>First off, remove the breaks, as those will prematurely end the loop iterations. Secondly, why are you using nested <code>for</code> loops?</p>
<p>For what you described, nested loops are not even required. You simply need to use <strong>one</strong> <code>for</code> loop. Use <code>range()</code> <strong>once</strong>, to step through the values <code>1</code> to <code>99</code> in increments of <code>2</code>.</p>
<p>From your description, something such as this should suffice:</p>
<pre><code>for i in range(1, 100, 2): # for the numbers 1-99, going by twos
    print('kilograms: {} pounds: {}'.format(i, i * 2.2)) # i kilograms is roughly i * 2.2 pounds
</code></pre>
<hr>
<p>You seem to be confused about loops in Python and the builtin <code>range()</code> function. I recommend looking at the official Python documentation for both:</p>
<ul>
<li><a href="https://docs.python.org/3/tutorial/controlflow.html#for-statements" rel="nofollow">Python documentation for <code>for</code> loops</a></li>
<li><a href="https://docs.python.org/3/reference/simple_stmts.html#break" rel="nofollow">Python documentation for <code>break</code> statement</a></li>
<li><a href="https://docs.python.org/3/library/stdtypes.html?highlight=range#range" rel="nofollow">Python documentation for <code>range()</code> function</a></li>
</ul>
| 1 | 2016-09-28T02:31:00Z | [
"python",
"for-loop"
]
|
Need help posting first python program on github | 39,736,999 | <p>I have finally created my own program that works. I am trying to upload the code to github for my first portfolio submission. I added github desktop to my macbook to try to make it easy to upload the code, but I can't figure it out. All of my code ending in .py is unable to be added to github. Can anyone offer advice on what I need to do to get this up?</p>
<p>Below is the latest screenshot from the error I am receiving:</p>
<p><img src="http://i.stack.imgur.com/AD79H.png" alt="enter image description here"></p>
| 0 | 2016-09-28T02:17:33Z | 39,737,254 | <p>Can you share your <code>.gitignore</code> file?</p>
<p>Or just try to push it with the command line interface (to learn the basics of the <code>git</code> commands, see the <a href="https://git-scm.com/book/en/v2/Git-Basics-Getting-a-Git-Repository" rel="nofollow">git-scm site</a>, or follow GitHub's <a href="https://help.github.com/articles/create-a-repo/" rel="nofollow">instructions</a> after you create your repo):</p>
<pre><code>git init
git add .
git commit -m "first commit"
git remote add origin git@github.com:username/repo-name.git
</code></pre>
<p>As a last step, you need to push your "master" branch to "origin" remote and set up tracking</p>
<pre><code>git push -u origin master
</code></pre>
| 0 | 2016-09-28T02:48:37Z | [
"python",
"github"
]
|
Replace all occurrences of 'pattern' in part of a string using Python regex | 39,737,138 | <p>I would like to replace all '<' symbols that are in between apple and orange with '-'. </p>
<pre><code>>>> print re.sub(r'(apple.*)<(.*orange)', r'\1-\2', r'apple < < orange')
</code></pre>
<blockquote>
<p>apple < - orange</p>
</blockquote>
<pre><code>>>> print re.sub(r'(apple.*)<(?=.*orange)', r'\g<1>-', r'apple < < orange')
</code></pre>
<blockquote>
<p>apple < - orange</p>
</blockquote>
| 0 | 2016-09-28T02:35:30Z | 39,737,253 | <p>One invocation to <code>re.sub</code> only processes non-overlapping matches.</p>
<p>One way around that is brute force:</p>
<pre><code>>>> s = 'apple < < orange'
>>> old = None
>>> while s != old:
... old = s
... s = re.sub(r'(apple.*)<(.*orange)', r'\1-\2', s)
...
>>> print s
apple - - orange
</code></pre>
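<p>An alternative sketch that avoids the repeated passes: capture the whole region between <code>apple</code> and <code>orange</code> once and rewrite it with a replacement function:</p>

```python
import re

def dashes(m):
    # replace every '<' only inside the captured middle section
    return m.group(1) + m.group(2).replace('<', '-') + m.group(3)

s = 'apple < < orange'
result = re.sub(r'(apple)(.*)(orange)', dashes, s)
print(result)  # apple - - orange
```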
| 0 | 2016-09-28T02:48:37Z | [
"python",
"regex"
]
|
Getting error using Tkinter in python on mac OS X | 39,737,169 | <p>I tried to run a nltk code for drawing parse trees. I got the error that tkinter module is not installed.</p>
<p>These are the error messages I got:</p>
<pre><code>1. UserWarning: nltk.draw package not loaded (please install Tkinter library).
warnings.warn("nltk.draw package not loaded")
2. import _tkinter # If this fails your Python may not be configured for Tk
ImportError: No module named _tkinter
</code></pre>
<p>After some searches I installed ActiveTcl 8.5.18.0 following these <a href="https://www.python.org/download/mac/tcltk/" rel="nofollow">instructions</a>.</p>
<p>But when I try to run my code I still get the same error. I tried </p>
<pre><code>import Tkinter
</code></pre>
<p>but I get the second error message above again.</p>
<pre><code>File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 39, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: No module named _tkinter
</code></pre>
<p>I also looked at Tkinter documentations and it is mentioned that the correct installation of Tkinter can be verified by running the following command which again gives me another error.</p>
<pre><code>command: python -m tkinter
error: /usr/local/opt/python/bin/python2.7: No module named tkinter
</code></pre>
<p>I found these answers on Stack Overflow for my problem, but they are either not very clear or not applicable to my case.</p>
<ol>
<li><p><a href="http://stackoverflow.com/questions/11752174/how-to-get-tkinter-working-with-ubuntus-default-python-2-7-install">How to get tkinter working with Ubuntu's default Python 2.7 install?</a>
Problems: tk-dev is not available for OS X (it is the same as ActiveTcl) and I couldn't figure out how to rebuild my python using make</p></li>
<li><p><a href="http://stackoverflow.com/questions/5459444/tkinter-python-may-not-be-configured-for-tk">Tkinter: "Python may not be configured for Tk"</a>
Problems: very vague. I don't know what I should do</p></li>
</ol>
<p>please help.</p>
| 1 | 2016-09-28T02:38:40Z | 39,755,391 | <p>You should install ActivePython rather than ActiveTcl, and use it as your preferred Python. </p>
<p>The problem is your Python install isn't picking up your Tcl install, and the simplest way to solve that is to install a Python version that is configured for Tk, which ActivePython is: <a href="http://www.activestate.com/activepython" rel="nofollow">http://www.activestate.com/activepython</a></p>
<p>The issue is that the _tkinter Python module is not installed in your build, which is a required bridge between Python and Tk. You will have to reinstall nltk and any other packages you are using, unfortunately, as the versions you have will be installed for your current Python and not your new one.</p>
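<p>A quick way to check from Python itself whether the interpreter you are running was built with Tk support (a generic check, not specific to ActivePython):</p>

```python
def has_tk():
    # _tkinter is the C bridge module the error message complains about;
    # if it imports, this interpreter was configured for Tk
    try:
        import _tkinter  # noqa: F401
    except ImportError:
        return False
    return True

print(has_tk())
```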
| 0 | 2016-09-28T18:45:49Z | [
"python",
"osx",
"python-2.7",
"tkinter",
"activetcl"
]
|
Python push files to Github remote repo without local working directory | 39,737,192 | <p>I am working on a Python based web application for collaborative xml/document editing, and one requirement from the client is that users should be able to push the files they created (and saved on the server) directly to a Github remote repo, without the need of ever creating a local clone on the server (i.e., no local working directory or tracking of any sort). In GUI terms, this would correspond to going to Github website and manually add the file to the remote repo by clicking the "Upload files" or "create new file" button, or simply edit the existing file on the remote repo on the Github website and then commit the change inside the web browser. I wonder is this functionality even possible to achieve either using some Python Github modules or writing some code from scratch using the Github API or something? Thanks!</p>
| 0 | 2016-09-28T02:41:53Z | 39,770,084 | <p>So you can create files via the API and if the user has their own GitHub account, you can upload it as them.</p>
<p>Let's use github3.py as an example of how to do this:</p>
<pre><code>import github3
gh = github3.login(username='foo', password='bar')
repository = gh.repository('organization-name', 'repository-name')
for file_info in files_to_upload:  # files_to_upload: your list of file paths
with open(file_info, 'rb') as fd:
contents = fd.read()
repository.create_file(
path=file_info,
message='Start tracking {!r}'.format(file_info),
content=contents,
)
</code></pre>
<p>You will want to check that it returns an object you'd expect to verify the file was successfully uploaded. You can also specify <code>committer</code> and <code>author</code> dictionaries so you could attribute the commit to your service so people aren't under the assumption that the person authored it on a local git set-up.</p>
| 1 | 2016-09-29T12:16:00Z | [
"python",
"git",
"github",
"github-api"
]
|
Replace Duplicate String Characters | 39,737,196 | <p>I need to convert a string <code>word</code> where each character that appears only once should appear as <code>'('</code> in the new string. Any duplicate characters in the original string should be replaced with <code>')'</code>. </p>
<p>My code below...</p>
<pre><code>def duplicate_encode(word):
new_word = ''
for char in word:
if len(char) > 1:
new_word += ')'
else:
new_word += '('
return new_word
</code></pre>
<p>The test I'm not passing is as follows: </p>
<p>'((((((' should equal '()()()'</p>
<p>This would suggest that, if for example, the input is "recede," the output should read <code>()()()</code>. </p>
| 1 | 2016-09-28T02:42:21Z | 39,737,272 | <p>Seems like your result is based on the number of occurrences of a character in the word; you can use <code>Counter</code> to keep track of that:</p>
<pre><code>def duplicate_encode(word):
from collections import Counter
word = word.lower() # to disregard case
counter = Counter(word)
new_word = ''
for char in word:
if counter[char] > 1: # if the character appears more than once in the word
# translate it to )
new_word += ')'
else:
new_word += '('
return new_word
duplicate_encode('recede')
# '()()()'
</code></pre>
| 0 | 2016-09-28T02:51:41Z | [
"python",
"string",
"replace",
"duplicates",
"iteration"
]
|
Replace Duplicate String Characters | 39,737,196 | <p>I need to convert a string <code>word</code> where each character that appears only once should appear as <code>'('</code> in the new string. Any duplicate characters in the original string should be replaced with <code>')'</code>. </p>
<p>My code below...</p>
<pre><code>def duplicate_encode(word):
new_word = ''
for char in word:
if len(char) > 1:
new_word += ')'
else:
new_word += '('
return new_word
</code></pre>
<p>The test I'm not passing is as follows: </p>
<p>'((((((' should equal '()()()'</p>
<p>This would suggest that, if for example, the input is "recede," the output should read <code>()()()</code>. </p>
| 1 | 2016-09-28T02:42:21Z | 39,737,775 | <p>Your code is good; it just needs a little alteration and it will be great.</p>
<pre><code>def duplicate_encode(word):
"""
To replace the duplicate letter with ")" in a string.
    if a given letter is unique it is replaced with "("
"""
word_dict = {} # initialize a dictionary
new_word = ""
    for i in set(word): # this loop is used to count duplicate letters
word_count = word.count(i)
word_dict[i] = word_count # add letter and count of the letter to dictionary
for i in word:
if word_dict[i] > 1:
new_word += ")"
else:
new_word += "("
print new_word
duplicate_encode("recede")
</code></pre>
<p>I think you got the answer :)</p>
| 1 | 2016-09-28T03:55:41Z | [
"python",
"string",
"replace",
"duplicates",
"iteration"
]
|
Replace Duplicate String Characters | 39,737,196 | <p>I need to convert a string <code>word</code> where each character that appears only once should appear as <code>'('</code> in the new string. Any duplicate characters in the original string should be replaced with <code>')'</code>. </p>
<p>My code below...</p>
<pre><code>def duplicate_encode(word):
new_word = ''
for char in word:
if len(char) > 1:
new_word += ')'
else:
new_word += '('
return new_word
</code></pre>
<p>The test I'm not passing is as follows: </p>
<p>'((((((' should equal '()()()'</p>
<p>This would suggest that, if for example, the input is "recede," the output should read <code>()()()</code>. </p>
| 1 | 2016-09-28T02:42:21Z | 39,742,140 | <p>Just because (it's late and) it's possible:</p>
<pre><code>def duplicate_encode(word):
return (lambda w: ''.join(('(', ')')[c in w[:i] + w[i+1:]] for i, c in enumerate(w)))(word.lower())
print(duplicate_encode("rEcede"))
</code></pre>
<p>OUTPUT</p>
<pre><code>> python3 test.py
()()()
>
</code></pre>
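<p>For readers who find the one-liner dense, the same logic written out step by step (my expansion, not part of the original answer):</p>

```python
def duplicate_encode_readable(word):
    # a character maps to ')' if it also occurs elsewhere in the word
    w = word.lower()
    out = []
    for i, c in enumerate(w):
        out.append(')' if c in w[:i] + w[i+1:] else '(')
    return ''.join(out)

print(duplicate_encode_readable("rEcede"))  # ()()()
```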
| 1 | 2016-09-28T08:37:30Z | [
"python",
"string",
"replace",
"duplicates",
"iteration"
]
|
Django get months from models.DateTimeField | 39,737,216 | <p>Is it possible to filter a models.DateTimeField but only get the month in the filter object?</p>
<p>The field is:</p>
<pre><code>time_stamp = models.DateTimeField(
default=timezone.now)
</code></pre>
<p>When I filter it, this is what I get:</p>
<blockquote>
<p>[datetime.datetime(2016, 9, 22, 15, 2, 48, 867473, tzinfo=),
datetime.datetime(2016, 9, 22, 15, 4, 22, 618675, tzinfo=),
datetime.datetime(2016, 9, 22, 15, 5, 20, 939593, tzinfo=)]</p>
</blockquote>
<p>The filter returns 3 rows, but clearly there is too much information. I only require the months, and maybe the year.</p>
<p>How can I achieve this?</p>
<p>Any help or direction would be appreciated,</p>
<p>Thanks</p>
| 0 | 2016-09-28T02:44:09Z | 39,737,340 | <p>You can use a property:</p>
<pre><code>class YourModel(models.Model):
    time_stamp = models.DateTimeField(
        default=timezone.now)

    def _get_year(self):
        return self.time_stamp.strftime("%Y-%m")
    year = property(_get_year)
</code></pre>
| 0 | 2016-09-28T03:01:08Z | [
"python",
"django",
"datetime"
]
|
Django get months from models.DateTimeField | 39,737,216 | <p>Is it possible to filter a models.DateTimeField but only get the month in the filter object?</p>
<p>The field is:</p>
<pre><code>time_stamp = models.DateTimeField(
default=timezone.now)
</code></pre>
<p>When I filter it, this is what I get:</p>
<blockquote>
<p>[datetime.datetime(2016, 9, 22, 15, 2, 48, 867473, tzinfo=),
datetime.datetime(2016, 9, 22, 15, 4, 22, 618675, tzinfo=),
datetime.datetime(2016, 9, 22, 15, 5, 20, 939593, tzinfo=)]</p>
</blockquote>
<p>The filter returns 3 rows, but clearly there is too much information. I only require the months, and maybe the year.</p>
<p>How can I achieve this?</p>
<p>Any help or direction would be appreciated,</p>
<p>Thanks</p>
| 0 | 2016-09-28T02:44:09Z | 39,738,091 | <p>If you are using django 1.10.x there is the <a href="https://docs.djangoproject.com/en/1.10/ref/models/database-functions/#extract" rel="nofollow"><code>Extract</code></a> db function:</p>
<pre><code>from django.db.models.functions import Extract
months = MyModel.objects.annotate(month_stamp=Extract('time_stamp', 'month')).values_list('month_stamp', flat=True)
</code></pre>
<p>For django 1.9.x</p>
<pre><code>from django.db.models import Func
def Extract(field, date_field='DOW'):
template = "EXTRACT({} FROM %(expressions)s::timestamp)".format(date_field)
return Func(field, template=template)
months = MyModel.objects.annotate(month_stamp=Extract('time_stamp', 'month')).values_list('month_stamp', flat=True)
</code></pre>
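<p>If the datetimes have already been pulled into Python (as in the queryset output shown in the question), a plain comprehension also extracts the years and months without any extra query:</p>

```python
from datetime import datetime

# sample values shaped like the question's queryset output
stamps = [datetime(2016, 9, 22, 15, 2, 48),
          datetime(2016, 9, 22, 15, 4, 22),
          datetime(2016, 9, 22, 15, 5, 20)]

months = [(d.year, d.month) for d in stamps]
print(months)  # [(2016, 9), (2016, 9), (2016, 9)]
```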
| 1 | 2016-09-28T04:32:07Z | [
"python",
"django",
"datetime"
]
|
How to use a pandas DataFrame as data parameter in Seaborn lmplot | 39,737,226 | <p>Looking at the <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.lmplot.html" rel="nofollow"><code>lmplot</code> documentation</a>, it shows</p>
<blockquote>
<p>Parameters: </p>
<pre><code> x, y : strings, optional
Input variables; these should be column names in data.
data : DataFrame
    Tidy ("long-form") dataframe where each column
is a variable and each row is an observation.
</code></pre>
</blockquote>
<p>I have a pandas dataframe <code>customers</code> that I would like to use as a parameter for a <code>lmplot</code>. How can I transform my pandas dataframe into a tidy ("long-form") dataframe for use in <code>lmplot</code>?</p>
<p>Currently, I'm trying to do the following:</p>
<pre><code>sns.lmplot(customers["Length of Membership"], customers["Yearly Amount Spent"], customers)
</code></pre>
<p>(Seaborn is imported as <code>sns</code>). The python code above returns an error that contains a very long list of floating points.</p>
| 0 | 2016-09-28T02:45:31Z | 39,737,302 | <p>Thanks to <a href="http://stackoverflow.com/users/3437504/bob-haffner">Bob Haffner</a>: looking closer at the documentation, the DataFrame wasn't the issue at all; rather, it was the <code>x</code> and <code>y</code> parameters I was passing.</p>
<p>I was passing in Series, when I should have been passing in strings, as such:</p>
<pre><code>sns.lmplot("Length of Membership", "Yearly Amount Spent", customers)
</code></pre>
| 1 | 2016-09-28T02:55:33Z | [
"python",
"pandas",
"dataframe",
"seaborn"
]
|
Python: How do I refer to an attribute of a class by an attribute of another class or a variable? | 39,737,273 | <p>I am trying to create a magic system in my text-based RPG that refers to an attribute of a monster class using the attribute from the magic class.
The monster class looks like </p>
<pre><code>class monster(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
</code></pre>
<p>with the list of monsters being stored in the format </p>
<pre><code>bestiary = {
99999: monster(name="Slime", currentHP= 3, maxHP= 10, initiativeMod= 1, AC= 0, baseAttack= 0, equippedWeapon= itemsList[13], speed = 10) ##Syntax items
}
</code></pre>
<p>The spells are created in the form </p>
<pre><code>class BuffSpell(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
</code></pre>
<p>with instances of each spell in the form </p>
<pre><code>bardSpells = {
2: BuffSpell(name= "Flare", level= 0, stat= "baseAttack", value = -1, MP = 3, spellType = "buff"),
}
</code></pre>
<p>I am trying to refer to an attribute in monster that is given by an attribute in the spell like this </p>
<pre><code>def useMagic(target, spell):
if spell.spellType == "buff":
x = spell.stat
target.x += spell.value
</code></pre>
<p>which of course doesn't work. How can I get the spell.stat attribute and apply spell.value to the corresponding attribute in monster?</p>
| 0 | 2016-09-28T02:51:44Z | 39,737,353 | <p>You could try something like the following:</p>
<pre><code>def use_magic(target, spell):
if spell.spell_type == "buff":
stat = spell.stat
setattr(target, stat, getattr(target,stat) + spell.value)
</code></pre>
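<p>A self-contained sketch of how that plays out, using a bare-bones stand-in for the question's kwargs-based classes:</p>

```python
class Bag(object):
    """Stand-in for the monster/BuffSpell classes from the question."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def use_magic(target, spell):
    if spell.spell_type == "buff":
        stat = spell.stat
        setattr(target, stat, getattr(target, stat) + spell.value)

slime = Bag(name="Slime", baseAttack=5)
flare = Bag(name="Flare", spell_type="buff", stat="baseAttack", value=-1)

use_magic(slime, flare)
print(slime.baseAttack)  # 4
```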
| 1 | 2016-09-28T03:02:26Z | [
"python",
"attributes"
]
|
Return string that is not a substring of other strings - is it possible in time less than O(n^2)? | 39,737,300 | <p>You are given an array of strings. You have to return only those strings that are not substrings of other strings in the array.
Input - <code>['abc','abcd','ab','def','efgd']</code>.
Output should be - <code>'abcd'</code> and <code>'efgd'</code>
I have come up with a solution in python that has time complexity O(n^2).
Is there a possible solution that gives a lesser time complexity?
My solution:</p>
<pre><code>def sub(l,s):
l1=l
for i in range (len(l)):
l1[i]=''.join(sorted(l1[i]))
for i in l1:
if s in i:
return True
return False
def main(l):
for i in range(len(l)):
if sub(l[0:i-1]+l[i+1:],l[i])==False:
print l[i]
main(['abc','abcd','ab','def','efgd'])
</code></pre>
| 4 | 2016-09-28T02:55:12Z | 39,737,452 | <p>Pop the first element. Go through each remaining element and see if the shorter string is a substring of the longer string. Repeat. That should be O(n log n)</p>
<p>EDIT: Rough draft of implementation</p>
<pre><code>def not_substrings(l):
mask = [True]*len(l)
for i in range(len(l)):
if not mask[i]:
continue
for j in range(i+1, len(l)):
if len(l[i]) > len(l[j]):
if l[j] in l[i]:
mask[j] = False
elif l[j] == l[i]:
mask[j] = False
mask[i] = False
else:
if l[i] in l[j]:
mask[i] = False
if mask[i]:
print l[i]
</code></pre>
<p>I haven't run this code, but it should be roughly correct. I don't know if there's a way of doing this without the mask, or what time complexity the <code>[True]*len(l)</code> statement has. I haven't done any rigorous analysis, but this looks <code>n log n</code> to me, because each iteration only iterates over the remainder of the list, not the entire list.</p>
| -1 | 2016-09-28T03:16:03Z | [
"python",
"string",
"python-2.7",
"substring"
]
|
Return string that is not a substring of other strings - is it possible in time less than O(n^2)? | 39,737,300 | <p>You are given an array of strings. You have to return only those strings that are not substrings of other strings in the array.
Input - <code>['abc','abcd','ab','def','efgd']</code>.
Output should be - <code>'abcd'</code> and <code>'efgd'</code>
I have come up with a solution in python that has time complexity O(n^2).
Is there a possible solution that gives a lesser time complexity?
My solution:</p>
<pre><code>def sub(l,s):
l1=l
for i in range (len(l)):
l1[i]=''.join(sorted(l1[i]))
for i in l1:
if s in i:
return True
return False
def main(l):
for i in range(len(l)):
if sub(l[0:i-1]+l[i+1:],l[i])==False:
print l[i]
main(['abc','abcd','ab','def','efgd'])
</code></pre>
| 4 | 2016-09-28T02:55:12Z | 39,802,679 | <p>Use a <code>set</code> object to keep all the substrings. This is faster but uses a lot of memory; if every string is short, you can try this.</p>
<pre><code>import string
import random
from itertools import combinations
def get_substrings(w):
return (w[s:e] for s, e in combinations(range(len(w)+1), 2))
def get_not_substrings(words):
words = sorted(set(words), key=len, reverse=True)
substrings = set()
for w in words:
if w not in substrings:
yield w
substrings.update(get_substrings(w))
words = ["".join(random.choice(string.ascii_lowercase)
for _ in range(random.randint(1, 12))) for _ in range(10000)]
res = list(get_not_substrings(words))
</code></pre>
| -1 | 2016-10-01T03:18:19Z | [
"python",
"string",
"python-2.7",
"substring"
]
|
Return string that is not a substring of other strings - is it possible in time less than O(n^2)? | 39,737,300 | <p>You are given an array of strings. you have to return only those strings that are not sub strings of other strings in the array.
Input - <code>['abc','abcd','ab','def','efgd']</code>.
Output should be - <code>'abcd'</code> and <code>'efgd'</code>
I have come up with a solution in python that has time complexity O(n^2).
Is there a possible solution that gives a lesser time complexity?
My solution:</p>
<pre><code>def sub(l,s):
l1=l
for i in range (len(l)):
l1[i]=''.join(sorted(l1[i]))
for i in l1:
if s in i:
return True
return False
def main(l):
for i in range(len(l)):
if sub(l[0:i-1]+l[i+1:],l[i])==False:
print l[i]
main(['abc','abcd','ab','def','efgd'])
</code></pre>
| 4 | 2016-09-28T02:55:12Z | 39,802,797 | <p>Using <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm" rel="nofollow">Aho-Corasick</a> should allow you to get asymptotic run time of <code>O(n)</code>, at the expense of adding additional memory usage, and higher fixed multiplier on costs (ignored by big-O notation, but still meaningful). The complexity of the algorithm is the sum of several components, but none of them multiply, so it should be linear by all metrics (number of strings, length of strings, longest string, etc.).</p>
<p>Using <a href="https://pypi.python.org/pypi/pyahocorasick" rel="nofollow"><code>pyahocorasick</code></a>, you'd do an initial pass to make an automaton that can scan for all of the strings at once:</p>
<pre><code>import ahocorasick
# This code assumes no duplicates in mystrings (which would make them mutually
# substrings). Easy to handle if needed, but simpler to avoid for demonstration
mystrings = ['abc','abcd','ab','def','efgd']
# Build Aho-Corasick automaton, involves O(n) (in combined length of mystrings) work
# Allows us to do single pass scans of a string for all strings in mystrings
# at once
aut = ahocorasick.Automaton()
for s in mystrings:
# mapping string to itself means we're informed directly of which substring
# we hit as we scan
aut.add_word(s, s)
aut.make_automaton()
# Initially, assume all strings are non-substrings
nonsubstrings = set(mystrings)
# Scan each of mystrings for substrings from other mystrings
# This only involves a single pass of each s in mystrings thanks to Aho-Corasick,
# so it's only O(n+m) work, where n is again combined length of mystrings, and
# m is the number of substrings found during the search
for s in mystrings:
for _, substr in aut.iter(s):
if substr != s:
nonsubstrings.discard(substr)
# A slightly more optimized version of the above loop, but admittedly less readable:
# from operator import itemgetter
# getsubstr = itemgetter(1)
# for s in mystrings:
# nonsubstrings.difference_update(filter(s.__ne__, map(getsubstr, aut.iter(s))))
for nonsub in nonsubstrings:
print(nonsub)
</code></pre>
<p>Note: Annoyingly, I'm on a machine without a compiler right now, so I can't install <code>pyahocorasick</code> to test this code, but I've used it before, and I believe this should work, modulo stupid typos.</p>
| 0 | 2016-10-01T03:40:52Z | [
"python",
"string",
"python-2.7",
"substring"
]
|
Return string that is not a substring of other strings - is it possible in time less than O(n^2)? | 39,737,300 | <p>You are given an array of strings. You have to return only those strings that are not substrings of other strings in the array.
Input - <code>['abc','abcd','ab','def','efgd']</code>.
Output should be - <code>'abcd'</code> and <code>'efgd'</code>
I have come up with a solution in python that has time complexity O(n^2).
Is there a possible solution that gives a lesser time complexity?
My solution:</p>
<pre><code>def sub(l,s):
l1=l
for i in range (len(l)):
l1[i]=''.join(sorted(l1[i]))
for i in l1:
if s in i:
return True
return False
def main(l):
for i in range(len(l)):
if sub(l[0:i-1]+l[i+1:],l[i])==False:
print l[i]
main(['abc','abcd','ab','def','efgd'])
</code></pre>
| 4 | 2016-09-28T02:55:12Z | 39,803,163 | <p>Is memory an issue? You could turn to the tried and true...TRIE! </p>
<p>Build a suffix tree!</p>
<p>Given your input <code>['abc','abcd','ab','def','efgd']</code></p>
<p>We would have a tree of </p>
<pre><code> _
/ | \
a e d
/ | \
b* f e
/ | \
c* g f*
/ |
d* d*
</code></pre>
<p>Utilizing a DFS (Depth-First-Search) search of said tree you would locate the deepest leafs <code>abcd</code>, <code>efgd</code>, and <code>def</code></p>
<p>Tree traversal is pretty straight forward and your time complexity is <code>O(n*m).</code> A much better improvement over the <code>O(n^2)</code> time you had previously. </p>
<p>With this approach it becomes simple to add new keys and still make it easy to find the unique keys. </p>
<p>Consider adding the key <code>deg</code> </p>
<p>your new tree would be approximately </p>
<pre><code> _
/ | \
a e d
/ | \
b* f e
/ | / \
c* g g* f*
/ |
d* d*
</code></pre>
<p>With this new tree it is still a simple matter of performing a DFS search to obtain the unique keys that are not prefixes of others. </p>
<pre><code>from typing import List
class Trie(object):
class Leaf(object):
def __init__(self, data, is_key):
self.data = data
self.is_key = is_key
self.children = []
def __str__(self):
return "{}{}".format(self.data, "*" if self.is_key else "")
def __init__(self, keys):
self.root = Trie.Leaf('', False)
for key in keys:
self.add_key(key)
def add_key(self, key):
self._add(key, self.root.children)
def has_suffix(self, suffix):
leaf = self._find(suffix, self.root.children)
if not leaf:
return False
# This is only a suffix if the returned leaf has children and itself is not a key
if not leaf.is_key and leaf.children:
return True
return False
def includes_key(self, key):
leaf = self._find(key, self.root.children)
if not leaf:
return False
return leaf.is_key
def delete(self, key):
"""
If the key is present as a unique key as in it does not have any children nor are any of its nodes comprised of
we should delete all of the nodes up to the root
If the key is a prefix of another long key in the trie, umark the leaf node
if the key is present in the trie and contains no children but contains nodes that are keys we should delete all
nodes up to the first encountered key
:param key:
:return:
"""
if not key:
raise KeyError
self._delete(key, self.root.children, None)
def _delete(self, key, children: List[Leaf], parents: (List[Leaf], None), key_idx=0, parent_key=False):
if not parents:
parents = [self.root]
if key_idx >= len(key):
return
key_end = True if len(key) == key_idx + 1 else False
suffix = key[key_idx]
for leaf in children:
if leaf.data == suffix:
# we have encountered a leaf node that is a key we can't delete these
# this means our key shares a common branch
if leaf.is_key:
parent_key = True
if key_end and leaf.children:
# We've encountered another key along the way
if parent_key:
leaf.is_key = False
else:
# delete all nodes recursively up to the top of the first node that has multiple children
self._clean_parents(key, key_idx, parents)
elif key_end and not leaf.children:
# delete all nodes recursively up to the top of the first node that has multiple children
self._clean_parents(key, key_idx, parents)
# Not at the key end so we need to keep traversing the tree down
parents.append(leaf)
self._delete(key, leaf.children, parents, key_idx + 1, key_end)
def _clean_parents(self, key, key_idx, parents):
stop = False
while parents and not stop:
p = parents.pop()
# Need to stop processing a removal at a branch
if len(p.children) > 1:
stop = True
# Locate our branch and kill its children
for i in range(len(p.children)):
if p.children[i].data == key[key_idx]:
p.children.pop(i)
break
key_idx -= 1
def _find(self, key, children: List[Leaf]):
if not key:
raise KeyError
match = False
if len(key) == 1:
match = True
suffix = key[0]
for leaf in children:
if leaf.data == suffix and not match:
return self._find(key[1:], leaf.children)
elif leaf.data == suffix and match:
return leaf
return None
def _add(self, key, children: List[Leaf]):
if not key:
return
is_key = False
if len(key) == 1:
is_key = True
suffix = key[0]
for leaf in children:
if leaf.data == suffix:
self._add(key[1:], leaf.children)
break
else:
children.append(Trie.Leaf(suffix, is_key))
self._add(key[1:], children[-1].children)
return
@staticmethod
def _has_children(leaf):
return bool(leaf.children)
def main():
keys = ['ba', 'bag', 'a', 'abc', 'abcd', 'abd', 'xyz']
trie = Trie(keys)
print(trie.includes_key('ba')) # True
print(trie.includes_key('b')) # False
print(trie.includes_key('dog')) # False
print(trie.has_suffix('b')) # True
print(trie.has_suffix('ab')) # True
print(trie.has_suffix('abd')) # False
trie.delete('abd') # Should only remove the d
trie.delete('a') # should unmark a as a key
trie.delete('ba') # should remove the ba trie
trie.delete('xyz') # Should remove the entire branch
trie.delete('bag') # should only remove the g
print(trie)
if __name__ == "__main__":
main()
</code></pre>
<p>Please note the above trie implementation does not have a DFS search implemented; however, it provides you with some amazing legwork to get started.</p>
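<p>For the missing DFS piece, here is a minimal dict-based sketch (separate from the <code>Trie</code> class above) that walks the structure and keeps only keys that are not prefixes of other keys:</p>

```python
def build_trie(keys):
    root = {}
    for key in keys:
        node = root
        for ch in key:
            node = node.setdefault(ch, {})
        node['$'] = True  # end-of-key marker
    return root

def deepest_keys(node, prefix=''):
    # DFS: a key survives only if its node has no children besides the marker
    found = []
    for ch, child in node.items():
        if ch == '$':
            if len(node) == 1:
                found.append(prefix)
        else:
            found.extend(deepest_keys(child, prefix + ch))
    return found

trie = build_trie(['abc', 'abcd', 'ab', 'def', 'efgd'])
print(sorted(deepest_keys(trie)))  # ['abcd', 'def', 'efgd']
```

<p>Note this prunes only prefix relationships; catching arbitrary substrings needs the suffix-based construction described above.</p>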
| 2 | 2016-10-01T04:52:00Z | [
"python",
"string",
"python-2.7",
"substring"
]
|
Import multiple CSV fields into one MySQL field | 39,737,398 | <p>I have a table like this</p>
<pre><code>mytable(`id` int, 'number1' varchar(11), 'number2' varchar(1200))
</code></pre>
<p>Also I have a csv-like file</p>
<pre><code>111111111,222222222,333333333,44444444,,,
222222222,333333333,
111111111,555555555,666666666,
</code></pre>
<p>The fields are separated by "," (or something else).
The csv has 100 columns.</p>
<p>I would like to combine the second column through the 100th column into the mysql "number2" field,
and the first column into "number1".</p>
<p>like this:</p>
<pre><code>id number1 number 2
1 111111111 222222222,333333333,44444444
2 222222222 333333333
3 111111111 555555555,666666666
</code></pre>
<p>So can I use LOAD DATA INFILE to load the file into the table? How can I do this, or is there another method?</p>
<p>thanks.</p>
| 1 | 2016-09-28T03:08:54Z | 39,737,506 | <p>Here is an option which creates a <em>new</em> column combining columns 2 through 100:</p>
<pre><code>LOAD DATA INFILE 'input.csv'
INTO TABLE myTable
COLUMNS TERMINATED BY ','
IGNORE 1 LINES
(id, number1, number2, ..., number99)
SET newCol = CONCAT_WS(',', NULLIF(number1, ''), NULLIF(number2, ''), ..., NULLIF(number99, ''));
</code></pre>
<p>(<code>CONCAT_WS</code> skips NULL values and joins the rest with commas; plain <code>CONCAT</code> would return NULL as soon as any field were empty.)</p>
<p>Then, you can remove columns 2 through 100 from within MySQL:</p>
<pre><code>ALTER TABLE myTable
DROP COLUMN number1,
DROP COLUMN number2,
...
DROP COLUMN number99
</code></pre>
| 1 | 2016-09-28T03:22:45Z | [
"python",
"mysql",
"csv"
]
|
Import multiple CSV fields into one MySQL field | 39,737,398 | <p>I have a table like this</p>
<pre><code>mytable(`id` int, 'number1' varchar(11), 'number2' varchar(1200))
</code></pre>
<p>Also I have a csv-like file</p>
<pre><code>111111111,222222222,333333333,44444444,,,
222222222,333333333,
111111111,555555555,666666666,
</code></pre>
<p>The fields are separated by "," (or something else).
The csv has 100 columns.</p>
<p>I would like to combine the second column through the 100th column into the mysql "number2" field,
and the first column into "number1".</p>
<p>like this:</p>
<pre><code>id number1 number 2
1 111111111 222222222,333333333,44444444
2 222222222 333333333
3 111111111 555555555,666666666
</code></pre>
<p>So can I use LOAD DATA INFILE to load the file into the table? How can I do this, or is there another method?</p>
<p>thanks.</p>
| 1 | 2016-09-28T03:08:54Z | 39,737,509 | <p>You can use the PHP function <code>file()</code> to open the csv file,</p>
<p>and the function <code>explode(',', $each_line)</code> to separate each line on <code>,</code>.</p>
<p>I did not understand your demand clearly, but I think simple code could look like this:</p>
<pre><code>$file_items = file('file.csv');
foreach($file_items as $item) {
    $item_arr = explode(',', $item);
    // combine $item_arr[1] ... $item_arr[99] into $number2; $item_arr[0] is $number1
    $sql_insert = "INSERT INTO mytable (number1, number2) VALUES ('$number1', '$number2')";
}
</code></pre>
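<p>Since the question is tagged python, here is a hedged Python sketch of the same split (the first field becomes number1, the remaining fields joined with commas become number2), which could then be INSERTed through any MySQL driver instead of LOAD DATA INFILE:</p>

```python
import csv
import io

def split_rows(lines):
    # turn each csv row into a (number1, number2) pair
    rows = []
    for fields in csv.reader(lines):
        fields = [f for f in fields if f]  # drop the empty trailing fields
        if not fields:
            continue
        rows.append((fields[0], ','.join(fields[1:])))
    return rows

sample = io.StringIO("111111111,222222222,333333333,44444444,,,\n"
                     "222222222,333333333,\n"
                     "111111111,555555555,666666666,\n")
for number1, number2 in split_rows(sample):
    print(number1, number2)
```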
| 0 | 2016-09-28T03:23:12Z | [
"python",
"mysql",
"csv"
]
|
Parsing date and time using regex from a chunk of string | 39,737,498 | <p>EDIT: DONE ALREADY! THANKS</p>
<p>Code as below:</p>
<pre><code>import ast, re

a = "('=====================================', '30/06/2016 17:15 T001 -------------------------------')"
t=ast.literal_eval(a)
z=re.compile(r"(\d\d/\d\d/\d\d\d\d)\s(\d\d:\d\d)")
m = z.match(t[1])
if m:
print("date: {}, time {}".format(m.group(1),m.group(2)))
</code></pre>
| 1 | 2016-09-28T03:21:58Z | 39,737,564 | <p>You can iterate over the items and match each one.</p>
<pre><code>t = ast.literal_eval(a) # assuming `t` is an iterable
z = re.compile(r"(\d\d/\d\d/\d\d\d\d)\s(\d\d:\d\d)")
for item in t: # <-----
m = z.match(item)
if m:
print("date: {}, time {}".format(m.group(1), m.group(2)))
# break # if you want to get only the first matched data/time pair
</code></pre>
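<p>If you only need the matched date/time strings rather than the match objects, <code>findall</code> collapses the loop into a comprehension. A sketch using the tuple from the question:</p>

```python
import re

t = ('=====================================',
     '30/06/2016 17:15 T001 -------------------------------')

pattern = re.compile(r"(\d{2}/\d{2}/\d{4})\s(\d{2}:\d{2})")
# findall returns one (date, time) tuple per match in each item
pairs = [m for item in t for m in pattern.findall(item)]
```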
| 1 | 2016-09-28T03:30:48Z | [
"python",
"regex"
]
|
Basic addition in Tensorflow? | 39,737,507 | <p>I want to make a program where I enter a set of x1, x2 values and it outputs a y. All of the tensorflow tutorials I can find start with image recognition. Can someone help me with either code or a tutorial on how to do this in python? Thanks in advance. Edit: the x1, x2 coordinates I was planning to use would be like 1, 1 with y = 2, or 4, 6 with y = 10. I want to provide the program with data to learn from. I have tried to learn from the tensorflow website but it seemed way more complex than what I wanted.</p>
| 0 | 2016-09-28T03:23:02Z | 39,747,526 | <p>Here is a snippet to get you started:</p>
<pre><code>import numpy as np
import tensorflow as tf
#a placeholder is like a variable that you can
#set later
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
#build the sum operation
c = a+b
#get the tensorflow session
sess = tf.Session()
#initialize all variables
sess.run(tf.initialize_all_variables())
#Now you want to sum 2 numbers
#first set up a dictionary
#that includes the numbers
#The key of the dictionary
#matches the placeholders
# required for the sum operation
feed_dict = {a:2.0, b:3.0}
#now run the sum operation
ppx = sess.run([c], feed_dict)
#print the result
print(ppx)
</code></pre>
| 0 | 2016-09-28T12:28:08Z | [
"python",
"machine-learning",
"bigdata",
"tensorflow",
"add"
]
|
Python plot won't run: 'x and y must have same first dimension' | 39,737,536 | <p>I'm absolutely certain I'm doing something simple wrong with my function definition, but I'm completely drained right now and can't figure it out. If someone can help, I'd love them forever.</p>
<pre><code>import matplotlib.pyplot as plt
import scipy as sp
lamb = sp.array([1100, 1650, 2200, 2750, 3300, 3850, 4400, 4950, 5500, 6050, 6600])
fno = sp.array([3.779, 2.443, 1.788, 1.361, 1.049, 0.831, 0.689, 0.590, 0.524, 0.486, 0.463])
fla = sp.array([0.743, 0.622, 0.555, 0.507, 0.468, 0.434, 0.401, 0.371, 0.348, 0.336, 0.320])
ebv = .1433
fig = plt.figure()
ax = fig.add_subplot(111)
def alam(fno, fla):
return (2.5*sp.log(fno/fla))
def rlam(lamb):
return (alam/ebv)
plt.plot(lamb, rlam,'k-')
plt.show()
</code></pre>
<p>I'm probably an idiot, so feel free to call me an idiot. Thanks!</p>
| 1 | 2016-09-28T03:28:00Z | 39,737,908 | <p>You can clearly see there is an issue. You have to give two arrays for the plt.plot(x,y). In your case, you gave an array and rlam which is a function name. So obviously, there is an error. </p>
<p>Try to learn more about the python function usage. I added a small code snippet which shows the plot usage and python function usage with an input argument. </p>
<pre><code>import matplotlib.pyplot as plt
import scipy as sp
lamb = sp.array([1100, 1650, 2200, 2750, 3300, 3850, 4400, 4950, 5500, 6050, 6600])
ebv = .1433
fig = plt.figure()
ax = fig.add_subplot(111)
def test_func(lamb):
return lamb/ebv
plt.plot(lamb, test_func(lamb),'k-')
plt.show()
</code></pre>
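<p>The root cause, passing the function object itself instead of the values it returns, is easy to reproduce even without matplotlib. A minimal sketch using the <code>ebv</code> constant from the question:</p>

```python
def rlam(x):
    return x / 0.1433  # the ebv constant from the question

lamb = [1100, 1650, 2200]

wrong = rlam                     # just the function object; plt.plot cannot plot a callable
right = [rlam(x) for x in lamb]  # evaluate first, then pass the values to plt.plot
```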
| 1 | 2016-09-28T04:10:30Z | [
"python",
"matplotlib"
]
|
Create new database from chunk reader in pandas | 39,737,545 | <p>I'm working with a massive excel file (14GB) that I need to clean so only the information I need is left. I made the file into Chunks so my computer would stop crashing, but now need to create a new database that shows only the data for the city I am looking for. </p>
<p>I have made it to print(chunk)</p>
<pre><code>for chunk in reader:
print(chunk)
</code></pre>
<p>am unsure how to continue, I tried</p>
<pre><code>df = reader
df = reader[reader.SitusCity == Miami]
</code></pre>
<p>But get this error code:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-d4c11720d1c5> in <module>()
----> 1 df = reader[reader.SitusCity == Miami]
AttributeError: 'TextFileReader' object has no attribute 'SitusCity'
</code></pre>
<p>Help!!!</p>
| 0 | 2016-09-28T03:28:59Z | 39,757,560 | <p>try this:</p>
<pre><code>import pandas as pd

frames = []
for chunk in reader:
    frames.append(chunk.loc[chunk.SitusCity == 'Miami'])
pd.concat(frames).to_excel('output.xlsx')
</code></pre>
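<p>If pandas is not an option (or the source is CSV rather than Excel), the same chunk-filter-collect pattern can be sketched with the stdlib <code>csv</code> module. The column name <code>SitusCity</code> and the value <code>Miami</code> come from the question; the file contents here are made up:</p>

```python
import csv
from io import StringIO

# stand-in for the real 14GB file
raw = StringIO("SitusCity,Value\n"
               "Miami,1\n"
               "Orlando,2\n"
               "Miami,3\n")

def filter_in_chunks(rows, city_idx, chunk_size=2):
    """Yield the matching rows chunk by chunk, never holding the whole file in memory."""
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield [r for r in chunk if r[city_idx] == 'Miami']
            chunk = []
    if chunk:
        yield [r for r in chunk if r[city_idx] == 'Miami']

reader = csv.reader(raw)
header = next(reader)
matches = [row for part in filter_in_chunks(reader, header.index('SitusCity'))
           for row in part]
```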
| 0 | 2016-09-28T20:59:40Z | [
"python",
"excel",
"pandas",
"jupyter",
"jupyter-notebook"
]
|
SQLalchemy find id and use it to lookup other information | 39,737,600 | <p>I'm making a simple lookup application for Japanese characters (Kanji), where the user can search the database using any of the information available.</p>
<h2>My database structure</h2>
<p><strong>Kanji</strong>:</p>
<ul>
<li>id</li>
<li>character (a single kanji glyph)</li>
<li>heisig6 (a number indicating the order of showing Kanji)</li>
<li>kanjiorigin (a number indicating the order of showing Kanji)</li>
</ul>
<p><strong>MeaningEN</strong> (1 kanji_id can have multiple entries with different meanings):</p>
<ul>
<li>kanji_id (FOREIGN KEY(kanji_id) REFERENCES "Kanji" (id)</li>
<li>meaning</li>
</ul>
<h2>User handling</h2>
<p>The user can choose to search by 'id', 'character', 'heisig6', 'kanjiorigin' or 'meaning' and it should then return all information in all those fields. (All fields return only 1 result, except meanings, which can return multiple results)</p>
<h2>Code, EDIT 4+5: my code with thanks to @ApolloFortyNine and #sqlalchemy on IRC, EDIT 6: <code>join</code> --> <code>outerjoin</code> (otherwise won't find information that has no Origins)</h2>
<pre><code>import sqlalchemy as sqla
import sqlalchemy.orm as sqlo
from tableclass import TableKanji, TableMeaningEN, TableMisc, TableOriginKanji # See tableclass.py
# Searches database with argument search method
class SearchDatabase():
def __init__(self):
#self.db_name = "sqlite:///Kanji_story.db"
self.engine = sqla.create_engine("sqlite:///Kanji.db", echo=True)
# Bind the engine to the metadata of the Base class so that the
# declaratives can be accessed through a DBSession instance
tc.sqla_base.metadata.bind = self.engine
# For making sessions to connect to db
self.db_session = sqlo.sessionmaker(bind=self.engine)
def retrieve(self, s_input, s_method):
# s_input: search input
# s_method: search method
print("\nRetrieving results with input: {} and method: {}".format(s_input, s_method))
data = [] # Data to return
# User searches on non-empty string
if s_input:
session = self.db_session()
# Find id in other table than Kanji
if s_method == 'meaning':
s_table = TableMeaningEN # 'MeaningEN'
elif s_method == 'okanji':
s_table = TableOriginKanji # 'OriginKanji'
else:
s_table = TableKanji # 'Kanji'
result = session.query(TableKanji).outerjoin(TableMeaningEN).outerjoin(
(TableOriginKanji, TableKanji.origin_kanji)
).filter(getattr(s_table, s_method) == s_input).all()
print("result: {}".format(result))
for r in result:
print("r: {}".format(r))
meanings = [m.meaning for m in r.meaning_en]
print(meanings)
# TODO transform into origin kanji's
origins = [str(o.okanji_id) for o in r.okanji_id]
print(origins)
data.append({'character': r.character, 'meanings': meanings,
'indexes': [r.id, r.heisig6, r.kanjiorigin], 'origins': origins})
session.close()
if not data:
data = [{'character': 'X', 'meanings': ['invalid', 'search', 'result']}]
return(data)
</code></pre>
<h2>Question EDIT 4+5</h2>
<ul>
<li><p>Is this an efficient query?: <code>result = session.query(TableKanji).join(TableMeaningEN).filter(getattr(s_table, s_method) == s_input).all()</code> (The .join statement is necessary, because otherwise e.g. <code>session.query(TableKanji).filter(TableMeaningEN.meaning == 'love').all()</code> returns all the meanings in my database for some reason? So is this either the right query or is my <code>relationship()</code> in my tableclass.py not properly defined?</p></li>
<li><p><strong>fixed</strong> (see <code>lambda:</code> in tableclass.py) <code>kanji = relationship("TableKanji", foreign_keys=[kanji_id], back_populates="OriginKanji")</code> <-- <strong>what is wrong</strong> about this? It gives the error:</p>
<p>File "/<em>path</em>/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 1805, in get_property
"Mapper '%s' has no property '%s'" % (self, key))</p>
<p>sqlalchemy.exc.InvalidRequestError: Mapper 'Mapper|TableKanji|Kanji' has no property 'OriginKanji'</p></li>
</ul>
<h2>Edit 2: tableclass.py (EDIT 3+4+5: updated)</h2>
<pre><code>import sqlalchemy as sqla
from sqlalchemy.orm import relationship
import sqlalchemy.ext.declarative as sqld
sqla_base = sqld.declarative_base()
class TableKanji(sqla_base):
__tablename__ = 'Kanji'
id = sqla.Column(sqla.Integer, primary_key=True)
character = sqla.Column(sqla.String, nullable=False)
radical = sqla.Column(sqla.Integer) # Can be defined as Boolean
heisig6 = sqla.Column(sqla.Integer, unique=True, nullable=True)
kanjiorigin = sqla.Column(sqla.Integer, unique=True, nullable=True)
cjk = sqla.Column(sqla.String, unique=True, nullable=True)
meaning_en = relationship("TableMeaningEN", back_populates="kanji") # backref="Kanji")
okanji_id = relationship("TableOriginKanji", foreign_keys=lambda: TableOriginKanji.kanji_id, back_populates="kanji")
class TableMeaningEN(sqla_base):
__tablename__ = 'MeaningEN'
kanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
meaning = sqla.Column(sqla.String, primary_key=True)
kanji = relationship("TableKanji", back_populates="meaning_en")
class TableOriginKanji(sqla_base):
__tablename__ = 'OriginKanji'
kanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
okanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
order = sqla.Column(sqla.Integer)
#okanji = relationship("TableKanji", foreign_keys=[kanji_id], backref="okanji")
kanji = relationship("TableKanji", foreign_keys=[kanji_id], back_populates="okanji_id")
</code></pre>
| 0 | 2016-09-28T03:35:29Z | 39,757,177 | <p>We would really have to be able to see your database schema to give real critique, but assuming no foreign keys, what you said is basically the best you can do.</p>
<p>SQLAlchemy really begins to shine when you have complicated relations going on however. For example, if you properly had foreign keys set, you could do something like the following.</p>
<pre><code># Assuming kanji is a tc.tableMeaningEN.kanji_id object
kanji_meaning = kanji.meanings
</code></pre>
<p>And that would return the meanings for the kanji as an array, without any further queries.</p>
<p>You can go quite deep with relationships, so I'm linking the documentation here. <a href="http://docs.sqlalchemy.org/en/latest/orm/relationships.html" rel="nofollow">http://docs.sqlalchemy.org/en/latest/orm/relationships.html</a></p>
<p>EDIT: Actually, you don't need to manually join at all, SQLAlchemy will do it for you.</p>
<p>The case is wrong on your classes, but I'm not sure if SQLAlchemy is case sensitive there or not. If it works, then just move on.</p>
<p>If you query a table (e.g. <code>self.session.query(User).filter(User.username == self.name).first()</code>) you get back an object of the table's mapped class (<code>User</code> here).</p>
<p>So in your case, querying the TableKanji table alone will return an object of that type.</p>
<pre><code>kanji_obj = session.query(TableKanji).filter(TableKanji.id == id).first()
# This will return an array of all meaning_ens that match the foreign key
meaning_arr = kanji_obj.meaning_en
# This will return a single meeting, just to show each member of the arr is of type TableMeaningEn
meaning_arr[0].meaning
</code></pre>
<p>I have a project made use of some of these features, hope it helps:
<a href="https://github.com/ApolloFortyNine/SongSense" rel="nofollow">https://github.com/ApolloFortyNine/SongSense</a>
Database declaration (with relationships): <a href="https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/database.py" rel="nofollow">https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/database.py</a>
Automatic joins: <a href="https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/getfriend.py#L134" rel="nofollow">https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/getfriend.py#L134</a></p>
<p>I really like my database structure, but as for the rest it's pretty awful. Hope it still helps though.</p>
| 1 | 2016-09-28T20:35:12Z | [
"python",
"sqlalchemy"
]
|
Python having issues computing large numbers in complex equations | 39,737,687 | <p>I'm programming some code that allows a user to input seconds, and receive how many days, hours, minutes, and seconds it churns out to. However, if I enter any number larger than 311039999, the amount of hours goes to 24+, instead of 0.</p>
<p>Right now I have something programmed in that tells the user that the number is too large if it exceeds the aforementioned value, but I want to change it so that it's not a problem anymore.</p>
<p>Here is my code:</p>
<pre><code>user_sec= int(input("How many seconds are there? "))
#When max value is minutes, displays number of minutes
tot_min_min = user_sec/60
#When max value is minutes, displays number of seconds
tot_min_sec = user_sec%60
#When max value is hours, displays number of hours
tot_hr_hr = user_sec/3600
#When max value is hours, displays number of minutes
tot_hr_min = tot_min_min%60
#When max value is hours, displays number of seconds
tot_hr_sec = user_sec%60
#When max value is days, displays number of days
tot_day_day = user_sec/86400
#When max value is days, displays number of hours
tot_day_hr = tot_hr_hr/3600
#When max value is days, displays number of minutes
tot_day_min = tot_hr_min%60
#When max value is days, displays number of seconds
tot_day_sec = user_sec%60
if user_sec >= 311040000:
print 'Your number is too large to calculate.'
elif user_sec >= 60 and user_sec < 3600:
print '{} seconds makes {} minute(s) and {} second(s).'.format(user_sec,tot_min_min,tot_min_sec)
elif user_sec >= 3600 and user_sec < 86400:
print '{} seconds makes {} hour(s), {} minute(s) and {} second(s).'.format(user_sec,tot_hr_hr,tot_hr_min,tot_hr_sec)
elif user_sec >= 86400 and user_sec < 311040000:
print '{} seconds makes {} days(s), {} hour(s), {} minute(s) and {} second(s).'.format(user_sec,tot_day_day,tot_day_hr,tot_day_min,tot_day_sec)
else:
print 'There is/are {} second(s).'.format(user_sec)
</code></pre>
<p>I'm using Canopy, if this helps. Simple answers are appreciated, since I've only been doing this for a few weeks.</p>
<p>[EDIT] Here's an example of my problem. If user_sec = 1000000000, it prints out '1000000000 seconds makes 11574 days(s), 77 hour(s), 46 minute(s) and 40 second(s).' I'm not sure where the mathematical issue is, but the correct answer is '11574 days, <strong><em>1</em></strong> hour, 46 minutes and 40 seconds.'</p>
| 0 | 2016-09-28T03:45:54Z | 39,738,042 | <p>If you don't mind using a 3rd party module, <code>dateutil</code> provides an easy way to do this:</p>
<pre><code>from dateutil.relativedelta import relativedelta
user_sec = int(input("How many seconds are there? "))
d = relativedelta(seconds=user_sec)
print(d)
</code></pre>
<p>This will output the following if you enter <code>351080000</code></p>
<pre><code>relativedelta(days=+4063, hours=+10, minutes=+13, seconds=+20)
</code></pre>
<p>From there you can print out a more user friendly string:</p>
<pre><code>print('{0} seconds makes {1.days} days(s), {1.hours} hour(s), {1.minutes} minute(s) and {1.seconds} second(s).'.format(user_sec, d))
351080000 seconds makes 4063 days(s), 10 hour(s), 13 minute(s) and 20 second(s).
</code></pre>
<hr>
<p>Otherwise it is pretty straightforward to calculate the days. hours, minutes and seconds:</p>
<pre><code>n = user_sec
days, n = divmod(n, 86400)
hours, n = divmod(n, 3600)
minutes, n = divmod(n, 60)
seconds = n
print('{} seconds makes {} days(s), {} hour(s), {} minute(s) and {} second(s).'.format(user_sec, days, hours, minutes, seconds))
</code></pre>
| 2 | 2016-09-28T04:26:26Z | [
"python",
"complex-numbers",
"canopy"
]
|
Get Attributes python | 39,737,712 | <pre><code>class A(object):
a = 1
b = 0
c = None
d = None
a_obj=A()
a_list = ['a', 'b', 'c', 'd']
attrs_present = filter(lambda x: getattr(a_obj, x), a_list)
</code></pre>
<p>I want both a and b attributes, here 0 is a valid value. I don't want to use comparison==0</p>
<p>Is there a way to get those?
Any help will be appreciated. Thanks.</p>
| 3 | 2016-09-28T03:49:02Z | 39,737,741 | <p>If you want to exclude <code>c</code>, <code>d</code> (<code>None</code>s), use <code>is None</code> or <code>is not None</code>:</p>
<pre><code>attrs_present = filter(lambda x: getattr(a_obj, x, None) is not None, a_list)
# NOTE: Added the third argument `None`
# to prevent `AttributeError` in case of missing attribute
# (for example, a_list = ['a', 'e'])
</code></pre>
<p>If you want to include <code>c</code>, <code>d</code>, use <a href="https://docs.python.org/2/library/functions.html#hasattr" rel="nofollow"><code>hasattr</code></a>:</p>
<pre><code>attrs_present = filter(lambda x: hasattr(a_obj, x), a_list)
</code></pre>
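<p>Using the class from the question, the two filters behave like this (wrapped in <code>list()</code> since <code>filter</code> returns an iterator on Python 3):</p>

```python
class A(object):
    a = 1
    b = 0
    c = None
    d = None

a_obj = A()
a_list = ['a', 'b', 'c', 'd']

# 0 is kept because the test is "is not None", not truthiness
present = list(filter(lambda x: getattr(a_obj, x, None) is not None, a_list))

# hasattr keeps every defined attribute, including the None ones
defined = list(filter(lambda x: hasattr(a_obj, x), a_list))
```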
| 2 | 2016-09-28T03:51:50Z | [
"python",
"python-2.7"
]
|
Print random number of lines after reading a file | 39,737,728 | <p>I am writing a thread-enabled python program which can read a file and send its contents, but is there any way to let the program read and send N lines at a time?</p>
<pre><code>from random import randint
import sys
import threading
import time
def function():
fo = open("1.txt", "r")
print "Name of the file: ", fo.name
while True:
line = fo.readlines()
for lines in line:
print(lines)
fo.seek(0, 0)
time.sleep(randint(1,3))
game = threading.Thread(target=function)
game.start()
</code></pre>
<p>The python code above only lets me send one line at a time, and then rewind. </p>
| -1 | 2016-09-28T03:50:43Z | 39,737,818 | <p>If you follow your code logic, in the <code>for</code> loop iterating over the lines in the file, you reset the file pointer to the first line right after having printed it. That's why you get the same first line printed. To achieve a random number of printed lines you could do that in any number of ways, for example:</p>
<pre><code>def function():
fo = open("1.txt", "r")
print "Name of the file: ", fo.name
lines = fo.readlines() # changed the var names, lines vs. line
start_index = 0
while start_index < len(lines):
length = randint(1, len(lines)-start_index)
for line in lines[start_index:start_index+length]:
print(line)
start_index += length
time.sleep(randint(1,3))
</code></pre>
<p>In there, after reading the file content into <code>lines</code>, each pass prints a slice of random length, calculated via <code>randint</code> and starting from 1 so at the very least one line is printed. After the print loop, we advance <code>start_index</code> past the printed slice and sleep.</p>
<p>REVISION: given the new detail, at every cycle we are now randomizing the window of lines to be printed, while moving along the already printed lines. Basically a sliding window of random length at each iteration, making sure it (should) be consistent with the size of the array. Adjust as needed.</p>
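<p>The sliding-window logic can be isolated into a small generator and checked on its own, independent of file I/O and threading. This is a sketch; the <code>Random(42)</code> seed is only there so the run is reproducible:</p>

```python
from random import Random

def random_chunks(lines, rng):
    """Yield successive random-length slices that together cover `lines` exactly once."""
    i = 0
    while i < len(lines):
        n = rng.randint(1, len(lines) - i)
        yield lines[i:i + n]
        i += n

lines = ['line %d\n' % k for k in range(10)]
chunks = list(random_chunks(lines, Random(42)))
flat = [l for c in chunks for l in c]  # every line appears exactly once, in order
```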
| 0 | 2016-09-28T04:00:43Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Print random number of lines after reading a file | 39,737,728 | <p>I am writing a thread-enabled python program which can read a file and send its contents, but is there any way to let the program read and send N lines at a time?</p>
<pre><code>from random import randint
import sys
import threading
import time
def function():
fo = open("1.txt", "r")
print "Name of the file: ", fo.name
while True:
line = fo.readlines()
for lines in line:
print(lines)
fo.seek(0, 0)
time.sleep(randint(1,3))
game = threading.Thread(target=function)
game.start()
</code></pre>
<p>The python code above only lets me send one line at a time, and then rewind. </p>
| -1 | 2016-09-28T03:50:43Z | 39,737,860 | <p>something like this?</p>
<pre><code>from random import randint
import sys
import threading
import time
def function():
fo = open("1.txt", "r")
print "Name of the file: ", fo.name
lines = fo.readlines()
while lines:
toSend = ""
for i in range(0,random.randint(x,y)): #plug your range in
toSend += lines.pop(0)
print(toSend)
game = threading.Thread(target=function)
game.start()
</code></pre>
| 0 | 2016-09-28T04:05:44Z | [
"python",
"python-2.7",
"python-3.x"
]
|
String Output in Python | 39,737,767 | <pre><code> html_body += "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td>".\format(p[0],p[1],p[2],p[3])
^
</code></pre>
<blockquote>
<p>SyntaxError: unexpected character after line continuation character</p>
</blockquote>
<p>It looks normal. how should i fix it?</p>
| 2 | 2016-09-28T03:54:31Z | 39,737,787 | <pre><code>html_body += "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td>".\form
^
</code></pre>
<p>Right there is your problem. The backslash is the line-continuation character that the error mentions. Take it out.</p>
| 1 | 2016-09-28T03:56:58Z | [
"python"
]
|
String Output in Python | 39,737,767 | <pre><code> html_body += "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td>".\format(p[0],p[1],p[2],p[3])
^
</code></pre>
<blockquote>
<p>SyntaxError: unexpected character after line continuation character</p>
</blockquote>
<p>It looks normal. how should i fix it?</p>
| 2 | 2016-09-28T03:54:31Z | 39,737,788 | <pre><code>html_body += "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td>".\format(p[0],p[1],p[2],p[3])
this backslash is not needed ----------------------------------^
</code></pre>
| 0 | 2016-09-28T03:57:00Z | [
"python"
]
|
Google OAuth2client - invalid_grant: Token has been revoked | 39,737,796 | <p>I'm writing a basic app that prints out projects in Google's Cloud Resources Manager using this method:
<a href="https://cloud.google.com/resource-manager/reference/rest/v1/projects/list" rel="nofollow">https://cloud.google.com/resource-manager/reference/rest/v1/projects/list</a></p>
<p>Yesterday it worked but I revoked the token and the code doesn't prompt to re-authorize.</p>
<pre><code>from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)

projects = service.projects()
request = projects.list()
while request is not None:
    response = request.execute()
    for project in response['projects']:
        print project
    request = projects.list_next(previous_request=request, previous_response=response)
</code></pre>
<blockquote>
<p>File "oauth2client/client.py", line 834, in _do_refresh_request
raise HttpAccessTokenRefreshError(error_msg, status=resp.status) oauth2client.client.HttpAccessTokenRefreshError: invalid_grant: Token
has been revoked.</p>
</blockquote>
<p>I think there's a way to tell the client to check if the token is valid and pop the user out to a browser if not, but can't seem to get the code to do it. Any help appreciated ;)</p>
| 0 | 2016-09-28T03:57:48Z | 39,761,655 | <p>Suddenly yesterday we faced a similar issue in Google OAuth2 connection from gerrit code review. So just wanted to share this as it might be related to you. Refer this <a href="https://github.com/davido/gerrit-oauth-provider/issues/64" rel="nofollow">github issue</a> & its <a href="https://github.com/davido/gerrit-oauth-provider/pull/65/files" rel="nofollow">commit made for gerrit-oauth-provider</a></p>
| 0 | 2016-09-29T04:51:44Z | [
"python",
"pycharm",
"google-api-client",
"google-oauth2",
"oauth2client"
]
|
Google OAuth2client - invalid_grant: Token has been revoked | 39,737,796 | <p>I'm writing a basic app that prints out projects in Google's Cloud Resources Manager using this method:
<a href="https://cloud.google.com/resource-manager/reference/rest/v1/projects/list" rel="nofollow">https://cloud.google.com/resource-manager/reference/rest/v1/projects/list</a></p>
<p>Yesterday it worked but I revoked the token and the code doesn't prompt to re-authorize.</p>
<pre><code>from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)

projects = service.projects()
request = projects.list()
while request is not None:
    response = request.execute()
    for project in response['projects']:
        print project
    request = projects.list_next(previous_request=request, previous_response=response)
</code></pre>
<blockquote>
<p>File "oauth2client/client.py", line 834, in _do_refresh_request
raise HttpAccessTokenRefreshError(error_msg, status=resp.status) oauth2client.client.HttpAccessTokenRefreshError: invalid_grant: Token
has been revoked.</p>
</blockquote>
<p>I think there's a way to tell the client to check if the token is valid and pop the user out to a browser if not, but can't seem to get the code to do it. Any help appreciated ;)</p>
| 0 | 2016-09-28T03:57:48Z | 39,761,906 | <p>Since this code uses application default credentials, the gcloud command is how I get a new token:</p>
<pre><code>gcloud beta auth application-default login
</code></pre>
<p>Although it would be nice if there was a way to do this in code in the event the token is revoked again.</p>
| 0 | 2016-09-29T05:14:20Z | [
"python",
"pycharm",
"google-api-client",
"google-oauth2",
"oauth2client"
]
|
Writing to a file after parsing | 39,737,812 | <p>I have written a small python code where it will read a sample csv file and copy its first column to a temp csv file. Now when I try to compare that temporary file with another text file and try to write result to another file called result file, The file is created but with empty content.</p>
<p>But when i tested it in chunks, It is working fine</p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "w")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
with open('result.txt', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
<p>sample.csv</p>
<blockquote>
<p>M11435TDS144,STB#1,Router#1</p>
<p>M11543TH4292,STB#2,Router#1</p>
<p>M11509TD9937,STB#3,Router#1 </p>
<p>M11543TH4258,STB#4,Router#1</p>
</blockquote>
<p>serialNumber.txt</p>
<blockquote>
<p>G1A114042400571</p>
<p>M11543TH4258</p>
<p>M11251TH1230</p>
<p>M11435TDS144</p>
<p>M11543TH4292</p>
<p>M11509TD9937</p>
</blockquote>
| 0 | 2016-09-28T04:00:05Z | 39,738,120 | <p>You should close the output file (<code>temp1.csv</code>) before you can read data from it.</p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "w")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
data.close() # <--- Should close it before reading it in the same program !!
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
with open('result.txt', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
| 0 | 2016-09-28T04:34:22Z | [
"python",
"file-writing"
]
|
Writing to a file after parsing | 39,737,812 | <p>I have written a small python code where it will read a sample csv file and copy its first column to a temp csv file. Now when I try to compare that temporary file with another text file and try to write result to another file called result file, The file is created but with empty content.</p>
<p>But when i tested it in chunks, It is working fine</p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "w")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
with open('result.txt', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
<p>sample.csv</p>
<blockquote>
<p>M11435TDS144,STB#1,Router#1</p>
<p>M11543TH4292,STB#2,Router#1</p>
<p>M11509TD9937,STB#3,Router#1 </p>
<p>M11543TH4258,STB#4,Router#1</p>
</blockquote>
<p>serialNumber.txt</p>
<blockquote>
<p>G1A114042400571</p>
<p>M11543TH4258</p>
<p>M11251TH1230</p>
<p>M11435TDS144</p>
<p>M11543TH4292</p>
<p>M11509TD9937</p>
</blockquote>
| 0 | 2016-09-28T04:00:05Z | 39,738,615 | <p><strong>Points regarding code</strong>:</p>
<ul>
<li><code>data</code> file handle is not closed. <code>data.close()</code> after writing to <code>temp1.csv</code>.</li>
<li>In your code, <code>same = set(file1).intersection(file2)</code>, you are iterating the file handles directly. Reading the lines out first with <code>same = set(file1.readlines()).intersection(file2.readlines())</code> makes the comparison explicit.</li>
</ul>
<p><strong>Working Code:</strong></p>
<pre><code>import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "wb")
w = csv.writer(data)
for row in reader:
my_row = []
if len(row) != 0:
my_row.append(row[0])
w.writerow(my_row)
#File should be closed
data.close()
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
tmp_list = file1.readlines()
ser_list = file2.readlines()
# a file handle can only be read once, so reuse the lists instead of
# calling readlines() again (a second call would return empty lists)
same = set(tmp_list).intersection(ser_list)
with open('result.txt', 'w') as file_out:
for line in same:
file_out.write(line)
</code></pre>
<p><strong>Content of temp1.csv:</strong></p>
<pre><code>M11435TDS144
M11543TH4292
M11509TD9937
M11543TH4258
</code></pre>
<p><strong>Content of result.txt :</strong></p>
<pre><code>M11543TH4258
M11543TH4292
M11435TDS144
</code></pre>
<p><em>You can use <code>with</code> for opening files <strong>sample.csv</strong> and <strong>temp1.csv</strong> as below.</em> </p>
<pre><code>import csv
with open("sample.csv") as f:
with open("temp1.csv",'wb') as data:
reader = csv.reader(f)
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1.readlines()).intersection(file2.readlines())
with open('result.txt', 'w') as file_out:
for line in same:
file_out.write(line)
</code></pre>
| 0 | 2016-09-28T05:20:31Z | [
"python",
"file-writing"
]
|
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600) | 39,737,820 | <p><strong>This is not a duplicate of <a href="http://stackoverflow.com/questions/35403605/ssl-certificate-verify-failed-ssl-c600">this question</a></strong></p>
<p>I checked <a href="http://stackoverflow.com/questions/38522939/requests-exceptions-sslerror-ssl-certificate-verify-failed-certificate-verif">this</a> but going insecure way doesn't looks good to me.</p>
<p>I am working on image size fetcher in python, which would fetch size of image on a web page. Before doing that I need to get web page status-code. I tried doing this way </p>
<pre class="lang-py prettyprint-override"><code>import requests
hdrs = {'User-Agent': 'Mozilla / 5.0 (X11 Linux x86_64) AppleWebKit / 537.36 (KHTML, like Gecko) Chrome / 52.0.2743.116 Safari / 537.36'}
urlResponse = requests.get(
'http://aucoe.info/', verify=True, headers=hdrs)
print(urlResponse.status_code)
</code></pre>
<p>This gives error:</p>
<blockquote>
<p>ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify
failed (_ssl.c:600)</p>
</blockquote>
<p>I tried changing <code>verify=True</code> to</p>
<p><code>verify='/etc/ssl/certs/ca-certificates.crt'</code></p>
<p>and </p>
<p><code>verify='/etc/ssl/certs'</code></p>
<p>But it still gives the same error.
I need to get status code for more than 5000 urls. Kindly help me.
Thanks in advance.</p>
<p><strong>Python Version :</strong> 3.4</p>
<p><strong>Requests version :</strong> requests==2.11.1</p>
<p><strong>O.S :</strong> Ubuntu 14.04</p>
<p><strong>pyOpenSSL :</strong> 0.13</p>
<p><strong>openssl version :</strong> OpenSSL 1.0.1f 6 Jan 2014</p>
| 0 | 2016-09-28T04:00:50Z | 39,738,401 | <p>You need to download the GoDaddy root certificates, available at <a href="https://certs.godaddy.com/repository" rel="nofollow">this site</a> and then pass it in as a parameter to <code>verify</code>, like this:</p>
<pre><code>>>> r = requests.get('https://aucoe.info', verify='/path/to/gd_bundle-g2-g1.crt')
>>> r.status_code
200
</code></pre>
<p>If you'll be doing multiple requests, you may want to configure the SSL as part of the session, as highlighted in the <a href="http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification" rel="nofollow">documentation</a>.</p>
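<p>Since you need status codes for more than 5000 urls, it may be worth setting the bundle once on a <code>Session</code> (the bundle path below is a placeholder for wherever you saved the downloaded file):</p>

```python
import requests

session = requests.Session()
session.verify = '/path/to/gd_bundle-g2-g1.crt'  # hypothetical path to your bundle

# Every request made through this session reuses the CA bundle:
# for url in urls:
#     print(url, session.get(url).status_code)
print(session.verify)
```

<p>This way the certificate argument is configured once instead of being repeated on every call.</p>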
| 0 | 2016-09-28T05:01:01Z | [
"python",
"python-3.x",
"ssl",
"ssl-certificate",
"python-requests"
]
|
"None" string appear after print a line in python | 39,737,836 | <p>I am new to python programming. Today I practiced sorting a list.
I've got a strange output with this code:</p>
<pre><code># problem 13
def lensort(list):
list.sort(key=lambda x: len(x))
print list
print "Sort by length: ", lensort(['python', 'perl', 'java', 'c', 'haskell','ruby'])
</code></pre>
<p>The output is: </p>
<pre><code>Sort by length: ['c', 'perl', 'java', 'ruby', 'python', 'haskell']
None
</code></pre>
<p>So, I have no idea the reason why "None" is shown in 2nd line.
Could anyone help to about this problem?</p>
<p>Thank you very much!</p>
| 0 | 2016-09-28T04:02:11Z | 39,737,844 | <p>Because you are printing the list at the end of <code>lensort</code> function instead of returning it. A function in Python implicitly returns <code>None</code> if it doesn't <code>return</code> anything explicitly.</p>
<pre><code>def lensort(list):
list.sort(key=lambda x: len(x))
return list
</code></pre>
<p>By the way, you can simplify the key function passed as follows: <code>list.sort(key=len)</code></p>
<p>Also, avoid using class names as a variable name such as <code>list</code>, <code>set</code>, <code>dict</code> or <code>object</code>:</p>
<pre><code>def lensort(lst):
...
</code></pre>
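<p>Putting it together, the fixed function returns the list and <code>print</code> shows it once, with no trailing <code>None</code>:</p>

```python
def lensort(lst):
    lst.sort(key=len)  # sort in place by string length
    return lst

result = lensort(['python', 'perl', 'java', 'c', 'haskell', 'ruby'])
print("Sort by length:", result)
# Sort by length: ['c', 'perl', 'java', 'ruby', 'python', 'haskell']
```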
| 0 | 2016-09-28T04:03:24Z | [
"python",
"sorting"
]
|
"None" string appear after print a line in python | 39,737,836 | <p>I am new to python programming. Today I practiced sorting a list.
I've got a strange output with this code:</p>
<pre><code># problem 13
def lensort(list):
list.sort(key=lambda x: len(x))
print list
print "Sort by length: ", lensort(['python', 'perl', 'java', 'c', 'haskell','ruby'])
</code></pre>
<p>The output is: </p>
<pre><code>Sort by length: ['c', 'perl', 'java', 'ruby', 'python', 'haskell']
None
</code></pre>
<p>So, I have no idea the reason why "None" is shown in 2nd line.
Could anyone help to about this problem?</p>
<p>Thank you very much!</p>
| 0 | 2016-09-28T04:02:11Z | 39,737,861 | <p>You're almost double printing the list so to speak. You don't want to print in the function since you're already calling <code>print</code> right after it. Instead you'll want to <code>return</code> the list and then have the <code>print</code> function print the returned list when you call <code>lensort</code>. </p>
<p>Here's a corrected version that should hopefully clarify what I mean: </p>
<pre><code># problem 13
def lensort(list):
list.sort(key=lambda x: len(x))
return list
print "Sort by length: ", lensort(['python', 'perl', 'java', 'c', 'haskell','ruby'])
</code></pre>
| 0 | 2016-09-28T04:05:51Z | [
"python",
"sorting"
]
|
"None" string appear after print a line in python | 39,737,836 | <p>I am new to python programming. Today I practiced sorting a list.
I've got a strange output with this code:</p>
<pre><code># problem 13
def lensort(list):
list.sort(key=lambda x: len(x))
print list
print "Sort by length: ", lensort(['python', 'perl', 'java', 'c', 'haskell','ruby'])
</code></pre>
<p>The output is: </p>
<pre><code>Sort by length: ['c', 'perl', 'java', 'ruby', 'python', 'haskell']
None
</code></pre>
<p>So, I have no idea the reason why "None" is shown in 2nd line.
Could anyone help to about this problem?</p>
<p>Thank you very much!</p>
| 0 | 2016-09-28T04:02:11Z | 39,737,896 | <p>The <code>.sort</code> method sorts a list in place. You do not return its return. Also, no <code>lambda</code> needed:</p>
<pre><code>>>> li=['python', 'perl', 'java', 'c', 'haskell','ruby']
>>> li.sort(key=len)
>>> li
['c', 'perl', 'java', 'ruby', 'python', 'haskell']
</code></pre>
<p>If you want a new sorted list, use the <code>sorted</code> function:</p>
<pre><code>>>> sorted(['python', 'perl', 'java', 'c', 'haskell','ruby'], key=len)
['c', 'perl', 'java', 'ruby', 'python', 'haskell']
</code></pre>
| 0 | 2016-09-28T04:09:38Z | [
"python",
"sorting"
]
|
Resampling (Upsample) Pandas multiindex dataframe | 39,737,890 | <p>Here is a sample dataframe for reference:</p>
<pre><code>import pandas as pd
import datetime
import numpy as np
np.random.seed(1234)
arrays = [np.sort([datetime.date(2016, 8, 31), datetime.date(2016, 7, 31), datetime.date(2016, 6, 30)]*3),
['A', 'B', 'C', 'D', 'E']*5]
df = pd.DataFrame(np.random.randn(15, 4), index=arrays)
df.index.rename(['date', 'id'], inplace=True)
</code></pre>
<p>What it looks like:</p>
<p><a href="http://i.stack.imgur.com/dsrRV.png" rel="nofollow"><img src="http://i.stack.imgur.com/dsrRV.png" alt="enter image description here"></a></p>
<p>I would like to resample the <code>date</code> level of the multiindex to weekly frequency <code>W-FRI</code> via upsampling, i.e., copying from the most recent values <code>how='last'</code>. The examples I've seen usually end up aggregating the data (which I want to avoid) after using the <code>pd.Grouper</code> function.</p>
<p>Edit: I have found a solution below, but I wonder if there is a more efficient method.</p>
| 0 | 2016-09-28T04:08:56Z | 39,737,891 | <p>Edit: I have found a solution:</p>
<pre><code>df.unstack().resample('W-FRI', how='last', fill_method='ffill')
</code></pre>
<p>but I wonder if there's a more efficient way to do this.</p>
| 0 | 2016-09-28T04:08:56Z | [
"python",
"pandas",
"dataframe",
"multi-index",
"resampling"
]
|
Efficient way to check if the last element in a row (in a list of lists) is found in another list? | 39,737,941 | <p>I currently have a list of lists (let's name it "big") that is about 9 columns and 5000 rows and growing. I have another list (let's name this one "small") that has approximately 3000 elements. My goal is to return each row in big where big[8] can be found in small. The results will be stored in a list of lists.</p>
<p>I have used list comprehension which has been returning the proper output, but it is far too inefficient for my needs. It takes several seconds to process these 5000 rows (usually about 6.5 seconds, and its efficiency gets worse with larger lists), and it needs to be able to quickly handle hundreds of thousands of rows.</p>
<p>The list comprehension I wrote is:</p>
<pre><code>results = [row for row in big if row[8] in small]
</code></pre>
<p>Sample data of list of lists (big):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[32.1, 15.5, 17.7, 21.7, 12.7, 27.7, 11.2, 16.3, 1472882400000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000],
[49.5, 32.9, 35.1, 39.1, 30.1, 45.1, 28.6, 33.7, 1473228000000],
[58.2, 41.6, 43.8, 47.8, 38.8, 53.8, 37.3, 42.4, 1473400800000]]
</code></pre>
<p>Sample data of list (small):</p>
<pre><code>[1472709600000, 1473055200000]
</code></pre>
<p>Desired output (results):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000]]
</code></pre>
<p>Is there a more efficient way to return each row that has its last element found in another list?</p>
| 2 | 2016-09-28T04:14:50Z | 39,737,983 | <p>You can easily eliminate the linear search of <code>small</code> on each iteration by using a set:</p>
<pre><code>smallset = set(small)
results = [row for row in big if row[8] in smallset]
</code></pre>
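<p>A rough way to see the difference, with sizes mirroring the question (exact timings vary by machine):</p>

```python
import timeit

# Synthetic data shaped like the question: 5000 rows of 9 columns, 3000 keys.
big = [[float(i)] * 8 + [i] for i in range(5000)]
small = list(range(0, 6000, 2))
smallset = set(small)

t_list = timeit.timeit(lambda: [r for r in big if r[8] in small], number=1)
t_set = timeit.timeit(lambda: [r for r in big if r[8] in smallset], number=1)
print('list: %.4fs  set: %.4fs' % (t_list, t_set))
```

<p>Both versions return identical rows; only the membership test changes, from a linear scan to a hash lookup.</p>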
| 3 | 2016-09-28T04:19:39Z | [
"python",
"performance",
"list"
]
|
Efficient way to check if the last element in a row (in a list of lists) is found in another list? | 39,737,941 | <p>I currently have a list of lists (let's name it "big") that is about 9 columns and 5000 rows and growing. I have another list (let's name this one "small") that has approximately 3000 elements. My goal is to return each row in big where big[8] can be found in small. The results will be stored in a list of lists.</p>
<p>I have used list comprehension which has been returning the proper output, but it is far too inefficient for my needs. It takes several seconds to process these 5000 rows (usually about 6.5 seconds, and its efficiency gets worse with larger lists), and it needs to be able to quickly handle hundreds of thousands of rows.</p>
<p>The list comprehension I wrote is:</p>
<pre><code>results = [row for row in big if row[8] in small]
</code></pre>
<p>Sample data of list of lists (big):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[32.1, 15.5, 17.7, 21.7, 12.7, 27.7, 11.2, 16.3, 1472882400000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000],
[49.5, 32.9, 35.1, 39.1, 30.1, 45.1, 28.6, 33.7, 1473228000000],
[58.2, 41.6, 43.8, 47.8, 38.8, 53.8, 37.3, 42.4, 1473400800000]]
</code></pre>
<p>Sample data of list (small):</p>
<pre><code>[1472709600000, 1473055200000]
</code></pre>
<p>Desired output (results):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000]]
</code></pre>
<p>Is there a more efficient way to return each row that has its last element found in another list?</p>
| 2 | 2016-09-28T04:14:50Z | 39,738,062 | <p>A quick and easy way to do this is to create a dictionary keyed by the column-8 item. Below is a code snippet. </p>
<pre><code>big_dict = {}
for row in big:                # 'row' avoids shadowing the built-in 'list'
    big_dict[row[-1]] = row

output_list = []
for element in small:
    if element in big_dict:    # guard against keys that never appear in big
        output_list.append(big_dict[element])
</code></pre>
| 1 | 2016-09-28T04:28:10Z | [
"python",
"performance",
"list"
]
|
Efficient way to check if the last element in a row (in a list of lists) is found in another list? | 39,737,941 | <p>I currently have a list of lists (let's name it "big") that is about 9 columns and 5000 rows and growing. I have another list (let's name this one "small") that has approximately 3000 elements. My goal is to return each row in big where big[8] can be found in small. The results will be stored in a list of lists.</p>
<p>I have used list comprehension which has been returning the proper output, but it is far too inefficient for my needs. It takes several seconds to process these 5000 rows (usually about 6.5 seconds, and its efficiency gets worse with larger lists), and it needs to be able to quickly handle hundreds of thousands of rows.</p>
<p>The list comprehension I wrote is:</p>
<pre><code>results = [row for row in big if row[8] in small]
</code></pre>
<p>Sample data of list of lists (big):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[32.1, 15.5, 17.7, 21.7, 12.7, 27.7, 11.2, 16.3, 1472882400000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000],
[49.5, 32.9, 35.1, 39.1, 30.1, 45.1, 28.6, 33.7, 1473228000000],
[58.2, 41.6, 43.8, 47.8, 38.8, 53.8, 37.3, 42.4, 1473400800000]]
</code></pre>
<p>Sample data of list (small):</p>
<pre><code>[1472709600000, 1473055200000]
</code></pre>
<p>Desired output (results):</p>
<pre><code>[[23.4, 6.8, 9.0, 13.0, 4.0, 19.0, 2.5, 7.6, 1472709600000],
[40.8, 24.2, 26.4, 30.4, 21.4, 36.4, 19.9, 25.0, 1473055200000]]
</code></pre>
<p>Is there a more efficient way to return each row that has its last element found in another list?</p>
| 2 | 2016-09-28T04:14:50Z | 39,738,133 | <p>Sort your small list once, then binary search it, which reduces each membership test to O(log n):</p>
<pre><code>import bisect

smallsorted = sorted(small)    # sort once up front

def binary_search(x):
    i = bisect.bisect_left(smallsorted, x)
    return i < len(smallsorted) and smallsorted[i] == x

results = [row for row in big if binary_search(row[8])]
</code></pre>
| -1 | 2016-09-28T04:36:00Z | [
"python",
"performance",
"list"
]
|
meaning of `button_map` in the form_field definition in flask-bootstrap | 39,737,970 | <p>I am reading the documentation of <a href="https://pythonhosted.org/Flask-Bootstrap/forms.html" rel="nofollow">flask-boostrap doc</a>. In the <code>form_field</code> definition, what is the purpose of the <code>button_map</code>? </p>
<pre><code>form_field(field, form_type="basic", horizontal_columns=('lg', 2, 10), button_map={})
</code></pre>
| 0 | 2016-09-28T04:18:20Z | 39,738,183 | <p>According to your link (see <a href="https://pythonhosted.org/Flask-Bootstrap/forms.html#quick_form" rel="nofollow"><code>quick_form</code></a>):</p>
<blockquote>
<p><strong>button_map</strong> – A dictionary, mapping button field names to names such as <code>primary</code>, <code>danger</code> or <code>success</code>. Buttons not found in the <code>button_map</code> will use the <code>default</code> type of button.</p>
</blockquote>
<p>That means if you did something like</p>
<pre><code>form_field(submit_button, button_map={'submit_button': 'primary'})
</code></pre>
<p>you'd get a button with <code>primary</code> as its type.</p>
<p>As the docs also mention, <code>form_field</code> is used primarily by <code>quick_form</code> where a mapping makes more sense than for an individual field.</p>
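<p>For example, if a hypothetical form had a save button and a delete button (the field names here are invented), the template call might look like:</p>

```jinja
{{ wtf.quick_form(form, button_map={'save': 'primary', 'delete': 'danger'}) }}
```

<p>Any button not listed in the mapping falls back to the default style.</p>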
| 0 | 2016-09-28T04:40:44Z | [
"python",
"flask",
"flask-wtforms",
"flask-bootstrap"
]
|
Python: Sleep methods | 39,738,065 | <p>I know this is already technically "asked" on this forums but this question is based around a different concept that I couldn't find.</p>
<p>While using time.sleep(whatever) it obviously sleeps, I get that but while using Pygame it will lock up the program. Is there any real method of using a sleep or a pause in the code other than an input that doesn't lock up pygame? I've tried;</p>
<pre><code>time.sleep
pygame.wait and
pygame.delay
</code></pre>
<p>these all do the exact same thing. I'm working on a game for a Computer Science class that involves a small animation of 13 photos I have that are slightly different, but when played 0.12seconds apart, it makes it look good, sadly the whole freezing up of the window from wait statements makes it skip and look very bad.</p>
<p>Thanks to whoever can figure out this mystery.</p>
| 0 | 2016-09-28T04:28:36Z | 39,739,381 | <p>I think you may want to try using the method that is shown <a href="http://stackoverflow.com/questions/18839039/how-to-wait-some-time-in-pygame">here</a>.</p>
<p>an example of what I mean is this:</p>
<pre><code>class Unit():
def __init__(self):
self.last = pygame.time.get_ticks()
self.cooldown = 300
def fire(self):
# fire gun, only if cooldown has been 0.3 seconds since last
now = pygame.time.get_ticks()
if now - self.last >= self.cooldown:
self.last = now
spawn_bullet()
</code></pre>
<p>notice he uses pygame.time.get_ticks() to check if a variable is less than that and if it is, he passes in the if statement and spawns the bullet.</p>
<p>a simpler way to view this concept is the following</p>
<pre><code>curtime = pygame.time.get_ticks() + 10 # the 10 is what we add to make it sleep
while True: # Create a loop that checks its value each step
if curtime < pygame.time.get_ticks(): # check if curtime is less than get_ticks()
print("Sleep is over now!")
break # exit the loop
</code></pre>
<p>I hope that helps you in any way, it seems like it might be a viable option since it keeps freezing when you use normal functions.</p>
<p><strong>ONE MORE THING</strong></p>
<p>do note that pygame.time.get_ticks() will give you the current time in milliseconds, for full documentation on this go <a href="http://www.pygame.org/docs/ref/time.html" rel="nofollow">here</a>.</p>
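<p>The same check-the-clock pattern works outside pygame with the standard library, which may help for the 13-frame animation mentioned in the question (a sketch, not pygame-specific):</p>

```python
import time

class Animation:
    """Advance to the next frame only when enough time has passed, never sleeping."""
    def __init__(self, frame_delay=0.12, frames=13):
        self.frame_delay = frame_delay
        self.frames = frames
        self.current = 0
        self.last = time.monotonic()

    def update(self):
        now = time.monotonic()
        if now - self.last >= self.frame_delay:
            self.last = now
            self.current = (self.current + 1) % self.frames
        return self.current

anim = Animation(frame_delay=0.01)
time.sleep(0.02)          # in a real game the main loop keeps running instead
print(anim.update())      # enough time has passed, so the frame advances to 1
```

<p>Inside the pygame loop you would call <code>update()</code> every frame and blit the photo at the returned index, so the window never freezes.</p>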
| 0 | 2016-09-28T06:18:32Z | [
"python",
"wait",
"sleep"
]
|
flask-bootstrap with two forms in one page | 39,738,069 | <p>I plan to put two forms in one page in my flask app, one to edit general user information and the other to reset password. The template looks like this</p>
<pre><code>{% extends "base.html" %}
{% import "bootstrap/wtf.html" as wtf %}
{% block page_content %}
<div class="page-header">
<h1>Edit Profile</h1>
</div>
{{ wtf.quick_form(form_profile, form_type='horizontal') }}
<hr>
{{ wtf.quick_form(form_reset, form_type='horizontal') }}
<hr>
{% endblock %}
</code></pre>
<p>Each form has a submit button. </p>
<p>In the route function, I tried to separate the two form like this</p>
<pre><code>form_profile = ProfileForm()
form_reset = ResetForm()
if form_profile.validate_on_submit() and form_profile.submit.data:
....
if form_reset.validate_on_submit() and form_reset.submit.data:
.....
</code></pre>
<p>But it didn't work. When I click on the button in the ResetForm, the ProfileForm validation logic is executed. </p>
<p>I suspect the problem is that <code>wtf.quick_form()</code> creates two identical submit buttons, but not sure. </p>
<p>What should I do in this case? Can <code>bootstrap/wtf.html</code> template deal with this situation?</p>
| 0 | 2016-09-28T04:29:21Z | 39,739,863 | <p>Define this two SubmitField with different names, like this:</p>
<pre><code>class Form1(Form):
name = StringField('name')
submit1 = SubmitField('submit')
class Form2(Form):
name = StringField('name')
submit2 = SubmitField('submit')
</code></pre>
<p>Then in <code>view.py</code>:</p>
<pre><code>....
form1 = Form1()
form2 = Form2()
if form1.submit1.data and form1.validate_on_submit(): # notice the order
....
if form2.submit2.data and form2.validate_on_submit(): # notice the order
....
</code></pre>
<p><strong>Now the problem was solved.</strong> </p>
<p>If you want to dive into it, then continue read.</p>
<p>Here is <code>validate_on_submit()</code>:</p>
<pre><code> def validate_on_submit(self):
"""
Checks if form has been submitted and if so runs validate. This is
a shortcut, equivalent to ``form.is_submitted() and form.validate()``
"""
return self.is_submitted() and self.validate()
</code></pre>
<p>And here is <code>is_submitted()</code>:</p>
<pre><code> def is_submitted(self):
"""
Checks if form has been submitted. The default case is if the HTTP
method is **PUT** or **POST**.
"""
return request and request.method in ("PUT", "POST")
</code></pre>
<p>When you call <code>form.validate_on_submit()</code>, it checks whether the form has been submitted via the HTTP method, no matter which submit button was clicked. So the little trick above just adds a filter (checking whether the submit field has data, i.e., <code>form1.submit1.data</code>).</p>
<p>Besides, we changed the order of the checks, so when we click one submit button, <strong>only that form's validate() is called</strong>, preventing validation errors on both forms.</p>
<p>The story isn't over yet. Here is <code>.data</code>:</p>
<pre><code>@property
def data(self):
return dict((name, f.data) for name, f in iteritems(self._fields))
</code></pre>
<p>It return a dict with field name(key) and field data(value), however, <strong>our two form submit button has same name <code>submit</code>(key)!</strong> </p>
<p>When we click the first submit button(in form1), the call from <code>form1.submit1.data</code> return a dict like this:</p>
<pre><code>temp = {'submit': True}
</code></pre>
<p>There is no doubt when we call <code>if form1.submit.data:</code>, it return <code>True</code>.</p>
<p>When we click the second submit button(in form2), the call to <code>.data</code> in <code>if form1.submit.data:</code> add a key-value in dict <strong>first</strong>, then the call from <code>if form2.submit.data:</code> add another key-value, in the end, the dict will like this:</p>
<pre><code>temp = {'submit': False, 'submit': True}
</code></pre>
<p>Now we call <code>if form1.submit.data:</code>, it return <code>True</code>, even if the submit button we clicked was in form2. </p>
<p>That's why we need to define this two <code>SubmitField</code> with different names. By the way, thanks for reading(to here)!</p>
<p>Thanks for nos's notice, he add an issue about <code>validate()</code>, check the comments below!</p>
| 2 | 2016-09-28T06:43:55Z | [
"python",
"flask",
"flask-wtforms",
"flask-bootstrap"
]
|
python .loc with some condition(string, regex etc) | 39,738,137 | <p>I am willing to get subset of the dataframe. And the condition is that, the value of certain column starts with the string 'HOUS'. How should I do?. </p>
<pre><code>df.loc[df.id.startswith('HOUS')]
</code></pre>
| 0 | 2016-09-28T04:36:10Z | 39,738,188 | <p>I should have searched more.</p>
<p>Here is the solution.</p>
<pre><code>df[df.id.str.startswith('HOUS')]
</code></pre>
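<p>A small self-contained check (the id values below are invented for the demo):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': ['HOUS1', 'CAR2', 'HOUS3'], 'value': [1, 2, 3]})
# str.startswith builds a boolean mask; indexing with it keeps matching rows
subset = df[df.id.str.startswith('HOUS')]
print(subset)
```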
| 0 | 2016-09-28T04:41:10Z | [
"python",
"pandas",
"dataframe",
"substring"
]
|
No such file or directory? | 39,738,179 | <p>For some reason despite having the file in the main and in the same directory, I keep getting the no such file error. Any help is appreciated. </p>
<pre><code> import time
def firstQues():
print('"TT(Announcing): HONING IN FROM SU CASA CUIDAD, (Their hometown)"')
time.sleep(3)
print('"VEN EL UN.....(Comes the one.....)"')
time.sleep(2)
print('"EL SOLO......(The ONLY.....)"')
time.sleep(3)
print('"Campeón NOVATO (NEWBIE CHAMP)"')
print()
text_file = open("Questions1.txt", "r")
wholefile = text_file.readline()
for wholefile in open("Questions1.txt"):
print(wholefile)
return wholefile
return text_file
def main():
firstQues()
text_file = open("Questions1.txt", "r")
main()
</code></pre>
| 0 | 2016-09-28T04:40:24Z | 39,738,323 | <p>You cannot open a file in read mode that doesn't exist. Make sure you create a file in a current working directory. Your program executed successfully.</p>
<p><a href="http://i.stack.imgur.com/1gqs3.png" rel="nofollow"><img src="http://i.stack.imgur.com/1gqs3.png" alt="enter image description here"></a></p>
| 0 | 2016-09-28T04:53:51Z | [
"python",
"python-3.x"
]
|
No such file or directory? | 39,738,179 | <p>For some reason despite having the file in the main and in the same directory, I keep getting the no such file error. Any help is appreciated. </p>
<pre><code> import time
def firstQues():
print('"TT(Announcing): HONING IN FROM SU CASA CUIDAD, (Their hometown)"')
time.sleep(3)
print('"VEN EL UN.....(Comes the one.....)"')
time.sleep(2)
print('"EL SOLO......(The ONLY.....)"')
time.sleep(3)
print('"Campeón NOVATO (NEWBIE CHAMP)"')
print()
text_file = open("Questions1.txt", "r")
wholefile = text_file.readline()
for wholefile in open("Questions1.txt"):
print(wholefile)
return wholefile
return text_file
def main():
firstQues()
text_file = open("Questions1.txt", "r")
main()
</code></pre>
| 0 | 2016-09-28T04:40:24Z | 39,738,490 | <pre><code> with open("Questions1.txt", "r") as f:
file_data = f.read().splitlines()
for line in file_data:
#do what ever you want
</code></pre>
<p><a href="http://stackoverflow.com/questions/3277503/how-to-read-a-file-line-by-line-into-a-list-with-python">How to read a file line by line into a list with Python</a></p>
| 0 | 2016-09-28T05:08:02Z | [
"python",
"python-3.x"
]
|
No such file or directory? | 39,738,179 | <p>For some reason despite having the file in the main and in the same directory, I keep getting the no such file error. Any help is appreciated. </p>
<pre><code> import time
def firstQues():
print('"TT(Announcing): HONING IN FROM SU CASA CUIDAD, (Their hometown)"')
time.sleep(3)
print('"VEN EL UN.....(Comes the one.....)"')
time.sleep(2)
print('"EL SOLO......(The ONLY.....)"')
time.sleep(3)
print('"Campeón NOVATO (NEWBIE CHAMP)"')
print()
text_file = open("Questions1.txt", "r")
wholefile = text_file.readline()
for wholefile in open("Questions1.txt"):
print(wholefile)
return wholefile
return text_file
def main():
firstQues()
text_file = open("Questions1.txt", "r")
main()
</code></pre>
| 0 | 2016-09-28T04:40:24Z | 39,740,043 | <p>The simplest solution comes down to a paradigm choice, <em>ask for permission</em> or <em>ask for forgiveness</em>.</p>
<p>Ask for permission: check if the file exists before using</p>
<pre><code>import os.path

if os.path.isfile("Questions1.txt"):
    with open("Questions1.txt") as f:
        data = f.read()  # read file here
</code></pre>
<p>Ask for forgiveness: try-except block, report if issues</p>
<pre><code>try:
    with open("Questions1.txt") as f:
        data = f.read()  # read file and work on it
except IOError:
    print('Error reading file')
</code></pre>
<p>If you open the file with a mode such as <code>"a+"</code> (read/append), Python will create it when it doesn't exist (there is no <code>"rw"</code> mode), but it doesn't seem like you want to write</p>
<pre><code>with open("Questions1.txt", "a+") as f:
    pass  # work on file
</code></pre>
<p>Pick your poison. Hope this helps :)</p>
| 0 | 2016-09-28T06:54:15Z | [
"python",
"python-3.x"
]
|
How to call csv data into tuples in python with numpy genfromtxt? | 39,738,231 | <p>I've been having trouble calling csv files in form of tuples with python.</p>
<p>I'm using: </p>
<pre><code> csv_data = np.genfromtxt('csv-data.csv', dtype=int, delimiter=',', names=True)
</code></pre>
<p>While the data looks like: (sorry I don't know how to display csv format)</p>
<p>Trial1 Trial2 Trial3</p>
<p>50-------70----90 </p>
<p>60-------70----80</p>
<p>(data of 3 trials by 2 people)</p>
<p>My genfromtxt code from above would generate:</p>
<pre><code>csv_data=[(50, 70, 90) (60, 70, 80)]
</code></pre>
<p>while I want to separate the data by person with a comma like:</p>
<pre><code>csv_data=[(50, 70, 90), (60, 70, 80)]
</code></pre>
<p>Any help from here on?
Thank you!</p>
| 0 | 2016-09-28T04:45:37Z | 39,738,381 | <p>I think you can use the <code>tolist</code> method.</p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html</a></p>
<p>It will convert the ndarray to a list.</p>
<pre><code>import numpy as np
csv_data = np.genfromtxt('csv-data.csv', dtype=int, delimiter=',', names=True)
print(csv_data.tolist())
print(type(csv_data.tolist()))
</code></pre>
<p>The result of the above code is:</p>
<pre><code>/tmp/csv $ python3 test.py
[(50, 70, 90), (60, 70, 80)]
<class 'list'>
</code></pre>
| 1 | 2016-09-28T04:58:58Z | [
"python",
"csv",
"numpy",
"genfromtxt"
]
|
YouCompleteMe post install error : cannot import name _compare_digest | 39,738,301 | <p>I am trying to install YouCompleteMe plugin on a source compiled Vim instance. I have a server without sudo privileges, hence I had to compile new Vim (7.4+) in order to make most plugins work. Also, I have installed miniconda and thus refer to the python in miniconda for all installations.</p>
<p>While following all steps how to install YouCompleteMe plugin (via Vundle or even manually), I faced this issue : "Cannot find module urllib3". So I installed urllib3 via pip, and then the error changed to "cannot import name _compare_digest". Point to note that conda virtualenv (I have just made the miniconda bin to $PATH) cannot start and it still shows "Cannot find module urllib3" even after installing it explicitly.</p>
<p>Is there something wrong with the way I installed vim? I had been extra careful to point to miniconda python wherever it's needed. How do I mitigate this issue and get the plugin running again?</p>
| 0 | 2016-09-28T04:52:25Z | 40,113,395 | <p>When I had trouble with dependencies I had to run </p>
<p><code>git submodule update --init --recursive</code></p>
<p>in the YouCompleteMe directory to get the dependencies installed.</p>
<p>Also make sure you have taken all of the other steps here:</p>
<p><a href="https://valloric.github.io/YouCompleteMe/#full-installation-guide" rel="nofollow">https://valloric.github.io/YouCompleteMe/#full-installation-guide</a></p>
<p>One of those steps may fix the issue.</p>
| 0 | 2016-10-18T16:22:48Z | [
"python",
"ubuntu",
"vim",
"conda",
"miniconda"
]
|
Siri-like app: calculating similarities between a query and a predefined set of control phrases | 39,738,327 | <p>I am trying to make a Apple Siri-like application in python in which you give it vocal commands or questions through a microphone, it determines the text version of the inputted audio, and then determines the appropriate action to take based on the meaning of the command/question. I am going to be using the Speech Recognition library to accept microphone input and convert from speech to text (via the IBM Watson Speech to Text API).</p>
<p>The main problem I have with it right now is that when I define an action for the app to execute when the appropriate command is given/question is asked, I don't know how to determine if the said command/question is denoting that action. Let me clarify what I mean by that with an example:</p>
<p>Say we have a action called <code>hello</code>. There are multiple ways for somebody to say "hello" to another person (or in this case, my application), such as:</p>
<ul>
<li>"Hello"</li>
<li>"Hi"</li>
<li>"Howdy"</li>
<li>...Etcetera...</li>
</ul>
<p>Of course, I want all of these ways of saying "hello" to be classified under the action of <code>hello</code>. That is, when someone says "hello", "hi", or "howdy", the response for the action <code>hello</code> should be executed (most likely just the app saying "hello" back in this case).</p>
<p>My first thought on how to solve this was to supply the app with all of or the most common ways to say a certain command/question. So, if I follow the previous example, I would tell the computer that "hello", "hi", and "howdy" all meant the same thing: the <code>hello</code> action. However, this method has a couple flaws. First off, it simply wouldn't understand ways of saying "hello" that weren't hardcoded in, such as "hey". Second off, once the responses for new commands/questions start getting coded in, it would become very tedious entering all the ways to say a certain phrase.</p>
<p>So then, because of the aforementioned problems, I started looking into ways to calculate the similarities between a group of sentences, and a single query. I eventually came across the Gensim library for python. I looked into it and found some very promising information on complex processes such as latent semantic indexing/analysis (LSI/LSA) and Tf-idf. However, it seemed to me like these things were mainly for comparing documents with large word counts as they rely on the frequency of certain terms. Assuming this is true, these processes wouldn't really provide me with accurate results as the commands/questions given to my app will probably be about eight words on average. I could be completely wrong, after all I know very little about these processes.</p>
<p>I also discovered WordNet, and how to work with it in python using the Natural Language Toolkit (NLTK). It looks like it could be useful, but I'm not sure how.</p>
<p>So, finally, I guess my real question here is what would be the best solution to the problem I've mentioned? Should I use one of the methods I've mentioned? Or is there a better way to do what I want that I don't know about?</p>
<p>Any help at all would be greatly appreciated. Thanks in advance.</p>
<p>P.S. Sorry for the wordy explanation; I wanted to be sure I was clear :P</p>
| 0 | 2016-09-28T04:54:24Z | 39,742,604 | <p>This is a hard problem. It is also the subject of <a href="http://alt.qcri.org/semeval2017/task11" rel="nofollow">Task 11</a> of this year's set of Semantic evaluation challenges (<a href="http://alt.qcri.org/semeval2017/" rel="nofollow">Semeval 2017</a>). So take a look at the <a href="http://alt.qcri.org/semeval2017/task11/index.php?id=task-description" rel="nofollow">task description</a>, which will give you a road map for how this problem can be solved. The task also comes with a suite of training data, which is essential for approaching a problem like this. The challenge is still ongoing, but eventually you'll be able to learn from the solutions as well. </p>
<p>So the short answer to "how do I determine if some command/question is denoting a certain action" is: Use the training data from Semeval2017 (or your own of course), and write a classifier. The <a href="http://nltk.org/book" rel="nofollow">nltk book</a> can help you get up to speed with writing classifiers.</p>
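<p>As a toy illustration of the classification idea (the intents and example phrases below are invented; a real system would be trained on a corpus such as the Semeval data):</p>

```python
import difflib

# Hypothetical intents mapped to example phrasings (made up for illustration).
INTENTS = {
    'hello': ['hello', 'hi', 'howdy', 'hey there'],
    'time': ['what time is it', 'tell me the time'],
}

def classify(query):
    """Return the intent whose example phrase is most similar to the query."""
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = difflib.SequenceMatcher(None, query.lower(), phrase).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(classify('hey'))               # hello
print(classify('what is the time'))  # time
```

<p>This nearest-phrase matcher still needs example phrases per action, but it tolerates unseen variations such as "hey"; swapping the similarity measure for a trained classifier is the natural next step.</p>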
| 3 | 2016-09-28T08:58:50Z | [
"python",
"nlp",
"nltk",
"wordnet",
"gensim"
]
|
Quicksort: Infinite Loop | 39,738,341 | <p>The following implementation of <code>QuickSort</code> runs into infinite loop</p>
<pre class="lang-py prettyprint-override"><code>def partition(arr, lo, hi):
    pivot = lo
    for i in range(lo+1, hi+1):
        if arr[i] <= arr[lo]:
            pivot += 1
            arr[i], arr[pivot] = arr[pivot], arr[i]
    arr[lo], arr[pivot] = arr[pivot], arr[lo]
    return pivot

def quickSort(arr, lo=0, hi=None):
    if not hi: hi = len(arr) - 1
    if lo >= hi: return
    pivot = partition(arr, lo, hi)
    quickSort(arr, lo, pivot-1)
    quickSort(arr, pivot+1, hi)

arr = [5,3,2,-9,1,6,0,-1,9,6,2,5]
quickSort(arr)
print(arr)
</code></pre>
<p>I presume the <code>partition</code> function is the culprit. Not able to figure out the mistake. </p>
<p>Thanks</p>
| 0 | 2016-09-28T04:55:20Z | 39,738,410 | <p>At one point, the loop in your partition def stops making progress. See the trace below:</p>
<pre><code>[5, 3, 2, -9, 1, 6, 0, -1, 9, 6, 2, 5]
lo 0 hi 11
pivot 8
[5, 3, 2, -9, 1, 0, -1, 2, 5, 6, 6, 9]
lo 0 hi 7
pivot 7
[2, 3, 2, -9, 1, 0, -1, 5, 5, 6, 6, 9]
lo 0 hi 6
pivot 5
[-1, 2, -9, 1, 0, 2, 3, 5, 5, 6, 6, 9]
lo 0 hi 4
pivot 1
[-9, -1, 2, 1, 0, 2, 3, 5, 5, 6, 6, 9]
lo 0 hi 11
pivot 0
[-9, -1, 2, 1, 0, 2, 3, 5, 5, 6, 6, 9]
lo 1 hi 11
</code></pre>
<p>partition doesn't seem to be doing the entire job correctly; it is a two-step process. See a sample partition def below. Also, you can refer to the original source <a href="http://interactivepython.org/runestone/static/pythonds/SortSearch/TheQuickSort.html" rel="nofollow">here</a>: </p>
<pre><code> def partition(alist,first,last):
pivotvalue = alist[first]
leftmark = first+1
rightmark = last
done = False
while not done:
while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
leftmark = leftmark + 1
while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
rightmark = rightmark -1
if rightmark < leftmark:
done = True
else:
temp = alist[leftmark]
alist[leftmark] = alist[rightmark]
alist[rightmark] = temp
temp = alist[first]
alist[first] = alist[rightmark]
alist[rightmark] = temp
return rightmark
</code></pre>
| 1 | 2016-09-28T05:01:28Z | [
"python",
"sorting",
"infinite-loop",
"quicksort"
]
|
Quicksort: Infinite Loop | 39,738,341 | <p>The following implementation of <code>QuickSort</code> runs into infinite loop</p>
<pre class="lang-py prettyprint-override"><code>def partition(arr, lo, hi):
    pivot = lo
    for i in range(lo+1, hi+1):
        if arr[i] <= arr[lo]:
            pivot += 1
            arr[i], arr[pivot] = arr[pivot], arr[i]
    arr[lo], arr[pivot] = arr[pivot], arr[lo]
    return pivot

def quickSort(arr, lo=0, hi=None):
    if not hi: hi = len(arr) - 1
    if lo >= hi: return
    pivot = partition(arr, lo, hi)
    quickSort(arr, lo, pivot-1)
    quickSort(arr, pivot+1, hi)

arr = [5,3,2,-9,1,6,0,-1,9,6,2,5]
quickSort(arr)
print(arr)
</code></pre>
<p>I presume the <code>partition</code> function is the culprit. Not able to figure out the mistake. </p>
<p>Thanks</p>
| 0 | 2016-09-28T04:55:20Z | 39,740,077 | <p>The problem is with your initialization code for <code>hi</code>:</p>
<pre><code>if not hi: hi = len(arr) - 1
</code></pre>
<p>The condition <code>not hi</code> is <code>True</code> if <code>hi</code> is zero.</p>
<p>This causes <code>quicksort(arr, 0, 0)</code> and <code>quicksort(arr, 1, 0)</code> (one of which will almost always occur in the process of sorting) to try to sort most of the list again, rather than being an end to the recursion.</p>
<p>You should use the more specific test <code>if hi is None</code> instead of <code>if not hi</code>. This way you'll never incorrectly reinitialize <code>hi</code> if it's zero, and the initialization code will only run if <code>hi</code> is <code>None</code> because it was not passed in to the function by the user.</p>
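<p>A quick way to sanity-check this fix: keep the question's code unchanged except for the <code>hi</code> initialization test (this sketch just restates the code above with that one change).</p>

```python
def partition(arr, lo, hi):
    pivot = lo
    for i in range(lo+1, hi+1):
        if arr[i] <= arr[lo]:
            pivot += 1
            arr[i], arr[pivot] = arr[pivot], arr[i]
    arr[lo], arr[pivot] = arr[pivot], arr[lo]
    return pivot

def quickSort(arr, lo=0, hi=None):
    if hi is None:          # only runs when hi was not passed in
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = partition(arr, lo, hi)
    quickSort(arr, lo, pivot-1)
    quickSort(arr, pivot+1, hi)

arr = [5, 3, 2, -9, 1, 6, 0, -1, 9, 6, 2, 5]
quickSort(arr)
print(arr)
```

With this change, recursive calls like <code>quickSort(arr, 1, 0)</code> no longer reset <code>hi</code> back to the end of the list, so the recursion terminates.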
| 1 | 2016-09-28T06:56:15Z | [
"python",
"sorting",
"infinite-loop",
"quicksort"
]
|
How to make flask jinja use variables from python list as a jinja expression variables | 39,738,423 | <p>So I have a list:</p>
<pre><code>ABC = ['{{ row[0] }}','{{ row[1] }}','{{ row[2] }}']
</code></pre>
<p>In the HTML template, I want to use each item in the list ABC as a Jinja expression. How can I do it? Here is my HTML table template:</p>
<pre><code>{% for row in reports %}
<tr>
{% for item in ABC %}
<td>{{ item }}</td>
{% endfor %}
</tr>
{% endfor %}
</code></pre>
<p>I tried to remove {{ }} from each item in the ABC list, but it didn't work. It seemed like each item in the HTML template was treated as a literal string that could not be evaluated against each row in reports.</p>
<p>In the HTML page, the table renders like this:</p>
<p>Column1 | Column2 | Column3</p>
<p>{{ row[0] }} | {{ row[1] }} | {{ row[2] }}</p>
<p>I want it to work like this:</p>
<pre><code>{% for row in reports %}
<tr>
<td>{{ row[0] }}</td>
<td>{{ row[1] }}</td>
<td>{{ row[2] }}</td>
</tr>
{% endfor %}
</code></pre>
<p>Edit: I have changed to another solution.</p>
| 2 | 2016-09-28T05:02:47Z | 39,745,979 | <p>The problem is how you are creating the list. Right now you're giving it strings that identify the items you want to output. Instead, give it what you want to output. </p>
<pre><code>ABC = [row[0], row[1], row[2]]
</code></pre>
<p>Edit: Since you are trying to print all of the columns in the template you can just iterate over them there. </p>
<pre><code>{% for column in row %}
<td>{{ column }}</td>
{% endfor %}
</code></pre>
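<p>A minimal sketch to try this outside Flask, using plain <code>jinja2</code> (the template string and data here are made up for illustration):</p>

```python
from jinja2 import Template

# stand-in for the Flask-rendered template: iterate the row's columns directly
tpl = Template(
    "{% for row in reports %}<tr>"
    "{% for column in row %}<td>{{ column }}</td>{% endfor %}"
    "</tr>{% endfor %}"
)

reports = [(1, "a", "x"), (2, "b", "y")]
html = tpl.render(reports=reports)
print(html)
```

Passing the rows themselves (not strings that look like Jinja expressions) lets the template evaluate each column.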
| 0 | 2016-09-28T11:20:36Z | [
"python",
"python-3.x",
"flask",
"jinja2"
]
|
What's this after aggregation in django? | 39,738,438 | <p>I see the following code from a django project. I understand it's aggregation, but what's ['kw__sum'] after the aggregation?</p>
<pre><code>Project.objects.filter(project = project).aggregate(Sum('kw'))['kw__sum']
</code></pre>
<p>Thanks</p>
| 1 | 2016-09-28T05:03:56Z | 39,738,474 | <p>Here, if you look at the <a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/#cheat-sheet" rel="nofollow">examples</a>, you will see that <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#aggregate" rel="nofollow"><code>aggregate</code></a> returns a dictionary, so the last part is just a dict lookup:</p>
<pre><code>aggregation = Project.objects.filter(project = project).aggregate(Sum('kw'))
result = aggregation['kw__sum']
</code></pre>
<p>From the docs:</p>
<blockquote>
<p>Returns a dictionary of aggregate values (averages, sums, etc.) calculated over the QuerySet. Each argument to aggregate() specifies a value that will be included in the dictionary that is returned.</p>
</blockquote>
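<p>One extra detail worth knowing (an addition, not part of the docs quote above): for an empty queryset the summed value in that dictionary is <code>None</code>, so a guarded lookup is safer. A plain-Python sketch of the returned shape:</p>

```python
# .aggregate(Sum('kw')) returns a plain dict shaped like this;
# for an empty queryset the value is None rather than 0
aggregation = {'kw__sum': None}

total = aggregation.get('kw__sum') or 0
print(total)
```
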
| 2 | 2016-09-28T05:06:23Z | [
"python",
"django",
"aggregation"
]
|
Parse decodeURIComponent JSON string with Python | 39,738,443 | <p>I have a "deep" JSON string that I need to pass as GET vars in a URL. It looks like the following:</p>
<pre><code>{
"meta": {
"prune": true,
"returnFields": ["gf", "gh", "gh", "rt"],
"orient": "split"
},
"indicators": [{
"type": "beta",
"computeOn": "gf",
"parameters": {
"timeperiod": 5,
"nbdevup": 2,
"nbdevdn": 2,
"matype": 0
}
}, {
"type": "alpha",
"computeOn": "gf",
"parameters": {
"timeperiod": 30
}
}]
};
</code></pre>
<p>When encoding using <code>jQuery.param</code>, the result is as follows:</p>
<pre><code>var recursiveEncoded = jQuery.param(body);
console.log(recursiveEncoded);
meta%5Bprune%5D=true&meta%5BreturnFields%5D%5B%5D=gf&meta%5BreturnFields%5D%5B%5D=gh&meta%5BreturnFields%5D%5B%5D=gh&meta%5BreturnFields%5D%5B%5D=rt&meta%5Borient%5D=split&indicators%5B0%5D%5Btype%5D=beta&indicators%5B0%5D%5BcomputeOn%5D=gf&indicators%5B0%5D%5Bparameters%5D%5Btimeperiod%5D=5&indicators%5B0%5D%5Bparameters%5D%5Bnbdevup%5D=2&indicators%5B0%5D%5Bparameters%5D%5Bnbdevdn%5D=2&indicators%5B0%5D%5Bparameters%5D%5Bmatype%5D=0&indicators%5B1%5D%5Btype%5D=alpha&indicators%5B1%5D%5BcomputeOn%5D=gf&indicators%5B1%5D%5Bparameters%5D%5Btimeperiod%5D=30
</code></pre>
<p>Which is decoded to the following:</p>
<pre><code>var recursiveDecoded = decodeURIComponent( jQuery.param(body) );
console.log(recursiveDecoded);
meta[prune]=true&meta[returnFields][]=gf&meta[returnFields][]=gh&meta[returnFields][]=gh&meta[returnFields][]=rt&meta[orient]=split&indicators[0][type]=beta&indicators[0][computeOn]=gf&indicators[0][parameters][timeperiod]=5&indicators[0][parameters][nbdevup]=2&indicators[0][parameters][nbdevdn]=2&indicators[0][parameters][matype]=0&indicators[1][type]=alpha&indicators[1][computeOn]=gf&indicators[1][parameters][timeperiod]=30
</code></pre>
<p>If just using a serialized string result on the server leaves the string as the key in a key value pair:</p>
<pre><code>"query": {
"{\"meta\":{\"prune\":true,\"returnFields\":[\"gf\",\"gh\",\"gh\",\"rt\"],\"orient\":\"split\"},\"indicators\":[{\"type\":\"beta\",\"computeOn\":\"gf\",\"parameters\":{\"timeperiod\":5,\"nbdevup\":2,\"nbdevdn\":2,\"matype\":0}},{\"type\":\"alpha\",\"computeOn\":\"gf\",\"parameters\":{\"timeperiod\":30}}]}": ""
},
</code></pre>
<p>My backend processing is done with Python. What modules exist to convert the above result to a <code>dict</code> resembling the original object?</p>
| 1 | 2016-09-28T05:04:11Z | 39,738,683 | <p>Well, since we hashed it out in the comments, I'll post the answer here for posterity.</p>
<p>Use a combination of <code>JSON.stringify</code> on the JavaScript side to serialize your data structure and <code>json.loads</code> on the Python side to deserialize it. Pass the serialized structure as a query string parameter ("query" in your example) and then read the value from that query string parameter in Python. Huzzah!</p>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify</a></p>
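<p>On the Python side the standard library covers both halves of that round trip. A minimal sketch — the parameter name <code>query</code> and the payload contents are illustrative assumptions:</p>

```python
import json
from urllib.parse import parse_qs, quote

# what JSON.stringify + encodeURIComponent would have produced client-side
payload = {"meta": {"prune": True, "orient": "split"},
           "indicators": [{"type": "beta", "computeOn": "gf"}]}
query_string = "query=" + quote(json.dumps(payload))

# server side: pull the parameter out of the query string and deserialize it
params = parse_qs(query_string)          # percent-decoding happens here
body = json.loads(params["query"][0])    # back to a nested dict
print(body["meta"]["orient"])
```
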
| 0 | 2016-09-28T05:25:50Z | [
"javascript",
"jquery",
"python",
"json"
]
|
Return value from multiprocessing.Queue() in multiprocessing Python | 39,738,504 | <p>I run a simple multiprocess program (the code below). I just make 2 processors, then initial a queue to store the result.</p>
<p>I wonder why, with the same name <code>q</code>, it prints out a different value each time.
I know the queue stores 2 return values, from <code>pro1</code> and <code>pro2</code>, but I expected something like: </p>
<pre><code>q = [1,2]
</code></pre>
<p>or</p>
<pre><code>q=[2,1] #depend on which one runs first
</code></pre>
<p>I can not figure out how one variable <code>q</code> can be 1, or 2.</p>
<p>It makes me so confused.
Thank you.</p>
<p><strong>The code:</strong></p>
<pre><code>import multiprocessing

def run(ID, q):
    print("Starting thread %s " % (ID))
    q.put(ID)
    return None

if __name__ == '__main__':
    q = multiprocessing.Queue() #store the result
    pro1 = multiprocessing.Process(target=run, args=(1,q))
    pro2 = multiprocessing.Process(target=run, args=(2,q))
    pro1.start()
    pro2.start()
    pro1.join()
    pro2.join()
    print("q is ", q.get())
    print("another q is ", q.get())
</code></pre>
<p><em>The result</em>:</p>
<pre><code>Starting thread 2
Starting thread 1
('q is ', 1)
('another q is ', 2)
</code></pre>
| 3 | 2016-09-28T05:09:40Z | 39,738,563 | <p>I'm not 100% sure what you're confused about exactly, but here's what I think will help, but please correct me if I'm off track. Your queue has the outputs of both processes on it. You're just getting the one that finished first.</p>
<p>For example in your case it looks like process 1 finishes first, and then process 2.</p>
<p>Thus after both processes are complete the queue will look like</p>
<pre><code>[1,2]
</code></pre>
<p>Then when you call <code>q.get()</code> you get <code>1</code> and the queue now looks like:</p>
<pre><code>[2]
</code></pre>
<p>Then when you call <code>q.get()</code> again you will get <code>2</code> and now the queue is empty.</p>
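<p>The FIFO behaviour described above is easy to check in isolation — a minimal sketch using the queue from a single process:</p>

```python
import multiprocessing

q = multiprocessing.Queue()
q.put(1)
q.put(2)

# get() returns items in the order they were put
first = q.get(timeout=5)
second = q.get(timeout=5)
print(first, second)
```

In the question's program the order of the two <code>put()</code> calls depends on which process finishes first, which is why the pairing of values to print statements can vary between runs.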
| 1 | 2016-09-28T05:15:58Z | [
"python",
"multiprocessing"
]
|
Python requests, how to add content-type to multipart/form-data request | 39,738,525 | <p>I Use python requests to upload a file with PUT method.</p>
<p>The remote API accepts a request only if the body contains an attribute
Content-Type: image/png, not as a request header.</p>
<p>When I use python requests, the request is rejected because of the missing attribute.</p>
<p><a href="http://i.stack.imgur.com/8KQ0F.png" rel="nofollow"><img src="http://i.stack.imgur.com/8KQ0F.png" alt="This request is rejected on this image"></a></p>
<p>I tried to use a proxy and, after adding the missing attribute, it was accepted.</p>
<p>See the highlighted text</p>
<p><a href="http://i.stack.imgur.com/zzFBn.png" rel="nofollow"><img src="http://i.stack.imgur.com/zzFBn.png" alt="Valid request"></a> </p>
<p>but I can not add it programmatically. How can I do it?</p>
<p>And this is my code:</p>
<pre><code>files = {'location[logo]': open(fileinput,'rb')}
ses = requests.session()
res = ses.put(url=u,files=files,headers=myheaders,proxies=proxdic)
</code></pre>
| 2 | 2016-09-28T05:12:23Z | 39,742,334 | <p>As per the docs, you need to add two more arguments to the tuple: the filename and the content type:</p>
<pre><code>#  field name               filename     file object       content-type
files = {'location[logo]': ("name.png", open(fileinput),'image/png')}
</code></pre>
<p>You can see an example below:</p>
<pre><code>In [1]: import requests
In [2]: files = {'location[logo]': ("foo.png", open("/home/foo.png"),'image/png')}
In [3]:
In [3]: ses = requests.session()
In [4]: res = ses.put("http://httpbin.org/put",files=files)
In [5]: print(res.request.body[:200])
--0b8309abf91e45cb8df49e15208b8bbc
Content-Disposition: form-data; name="location[logo]"; filename="foo.png"
Content-Type: image/png
�PNG
IHDR��:d�tEXtSoftw
</code></pre>
<p>For future reference, <a href="https://github.com/kennethreitz/requests/issues/1495" rel="nofollow">this comment</a> in an old related issue explains all the variations:</p>
<pre><code># 1-tuple (not a tuple at all)
{fieldname: file_object}
# 2-tuple
{fieldname: (filename, file_object)}
# 3-tuple
{fieldname: (filename, file_object, content_type)}
# 4-tuple
{fieldname: (filename, file_object, content_type, headers)}
</code></pre>
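<p>You can check the effect of the 3-tuple without hitting any network by preparing the request locally. A sketch with dummy bytes and an illustrative URL standing in for the real file and endpoint:</p>

```python
import requests

# dummy bytes stand in for open(fileinput, 'rb')
files = {'location[logo]': ("name.png", b"\x89PNG fake bytes", 'image/png')}
prepared = requests.Request('PUT', 'http://example.com/upload', files=files).prepare()

# the multipart body now carries the per-part Content-Type header
print(b'Content-Type: image/png' in prepared.body)
```
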
| 0 | 2016-09-28T08:46:31Z | [
"python",
"file",
"http",
"upload",
"python-requests"
]
|
Categorize each feature from a dataset by percentiles via python | 39,738,611 | <p>I'm trying to figure a way to categorize each column in my dataset based on it's percentiles. For example, consider the column:</p>
<pre><code> ticket
24160
113781
113781
113781
113781
19952
13502
112050
11769
</code></pre>
<p>The 20th percentile of the column above is 1350. Basically I want to convert that column into a categorical variable where all values from the 0-20th percentile = 1, all values from the 20-40th percentile = 2, all values from the 40-60th percentile = 3 and so on. Thus the ticket feature will be a categorical variable with either 1,2,3,4 or 5. I want to apply this conversion to every column in my dataset besides the last column. So far I've coded:</p>
 <pre><code>import numpy as np
import pandas as pd

dataset = pd.read_csv('somedataset.csv')

def func(x):
    if min(x) <= x < np.percentile(x, 20):
        return 1
    elif np.percentile(x, 20) <= x < np.percentile(x, 40):
        return 2
    elif np.percentile(x, 40) <= x < np.percentile(x, 60):
        return 3
    elif np.percentile(x, 60) <= x < np.percentile(x, 80):
        return 4
    elif x == max(x):
        return 5

dataset[:] = dataset[:].apply(func)
</code></pre>
<p>I don't know how to apply this function to each column besides the last column within my dataset. I would greatly appreciate any feedback!</p>
| 0 | 2016-09-28T05:20:06Z | 39,750,582 | <pre><code>np.floor(df[df.columns[:-1]].rank() / len(df) / .2).astype(int) + 1
</code></pre>
<p>The above code returns what you want, with the same column names as original data.</p>
<ol>
<li><code>df[df.columns[:-1]]</code> subsets all but the last column as you requested</li>
<li><code>.rank()</code> gives the integer rank of the item from smallest to largest</li>
<li><code>/ len(df) / .2</code> gives you the percentile bucket </li>
<li><code>np.floor(...).astype(int) + 1</code> gives you the bucket as an integer starting at 1</li>
</ol>
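<p>A small sketch of the expression above on made-up data. One adjustment not part of the answer: with this exact formula the single top-ranked value lands in bucket 6, so a <code>clip</code> is added here to keep everything inside 1–5:</p>

```python
import numpy as np
import pandas as pd

# toy data: one feature column plus a last column to be left alone
df = pd.DataFrame({'ticket': [24160, 113781, 113782, 113783, 113784,
                              19952, 13502, 112050, 11769, 99999],
                   'target': range(10)})

buckets = np.floor(df[df.columns[:-1]].rank() / len(df) / .2).astype(int) + 1
buckets = buckets.clip(upper=5)   # keep the top-ranked value inside bucket 5
print(buckets['ticket'].tolist())
```
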
| 0 | 2016-09-28T14:35:41Z | [
"python"
]
|
AttributeError: '_io.TextIOWrapper' object has no attribute 'lower' for txt file | 39,738,661 | <p>Here is my code (A pig <strong>Latin translator</strong> from a text file): </p>
<pre><code>f = open('Assignment_4.txt', 'r+')
for line in f:
    print(line)

def pigLatin():
    var = 'ay'
    wordL = f.lower()
    firstLetter = wordL[0]
    pigLatin = wordL + firstLetter + var
    pigLatin = pigLatin[1:]
    print(pigLatin)
</code></pre>
<p>It works for defined strings but won't for the file. Help is appreciated!</p>
| -1 | 2016-09-28T05:23:58Z | 39,738,851 | <p><strong>Points:</strong></p>
<ul>
<li><code>lower()</code> works on strings. You are trying to use it with the file handle <code>f</code>; that's why you are getting this error. </li>
<li>Also, after reading the file line by line, you should call <code>pigLatin()</code> for each line as <code>pigLatin(line)</code>. So now the <code>pigLatin()</code> function expects one argument.</li>
<li>Also, close the file at the end with <code>f.close()</code>. This happens automatically if you use a <code>with</code> statement to open the file.</li>
</ul>
<p><strong>Code with comments inline:</strong></p>
<pre><code>def pigLatin(stuff_to_be_changed):
    var = 'ay'
    wordL = stuff_to_be_changed.lower()
    firstLetter = wordL[0]
    pigLatin = wordL + firstLetter + var
    pigLatin = pigLatin[1:]
    print(pigLatin)

#For string
string = "I am to change"
#Call function
pigLatin(string)

f = open('Assignment_4.txt', 'r+')
#For file
for line in f:
    print(line)
    #Call function
    pigLatin(line)

#Close the file
f.close()
</code></pre>
| 0 | 2016-09-28T05:41:02Z | [
"python"
]
|
AttributeError: '_io.TextIOWrapper' object has no attribute 'lower' for txt file | 39,738,661 | <p>Here is my code (A pig <strong>Latin translator</strong> from a text file): </p>
<pre><code>f = open('Assignment_4.txt', 'r+')
for line in f:
    print(line)

def pigLatin():
    var = 'ay'
    wordL = f.lower()
    firstLetter = wordL[0]
    pigLatin = wordL + firstLetter + var
    pigLatin = pigLatin[1:]
    print(pigLatin)
</code></pre>
<p>It works for defined strings but won't for the file. Help is appreciated!</p>
| -1 | 2016-09-28T05:23:58Z | 39,738,885 | <p>The error is quite right - file objects don't have a <code>lower()</code> method - before you can use your function you need to <code>read</code> a line of text from your file and <code>split</code> it into separate words. (Note that it is never a good idea to use the same name for a variable and a method as it can cause confusion.)</p>
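<p>A minimal sketch of that read-then-split flow, reusing the question's letter-shuffling rule (the function names here are my own):</p>

```python
def pig_latin_word(word):
    # same rule as the question: lowercase, move first letter to the end, add 'ay'
    w = word.lower()
    return w[1:] + w[0] + "ay"

def pig_latin_line(line):
    # split the line into words before transforming each one
    return " ".join(pig_latin_word(word) for word in line.split())

# with a real file you would do: for line in open('Assignment_4.txt'): ...
print(pig_latin_line("Hello world"))
```
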
| 0 | 2016-09-28T05:43:14Z | [
"python"
]
|
How do you get a probability of all classes to predict without building a classifier for each single class? | 39,738,703 | <p>Given a classification problem, sometimes we do not just predict a class, but need to return the probability that it is a class.</p>
<p>i.e. P(y=0|x), P(y=1|x), P(y=2|x), ..., P(y=C|x)</p>
<p>Without building a new classifier to predict y=0, y=1, y=2... y=C respectively. Since training C classifiers (let's say C=100) can be quite slow.</p>
<p>What can be done to do this? What classifiers naturally can give all probabilities easily (one I know is using neural network with 100 out nodes)? But if I use traditional random forests, I can't do that, right? I use the Python Scikit-Learn library.</p>
| 0 | 2016-09-28T05:27:11Z | 39,742,380 | <p>If you want probabilities, look for sklearn-classifiers that have method: predict_proba()</p>
<p>Sklearn documentation about multiclass: <a href="http://scikit-learn.org/stable/modules/multiclass.html" rel="nofollow">http://scikit-learn.org/stable/modules/multiclass.html</a></p>
<p>All scikit-learn classifiers are capable of multiclass classification. So you don't need to build 100 models yourself.</p>
<p>Below is a summary of the classifiers supported by scikit-learn grouped by strategy:</p>
<ul>
<li>Inherently multiclass: Naive Bayes, LDA and QDA, Decision Trees,
Random Forests, Nearest Neighbors, setting multi_class='multinomial'
in sklearn.linear_model.LogisticRegression. </li>
<li>Support multilabel: Decision Trees, Random Forests, Nearest Neighbors, Ridge Regression.</li>
<li>One-Vs-One: sklearn.svm.SVC. </li>
<li>One-Vs-All: all linear models except sklearn.svm.SVC.</li>
</ul>
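<p>A small sketch of <code>predict_proba()</code> on toy data — the classifier choice and data here are illustrative:</p>

```python
from sklearn.linear_model import LogisticRegression

# three classes, one feature, tiny made-up training set
X = [[0.0], [0.2], [1.0], [1.2], [2.0], [2.2]]
y = [0, 0, 1, 1, 2, 2]

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba([[0.1]])   # one row: P(y=0|x), P(y=1|x), P(y=2|x)
print(proba)
```

One call gives the full probability vector over all classes, so there is no need to train one classifier per class.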
| 2 | 2016-09-28T08:48:18Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
How do you get a probability of all classes to predict without building a classifier for each single class? | 39,738,703 | <p>Given a classification problem, sometimes we do not just predict a class, but need to return the probability that it is a class.</p>
<p>i.e. P(y=0|x), P(y=1|x), P(y=2|x), ..., P(y=C|x)</p>
<p>Without building a new classifier to predict y=0, y=1, y=2... y=C respectively. Since training C classifiers (let's say C=100) can be quite slow.</p>
<p>What can be done to do this? What classifiers naturally can give all probabilities easily (one I know is using neural network with 100 out nodes)? But if I use traditional random forests, I can't do that, right? I use the Python Scikit-Learn library.</p>
| 0 | 2016-09-28T05:27:11Z | 39,743,987 | <p>Random forests do indeed give P(Y/x) for multiple classes. In most cases
P(Y/x) can be taken as:</p>
<p>P(Y/x)= the number of trees which vote for the class/Total Number of trees.</p>
<p>However, you can play around with this. For example, in one case the highest class has 260 votes, the 2nd class 230 votes and the other 5 classes 10 votes each; in another case class 1 has 260 votes and the other classes have 40 votes each. You might feel more confident in your prediction in the 2nd case compared to the 1st case, so you can come up with a confidence metric according to your use case.</p>
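<p>That vote-to-probability rule is just a normalized count. A plain-Python sketch using the first scenario's numbers from this answer:</p>

```python
from collections import Counter

# votes from the first scenario: 260 / 230 / five classes with 10 each
votes = Counter(class_1=260, class_2=230, class_3=10, class_4=10,
                class_5=10, class_6=10, class_7=10)

total = sum(votes.values())
probs = {label: n / total for label, n in votes.items()}

# margin between the top two vote-getters as a crude confidence signal
(top, n1), (_, n2) = votes.most_common(2)
confidence = (n1 - n2) / total
print(probs[top], confidence)
```
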
| 0 | 2016-09-28T09:53:41Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
How to measure similarity between two python code blocks? | 39,738,872 | <p>Many would want to measure code similarity to catch plagiarisms, however my intention is to cluster a set of python code blocks (say answers to the same programming question) into different categories and distinguish different approaches taken by students. </p>
<p>If you have any idea how this could be achieved, I would appreciate it if you share it here.</p>
| 1 | 2016-09-28T05:42:31Z | 39,738,985 | <p>One approach would be to count the number of functions, objects, and keywords <em>possibly grouped into categories such as branching, creating, manipulating, etc.</em>, and the number of variables of each type, without relying on the methods and variables having the same name(s).</p>
<p>For a given problem, similar approaches will tend to come out with similar scores for these; e.g., a student who used a decision tree would have a high number of branch statements, while one who used a decision table would have far fewer.</p>
<p>This approach would be much quicker to implement than parsing the code structure and comparing the results.</p>
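<p>In Python this kind of counting can be sketched with the standard <code>ast</code> module — node-type counts ignore identifier names entirely. The similarity measure below is an illustrative choice, not part of the answer:</p>

```python
import ast
from collections import Counter

def node_counts(source):
    """Count AST node types, ignoring names, literals and formatting."""
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

def overlap(a, b):
    """Crude similarity: shared node counts over the larger total."""
    shared = sum((a & b).values())
    total = max(sum(a.values()), sum(b.values()))
    return shared / total

iterative = node_counts(
    "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s")
builtin = node_counts("def total(xs):\n    return sum(xs)")

print(overlap(iterative, iterative), overlap(iterative, builtin))
```

Two students who renamed everything but wrote the same loop would still score close to 1.0, while structurally different approaches score lower.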
| 1 | 2016-09-28T05:51:30Z | [
"python",
"compilation",
"comparison",
"abstract-syntax-tree"
]
|
How to measure similarity between two python code blocks? | 39,738,872 | <p>Many would want to measure code similarity to catch plagiarisms, however my intention is to cluster a set of python code blocks (say answers to the same programming question) into different categories and distinguish different approaches taken by students. </p>
<p>If you have any idea how this could be achieved, I would appreciate it if you share it here.</p>
| 1 | 2016-09-28T05:42:31Z | 39,741,309 | <p>You can choose any scheme you like that essentially hashes the contents of the code blocks, and place code blocks with identical hashes into the same category.</p>
<p>Of course, what will turn out to be similar will then depend highly on how you defined the hashing function. For instance, a truly stupid hashing function H(code)==0 will put everything in the same bin.</p>
<p>A hard problem is finding a hashing function that classifies code blocks in a way that seems similar in a natural sense. With lots of research, nobody has yet found anything better to judge this than <em>I'll know if they are similar when I see them</em>.</p>
<p>You surely do not want it to be dependent on layout/indentation/whitespace/comments, or slight changes to these will classify blocks differently even if their semantic content is identical. </p>
<p>There are three major schemes people have commonly used to find duplicated (or similar) code:</p>
<ul>
<li><p>Metrics-based schemes, which compute the hash by counting various types of operators and operands to compute a metric. (Note: this uses lexical tokens.) These often operate only at the function level. I know of no practical tools based on this.</p></li>
<li><p>Lexically based schemes, which break the input stream into lexemes, convert identifiers and literals into fixed special constants (e.g, treat them as undifferentiated), and then essentially hash N-grams (a sequence of N tokens) over these sequences. There are many clone detectors based on essentially this idea; they work tolerably well, but also find stupid matches because nothing forces alignment with program structure boundaries.
The sequence </p>
<pre><code> return ID; } void ID ( int ID ) {
</code></pre></li>
</ul>
<p>is an 11-gram which occurs frequently in C-like languages but clearly isn't a useful clone. The result is that false positives tend to occur, e.g., you get claimed matches where there isn't one.</p>
<ul>
<li>Abstract syntax tree based matching (hashing over subtrees), which automatically aligns clones to language boundaries by virtue of using the ASTs, which represent the language structures directly. (I'm the author of the original paper on this, and built a commercial product CloneDR based on the idea; see my bio.) These tools have the advantage that they can match code that contains sequences of tokens of different lengths in the middle of a match, e.g., one statement (of arbitrary size) is replaced by another.</li>
</ul>
<p>This paper provides a survey of the various techniques: <a href="http://www.cs.usask.ca/~croy/papers/2009/RCK_SCP_Clones.pdf" rel="nofollow">http://www.cs.usask.ca/~croy/papers/2009/RCK_SCP_Clones.pdf</a>. It shows that AST-based clone detection tools appear to be the most effective at producing clones that people agree are similar blocks of code, which seems key to OP's particular interest; see Table 14.</p>
<p>[There are graph-based schemes that match control and data flow graphs. They should arguably produce even better matches but apparently do not do much better in practice.]</p>
| 1 | 2016-09-28T07:54:56Z | [
"python",
"compilation",
"comparison",
"abstract-syntax-tree"
]
|
Python file matching and appending | 39,738,915 | <p>This is one file <code>result.csv</code>:</p>
<pre><code>M11251TH1230
M11543TH4292
M11435TDS144
</code></pre>
<p>This is another file <code>sample.csv</code>:</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>Can I write a Python program to compare both the files and if line in <code>result.csv</code> matches with the first word in the line in <code>sample.csv</code>, then append 1 else append 0 at every line in <code>sample.csv</code>?</p>
| 0 | 2016-09-28T05:45:22Z | 39,739,335 | <p>The following snippet of code will work for you </p>
<pre><code>import csv

with open('result.csv', 'rb') as f:
    reader = csv.reader(f)
    result_list = []
    for row in reader:
        result_list.extend(row)

with open('sample.csv', 'rb') as f:
    reader = csv.reader(f)
    sample_list = []
    for row in reader:
        if row[0] in result_list:
            sample_list.append(row + [1])
        else:
            sample_list.append(row + [0])

with open('sample.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(sample_list)
</code></pre>
| 0 | 2016-09-28T06:14:58Z | [
"python",
"append"
]
|
Python file matching and appending | 39,738,915 | <p>This is one file <code>result.csv</code>:</p>
<pre><code>M11251TH1230
M11543TH4292
M11435TDS144
</code></pre>
<p>This is another file <code>sample.csv</code>:</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>Can I write a Python program to compare both the files and if line in <code>result.csv</code> matches with the first word in the line in <code>sample.csv</code>, then append 1 else append 0 at every line in <code>sample.csv</code>?</p>
| 0 | 2016-09-28T05:45:22Z | 39,739,728 | <pre><code>import pandas as pd

d1 = pd.read_csv("1.csv",names=["Type"])
d2 = pd.read_csv("2.csv",names=["Type","Col2","Col3"])
d2["Index"] = 0
for x in d1["Type"] :
    d2["Index"][d2["Type"] == x] = 1
d2.to_csv("3.csv",header=False)
</code></pre>
<p>Considering "1.csv" and "2.csv" are your csv input files and "3.csv" is the result you needed</p>
| 0 | 2016-09-28T06:36:57Z | [
"python",
"append"
]
|
Python file matching and appending | 39,738,915 | <p>This is one file <code>result.csv</code>:</p>
<pre><code>M11251TH1230
M11543TH4292
M11435TDS144
</code></pre>
<p>This is another file <code>sample.csv</code>:</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>Can I write a Python program to compare both the files and if line in <code>result.csv</code> matches with the first word in the line in <code>sample.csv</code>, then append 1 else append 0 at every line in <code>sample.csv</code>?</p>
| 0 | 2016-09-28T05:45:22Z | 39,741,529 | <p>The solution using <code>csv.reader</code> and <code>csv.writer</code> (<code>csv</code> module):</p>
<pre><code>import csv

newLines = []

# change the file path to the actual one
with open('./data/result.csv', newline='\n') as csvfile:
    data = csv.reader(csvfile)
    items = [''.join(line) for line in data]

with open('./data/sample.csv', newline='\n') as csvfile:
    data = list(csv.reader(csvfile))
    for line in data:
        line.append(1 if line[0] in items else 0)
        newLines.append(line)

with open('./data/sample.csv', 'w', newline='\n') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(newLines)
</code></pre>
<p>The <code>sample.csv</code> contents:</p>
<pre><code>M11435TDS144,STB#1,Router#1,1
M11543TH4292,STB#2,Router#1,1
M11509TD9937,STB#3,Router#1,0
M11543TH4258,STB#4,Router#1,0
</code></pre>
| 0 | 2016-09-28T08:06:33Z | [
"python",
"append"
]
|
Python file matching and appending | 39,738,915 | <p>This is one file <code>result.csv</code>:</p>
<pre><code>M11251TH1230
M11543TH4292
M11435TDS144
</code></pre>
<p>This is another file <code>sample.csv</code>:</p>
<pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre>
<p>Can I write a Python program to compare both the files and if line in <code>result.csv</code> matches with the first word in the line in <code>sample.csv</code>, then append 1 else append 0 at every line in <code>sample.csv</code>?</p>
| 0 | 2016-09-28T05:45:22Z | 39,745,025 | <p>With only one column, I wonder why you made it a <code>result.csv</code>. If it is not going to have any more columns, a simple file read operation would suffice. Converting the data from <code>result.csv</code> to a dictionary will also help it run quickly.</p>
<pre><code>import csv

result_file = "result.csv"
sample_file = "sample.csv"

with open(result_file) as fp:
    result_data = fp.read()
result_dict = dict.fromkeys(result_data.split("\n"))
"""
You can change the above logic, in case you have very few fields on csv like this:
result_data = fp.readlines()
result_dict = {}
for result in result_data:
    key, other_field = result.split(",", 1)
    result_dict[key] = other_field.strip()
"""

#Since sample.csv is a real csv, using csv reader and writer
with open(sample_file, "rb") as fp:
    sample_data = csv.reader(fp)
    output_data = []
    for data in sample_data:
        output_data.append(data + [int(data[0] in result_dict)])

with open(sample_file, "wb") as fp:
    data_writer = csv.writer(fp)
    data_writer.writerows(output_data)
</code></pre>
| 0 | 2016-09-28T10:36:03Z | [
"python",
"append"
]
|
Followup : missing required Charfield in django Modelform is saved as empty string and do not raise an error | 39,739,029 | <p>If I try to save incomplete model instance in Django 1.10, I would expect Django to raise an error. It does not seem to be the case.</p>
<p>models.py:</p>
<pre><code>from django.db import models

class Essai(models.Model):
    ch1 = models.CharField(max_length=100, blank=False)
    ch2 = models.CharField(max_length=100, blank=False)
</code></pre>
<p>So I have two fields not allowed to be empty (default behavior, <code>NOT NULL</code> restriction is applied by Django at MySQL table creation). I expect Django to rase an error if one of the fields is not set before storing.</p>
<p>However, when I create an incomplete instance, the data is stored just fine:</p>
<pre><code>>>> from test.models import Essai
>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.save()
>>> bouh.id
9
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
''
>>>
</code></pre>
<p>I would have expected Django to raise an error. If I force <code>ch2</code> to <code>None</code>, however, it raises an error:</p>
<pre><code>>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.ch2 = None
>>> bouh.save()
Traceback (most recent call last):
(...)
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: test_essai.ch2
>>> bouh.id
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
>>>
</code></pre>
<p>Explanation: Django does not raise an error by default in this simple case because, in SQL, the empty string <code>""</code> is not equivalent to NULL, as stated in <a href="http://stackoverflow.com/questions/17816229/django-model-blank-false-does-not-work">Django model blank=False does not work?</a></p>
<p>Now, if we look at ModelForm behavior, there seems to be an inconsistency in the Django docs:</p>
<p>According to:
<a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#selecting-the-fields-to-use" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#selecting-the-fields-to-use</a></p>
<blockquote>
<p>Django will prevent any attempt to save an incomplete model, so if the model does not allow the missing fields to be empty, and does not provide a default value for the missing fields, any attempt to save() a ModelForm with missing fields will fail. To avoid this failure, you must instantiate your model with initial values for the missing, but required fields: (...)</p>
</blockquote>
<p>That is, a ModelForm should not be saved with a missing field if there is no default value.</p>
<p>So with this ModelForm:</p>
<pre><code>class EssaiModelForm(forms.ModelForm):
    class Meta:
        model = Essai
        fields = ['ch1']
</code></pre>
<p>A form with only one field <code>ch1</code> is generated.</p>
<ul>
<li>If <code>ch1</code> is left empty, the validation fails at <code>EssaiModelFormInstance.is_valid()</code> as expected. </li>
<li>If <code>ch1</code> contains a value, the validation succeeds even though <code>ch2</code> is still missing. Then <code>EssaiModelFormInstance.save()</code> succeeds, contrary to what is claimed in the Django documentation. <code>ch2</code> is the empty string, which satisfies the SQL <code>NOT NULL</code> constraint.</li>
</ul>
<p>So it seems that CharField has an implicit default value, the empty string <code>""</code>, that is <strong>not accepted</strong> in form validation but <strong>is accepted</strong> when calling save(). This may require clarification in the documentation.</p>
| 3 | 2016-09-28T05:54:37Z | 39,739,820 | <p>As the two fields <code>ch1</code> and <code>ch2</code> are always required in your case, all you need to do is modify your model such that </p>
<pre><code>from django.db import models
class Essai(models.Model):
ch1 = models.CharField(max_length=100)
ch2 = models.CharField(max_length=100)
</code></pre>
<p>and include the fields in the form as</p>
<pre><code>class EssaiModelForm(forms.ModelForm):
class Meta:
model = Essai
fields = ['ch1', 'ch2']
</code></pre>
<p>Django will automatically raise a validation error for these fields on <code>is_valid()</code>.</p>
<p>Or, if you want to disallow empty-string inputs to these fields, <code>blank=False</code> will not work.</p>
<p>Refer to this: <a href="http://stackoverflow.com/questions/6194988/django-how-to-set-blank-false-required-false">Django - how to set blank = False, required = False</a></p>
| 0 | 2016-09-28T06:41:36Z | [
"python",
"mysql",
"django",
"validation",
"django-models"
]
|
Followup: missing required CharField in Django ModelForm is saved as empty string and does not raise an error | 39,739,029 | <p>If I try to save an incomplete model instance in Django 1.10, I would expect Django to raise an error. That does not seem to be the case.</p>
<p>models.py:</p>
<pre><code>from django.db import models
class Essai(models.Model):
ch1 = models.CharField(max_length=100, blank=False)
ch2 = models.CharField(max_length=100, blank=False)
</code></pre>
<p>So I have two fields not allowed to be empty (default behavior; the <code>NOT NULL</code> restriction is applied by Django at MySQL table creation). I expect Django to raise an error if one of the fields is not set before storing.</p>
<p>However, when I create an incomplete instance, the data is stored just fine:</p>
<pre><code>>>> from test.models import Essai
>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.save()
>>> bouh.id
9
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
''
>>>
</code></pre>
<p>I would have expected Django to raise an error. If I force <code>ch2</code> to <code>None</code>, however, it raises an error:</p>
<pre><code>>>> bouh = Essai()
>>> bouh.ch1 = "some content for ch1"
>>> bouh.ch2 = None
>>> bouh.save()
Traceback (most recent call last):
(...)
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: test_essai.ch2
>>> bouh.id
>>> bouh.ch1
'some content for ch1'
>>> bouh.ch2
>>>
</code></pre>
<p>Explanation: Django does not raise an error by default in this simple case because, in SQL, the empty string <code>""</code> is not equivalent to NULL, as stated in <a href="http://stackoverflow.com/questions/17816229/django-model-blank-false-does-not-work">Django model blank=False does not work?</a></p>
<p>Now, if we look at ModelForm behavior, there seems to be an inconsistency in the Django docs:</p>
<p>According to:
<a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#selecting-the-fields-to-use" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#selecting-the-fields-to-use</a></p>
<blockquote>
<p>Django will prevent any attempt to save an incomplete model, so if the model does not allow the missing fields to be empty, and does not provide a default value for the missing fields, any attempt to save() a ModelForm with missing fields will fail. To avoid this failure, you must instantiate your model with initial values for the missing, but required fields: (...)</p>
</blockquote>
<p>That is, a ModelForm should not be saved with a missing field if there is no default value.</p>
<p>So with this ModelForm:</p>
<pre><code>class EssaiModelForm(forms.ModelForm):
    class Meta:
        model = Essai
        fields = ['ch1']
</code></pre>
<p>A form with only one field <code>ch1</code> is generated.</p>
<ul>
<li>If <code>ch1</code> is left empty, the validation fails at <code>EssaiModelFormInstance.is_valid()</code> as expected. </li>
<li>If <code>ch1</code> contains a value, the validation succeeds even though <code>ch2</code> is still missing. Then <code>EssaiModelFormInstance.save()</code> succeeds, contrary to what is claimed in the Django documentation. <code>ch2</code> is the empty string, which satisfies the SQL <code>NOT NULL</code> constraint.</li>
</ul>
<p>So it seems that CharField has an implicit default value, the empty string <code>""</code>, that is <strong>not accepted</strong> in form validation but <strong>is accepted</strong> when calling save(). This may require clarification in the documentation.</p>
| 3 | 2016-09-28T05:54:37Z | 39,739,906 | <p>There is a subtle but notable difference between a model and its forms. <code>Model.save()</code> performs no validation, so if it fails it typically raises a database-level <code>IntegrityError</code>. To reduce the risk of failure at the database layer, one needs to call <code>Model.full_clean()</code>, which is what ModelForm does, as documented in <a href="https://docs.djangoproject.com/en/1.10/ref/models/instances/#validating-objects" rel="nofollow">Django's validating objects</a>.</p>
<p>In my opinion, ModelForms are meant to validate and save data entered by a client, and a model instantiated programmatically should go through the same validation before being saved to the database.</p>
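<p>The gap between the two layers can be mimicked in plain Python. The sketch below only illustrates the kind of check <code>full_clean()</code> performs for a non-blank CharField versus what the database's NOT NULL constraint checks; it is not Django's actual implementation:</p>

```python
def validate_required_char(value):
    # form / full_clean() layer: rejects both None and "" when blank=False
    if value is None or value == "":
        raise ValueError("This field cannot be blank.")

def db_not_null_check(value):
    # database layer: NOT NULL only rejects None; "" is a perfectly valid string
    if value is None:
        raise ValueError("NOT NULL constraint failed")

db_not_null_check("")            # passes: "" is not NULL
try:
    validate_required_char("")   # fails: blank=False rejects ""
except ValueError as e:
    print(e)                     # This field cannot be blank.
```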
| 0 | 2016-09-28T06:46:13Z | [
"python",
"mysql",
"django",
"validation",
"django-models"
]
|
Fetching data till the end instead of using multiple next(v)[1] | 39,739,093 | <p>Hi, I have a Python script that grabs data from sample_data.csv before parsing it into out.csv.</p>
<p>Do see the image for better visualisation of sample_data.csv
<a href="http://i.imgur.com/wwi4RC7.jpg" rel="nofollow">http://i.imgur.com/wwi4RC7.jpg</a></p>
<p>My question is: how do I start from the last next(v)[1] in</p>
<pre><code>datetime = next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1]
</code></pre>
<p>and begin all the way to the end of the line, instead of being silly and using multiple next(v)[1] calls? This is an issue because different receipts have different numbers of lines, so I can't have a fixed number of next(v)[1] calls for the transaction.</p>
<blockquote>
<p>transaction= next(v)[1], next(v)[1],
next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1]</p>
</blockquote>
<pre><code>import csv
from itertools import groupby
from operator import itemgetter
import re
with open("sample_data.csv", "rb") as f, open("out.csv", "wb") as out:
reader = csv.reader(f)
next(reader)
writer = csv.writer(out)
writer.writerow(["Receipt ID","Name","Address","Date","Time","Items","Amount","Cost","Total"])
groups = groupby(csv.reader(f), key=itemgetter(0))
for k, v in groups:
id_, name = next(v)
add_date_1, add_date_2 = next(v)[1], next(v)[1]
combinedaddress = add_date_1+ " " +add_date_2
datetime = next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1]
abcd = str(datetime)
dateprinter = re.search('(\d\d/\d\d/\d\d\d\d)\s(\d\d:\d\d)', abcd).group(1)
timeprinter = re.search('(\d\d/\d\d/\d\d\d\d)\s(\d\d:\d\d)', abcd).group(2)
transaction= next(v)[1], next(v)[1], next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1],next(v)[1]
writer.writerow([id_, name, combinedaddress, dateprinter, timeprinter, transaction])
</code></pre>
| 2 | 2016-09-28T05:59:26Z | 39,739,684 | <p>If I understand the question correctly, you could use a list comprehension to finish reading the values in <code>v</code>, like this:</p>
<pre><code>transaction = [ x[1] for x in v ]
</code></pre>
<p>That code will be like grabbing all the remaining <code>next(v)[1]</code> values until the end of <code>v</code>.</p>
<p><strong>Side note</strong>: calling next(v) all the time is quite ugly and impractical; you could instead start by converting <code>v</code> to a list and then use simple list slicing to get what you want:</p>
<pre><code> v = list(v)
id_, name = v[0]
add_date_1, add_date_2 = [x[1] for x in v[1:3]]
...
transaction = [ x[1] for x in v[1234:] ]
</code></pre>
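<p>Putting the two ideas together, here is a minimal, self-contained sketch (the row layout is an assumption based on the question: column 0 is the receipt id, and the trailing lines of each group are the transaction items):</p>

```python
from itertools import groupby
from operator import itemgetter

rows = [
    ["R1", "Alice"],        # id, name
    ["R1", "12 Foo St"],    # address line 1
    ["R1", "Sometown"],     # address line 2
    ["R1", "item A"],       # transaction lines: any number of them
    ["R1", "item B"],
]

for key, group in groupby(rows, key=itemgetter(0)):
    group = list(group)                         # consume the iterator once
    id_, name = group[0]
    address = " ".join(x[1] for x in group[1:3])
    transaction = [x[1] for x in group[3:]]     # everything left, no fixed count
    print(id_, name, address, transaction)
# R1 Alice 12 Foo St Sometown ['item A', 'item B']
```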
| 1 | 2016-09-28T06:34:42Z | [
"python",
"csv"
]
|
Conditional addition of different values to an array using python | 39,739,215 | <p>I want to add 10 if <code>x < 50</code>, 20 if <code>50 <= x < 100</code>, 30 if <code>100 <= x < 150</code>, and 40 for <code>150 <= x < 200</code>. How can I solve this problem? In my array <code>arr</code> I have more than 300 data element. Thanks in advance for your kind co-operation. </p>
<pre><code>arr =[10,20,30,40,50,60,70,80,90,100,120,130,140,150,160,170,180,190,200]
</code></pre>
| -2 | 2016-09-28T06:07:54Z | 39,741,356 | <p>This seems a little bit like a homework exercise, so
I'd explicitly split out the modifications you need to make; that way it's easy to see what the code is doing. Please note: I did not input ALL your rules, just some, so you can see how you could extend it.</p>
<pre><code>x=[10,20,30,40,50,60,70,80,90,100,120,130,140,150,160,170,180,190,200]
for value in x:
oldvalue = value
if value < 50:
value += 10
elif value < 100:
value += 20
else:
value += 30
print("%i => %i" % (oldvalue, value))
</code></pre>
<p>this prints:</p>
<pre><code>10 => 20
20 => 30
30 => 40
40 => 50
50 => 70
60 => 80
70 => 90
80 => 100
....
</code></pre>
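<p>As an alternative, if there were many bands, the if/elif chain could be replaced by a lookup with <code>bisect</code> against a sorted list of boundaries. This sketch covers all four rules from the question (values of 200 and above fall outside the stated rules and would need an extra band):</p>

```python
from bisect import bisect_right

bounds = [50, 100, 150, 200]   # exclusive upper bound of each band
adds   = [10, 20, 30, 40]      # amount added within each band

def bump(x):
    return x + adds[bisect_right(bounds, x)]

print(bump(10))    # 20   (x < 50: add 10)
print(bump(50))    # 70   (50 <= x < 100: add 20)
print(bump(149))   # 179  (100 <= x < 150: add 30)
print(bump(150))   # 190  (150 <= x < 200: add 40)
```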
| 1 | 2016-09-28T07:56:56Z | [
"python",
"arrays",
"conditional",
"add"
]
|
virtualenv hanging forever pythonanywhere | 39,739,224 | <p>I am trying to follow this <a href="https://tutorial.djangogirls.org/en/deploy/" rel="nofollow">tutorial</a> to get a django application up on pythonanywhere, but when trying to create a virtual environment using </p>
<pre><code> virtualenv --python=python3.5 myvenv
</code></pre>
<p>The console hangs</p>
<p>I did this a while ago and I remember it was all quite painless, but now when running this command the console just hangs and I eventually get put in the tarpit. When I interrupt the process I get some errors around Python 2.7:</p>
<pre><code>virtualenv --python=python3.5 myvenv
Running virtualenv with interpreter /usr/bin/python3.5
Using base prefix '/usr'
New python executable in /home/username/myvenv/bin/python3.5
Also creating executable in /home/username/myvenv/bin/python
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 669, in main
Traceback (most recent call last):
raise SystemExit(popen.wait())
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2327, in <module>
File "/usr/lib/python2.7/subprocess.py", line 1376, in wait
pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
File "/usr/lib/python2.7/subprocess.py", line 476, in _eintr_retry_call
return func(*args)
KeyboardInterrupt
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 711, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 944, in create_environment
download=download,
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 900, in install_wheel
call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 767, in call_subprocess
line = stdout.readline()
KeyboardInterrupt
</code></pre>
<p>Not sure if these relate to anything or if they are just because of the interrupt.</p>
| 0 | 2016-09-28T06:08:21Z | 39,739,408 | <p>General practice is to reference the python binary directly, run virtualenv as a module, and specify the directory in which to place the virtualenv. For your example above:</p>
<pre><code>/path/to/python/bin/python3.5 -m virtualenv myvenv
</code></pre>
<p>This will create the virtual environment in myvenv, running Python 3.5. Note: your base install of Python 3.5 must have the virtualenv library installed (either through pip or from source).</p>
<p>Hope this helps! :)</p>
| 0 | 2016-09-28T06:20:14Z | [
"python",
"virtualenv",
"pythonanywhere"
]
|