title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Pandas: Use apply to sum row and column on data frame | 39,859,374 | <pre><code>import datetime
import pandas as pd
import numpy as np
todays_date = datetime.datetime.now().date()
index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D')
columns = ['A','B', 'C']
df = pd.DataFrame(index=index, columns=columns)
df = df.fillna(0) # with 0s rather than NaNs
data = np.array([np.arange(10)]*3).T
df = pd.DataFrame(data, index=index, columns=columns)
</code></pre>
<p>Given the <strong>df</strong>, I would like to group by each column and apply a function that calculates, for each date, the sum of the values divided by the total for that group (A, B, C).</p>
<p>Example:</p>
<pre><code>def total_calc(grp):
    sum_of_group = np.sum(grp)
    return sum_of_group
</code></pre>
<p>I am trying to use the 'apply' function on my data frame in this fashion, but <strong>axis=1</strong> only works on rows and <strong>axis=0</strong> works on columns, and I want to get both data points for each group.</p>
<pre><code>df.groupby(["A"]).apply(total_calc)
</code></pre>
<p>Any ideas?</p>
| 1 | 2016-10-04T18:31:38Z | 39,859,822 | <p>I am unsure of your question, so I'll guess. First off, I don't like to work with integer values, so let's convert your df to float:</p>
<pre><code>df = df.astype(float)
</code></pre>
<p>If you want to divide each element of column A by the sum of column A (and likewise for the other columns), you could do this:</p>
<pre><code>df.div(df.sum(axis=0), axis=1)
Out[24]:
A B C
2016-09-24 0.000000 0.000000 0.000000
2016-09-25 0.022222 0.022222 0.022222
2016-09-26 0.044444 0.044444 0.044444
2016-09-27 0.066667 0.066667 0.066667
2016-09-28 0.088889 0.088889 0.088889
2016-09-29 0.111111 0.111111 0.111111
2016-09-30 0.133333 0.133333 0.133333
2016-10-01 0.155556 0.155556 0.155556
2016-10-02 0.177778 0.177778 0.177778
2016-10-03 0.200000 0.200000 0.200000
</code></pre>
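As a self-contained sanity check of that <code>div</code> pattern — the frame and numbers below are made up, not the question's data:

```python
import pandas as pd

# A tiny stand-in for the question's df (hypothetical values).
df = pd.DataFrame({'A': [1.0, 3.0], 'B': [2.0, 2.0]})

# Divide every element by its column total, as in the answer above.
normalized = df.div(df.sum(axis=0), axis=1)

print(normalized['A'].tolist())  # [0.25, 0.75]
```

After this, every column of <code>normalized</code> sums to 1, which is the "value divided by the group total" the question asks for.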
| 1 | 2016-10-04T18:59:18Z | [
"python",
"pandas",
"aggregation"
]
|
Get grouped Dictionary list from a file that has a time and errors then plot the time differences in python | 39,859,392 | <p>I have this file as below:</p>
<pre><code>Date;Time;Task;Error_Line;Error_Message
03-13-15;08:2123:10;C:LOGINMAN;01073;Web Login Successful from IP Address xxx.xxx.x.xx
03-13-15;05:23:1235;B:VDOM;0906123;Port 123 Device 1012300 Remote 1 1012301 Link Up RP2009
03-13-15;05:23:123123;A:VCOM;0906123;Port 123 Device 1012300 Remote 1 1012301 Link Up RP2009
03-13-15;05:23:123123;B:VDOM;1312325;Port 123 Device 1012300 Remote 1 1012301 Receive Time Error: 2123666 23270 1396 69
03-13-15;05:23:1233;B:VDOM;13372;Port 123 Device 1012300 Remote 1 1012301 Send Time Error: 123123123 1888 1123123123 69
03-13-15;05:23:1233;A:VCOM;1312325;Port 123 Device 1012300 Remote 1 1012301 Receive Time Error: 2123666 23270 1396 69
03-13-15;05:23:1233;A:VCOM;13372;Port 123 Device 1012300 Remote 1 1012301 Send Time Error: 123123123 1888 1123123123 69
03-13-15;05:21:56;B:VDOM;07270;Port 123 Device 1012300 Remote 1 1012301 AT Timer Expired
03-13-15;05:21:56;A:VCOM;07270;Port 123 Device 1012300 Remote 1 1012301 AT Timer Expired
</code></pre>
<p>The desired output should be like that:</p>
<pre><code>D = {'Error_Line1': [Time1, Time2, ...], 'Error_Line2': [Time1, Time2, ...], ...}
</code></pre>
<p>I am looking to plot the differences between times based on Error_Line. Each Error_Line in my file occurs multiple times. I want to group the times according to Error_Line. I have no idea if that works for plotting time.</p>
| 0 | 2016-10-04T18:32:54Z | 39,859,985 | <p>As far as grouping by Error_Line goes, this should do the trick:</p>
<pre><code>import csv
D = {}
with open('logfile') as f:
    reader = csv.DictReader(f, delimiter=';')
    for row in reader:
        el = row['Error_Line']
        if el not in D:
            D[el] = []  # Initiate an empty list
        D[el].append(row['Time'])
</code></pre>
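A quick runnable sketch of the same approach, using an in-memory stand-in for the log file (the two sample rows below are invented):

```python
import csv
import io

# Two hypothetical log lines in the question's Date;Time;Task;Error_Line;Error_Message layout.
sample = (
    "Date;Time;Task;Error_Line;Error_Message\n"
    "03-13-15;05:21:56;B:VDOM;07270;AT Timer Expired\n"
    "03-13-15;05:21:57;A:VCOM;07270;AT Timer Expired\n"
)

D = {}
reader = csv.DictReader(io.StringIO(sample), delimiter=';')
for row in reader:
    # setdefault collapses the "if el not in D" dance into one call
    D.setdefault(row['Error_Line'], []).append(row['Time'])

print(D)  # {'07270': ['05:21:56', '05:21:57']}
```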
| 1 | 2016-10-04T19:10:14Z | [
"python",
"numpy",
"matplotlib",
"plot"
]
|
Get grouped Dictionary list from a file that has a time and errors then plot the time differences in python | 39,859,392 | <p>I have this file as below:</p>
<pre><code>Date;Time;Task;Error_Line;Error_Message
03-13-15;08:2123:10;C:LOGINMAN;01073;Web Login Successful from IP Address xxx.xxx.x.xx
03-13-15;05:23:1235;B:VDOM;0906123;Port 123 Device 1012300 Remote 1 1012301 Link Up RP2009
03-13-15;05:23:123123;A:VCOM;0906123;Port 123 Device 1012300 Remote 1 1012301 Link Up RP2009
03-13-15;05:23:123123;B:VDOM;1312325;Port 123 Device 1012300 Remote 1 1012301 Receive Time Error: 2123666 23270 1396 69
03-13-15;05:23:1233;B:VDOM;13372;Port 123 Device 1012300 Remote 1 1012301 Send Time Error: 123123123 1888 1123123123 69
03-13-15;05:23:1233;A:VCOM;1312325;Port 123 Device 1012300 Remote 1 1012301 Receive Time Error: 2123666 23270 1396 69
03-13-15;05:23:1233;A:VCOM;13372;Port 123 Device 1012300 Remote 1 1012301 Send Time Error: 123123123 1888 1123123123 69
03-13-15;05:21:56;B:VDOM;07270;Port 123 Device 1012300 Remote 1 1012301 AT Timer Expired
03-13-15;05:21:56;A:VCOM;07270;Port 123 Device 1012300 Remote 1 1012301 AT Timer Expired
</code></pre>
<p>The desired output should be like that:</p>
<pre><code>D = {'Error_Line1': [Time1, Time2, ...], 'Error_Line2': [Time1, Time2, ...], ...}
</code></pre>
<p>I am looking to plot the differences between times based on Error_Line. Each Error_Line in my file occurs multiple times. I want to group the times according to Error_Line. I have no idea if that works for plotting time.</p>
| 0 | 2016-10-04T18:32:54Z | 39,860,357 | <p>I won't touch the plotting because there are multiple ways of displaying the data and I don't know what style you're looking for. Do you want to have separate graphs for each Error_Line? Each Error_Line's datapoints represented on one graph? Some other way of comparing times and errors (e.g. mean of each Error_Line's times plotted against each other, variance, yadda yadda)?</p>
<p>Getting that info into a dict, however, will involve getting each line, splitting it with the semicolon as the delimiter, and picking the pieces out that you want. Personally I'd do this as such:</p>
<pre><code>from collections import defaultdict
ourdata = defaultdict(list)
with open('stackoverflow.txt') as ourfile:
    for row in ourfile:
        datatoadd = row.split(';')[1:4:2]
        ourdata[datatoadd[1]].append(datatoadd[0])
</code></pre>
<p>As far as those timestamps go, they're currently strings. You'll also need to convert them (doing it within the append statement would handle it all at once) to the data type you need (e.g. numpy's <a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow">datetimes</a>, which allow for arithmetic).</p>
<p>For more information on what's going on here, see: <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a>, <a href="http://stackoverflow.com/a/1369553/6758601">with</a>, <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow">str.split()</a>, <a href="http://stackoverflow.com/a/509295/6758601">extended slice notation</a></p>
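To make the times usable for computing differences, one hedged sketch is to parse them with the standard library; the two times below are hypothetical, and the real log's garbled time fields would need their own cleanup first:

```python
import datetime as dt

# Hypothetical times pulled from one Error_Line's list.
times = ['05:21:56', '05:23:10']

# Parse the HH:MM:SS strings so subtraction becomes possible.
parsed = [dt.datetime.strptime(t, '%H:%M:%S') for t in times]
delta = parsed[1] - parsed[0]

print(delta.total_seconds())  # 74.0
```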
| 1 | 2016-10-04T19:32:46Z | [
"python",
"numpy",
"matplotlib",
"plot"
]
|
moving the location of y labels with matplotlib | 39,859,446 | <p>How can I move the location of y labels?</p>
<p><a href="http://i.stack.imgur.com/bIhOP.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/bIhOP.jpg" alt="enter image description here"></a></p>
<p>And this is the zoomed version:</p>
<p><a href="http://i.stack.imgur.com/jkHHw.png" rel="nofollow"><img src="http://i.stack.imgur.com/jkHHw.png" alt="enter image description here"></a></p>
<p>The idea is to move the righthandside labels further to the right, because where they currently are is hard to see the last data point of each of the time series (in this case I circled in green where the last data point is).
Tried <code>set_label_coords()</code>, but didn't accomplish anything. </p>
| 0 | 2016-10-04T18:36:13Z | 39,866,712 | <p>Normally, the data does not run flush against the axis frame. This might however differ for an axis with dates. Since your data goes up to the 3rd of October, while in the non-zoomed version your last point on the x axis is the 1st of October, you should simply plot your data until the 3rd of October or beyond. </p>
<p>The axis limits can be set using the <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_xlim" rel="nofollow"><code>ax.set_xlim()</code></a> command.</p>
<p>Also see <a href="http://stackoverflow.com/questions/21423158/how-do-i-change-the-range-of-the-x-axis-with-datetimes-in-matplotlib">How do I change the range of the x-axis with datetimes in MatPlotLib?</a></p>
| 0 | 2016-10-05T06:20:16Z | [
"python",
"matplotlib"
]
|
moving the location of y labels with matplotlib | 39,859,446 | <p>How can I move the location of y labels?</p>
<p><a href="http://i.stack.imgur.com/bIhOP.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/bIhOP.jpg" alt="enter image description here"></a></p>
<p>And this is the zoomed version:</p>
<p><a href="http://i.stack.imgur.com/jkHHw.png" rel="nofollow"><img src="http://i.stack.imgur.com/jkHHw.png" alt="enter image description here"></a></p>
<p>The idea is to move the righthandside labels further to the right, because where they currently are is hard to see the last data point of each of the time series (in this case I circled in green where the last data point is).
Tried <code>set_label_coords()</code>, but didn't accomplish anything. </p>
| 0 | 2016-10-04T18:36:13Z | 39,873,309 | <p>Showing how I did it in the end, after @ImportanceOfBeingErnest pointed me in the right direction:</p>
<p>1) created an extra date that is actually later than the last day of my data</p>
<pre><code>extradate = pd.Timestamp(dt.datetime(df.index[-1].year,df.index[-1].month,df.index[-1].day + 5))
</code></pre>
<p>2) use that extradate in: </p>
<pre><code>ax.set_xlim(df.index[-100],extradate)
</code></pre>
<p>(the -100 is just to plot the last 100 data points). </p>
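One caveat worth noting: computing the extra date from <code>df.index[-1].day + 5</code> raises <code>ValueError</code> near the end of a month (e.g. day 30 + 5 = 35); timedelta arithmetic rolls over correctly. A small sketch with a hypothetical last date:

```python
import datetime as dt

# Suppose the last index entry falls near a month's end.
last = dt.datetime(2016, 10, 30)

# dt.datetime(year, month, day + 5) would fail here (day 35);
# adding a timedelta rolls into the next month instead.
extradate = last + dt.timedelta(days=5)

print(extradate.date())  # 2016-11-04
```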
| 0 | 2016-10-05T11:57:03Z | [
"python",
"matplotlib"
]
|
Extract specific information from XML (Google Distance Matrix API) | 39,859,482 | <p>Here is my XML output:</p>
<pre><code><DistanceMatrixResponse>
<status>OK</status>
<origin_address>
868-978 Middle Tennessee Blvd, Murfreesboro, TN 37130, USA
</origin_address>
<destination_address>
980-1060 Middle Tennessee Blvd, Murfreesboro, TN 37130, USA
</destination_address>
<row>
<element>
<status>OK</status>
<duration>
<value>19</value>
<text>1 min</text>
</duration>
<distance>
<value>154</value>
<text>0.1 mi</text>
</distance>
</element>
</row>
</DistanceMatrixResponse>
</code></pre>
<p>I am trying to use Python to save this XML from the web locally (this part is complete). After the file is saved, I want to extract the 'duration' value (of 19, in this case) and the 'distance' value (of 154, in this case). </p>
<p>I just can't seem to figure out how to read and extract the necessary information from this XML. I have tried working with ElementTree and implementing other solutions from Stack Overflow, with no luck. I'm about 3 hours into what should be a quick process.</p>
<p>Here is my code as it sits now:</p>
<pre><code>import urllib2
import xml.etree.ElementTree as ET
## import XML and save it out
url = "https://maps.googleapis.com/maps/api/distancematrix/xml?units=imperial&origins=35.827581,-86.394077&destinations=35.827398,-86.392381&key=mygooglemapsAPIkey"
s = urllib2.urlopen(url)
contents = s.read()
file = open("export.xml", 'w')
file.write(contents)
file.close()
## finish saving the XML
element_tree = ET.parse("export.xml")
root = element_tree.getroot()
agreement = root.find("duration").text
print agreement
## open XML and save out travel time in seconds
xmlfile = 'export.xml'
element_tree = ET.parse(xmlfile)
root = element_tree.getroot()
agreement = root.findall("duration").text
print agreement
</code></pre>
<p>The current error message is: AttributeError: 'NoneType' object has no attribute 'text'</p>
<p>I know the code is incomplete to grab both the duration and distance, but I am just trying to get something working at this point!</p>
| 0 | 2016-10-04T18:39:21Z | 39,860,297 | <p>Just use <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#example" rel="nofollow">XPath queries</a>:</p>
<pre><code>duration = tree.find('.//duration/value').text
distance = tree.find('.//distance/value').text
</code></pre>
<p>Here's a nice XPath tutorial: <a href="http://zvon.org/comp/r/tut-XPath_1.html" rel="nofollow">http://zvon.org/comp/r/tut-XPath_1.html</a>.</p>
| 0 | 2016-10-04T19:28:53Z | [
"python",
"xml",
"elementtree"
]
|
How to update a subset of 2D tensor in Tensorflow? | 39,859,516 | <p>I want to update an index in a 2D tensor with the value 0. So data is a 2D tensor whose value at the 2nd row, 2nd column is to be replaced by 0. However, I am getting a type error. Can anyone help me with it?</p>
<blockquote>
<p>TypeError: Input 'ref' of 'ScatterUpdate' Op requires l-value input</p>
</blockquote>
<pre><code>data = tf.Variable([[1,2,3,4,5], [6,7,8,9,0], [1,2,3,4,5]])
data2 = tf.reshape(data, [-1])
sparse_update = tf.scatter_update(data2, tf.constant([7]), tf.constant([0]))
#data = tf.reshape(data, [N,S])
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run([init_op])
print "Values before:", sess.run([data])
#sess.run([updated_data_subset])
print "Values after:", sess.run([sparse_update])
</code></pre>
| 0 | 2016-10-04T18:41:12Z | 39,860,467 | <p>Scatter update only works on variables. Instead, try this pattern:
<code>a = tf.concat(0, [a[:i], [updated_value], a[i+1:]])</code></p>
| 0 | 2016-10-04T19:39:50Z | [
"python",
"neural-network",
"tensorflow",
"deep-learning"
]
|
How to update a subset of 2D tensor in Tensorflow? | 39,859,516 | <p>I want to update an index in a 2D tensor with the value 0. So data is a 2D tensor whose value at the 2nd row, 2nd column is to be replaced by 0. However, I am getting a type error. Can anyone help me with it?</p>
<blockquote>
<p>TypeError: Input 'ref' of 'ScatterUpdate' Op requires l-value input</p>
</blockquote>
<pre><code>data = tf.Variable([[1,2,3,4,5], [6,7,8,9,0], [1,2,3,4,5]])
data2 = tf.reshape(data, [-1])
sparse_update = tf.scatter_update(data2, tf.constant([7]), tf.constant([0]))
#data = tf.reshape(data, [N,S])
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run([init_op])
print "Values before:", sess.run([data])
#sess.run([updated_data_subset])
print "Values after:", sess.run([sparse_update])
</code></pre>
| 0 | 2016-10-04T18:41:12Z | 39,959,911 | <p><code>tf.scatter_update</code> could only be applied to <code>Variable</code> type. <code>data</code> in your code IS a <code>Variable</code>, while <code>data2</code> IS NOT, because the return type of <code>tf.reshape</code> is <code>Tensor</code>.</p>
<p>Solution:</p>
<pre><code>data = tf.Variable([[1,2,3,4,5], [6,7,8,9,0], [1,2,3,4,5]])
row = tf.gather(data, 2)
new_row = tf.concat(0, [row[:2], tf.constant([0]), row[3:]])
sparse_update = tf.scatter_update(data, tf.constant(2), new_row)
</code></pre>
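The gather/splice/scatter idea is easier to see on plain Python lists; this sketch mirrors the three steps above with ordinary lists instead of tensors (the TF API quoted here is the old 0.x <code>tf.concat(dim, values)</code> form):

```python
# Plain-Python sketch of the rebuild-one-row pattern:
data = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 0], [1, 2, 3, 4, 5]]

row = data[2]                      # "gather" row 2
new_row = row[:2] + [0] + row[3:]  # splice a 0 into column 2
data[2] = new_row                  # "scatter" the rebuilt row back

print(data[2])  # [1, 2, 0, 4, 5]
```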
| 1 | 2016-10-10T13:51:47Z | [
"python",
"neural-network",
"tensorflow",
"deep-learning"
]
|
List KeyError Python | 39,859,562 | <p>I'm trying to append a list with values from the Glassdoor API.</p>
<p>When I get a response back from this API, I get info such as the name of the company, ratings, CEO, a bunch more info, and lastly if the company is owned by a parent company, I get that too.</p>
<p>My problem is that when I append my list with all this info, if the company I'm getting a response for doesn't have a parent company, I don't want it to skip extracting the other relevant data like name, CEO, etc. I want it to print out the available data for that company's response, and print N/A where the parent company would be.</p>
<p>Each company I get a response from the API may have a different length/vary in the available data.</p>
<p>For example:</p>
<pre><code>comp_info.append(data['response']['employers'][0]['name'])
</code></pre>
<p>This is what I'm trying to achieve: Apple doesn't have a parent company, while LSI Corporation does. I'm not sure how to approach this problem.</p>
<p>[APPLE, Tim Cook, 4.5, N/A, Computer Hardware]
[LSI Corporation, Some Guy, 4.6, Avago Technologies, Computer Hardware]</p>
| 0 | 2016-10-04T18:43:52Z | 39,859,615 | <p>I am not sure if I fully understand your question, but Python has a concept of "easier to ask forgiveness than permission" that might be helpful here:</p>
<pre><code>try:
    comp_info.append(data['response']['employers'][0]['name'])
except KeyError:
    comp_info.append("N/A")
    # or print("N/A")
</code></pre>
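A tiny self-contained illustration of the same pattern, with a made-up employer record that lacks the parent-company key:

```python
# Hypothetical employer record with no parent company.
employer = {'name': 'APPLE', 'ceo': 'Tim Cook'}

comp_info = []
try:
    comp_info.append(employer['parentEmployer'])  # key is absent here
except KeyError:
    comp_info.append("N/A")                       # fall back instead of crashing

print(comp_info)  # ['N/A']
```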
<p>Please clarify your question if you are looking for different handling than this.</p>
| 0 | 2016-10-04T18:47:26Z | [
"python",
"json",
"for-loop",
"keyerror"
]
|
List KeyError Python | 39,859,562 | <p>I'm trying to append a list with values from the Glassdoor API.</p>
<p>When I get a response back from this API, I get info such as the name of the company, ratings, CEO, a bunch more info, and lastly if the company is owned by a parent company, I get that too.</p>
<p>My problem is that when I append my list with all this info, if the company I'm getting a response for doesn't have a parent company, I don't want it to skip extracting the other relevant data like name, CEO, etc. I want it to print out the available data for that company's response, and print N/A where the parent company would be.</p>
<p>Each company I get a response from the API may have a different length/vary in the available data.</p>
<p>For example:</p>
<pre><code>comp_info.append(data['response']['employers'][0]['name'])
</code></pre>
<p>This is what I'm trying to achieve: Apple doesn't have a parent company, while LSI Corporation does. I'm not sure how to approach this problem.</p>
<p>[APPLE, Tim Cook, 4.5, N/A, Computer Hardware]
[LSI Corporation, Some Guy, 4.6, Avago Technologies, Computer Hardware]</p>
| 0 | 2016-10-04T18:43:52Z | 39,859,871 | <p>If I understand you correctly:</p>
<pre><code>comp_info.append(data['response']['employers'][0].get('name', 'N/A'))
</code></pre>
| 0 | 2016-10-04T19:02:18Z | [
"python",
"json",
"for-loop",
"keyerror"
]
|
List KeyError Python | 39,859,562 | <p>I'm trying to append a list with values from the Glassdoor API.</p>
<p>When I get a response back from this API, I get info such as the name of the company, ratings, CEO, a bunch more info, and lastly if the company is owned by a parent company, I get that too.</p>
<p>My problem is that when I append my list with all this info, if the company I'm getting a response for doesn't have a parent company, I don't want it to skip extracting the other relevant data like name, CEO, etc. I want it to print out the available data for that company's response, and print N/A where the parent company would be.</p>
<p>Each company I get a response from the API may have a different length/vary in the available data.</p>
<p>For example:</p>
<pre><code>comp_info.append(data['response']['employers'][0]['name'])
</code></pre>
<p>This is what I'm trying to achieve: Apple doesn't have a parent company, while LSI Corporation does. I'm not sure how to approach this problem.</p>
<p>[APPLE, Tim Cook, 4.5, N/A, Computer Hardware]
[LSI Corporation, Some Guy, 4.6, Avago Technologies, Computer Hardware]</p>
| 0 | 2016-10-04T18:43:52Z | 39,862,871 | <pre><code>comp_info.append(data['response']['employers'][0].get('parentEmployer', 'N/A'))
</code></pre>
| 0 | 2016-10-04T22:42:11Z | [
"python",
"json",
"for-loop",
"keyerror"
]
|
How can I rewrite this list comprehension as a for loop? | 39,859,566 | <p>How can I rewrite this using nested for loops instead of a list comprehension? </p>
<pre><code>final = [[]]
for i in array_list:
    final.extend([sublist + [i] for sublist in final])
return final
</code></pre>
| 0 | 2016-10-04T18:44:08Z | 39,859,780 | <p>Your solution looks to be a very good <code>for</code> loop one. A one-liner using <code>itertools</code> is possible, but ugly:</p>
<pre><code>list(itertools.chain(list(itertools.combinations(arr, i)) for i in range(len(arr) + 1)))
</code></pre>
<p>EDIT:</p>
<p>Prettier:</p>
<pre><code>list(itertools.chain(*[itertools.combinations(arr, i) for i in range(len(arr)+1)]))
</code></pre>
| 0 | 2016-10-04T18:56:53Z | [
"python"
]
|
How can I rewrite this list comprehension as a for loop? | 39,859,566 | <p>How can I rewrite this using nested for loops instead of a list comprehension? </p>
<pre><code>final = [[]]
for i in array_list:
    final.extend([sublist + [i] for sublist in final])
return final
</code></pre>
| 0 | 2016-10-04T18:44:08Z | 39,859,786 | <p>If you try to iterate over final as you extend it, it creates an infinite loop, because every time you go to the next element, you add another element, so you never reach the end of the list. </p>
<p>If you want to do the inner loop as a for loop instead of a list comprehension, you need to iterate over a copy of final. </p>
<pre><code>final = [[]]
for i in [1, 2, 3]:
    for sublist in final[:]:
        final.extend([sublist + [i]])
</code></pre>
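As a quick check, this loop builds the full powerset of the input; a sketch with the result verified:

```python
final = [[]]
for i in [1, 2, 3]:
    # final[:] is a snapshot copy, so extending final does not loop forever
    for sublist in final[:]:
        final.extend([sublist + [i]])

# All 2**3 = 8 subsets of [1, 2, 3] are produced.
print(len(final))  # 8
```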
| 2 | 2016-10-04T18:57:03Z | [
"python"
]
|
How can I rewrite this list comprehension as a for loop? | 39,859,566 | <p>How can I rewrite this using nested for loops instead of a list comprehension? </p>
<pre><code>final = [[]]
for i in array_list:
    final.extend([sublist + [i] for sublist in final])
return final
</code></pre>
| 0 | 2016-10-04T18:44:08Z | 39,860,626 | <p>This code also seems to give the same results as your code. </p>
<pre><code>final = [[]]
for i in array_list:
    for sublist in list(final):
        final.extend([sublist + [i]])
return final
</code></pre>
<p>It seems like your code combines the elements of the previous iterations with the element of the current iteration (see 'Example' below). In order to do this with a traditional for loop, you need to prevent the list that is being looped over from being updated while looping. I do this by breaking the link with the 'final' list variable. This can be done with the list() function or by slicing it, as proposed in Morgan's answer.</p>
<p><strong>Example</strong></p>
<p>For the array_list of [1,2,3] you get the following. Each line (except for the lines) is a new element of the 'final' list variable.</p>
<pre><code>[]
--------------------------------
1 > array_list[0]
--------------------------------
2 > array_list[1]
1, 2 > array_list[0] + array_list[1]
--------------------------------
3 > array_list[2]
1, 3 > array_list[0] + array_list[2]
2, 3 > array_list[1] + array_list[2]
1, 2, 3 > array_list[0] + array_list[1] + array_list[2]
</code></pre>
| 0 | 2016-10-04T19:49:21Z | [
"python"
]
|
Python Regex handling dot character | 39,859,662 | <p>While using regex in Python I came across a scenario:
if a string has operators, I want to add a space before and after each operator.</p>
<pre><code>s = 'H>=ll<=o=wo+rl-d.my name!'
op = 'H >= ll <= o = wo + rl - d.my name!'
</code></pre>
<p>seemed pretty straight forward, so I came up with the following expression:</p>
<pre><code>re.sub(r'((<=)|(>=)|[+-=*/])+',' \\1 ',r'H>=ll<=o=wo+rl-d.myname!')
</code></pre>
<p>but the result I am getting using this is :</p>
<pre><code>'H >= ll <= o = wo + rl - d . my name!'
</code></pre>
<p>It's adding a space after every dot (.) encountered, even though I haven't mentioned it in my regex.</p>
<p>I am using python 2.7 and would really appreciate if you can shed some light on this.</p>
| 1 | 2016-10-04T18:50:10Z | 39,859,758 | <p>The reason for the spaces around the dot is <code>-</code>. Concrete it is <code>[+-=]</code>, which is a character class with characters from <code>+</code> until <code>=</code>, which includes <code>.</code>.</p>
<p>To avoid this, you must escape <code>-</code> with <code>\-</code>, e.g. </p>
<pre><code>re.sub(r'((<=)|(>=)|[+\-=*/])+',' \\1 ',r'H>=ll<=o=wo+rl-d.myname!')
</code></pre>
<hr>
<p>As @LaurentLAPORTE mentioned, you can also put <code>-</code> at the beginning or the end of a character class, e.g. <code>[-+=*/]</code> or <code>[+=*/-]</code> will both do the trick.</p>
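With the hyphen escaped, a quick check on the question's string shows the dot is now left alone:

```python
import re

s = 'H>=ll<=o=wo+rl-d.myname!'
# Same pattern as above, with the hyphen escaped inside the class.
out = re.sub(r'((<=)|(>=)|[+\-=*/])+', r' \1 ', s)

print(out)  # 'H >= ll <= o = wo + rl - d.myname!'
```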
| 7 | 2016-10-04T18:55:42Z | [
"python",
"regex"
]
|
Python Regex handling dot character | 39,859,662 | <p>While using regex in Python I came across a scenario:
if a string has operators, I want to add a space before and after each operator.</p>
<pre><code>s = 'H>=ll<=o=wo+rl-d.my name!'
op = 'H >= ll <= o = wo + rl - d.my name!'
</code></pre>
<p>seemed pretty straight forward, so I came up with the following expression:</p>
<pre><code>re.sub(r'((<=)|(>=)|[+-=*/])+',' \\1 ',r'H>=ll<=o=wo+rl-d.myname!')
</code></pre>
<p>but the result I am getting using this is :</p>
<pre><code>'H >= ll <= o = wo + rl - d . my name!'
</code></pre>
<p>It's adding a space after every dot (.) encountered, even though I haven't mentioned it in my regex.</p>
<p>I am using python 2.7 and would really appreciate if you can shed some light on this.</p>
| 1 | 2016-10-04T18:50:10Z | 39,859,777 | <p>So when you do a character class like:</p>
<pre><code>[+-=]
</code></pre>
<p>The regex reads that as any character between <code>+</code> (ASCII 43) and <code>=</code> (ASCII 61). It's similar to:</p>
<pre><code>[A-Z]
</code></pre>
<p>So you have to escape the <code>-</code>:</p>
<pre><code>r'((<=)|(>=)|[+\-=*/])+'
</code></pre>
<p>(Or put the <code>-</code> at the end as suggested in the comments: <code>[+=-]</code>)</p>
<p><a href="https://regex101.com/" rel="nofollow">Regex101</a> is very handy for analyzing regex patterns like this. You can see the problem with your pattern <a href="https://regex101.com/r/zDZAKf/1" rel="nofollow">here</a></p>
| 4 | 2016-10-04T18:56:44Z | [
"python",
"regex"
]
|
Python Regex handling dot character | 39,859,662 | <p>While using regex in Python I came across a scenario:
if a string has operators, I want to add a space before and after each operator.</p>
<pre><code>s = 'H>=ll<=o=wo+rl-d.my name!'
op = 'H >= ll <= o = wo + rl - d.my name!'
</code></pre>
<p>seemed pretty straight forward, so I came up with the following expression:</p>
<pre><code>re.sub(r'((<=)|(>=)|[+-=*/])+',' \\1 ',r'H>=ll<=o=wo+rl-d.myname!')
</code></pre>
<p>but the result I am getting using this is :</p>
<pre><code>'H >= ll <= o = wo + rl - d . my name!'
</code></pre>
<p>It's adding a space after every dot (.) encountered, even though I haven't mentioned it in my regex.</p>
<p>I am using python 2.7 and would really appreciate if you can shed some light on this.</p>
| 1 | 2016-10-04T18:50:10Z | 39,859,849 | <p>I was able to simplify this a little bit by using a negated set:</p>
<pre><code>import re
s = 'H>=ll<=o=wo+rl-d.my name!'
op = 'H >= ll <= o = wo + rl - d.my name!'
s = re.sub(r'([^a-zA-Z0-9.])+',' \\1 ',r'H>=ll<=o=wo+rl-d.myname!')
print (s)
</code></pre>
<p>Other commenters above mentioned that the reason this is happening is that the <code>-</code> wasn't working as you intended. </p>
| 0 | 2016-10-04T19:00:54Z | [
"python",
"regex"
]
|
Error in tf.contrib.learn Quickstart, no attribute named load_csv | 39,859,670 | <p>I am getting started with TensorFlow on OS X and installed the latest version following the guidelines for a pip installation using:</p>
<pre><code>echo $TF_BINARY_URL
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
</code></pre>
<p>Quick overview:</p>
<p>OS: OS X El Capitan version 10.11.6 (15G31)</p>
<p>Python: Python 2.7.12_1 installed with <code>brew install python</code></p>
<p>TensorFlow: 0.11.0rc0 from <code>import tensorflow as tf; print(tf.__version__)</code></p>
<p>I can run TensorFlow using:</p>
<pre><code>python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
>>> Hello, TensorFlow!
</code></pre>
<p>So TensorFlow is installed and running the basic commands.</p>
<p>But when I run the code for tf.contrib.learn Quickstart from here:
<a href="https://www.tensorflow.org/versions/r0.11/tutorials/tflearn/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.11/tutorials/tflearn/index.html</a></p>
<p>I get the following issue:</p>
<pre><code>Traceback (most recent call last):
File "tf_learn_quickstart.py", line 13, in <module>
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING,
AttributeError: 'module' object has no attribute 'load_csv'
</code></pre>
<p>I can't figure out what went wrong as everything else seems to be working fine. Any ideas what is wrong?</p>
| 1 | 2016-10-04T18:50:35Z | 39,861,351 | <p>This function has been deprecated: <a href="https://github.com/tensorflow/tensorflow/commit/2d4267507e312007a062a90df37997bca8019cfb" rel="nofollow">https://github.com/tensorflow/tensorflow/commit/2d4267507e312007a062a90df37997bca8019cfb</a></p>
<p>And the tutorial seems to be out of date. I believe you can simply replace load_csv with load_csv_with_header to get it to work.</p>
| 1 | 2016-10-04T20:38:54Z | [
"python",
"osx",
"python-2.7",
"tensorflow",
"osx-elcapitan"
]
|
creating teaming from user input | 39,859,761 | <p>I'm trying to create, on CentOS 7, a network teaming (bonding) configuration using user input in a Python script.</p>
<pre><code>import os
import socket
# IP Address
IPADDR = socket.gethostbyname(socket.gethostname())
print IPADDR
# Netmask
NETMASK = raw_input("Enter Netmask address: ")
PREFIX = sum([bin(int(x)).count("1") for x in NETMASK.split(".")])
print NETMASK
# Gateway
GATEWAY = raw_input("Enter Gateway address: ")
print GATEWAY
# Run command and substitution
os.system("nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "activebackup"}}'")
os.system("nmcli con mod team0 ipv4.addresses IPDDR/PREFIX")
os.system("nmcli con mod team0 ipv4.gateway GATEWAY")
os.system("nmcli con mod team0 connection.autoconnect yes")
os.system("nmcli con mod team0 ipv4.method manual")
os.system("nmcli con mod team0 ipv6.method ignore")
</code></pre>
<p>When I run the script I get this errors:</p>
<pre><code> File "team0.py", line 16
os.system("nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "activebackup"}}'")
^
SyntaxError: invalid syntax
</code></pre>
<p>Can someone help to find what I'm doing wrong.
Thanks</p>
| 0 | 2016-10-04T18:55:47Z | 39,859,831 | <p>Sure -- you terminated the string, as shown by the colouring of the text. It starts at <strong>"nmcli</strong>. Escape the inner double quotes with backslashes so they are treated as literal characters within the outer string.</p>
<pre><code>os.system("nmcli con add type team con-name team0 ifname team0 config '{\"runner\":
</code></pre>
<p>... and so on. Alternately, put the dictionary value into a variable and append it to the larger string later:</p>
<pre><code>my_dict = '{"runner": {"name": "activebackup"}}'
os.system("nmcli con add type team con-name team0 ifname team0 config '"
          + my_dict + "'")
</code></pre>
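A hedged alternative sketch: passing the arguments to <code>subprocess</code> as a list sidesteps shell quoting entirely, so the JSON string needs no escaping at all (the command is built but not executed here, since nmcli is a system tool):

```python
# The JSON config is handed to nmcli as a single untouched argument.
team_config = '{"runner": {"name": "activebackup"}}'
cmd = ["nmcli", "con", "add", "type", "team",
       "con-name", "team0", "ifname", "team0", "config", team_config]

# subprocess.call(cmd) would run it without involving a shell.
print(cmd[-1])  # {"runner": {"name": "activebackup"}}
```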
| 2 | 2016-10-04T18:59:54Z | [
"python"
]
|
creating teaming from user input | 39,859,761 | <p>I'm trying to create, on CentOS 7, a network teaming (bonding) configuration using user input in a Python script.</p>
<pre><code>import os
import socket
# IP Address
IPADDR = socket.gethostbyname(socket.gethostname())
print IPADDR
# Netmask
NETMASK = raw_input("Enter Netmask address: ")
PREFIX = sum([bin(int(x)).count("1") for x in NETMASK.split(".")])
print NETMASK
# Gateway
GATEWAY = raw_input("Enter Gateway address: ")
print GATEWAY
# Run command and substitution
os.system("nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "activebackup"}}'")
os.system("nmcli con mod team0 ipv4.addresses IPDDR/PREFIX")
os.system("nmcli con mod team0 ipv4.gateway GATEWAY")
os.system("nmcli con mod team0 connection.autoconnect yes")
os.system("nmcli con mod team0 ipv4.method manual")
os.system("nmcli con mod team0 ipv6.method ignore")
</code></pre>
<p>When I run the script I get this errors:</p>
<pre><code> File "team0.py", line 16
os.system("nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "activebackup"}}'")
^
SyntaxError: invalid syntax
</code></pre>
<p>Can someone help me find what I'm doing wrong?
Thanks</p>
| 0 | 2016-10-04T18:55:47Z | 39,859,853 | <p>The syntax error is occurring because you are not escaping the quote <code>"</code> characters. The interpreter thinks that the string has ended and then trips.</p>
<p>You can use a backslash <code>\</code> to escape the quote:</p>
<pre><code>os.system("nmcli con add type team con-name team0 ifname team0 config '{\"runner\":{\"name\": \"activebackup\"}}'")
</code></pre>
<p>In addition, please note that your variables are not substituted when their names simply appear inside string literals. You will need to build these commands with <a href="https://docs.python.org/2/library/string.html#string.Formatter.format" rel="nofollow"><code>format</code></a>, for example:</p>
<pre><code>os.system("nmcli con mod team0 ipv4.addresses {}/{}".format(IPADDR, PREFIX))
os.system("nmcli con mod team0 ipv4.gateway {}".format(GATEWAY))
</code></pre>
<p>See the above documentation and other SO questions for more info about how to use <code>format</code>.</p>
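<p>As an aside, a way to sidestep shell-quoting problems like this entirely is to build the command as an argument list and serialize the config dictionary with <code>json.dumps</code>, then run it with <code>subprocess</code> instead of <code>os.system</code>. A minimal sketch (Python 3 shown; the command is only printed here, not executed):</p>

```python
import json
import shlex
import subprocess

config = {"runner": {"name": "activebackup"}}
cmd = ["nmcli", "con", "add", "type", "team",
       "con-name", "team0", "ifname", "team0",
       "config", json.dumps(config)]

# subprocess.call(cmd) would run this with no manual quoting at all;
# shlex.quote shows what the equivalent shell-safe command line looks like
print(" ".join(shlex.quote(part) for part in cmd))
```

<p>Because the arguments never pass through a shell, no escaping of the embedded quotes is needed.</p>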
| 2 | 2016-10-04T19:00:58Z | [
"python"
]
|
Python regex to remove punctuation except from URLs and decimal numbers | 39,859,764 | <p>People,</p>
<p>I need a regex to remove punctuation from a string, but keep the accents and URLs. I also have to keep the mentions and hashtags from that string.</p>
<p>I tried the code below, but unfortunately it replaces the accented characters, and I want to keep the accents.</p>
<pre><code>import unicodedata
if __name__ == "__main__":
text = "Apenas um teste com acentuação. Para pontuação também! #python @stackoverflow http://xyhdhz.com.br"
text = unicodedata.normalize('NFKD', text).encode('ascii','ignore')
print text
</code></pre>
<p>The output for the following text <strong>"Apenas um teste com acentuação. Para pontuação também! #python @stackoverflow <a href="http://xyhdhz.com.br" rel="nofollow">http://xyhdhz.com.br</a>"</strong> should be <strong>"Apenas um teste com acentuação Para pontuação também #python @stackoverflow <a href="http://xyhdhz.com.br" rel="nofollow">http://xyhdhz.com.br</a>"</strong></p>
<p>How could I do that?</p>
| 0 | 2016-10-04T18:56:03Z | 39,860,030 | <p>You can use Python's <a href="https://docs.python.org/3/library/re.html#module-contents" rel="nofollow">regex module</a> and <code>re.sub()</code> to replace any characters you want to get rid of. You can either use a blacklist and replace all the characters you don't want, or use a whitelist of all the characters you want to allow and only keep those. </p>
<p>This will remove anything in the bracketed class of characters:</p>
<pre><code>import re
test = r'#test.43&^%à, è, ì, ò, ù, À, È, Ì, Ò, ÙÚz'
out = re.sub(r'[/.!$%^&*(),]', '', test)
print(out)
# Out: #test43à è ì ò ù À È Ì Ò ÙÚz
</code></pre>
<p>(tested with Python 3.5)</p>
<p>To keep URLs you will have to do a little more processing to check for that format (which is pretty varied). What kind of input/output are you looking for in that case?</p>
<p>edit: based on your added input example:</p>
<pre><code>test = "Apenas um teste com acentuação. Para pontuação também! #python @stackoverflow"
# Out: Apenas um teste com acentuação Para pontuação também #python @stackoverflow
</code></pre>
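<p>Since the asker wants mentions, hashtags and URLs kept intact, one sketch (Python 3, with a hypothetical helper name) is to split on whitespace and only strip punctuation from ordinary tokens; accented letters are not in <code>string.punctuation</code>, so the accents survive:</p>

```python
import string

def strip_punct_keep_tokens(text):
    # hypothetical helper: leave URLs, @mentions and #hashtags untouched,
    # strip ASCII punctuation (but not accented letters) from other tokens
    table = str.maketrans("", "", string.punctuation)
    kept = []
    for token in text.split():
        if token.startswith(("http://", "https://", "@", "#")):
            kept.append(token)
        else:
            kept.append(token.translate(table))
    return " ".join(kept)

print(strip_punct_keep_tokens(
    "Apenas um teste com acentuação. Para pontuação também! "
    "#python @stackoverflow http://xyhdhz.com.br"))
```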
| 0 | 2016-10-04T19:13:11Z | [
"python",
"regex",
"nltk"
]
|
How can I install pythonds module? | 39,859,834 | <p>when I run a programme containing:-</p>
<p>from pythonds.basic.stack import Stack</p>
<p>it says:-
ImportError: No module named pythonds.basic.stack</p>
<p>Please help me out.</p>
| 1 | 2016-10-04T19:00:01Z | 39,859,868 | <p><code>pip install pythonds</code>. </p>
<p>And then <code>from pythonds.basic.stack import Stack</code>. Note that it's <code>Stack</code>, not <code>stack</code>.</p>
| 3 | 2016-10-04T19:02:10Z | [
"python",
"data-structures",
"stack",
"python-module"
]
|
How can I install pythonds module? | 39,859,834 | <p>when I run a programme containing:-</p>
<p>from pythonds.basic.stack import Stack</p>
<p>it says:-
ImportError: No module named pythonds.basic.stack</p>
<p>Please help me out.</p>
| 1 | 2016-10-04T19:00:01Z | 39,860,364 | <p>If you do not have the Python path variable set, then type this into your command prompt:</p>
<p>C:\Python34\Scripts\pip install LIBRARY NAME</p>
<p>I can't remember if it is a forward slash or a backslash, but try either. Or substitute wherever you have Python installed.</p>
| 0 | 2016-10-04T19:33:28Z | [
"python",
"data-structures",
"stack",
"python-module"
]
|
Python - Split pdf by pages | 39,859,835 | <p>I am using <code>PyPDF2</code> to split a large <code>PDF</code> into pages. The problem is that this process is very slow.</p>
<p>This is the code i use:</p>
<pre><code>import os
from PyPDF2 import PdfFileWriter, PdfFileReader
with open(input_pdf_path, "rb") as input_file:
input_pdf = PdfFileReader(input_file)
directory = "%s/paging/" % os.path.dirname(input_pdf_path)
if not os.path.exists(directory):
os.makedirs(directory)
page_files = []
for i in range(0, input_pdf.numPages):
output = PdfFileWriter()
output.addPage(input_pdf.getPage(i))
file_name = "%s/#*#*#*##-%s.pdf" % (directory, i)
page_files.append(file_name)
with open(file_name, "wb") as outputStream:
output.write(outputStream)
</code></pre>
<p>Using this code it takes about 35 to 55 seconds to split a 177-page PDF. Is there a way I can improve this code? Is there any other library that is more suitable for this job?</p>
| 1 | 2016-10-04T19:00:03Z | 39,868,722 | <h3>Refactoring</h3>
<p>I have refactored the code like this:</p>
<pre><code>import os
import PyPDF2
def split_pdf_pages(input_pdf_path, target_dir, fname_fmt=u"{num_page:04d}.pdf"):
if not os.path.exists(target_dir):
os.makedirs(target_dir)
with open(input_pdf_path, "rb") as input_stream:
input_pdf = PyPDF2.PdfFileReader(input_stream)
if input_pdf.flattenedPages is None:
# flatten the file using getNumPages()
input_pdf.getNumPages() # or call input_pdf._flatten()
for num_page, page in enumerate(input_pdf.flattenedPages):
output = PyPDF2.PdfFileWriter()
output.addPage(page)
file_name = os.path.join(target_dir, fname_fmt.format(num_page=num_page))
with open(file_name, "wb") as output_stream:
output.write(output_stream)
</code></pre>
<p><em>note:</em> it's difficult to do better…</p>
<h3>Profiling</h3>
<p>With this <code>split_pdf_pages</code> function, you can do profiling:</p>
<pre><code>import cProfile
import pstats
import io
pdf_path = "path/to/file.pdf"
directory = os.path.join(os.path.dirname(pdf_path), "pages")
pr = cProfile.Profile()
pr.enable()
split_pdf_pages(pdf_path, directory)
pr.disable()
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats('cumulative')
ps.print_stats()
print(s.getvalue())
</code></pre>
<p>Run the profiling with your own PDF file, and analyse the result…</p>
<h3>Profiling result</h3>
<p>The profiling gave me this result:</p>
<pre><code> 159696614 function calls (155047949 primitive calls) in 57.818 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.899 0.899 57.818 57.818 $HOME/workspace/pypdf2_demo/src/pypdf2_demo/split_pdf_pages.py:14(split_pdf_pages)
2136 0.501 -.--- 53.851 0.025 $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/pdf.py:445(write)
103229/96616 1.113 -.--- 36.924 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:544(writeToStream)
27803 9.066 -.--- 25.381 0.001 $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:445(writeToStream)
4185807/2136 5.054 -.--- 14.635 0.007 $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/pdf.py:541(_sweepIndirectReferences)
50245/41562 0.117 -.--- 9.028 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/pdf.py:1584(getObject)
31421489 6.898 -.--- 8.193 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/utils.py:231(b_)
56779 2.070 -.--- 7.882 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:142(writeToStream)
8683 0.322 -.--- 7.020 0.001 $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/pdf.py:1531(_getObjectFromStream)
459978/20068 1.098 -.--- 6.490 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:54(readObject)
26517/19902 0.484 -.--- 6.360 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:553(readFromStream)
27803 3.893 -.--- 5.565 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:1162(encode_pdfdocencoding)
15735379 4.173 -.--- 5.412 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/utils.py:268(chr_)
3617738 2.105 -.--- 4.956 -.--- $HOME/virtualenv/py3-pypdf2_demo/lib/site-packages/PyPDF2/generic.py:265(writeToStream)
18882076 3.856 -.--- 3.856 -.--- {method 'write' of '_io.BufferedWriter' objects}
</code></pre>
<p>It appears that:</p>
<ul>
<li>The <code>writeToStream</code> function is heavily called, but I don't know how to optimize this.</li>
<li>The <code>write</code> method writes directly to the stream, not to memory => an optimisation is possible.</li>
</ul>
<h3>Improvement</h3>
<p>Serialize the PDF page in a buffer (in memory), then write the buffer to the file:</p>
<pre><code>buffer = io.BytesIO()
output.write(buffer)
with open(file_name, "wb") as output_stream:
output_stream.write(buffer.getvalue())
</code></pre>
<p>I processed the 2135 pages in 35 seconds instead of 40.</p>
<p>Poor optimization indeed :-(</p>
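<p>For reference, here is the buffer-then-flush pattern in isolation, as a toy sketch that writes plain bytes rather than a real PDF page:</p>

```python
import io
import os
import tempfile

# serialize into memory first, then write the whole buffer to disk in one call
buffer = io.BytesIO()
buffer.write(b"%PDF-1.4 fake page contents")

fd, path = tempfile.mkstemp(suffix=".pdf")
os.close(fd)
with open(path, "wb") as output_stream:
    output_stream.write(buffer.getvalue())

with open(path, "rb") as check:
    data = check.read()
os.remove(path)
print(len(data))  # 27
```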
| 1 | 2016-10-05T08:15:30Z | [
"python",
"python-3.x",
"pdf",
"pypdf",
"pypdf2"
]
|
Python - Split pdf by pages | 39,859,835 | <p>I am using <code>PyPDF2</code> to split a large <code>PDF</code> into pages. The problem is that this process is very slow.</p>
<p>This is the code i use:</p>
<pre><code>import os
from PyPDF2 import PdfFileWriter, PdfFileReader
with open(input_pdf_path, "rb") as input_file:
input_pdf = PdfFileReader(input_file)
directory = "%s/paging/" % os.path.dirname(input_pdf_path)
if not os.path.exists(directory):
os.makedirs(directory)
page_files = []
for i in range(0, input_pdf.numPages):
output = PdfFileWriter()
output.addPage(input_pdf.getPage(i))
file_name = "%s/#*#*#*##-%s.pdf" % (directory, i)
page_files.append(file_name)
with open(file_name, "wb") as outputStream:
output.write(outputStream)
</code></pre>
<p>Using this code it takes about 35 to 55 seconds to split a 177-page PDF. Is there a way I can improve this code? Is there any other library that is more suitable for this job?</p>
| 1 | 2016-10-04T19:00:03Z | 39,872,566 | <p>No amount of optimization made a real improvement, so I ended up using <code>pdftk</code>. I came across this <a href="http://linuxcommando.blogspot.co.il/2013/02/splitting-up-is-easy-for-pdf-file.html" rel="nofollow">page</a>, which explains really nicely how to split pages fast.</p>
<p><code>pdftk</code> is a command-line tool (and a graphical one) with some very nice options.</p>
<p>Installation:</p>
<pre><code> sudo apt-get update
sudo apt-get install pdftk
</code></pre>
<p>Usage with Python 3:</p>
<pre><code> from subprocess import Popen, PIPE

 process = Popen(['pdftk',
input_pdf_path,
'burst',
'output',
PdfSplitter.FILE_FORMAT + '%d.pdf'],
stdout=PIPE,
stderr=PIPE)
stdout, stderr = process.communicate()
</code></pre>
<p>With this tool I managed to split the 177-page PDF within <strong>2 seconds</strong>.</p>
| 0 | 2016-10-05T11:19:30Z | [
"python",
"python-3.x",
"pdf",
"pypdf",
"pypdf2"
]
|
How to use a variable from main in another function? | 39,860,055 | <p>In the following code, how do I make the variable 'guesses' available to me in the 'end' function? Whenever I try this I just receive "guesses is not defined". In the play function I am returning a number, and if I understand correctly then that number should be equal to 'guesses', yet for a reason I don't understand 'guesses' won't work in 'end'.</p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end()
def end():
print("Results: ")
print("Total: " + print(str(guesses + 1)))
</code></pre>
| -2 | 2016-10-04T19:15:12Z | 39,860,115 | <p>Pass it as a parameter</p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end(guesses)
def end(guesses):
print("Results: ")
print("Total: " + str(guesses + 1))
</code></pre>
<p>Passing inputs as parameters, and using <code>return</code> to pass variables out as outputs allows you to control the flow of data in your program and not use <code>global</code> variables as a crutch.</p>
| 5 | 2016-10-04T19:18:16Z | [
"python"
]
|
How to use a variable from main in another function? | 39,860,055 | <p>In the following code, how do I make the variable 'guesses' available to me in the 'end' function? Whenever I try this I just receive "guesses is not defined". In the play function I am returning a number, and if I understand correctly then that number should be equal to 'guesses', yet for a reason I don't understand 'guesses' won't work in 'end'.</p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end()
def end():
print("Results: ")
print("Total: " + print(str(guesses + 1)))
</code></pre>
| -2 | 2016-10-04T19:15:12Z | 39,860,144 | <p>Something like this: </p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end(guesses)
def end(guesses):
print("Results: ")
print("Total: {}".format(guesses + 1))
</code></pre>
| 2 | 2016-10-04T19:19:52Z | [
"python"
]
|
How to use a variable from main in another function? | 39,860,055 | <p>In the following code, how do I make the variable 'guesses' available to me in the 'end' function? Whenever I try this I just receive "guesses is not defined". In the play function I am returning a number, and if I understand correctly then that number should be equal to 'guesses', yet for a reason I don't understand 'guesses' won't work in 'end'.</p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end()
def end():
print("Results: ")
print("Total: " + print(str(guesses + 1)))
</code></pre>
| -2 | 2016-10-04T19:15:12Z | 39,860,164 | <p>Simply pass <code>guesses</code> as a parameter to <code>end()</code> </p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end(guesses)
def end(guesses):
print("Results: ")
print("Total: " + str(guesses + 1))
</code></pre>
<p>Or another option(although I don't recommend it unless you know what your doing), is to make guess global in main.</p>
<pre><code>guesses = None
def main():
global guesses
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end()
def end():
print("Results: ")
print("Total: " + str(guesses + 1))
</code></pre>
<p>And to note, i fixed your print statements. You don't need to use print twice to print data you passed into <code>print()</code>'s parameters.</p>
| 0 | 2016-10-04T19:20:56Z | [
"python"
]
|
How to use a variable from main in another function? | 39,860,055 | <p>In the following code, how do I make the variable 'guesses' available to me in the 'end' function? Whenever I try this I just receive "guesses is not defined". In the play function I am returning a number, and if I understand correctly then that number should be equal to 'guesses', yet for a reason I don't understand 'guesses' won't work in 'end'.</p>
<pre><code>def main():
guesses = play()
play_again = again()
while (play_again == True):
guesses = play()
play_again = again()
total_games = 1
total_games += 1
end()
def end():
print("Results: ")
print("Total: " + print(str(guesses + 1)))
</code></pre>
| -2 | 2016-10-04T19:15:12Z | 39,860,237 | <p><code>main()</code> and <code>end()</code> are two separate functions with separate scopes. You defined the variable <code>guesses</code> inside the function <code>main()</code>. It will not be available to <code>end()</code>, because the scope in which <code>end()</code> was defined did not have access to <code>guesses</code>. This is despite the fact that <code>end()</code> is called within <code>main()</code>. The inside of the function isn't aware that <code>guesses</code> exists when <code>end()</code> was created/defined.</p>
<p>The need to pass information between two functions as you are trying to do highlights the need for a very common programming paradigm regarding data flow. You can pass information into a function through the use of "parameters" or "arguments". These are variables that are defined or set when a function is called.</p>
<p>In python, they look like this:</p>
<pre><code>def function(argument):
#do something with argument
print (argument)
</code></pre>
| 2 | 2016-10-04T19:25:37Z | [
"python"
]
|
How to get these information in context of given below using beautiful soup python | 39,860,107 | <p>Going to <a href="https://www.couponraja.in" rel="nofollow">https://www.couponraja.in</a></p>
<p>I need to scrape data from the whole website. I wrote my code like this:</p>
<pre><code>wiki = 'https://www.couponraja.in'
page = urllib2.urlopen(wiki)
soup = BeautifulSoup(page, 'html.parser')
titles = soup.find_all('p', attrs={'class':'ofr-sml-descp-nw'})
for p in titles:
print 'title: ',p.text
divs = soup.select('div.deal-sub-nw')
for x in divs:
print 'Offer-Title:', x.select('p.ofr-sml-descp-nw')[0].text
print 'Offer-Image URL:', x.select('img')[0].attrs['src']
div1 = soup.select('div.m-sml-logo-nwcell')
for x in div1:
print 'Offer-Merchant:',x.select('a')[0].attrs['alt']
#for a in soup.find_all('a', href=True):
#print "Found the URL:", a['href']
#print(soup.prettify())
#text = soup.find('p',{'class':'ofr-sml-descp-nw'}) --> prints single statement
</code></pre>
<ol>
<li><p>OFFER TITLE </p></li>
<li><p>OFFER DESCRIPTION</p></li>
<li><p>OFFER TERMS AND CONDITIONS --> I want to print the pop-up, but I only get the text "Terms and Conditions", i.e. I can't print the pop-up frame that comes with it.</p></li>
<li><p>OFFER START DATE</p></li>
<li><p>OFFER END DATE</p></li>
<li><p>OFFER URL</p></li>
<li><p>OFFER MERCHANT</p></li>
<li><p>OFFER IMAGE URL (ANY ONE) </p></li>
<li><p>OFFER AREA (EXAMPLE NAME OF SUBURB) - APPLICABLE FOR RETAIL OFFERS </p></li>
<li><p>OFFER CITY - APPLICABLE FOR RETAIL OFFERS</p></li>
<li><p>OFFER PINCODE - APPLICABLE FOR RETAIL OFFERS</p></li>
<li><p>OFFER ADDRESS - APPLICABLE FOR RETAIL OFFERS</p></li>
<li><p>OFFER CATEGORY</p></li>
<li><p>OFFER BANK (ICICI Card, PAYTM etc) </p></li>
</ol>
<p>Please help me to get these, as I am a newbie and want to know how it works.
Thanks for the help.</p>
| -3 | 2016-10-04T19:17:54Z | 39,860,725 | <p>I don't really see anything wrong with your code, but the approach can be modified. If you're going to use a for loop for every item you are parsing, it is going to take a lot of code. If things on that page are being generated with JavaScript, then you can't use the raw HTML you get from a GET request; you'll probably have to use something like Selenium. Following your approach, I'd continue with something like this. Lastly, look into using regexes, since you know what you're looking for: instead of writing for loop after for loop, you can probably write a regex to get the items in a few passes. I'm thinking O(n^2) runtime, then again there still might be a better approach. That's the beauty of programming: if you get stuck, look at it differently.</p>
<pre><code>import json
import requests
import sys
from bs4 import BeautifulSoup
url = 'https://www.couponraja.in'
res = requests.get(url)
if res.status_code != 200:
print "Page Could not be accessed"
sys.exit(1)
data = {}
titles = []
descriptions = []
# add more lists as needed
soup = BeautifulSoup(res.content, 'html.parser')
t = soup.find_all('p', attrs={'class': 'ofr-sml-descp-nw'})
for p in t:
try:
titles.append(p.contents)
except AttributeError:
pass
# I'm assuming the two lists are the same length; you'll have to add your own checks
data = []
for i in xrange(len(titles)):
temp = {}
try:
temp['title'] = titles[i]
# add more fields
except IndexError:
pass
data.append(temp)
print json.dumps(data, indent=2)
</code></pre>
<p>You just need to keep at it. No one on here is going to parse that data for you... and you're not asking about a specific issue.</p>
<p>Given your comments below, this is how to extract the content from the 'rvwssec' class, which contains the words Terms and Conditions.</p>
<pre><code>terms = soup.find_all('div', attrs={'class': 'rvwssec'})
for term in terms:
print term.findNext('a').contents[0]
</code></pre>
<p>Now you'll have to convert it to a json yourself, you can store the data in a list like I showed above, convert those list into a single dictionary. Python Dictionaries and JSON are very similar. But to ensure that everything is proper json, you can call json.dumps(my_dictionary) to make the full conversion.</p>
| 0 | 2016-10-04T19:55:54Z | [
"python",
"html",
"beautifulsoup"
]
|
How to serialize a one to many relation in django-rest using Model serializer? | 39,860,114 | <p>These are my models and serializers. I want a representation of Question Model along with a list of people the question was asked to.</p>
<p>I am trying this:</p>
<pre><code>@api_view(['GET', 'PATCH'])
def questions_by_id(request,user,pk):
question = Question.objects.get(pk=pk)
if request.method == 'GET':
serializer = QuestionSerializer(question)
return Response(serializer.data)
</code></pre>
<p>But I get an empty dictionary (<code>{}</code>). However when I remove the <code>asked</code> field from <code>QuestionSerializer</code> I get a complete representation of <code>Question</code> along with <code>Places</code> serialized nicely. What am I missing ?</p>
<pre><code>class AskedToSerializer(serializers.ModelSerializer):
class Meta:
model = AskedTo
fields = ('to_user', 'answered')
class QuestionSerializer(serializers.ModelSerializer):
class Meta:
model = Question
places = PlaceSerializer(many=True, required=False)
asked = AskedToSerializer(source='askedto_set', many=True)
fields = ('id', 'created_on', 'title', 'places', 'answered','asked')
extra_kwargs = {'created_by': {'read_only': True}}
class Question(BaseModel):
title = models.CharField(max_length=200, null=False)
places = models.ManyToManyField(Place, blank=True)
answered = models.BooleanField(default=False)
class AskedTo(BaseModel):
ques = models.ForeignKey(Question, on_delete=models.CASCADE)
to_user = models.ForeignKey(settings.AUTH_USER_MODEL)
replied = models.BooleanField(default=False)
class Place(models.Model):
g_place_id = models.CharField(max_length=20,primary_key=True)
json = models.TextField(null=True)
name = models.CharField(max_length=40)
</code></pre>
| 0 | 2016-10-04T19:18:15Z | 39,860,737 | <p>I figured it out. There were two errors.</p>
<p>changed this to </p>
<pre><code>class AskedToSerializer(serializers.ModelSerializer):
class Meta:
model = AskedTo
fields = ('to_user', 'answered')
</code></pre>
<p>this (notice the change in fields, fields on model and serializer didn't match)</p>
<pre><code>class AskedToSerializer(serializers.ModelSerializer):
class Meta:
model = AskedTo
fields = ('to_user', 'replied')
</code></pre>
<p>Secondly I needed to define any extra fields outside <code>class Meta</code></p>
<pre><code>class QuestionSerializer(serializers.ModelSerializer):
places = PlaceSerializer(many=True, required=False)
asked = AskedToSerializer(source='askedto_set', many=True)
class Meta:
model = Question
fields = ('id', 'created_on', 'title', 'places', 'answered','asked')
extra_kwargs = {'created_by': {'read_only': True}}
</code></pre>
<p>Notice the change in definition of <code>places</code> and <code>asked</code> </p>
| 0 | 2016-10-04T19:56:42Z | [
"python",
"serialization",
"django-rest-framework",
"one-to-many",
"django-serializer"
]
|
Pandas Column differences, containing lists | 39,860,131 | <p>I have a data frame where the columns values are list and want to find the differences between two columns, or in other words I want to find all the elements in column A which is not there in column B.</p>
<pre><code>data={'NAME':['JOHN','MARY','CHARLIE'],
'A':[[1,2,3],[2,3,4],[3,4,5]],
'B':[[2,3,4],[3,4,5],[4,5,6]]}
df=pd.DataFrame(data)
df=df[['NAME','A','B']]
#I'm able to concatenate
df['C']=df['A']+df['B']
NAME A B C
0 JOHN [1, 2, 3] [2, 3, 4] [1, 2, 3, 2, 3, 4]
1 MARY [2, 3, 4] [3, 4, 5] [2, 3, 4, 3, 4, 5]
2 CHARLIE [3, 4, 5] [4, 5, 6] [3, 4, 5, 4, 5, 6]
</code></pre>
<p>Any way to find the differences?</p>
<pre><code>df['C']=df['A']-df['B']
</code></pre>
<p>I know we can apply a function with <code>df.apply</code>, but row-by-row processing will run slowly since I have around 400K rows. I'm looking for a straightforward method like</p>
<pre><code>df['C']=df['A']+df['B']
</code></pre>
| 0 | 2016-10-04T19:19:00Z | 39,860,356 | <p>For a set difference,</p>
<pre><code>df['A'].map(set) - df['B'].map(set)
</code></pre>
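<p>Note that <code>map(set)</code> converts each list to a set before subtracting, so the result column holds sets (element order and duplicates are lost). A quick sketch on a cut-down frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": [[1, 2, 3], [2, 3, 4]],
    "B": [[2, 3, 4], [3, 4, 5]],
})
# elementwise set difference: items in A that are not in B, row by row
df["C"] = df["A"].map(set) - df["B"].map(set)
print(df["C"].tolist())  # [{1}, {2}]
```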
| 1 | 2016-10-04T19:32:27Z | [
"python",
"list",
"pandas",
"dataframe"
]
|
How can I get subgroups of the match in Scala? | 39,860,158 | <p>I have the following in python:</p>
<pre><code> regex.sub(lambda t: t.group(1).replace(" ", " ") + t.group(2),string)
</code></pre>
<p>where <code>regex</code> is a Regular Expression and <code>string</code> is a filled String. </p>
<p>So I am trying to do the same in Scala, using the <code>regex.replaceAllIn(...)</code> function instead of Python's <code>sub</code>. However, I don't know how to get the subgroups that match.</p>
<p>Is there something similar to Python's <code>group</code> function in Scala?</p>
| 4 | 2016-10-04T19:20:37Z | 39,860,234 | <p>You can use <code>replaceAll</code>, and use <code>$n</code>, where "n" is the group you want to match. For example:</p>
<pre><code>yourString.replaceAll(yourRegex, "$1")
</code></pre>
<p>Replaces the matched parts with the first group.</p>
| 2 | 2016-10-04T19:25:32Z | [
"python",
"scala",
"grouping"
]
|
How can I get subgroups of the match in Scala? | 39,860,158 | <p>I have the following in python:</p>
<pre><code> regex.sub(lambda t: t.group(1).replace(" ", " ") + t.group(2),string)
</code></pre>
<p>where <code>regex</code> is a Regular Expression and <code>string</code> is a filled String. </p>
<p>So I am trying to do the same in Scala, using the <code>regex.replaceAllIn(...)</code> function instead of Python's <code>sub</code>. However, I don't know how to get the subgroups that match.</p>
<p>Is there something similar to Python's <code>group</code> function in Scala?</p>
| 4 | 2016-10-04T19:20:37Z | 39,861,056 | <p>Maybe other way to do this is having a <code>regex</code>, for example:</p>
<pre><code>val regExtractor = """a(b+)(c+)(d*)""".r
</code></pre>
<p>And then match the <code>String:</code></p>
<pre><code>val string = "abbbbbbbbbccdd"
val newString = string match {
case regExtractor(g1, g2, g3) =>
s"""String Replaced: ${g1.replace(g1, "XXXXX")},
| ${g2.replace(g2, "YYYYY")}""".stripMargin
}
</code></pre>
<p><code>newString</code> will be:</p>
<pre><code>"String Replaced: XXXXX, YYYYY"
</code></pre>
| 1 | 2016-10-04T20:18:52Z | [
"python",
"scala",
"grouping"
]
|
How can I get subgroups of the match in Scala? | 39,860,158 | <p>I have the following in python:</p>
<pre><code> regex.sub(lambda t: t.group(1).replace(" ", " ") + t.group(2),string)
</code></pre>
<p>where <code>regex</code> is a Regular Expression and <code>string</code> is a filled String. </p>
<p>So I am trying to do the same in Scala, using the <code>regex.replaceAllIn(...)</code> function instead of Python's <code>sub</code>. However, I don't know how to get the subgroups that match.</p>
<p>Is there something similar to Python's <code>group</code> function in Scala?</p>
| 4 | 2016-10-04T19:20:37Z | 39,862,503 | <p>The scaladoc has one example. Provide a function from <code>Match</code> instead of a string.</p>
<pre><code>scala> val r = "a(b)(c)+".r
r: scala.util.matching.Regex = a(b)(c)+
scala> val s = "123 abcccc and abcc"
s: String = 123 abcccc and abcc
scala> r.replaceAllIn(s, m => s"a${m.group(1).toUpperCase}${m.group(2)*3}")
res0: String = 123 aBccc and aBccc
</code></pre>
<p>The resulting string also does group substitution.</p>
<pre><code>scala> val r = "a(b)(c+)".r
r: scala.util.matching.Regex = a(b)(c+)
scala> r.replaceAllIn(s, m => if (m.group(2).length > 3) "$1" else "$2")
res3: String = 123 b and cc
scala> r.replaceAllIn(s, m => s"$$${ if (m.group(2).length > 3) 1 else 2 }")
res4: String = 123 b and cc
</code></pre>
| 3 | 2016-10-04T22:07:16Z | [
"python",
"scala",
"grouping"
]
|
Nose generate dynamically test with decorator | 39,860,189 | <p>I have a dynamic number of tests, so I want to use a for loop.
I tried something like:</p>
<pre><code>from nose.tools import istest, nottest
from nose.tools import eq_
import nose
nose.run()
@istest
def test_1():
for i in range(100):
@istest
def test_1_1():
eq_(randint(1,1),1)
---------------------
Ran 1 test in 0.001s
OK
</code></pre>
<p>But nose displays it as only one test. How can I expand it to 100 tests?
Thanks in advance.</p>
| 1 | 2016-10-04T19:22:56Z | 39,860,236 | <p>For data-driven tests in nose, check out <a href="https://pypi.python.org/pypi/nose-parameterized/" rel="nofollow"><code>nose_parameterized</code></a>.</p>
<p>Example usage:</p>
<pre><code>from nose_parameterized import parameterized
@parameterized.expand([(1, 1, 2), (2, 2, 4)])
def test_add(a, b, total):
    assert a + b == total
</code></pre>
<p>Here, two tests will be generated by the runner. It tests <code>1+1==2</code> and <code>2+2==4</code>. The decorator is also compatible with other test runners such as <code>unittest</code>. </p>
| 2 | 2016-10-04T19:25:37Z | [
"python",
"python-2.7",
"nose"
]
|
How can i create a dictionary from a dataframe without specifying column names in pandas? | 39,860,219 | <p>I am trying to convert a dataframe into a dictionary in pandas, and I ran into a problem. When I specify the column names I am able to convert the dataframe into a dictionary, but when I try to do it without specifying column names, just by column numbers, it throws an error.</p>
<p>Here is an example of my dataframe (df1)</p>
<pre><code> id Crub
0 At1NC000060_TBH_1 Not found
1 At1NC005280_TBH_1 Known_Gene_Sense
2 At1NC007770_TBH_1 Known_Gene_Sense
3 At1NC018710_TBH_1 Not found
4 At1NC027560_TBH_1 Known_Gene_Antisense
5 At1NC030450_TBH_1 Known_Gene_Sense
6 At1NC031370_TBH_1 Known_Gene_Antisense
7 At1NC035150_TBH_1 Known_Gene_Antisense
8 At1NC046250_TBH_1 Known_Gene_Sense
-------------
-----------
</code></pre>
<p>So when I tried to convert to a dictionary like below, it worked fine:</p>
<pre><code>dic1 = dict(zip(df.id,df.Crub))
print(dic1)
{'At3NC025390_TBH_1': 'Known_Gene_Antisense', 'At1NC064880_TBH_1': 'Not found', 'At1NC035150_TBH_1': 'Known_Gene_Antisense', 'At3NC044300_TBH_1': 'Known_Gene_Antisense', 'At2NC000610_TBH_1': 'Known_Gene_Antisense'}
</code></pre>
<p>However, when I tried with the column number, I got an error:</p>
<pre><code>col2 = list(df1)[1]
dic2 = dict(zip(df.id,df.col2))
print dic2
AttributeError: 'DataFrame' object has no attribute 'col2'
</code></pre>
<p>I don't know what am i doing wrong here.</p>
<p>PS: The reason I can't specify the column name is that I am inserting this code into another script, so I don't know the exact name of the 2nd column, but I definitely know that it is the second column.</p>
| 0 | 2016-10-04T19:24:29Z | 39,860,574 | <p>You cannot access a column by attribute like that if you don't know its name. If you only know the position, you can use</p>
<pre><code> dict(zip(df.id,df.iloc[:,1]))
</code></pre>
<p>This selects the second column as the values; if you want the first column, use <code>0</code> instead.</p>
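<p>A short runnable sketch of the positional approach (the frame contents and column names here are made-up examples):</p>

```python
import pandas as pd

# Build a small frame; the code below never uses the column names directly
df = pd.DataFrame({"id": ["a", "b", "c"], "Crub": ["x", "y", "z"]})

# Keys from the first column, values from the second -- purely by position
dic = dict(zip(df.iloc[:, 0], df.iloc[:, 1]))
print(dic)  # {'a': 'x', 'b': 'y', 'c': 'z'}
```

<p>The same pattern works for any pair of column positions, regardless of what the columns are named.</p>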
| 1 | 2016-10-04T19:45:38Z | [
"python",
"pandas",
"dictionary"
]
|
How to apply read/write permissions to user-uploaded files in Django | 39,860,225 | <p>I have a "document" model, an upload system using dropzone.js and the register/login. I'm now lost on how to apply permissions to each individual uploaded <strong>File</strong> so only the specified users can access them.</p>
<p>Basically:
File1->accessible_by = user1,user2</p>
<p>File2->accessible_by = user3,user5...</p>
<p>And so on. </p>
<p>Thanks to anyone for advice/help on my problem.</p>
<h1>Edit with relevant code:</h1>
<h2>Create Document View:</h2>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file', 'is_public']
def form_valid(self, form):
self.object = form.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<p>I did the above to the view to handle dropzone.js file uploading.</p>
<h2>This is my "document" model</h2>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/')
#validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
is_public = models.BooleanField(default = False)
uploaded_by = models.ForeignKey(User,
related_name='uploadedByAsUser', null=True)
allowed_users = models.ManyToManyField(User,
related_name='allowedUsersAsUser')
def clean(self):
self.file.seek(0)
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
return self.file.name.split('/')[-1]
class Meta():
ordering = ['-uploaded_at']
</code></pre>
| 1 | 2016-10-04T19:24:51Z | 39,861,922 | <p>You can add an <code>allowed_user</code> field into the document model, so that only those users specified can access the files. for example:</p>
<pre><code>class Document(models.Model):
file = FileField()
uploaded_by = models.ForeignKey(User, related_name='uploadedByAsUser')
allowed_users = models.ManyToManyField(User, related_name='allowedUsersAsUser')
</code></pre>
<p>This way, if you want a user to be added to the "allowed" list, you can just add them using something like this:</p>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file', 'is_public']
def form_valid(self, form):
        self.object = form.save()
        # save first: the instance needs a primary key before M2M adds work
        self.object.allowed_users.add(self.request.user)
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<p>Then to make the Admin look nice (admin.py):</p>
<pre><code>class DocumentAdmin(admin.ModelAdmin):
list_display = ('uploaded_by', 'file')
fields = ('id', 'file', 'uploaded_at', 'extension', 'thumbnail', 'is_public', 'uploaded_by', 'allowed_users')
filter_horizontal = ('allowed_users',)
readonly_fields = ('id',)
admin.site.register(Document, DocumentAdmin)
</code></pre>
<p>Then if you want to check if they're allowed, you can do:</p>
<pre><code>if allowed_user in doc.allowed_users.all():
print('We have liftoff')
</code></pre>
| 0 | 2016-10-04T21:18:28Z | [
"python",
"django",
"authentication",
"permissions"
]
|
How to apply read/write permissions to user-uploaded files in Django | 39,860,225 | <p>I have a "document" model, an upload system using dropzone.js and the register/login. I'm now lost on how to apply permissions to each individual uploaded <strong>File</strong> so only the specified users can access them.</p>
<p>Basically:
File1->accessible_by = user1,user2</p>
<p>File2->accessible_by = user3,user5...</p>
<p>And so on. </p>
<p>Thanks to anyone for advice/help on my problem.</p>
<h1>Edit with relevant code:</h1>
<h2>Create Document View:</h2>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file', 'is_public']
def form_valid(self, form):
self.object = form.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<p>I did the above to the view to handle dropzone.js file uploading.</p>
<h2>This is my "document" model</h2>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/')
#validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
is_public = models.BooleanField(default = False)
uploaded_by = models.ForeignKey(User,
related_name='uploadedByAsUser', null=True)
allowed_users = models.ManyToManyField(User,
related_name='allowedUsersAsUser')
def clean(self):
self.file.seek(0)
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
return self.file.name.split('/')[-1]
class Meta():
ordering = ['-uploaded_at']
</code></pre>
| 1 | 2016-10-04T19:24:51Z | 39,862,182 | <p>You can set <code>content_type</code> on an <code>HttpResponse</code>, so you can do permission handling in your view and serve the file directly from Django:</p>
<pre><code>return HttpResponse("Text only, please.", content_type="text/plain")
</code></pre>
<p>Note: Django isn't a web server. It is recommended to use a web server for serving static files!</p>
<p>The above method might be a good approach if you handle small amounts of data and only incidentally serve data. If you need a robust solution, you need to check permissions in Django and leave the serving of data to the web server. </p>
<ul>
<li>Lighttpd X-Sendfile</li>
<li>Apache mod_wsgi Internal redirects</li>
<li>NGINX X-Accel-Redirect</li>
</ul>
<p>Have a look at Django file streaming packages: <a href="https://djangopackages.org/grids/g/file-streaming/" rel="nofollow">https://djangopackages.org/grids/g/file-streaming/</a></p>
| 0 | 2016-10-04T21:38:21Z | [
"python",
"django",
"authentication",
"permissions"
]
|
Trying to take input from stdin and convert to integer for sum | 39,860,245 | <p>Very new to Python.
Trying to take inputs from stdin in the form of numbers ex. 416 2876 2864 8575 9784 and convert to int for sum of all using a loop.</p>
<p>Having a terrible time just converting to int for use in a loop. Would like to get a hint on the integer issue and then try to solve myself. Was trying to test my code to print the integers after conversion.</p>
<p>currently have:</p>
<pre><code>import sys
s=sys.stdin
n=(sys.stdin.split())
while (int(n) >= 1):
print(n)
</code></pre>
| -1 | 2016-10-04T19:25:55Z | 39,860,316 | <p>You probably don't want to use <code>sys.stdin</code> directly. Use <code>input()</code> instead.</p>
<pre><code>line = input("Enter some numbers: ")
total = 0
for n in line.split():
total = total + int(n)
print("The total is: %d" % total)
</code></pre>
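<p>Since <code>input()</code> is interactive, the same split-and-sum logic can be checked on a fixed string (a minimal sketch using the numbers from the question in place of real user input):</p>

```python
line = "416 2876 2864 8575 9784"  # stands in for input(...) / a line of stdin

total = 0
for n in line.split():
    total = total + int(n)  # each whitespace-separated token becomes an int

print(total)  # 24515
```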
| 1 | 2016-10-04T19:30:10Z | [
"python"
]
|
Trying to take input from stdin and convert to integer for sum | 39,860,245 | <p>Very new to Python.
Trying to take inputs from stdin in the form of numbers ex. 416 2876 2864 8575 9784 and convert to int for sum of all using a loop.</p>
<p>Having a terrible time just converting to int for use in a loop. Would like to get a hint on the integer issue and then try to solve myself. Was trying to test my code to print the integers after conversion.</p>
<p>currently have:</p>
<pre><code>import sys
s=sys.stdin
n=(sys.stdin.split())
while (int(n) >= 1):
print(n)
</code></pre>
| -1 | 2016-10-04T19:25:55Z | 39,860,330 | <pre><code>user_input = input("Type some numbers: ")
numbers = user_input.split()
print(sum([int(x) for x in numbers]))
</code></pre>
| 0 | 2016-10-04T19:30:45Z | [
"python"
]
|
Trying to take input from stdin and convert to integer for sum | 39,860,245 | <p>Very new to Python.
Trying to take inputs from stdin in the form of numbers ex. 416 2876 2864 8575 9784 and convert to int for sum of all using a loop.</p>
<p>Having a terrible time just converting to int for use in a loop. Would like to get a hint on the integer issue and then try to solve myself. Was trying to test my code to print the integers after conversion.</p>
<p>currently have:</p>
<pre><code>import sys
s=sys.stdin
n=(sys.stdin.split())
while (int(n) >= 1):
print(n)
</code></pre>
| -1 | 2016-10-04T19:25:55Z | 39,860,589 | <p>If you want to read from <em>stdin</em> and cast to int just iterate over stdin and split then cast each int and <em>sum</em> :</p>
<pre><code>~$ cat test.py
import sys
print(sum(int(i) for sub in sys.stdin for i in sub.split()))
padraic@dell:~$ printf "416 2876 2864 8575 9784\n123 456 789 120"|python test.py
26003
</code></pre>
<p>Or using map:</p>
<pre><code>import sys
print(sum(sum(map(int, sub.split())) for sub in sys.stdin))
</code></pre>
<p>Or just call read and split.</p>
<pre><code>import sys
print(sum(map(int, sys.stdin.read().split())))
</code></pre>
| 0 | 2016-10-04T19:46:42Z | [
"python"
]
|
Python Requests ignoring time outs | 39,860,257 | <p>I'm trying to make some kind of a scanner with python (just for fun)</p>
<p>it will send a get request to an random ip and see if there is any answer
the problem is that every time that the connection fails the program will stop running .
this is the code </p>
<pre><code>import time
import requests
ips = open("ip.txt", "r")
for ip in ips:
r = requests.get(url="http://"+ip+"/",verify=False)
print(r.status_code)
time.sleep(0.5)
</code></pre>
<p>this is what i get by trying just a random ip :</p>
<pre><code>requests.exceptions.ConnectionError: HTTPConnectionPool(host='177.0.0.0', port=80): Max retries exceeded with url:
</code></pre>
| 0 | 2016-10-04T19:26:34Z | 39,860,302 | <p>This is throwing an error. To protect against this, use a <code>try</code>/<code>except</code> statement:</p>
<pre><code>for ip in ips:
try:
r = requests.get(url="http://"+ip+"/",verify=False)
print(r.status_code)
except requests.exceptions.RequestException as e:
print('Connecting to ip ' + ip + ' failed.', e)
time.sleep(0.5)
</code></pre>
| 2 | 2016-10-04T19:29:06Z | [
"python",
"python-requests"
]
|
Modifying a pandas dataframe that may be a view | 39,860,269 | <p>I have a pandas <code>DataFrame</code> <code>df</code> that is returned from a function and I generally don't know whether it is an independent object or a view on another <code>DataFrame</code>. I want to add new columns to it but don't want to copy it unnecessarily.</p>
<pre><code>df['new_column'] = 0
</code></pre>
<p>may give a nasty warning about modifying a copy</p>
<pre><code>df = df.copy()
</code></pre>
<p>may be expensive if df is large.
What's the best way here?</p>
| 0 | 2016-10-04T19:27:11Z | 39,860,742 | <p>You should use an indexer to create your <code>s1</code>, like this:</p>
<pre><code>import pandas as pd
s = pd.DataFrame({'a':[1,2], 'b':[2,3]})
indexer = s[s.a > 1].index
s1 = s.loc[indexer, :]
s1['c'] = 0
</code></pre>
<p>This should remove the warning.</p>
| 0 | 2016-10-04T19:56:57Z | [
"python",
"pandas"
]
|
is there a way I can store the factorization machine model? | 39,860,371 | <p>I am using the <a href="https://github.com/coreylynch/pyFM" rel="nofollow">https://github.com/coreylynch/pyFM</a> module to predict movie ratings. However, is there a way I can store (I am using django) the factorization machine after it is trained? Because right now (following the example), I would have to retrain the model every time I restart the server.</p>
| 0 | 2016-10-04T19:33:54Z | 39,860,426 | <p>Take a look at <a href="https://docs.python.org/2/library/pickle.html" rel="nofollow">pickle</a>. After you train your model, you can save a representation of the python object to a file, and reopen it when you need it.</p>
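<p>A minimal sketch of the round trip, using a plain dict as a stand-in for the trained factorization machine object:</p>

```python
import pickle

model = {"n_factors": 10, "weights": [0.1, 0.2, 0.3]}  # stand-in for the trained model

blob = pickle.dumps(model)     # bytes you can write to disk once, after training
restored = pickle.loads(blob)  # load at server startup instead of retraining
```

<p>With a real model object you would write it once with <code>pickle.dump(model, f)</code> after training and read it back with <code>pickle.load(f)</code> when the server starts.</p>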
| 0 | 2016-10-04T19:37:28Z | [
"python",
"django",
"machine-learning",
"artificial-intelligence",
"collaborative-filtering"
]
|
is there a way I can store the factorization machine model? | 39,860,371 | <p>I am using the <a href="https://github.com/coreylynch/pyFM" rel="nofollow">https://github.com/coreylynch/pyFM</a> module to predict movie ratings. However, is there a way I can store (I am using django) the factorization machine after it is trained? Because right now (following the example), I would have to retrain the model every time I restart the server.</p>
| 0 | 2016-10-04T19:33:54Z | 39,860,444 | <p>You are using <code>sklearn</code>. If your model is not huge, Python's built-in persistence module, <code>pickle</code>, should work. There is an example <a href="http://scikit-learn.org/stable/modules/model_persistence.html" rel="nofollow">here</a>.</p>
| 0 | 2016-10-04T19:38:36Z | [
"python",
"django",
"machine-learning",
"artificial-intelligence",
"collaborative-filtering"
]
|
Python list of lists to divide an element position on all sublists by a number | 39,860,398 | <p>How can I, in Python, take a list of lists and apply a division to the third element of each sublist?</p>
<p>The sublists look like this:</p>
<pre><code>[1.3735876284e-05, 0.9277849431216683, 34.02875434027778, 0.0]
[2.60440773e-06, 7.35174234e-01, 2.79259180e+02, 0.00000000e+00]
...
</code></pre>
<p>I need to get the same sublists but the third element of each sublist (34.02 ..., 2.79 ...) should be divided by 100</p>
| -2 | 2016-10-04T19:35:53Z | 39,860,493 | <p>Use a list comprehension over the outer list, slicing each sub-list apart and using <code>+</code> to concatenate the parts back together:</p>
<p><code>lambda L: [l[:2]+[l[2]/100]+l[3:] for l in L]</code></p>
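<p>A runnable version of that lambda on the question's data (the name <code>divide_third</code> is made up here for illustration):</p>

```python
divide_third = lambda L: [l[:2] + [l[2] / 100] + l[3:] for l in L]

data = [
    [1.3735876284e-05, 0.9277849431216683, 34.02875434027778, 0.0],
    [2.60440773e-06, 7.35174234e-01, 2.79259180e+02, 0.0],
]

# Every sublist keeps its other elements; only index 2 is divided by 100
result = divide_third(data)
```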
| 1 | 2016-10-04T19:41:16Z | [
"python",
"list"
]
|
Python list of lists to divide an element position on all sublists by a number | 39,860,398 | <p>How can I, in Python, take a list of lists and apply a division to the third element of each sublist?</p>
<p>The sublists look like this:</p>
<pre><code>[1.3735876284e-05, 0.9277849431216683, 34.02875434027778, 0.0]
[2.60440773e-06, 7.35174234e-01, 2.79259180e+02, 0.00000000e+00]
...
</code></pre>
<p>I need to get the same sublists but the third element of each sublist (34.02 ..., 2.79 ...) should be divided by 100</p>
| -2 | 2016-10-04T19:35:53Z | 39,860,526 | <p>You could try this:</p>
<pre><code>a = [
[1.3735876284e-05, 0.9277849431216683, 34.02875434027778, 0.0],
[2.60440773e-06, 7.35174234e-01, 2.79259180e+02, 0.00000000e+00],
]
b = [
[(x / 100.0 if i == 2 else x) for (i, x) in enumerate(lst)]
for lst in a
]
</code></pre>
<p>Or the lambda version:</p>
<pre><code>f = lambda a: [
[(x / 100.0 if i == 2 else x) for (i, x) in enumerate(lst)]
for lst in a
]
b = f(a)
</code></pre>
| 1 | 2016-10-04T19:43:04Z | [
"python",
"list"
]
|
Referencing a C++ allocated object in pybind11 | 39,860,405 | <p>I'm trying to create a python binding with pybind11 that references a C++ instance whose memory is handled on the C++ side. Here is some example code:</p>
<pre><code>#include <pybind11/pybind11.h>
struct Dog {
void bark() { printf("Bark!\n"); }
};
int main()
{
auto dog = new Dog;
Py_Initialize();
initexample(); // Initialize the example python module for import
// TBD - Add binding between dog and example.dog .
PyRun_StringFlags("import example\n"
"\n"
"example.dog.bark()\n" // Access the C++ allocated object dog.
, Py_file_input, main_dict, main_dict, NULL);
Py_Finalize();
}
</code></pre>
<p>I'm stuck on how to create the link between the python <code>example.dog</code> and the C++ <code>dog</code> variable. </p>
<p>I can't use <code>py:class_<Dog>.def(py::init<>())</code> as that will allocate a new instance of <code>Dog</code>, which is not what I want. </p>
| 0 | 2016-10-04T19:36:03Z | 39,916,514 | <p>I found an answer to my own question. The trick was a combination of the following two concepts:</p>
<ul>
<li>Create an independent function that returns the singleton.</li>
<li>Create binding to the singleton class <strong>without</strong> binding a constructor. </li>
</ul>
<p>The following example illustrates the technique:</p>
<pre><code>#include <Python.h>
#include <pybind11/pybind11.h>
namespace py = pybind11;
using namespace pybind11::literals;
// Singleton to wrap
struct Singleton
{
Singleton() : x(0) {}
int exchange(int n) // set x and return the old value
{
std::swap(n, x);
return n;
}
// Singleton reference
static Singleton& instance()
{
static Singleton just_one;
return just_one;
}
int x;
};
PYBIND11_PLUGIN(example) {
py::module m("example", "pybind11 example plugin");
// Use this function to get access to the singleton
m.def("get_instance",
&Singleton::instance,
py::return_value_policy::reference,
"Get reference to the singleton");
// Declare the singleton methods
py::class_<Singleton>(m, "Singleton")
// No init!
.def("exchange",
&Singleton::exchange,
"n"_a,
"Exchange and return the current value"
)
;
return m.ptr();
}
int main(int argc, char **argv)
{
Py_Initialize();
PyObject* main_module = PyImport_AddModule("__main__");
PyObject* main_dict = PyModule_GetDict(main_module);
initexample();
// Call singleton from c++
Singleton::instance().exchange(999);
// Populate the example class with two static pointers to our instance.
if (PyRun_StringFlags("import example\n"
"\n"
"example.s1 = example.get_instance()\n"
"example.s2 = example.get_instance()\n",
Py_file_input, main_dict, main_dict, NULL) == nullptr)
PyErr_Print();
// Test referencing the singleton references
if (PyRun_StringFlags("from example import *\n"
"\n"
"for i in range(3):\n"
" print s1.exchange(i*2+1)\n"
" print s2.exchange(i*2+2)\n"
"print dir(s1)\n"
"print help(s1.exchange)\n"
,
Py_file_input, main_dict, main_dict, NULL) == nullptr)
PyErr_Print();
Py_Finalize();
exit(0);
}
</code></pre>
| 0 | 2016-10-07T11:48:34Z | [
"python",
"c++",
"pybind11"
]
|
Fast Python/Numpy Frequency-Severity Distribution Simulation | 39,860,431 | <p>I'm looking for a way to simulate a classical frequency severity distribution, something like:
X = sum(i = 1..N, Y_i), where N is for example poisson distributed and Y lognormal.</p>
<p>Simple naive numpy script would be:</p>
<pre><code>import numpy as np
SIM_NUM = 3
X = []
for _i in range(SIM_NUM):
nr_claims = np.random.poisson(1)
temp = []
for _j in range(nr_claims):
temp.append(np.random.lognormal(0, 1))
X.append(sum(temp))
</code></pre>
<p>Now I try to vectorize that for a performance increase. A bit better is the following:</p>
<pre><code>N = np.random.poisson(1, SIM_NUM)
X = []
for n in N:
X.append(sum(np.random.lognormal(0, 1, n)))
</code></pre>
<p>My question is if I can still vectorize the second loop? For example by simulating all the losses with:</p>
<pre><code>N = np.random.poisson(1, SIM_NUM)
# print(N) would lead to something like [1 3 0]
losses = np.random.lognormal(0,1, sum(N))
# print(losses) would lead to something like
#[ 0.56750244 0.84161871 0.41567216 1.04311742]
# X should now be [ 0.56750244, 0.84161871 + 0.41567216 + 1.04311742, 0]
</code></pre>
<p>Ideas that I have are "smart slicing" or "matrix multiplication with A = [[1,0,0,0],[0,1,1,1],[0,0,0,0]]". But I couldn't make something clever out of these ideas.</p>
<p>I'm looking for the fastest possible computation of X.</p>
| 3 | 2016-10-04T19:38:00Z | 39,861,935 | <p>You're looking for <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow"><code>numpy.add.reduceat</code></a>:</p>
<pre><code>N = np.random.poisson(1, SIM_NUM)
losses = np.random.lognormal(0,1, np.sum(N))
x = np.zeros(SIM_NUM)
offsets = np.r_[0, np.cumsum(N[N>0])]
x[N>0] = np.add.reduceat(losses, offsets[:-1])
</code></pre>
<p>The case where <code>n == 0</code> is handled separately, because of how <code>reduceat</code> works. Also, be sure to use <code>numpy.sum</code> on arrays instead of the much slower Python <code>sum</code>.</p>
<p>Whether this is faster than the other answer depends on the mean of your Poisson distribution.</p>
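<p>A deterministic check of the mechanics, using the <code>N = [1, 3, 0]</code> example from the question with fixed losses instead of random ones:</p>

```python
import numpy as np

N = np.array([1, 3, 0])                  # claim counts per simulation
losses = np.array([0.5, 1.0, 2.0, 3.0])  # one loss per claim, concatenated

x = np.zeros(len(N))
offsets = np.r_[0, np.cumsum(N[N > 0])]  # [0, 1, 4] -> group start boundaries
x[N > 0] = np.add.reduceat(losses, offsets[:-1])

print(x.tolist())  # [0.5, 6.0, 0.0]
```

<p>The zero-count simulation keeps its initial 0, exactly as the answer describes.</p>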
| 1 | 2016-10-04T21:19:12Z | [
"python",
"performance",
"numpy",
"vectorization"
]
|
Fast Python/Numpy Frequency-Severity Distribution Simulation | 39,860,431 | <p>I'm looking for a way to simulate a classical frequency severity distribution, something like:
X = sum(i = 1..N, Y_i), where N is for example poisson distributed and Y lognormal.</p>
<p>Simple naive numpy script would be:</p>
<pre><code>import numpy as np
SIM_NUM = 3
X = []
for _i in range(SIM_NUM):
nr_claims = np.random.poisson(1)
temp = []
for _j in range(nr_claims):
temp.append(np.random.lognormal(0, 1))
X.append(sum(temp))
</code></pre>
<p>Now I try to vectorize that for a performance increase. A bit better is the following:</p>
<pre><code>N = np.random.poisson(1, SIM_NUM)
X = []
for n in N:
X.append(sum(np.random.lognormal(0, 1, n)))
</code></pre>
<p>My question is if I can still vectorize the second loop? For example by simulating all the losses with:</p>
<pre><code>N = np.random.poisson(1, SIM_NUM)
# print(N) would lead to something like [1 3 0]
losses = np.random.lognormal(0,1, sum(N))
# print(losses) would lead to something like
#[ 0.56750244 0.84161871 0.41567216 1.04311742]
# X should now be [ 0.56750244, 0.84161871 + 0.41567216 + 1.04311742, 0]
</code></pre>
<p>Ideas that I have are "smart slicing" or "matrix multiplication with A = [[1,0,0,0],[0,1,1,1],[0,0,0,0]]". But I couldn't make something clever out of these ideas.</p>
<p>I'm looking for the fastest possible computation of X.</p>
| 3 | 2016-10-04T19:38:00Z | 39,862,127 | <p>We can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a>, which is quite efficient for such interval/ID-based summing operations, especially when working with <code>1D</code> arrays. The implementation would look something like this -</p>
<pre><code># Generate all poisson distribution values in one go
pv = np.random.poisson(1,SIM_NUM)
# Use poisson values to get count of total for random lognormal needed.
# Generate all those random numbers again in vectorized way
rand_arr = np.random.lognormal(0, 1, pv.sum())
# Finally create IDs using pv as extents for use with bincount to do
# ID based and thus effectively interval-based summing
out = np.bincount(np.arange(pv.size).repeat(pv),rand_arr,minlength=SIM_NUM)
</code></pre>
<p>Runtime test -</p>
<p>Function definitions :</p>
<pre><code>def original_app1(SIM_NUM):
X = []
for _i in range(SIM_NUM):
nr_claims = np.random.poisson(1)
temp = []
for _j in range(nr_claims):
temp.append(np.random.lognormal(0, 1))
X.append(sum(temp))
return X
def original_app2(SIM_NUM):
N = np.random.poisson(1, SIM_NUM)
X = []
for n in N:
X.append(sum(np.random.lognormal(0, 1, n)))
return X
def vectorized_app1(SIM_NUM):
pv = np.random.poisson(1,SIM_NUM)
r = np.random.lognormal(0, 1,pv.sum())
return np.bincount(np.arange(pv.size).repeat(pv),r,minlength=SIM_NUM)
</code></pre>
<p>Timings on large datasets :</p>
<pre><code>In [199]: SIM_NUM = 1000
In [200]: %timeit original_app1(SIM_NUM)
100 loops, best of 3: 2.6 ms per loop
In [201]: %timeit original_app2(SIM_NUM)
100 loops, best of 3: 6.65 ms per loop
In [202]: %timeit vectorized_app1(SIM_NUM)
1000 loops, best of 3: 252 µs per loop
In [203]: SIM_NUM = 10000
In [204]: %timeit original_app1(SIM_NUM)
10 loops, best of 3: 26.1 ms per loop
In [205]: %timeit original_app2(SIM_NUM)
10 loops, best of 3: 77.5 ms per loop
In [206]: %timeit vectorized_app1(SIM_NUM)
100 loops, best of 3: 2.46 ms per loop
</code></pre>
<p>So, we are looking at some <strong><code>10x+</code></strong> speedup there.</p>
| 3 | 2016-10-04T21:33:37Z | [
"python",
"performance",
"numpy",
"vectorization"
]
|
Operations with multiple dicts | 39,860,466 | <p>The main question is: how do I iterate correctly to work with two dicts?
I am given two dicts <code>(d1, d2)</code> which I have to compare. If a key i appears in both, an operation defined by a given function is applied to the two values. The result goes into another dict (<code>dict1</code>). If only one of <code>d1</code> or <code>d2</code> contains the key i, the value goes into <code>dict2</code>. The return is a <code>tup = (dict1, dict2)</code>. Here is an example:</p>
<pre><code>If f(a, b) returns a + b
d1 = {1:30, 2:20, 3:30, 5:80}
d2 = {1:40, 2:50, 3:60, 4:70, 6:90}
then dict_interdiff(d1, d2) returns ({1: 70, 2: 70, 3: 90}, {4: 70, 5: 80, 6: 90})
</code></pre>
<p>I am struggling with the correct way to properly indicate the two dicts d1 and d2. Here is my code:</p>
<pre><code>def h(a, b):
return a > b
d2 = {1:40, 2:50, 3:60, 4:70, 6:90}
d1 = {1:30, 2:20, 3:30, 5:80}
def dict_interdiff(d1, d2):
dict1 = {}
dict2 = {}
for i in d1:
if i in d1 #and d2:
dict1[i] = h(d1[i], d2[i])
else:
dict[i] = d1[i] #or d2[i]
tup = (dict1, dict2)
return tup
</code></pre>
<p>Do I have to loop over d1 and d2 (<code>for i in d1 and d2:</code>)? It seems like I have to somehow integrate both given dicts to make the for loop work. </p>
<p>Thanks for any hints!</p>
| 0 | 2016-10-04T19:39:50Z | 39,860,599 | <p>Use <a href="https://docs.python.org/3/library/itertools.html#itertools.chain" rel="nofollow"><code>itertools.chain</code></a> to get an iterable of all keys in your dicts.</p>
<pre><code>from itertools import chain
def h(a, b):
return a > b
d2 = {1:40, 2:50, 3:60, 4:70, 6:90}
d1 = {1:30, 2:20, 3:30, 5:80}
def dict_interdiff(d1, d2):
dict1 = {}
dict2 = {}
for key in set(chain(d1, d2)):
if key in d1 and key in d2:
dict1[key] = h(d1[key], d2[key])
else:
            dict2[key] = d1[key] if key in d1 else d2[key]
return dict1, dict2
</code></pre>
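<p>With the question's <code>f(a, b) = a + b</code> plugged in for <code>h</code>, the same pattern reproduces the expected output (a self-contained sketch; <code>d1[key] if key in d1 else d2[key]</code> avoids the pitfall of <code>d1.get(key) or d2.get(key)</code>, which would pick the wrong dict when a value is falsy, e.g. 0):</p>

```python
from itertools import chain

def f(a, b):
    return a + b

def dict_interdiff(d1, d2):
    dict1, dict2 = {}, {}
    for key in set(chain(d1, d2)):
        if key in d1 and key in d2:
            dict1[key] = f(d1[key], d2[key])  # key present in both dicts
        else:
            dict2[key] = d1[key] if key in d1 else d2[key]
    return dict1, dict2

d1 = {1: 30, 2: 20, 3: 30, 5: 80}
d2 = {1: 40, 2: 50, 3: 60, 4: 70, 6: 90}
print(dict_interdiff(d1, d2))  # ({1: 70, 2: 70, 3: 90}, {4: 70, 5: 80, 6: 90})
```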
| 0 | 2016-10-04T19:47:31Z | [
"python",
"dictionary",
"iteration"
]
|
Operations with multiple dicts | 39,860,466 | <p>The main question is: how do I iterate correctly to work with two dicts?
I am given two dicts <code>(d1, d2)</code> which I have to compare. If a key i appears in both, an operation defined by a given function is applied to the two values. The result goes into another dict (<code>dict1</code>). If only one of <code>d1</code> or <code>d2</code> contains the key i, the value goes into <code>dict2</code>. The return is a <code>tup = (dict1, dict2)</code>. Here is an example:</p>
<pre><code>If f(a, b) returns a + b
d1 = {1:30, 2:20, 3:30, 5:80}
d2 = {1:40, 2:50, 3:60, 4:70, 6:90}
then dict_interdiff(d1, d2) returns ({1: 70, 2: 70, 3: 90}, {4: 70, 5: 80, 6: 90})
</code></pre>
<p>I am struggling with the correct way to properly indicate the two dicts d1 and d2. Here is my code:</p>
<pre><code>def h(a, b):
return a > b
d2 = {1:40, 2:50, 3:60, 4:70, 6:90}
d1 = {1:30, 2:20, 3:30, 5:80}
def dict_interdiff(d1, d2):
dict1 = {}
dict2 = {}
for i in d1:
if i in d1 #and d2:
dict1[i] = h(d1[i], d2[i])
else:
dict[i] = d1[i] #or d2[i]
tup = (dict1, dict2)
return tup
</code></pre>
<p>Do I have to loop over d1 and d2 (<code>for i in d1 and d2:</code>)? It seems like I have to somehow integrate both given dicts to make the for loop work. </p>
<p>Thanks for any hints!</p>
| 0 | 2016-10-04T19:39:50Z | 39,860,728 | <pre><code>import itertools
def interdict(d1, d2):
    dict1 = {}
    dict2 = {}
    for i in set(itertools.chain(d1, d2)):
        if i in d1 and i in d2:
            dict1[i] = h(d1[i], d2[i])  # key in both dicts: apply the given function
        elif i in d1:
            dict2[i] = d1[i]            # key only in d1
        else:
            dict2[i] = d2[i]            # key only in d2
    return (dict1, dict2)
</code></pre>
<p><code>set</code> gets rid of duplicates. <code>itertools.chain</code> combines the keys of the two dictionaries.</p>
| 0 | 2016-10-04T19:56:15Z | [
"python",
"dictionary",
"iteration"
]
|
Running super().__init__(value) where value is an @property | 39,860,556 | <p>I'm just trying to grok how exactly Python handles this behind the scenes. So take this code snippet (from Effective Python by Brett Slatkin):</p>
<pre><code>class Resistor(object):
def __init__(self, ohms):
self.ohms = ohms
self.voltage = 0
self.current = 0
class VoltageResistor(Resistor):
def __init__(self, ohms):
super().__init__(ohms)
self._voltage = 0
@property
def ohms(self):
return self._ohms
@ohms.setter
def ohms(self, ohms):
if ohms <= 0:
raise ValueError('{o} ohms must be > 0'.format(o=ohms))
self._ohms = ohms
@property
def voltage(self):
return self._voltage
@voltage.setter
def voltage(self, voltage):
self._voltage = voltage
self.current = self._voltage / self.ohms
VoltageResistor(-1) # fails
</code></pre>
<p>Running the <code>super()</code> call invokes the property check so that you can't instantiate with a zero or negative value. What is confusing to me is that I would think that since the <code>__init__(ohms)</code> call is being run on the superclass, shouldn't it be in a different scope (the scope of the superclass) and thus exempt from invoking the <code>@property</code> check?</p>
| 1 | 2016-10-04T19:44:50Z | 39,860,687 | <p>Scope doesn't come into play when working with an object's attributes. Consider the following:</p>
<pre><code>class A(object):
def __init__(self):
self.a = 1
def foo():
a = A()
a.a = 2
return a
def bar(a):
print(a.a)
bar(foo())
</code></pre>
<p>This example code will print 2. Note that within the scope of <code>bar</code>, there <em>is</em> no way to gain access to the scope of <code>foo</code> or even <code>A.__init__</code>. The class <em>instance</em> is carrying along all of its attributes/properties with it (and a reference to its class, which has a reference to <em>its</em> superclass, etc.).</p>
<p>In your code, when you call <code>VoltageResistor</code>, an instance of <code>VoltageResistor</code> is created and passed to <code>__init__</code> as <code>self</code>. When you call <code>super().__init__(ohms)</code>, that <code>VoltageResistor</code> instance is passed along to <code>Resistor.__init__</code>. When it does <code>self.ohms = ohms</code>, Python sees that <code>self.ohms</code> resolves to a property and you get the error. The tl;dr here is that <code>self</code> is an instance of <code>VoltageResistor</code>, and <strong>when working with attributes, the object on which the attributes are accessed is what is important, not the current scope</strong>.</p>
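<p>A stripped-down demonstration of the same mechanism — the instance's class decides which attribute machinery runs, even inside the base class's <code>__init__</code> (the class and attribute names here are hypothetical):</p>

```python
class Base:
    def __init__(self, x):
        self.x = x  # attribute lookup goes through type(self), not Base

class Checked(Base):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        # This setter runs even though the assignment happens in Base.__init__
        if value <= 0:
            raise ValueError("must be > 0")
        self._x = value

print(Checked(5).x)  # 5
try:
    Checked(-1)
except ValueError as e:
    print("rejected:", e)  # rejected: must be > 0
```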
| 2 | 2016-10-04T19:53:08Z | [
"python"
]
|
Running super().__init__(value) where value is an @property | 39,860,556 | <p>I'm just trying to grok how exactly Python handles this behind the scenes. So take this code snippet (from Effective Python by Brett Slatkin):</p>
<pre><code>class Resistor(object):
def __init__(self, ohms):
self.ohms = ohms
self.voltage = 0
self.current = 0
class VoltageResistor(Resistor):
def __init__(self, ohms):
super().__init__(ohms)
self._voltage = 0
@property
def ohms(self):
return self._ohms
@ohms.setter
def ohms(self, ohms):
if ohms <= 0:
raise ValueError('{o} ohms must be > 0'.format(o=ohms))
self._ohms = ohms
@property
def voltage(self):
return self._voltage
@voltage.setter
def voltage(self, voltage):
self._voltage = voltage
self.current = self._voltage / self.ohms
VoltageResistor(-1) # fails
</code></pre>
<p>Running the <code>super()</code> call invokes the property check so that you can't instantiate with a zero or negative value. What is confusing to me is that I would think that since the <code>__init__(ohms)</code> call is being run on the superclass, shouldn't it be in a different scope (the scope of the superclass) and thus exempt from invoking the <code>@property</code> check?</p>
| 1 | 2016-10-04T19:44:50Z | 39,860,780 | <p>To supplement the above excellent answer, just add the following line in the parent's constructor to get a better idea of what is going on:</p>
<pre><code>class Resistor(object):
def __init__(self, ohms):
print (type(self).__name__)
self.ohms = ohms
</code></pre>
<p>It will print <code>VoltageResistor</code> and then throw a <code>ValueError</code>. The <a href="https://docs.python.org/3/library/functions.html" rel="nofollow">Python docs</a> confirm this:</p>
<blockquote>
<p>If <strong>c is an instance of C</strong>, c.x will invoke the getter, <strong>c.x = value will invoke the setter</strong> and del c.x the deleter.</p>
</blockquote>
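<p>The quoted behaviour can be seen directly with a throwaway class (purely illustrative, not from the question):</p>

```python
calls = []

class C:
    @property
    def x(self):
        calls.append('get')
        return 42

    @x.setter
    def x(self, value):
        calls.append('set')

    @x.deleter
    def x(self):
        calls.append('del')

c = C()
c.x          # invokes the getter
c.x = 5      # invokes the setter
del c.x      # invokes the deleter
print(calls) # ['get', 'set', 'del']
```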
| 1 | 2016-10-04T19:59:11Z | [
"python"
]
|
Inspectdb error django | 39,860,583 | <p>I have a PostgreSQL db. I want to reverse engineer models from the db with the command </p>
<pre><code>python3 manage.py inspectdb > models/models.py
</code></pre>
<p>All was OK, but I needed to customize auth. I extended from AbstractBaseUser and tried to re-inspect, because I had altered fields. Then I looked in the db and saw that Django had added a bunch of tables, and I can't re-inspect my db. Error:</p>
<pre><code>python3 manage.py inspectdb > models/models.py
super(ForeignObject, self).contribute_to_class(cls, name, private_only=private_only, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 314, in contribute_to_class
lazy_related_operation(resolve_related_class, cls, self.remote_field.model, field=self)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 81, in lazy_related_operation
return apps.lazy_model_operation(partial(function, **kwargs), *model_keys)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 79, in <genexpr>
model_keys = (make_model_tuple(m) for m in models)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/utils.py", line 23, in make_model_tuple
"must be of the form 'app_label.ModelName'." % model
ValueError: Invalid model reference 'models.model.Account'. String model references must be of the form 'app_label.ModelName'.
artem@artem-debian:/usr/finstatement/project$ python3 manage.py inspectdb > models/models.py
ERROR OH NO
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/db/models/utils.py", line 14, in make_model_tuple
app_label, model_name = model.split(".")
ValueError: too many values to unpack (expected 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 341, in execute
django.setup()
File "/usr/local/lib/python3.4/dist-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/usr/local/lib/python3.4/dist-packages/django/apps/config.py", line 199, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/usr/local/lib/python3.4/dist-packages/django/contrib/admin/models.py", line 37, in <module>
class LogEntry(models.Model):
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 157, in __new__
new_class.add_to_class(obj_name, obj)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 316, in add_to_class
value.contribute_to_class(cls, name)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 700, in contribute_to_class
super(ForeignObject, self).contribute_to_class(cls, name, private_only=private_only, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 314, in contribute_to_class
lazy_related_operation(resolve_related_class, cls, self.remote_field.model, field=self)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 81, in lazy_related_operation
return apps.lazy_model_operation(partial(function, **kwargs), *model_keys)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/related.py", line 79, in <genexpr>
model_keys = (make_model_tuple(m) for m in models)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/utils.py", line 23, in make_model_tuple
"must be of the form 'app_label.ModelName'." % model
ValueError: Invalid model reference 'models.model.Account'. String model references must be of the form 'app_label.ModelName'.
</code></pre>
<p>And it deletes all models from models/models.py.</p>
<p>I have dumps now, but maybe I can find a better solution than recreating the DB?</p>
| -1 | 2016-10-04T19:46:32Z | 39,860,669 | <p>Ok, I found a solution. I needed to delete this from settings.py:</p>
<pre><code>AUTH_USER_MODEL = 'models.models.Account'
</code></pre>
| 0 | 2016-10-04T19:52:04Z | [
"python",
"django",
"postgresql"
]
|
multiprocessing function call | 39,860,588 | <p>I am trying to do some multiprocessing in Python. I have a function doing some work and returning a list. I want to repeat that for several cases. Finally, I want to get the returned list of each parallel call and unify them (have only one list with all duplicates removed).</p>
<pre><code>def get_version_list(env):
list = []
#do some intensive work
return list
from multiprocessing import Pool
pool = Pool()
result1 = pool.apply_async(get_version_list, ['prod'])
result2 = pool.apply_async(get_version_list, ['uat'])
#etc, I have six environment to check.
alist = result1.get()
blist = result2.get()
</code></pre>
<p>This does not work (not sure about the function call syntax, but I tried other things too, without success), it gives me an error (and repeats it a lot since my intensive work is doing around 300 request.post calls).</p>
<p>RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.</p>
<pre><code>This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
</code></pre>
| 0 | 2016-10-04T19:46:41Z | 39,860,952 | <p>You have to put the multiprocessing portion inside of your main function, like:</p>
<pre><code>def get_version_list(env):
list = []
print "ENV: " + env
return list
if __name__ == '__main__':
from multiprocessing import Pool
pool = Pool()
result1 = pool.apply_async(get_version_list, ['prod'])
result2 = pool.apply_async(get_version_list, ['uat'])
#etc, I have six environment to check.
alist = result1.get()
blist = result2.get()
</code></pre>
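<p>The question also asks to unify the returned lists with all duplicates removed; once <code>alist</code> and <code>blist</code> have been collected, that is a set union (a sketch with made-up version strings; sets do not preserve order, hence the sort):</p>

```python
alist = ['1.0', '1.1', '1.2']   # e.g. versions returned for 'prod'
blist = ['1.1', '1.2', '2.0']   # e.g. versions returned for 'uat'

# Union removes duplicates across all environments in one step
unified = sorted(set(alist) | set(blist))
print(unified)  # ['1.0', '1.1', '1.2', '2.0']
```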
| 1 | 2016-10-04T20:11:19Z | [
"python",
"multiprocessing"
]
|
ZeroMQ FiniteStateMachineException in REQ/REP pattern | 39,860,614 | <p>I have two simple components which are supposed to communicate with each other using the REQ/REP ZeroMQ pattern.
The Server (REP Socket) is implemented in Python using pyzmq:</p>
<pre><code>import zmq
def launch_server():
print "Launching server"
with zmq.Context.instance() as ctx:
socket = ctx.socket(zmq.REP)
socket.bind('tcp://127.0.0.1:5555')
while True:
msg = socket.recv()
print "EOM\n\n"
</code></pre>
<p>The Client (REQ socket) written in C# using the NetMQ library:</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using NetMQ;
namespace PyNetMQTest
{
class Program
{
static void Main(string[] args)
{
string msg;
NetMQ.Sockets.RequestSocket socket = new NetMQ.Sockets.RequestSocket();
socket.Connect("tcp://127.0.0.1:5555");
for(int i=0; i<5; i++)
socket.SendFrame("test_"+i);
}
}
}
</code></pre>
<p>The Python Server implementation has been tested and works fine by talking to a REQ socket implemented using Python. But the C# REQ socket throws the following error within the first iteration of the loop and no messages reach the Server whatsoever:</p>
<p><strong>An unhandled exception of type 'NetMQ.FiniteStateMachineException' occurred in NetMQ.dll
Additional information: Req.XSend - cannot send another request</strong></p>
<p>Stacktrace:</p>
<pre><code>at NetMQ.Core.Patterns.Req.XSend(Msg& msg)
at NetMQ.Core.SocketBase.TrySend(Msg& msg, TimeSpan timeout, Boolean more)
at NetMQ.NetMQSocket.TrySend(Msg& msg, TimeSpan timeout, Boolean more)
at NetMQ.OutgoingSocketExtensions.Send(IOutgoingSocket socket, Msg& msg, Boolean more)
at NetMQ.OutgoingSocketExtensions.SendFrame(IOutgoingSocket socket, String message, Boolean more)
at PyNetMQTest.Program.Main(String[] args) in d:\users\emes\documents\visual studio 2015\Projects\PyNetMQ Test\PyNetMQTest\Program.cs:line 20
at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
</code></pre>
<p>These are my first steps with ZMQ and the C# code is taken from the library <a href="http://netmq.readthedocs.io/en/latest/introduction/" rel="nofollow">documentation</a>. <strong>What makes the code throw this error?</strong></p>
<p>I am using:</p>
<ul>
<li>pyzmq 14.7</li>
<li>NetMQ 3.3.3.4 </li>
<li>.NET 4.6</li>
</ul>
<hr>
<p><strong>====================== SOLUTION ======================</strong></p>
<p>As explained by @somdoron in his answer, the root cause was that both sockets need to go through the full send/receive cycle before being able to be reused.
As a matter of fact, the REP socket implemented in Python never changed its state either, so the error was in both the Python AND the C# code. Here is the fixed code:</p>
<p>REP Socket</p>
<pre><code>import zmq
def launch_server():
print "Launching server"
with zmq.Context.instance() as ctx:
socket = ctx.socket(zmq.REP)
socket.bind('tcp://127.0.0.1:5555')
while True:
msg = socket.recv()
socket.send("reply to "+msg)
print "EOM\n\n"
</code></pre>
<p>REQ Socket</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using NetMQ;
namespace PyNetMQTest
{
class Program
{
static void Main(string[] args)
{
NetMQ.Sockets.RequestSocket socket = new NetMQ.Sockets.RequestSocket();
socket.Connect("tcp://127.0.0.1:5555");
string msg, reply;
while (true)
{
Console.WriteLine("Type message: ");
msg = Console.ReadLine();
Console.WriteLine("Sending : " + msg);
socket.SendFrame(msg);
reply = socket.ReceiveFrameString();
Console.WriteLine("Received: " + reply + Environment.NewLine);
}
}
}
}
</code></pre>
| 0 | 2016-10-04T19:48:41Z | 39,868,689 | <p>Request and Response sockets are state machines: with Request you must first Send and then call Receive; you cannot make 5 consecutive Send calls.</p>
<p>With Response it's the opposite, you must call Receive first.</p>
<p>If one side is only sending and the other only receiving, you can use the Push-Pull pattern instead of Req-Rep. You can also use Dealer-Router if two-way communication is needed. Anyway, it seems the usage of Req-Rep here is incorrect.</p>
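<p>The alternation the REQ state machine enforces can be illustrated without any networking, using a toy Python class (purely illustrative; this class is not part of ZeroMQ, pyzmq, or NetMQ):</p>

```python
class ToyReqSocket:
    """Toy model of a REQ socket's state machine: send and receive
    must strictly alternate, starting with a send."""
    def __init__(self):
        self.awaiting_reply = False

    def send(self, msg):
        if self.awaiting_reply:
            # Mirrors NetMQ's "cannot send another request" FiniteStateMachineException
            raise RuntimeError("cannot send another request")
        self.awaiting_reply = True

    def recv(self):
        if not self.awaiting_reply:
            raise RuntimeError("cannot receive before sending")
        self.awaiting_reply = False
        return "reply"

sock = ToyReqSocket()
sock.send("test_0")
sock.recv()          # completes the cycle, so the next send is legal again
sock.send("test_1")
```

<p>Calling <code>send</code> twice in a row on this toy class raises, just like the loop of 5 consecutive <code>SendFrame</code> calls in the question.</p>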
| 1 | 2016-10-05T08:13:44Z | [
"c#",
"python",
"zeromq",
"pyzmq",
"netmq"
]
|
Wrapping namespaced custom types defined only in headers with SWIG | 39,860,675 | <p>How can I wrap this content using SWIG?</p>
<p>usecase5.h</p>
<pre><code>#ifndef __USECASE5_H__
#define __USECASE5_H__
#include <math.h>
namespace foo_namespace {
struct vec2
{
int x, y;
vec2() {}
explicit vec2( int a, int b )
{
x = a;
y = b;
}
vec2 & operator =( vec2 const & v ) { x = v.x; y = v.y; return *this; }
vec2 & operator/=( vec2 const & v ) { x /= v.x; y /= v.y; return *this; }
vec2 xx() const { return vec2(x,x); }
}
}
#endif
</code></pre>
<p>usecase6.h</p>
<pre><code>#ifndef __USECASE6_H__
#define __USECASE6_H__
#include "usecase5.h"
namespace foo_namespace {
vec2 usecase6_f1(const vec2 & x);
}
#endif
</code></pre>
<p>usecase6.cpp</p>
<pre><code>namespace foo_namespace {
vec2 usecase6_f1(const vec2 & x)
{
x = vec2();
}
}
</code></pre>
<p>example.i</p>
<pre><code>// GENERATED BY gen_swig.py at 2016-10-04 21:46
%module example
%{
//#include "usecase1.h"
//#include "usecase2.h"
//#include "usecase3.h"
//#include "usecase4.h"
#include "usecase5.h"
#include "usecase6.h"
using namespace foo_namespace;
%}
//%include "usecase1.h"
//%include "usecase2.h"
//%include "usecase3.h"
//%include "usecase4.h"
%include "usecase5.h"
%include "usecase6.h"
</code></pre>
<p>After trying that naive example.i with <code>swig -python -c++ example.i</code> I'll get:</p>
<pre><code>usecase5.h(19) : Warning 362: operator= ignored
usecase5.h(24) : Error: Syntax error in input(3).
</code></pre>
<p>So, how could I wrap this little dummy c++ example?</p>
| 0 | 2016-10-04T19:52:27Z | 39,862,458 | <p>It seems this wasn't an issue related to SWIG but to the C++ side; to fix it I just needed to fix the C++ errors (the missing semicolon after the struct definition, and the missing include and return in usecase6.cpp):</p>
<p>usecase5.h</p>
<pre><code>#ifndef __USECASE5_H__
#define __USECASE5_H__
#include <math.h>
namespace foo_namespace {
struct vec2
{
int x, y;
vec2() {}
explicit vec2( int a, int b )
{
x = a;
y = b;
}
vec2 & operator =( vec2 const & v ) { x = v.x; y = v.y; return *this; }
vec2 & operator/=( vec2 const & v ) { x /= v.x; y /= v.y; return *this; }
vec2 xx() const { return vec2(x,x); }
};
}
#endif
</code></pre>
<p>usecase6.cpp</p>
<pre><code>#include "usecase5.h"
namespace foo_namespace {
vec2 usecase6_f1(const vec2 & x)
{
return x;
}
}
</code></pre>
<p>After that, the wrapper builds successfully and works properly on the Python side.</p>
| 0 | 2016-10-04T22:02:21Z | [
"python",
"c++",
"swig"
]
|
if elif else blocks evaluate for every case | 39,860,692 | <p>I am trying to append an iterator to a list but my code below evaluates for every case.</p>
<pre><code>Less7=Head7=Over7=[]
i=0
for i in range(0,10):
if i<7:
Less7.append(i)
elif i==7:
Head7=i
else:
Over7.append(i)
</code></pre>
<p>The result I am getting are:
Head7 is an int value of 7
Less7 and Over7 are lists - [0,1,2,3,4,5,6,7,8,9]</p>
<p>My desired results are:</p>
<pre><code>Less7=[0,1,2,3,4,5,6]
Head7=[7]
Over7=[8,9]
</code></pre>
<p>I'm sure it's basic, could you point me in the right direction?
My thought is that it has to do with the datatype.
When I step through the code, even <code>Head7</code> evaluates <code>[0,1,2,3,4,5,6]</code> but when <code>i=7</code> then it correctly assigns the value, but I want it in a list.</p>
| 1 | 2016-10-04T19:53:21Z | 39,860,771 | <p>You need to create <em>three lists</em>, one for each possible outcome:</p>
<pre><code>less_than_7, is_7, greater_than_7 = [], [], []
for i in range(0, 10):
if i < 7:
less_than_7.append(i)
elif i > 7:
greater_than_7.append(i)
else:
is_7.append(i)
</code></pre>
<p><em><code>Less7=Head7=Over7=[]</code></em> creates one list that is referenced by all three names, so your output is identical because you are appending to the same list (in the first and last cases, that is); <em><code>Head7=i</code></em> rebinds <em><code>Head7</code></em> to the int <code>7</code> rather than appending to the shared list.</p>
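<p>The aliasing is easy to verify with a minimal sketch:</p>

```python
Less7 = Head7 = Over7 = []
print(Less7 is Head7 is Over7)  # True: one list, three names

Less7.append(1)
print(Over7)  # [1], because it is the same object

# Independent lists require three separate literals:
Less7, Head7, Over7 = [], [], []
print(Less7 is Over7)  # False
```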
| 0 | 2016-10-04T19:58:37Z | [
"python",
"python-3.5.2"
]
|
Insert into table without all values | 39,860,694 | <p>Specifically, how would I insert into sqlite3 database and python without using all the values.</p>
<p>If my table is setup like this:</p>
<pre><code>People:
f_name | l_name | age | favorite_color
</code></pre>
<p>When I create a new row, how would I be able to only put in a few values like this:</p>
<pre><code>c.execute("INSERT INTO People VALUES (?, ?)", (f_name, l_name))
</code></pre>
<p>Right now it would give an error for only filling two of the four columns.</p>
| -1 | 2016-10-04T19:53:36Z | 39,860,715 | <p>You'll have to list the columns explicitly:</p>
<p><code>c.execute("INSERT INTO People (f_name, l_name) VALUES (?, ?)", (f_name, l_name))</code></p>
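<p>A self-contained sketch with an in-memory database (the table layout is assumed from the question):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE People (f_name TEXT, l_name TEXT, age INTEGER, favorite_color TEXT)")

# Only the named columns are filled; age and favorite_color default to NULL.
c.execute("INSERT INTO People (f_name, l_name) VALUES (?, ?)", ('Ada', 'Lovelace'))
conn.commit()

print(c.execute("SELECT * FROM People").fetchone())  # ('Ada', 'Lovelace', None, None)
```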
| 1 | 2016-10-04T19:55:21Z | [
"python",
"sqlite3"
]
|
Insert into table without all values | 39,860,694 | <p>Specifically, how would I insert into sqlite3 database and python without using all the values.</p>
<p>If my table is setup like this:</p>
<pre><code>People:
f_name | l_name | age | favorite_color
</code></pre>
<p>When I create a new row, how would I be able to only put in a few values like this:</p>
<pre><code>c.execute("INSERT INTO People VALUES (?, ?)", (f_name, l_name))
</code></pre>
<p>Right now it would give an error for only filling two of the four columns.</p>
| -1 | 2016-10-04T19:53:36Z | 39,860,904 | <p>1) INSERT statement doesn't automatically know what value goes where if you omit some parameters. It is generally good practice to explicitly mentioning the names of the columns. e.g: INSERT INTO People (f_name, l_name)</p>
<p>2) If those columns aren't nullable then you can't omit them at all!</p>
| 0 | 2016-10-04T20:07:24Z | [
"python",
"sqlite3"
]
|
Google API Client: Container Registry API Python | 39,860,726 | <p>I want to get a list of images in the Google Container Engine using Python, and eventually, start an instance of one of them. I know there's a <a href="https://cloud.google.com/sdk/gcloud/reference/beta/container/images/list" rel="nofollow">gcloud command to list images</a>, but is this possible to do using the googleapiclient?</p>
<p>I imagine it would be something like this:</p>
<pre><code>from googleapiclient.discovery import build
gce_service = build('container', 'v1')
# Now what?
</code></pre>
| 0 | 2016-10-04T19:56:09Z | 39,883,430 | <p>For image listing,</p>
<p>Google Container Registry is a Docker Registry API so you probably won't be able to list images through googleapiclient.</p>
<p>Something that uses the Docker Registry API like <a href="https://pypi.python.org/pypi/docker-registry-client/1.0" rel="nofollow">https://pypi.python.org/pypi/docker-registry-client/1.0</a> may be worth a try but I have not tried it.</p>
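<p>A rough sketch of what talking to the Docker Registry HTTP API v2 could look like with only the standard library; the <code>tags/list</code> endpoint is part of the registry v2 spec, but the <code>token</code> handling here is an assumption (a real call against gcr.io needs credentials, e.g. from gcloud), and <code>list_tags</code> is a hypothetical helper, not an existing client:</p>

```python
import json
import urllib.request

def tags_url(registry, repository):
    # Docker Registry HTTP API v2 endpoint for listing a repository's tags
    return 'https://{}/v2/{}/tags/list'.format(registry, repository)

def list_tags(registry, repository, token=None):
    """Sketch only: 'token' is a placeholder for whatever auth the
    registry requires; this performs a network call when invoked."""
    req = urllib.request.Request(tags_url(registry, repository))
    if token:
        req.add_header('Authorization', 'Bearer ' + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get('tags', [])

# Example (requires network + credentials, so not executed here):
# list_tags('gcr.io', 'my-project/my-image', token=my_token)
```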
| 0 | 2016-10-05T20:51:38Z | [
"python",
"google-api-client",
"google-container-engine",
"google-container-registry"
]
|
Pyspark, initializing spark programmetically : IllegalArgumentException: Missing application resource | 39,860,907 | <p>When creating a spark context in Python, I get the following error.</p>
<pre><code> app_name="my_app"
master="local[*]"
sc = SparkContext(appName=app_name, master=master)
Exception in thread "main" java.lang.IllegalArgumentException: Missing application resource.
at org.apache.spark.launcher.CommandBuilderUtils.checkArgument(CommandBuilderUtils.java:241)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitArgs(SparkSubmitCommandBuilder.java:160)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitCommand(SparkSubmitCommandBuilder.java:276)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildCommand(SparkSubmitCommandBuilder.java:151)
at org.apache.spark.launcher.Main.main(Main.java:86)
....
pyspark.zip/pyspark/java_gateway.py", line 94, in launch_gateway
raise Exception("Java gateway process exited before sending the driver its port number")
Exception: Java gateway process exited before sending the driver its port number
</code></pre>
<p>The spark launcher seems to be failing somehow.</p>
| 0 | 2016-10-04T20:07:39Z | 40,050,942 | <p>This was happening due to preexisting env variables, that conflicted. I deleted them in the python program and it works smoothly now.</p>
<p>ex:</p>
<pre><code>import os
#check if pyspark env vars are set and then reset to required or delete.
del os.environ['PYSPARK_SUBMIT_ARGS']
</code></pre>
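<p>A slightly safer variant of the same idea (a sketch): <code>pop</code> with a default avoids a <code>KeyError</code> when the variable happens not to be set.</p>

```python
import os

# Does nothing if the variable is absent, deletes it otherwise
os.environ.pop('PYSPARK_SUBMIT_ARGS', None)
```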
| 0 | 2016-10-14T19:47:47Z | [
"python",
"pyspark"
]
|
Call state module from within state module | 39,860,976 | <p>I'm writing a <a href="https://docs.saltstack.com/en/latest/ref/states/writing.html" rel="nofollow">custom state module</a> with the purpose of creating a file in a given location with (partly) configurable content.</p>
<p>Basically I'm looking to shorten the following SLS (simplified for the purpose of this question, more complex in reality)...</p>
<pre><code>/etc/foo/bar:
file.managed:
- contents: Hello World
</code></pre>
<p>...to this:</p>
<pre><code>bar:
my_module.foo:
- message: World
</code></pre>
<p>Since this functionality is basically a more specialized version of the <a href="https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.managed" rel="nofollow"><code>file.managed</code></a> state, it would be useful for me to re-use the <code>file.managed</code> state in my custom state module.</p>
<p><strong>Is there any way to call the <code>file.managed</code> state module from my own state module?</strong></p>
<hr>
<p>What I've tried already (unsuccesfully):</p>
<ol>
<li><p>Importing <code>salt.states.file</code> and calling <code>salt.states.file.managed</code>:</p>
<pre><code>import salt.states.file
def foo(name, message, **kwargs):
return salt.states.file.managed(name='/etc/foo/%s' % name,
contents='Hello %s' % message,
**kwargs)
</code></pre>
<p>This results in an error message:</p>
<pre><code> File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1626, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1492, in wrapper
return f(*args, **kwargs)
File "/var/cache/salt/minion/extmods/states/my_module.py", line 14, in static_pod
contents=yaml.dump(data, default_flow_style=False), **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/states/file.py", line 1499, in managed
mode = __salt__['config.manage_mode'](mode)
NameError: global name '__salt__' is not defined
</code></pre></li>
<li><p>Using <code>__salt__['file.managed']</code>:</p>
<pre><code>def foo(name, message, **kwargs):
return __salt__['file.managed'](name='/etc/foo/%s' % name,
contents='Hello %s' % message,
**kwargs)
</code></pre>
<p>Which also results in a (different) error message (which is unsurprising, as the documentation explicitly states that <code>__salt__</code> only contains <em>execution</em> modules, and not <em>state modules</em>):</p>
<pre><code> File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1626, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1492, in wrapper
return f(*args, **kwargs)
File "/var/cache/salt/minion/extmods/states/my_module.py", line 13, in static_pod
return __salt__['file.managed'](name='/etc/foo/%s' % name,
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 900, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 93, in __getitem__
raise KeyError(key)
KeyError: 'file.managed'
</code></pre></li>
</ol>
| 0 | 2016-10-04T20:13:37Z | 39,866,831 | <p>I found a solution using the <a href="https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.single" rel="nofollow"><code>state.single</code></a> execution module:</p>
<pre><code>def foo(name, message):
result = __salt__['state.single'](fun='file.managed',
name='/etc/foo/%s' % name,
contents='Hello %s' % message,
test=__opts__['test'])
return result.items()[0][1]
</code></pre>
<hr>
<p>The <code>state.single</code> module returns a data structure like the following:</p>
<pre><code>{
'file_|-/etc/foo/bar_|-/etc/foo/bar_|-managed': {
'comment': 'The file /etc/foo/bar is set to be changed',
'name': '/etc/foo/bar',
'start_time': '08:24:45.022518',
'result': None,
'duration': 18.019,
'__run_num__': 0,
'changes': {'diff': '...'}
}
}
</code></pre>
<p>The inner object (everything under the <code>file_|-/etc/foo/bar_|-/etc/foo/bar_|-managed</code>) is what the <code>file.managed</code> module would return by itself, so you can re-use this value as your custom state's return value.</p>
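<p>Note that <code>result.items()[0][1]</code> is a Python 2 idiom (<code>items()</code> returns a list there). A version that works on both Python 2 and 3, shown here against a mocked-up return value (abbreviated, for illustration only), would be:</p>

```python
# Mocked-up shape of what state.single returns (values abbreviated)
result = {
    'file_|-/etc/foo/bar_|-/etc/foo/bar_|-managed': {
        'name': '/etc/foo/bar',
        'result': None,
    }
}

inner = next(iter(result.values()))  # first (only) value, Python 2 and 3
print(inner['name'])  # /etc/foo/bar
```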
| 0 | 2016-10-05T06:27:44Z | [
"python",
"salt-stack"
]
|
Reformat a column to only first 5 characters | 39,860,985 | <p>I am new to Python and I'm struggling with this section. There are about 25 columns in a text file and 50,000+ Rows. For one of the columns, #11 (<strong>ZIP</strong>), this column contains all the zip code values of customers in this format "<strong>07598-XXXX</strong>", I would only like to get the first 5, so "<strong>07598</strong>", I need to do this for the entire column but I'm confused based on my current logic how to write it.
So far my code is able to delete rows that contains certain strings and I am also using the '|' delimiter to format it nicely as a CSV.</p>
<p>State | ZIP(#11) | Column 12| .... </p>
<hr>
<p>NY | 60169-8547 | 98</p>
<p>NY | 60169-8973 | 58</p>
<p>NY | 11219-4598 | 25</p>
<p>NY | 11219-8475 | 12</p>
<p>NY | 20036-4879 | 56</p>
<p>How can I iterate through the ZIP column and just show the first 5 characters?
Thanks for the help!</p>
<pre><code>import csv
my_file_name = "NVG.txt"
cleaned_file = "cleanNVG.csv"
remove_words = ['INAC-EIM','-INAC','TO-INAC','TO_INAC','SHIP_TO-inac','SHIP_TOINAC']
with open(my_file_name, 'r', newline='') as infile, open(cleaned_file, 'w',newline='') as outfile:
writer = csv.writer(outfile)
for line in csv.reader(infile, delimiter='|'):
if not any(remove_word in element for element in line for remove_word in remove_words):
writer.writerow(line)
</code></pre>
| 3 | 2016-10-04T20:14:20Z | 39,861,016 | <pre><code>'{:.5}'.format(zip_)
</code></pre>
<p>where <code>zip_</code> is the string containing the zip code. More on <code>format</code> here: <a href="https://docs.python.org/2/library/string.html#format-string-syntax" rel="nofollow">https://docs.python.org/2/library/string.html#format-string-syntax</a></p>
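<p>For the zip codes in the question this truncates to the leading five characters, the same result as slicing:</p>

```python
zip_ = '07598-1234'
print('{:.5}'.format(zip_))  # 07598  (precision truncates strings)
print(zip_[:5])              # 07598, equivalent for plain truncation
```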
| 3 | 2016-10-04T20:16:22Z | [
"python",
"python-3.x"
]
|
Reformat a column to only first 5 characters | 39,860,985 | <p>I am new to Python and I'm struggling with this section. There are about 25 columns in a text file and 50,000+ Rows. For one of the columns, #11 (<strong>ZIP</strong>), this column contains all the zip code values of customers in this format "<strong>07598-XXXX</strong>", I would only like to get the first 5, so "<strong>07598</strong>", I need to do this for the entire column but I'm confused based on my current logic how to write it.
So far my code is able to delete rows that contains certain strings and I am also using the '|' delimiter to format it nicely as a CSV.</p>
<p>State | ZIP(#11) | Column 12| .... </p>
<hr>
<p>NY | 60169-8547 | 98</p>
<p>NY | 60169-8973 | 58</p>
<p>NY | 11219-4598 | 25</p>
<p>NY | 11219-8475 | 12</p>
<p>NY | 20036-4879 | 56</p>
<p>How can I iterate through the ZIP column and just show the first 5 characters?
Thanks for the help!</p>
<pre><code>import csv
my_file_name = "NVG.txt"
cleaned_file = "cleanNVG.csv"
remove_words = ['INAC-EIM','-INAC','TO-INAC','TO_INAC','SHIP_TO-inac','SHIP_TOINAC']
with open(my_file_name, 'r', newline='') as infile, open(cleaned_file, 'w',newline='') as outfile:
writer = csv.writer(outfile)
for line in csv.reader(infile, delimiter='|'):
if not any(remove_word in element for element in line for remove_word in remove_words):
writer.writerow(line)
</code></pre>
| 3 | 2016-10-04T20:14:20Z | 39,861,071 | <p>Process title line separately, then read row by row like you do, just modify second <code>line</code> column by truncating to 5 characters.</p>
<pre><code>import csv
my_file_name = "NVG.txt"
cleaned_file = "cleanNVG.csv"
remove_words = ['INAC-EIM','-INAC','TO-INAC','TO_INAC','SHIP_TO-inac','SHIP_TOINAC']
with open(my_file_name, 'r', newline='') as infile, open(cleaned_file, 'w',newline='') as outfile:
writer = csv.writer(outfile)
cr = csv.reader(infile, delimiter='|')
# iterate over title line and write it as-is
writer.writerow(next(cr))
for line in cr:
if not any(remove_word in element for element in line for remove_word in remove_words):
line[1] = line[1][:5] # truncate
writer.writerow(line)
</code></pre>
<p>alternately, you could use <code>line[1] = line[1].split("-")[0]</code> which would keep everything on the left of the dash character.</p>
<p>Note the special processing for the title line: <code>cr</code> is an iterator. I just consume it manually before the <code>for</code> loop to perform a pass-through processing.</p>
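<p>The pass-through of the title line can be demonstrated on its own with an in-memory file (a sketch with made-up rows):</p>

```python
import csv
import io

infile = io.StringIO('State|ZIP|Col12\nNY|60169-8547|98\n')
cr = csv.reader(infile, delimiter='|')

header = next(cr)          # consumes only the title line
print(header)              # ['State', 'ZIP', 'Col12']

for line in cr:            # iteration resumes at the first data row
    line[1] = line[1][:5]  # truncate the ZIP column
    print(line)            # ['NY', '60169', '98']
```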
| 2 | 2016-10-04T20:19:49Z | [
"python",
"python-3.x"
]
|
Reformat a column to only first 5 characters | 39,860,985 | <p>I am new to Python and I'm struggling with this section. There are about 25 columns in a text file and 50,000+ Rows. For one of the columns, #11 (<strong>ZIP</strong>), this column contains all the zip code values of customers in this format "<strong>07598-XXXX</strong>", I would only like to get the first 5, so "<strong>07598</strong>", I need to do this for the entire column but I'm confused based on my current logic how to write it.
So far my code is able to delete rows that contains certain strings and I am also using the '|' delimiter to format it nicely as a CSV.</p>
<p>State | ZIP(#11) | Column 12| .... </p>
<hr>
<p>NY | 60169-8547 | 98</p>
<p>NY | 60169-8973 | 58</p>
<p>NY | 11219-4598 | 25</p>
<p>NY | 11219-8475 | 12</p>
<p>NY | 20036-4879 | 56</p>
<p>How can I iterate through the ZIP column and just show the first 5 characters?
Thanks for the help!</p>
<pre><code>import csv
my_file_name = "NVG.txt"
cleaned_file = "cleanNVG.csv"
remove_words = ['INAC-EIM','-INAC','TO-INAC','TO_INAC','SHIP_TO-inac','SHIP_TOINAC']
with open(my_file_name, 'r', newline='') as infile, open(cleaned_file, 'w',newline='') as outfile:
writer = csv.writer(outfile)
for line in csv.reader(infile, delimiter='|'):
if not any(remove_word in element for element in line for remove_word in remove_words):
writer.writerow(line)
</code></pre>
| 3 | 2016-10-04T20:14:20Z | 39,861,278 | <p>To get the first 5 <em>characters</em> of a string, use <code>str[:5]</code></p>
<p>in your case:</p>
<pre><code>with open(my_file_name, 'r', newline='') as infile, open(cleaned_file, 'w',newline='') as outfile:
writer = csv.writer(outfile)
for line in csv.reader(infile, delimiter='|'):
if not any(remove_word in element for element in line for remove_word in remove_words):
            line[1] = line[1][:5]
writer.writerow(line)
</code></pre>
<p><code>line[1] = line[1][:5]</code> will set the 2nd column in your file to its first 5 characters.</p>
| 1 | 2016-10-04T20:34:17Z | [
"python",
"python-3.x"
]
|
Nested structured array field access with numpy | 39,861,047 | <p>I am working on parsing Matlab structured arrays in Python. For simplicity, the data structure ultimately consists of 3 fields, say header, body, trailer. Creating some data in Matlab for example:</p>
<pre><code>header_data = {100, 100, 100};
body_data = {1234, 100, 4321};
trailer_data = {1001, 1001, 1001};
data = struct('header', header_data, 'body', body_data, 'trailer', trailer_data);
</code></pre>
<p>yields a 1x3 struct array.</p>
<p>This data is then read in Python as follows:</p>
<pre><code>import scipy.io as sio
import numpy as np
matlab_data = sio.loadmat('data.mat', squeeze_me=True)
data = matlab_data['data']
</code></pre>
<p>This makes <code>data</code> a 1-dimensional <code>numpy.ndarray</code> of size 3 with <code>dtype=dtype([('header', 'O'), ('body', 'O'), ('trailer', 'O')])</code>, which I can happily iterate through using <code>numpy.nditer</code> and extract and parse the data from each struct.</p>
<p>The problem I'm trying to overcome is that unfortunately (and out of my control) in some of the files I need to parse, the above defined struct arrays are themselves a member of another struct array with a field <code>msg</code>. Continuing with my example in Matlab:</p>
<pre><code>messages = struct('msg', {data(1), data(2), data(3)});
</code></pre>
<p>When this is loaded with <code>scipy.loadmat</code> in Python, it results in a 1-dimensional <code>numpy.ndarray</code> of size 3 with <code>dtype=dtype([('msg', 'O')])</code>. In order to reuse the same function for parsing the data fields, I'd need to have logic to detect the <code>msg</code> field, if it exists, and then extract each <code>numpy.void</code> from there before calling the function to parse the individual header, body and trailer fields.</p>
<p>In Matlab, this is easily overcome because the original 1x3 struct array with three fields can be extracted from the 1x3 struct array with the single <code>msg</code> field by doing: <code>[messages.msg]</code>, which yields a 1x3 struct array with the header, body and trailer fields. If I try to translate this to numpy, the following command gives me a view of the original <code>numpy.ndarray</code>, which is not a structure (<code>dtype=dtype('O')</code>).</p>
<p>I'm trying to figure out if there an analogous way with <code>numpy</code> to recover the struct array with three fields from the one with the single <code>msg</code> field, as I can do in Matlab, or if I truly need to iterate over each value and manually extract it from the <code>msg</code> field before using a common parsing function. Again, the format of the Matlab input files is out of my control and I cannot change them; and my example here is only trivial compared to the number of nested fields I need to extract from the Matlab data.</p>
| 0 | 2016-10-04T20:18:26Z | 39,861,747 | <p>Trying to recreate your file with Octave (save with -v7), I get, in an Ipython session:</p>
<pre><code>In [190]: data = io.loadmat('test.mat')
In [191]: data
Out[191]:
{'__globals__': [],
'__header__': b'MATLAB 5.0 MAT-file, written by Octave 4.0.0, 2016-10-04 20:54:53 UTC',
'__version__': '1.0',
'body_data': array([[array([[ 1234.]]), array([[ 100.]]), array([[ 4321.]])]], dtype=object),
'data': array([[([[100.0]], [[1234.0]], [[1001.0]]),
([[100.0]], [[100.0]], [[1001.0]]),
([[100.0]], [[4321.0]], [[1001.0]])]],
dtype=[('header', 'O'), ('body', 'O'), ('trailer', 'O')]),
'header_data': array([[array([[ 100.]]), array([[ 100.]]), array([[ 100.]])]], dtype=object),
'messages': array([[([[(array([[ 100.]]), array([[ 1234.]]), array([[ 1001.]]))]],),
([[(array([[ 100.]]), array([[ 100.]]), array([[ 1001.]]))]],),
([[(array([[ 100.]]), array([[ 4321.]]), array([[ 1001.]]))]],)]],
dtype=[('msg', 'O')]),
'trailer_data': array([[array([[ 1001.]]), array([[ 1001.]]), array([[ 1001.]])]], dtype=object)}
</code></pre>
<p><code>body_data</code>, <code>header_data</code>, <code>trailer_data</code> are Octave cells, which in <code>numpy</code> are 2d objects arrays containing 2d elements</p>
<pre><code>In [194]: data['trailer_data'][0,0]
Out[194]: array([[ 1001.]])
In [195]: data['trailer_data'][0,0][0,0]
Out[195]: 1001.0
</code></pre>
<p><code>data</code> is a structured array (1,3) with 3 fields;</p>
<pre><code>In [198]: data['data']['header'][0,0][0,0]
Out[198]: 100.0
</code></pre>
<p><code>messages</code> is (1,3) with 1 field, with further nesting as with <code>data</code>.</p>
<pre><code>In [208]: data['messages']['msg'][0,0]['header'][0,0][0,0]
Out[208]: 100.0
</code></pre>
<p>(This may be a repetition of what you describe, but I just want to clear about the data structure).</p>
<p>================</p>
<p>Playing around, I found that, can I strip out the <code>(1,3)</code> shape of <code>msg</code>, with indexing and concatenate:</p>
<pre><code>In [241]: np.concatenate(data['messages']['msg'][0])
Out[241]:
array([[([[100.0]], [[1234.0]], [[1001.0]])],
[([[100.0]], [[100.0]], [[1001.0]])],
[([[100.0]], [[4321.0]], [[1001.0]])]],
dtype=[('header', 'O'), ('body', 'O'), ('trailer', 'O')])
In [242]: data['data']
Out[242]:
array([[([[100.0]], [[1234.0]], [[1001.0]]),
([[100.0]], [[100.0]], [[1001.0]]),
([[100.0]], [[4321.0]], [[1001.0]])]],
dtype=[('header', 'O'), ('body', 'O'), ('trailer', 'O')])
</code></pre>
<p>this looks the same as <code>data</code>. </p>
<p>For some reason I have to reduce it to a <code>(3,)</code> array before the concatenate does what I want. I haven't wrapped my mind around those details.</p>
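<p>To see the same recovery pattern outside of <code>loadmat</code>, here is a small self-contained sketch. The nesting is hand-built with plain numpy, so the elements are <code>(1,)</code> struct arrays rather than the <code>(1,1)</code> arrays loadmat produces:</p>

```python
import numpy as np

inner_dt = np.dtype([('header', 'O'), ('body', 'O'), ('trailer', 'O')])
inner = np.array([(100.0, 1234.0, 1001.0),
                  (100.0, 100.0, 1001.0),
                  (100.0, 4321.0, 1001.0)], dtype=inner_dt)

# Hand-build the extra one-field 'msg' layer that loadmat adds.
msg = np.empty(3, dtype=[('msg', 'O')])
for i in range(3):
    msg['msg'][i] = inner[i:i + 1]

# Recover a (3,) struct array with the original three fields.
recovered = np.concatenate(msg['msg'])
print(recovered.dtype.names)  # ('header', 'body', 'trailer')
```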
| 1 | 2016-10-04T21:06:20Z | [
"python",
"numpy",
"scipy"
]
|
How to DISABLE Jupyter notebook matplotlib plot inline? | 39,861,106 | <p>Well, I know I can use <code>%matplotlib inline</code> to plot inline.</p>
<p>However, how do I disable it?</p>
<p>Sometimes I just want to zoom in on a figure that I plotted, which I can't do with an inline figure.</p>
| 0 | 2016-10-04T20:22:27Z | 39,861,256 | <p>Use <code>%matplotlib notebook</code> to change to a zoom-able display.</p>
| 0 | 2016-10-04T20:32:32Z | [
"python",
"matplotlib",
"ipython",
"jupyter"
]
|
Python string has no control characters | 39,861,200 | <p>I have a proxy string:</p>
<pre><code>proxy = '127.0.0.1:8080'
</code></pre>
<p>I need to check whether it is a real proxy string:</p>
<pre><code>def is_proxy(proxy):
return not any(c.isalpha() for c in proxy)
</code></pre>
<p>to skip string like:</p>
<pre><code>fail_proxy = 'This is proxy: 127.0.0.1:8080'
</code></pre>
<p>but sometimes I have something like:</p>
<pre><code>fail_proxy2 = '127.0.0.1:8080\r'
is_proxy(fail_proxy2) is True
True
</code></pre>
<p>I need this to return <code>False</code>.</p>
 | -1 | 2016-10-04T20:28:34Z | 39,861,459 | <p>Try the following specific approach using the <code>re</code> (regexp) module:</p>
<pre><code>import re
def is_proxy(proxy):
    return re.fullmatch(r'^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}$', proxy) is not None
proxy1 = '127.0.0.1:8080'
proxy2 = '127.0.0.1:8080\r'
print(is_proxy(proxy1)) # True
print(is_proxy(proxy2)) # False
</code></pre>
<p>As for the port number (<code>\d{1,5}</code>): the range <strong>1-65535</strong> is available for port numbers.</p>
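<p>An alternative sketch without regular expressions, using the stdlib <code>ipaddress</code> module to validate the host part (IPv4 only as written, since the port is split off at the first colon):</p>

```python
import ipaddress

def is_proxy(proxy):
    host, sep, port = proxy.partition(':')
    if not sep or not port.isdigit():   # rejects '8080\r', empty ports, etc.
        return False
    try:
        ipaddress.ip_address(host)      # rejects 'This is proxy' and bad octets
    except ValueError:
        return False
    return 1 <= int(port) <= 65535

print(is_proxy('127.0.0.1:8080'))                 # True
print(is_proxy('127.0.0.1:8080\r'))               # False
print(is_proxy('This is proxy: 127.0.0.1:8080'))  # False
```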
| 0 | 2016-10-04T20:46:22Z | [
"python",
"string",
"control-characters"
]
|
Best to use join or append? | 39,861,226 | <p>Currently I have this code:</p>
<pre><code>with open(w_file, 'r') as file_r:
for line in file_r:
if len(line) > 0:
spLine = line.split()
if(spLine[0] == operandID and spLine[1] == bitID
and spLine[3] == sInstID and spLine[4] == opcodeID):
spLine[-2]='1'
line = ' '.join(spLine) # I need to add a new line into
# "line" after join all the spLine
lines.append(line)
print line
with open(w_file, 'w') as file_w:
for line in lines:
file_w.write(line)
</code></pre>
<p>The output:</p>
<pre class="lang-none prettyprint-override"><code>1 60 14039 470 13 0 28
1 60 14039 470 13 0 28
0 60 14039 470 13 1 281 60 14039 470 13 0 28 # I want to separate these two
1 60 14039 470 13 0 28 # lines, this wrong situation
# only happens when I found
# the target line which
# satisfies the if statement.
</code></pre>
| -2 | 2016-10-04T20:30:27Z | 39,861,308 | <p>Just add a <code>+ "\n"</code> to the end, like this.</p>
<pre><code>line = ' '.join(spLine) + "\n"
</code></pre>
<p>This will add a new line after the joining. </p>
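<p>For instance, with a made-up <code>spLine</code>:</p>

```python
spLine = ['1', '60', '14039', '470', '13', '0', '28']
line = ' '.join(spLine) + "\n"
print(repr(line))  # '1 60 14039 470 13 0 28\n'
```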
| 1 | 2016-10-04T20:36:18Z | [
"python",
"join",
"append"
]
|
Text compressor doesn't compress the intended way | 39,861,280 | <p>For a school assignment I have to write a Python program to compress a text and write a new file with the compressed text, conserving the original. For example, the text "heeeeeeellllooo" becomes "he7l4o3". So each character that is repeated consecutively has to be replaced by the character directly followed by the number of repetitions, except if the number of repetitions is 1, of course. However, instead of getting the compressed text "he7l4o3", I get "hee2ll2oo2".</p>
<p>What am I doing wrong?</p>
<p>The code:</p>
<pre><code>inp = open("hello.txt", "r")
out = open("compr.txt", "w")
kar=inp.read(1)
prevkar=''
def iterations(a,b):
while a==b:
re=1
re+=1
b = a
a = inp.read(1)
out.write(b + str(re))
return
def no_iteration(a):
out.write(a)
return
while kar:
if kar==prevkar:
iterations(kar, prevkar)
prevkar=kar
kar=inp.read(1)
else:
no_iteration(kar)
prevkar=kar
kar=inp.read(1)
inp.close()
out.close()
</code></pre>
| 0 | 2016-10-04T20:34:19Z | 39,861,622 | <p>Here's a correct solution:</p>
<pre><code>inp = open("hello.txt", "r")
out = open("compr.txt", "w")
kar=inp.read(1)
prevkar=''
def iterations(a,b):
re = 1
while a==b:
re+=1
b = a
a = inp.read(1)
out.write(str(re))
return a
def no_iteration(a):
out.write(a)
return
while kar:
if kar==prevkar:
prevkar=kar
kar = iterations(kar, prevkar)
else:
no_iteration(kar)
prevkar=kar
kar=inp.read(1)
inp.close()
out.close()
</code></pre>
<p>The wrong counts appeared because <code>re</code> was reset to 1 on every pass through the <code>while</code> loop. Also, <code>iterations(a, b)</code> read one extra character with <code>inp.read(1)</code> but did nothing with it, making the <code>while kar:</code> loop skip one letter after each call to <code>iterations(a, b)</code>.</p>
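<p>As an aside, the run-length counting itself can be done with <code>itertools.groupby</code> from the standard library. This is a generic sketch of the same compression scheme (a string-in, string-out helper, not a drop-in replacement for the file-based script above):</p>

```python
from itertools import groupby

def compress(text):
    # Run-length encode: emit each character, then its run length if > 1.
    parts = []
    for char, run in groupby(text):
        count = len(list(run))
        parts.append(char + (str(count) if count > 1 else ''))
    return ''.join(parts)

print(compress('h' + 'e' * 7 + 'l' * 4 + 'o' * 3))  # he7l4o3
```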
| 0 | 2016-10-04T20:58:09Z | [
"python",
"python-2.7",
"compression"
]
|
How do I create multiple instances of python application server? | 39,861,345 | <p>How do I create multiple instances of python application server?</p>
<p>I created an application server in Python using <code>httpserver</code>. I am not using any Python frameworks. Now I want to create multiple instances of the server and use a load balancer on top of it. How can I create multiple instances of this application server? Are there any tutorials on how to create multiple instances?</p>
<p>I was going through Nginx. Can nginx create multiple instances? Are there any tutorials?</p>
<p>Details: I am working on a Windows machine. It is a Python application server created using <code>BaseHTTPServer</code>. I am not using any framework like Tornado or Django.</p>
| 0 | 2016-10-04T20:38:45Z | 39,863,659 | <p>You could bind different ports for each instance of your server. </p>
<p>Make your script initialize each instance binding a different port (e.g. port 3000, 3001, 3002, ...) and configure nginx to spread the load across those ports.</p>
<p>From <a href="https://docs.python.org/2/library/basehttpserver.html" rel="nofollow">https://docs.python.org/2/library/basehttpserver.html</a>, the example given is:</p>
<pre><code>def run(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
httpd.serve_forever()
</code></pre>
<p>So you might create a class with a function like this as a method which could take the server port as an argument, in the example above the port is hard coded as 8000. </p>
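<p>As a hedged sketch of that idea in Python 3 (where <code>BaseHTTPServer</code> became <code>http.server</code>), each instance can bind its own port and serve on a background thread; port <code>0</code> asks the OS for a free port:</p>

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_instance(port, handler_class=BaseHTTPRequestHandler):
    # Bind one server instance and serve it on a background thread.
    server = HTTPServer(('localhost', port), handler_class)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server

# Three independent instances; nginx would then be configured
# to balance across the ports they actually got.
servers = [start_instance(0) for _ in range(3)]
print(sorted(s.server_address[1] for s in servers))
```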
| 0 | 2016-10-05T00:23:42Z | [
"python",
"python-2.7",
"nginx",
"application-server"
]
|
How to decode binary file with " for index, line in enumerate(file)"? | 39,861,431 | <p>I am opening an extremely large binary file in Python 3.5 in <code>file1.py</code>:</p>
<pre><code>with open(pathname, 'rb') as file:
for i, line in enumerate(file):
# parsing here
</code></pre>
<p>However, I naturally get an error because I am reading the file in binary mode and then creating a list of bytes. Then with a for loop, you are comparing string to bytes and here the code fails. </p>
<p>If I was reading in individual lines, I would do this:</p>
<pre><code>with open(fname, 'rb') as f:
lines = [x.decode('utf8').strip() for x in f.readlines()]
</code></pre>
<p>However, I am using <code>for index, lines in enumerate(file):</code>. What is the correct approach in this case? Do I decode the next objects? </p>
<p>Here is the actual code I am running: </p>
<pre><code>with open(bam_path, 'rb') as file:
for i, line in enumerate(file):
line_data=pd.DataFrame({k.strip():v.strip()
for k,_,v in (e.partition(':')
for e in line.split('\t'))}, index=[i])
</code></pre>
<p>And here is the error: </p>
<pre><code>Traceback (most recent call last):
File "file1.py", line 18, in <module>
for e in line.split('\t'))}, index=[i])
TypeError: a bytes-like object is required, not 'str'
</code></pre>
| -2 | 2016-10-04T20:44:26Z | 39,861,533 | <p>You could feed a generator with the decoded lines to <code>enumerate</code>:</p>
<pre><code>for i, line in enumerate(l.decode(errors='ignore') for l in f):
</code></pre>
<p>Which does the trick of yielding every line in <code>f</code> after decoding it. I've added <code>errors='ignore'</code> due to the fact that opening with <code>r</code> failed with an unknown start byte.</p>
<p>As an aside, you could just replace all string literals with byte literals when operating on <code>bytes</code>, i.e. <code>partition(b':')</code>, <code>split(b'\t')</code>, and do your work using <code>bytes</code> (pretty sure pandas works fine with them).</p>
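<p>For illustration, a self-contained sketch of that bytes-literal variant, using <code>io.BytesIO</code> with made-up tab-separated sample data in place of the real file:</p>

```python
import io

raw = io.BytesIO(b'NM:0\tAS:44\nNM:1\tAS:40\n')  # stand-in for open(path, 'rb')

rows = []
for i, line in enumerate(raw):
    fields = line.rstrip(b'\r\n').split(b'\t')
    rows.append({k.strip(): v.strip()
                 for k, _, v in (f.partition(b':') for f in fields)})

print(rows)  # [{b'NM': b'0', b'AS': b'44'}, {b'NM': b'1', b'AS': b'40'}]
```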
| 2 | 2016-10-04T20:51:24Z | [
"python",
"python-3.x",
"binary",
"decode",
"enumerate"
]
|
Recursively modify a dictionary | 39,861,433 | <p>I've created a class that should transform a nested list into a dictionary. The following is my input:</p>
<pre><code>['function:and',
['variable:X', 'function:>=', 'value:13'],
['variable:Y', 'function:==', 'variable:W']]
</code></pre>
<p>And the output should be a dictionary in the following form:</p>
<pre><code>{
"function": "and",
"args": [
{
"function": ">=",
"args": [
{
"variable": "X"
},
{
"value": 13
}
]
},
{
"function": "==",
"args": [
{
"variable": "Y"
},
{
"variable": "W"
}
]
}
]
}
</code></pre>
<p>This is the class that receives the input list and should return the required dictionary.</p>
<pre><code>class Tokenizer(object):
def __init__(self, tree):
self.tree = tree
self.filter = {}
def to_dict(self, triple):
my_dict = {}
try:
first = triple[0]
second = triple[1]
third = triple[2]
except KeyError:
return
if type(second) == str and type(third) == str:
my_dict['function'] = second.split(':')[-1]
my_dict['args'] = [
{first.split(':')[0]: first.split(':')[1]},
{third.split(':')[0]: third.split(':')[1]}]
# case recursive
if type(second) == list:
my_dict['function'] = first.split(':')[-1]
my_dict['args'] = [second, third]
return my_dict
def walk(self, args):
left = self.to_dict(args[0])
right = self.to_dict(args[1])
if isinstance(left, dict):
if 'args' in left.keys():
left = self.walk(left['args'])
if isinstance(right, dict):
if 'args' in right.keys():
right = self.walk(right['args'])
args = [left, right]
return args
def run(self):
self.filter.update(self.to_dict(self.tree))
if 'args' in self.filter.keys():
self.filter['args'] = self.walk(self.filter['args'])
tree = [
'function:and',
['variable:X', 'function:>=', 'value:13'],
['variable:Y', 'function:==', 'variable:W']
]
import pprint
pp = pprint.PrettyPrinter(indent=4)
t = Tokenizer(tree)
t.run()
pp.pprint(t.filter)
</code></pre>
<p>My recursive method <code>walk</code> is not doing what it should, and I'm terrible at recursion, so I can't figure out what I'm doing wrong.</p>
<p>The output I'm getting is:</p>
<pre><code>{ 'args': [[None, None], [None, None]], 'function': 'and'}
</code></pre>
| 0 | 2016-10-04T20:44:43Z | 39,862,952 | <p>For your particular test case you don't need to go into recursion at all. You can comment out your calls:</p>
<pre><code>def walk(self, args):
left = self.to_dict(args[0])
right = self.to_dict(args[1])
#if isinstance(left, dict):
# if 'args' in left.keys():
# left = self.walk(left['args'])
#if isinstance(right, dict):
# if 'args' in right.keys():
# right = self.walk(right['args'])
args = [left, right]
return args
</code></pre>
<p>and get the desired output.
You only need to go into recursion if you allow for nested functions in your input:</p>
<pre><code> ['function:and',
['variable:X', 'function:>=', 'value:13'],
['function:==',
['variable:R', 'function:>=', 'value:1'],
['variable:Z', 'function:==', 'variable:K']
]
]
</code></pre>
<p>then you have to check for a base case, so you go into recursion only if your <code>args</code> key's value contains unprocessed values:</p>
<pre><code>def walk(self, args):
left = self.to_dict(args[0])
right = self.to_dict(args[1])
if isinstance(left, dict):
if 'args' in left.keys() and isinstance(left['args'][0], list):
left = self.walk(left['args'])
if isinstance(right, dict):
if 'args' in right.keys() and isinstance(right['args'][0], list):
right = self.walk(right['args'])
args = [left, right]
return args
</code></pre>
<p>and then you'll get this:</p>
<pre><code>{ 'args': [ { 'args': [{ 'variable': 'X'}, { 'value': '13'}],
'function': '>='},
{ 'args': [ { 'args': [ { 'variable': 'R'},
{ 'value': '1'}],
'function': '>='},
{ 'args': [ { 'variable': 'Z'},
{ 'variable': 'K'}],
'function': '=='}],
'function': '=='}],
'function': 'and'}
</code></pre>
<p>Also, it would be easier if your input list were a regular structure that consistently had the argument fields following the function name field. You could then significantly simplify your <code>to_dict</code> method.</p>
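<p>For illustration, here is one sketch of such a simplified recursive parser. It handles both the prefix form (<code>['function:and', left, right]</code>) and the infix leaf form (<code>[left, 'function:op', right]</code>), and it keeps values as strings, so <code>value:13</code> comes out as <code>'13'</code> rather than <code>13</code>:</p>

```python
def to_dict(node):
    # Leaf: a 'kind:value' string becomes {'kind': 'value'}.
    if isinstance(node, str):
        kind, _, value = node.partition(':')
        return {kind: value}
    # Infix triple: [left, 'function:op', right].
    if len(node) == 3 and isinstance(node[1], str) and node[1].startswith('function:'):
        op = node[1].split(':', 1)[1]
        return {'function': op, 'args': [to_dict(node[0]), to_dict(node[2])]}
    # Prefix form: ['function:op', arg1, arg2, ...].
    op = node[0].split(':', 1)[1]
    return {'function': op, 'args': [to_dict(arg) for arg in node[1:]]}

tree = ['function:and',
        ['variable:X', 'function:>=', 'value:13'],
        ['variable:Y', 'function:==', 'variable:W']]
result = to_dict(tree)
print(result['function'])  # and
```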
| 1 | 2016-10-04T22:51:06Z | [
"python",
"algorithm",
"dictionary",
"recursion"
]
|
How to access items from two lists in circular order? | 39,861,439 | <p>I have 2 lists,</p>
<pre><code>list_a = ['color-1', 'color-2', 'color-3', 'color-4']
list_b = ['car1', 'car2', 'car3', 'car4' ........... 'car1000']
</code></pre>
<p>I need to access the elements in a circular order of <code>list_a</code>:</p>
<pre><code>['color-1']['car1']
['color-2']['car2']
['color-3']['car3']
['color-4']['car4']
['color-1']['car5'] #list_a is starting from color-1 once it reaches end
['color-2']['car6'] #... goes on until end of items in list_b
</code></pre>
<p>I tried this, it doesn't work. Please advise.</p>
<pre><code>start=0
i=0
for car_idx in xrange(start, end):
if i <= len(color_names):
try:
self.design(color_names[i], self.cars[car_idx])
i+=1
except SomeException as exe:
print 'caught an error'
</code></pre>
| 0 | 2016-10-04T20:45:21Z | 39,861,525 | <p>Use the modulo operator <code>%</code> to index into the proper range:</p>
<pre><code>len_a = len(list_a)
len_b = len(list_b)
end = max(len_a, len_b)
for i in range(end):
print(list_a[i % len_a], list_b[i % len_b])
# ... do something else
</code></pre>
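<p>The same idea wrapped in a small helper, using a shortened version of the lists from the question:</p>

```python
def circular_pairs(colors, cars):
    # Pair every car with a color, wrapping around the color list.
    return [(colors[i % len(colors)], car) for i, car in enumerate(cars)]

colors = ['color-1', 'color-2', 'color-3', 'color-4']
cars = ['car%d' % n for n in range(1, 7)]
print(circular_pairs(colors, cars))
```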
| 2 | 2016-10-04T20:50:40Z | [
"python",
"list",
"circular"
]
|
How to access items from two lists in circular order? | 39,861,439 | <p>I have 2 lists,</p>
<pre><code>list_a = ['color-1', 'color-2', 'color-3', 'color-4']
list_b = ['car1', 'car2', 'car3', 'car4' ........... 'car1000']
</code></pre>
<p>I need to access the elements in a circular order of <code>list_a</code>:</p>
<pre><code>['color-1']['car1']
['color-2']['car2']
['color-3']['car3']
['color-4']['car4']
['color-1']['car5'] #list_a is starting from color-1 once it reaches end
['color-2']['car6'] #... goes on until end of items in list_b
</code></pre>
<p>I tried this, it doesn't work. Please advise.</p>
<pre><code>start=0
i=0
for car_idx in xrange(start, end):
if i <= len(color_names):
try:
self.design(color_names[i], self.cars[car_idx])
i+=1
except SomeException as exe:
print 'caught an error'
</code></pre>
| 0 | 2016-10-04T20:45:21Z | 39,861,532 | <p>Use <a href="https://docs.python.org/2/library/itertools.html#itertools.cycle"><code>itertools.cycle</code></a> to make a cyclic iterable out of <code>list_a</code>.
Use <a href="https://docs.python.org/3/library/functions.html#zip"><code>zip</code></a> to pair items from the cyclic iterable with items from <code>list_b</code>. The iterable returned by <code>zip</code> will stop when the shortest of the iterables passed to <code>zip</code> (i.e. <code>list_b</code>) ends.</p>
<pre><code>import itertools as IT
list_a = ['color-1', 'color-2', 'color-3', 'color-4']
list_b = ['car1', 'car2', 'car3', 'car4', 'car5', 'car6', 'car1000']
for a, b in zip(IT.cycle(list_a), list_b):
print(a, b)
</code></pre>
<p>prints</p>
<pre><code>color-1 car1
color-2 car2
color-3 car3
color-4 car4
color-1 car5
color-2 car6
color-3 car1000
</code></pre>
| 6 | 2016-10-04T20:51:22Z | [
"python",
"list",
"circular"
]
|
How to test in Django | 39,861,651 | <p>I'm trying to write my first tests in Django, but even after reading the docs (which explain a very simple test) I still don't know how to do it.</p>
<p>I'm trying to write a test that goes to the "login" URL and performs the login, and after a successful login is redirected to the authorized page.</p>
<pre><code>from unittest import TestCase
from django.test.client import Client
class Test(TestCase):
def testLogin(self):
client = Client()
headers = {'X-OpenAM-Username': 'user', 'X-OpenAM-Password': 'password', 'Content-Type': 'application/json'}
data = {}
response = client.post('/login/', headers=headers, data=data, secure=False)
assert(response.status_code == 200)
</code></pre>
<p>And the test succeeds, but I don't know if that's because loading "/login/" returns 200, or because the test performs the login and gets the 200 after the redirect.</p>
<p>How can I check in the test that the URL redirected to after login is the correct one? Is there a plugin or something that helps with testing? Or where can I find a good tutorial on testing my views and models?</p>
<p>Thanks and regards.</p>
 | 0 | 2016-10-04T21:00:05Z | 39,862,325 | <p>Django has plenty of tools for testing. For this task, you should use a test case class from Django, for example <code>django.test.TestCase</code>.
Then you can use the <code>assertRedirects()</code> method, which checks where you were redirected and with which status code. You can find all the info you need <a href="https://docs.djangoproject.com/en/1.10/topics/testing/tools/" rel="nofollow">here</a>.
I've tried to write the code for your task:</p>
<pre><code>from django.test import TestCase
class Test(TestCase):
def test_login(self):
data = {'X-OpenAM-Username': 'user', 'X-OpenAM-Password': 'password'}
        response = self.client.post('/login/', data=data, secure=False)
        self.assertRedirects(response, '/expected_url/', status_code=302, target_status_code=200)
</code></pre>
<p>Then you can use <code>python3 manage.py test</code> to run all tests.</p>
| 1 | 2016-10-04T21:49:16Z | [
"python",
"django",
"django-testing"
]
|
How to test in Django | 39,861,651 | <p>I'm trying to write my first tests in Django, but even after reading the docs (which explain a very simple test) I still don't know how to do it.</p>
<p>I'm trying to write a test that goes to the "login" URL and performs the login, and after a successful login is redirected to the authorized page.</p>
<pre><code>from unittest import TestCase
from django.test.client import Client
class Test(TestCase):
def testLogin(self):
client = Client()
headers = {'X-OpenAM-Username': 'user', 'X-OpenAM-Password': 'password', 'Content-Type': 'application/json'}
data = {}
response = client.post('/login/', headers=headers, data=data, secure=False)
assert(response.status_code == 200)
</code></pre>
<p>And the test succeeds, but I don't know if that's because loading "/login/" returns 200, or because the test performs the login and gets the 200 after the redirect.</p>
<p>How can I check in the test that the URL redirected to after login is the correct one? Is there a plugin or something that helps with testing? Or where can I find a good tutorial on testing my views and models?</p>
<p>Thanks and regards.</p>
| 0 | 2016-10-04T21:00:05Z | 39,865,780 | <p>To properly test redirects, use the <a href="https://docs.djangoproject.com/en/1.10/topics/testing/tools/#django.test.Client.post" rel="nofollow">follow</a> parameter </p>
<blockquote>
<p>If you set follow to True the client will follow any redirects and a
redirect_chain attribute will be set in the response object containing
tuples of the intermediate urls and status codes.</p>
</blockquote>
<p>Then your code is as simple as:</p>
<pre><code>from django.test import TestCase
from django.test.client import Client

class Test(TestCase):
def test_login(self):
client = Client()
headers = {'X-OpenAM-Username': 'user', 'X-OpenAM-Password': 'password', 'Content-Type': 'application/json'}
data = {}
response = client.post('/login/', headers=headers, data=data, secure=False)
self.assertRedirects(response,'/destination/',302,200)
</code></pre>
<p>Note that it's <code>self.assertRedirects</code> rather than <code>assert</code> or <code>assertRedirects</code>.</p>
<p>Also note that the above test will most likely fail because you are posting an empty dictionary as the form data. Django form views do not redirect when the form is invalid and an empty form will probably be invalid here.</p>
| 1 | 2016-10-05T05:02:06Z | [
"python",
"django",
"django-testing"
]
|
Testing pull-request on other person's package | 39,861,662 | <p>I opened an <a href="https://github.com/mvantellingen/python-zeep/issues/177" rel="nofollow">issue</a> in a package that I need for my job, and now the author is asking me to test a <a href="https://github.com/mvantellingen/python-zeep/pull/205" rel="nofollow">pull request</a>. The problem is... I don't really know what is the preferred way to do that. </p>
<p>The only way I see now is that I fork the repository, download and apply the pull request as a patch and then import the function from that project... surely there must be a better way? I'm using PyCharm on Ubuntu.</p>
| 2 | 2016-10-04T21:00:47Z | 39,861,748 | <p>The GitHub pull request names the source: </p>
<blockquote>
<p>mvantellingen wants to merge 12 commits into <code>master</code> from <code>multiple-msg-parts</code></p>
</blockquote>
<p><code>multiple-msg-parts</code> is just another branch in the same repository. Just clone that repository and check out <a href="https://github.com/mvantellingen/python-zeep/tree/multiple-msg-parts" rel="nofollow">that specific branch</a>.</p>
<p>Other pull requests may have been created from a branch in a different repository; the source repository will then have a <code><username>:<branch></code> form, at which point you'd clone the project from that specific user to get that branch. For example, <a href="https://github.com/mvantellingen/python-zeep/pull/185" rel="nofollow">this pull request</a> is sourced from <code>andrewserong:check-node-get-children</code>, so you'd clone <a href="https://github.com/andrewserong/python-zeep" rel="nofollow"><code>andrewserong/python-zeep</code></a> and switch to the <code>check-node-get-children</code> branch instead.</p>
<p>PyCharm lets you <a href="https://www.jetbrains.com/help/pycharm/2016.2/cloning-a-repository-from-github.html" rel="nofollow">clone directly from GitHub</a>; once cloned use the <em>VCS</em> menu item to switch branches.</p>
| 1 | 2016-10-04T21:06:29Z | [
"python",
"github",
"fork",
"pull-request"
]
|
Testing pull-request on other person's package | 39,861,662 | <p>I opened an <a href="https://github.com/mvantellingen/python-zeep/issues/177" rel="nofollow">issue</a> in a package that I need for my job, and now the author is asking me to test a <a href="https://github.com/mvantellingen/python-zeep/pull/205" rel="nofollow">pull request</a>. The problem is... I don't really know what is the preferred way to do that. </p>
<p>The only way I see now is that I fork the repository, download and apply the pull request as a patch and then import the function from that project... surely there must be a better way? I'm using PyCharm on Ubuntu.</p>
| 2 | 2016-10-04T21:00:47Z | 39,861,833 | <blockquote>
<p>The only way I see now is that I fork the repository, download and apply the pull request as a patch [...]</p>
</blockquote>
<p>As <a href="http://stackoverflow.com/questions/39861662/testing-pull-request-on-other-persons-package#comment67011346_39861662">Martijn Pieters commented</a>: clone the repository and check out the mentioned branch. You can do that without forking it on GitHub and without manually applying the pull request:</p>
<pre><code>git clone git@github.com:mvantellingen/python-zeep.git
git checkout multiple-msg-parts
</code></pre>
<p>or even in a single command:</p>
<pre><code>git clone git@github.com:mvantellingen/python-zeep.git --branch multiple-msg-parts
</code></pre>
<blockquote>
<p>[...] and then import the function from that project</p>
</blockquote>
<p>I guess you won't get around that part if you want to test the pull request's change.</p>
<p>Though, the author didn't ask you <em>to test</em> the pull request (even though they might have meant to do so); they asked you <em>whether</em> you <em>can</em> test it:</p>
<blockquote>
<p>I'm working on a fix in #205. It required some refactoring, are you able to give it a try for me?</p>
</blockquote>
<p>To which "No, because I don't know how." would have been an acceptable answer. ;-)</p>
| 1 | 2016-10-04T21:12:27Z | [
"python",
"github",
"fork",
"pull-request"
]
|
Does dask distributed use Tornado coroutines for workers tasks? | 39,861,685 | <p>I've read at the dask <a href="http://distributed.readthedocs.io/en/latest/foundations.html#concurrency-with-tornado-coroutines" rel="nofollow"><code>distributed</code> documentation</a> that:</p>
<blockquote>
<p>Worker and Scheduler nodes operate concurrently. They serve several
overlapping requests and perform several overlapping computations at
the same time without blocking.</p>
</blockquote>
<p>I've always thought single-thread concurrent programming is best suited for I/O expensive, not CPU-bound jobs. However I expect many dask tasks (e.g. <code>dask.pandas</code>, <code>dask.array</code>) to be CPU intensive.</p>
<p>Does distributed only use Tornado for client/server communication, with separate processes/threads to run the dask tasks? Actually <code>dask-worker</code> has <code>--nprocs</code> and <code>--nthreads</code> arguments so I expect this to be the case.</p>
<p>How does concurrency with Tornado coroutines coexist with the more common processes/threads processing each dask task in distributed?</p>
| 2 | 2016-10-04T21:02:28Z | 39,861,932 | <p>You are correct. </p>
<p>Each <a href="http://distributed.readthedocs.io/en/latest/worker.html" rel="nofollow">distributed.Worker</a> object contains a <a href="https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor" rel="nofollow">concurrent.futures.ThreadPoolExecutor</a> with multiple threads. Tasks are run on this <code>ThreadPoolExecutor</code> for parallel performance. All communication and coordination tasks are managed by the Tornado IOLoop.</p>
<p>Generally this solution allows computation to happen separately from communication and administration. This allows parallel computing within a worker and allows workers to respond to server requests even while computing tasks.</p>
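<p>The pattern described here — CPU-bound work on pool threads while the main thread stays free for coordination — can be illustrated with plain <code>concurrent.futures</code> (an illustration of the idea only, not dask's actual code):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def task(x):
    return x * x  # stand-in for a CPU-bound dask task

# Analogous to one worker started with --nthreads 4.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(task, i) for i in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```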
<h3>Command line options</h3>
<p>When you make the following call:</p>
<pre><code>dask-worker --nprocs N --nthreads T
</code></pre>
<p>It starts <code>N</code> separate <code>distributed.Worker</code> objects in separate Python processes. Each of these workers has a ThreadPoolExecutor with <code>T</code> threads.</p>
| 1 | 2016-10-04T21:19:04Z | [
"python",
"multithreading",
"tornado",
"coroutine",
"dask"
]
|
pyqt: How to quit a thread properly | 39,861,700 | <p>I wrote a PyQt GUI and used threading to run code which needs a long time to execute, but I want to have the option to stop the execution safely. I don't want to use the <code>get_thread.terminate()</code> method. I want to stop the code via a special function (maybe <code>__del__()</code>). My problem is that I wrote the code in my own class and just want to abort the class without changing a lot of syntax.</p>
<p>Edit: It was mentioned that one has to pass a flag to the class, which has to be checked constantly. How do I send this flag to the class? The flag's value has to change when one presses the stop button.</p>
<p>Edit 2: My solution so far is to declare a global variable with the name <code>running_global</code>. I changed <code>self.get_thread.terminate()</code> to <code>running_global = False</code>, and I constantly check in my <code>long_running_prog</code> whether the variable has been set to <code>False</code>. I think this solution is ugly, so I would be pretty happy if someone has a better idea.</p>
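<p>For reference, here is a generic, non-Qt sketch of the flag idea using <code>threading.Event</code> instead of a global variable (plain Python, independent of my PyQt code):</p>

```python
import threading
import time

stop_event = threading.Event()
steps = []

def long_running_prog():
    # Check the flag between units of work instead of a global variable.
    while not stop_event.is_set():
        steps.append('work')
        time.sleep(0.01)

worker = threading.Thread(target=long_running_prog)
worker.start()
time.sleep(0.05)
stop_event.set()         # this is what the stop button's handler would call
worker.join(timeout=1)
print(worker.is_alive())  # False
```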
<p><strong>This is my code for the dialog where I start the thread:</strong></p>
<pre><code>class SomeDialog(QtGui.QDialog,
userinterface_status.Ui_status_window):
finished = QtCore.pyqtSignal(bool)
def __init__(self):
"""
:param raster: Coordinates which are going to be scanned.
"""
super(self.__class__, self).__init__() # old version, used in python 2.
self.setupUi(self) # It sets up layout and widgets that are defined
self.get_thread = SomeThread()
# Conencting the buttons
self.start_button.clicked.connect(self.start)
self.stop_button.clicked.connect(self.stop)
self.close_button.clicked.connect(self.return_main)
# Connecting other signals
self.connect(self.get_thread, QtCore.SIGNAL("stop()"), self.stop)
self.connect(self.get_thread, QtCore.SIGNAL("update_status_bar()"), self.update_status_bar)
def return_main(self):
"""
Function is excecuted, when close button is clicked.
"""
print("return main")
self.get_thread.terminate()
self.close()
def start(self):
"""
Starts the thread, which means that the run method of the thread is started.
"""
self.start_button.setEnabled(False)
self.get_thread.start()
def stop(self):
print("Stop programm.")
self.start_button.setEnabled(True)
self.get_thread.quit()
def end(self):
QtGui.QMessageBox.information(self, "Done!", "Programm finished")
def closeEvent(self, event):
"""
This method is called, when the window is closed and will send a signal to the main window to activaete the
window again.
:param event:
"""
self.finished.emit(True)
# close window
event.accept()
</code></pre>
<p><strong>In the following class is the code for the thread:</strong></p>
<pre><code>class SomeThread(QtCore.QThread):
finished = QtCore.pyqtSignal(bool)
def __init__(self):
QtCore.QThread.__init__(self)
def __del__(self):
print("del")
self.wait()
def run(self):
self.prog = long_running_prog(self.emit) # Sending from the prog signals
self.prog.run()
self.prog.closeSystem() # Leaving the programm in a safe way.
</code></pre>
<p>So if one presses the stop button, the program should instantly shut down in a safe way. Is there a way to abort the class in a safe way? For example, can I pass a variable to the long_running_prog class which turns True when one presses the stop button? If something like this is possible, could one tell me how?</p>
<p>Thanks for your help in advance</p>
<p>I hope you understand my problem.
Greetings
Hizzy</p>
| 0 | 2016-10-04T21:03:14Z | 39,862,105 | <p>This is impossible to do unless <code>prog.run(self)</code> periodically inspects the value of a flag to break out of its loop. Once you implement that, <code>__del__(self)</code> on the thread should set the flag and only then <code>wait</code>.</p>
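<p>A common shape for that flag is a <code>threading.Event</code>; a minimal sketch with plain threads (names are illustrative, not taken from the question's code):</p>

```python
import threading
import time

stop_event = threading.Event()

def long_running(stop):
    # the loop periodically inspects the flag instead of running unchecked
    while not stop.is_set():
        time.sleep(0.01)
    # cleanup code would go here, running in a safe, known state

worker = threading.Thread(target=long_running, args=(stop_event,))
worker.start()
time.sleep(0.05)
stop_event.set()   # a "stop" button handler would do exactly this
worker.join()
print("worker stopped cleanly")
```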
| 0 | 2016-10-04T21:32:38Z | [
"python",
"qt",
"python-3.x",
"pyqt",
"pyqt4"
]
|
String or object comparison in Python 3.5.2 | 39,861,740 | <p>I am working on the exercism.io clock exercise and I cannot figure out why this test is failing. The results look identical and even have the same type.</p>
<p>Here is my code:</p>
<pre><code>class Clock:
def __init__(self, h, m):
self.h = h
self.m = m
self.adl = 0
def make_time(self):
s = self.h * 3600
s += self.m * 60
if self.adl: s += self.adl
while s > 86400:
s -= 86400
if s == 0:
return '00:00'
h = s // 3600
if h:
s -= h * 3600
m = s // 60
return '{:02d}:{:02d}'.format(h, m)
def add(self, more):
self.adl = more * 60
return self.make_time()
def __str__(self):
return str(self.make_time()) # i don't think I need to do this
if __name__ == '__main__':
cl1 = Clock(34, 37) #10:37
cl2 = Clock(10, 37) #10:37
print(type(cl2))
print(cl2, cl1)
print(cl2 == cl1) #false
</code></pre>
| 3 | 2016-10-04T21:06:01Z | 39,861,907 | <p>A custom class without an <a href="https://docs.python.org/3/reference/datamodel.html#object.__eq__" rel="nofollow"><code>__eq__</code> method</a> defaults to testing for <em>identity</em>. That is to say, two references to instances of such a class are only equal if they reference the exact same object.</p>
<p>You'll need to define a custom <code>__eq__</code> method that returns <code>True</code> when two instances contain the same time:</p>
<pre><code>def __eq__(self, other):
if not isinstance(other, Clock):
return NotImplemented
return (self.h, self.m, self.adl) == (other.h, other.m, other.adl)
</code></pre>
<p>By returning the <code>NotImplemented</code> singleton for something that is not a <code>Clock</code> instance (or a subclass), you let Python know that the <code>other</code> object could also be asked to test for equality.</p>
<p>However, your code accepts values greater than the normal hour and minute ranges; rather than store hours and minutes, store seconds and normalise that value:</p>
<pre><code>class Clock:
def __init__(self, h, m):
# store seconds, but only within the range of a day
self.seconds = (h * 3600 + m * 60) % 86400
self.adl = 0
def make_time(self):
s = self.seconds
if self.adl: s += self.adl
s %= 86400
if s == 0:
return '00:00'
s, h = s % 3600, s // 3600
m = s // 60
return '{:02d}:{:02d}'.format(h, m)
def __eq__(self, other):
if not isinstance(other, Clock):
return NotImplemented
return (self.seconds, self.adl) == (other.seconds, other.adl)
</code></pre>
<p>Now your two clock instances will test equal because internally they store the exact same time in a day. Note that I used the <code>%</code> modulus operator rather than a <code>while</code> loop and subtracting.</p>
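<p>A stripped-down sketch (the <code>adl</code> bookkeeping omitted) of why the question's two clocks now compare equal:</p>

```python
class Clock:
    def __init__(self, h, m):
        # normalise to seconds within a single day
        self.seconds = (h * 3600 + m * 60) % 86400

    def __eq__(self, other):
        if not isinstance(other, Clock):
            return NotImplemented
        return self.seconds == other.seconds

print(Clock(34, 37) == Clock(10, 37))  # True: 34:37 wraps around to 10:37
```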
| 5 | 2016-10-04T21:17:33Z | [
"python",
"python-3.x",
"equality"
]
|
Parsing JSON string from URL with Python 3.5.2 | 39,861,782 | <p>I am trying to print out the values <code>"lat"</code> and <code>"lon"</code> from the JSON (<a href="https://en.wikipedia.org/w/api.php?action=query&format=json&prop=coordinates&list=&titles=venice" rel="nofollow">source</a>):</p>
<pre class="lang-js prettyprint-override"><code>{
"batchcomplete": "",
"query": {
"normalized": [
{
"from": "venice",
"to": "Venice"
}
],
"pages": {
"32616": {
"coordinates": [
{
"globe": "earth",
"lat": 45.4375,
"lon": 12.33583333,
"primary": ""
}
],
"ns": 0,
"pageid": 32616,
"title": "Venice"
}
}
}
}
</code></pre>
<p>This is my code that I supposed to work:</p>
<pre><code>import urllib.request as url
import json
import urllib.parse
import ast
request = url.Request('https://en.wikipedia.org/w/api.php?action=query&format=json&prop=coordinates&list=&titles=venice')
data = url.urlopen(request).read()
data = data.decode("utf-8")
data = ast.literal_eval(data)
data = json.dumps(data)
array = json.loads(data)
#print(array)
print(array['query']['pages']['32616']['coordinates']['lat'+','+'lon'])
</code></pre>
<p>It returns the error: <code>TypeError: list indices must be integers or slices, not str</code>.</p>
| -1 | 2016-10-04T21:08:55Z | 39,861,890 | <pre><code>print(array['query']['pages']['32616']['coordinates'][0]['lat'])
print(array['query']['pages']['32616']['coordinates'][0]['lon'])
# insert the [0] index: 'coordinates' is a list
</code></pre>
| 1 | 2016-10-04T21:16:07Z | [
"python"
]
|
Parsing JSON string from URL with Python 3.5.2 | 39,861,782 | <p>I am trying to print out the values <code>"lat"</code> and <code>"lon"</code> from the JSON (<a href="https://en.wikipedia.org/w/api.php?action=query&format=json&prop=coordinates&list=&titles=venice" rel="nofollow">source</a>):</p>
<pre class="lang-js prettyprint-override"><code>{
"batchcomplete": "",
"query": {
"normalized": [
{
"from": "venice",
"to": "Venice"
}
],
"pages": {
"32616": {
"coordinates": [
{
"globe": "earth",
"lat": 45.4375,
"lon": 12.33583333,
"primary": ""
}
],
"ns": 0,
"pageid": 32616,
"title": "Venice"
}
}
}
}
</code></pre>
<p>This is my code that I supposed to work:</p>
<pre><code>import urllib.request as url
import json
import urllib.parse
import ast
request = url.Request('https://en.wikipedia.org/w/api.php?action=query&format=json&prop=coordinates&list=&titles=venice')
data = url.urlopen(request).read()
data = data.decode("utf-8")
data = ast.literal_eval(data)
data = json.dumps(data)
array = json.loads(data)
#print(array)
print(array['query']['pages']['32616']['coordinates']['lat'+','+'lon'])
</code></pre>
<p>It returns the error: <code>TypeError: list indices must be integers or slices, not str</code>.</p>
| -1 | 2016-10-04T21:08:55Z | 39,861,891 | <pre><code>import requests
import json
r = requests.get('you URL here')
rData = json.loads(r.text, encoding="utf-8")
print(rData['query']['pages']['32616']['coordinates'][0]['lat'])
print(rData['query']['pages']['32616']['coordinates'][0]['lon'])
</code></pre>
| 1 | 2016-10-04T21:16:07Z | [
"python"
]
|
Error trying to parse xml using python : xml.etree.ElementTree.ParseError: syntax error: line 1, | 39,861,806 | <p>In python, simply trying to parse XML: </p>
<pre><code>import xml.etree.ElementTree as ET
data = 'info.xml'
tree = ET.fromstring(data)
</code></pre>
<p>but got error:</p>
<pre><code>Traceback (most recent call last):
File "C:\mesh\try1.py", line 3, in <module>
tree = ET.fromstring(data)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1312, in XML
return parser.close()
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1665, in close
self._raiseerror(v)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1517, in _raiseerror
raise err
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
</code></pre>
<p>That's a bit of the xml I have:</p>
<pre><code><?xml version="1.0" encoding="utf-16"?>
<AnalysisData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<BlendOperations OperationNumber="1">
<ComponentQuality>
<MaterialName>Oil</MaterialName>
<Weight>1067.843017578125</Weight>
<WeightPercent>31.545017776585109</WeightPercent>
</code></pre>
<p>Why is it happening?</p>
| -2 | 2016-10-04T21:10:55Z | 39,861,886 | <p>You're trying to parse the string <code>'info.xml'</code> instead of the contents of the file.</p>
<p>You could call <code>tree = ET.parse('info.xml')</code> which will open the file.</p>
<p>Or you could read the file directly:</p>
<p><code>ET.fromstring(open('info.xml').read())</code></p>
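<p>One caveat specific to this file: its declaration says <code>encoding="utf-16"</code>, so handing the parser raw bytes (which <code>ET.parse</code> does) is safer than decoding the file yourself. A self-contained sketch with a hypothetical minimal document:</p>

```python
import os
import tempfile
import xml.etree.ElementTree as ET

xml_text = ('<?xml version="1.0" encoding="utf-16"?>\n'
            '<AnalysisData><Weight>1067.8</Weight></AnalysisData>')

# write the document in the encoding its declaration claims
path = os.path.join(tempfile.mkdtemp(), 'info.xml')
with open(path, 'wb') as f:
    f.write(xml_text.encode('utf-16'))

# ET.parse reads bytes and honours the declared encoding
tree = ET.parse(path)
print(tree.getroot().tag)
```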
| 1 | 2016-10-04T21:16:02Z | [
"python",
"xml",
"python-2.7"
]
|
Regular Expression Dot not working | 39,861,911 | <p>So I'm trying to parse through a file and I have the following code:</p>
<pre><code>def learn_re(s):
pattern=re.compile("[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} .")
if pattern.match(s):
return True
return False
</code></pre>
<p>This matches with "01:01:01.123 â"; however, when I add in one more character, it fails to work. For example if I edit my code so that it's </p>
<pre><code>def learn_re(s):
pattern=re.compile("[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} . C")
if pattern.match(s):
return True
return False
</code></pre>
<p>This fails to match with "01:01:01.123 â C" What's happening here? </p>
| 3 | 2016-10-04T21:17:51Z | 39,861,970 | <p>The em dash in your string is a unicode character, which will be interpreted as multiple characters <a href="http://www.fileformat.info/info/unicode/char/2014/index.htm" rel="nofollow">(3 in your case)</a>. Your version of python is not unicode-aware, so you'll either need to match 3 characters (<code>.{3}</code>) to capture the dash, match the character exactly in your expression, or use a different version of python.</p>
<p>A few notes regarding your expression: you should always prefix your regular expression strings with <code>r'...'</code> so that your <code>\</code> escapes will be interpreted correctly.</p>
<p>A <code>.</code> in a regular expression has a special meaning: it will match any single character. If you need a period/decimal point, you need to escape the dot <code>\.</code>.</p>
<pre><code>pattern = re.compile(r'[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3} .')
</code></pre>
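<p>Combining the fixes: a raw string, an escaped decimal point, and a true unicode subject string (Python 3 shown, which is an assumption about your environment):</p>

```python
import re

pattern = re.compile(r'[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3} .')

# the em dash is a single character in a unicode string,
# so the trailing . matches it
print(bool(pattern.match('01:01:01.123 \u2014 C')))  # True

# the escaped \. no longer matches an arbitrary character
print(bool(pattern.match('01:01:01x123 \u2014 C')))  # False
```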
| 1 | 2016-10-04T21:21:42Z | [
"python",
"regex",
"python-2.7"
]
|
Regular Expression Dot not working | 39,861,911 | <p>So I'm trying to parse through a file and I have the following code:</p>
<pre><code>def learn_re(s):
pattern=re.compile("[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} .")
if pattern.match(s):
return True
return False
</code></pre>
<p>This matches with "01:01:01.123 â"; however, when I add in one more character, it fails to work. For example if I edit my code so that it's </p>
<pre><code>def learn_re(s):
pattern=re.compile("[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} . C")
if pattern.match(s):
return True
return False
</code></pre>
<p>This fails to match with "01:01:01.123 â C" What's happening here? </p>
| 3 | 2016-10-04T21:17:51Z | 39,862,274 | <p>The issue is that your â is a unicode character. When in a <code>str</code>, it actually behaves more like several characters:</p>
<pre><code>>>> print len('â')
3
</code></pre>
<p>But, if you use a <code>unicode</code> instead of a <code>str</code>:</p>
<pre><code>>>> print len(u'â')
1
</code></pre>
<p>And so, the following will print <code>True</code>:</p>
<pre><code>def learn_re(s):
pattern=re.compile("[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} . C")
if pattern.match(s):
return True
return False
print learn_re(u"01:01:01.123 â C")
</code></pre>
<p>Note that this behavior is specific to python 2. In python 3, <code>str</code> and <code>unicode</code> are merged into a single <code>str</code> type, and so this distinction is not required.</p>
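<p>The same experiment expressed in Python 3 terms, where the <code>str</code>/<code>bytes</code> split makes the three-byte UTF-8 encoding explicit:</p>

```python
dash = '\u2014'   # em dash, the character rendered as "â" in the question

print(len(dash))                  # 1: one code point in a str
print(len(dash.encode('utf-8')))  # 3: three bytes once encoded
```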
| 3 | 2016-10-04T21:45:30Z | [
"python",
"regex",
"python-2.7"
]
|
Pandas: find substring in a column | 39,861,957 | <p>I need to find in dataframe some strings</p>
<pre><code>url
003.ru/*/mobilnyj_telefon_bq_phoenix*
003.ru/*/mobilnyj_telefon_fly_*
003.ru/*mobile*
003.ru/telefony_i_smartfony/mobilnye_telefony_smartfony
003.ru/telefony_i_smartfony/mobilnye_telefony_smartfony/%brands%5D%5Bbr_23%
1click.ru/*iphone*
1click.ru/catalogue/chasy-motorola
</code></pre>
<p>The problem is the following: when I use</p>
<pre><code>df_update = df[df['url'].str.contains(substr.url)]
</code></pre>
<p>it returns an error, because some <code>url</code> values contain <code>*</code>.
How can I fix that problem?</p>
| 0 | 2016-10-04T21:20:45Z | 39,862,148 | <p>Try:</p>
<pre><code>df[df['url'].str.contains(substr.url, regex=False)]
</code></pre>
<p>You have to specify whether you want your pattern to be interpreted as a regular expression or as a normal string. In this case, you want to set the <code>regex</code> argument to <code>False</code> because it is set to <code>True</code> by default. That way, the asterisks in your pattern won't be interpreted as regular expression metacharacters.</p>
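<p>The underlying issue is that <code>*</code> is a regex metacharacter. If you ever do need regex matching on such values, <code>re.escape</code> is the standard-library way to neutralise them; a sketch independent of pandas:</p>

```python
import re

url = '003.ru/*mobile*'

# escape the metacharacters so the pattern matches the literal text
pattern = re.escape(url)
match = re.search(pattern, 'prefix 003.ru/*mobile* suffix')
print(match is not None)  # True
```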
<p>I hope this helps.</p>
| 1 | 2016-10-04T21:35:36Z | [
"python",
"regex",
"pandas"
]
|
Do we need to specify python interpreter externally if python script contains #!/usr/bin/python3? | 39,861,960 | <p>I am trying to invoke python script from C application using <code>system()</code> call</p>
<p>The python script has <code>#!/usr/bin/python3</code> on the first line.</p>
<p>If I do <code>system(python_script)</code>, the script does not seem to run.</p>
<p>It seems I need to do <code>system(/usr/bin/python3 python_script)</code>.</p>
<p>I thought I do not need to specify the interpreter externally if I have <code>#!/usr/bin/python3</code> in the first line of the script.</p>
<p>Am I doing something wrong?</p>
| 2 | 2016-10-04T21:20:58Z | 39,862,114 | <p>Make sure you have executable permission for <code>python_script</code>.
You can make <code>python_script</code> executable by </p>
<p><code>chmod +x python_script</code></p>
<p>Also check if you are giving correct path for <code>python_script</code></p>
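<p>This is also why the C-side <code>system(python_script)</code> only needs the path once the file is executable: the kernel reads the <code>#!</code> line itself. A self-contained check, assuming a Unix-like system (paths here are temporary and illustrative):</p>

```python
import os
import stat
import subprocess
import sys
import tempfile

# write a tiny script whose shebang points at the current interpreter
script = os.path.join(tempfile.mkdtemp(), 'hello.py')
with open(script, 'w') as f:
    f.write('#!%s\nprint("hello from the shebang")\n' % sys.executable)

# chmod +x is what makes the kernel honour the #! line
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

out = subprocess.check_output([script]).decode().strip()
print(out)
```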
| 1 | 2016-10-04T21:33:07Z | [
"python",
"c",
"system",
"interpreter"
]
|
ZeroMQ: load balance many workers and one master | 39,862,022 | <p>Suppose I have one master process that divides up data to be processed in parallel. Lets say there are 1000 chunks of data and 100 nodes on which to run the computations. </p>
<p>Is there some way to do REQ/REP to keep all the workers busy? I've tried to use the load balancer pattern in the guide but with a single client, <code>sock.recv()</code> is going to block until it receives its response from the worker. </p>
<p>Here is the code, slightly modified from the zmq guide for a load balancer. Is starts up one client, 10 workers, and a load balancer/broker in the middle. How can I get all those workers working at the same time???</p>
<pre><code>from __future__ import print_function
from multiprocessing import Process
import zmq
import time
import uuid
import random
def client_task():
"""Basic request-reply client using REQ socket."""
socket = zmq.Context().socket(zmq.REQ)
socket.identity = str(uuid.uuid4())
socket.connect("ipc://frontend.ipc")
# Send request, get reply
for i in range(100):
print("SENDING: ", i)
socket.send('WORK')
msg = socket.recv()
print(msg)
def worker_task():
"""Worker task, using a REQ socket to do load-balancing."""
socket = zmq.Context().socket(zmq.REQ)
socket.identity = str(uuid.uuid4())
socket.connect("ipc://backend.ipc")
# Tell broker we're ready for work
socket.send(b"READY")
while True:
address, empty, request = socket.recv_multipart()
time.sleep(random.randint(1, 4))
socket.send_multipart([address, b"", b"OK : " + str(socket.identity)])
def broker():
context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
frontend.bind("ipc://frontend.ipc")
backend = context.socket(zmq.ROUTER)
backend.bind("ipc://backend.ipc")
# Initialize main loop state
workers = []
poller = zmq.Poller()
# Only poll for requests from backend until workers are available
poller.register(backend, zmq.POLLIN)
while True:
sockets = dict(poller.poll())
if backend in sockets:
# Handle worker activity on the backend
request = backend.recv_multipart()
worker, empty, client = request[:3]
if not workers:
# Poll for clients now that a worker is available
poller.register(frontend, zmq.POLLIN)
workers.append(worker)
if client != b"READY" and len(request) > 3:
# If client reply, send rest back to frontend
empty, reply = request[3:]
frontend.send_multipart([client, b"", reply])
if frontend in sockets:
# Get next client request, route to last-used worker
client, empty, request = frontend.recv_multipart()
worker = workers.pop(0)
backend.send_multipart([worker, b"", client, b"", request])
if not workers:
# Don't poll clients if no workers are available
poller.unregister(frontend)
# Clean up
backend.close()
frontend.close()
context.term()
def main():
NUM_CLIENTS = 1
NUM_WORKERS = 10
# Start background tasks
def start(task, *args):
process = Process(target=task, args=args)
process.start()
start(broker)
for i in range(NUM_CLIENTS):
start(client_task)
for i in range(NUM_WORKERS):
start(worker_task)
# Process(target=broker).start()
if __name__ == "__main__":
main()
</code></pre>
| 3 | 2016-10-04T21:25:40Z | 39,863,502 | <p>I guess there are different ways to do this:</p>
<p>-you can, for example, use the <strong><code>threading</code></strong> module to launch all your requests from your single client, with something like:</p>
<pre><code>result_list = [] # Add the result to a list for the example
rlock = threading.RLock()
def client_thread(client_url, request, i):
context = zmq.Context.instance()
socket = context.socket(zmq.REQ)
socket.setsockopt_string(zmq.IDENTITY, '{}'.format(i))
socket.connect(client_url)
socket.send(request.encode())
reply = socket.recv()
with rlock:
result_list.append((i, reply))
return
def client_task():
# tasks = list with all your tasks
url_client = "ipc://frontend.ipc"
threads = []
for i in range(len(tasks)):
thread = threading.Thread(target=client_thread,
args=(url_client, tasks[i], i,))
thread.start()
threads.append(thread)
</code></pre>
<p>-you can take advantage of an event-driven library like <strong><code>asyncio</code></strong> (there is a submodule <a href="http://pyzmq.readthedocs.io/en/latest/api/zmq.asyncio.html" rel="nofollow">zmq.asyncio</a> and another library <a href="https://github.com/aio-libs/aiozmq" rel="nofollow">aiozmq</a>; the latter offers a higher level of abstraction). In this case you will send your requests to the workers, sequentially too, but without blocking for each response (and so not keeping the main loop busy) and get the results when they come back to the main loop. This could look like this:</p>
<pre><code>import asyncio
import zmq.asyncio
async def client_async(request, context, i, client_url):
"""Basic client sending a request (REQ) to a ROUTER (the broker)"""
socket = context.socket(zmq.REQ)
socket.setsockopt_string(zmq.IDENTITY, '{}'.format(i))
socket.connect(client_url)
await socket.send(request.encode())
reply = await socket.recv()
socket.close()
return reply
async def run(loop):
# tasks = list full of tasks
url_client = "ipc://frontend.ipc"
asyncio_tasks = []
ctx = zmq.asyncio.Context()
for i in range(len(tasks)):
task = asyncio.ensure_future(client_async(tasks[i], ctx, i, url_client))
asyncio_tasks.append(task)
responses = await asyncio.gather(*asyncio_tasks)
return responses
zmq.asyncio.install()
loop = asyncio.get_event_loop()
results = loop.run_until_complete(run(loop))
</code></pre>
<p>I didn't test these two snippets, but they both come (with modifications to fit the question) from code I have that uses zmq in a configuration similar to your question's.</p>
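<p>The gather pattern from the second option, isolated from zmq (a pure-stdlib sketch using the modern <code>asyncio.run</code> entry point; the sleep stands in for the REQ/REP round-trip):</p>

```python
import asyncio

async def fake_request(i):
    # stand-in for sending a request and awaiting the worker's reply
    await asyncio.sleep(0.01)
    return 'OK %d' % i

async def run_all(n):
    # all "requests" are in flight concurrently; gather collects replies
    return await asyncio.gather(*(fake_request(i) for i in range(n)))

responses = asyncio.run(run_all(5))
print(responses)
```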
| 1 | 2016-10-05T00:01:45Z | [
"python",
"zeromq"
]
|
Pandas: create dataframe without auto ordering column names alphabetically | 39,862,053 | <p>I am creating an initial pandas dataframe to store results generated from other codes: e.g.</p>
<pre><code>result = pd.DataFrame({'date': datelist, 'total': [0]*len(datelist),
'TT': [0]*len(datelist)})
</code></pre>
<p>with <code>datelist</code> a predefined list. Then other codes will output some number for <code>total</code> and <code>TT</code> for each <code>date</code>, which I will store in the <code>result</code> dataframe.</p>
<p>So I want the first column to be <code>date</code>, second <code>total</code> and third <code>TT</code>. However, pandas will automatically reorder it alphabetically to <code>TT</code>, <code>date</code>, <code>total</code> at creation. While I can manually reorder this again afterwards, I wonder if there is an easier way to achieve this in one step.</p>
<p>I figured I can also do</p>
<pre><code>result = pd.DataFrame(np.transpose([datelist, [0]*l, [0]*l]),
columns = ['date', 'total', 'TT'])
</code></pre>
<p>but it somehow also looks tedious. Any other suggestions?</p>
| 1 | 2016-10-04T21:28:21Z | 39,862,649 | <p>You can pass the (correctly ordered) list of columns as a parameter to the constructor or use an OrderedDict:</p>
<pre><code># option 1:
result = pd.DataFrame({'date': datelist, 'total': [0]*len(datelist),
'TT': [0]*len(datelist)}, columns=['date', 'total', 'TT'])
# option 2:
od = collections.OrderedDict()
od['date'] = datelist
od['total'] = [0]*len(datelist)
od['TT'] = [0]*len(datelist)
result = pd.DataFrame(od)
</code></pre>
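<p>The ordering guarantee in option 2 comes from <code>OrderedDict</code> itself; pandas just consumes the keys in the order they come out:</p>

```python
import collections

od = collections.OrderedDict()
od['date'] = [1, 2]
od['total'] = [0, 0]
od['TT'] = [0, 0]

# insertion order is preserved, which is the column order pandas sees
print(list(od.keys()))  # ['date', 'total', 'TT']
```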
| 3 | 2016-10-04T22:21:07Z | [
"python",
"pandas",
"dataframe"
]
|
Pandas: create dataframe without auto ordering column names alphabetically | 39,862,053 | <p>I am creating an initial pandas dataframe to store results generated from other codes: e.g.</p>
<pre><code>result = pd.DataFrame({'date': datelist, 'total': [0]*len(datelist),
'TT': [0]*len(datelist)})
</code></pre>
<p>with <code>datelist</code> a predefined list. Then other codes will output some number for <code>total</code> and <code>TT</code> for each <code>date</code>, which I will store in the <code>result</code> dataframe.</p>
<p>So I want the first column to be <code>date</code>, second <code>total</code> and third <code>TT</code>. However, pandas will automatically reorder it alphabetically to <code>TT</code>, <code>date</code>, <code>total</code> at creation. While I can manually reorder this again afterwards, I wonder if there is an easier way to achieve this in one step.</p>
<p>I figured I can also do</p>
<pre><code>result = pd.DataFrame(np.transpose([datelist, [0]*l, [0]*l]),
columns = ['date', 'total', 'TT'])
</code></pre>
<p>but it somehow also looks tedious. Any other suggestions?</p>
| 1 | 2016-10-04T21:28:21Z | 39,862,651 | <pre><code>result = pd.DataFrame({'date': [23,24], 'total': 0,
'TT': 0},columns=['date','total','TT'])
</code></pre>
| 0 | 2016-10-04T22:21:09Z | [
"python",
"pandas",
"dataframe"
]
|
turn categorical data to numeric and save to libsvm format python | 39,862,058 | <p>I have a DataFrame that looks something like this:</p>
<pre><code> A B C D
1 String1 String2 String3 String4
2 String2 String3 String4 String5
3 String3 String4 String5 String6
.........................................
</code></pre>
<p>My goal is to turn this DataFrame into libSVM format.</p>
<p>What I have tried so far is the following:</p>
<pre><code>dummy= pd.get_dummies(dataframe)
dummy.to_csv('dataframe.csv', header=False, index=False)
</code></pre>
<p>Is there a way to turn the dataframe or the csv file into this format? Or is there a smarter way to do the transformation?</p>
<p>I tried loading the script that's meant to do <a href="https://github.com/zygmuntz/phraug/blob/master/README.md" rel="nofollow">this</a> from this repository as follows:</p>
<pre><code>%load libsvm2csv.py
</code></pre>
<p>and the script is loaded correctly, but when I run:</p>
<pre><code>libsvm2csv.py dataframe.csv dataframe.data 0 True
</code></pre>
<p>or </p>
<pre><code>libsvm2csv.py dataframe.csv dataframe.txt 0 True
</code></pre>
<p>I get <code>"SyntaxError: invalid syntax"</code> pointing at dataframe.csv</p>
| 0 | 2016-10-04T21:28:31Z | 39,863,202 | <p>After preprocessing your data, you can extract a matrix and use scikit-learn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.datasets.dump_svmlight_file.html#sklearn.datasets.dump_svmlight_file" rel="nofollow">dump_svmlight_file</a> to create this format.</p>
<h3>Example code:</h3>
<pre><code>import pandas as pd
from sklearn.datasets import dump_svmlight_file
dummy = pd.get_dummies(dataframe)
mat = dummy.as_matrix()
dump_svmlight_file(mat, y, 'svm-output.libsvm') # where is your y?
</code></pre>
<h3>Remarks / Alternative:</h3>
<p>You are mentioning <strong>libsvm2csv.py</strong> to do this conversion, but it's just the wrong direction. It is <strong>libsvm-format -> csv</strong>.</p>
<p>Check phraug's <strong>csv2libsvm.py</strong> if you want to convert from <strong>csv -> libsvm</strong> (without scikit-learn).</p>
<p><em>I prefer the usage of scikit-learn (compared to phraug)</em></p>
| 1 | 2016-10-04T23:23:21Z | [
"python",
"csv",
"dataframe",
"libsvm"
]
|
Python threads and strings | 39,862,062 | <p>I am new to threads and multiprocessing. I have some code in which I start a process and I wanted the output to show an active waiting state of something like this </p>
<pre><code>waiting....
</code></pre>
<p>The code is similar to this below:</p>
<pre><code>import threading
import time

class ThreadExample(object):
def __init__(self):
self.pause = True
threading.Thread(target=self.second_thread).start()
# Some other processes
print("Waiting", end="")
while self.pause:
time.sleep(1)
print(".", end="")
print("Hooray!")
def second_thread(self):
print("timer started")
time.sleep(3)
self.pause = False
print("timer finished")
if __name__ == "__main__":
ThreadExample()
</code></pre>
<p>When I run the code above, I receive the output:</p>
<pre><code>timer started
Do something else..timer finished
.
Hooray!
</code></pre>
<p>not a big surprise, except that only the 'timer started' appears at the beginning and the rest of the text appears in an instant at the end.</p>
<p>If I change the line print(".", end="") to print("."), I receive the following output:</p>
<pre><code>timer started
Do something else.
.
timer finished
.
Hooray
</code></pre>
<p>where the dots appear in 1 second increments, which was my intention.</p>
<p>Is there a way to get the 'Waiting...' on one line without the end=""?</p>
<p>And secondly, I am guessing this has something to do with the internals of the print() function, and if not, should I perform the threading in another manner? I do not think the problem is the GIL, as I have tried multiprocessing.Process and got the same result. </p>
| 0 | 2016-10-04T21:29:14Z | 39,862,172 | <p>This is probably due to <code>print</code> buffering. It is flushed on <code>\n</code> and on some other occasions (like buffer overflow or program exit). Instead of <code>print</code> try this:</p>
<pre><code>import sys
def unbuffered_print(msg):
sys.stdout.write(msg)
sys.stdout.flush()
...
unbuffered_print('.')
</code></pre>
<p>everywhere.</p>
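<p>On Python 3.3+ there is also a built-in shortcut: <code>print</code> accepts a <code>flush</code> keyword, so the helper becomes a one-liner (the <code>StringIO</code> target below is only so the sketch is self-contained):</p>

```python
import io

buf = io.StringIO()
# flush=True forces the write out immediately, as sys.stdout.flush() did
print('.', end='', file=buf, flush=True)
print('.', end='', file=buf, flush=True)
print(buf.getvalue())  # ..
```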
| 1 | 2016-10-04T21:37:38Z | [
"python",
"multithreading",
"python-3.x",
"buffering"
]
|
"Incorrect string value" when trying to insert String into MySQL via Python and Text file | 39,862,121 | <p>What is causing this incorrect string? I have read many questions and answers and here are my results. I am still getting the same error after reading the answers.</p>
<p>I am getting the following error:
<code>ERROR 1366 (HY000) at line 34373: Incorrect string value: '\xEF\xBB\xBF<?x...' for column 'change' at row 1</code></p>
<p>When I try to enter the following into SQL:
Line number 34373: <code>INSERT INTO gitlog_changes VALUES ('123456', 'NhincCommonEntity.xsd', '<?xml version=\"1.0\" encoding=\"UTF-8\"?>');</code></p>
<p>My table looks like this:</p>
<pre class="lang-sql prettyprint-override"><code>DROP TABLE IF EXISTS `gitlog_changes`;
CREATE TABLE `gitlog_changes` (
`hashID` varchar(40) NOT NULL,
`filename` varchar(450) DEFAULT NULL,
`change` mediumtext
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
</code></pre>
<p>I read many answers that say change the charset to UTF8 <a href="http://stackoverflow.com/questions/1168036/how-to-fix-incorrect-string-value-errors">[1]</a><a href="http://stackoverflow.com/questions/23670754/exception-java-sql-sqlexception-incorrect-string-value-xf0-x9f-x92-xbc-for">[2]</a><a href="http://stackoverflow.com/questions/10957238/incorrect-string-value-when-trying-to-insert-utf-8-into-mysql-via-jdbc">[3]</a><a href="http://stackoverflow.com/questions/2108824/mysql-incorrect-string-value-error-when-save-unicode-string-in-django">[4]</a>. So I execute this:
<code>alter table yourTableName DEFAULT CHARACTER SET utf8;</code></p>
<p>I continue to get the same error. Then I <code>alter table yourTableName DEFAULT CHARACTER SET utf8mb4_general_ci;</code></p>
<p>Still same error occurs. </p>
<p>I also attempt to read in a file from python and do direct commits to the database. From this answer<a href="http://stackoverflow.com/questions/1168036/how-to-fix-incorrect-string-value-errors">[1]</a>. I get a warning instead of an error.</p>
<p>I insert the following code into my python script:</p>
<pre class="lang-py prettyprint-override"><code> cursor.execute("SET NAMES 'utf8'")
cursor.execute("SET CHARACTER SET utf8")
</code></pre>
<p>Python script: </p>
<pre class="lang-py prettyprint-override"><code>def insert_changes(modList):
db = MySQLdb.connect("localhost", "user", "password", "table")
cursor = db.cursor()
cursor.execute("SET NAMES 'utf8'")
cursor.execute("SET CHARACTER SET utf8")
for mod in modList:
hashID = mod["hashID"]
fileName = mod["fileName"]
change = mod["change"]
cursor.execute("INSERT INTO gitlog_changes VALUES (%s, %s, %s" , (hashID, fileName, change))
# # disconnect from server
db.commit()
db.close()
</code></pre>
<p>The warning I get here is: <code>Warning: Invalid utf8 character string: '\xEF\xBB\xBF<?x...'
cursor.execute("INSERT INTO gitlog_changes VALUES (%s, %s, %s)" , (hashID, fileName, change))</code></p>
| 0 | 2016-10-04T21:33:24Z | 39,862,191 | <p>Text you're trying to insert contains <a href="https://en.wikipedia.org/wiki/Byte_order_mark" rel="nofollow">UTF-8 BOM</a> in the beginning (that's the <strong>\xEF\xBB\xBF</strong> in your error).</p>
<p>Please <a href="http://stackoverflow.com/questions/13590749/reading-unicode-file-data-with-bom-chars-in-python">check this answer</a> to see how to convert from UTF-8 with BOM into UTF-8.</p>
<hr>
<p>As stated in <a href="http://dev.mysql.com/doc/refman/5.7/en/charset-unicode.html" rel="nofollow">MySQL docs</a></p>
<blockquote>
<p>MySQL uses no BOM for UTF-8 values.</p>
</blockquote>
<p>So the only solution is decoding this string in your python code.</p>
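<p>In this case "decoding" mostly means stripping the BOM before building the INSERT; the <code>utf-8-sig</code> codec does that for you (a sketch):</p>

```python
raw = b'\xef\xbb\xbf<?xml version="1.0"?>'

# utf-8-sig drops a leading BOM if present, and is harmless otherwise
text = raw.decode('utf-8-sig')
print(text.startswith('<?xml'))  # True

# equivalent cleanup when the data is already a decoded str
print('\ufeff<?xml'.lstrip('\ufeff').startswith('<?xml'))  # True
```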
| 2 | 2016-10-04T21:38:58Z | [
"python",
"mysql",
"utf-8"
]
|
"Incorrect string value" when trying to insert String into MySQL via Python and Text file | 39,862,121 | <p>What is causing this incorrect string? I have read many questions and answers and here are my results. I am still getting the same error after reading the answers.</p>
<p>I am getting the following error:
<code>ERROR 1366 (HY000) at line 34373: Incorrect string value: '\xEF\xBB\xBF<?x...' for column 'change' at row 1</code></p>
<p>When I try to enter the following into SQL:
Line number 34373: <code>INSERT INTO gitlog_changes VALUES ('123456', 'NhincCommonEntity.xsd', '<?xml version=\"1.0\" encoding=\"UTF-8\"?>');</code></p>
<p>My table looks like this:</p>
<pre class="lang-sql prettyprint-override"><code>DROP TABLE IF EXISTS `gitlog_changes`;
CREATE TABLE `gitlog_changes` (
`hashID` varchar(40) NOT NULL,
`filename` varchar(450) DEFAULT NULL,
`change` mediumtext
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
</code></pre>
<p>I read many answers that say change the charset to UTF8 <a href="http://stackoverflow.com/questions/1168036/how-to-fix-incorrect-string-value-errors">[1]</a><a href="http://stackoverflow.com/questions/23670754/exception-java-sql-sqlexception-incorrect-string-value-xf0-x9f-x92-xbc-for">[2]</a><a href="http://stackoverflow.com/questions/10957238/incorrect-string-value-when-trying-to-insert-utf-8-into-mysql-via-jdbc">[3]</a><a href="http://stackoverflow.com/questions/2108824/mysql-incorrect-string-value-error-when-save-unicode-string-in-django">[4]</a>. So I execute this:
<code>alter table yourTableName DEFAULT CHARACTER SET utf8;</code></p>
<p>I continue to get the same error. Then I <code>alter table yourTableName DEFAULT CHARACTER SET utf8mb4_general_ci;</code></p>
<p>Still same error occurs. </p>
<p>I also attempted to read in the file from Python and commit directly to the database, following this answer<a href="http://stackoverflow.com/questions/1168036/how-to-fix-incorrect-string-value-errors">[1]</a>, but I get a warning instead of an error.</p>
<p>I insert the following code into my python script:</p>
<pre class="lang-py prettyprint-override"><code> cursor.execute("SET NAMES 'utf8'")
cursor.execute("SET CHARACTER SET utf8")
</code></pre>
<p>Python script: </p>
<pre class="lang-py prettyprint-override"><code>def insert_changes(modList):
db = MySQLdb.connect("localhost", "user", "password", "table")
cursor = db.cursor()
cursor.execute("SET NAMES 'utf8'")
cursor.execute("SET CHARACTER SET utf8")
for mod in modList:
hashID = mod["hashID"]
fileName = mod["fileName"]
change = mod["change"]
        cursor.execute("INSERT INTO gitlog_changes VALUES (%s, %s, %s)", (hashID, fileName, change))
    # disconnect from server
db.commit()
db.close()
</code></pre>
<p>The warning I get here is: <code>Warning: Invalid utf8 character string: '\xEF\xBB\xBF<?x...'
cursor.execute("INSERT INTO gitlog_changes VALUES (%s, %s, %s)" , (hashID, fileName, change))</code></p>
| 0 | 2016-10-04T21:33:24Z | 39,862,241 | <p>The string you're trying to insert into db has an unusual character at its beginning. I just copied your string:</p>
<pre><code>In [1]: a = '<'
In [2]: a
Out[2]: '\xef\xbb\xbf<'
</code></pre>
<p>You need to get rid of those characters. <a href="http://stackoverflow.com/questions/11159118/incorrect-string-value-xef-xbf-xbd-for-column">This</a> is a good post explaining what these characters are.</p>
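One hedged way to get rid of them (assuming the value arrives as raw bytes, e.g. from reading the file) is to check for and drop the BOM explicitly with the `codecs` module before decoding:

```python
import codecs

# Sketch: drop the three BOM bytes (b'\xef\xbb\xbf') before decoding.
raw = codecs.BOM_UTF8 + b'<?xml version="1.0"?>'
if raw.startswith(codecs.BOM_UTF8):
    raw = raw[len(codecs.BOM_UTF8):]
change = raw.decode('utf-8')
```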
| 2 | 2016-10-04T21:42:22Z | [
"python",
"mysql",
"utf-8"
]
|
How to send a datetime as a parameter in a SQL command in Python? | 39,862,199 | <p>I have a SQL command and just want to select some records with some conditions (using Python; the DB is Postgres).
So, my query is:</p>
<pre><code> current_date= datetime.now()
tt = yield self.db.execute(self.db.execute('SELECT "Id", "RubricId", "IsRubric"
FROM whis2011."CoockieUserInterests"
'WHERE "UserId" = %s AND "Date" = %s '
% (Id, current_date))
result=tt.fetchall()[0]
</code></pre>
<p>Problem: when I pass the datetime to the "Date" field, I get this error:</p>
<pre><code>syntax error at or near "00"
LINE 1: ...rests" WHERE "UserId" = 1 AND "Date" = 2016-10-05 00:22:07.3...
^
</code></pre>
<p>All "Date" fields in the db look like: 2016-09-25 00:00:00</p>
<p>Also, the datatype of the "Date" field in the database is "timestamp without time zone".</p>
<p>This is my pool:</p>
<pre><code> application.db = momoko.Pool(
dsn='dbname=xxxx user=xxxxx password=xxxxx host=xxxx port=5432',
size=1,
ioloop=ioloop,
)
</code></pre>
<p>How can I select by "Date" with this format in my db?</p>
| 0 | 2016-10-04T21:39:33Z | 39,862,384 | <p>You don't state what module you are using to connect to postgresql. Let's assume for the interim that it is <code>psycopg2</code>. </p>
<p>In that case, you use the following to pass parameters to a query:</p>
<pre><code>current_date = datetime.now()
self.db.execute(
'SELECT Id, RubricId, IsRubric '
'FROM whis2011.CoockieUserInterests '
'WHERE UserId = %s AND Date = %s',
(Id, current_date))
</code></pre>
<p>Note we are <strong>not</strong> using the <code>%</code> interpolation operator on the sql string here. Instead we are using <code>%s</code> to mark sql parameters and then passing them separately to <a href="https://www.python.org/dev/peps/pep-0249/#id15" rel="nofollow"><code>cursor.execute</code></a></p>
<p>If you are using a different package to connect to Postgres, it may mark parameters in a different manner. Check the documentation or the module's <a href="https://www.python.org/dev/peps/pep-0249/#paramstyle" rel="nofollow"><code>paramstyle</code></a> for details.</p>
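To see concretely why the original query failed, compare the two approaches below (plain Python, no database needed; the values are illustrative). The `%` operator pastes the datetime into the SQL text unquoted, which produces exactly the `syntax error at or near "00"` from the question, while parameterization leaves the placeholders for the driver to fill in safely:

```python
from datetime import datetime

current_date = datetime(2016, 10, 5, 0, 22, 7)

# Broken: %-interpolation pastes the datetime into the SQL unquoted.
broken = 'WHERE UserId = %s AND Date = %s' % (1, current_date)
print(broken)  # WHERE UserId = 1 AND Date = 2016-10-05 00:22:07

# Correct: keep the %s placeholders in the SQL string and pass the
# values separately; the driver quotes the timestamp for you.
sql = 'WHERE UserId = %s AND Date = %s'
params = (1, current_date)
```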
| 1 | 2016-10-04T21:54:57Z | [
"python",
"postgresql",
"datetime"
]
|