title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Plotting two .txt files in the same figure using Python | 39,739,291 | <p>I am trying to plot two .txt files in the same figure. I am using a simple Python script for that.</p>
<pre><code>import sys
import os
import numpy
import matplotlib.pyplot as plt
from pylab import *
trap_error = 'trap_error.txt'
N , error = numpy.loadtxt(trap_error, unpack =True)
monte_error = 'monte_carlo_error.txt'
points, Integral, error = numpy.loadtxt(monte_error, unpack =True)
plt.loglog(N,error, 'o')
plt.loglog(points,error, 's')
plt.xlabel('Number of equally spaced points N')
plt.ylabel('error')
plt.legend(['trapezoid rule error', 'monte carlo error'], loc = 'upper right')
plt.title('Comparison of error in Trapezoid rule and Monte Carlo rule of Numerical integration')
plt.show()
</code></pre>
<p>The output figure only shows the Monte Carlo data, with no trace of the trapezoid data. The order of magnitude of these two data files is almost the same, so I do not understand why I cannot see the other data in the same figure. I am also sharing the data files for convenience.</p>
<pre><code> #points Integral error # monte_carlo_error.txt
2 1.400697 0.170100
4 1.415539 0.155258
8 1.394789 0.176008
16 1.444948 0.125848
32 1.501825 0.068971
64 1.577106 0.006309
128 1.558217 0.012580
256 1.563389 0.007407
512 1.570139 0.000657
1024 1.576300 0.005504
2048 1.585733 0.014937
4096 1.577355 0.006558
8192 1.577293 0.006497
16384 1.575404 0.004607
32768 1.572333 0.001536
65536 1.571028 0.000232
131072 1.570317 0.000479
262144 1.570318 0.000478
524288 1.570867 0.000070
1048576 1.571311 0.000515
#N error #trap_error.txt
2 0.629204
4 0.472341
8 0.243747
16 0.123551
32 0.062155
64 0.031166
128 0.015604
256 0.007807
512 0.003905
1024 0.001953
2048 0.000977
4096 0.000487
8192 0.000244
16384 0.000124
32768 0.000064
65536 0.000040
131072 0.000044
262144 0.000087
524288 0.000018
1048576 0.000615
</code></pre>
| 0 | 2016-09-28T06:11:57Z | 39,739,752 | <p>Try the following:</p>
<pre><code>import sys
import os
import numpy
import matplotlib.pyplot as plt
from pylab import *
trap_error = 'trap_error.txt'
N, error1 = numpy.loadtxt(trap_error, unpack=True)
monte_error = 'monte_carlo_error.txt'
points, Integral, error2 = numpy.loadtxt(monte_error, unpack=True)
plt.loglog(N, error1, 'o')
plt.loglog(points, error2, 's')
plt.xlabel('Number of equally spaced points N')
plt.ylabel('error')
plt.legend(['trapezoid rule error', 'monte carlo error'], loc = 'upper right')
plt.title('Comparison of error in Trapezoid rule and Monte Carlo rule of Numerical integration')
plt.show()
</code></pre>
<p>Giving:</p>
<p><a href="http://i.stack.imgur.com/QExg7.png" rel="nofollow"><img src="http://i.stack.imgur.com/QExg7.png" alt="screenshot"></a></p>
<p>You were reusing the <code>error</code> variable for both sets of data.</p>
| 2 | 2016-09-28T06:38:03Z | [
"python",
"matplotlib"
]
|
Plotting two .txt files in the same figure using Python | 39,739,291 | <p>I am trying to plot two .txt files in the same figure. I am using a simple Python script for that.</p>
<pre><code>import sys
import os
import numpy
import matplotlib.pyplot as plt
from pylab import *
trap_error = 'trap_error.txt'
N , error = numpy.loadtxt(trap_error, unpack =True)
monte_error = 'monte_carlo_error.txt'
points, Integral, error = numpy.loadtxt(monte_error, unpack =True)
plt.loglog(N,error, 'o')
plt.loglog(points,error, 's')
plt.xlabel('Number of equally spaced points N')
plt.ylabel('error')
plt.legend(['trapezoid rule error', 'monte carlo error'], loc = 'upper right')
plt.title('Comparison of error in Trapezoid rule and Monte Carlo rule of Numerical integration')
plt.show()
</code></pre>
<p>The output figure only shows the Monte Carlo data, with no trace of the trapezoid data. The order of magnitude of these two data files is almost the same, so I do not understand why I cannot see the other data in the same figure. I am also sharing the data files for convenience.</p>
<pre><code> #points Integral error # monte_carlo_error.txt
2 1.400697 0.170100
4 1.415539 0.155258
8 1.394789 0.176008
16 1.444948 0.125848
32 1.501825 0.068971
64 1.577106 0.006309
128 1.558217 0.012580
256 1.563389 0.007407
512 1.570139 0.000657
1024 1.576300 0.005504
2048 1.585733 0.014937
4096 1.577355 0.006558
8192 1.577293 0.006497
16384 1.575404 0.004607
32768 1.572333 0.001536
65536 1.571028 0.000232
131072 1.570317 0.000479
262144 1.570318 0.000478
524288 1.570867 0.000070
1048576 1.571311 0.000515
#N error #trap_error.txt
2 0.629204
4 0.472341
8 0.243747
16 0.123551
32 0.062155
64 0.031166
128 0.015604
256 0.007807
512 0.003905
1024 0.001953
2048 0.000977
4096 0.000487
8192 0.000244
16384 0.000124
32768 0.000064
65536 0.000040
131072 0.000044
262144 0.000087
524288 0.000018
1048576 0.000615
</code></pre>
| 0 | 2016-09-28T06:11:57Z | 39,739,798 | <p>You are overwriting the variable <code>error</code> and also plotting the same thing twice:</p>
<pre><code>N , error = numpy.loadtxt(trap_error, unpack =True)
</code></pre>
<p>and then</p>
<pre><code>points, Integral, error = numpy.loadtxt(monte_error, unpack =True)
</code></pre>
<p>Use different names for the variables and you should be fine. Example:</p>
<pre><code>N , error_trap = numpy.loadtxt(trap_error, unpack =True)
</code></pre>
<p>and</p>
<pre><code>points, Integral, error_monte = numpy.loadtxt(monte_error, unpack =True)
</code></pre>
<p>Also change the plot commands to:</p>
<pre><code>plt.loglog(N,error_trap, 'o')
plt.loglog(points,error_monte, 's')
</code></pre>
| 2 | 2016-09-28T06:40:46Z | [
"python",
"matplotlib"
]
|
Plotting two .txt files in the same figure using Python | 39,739,291 | <p>I am trying to plot two .txt files in the same figure. I am using a simple Python script for that.</p>
<pre><code>import sys
import os
import numpy
import matplotlib.pyplot as plt
from pylab import *
trap_error = 'trap_error.txt'
N , error = numpy.loadtxt(trap_error, unpack =True)
monte_error = 'monte_carlo_error.txt'
points, Integral, error = numpy.loadtxt(monte_error, unpack =True)
plt.loglog(N,error, 'o')
plt.loglog(points,error, 's')
plt.xlabel('Number of equally spaced points N')
plt.ylabel('error')
plt.legend(['trapezoid rule error', 'monte carlo error'], loc = 'upper right')
plt.title('Comparison of error in Trapezoid rule and Monte Carlo rule of Numerical integration')
plt.show()
</code></pre>
<p>The output figure only shows the Monte Carlo data, with no trace of the trapezoid data. The order of magnitude of these two data files is almost the same, so I do not understand why I cannot see the other data in the same figure. I am also sharing the data files for convenience.</p>
<pre><code> #points Integral error # monte_carlo_error.txt
2 1.400697 0.170100
4 1.415539 0.155258
8 1.394789 0.176008
16 1.444948 0.125848
32 1.501825 0.068971
64 1.577106 0.006309
128 1.558217 0.012580
256 1.563389 0.007407
512 1.570139 0.000657
1024 1.576300 0.005504
2048 1.585733 0.014937
4096 1.577355 0.006558
8192 1.577293 0.006497
16384 1.575404 0.004607
32768 1.572333 0.001536
65536 1.571028 0.000232
131072 1.570317 0.000479
262144 1.570318 0.000478
524288 1.570867 0.000070
1048576 1.571311 0.000515
#N error #trap_error.txt
2 0.629204
4 0.472341
8 0.243747
16 0.123551
32 0.062155
64 0.031166
128 0.015604
256 0.007807
512 0.003905
1024 0.001953
2048 0.000977
4096 0.000487
8192 0.000244
16384 0.000124
32768 0.000064
65536 0.000040
131072 0.000044
262144 0.000087
524288 0.000018
1048576 0.000615
</code></pre>
| 0 | 2016-09-28T06:11:57Z | 39,740,323 | <p>You overwrote the <code>error</code> values read from the trap_error.txt file. Use the code below to fix your issue:</p>
<pre><code>trap_error = 'trap_error.txt'
N , error1 = numpy.loadtxt(trap_error, unpack =True)
monte_error = 'monte_carlo_error.txt'
points, Integral, error2 = numpy.loadtxt(monte_error, unpack =True)
plt.loglog(N,error1, 'o')
plt.loglog(points,error2, 's')
</code></pre>
| 1 | 2016-09-28T07:07:17Z | [
"python",
"matplotlib"
]
|
Python script - String Search - Arista | 39,739,357 | <p>I'm trying to run a simple Python script on an Arista switch. I got some values by running a command on the switch and I need to extract one value from that command's output. For example:</p>
<pre><code>#!/usr/bin/python
from jsonrpclib import Server
switch = Server( "http://XXX:XXX@192.168.XX.XX/command-api" )
response = switch.runCmds( 1, [ "show interfaces ethernet 49 status" ] )
a = response[0]
print a
</code></pre>
<h2>output is</h2>
<pre><code>{'interfaceStatuses': {'Ethernet49': {'vlanInformation': {'interfaceMode': 'routed', 'interfaceForwardingModel': 'routed'}, 'bandwidth': 10000000000L, 'interfaceType': '10GBASE-SR', 'description': 'GH1TPQACORS1 Et1', 'autoNegotiateActive': False, 'duplex': 'duplexFull', 'autoNegotigateActive': False, ***'linkStatus': 'connected'***}}}
</code></pre>
<p>Out of this result, I just need the 'linkStatus': 'connected' value; how do I do it?</p>
| 0 | 2016-09-28T06:16:59Z | 39,739,493 | <p>Basically you have a dictionary of dictionaries of dictionaries. For this case, it's simple. But if you have multiple such cases, you have to iterate over the keys of the first two dictionaries (dict1: interface statuses and dict2: Ethernet interfaces):</p>
<pre><code>a['interfaceStatuses']['Ethernet49']['linkStatus']
</code></pre>
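<p>As a sketch (the response structure below is reconstructed from the question's printed output, not from a live switch), you can also walk the nested dictionaries defensively with <code>dict.get</code> when the interface names vary:</p>

```python
# Sample response shaped like the question's output (not live switch data).
response = {
    'interfaceStatuses': {
        'Ethernet49': {
            'interfaceType': '10GBASE-SR',
            'linkStatus': 'connected',
        }
    }
}

# Direct access when the keys are known:
status = response['interfaceStatuses']['Ethernet49']['linkStatus']

# Defensive iteration when the interface names vary:
statuses = {
    name: info.get('linkStatus', 'unknown')
    for name, info in response.get('interfaceStatuses', {}).items()
}

print(status)    # connected
print(statuses)  # {'Ethernet49': 'connected'}
```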
| 1 | 2016-09-28T06:25:00Z | [
"python",
"python-2.7",
"string-search"
]
|
Get minimum number column-wise in a multidimensional Pandas DataFrame - Python | 39,739,432 | <p>I am new to Pandas. I am trying to get the minimum number column-wise. So these are the steps I have followed:</p>
<ol>
<li><p>I read the CSV files using</p>
<p><code>data = [pd.read_csv(f, index_col=None, header=None) for f in temp]</code></p></li>
<li><p>Then added it to another data frame, <code>flow = pd.DataFrame(data)</code>, making it a "3D" data frame.</p></li>
</ol>
<p>So, <code>data</code> has <code>[128 rows x 14 columns] * 60 samples</code> without <code>index_col</code> and <code>header</code></p>
<p>One of the samples is:</p>
<pre><code>[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 3985.1 4393.3 4439.5 3662.1 5061.0 3990.8 4573.8 4036.9 4717.9 4225.6 4638.5 4157.9 4496.4 4007.7
1 3998.5 4398.5 4447.2 3660.0 5062.6 3986.7 4573.3 4045.1 4733.8 4238.5 4650.3 4167.2 4509.2 4022.6
2 3995.4 4397.9 4442.1 3659.5 5058.5 3987.2 4569.7 4039.5 4724.1 4234.9 4645.6 4161.5 4506.2 4014.9
3 3985.1 4396.9 4432.3 3660.0 5054.9 3988.2 4568.2 4037.9 4719.0 4230.3 4632.3 4150.8 4500.5 4004.1
4 3985.1 4391.3 4428.2 3661.5 5057.9 3987.2 4570.8 4044.6 4731.3 4236.9 4631.8 4151.8 4503.1 4005.6
5 3991.3 4391.8 4430.8 3662.6 5059.5 3987.7 4572.8 4044.6 4730.8 4237.4 4639.5 4157.4 4507.2 4009.7
6 3989.7 4396.9 4436.9 3661.5 5057.4 3987.7 4571.3 4035.4 4716.9 4230.3 4641.0 4156.9 4505.1 4010.8
7 3983.6 4392.8 4435.4 3660.0 5056.9 3987.2 4570.8 4032.8 4719.5 4227.7 4634.4 4153.8 4497.4 4008.2
8 3983.1 4388.7 4428.7 3661.5 5056.9 3987.7 4571.8 4041.0 4728.2 4231.8 4631.3 4154.4 4499.0 4004.6
9 3988.2 4395.9 4433.3 3662.1 5057.9 3987.7 4572.3 4040.5 4720.5 4231.3 4636.9 4154.9 4503.1 4005.1
10 3988.7 4398.5 4439.0 3660.0 5060.0 3986.7 4572.3 4032.3 4710.3 4225.1 4640.5 4154.9 4497.4 4008.2
11 3983.6 4391.3 4434.4 3661.0 5059.0 3988.7 4570.3 4041.0 4724.6 4235.4 4642.6 4163.1 4499.5 4010.8
12 3984.1 4388.7 4432.8 3664.1 5058.5 3991.8 4574.4 4051.8 4740.5 4245.1 4645.1 4170.8 4507.7 4014.4
13 3986.7 4390.8 4432.8 3664.1 5057.9 3991.3 4583.1 4043.1 4724.6 4231.8 4642.1 4161.5 4505.6 4012.8
14 3984.6 4395.4 4433.8 3661.5 5059.0 3991.3 4583.1 4036.9 4713.8 4222.1 4641.0 4157.4 4503.1 4010.8
15 3989.2 4400.5 4440.0 3661.0 5066.7 3994.4 4579.5 4045.1 4732.8 4233.8 4648.2 4170.3 4509.2 4016.4
16 3990.8 4394.4 4437.4 3661.5 5071.8 3996.4 4580.5 4045.1 4738.5 4239.5 4650.3 4171.3 4509.7 4016.4
17 3979.0 4383.6 4426.7 3660.0 5065.6 3995.4 4577.4 4034.4 4715.4 4228.2 4643.6 4158.5 4504.6 4005.1
18 3972.8 4383.1 4426.2 3660.0 5057.9 3991.8 4569.7 4034.4 4712.3 4228.2 4639.5 4157.9 4502.6 3999.0
19 3982.6 4386.7 4430.3 3661.5 5055.9 3987.2 4568.7 4045.6 4737.4 4243.1 4641.0 4166.7 4504.1 4007.7
20 3990.3 4389.7 4432.3 3661.5 5059.5 3989.7 4571.8 4047.2 4740.5 4245.1 4647.2 4169.2 4506.2 4014.9
21 3989.2 4392.8 4435.4 3661.0 5066.7 3996.9 4573.8 4035.9 4713.8 4232.3 4650.3 4166.7 4505.6 4014.4
22 3989.7 4391.8 4435.4 3661.5 5069.7 3997.4 4571.8 4035.4 4711.8 4231.3 4647.2 4167.7 4507.7 4017.4
23 3990.8 4389.7 4432.8 3660.0 5069.2 3996.9 4569.2 4044.6 4734.9 4237.9 4646.2 4168.7 4509.7 4020.0
24 3988.7 4393.3 4434.9 3659.0 5070.3 4000.5 4570.8 4041.0 4725.6 4232.8 4648.2 4166.7 4504.6 4016.4
25 3990.3 4397.9 4440.0 3661.0 5065.6 3997.9 4571.8 4039.0 4713.8 4230.8 4650.3 4169.7 4506.7 4019.0
26 3990.8 4396.4 4437.4 3662.1 5057.9 3988.7 4572.3 4045.1 4729.2 4236.4 4648.2 4169.7 4509.2 4022.6
27 3984.6 4385.1 4425.6 3661.5 5056.4 3990.8 4577.4 4041.5 4727.2 4231.8 4641.5 4158.5 4495.4 4010.3
28 3983.6 4381.0 4424.6 3662.1 5057.4 3999.5 4585.1 4037.4 4716.9 4229.7 4641.5 4157.4 4491.8 4006.2
29 3991.8 4391.3 4434.9 3662.1 5056.9 4000.0 4588.7 4040.5 4723.1 4234.4 4647.7 4167.7 4503.1 4017.4
.. ... ... ... ... ... ... ... ... ... ... ... ... ... ...
98 3988.2 4372.3 4424.1 3662.1 5040.5 3989.2 4585.6 4033.3 4719.0 4233.3 4647.2 4163.6 4502.1 4011.8
99 3993.8 4382.1 4429.2 3660.5 5042.1 3988.2 4590.3 4045.1 4737.4 4255.9 4659.0 4176.9 4514.4 4021.5
100 3992.8 4384.1 4430.3 3661.0 5041.0 3989.7 4601.5 4039.5 4733.3 4264.1 4663.1 4186.2 4512.3 4023.6
101 3988.2 4374.9 4424.6 3663.6 5040.0 3991.3 4601.0 4028.7 4719.0 4247.7 4654.9 4171.8 4505.1 4017.4
102 3989.7 4374.9 4427.2 3662.1 5040.5 3990.8 4590.3 4033.3 4716.9 4234.4 4654.4 4168.7 4508.7 4015.9
103 3987.2 4372.3 4428.7 3660.5 5036.4 3988.2 4585.1 4035.9 4719.5 4231.8 4651.3 4171.3 4504.6 4012.8
104 3979.5 4365.6 4421.5 3662.1 5030.3 3984.1 4586.2 4030.3 4717.4 4229.7 4641.0 4158.5 4491.8 4005.6
105 3982.1 4372.8 4420.5 3662.1 5032.3 3974.9 4586.2 4034.4 4719.0 4233.8 4640.0 4155.4 4495.4 4006.2
106 3987.7 4380.0 4427.7 3659.5 5037.9 3973.8 4584.1 4039.0 4720.5 4241.0 4644.1 4165.1 4509.2 4010.8
107 3987.2 4374.4 4428.7 3662.6 5039.5 3982.6 4585.1 4034.4 4719.0 4233.3 4641.5 4158.5 4506.7 4007.7
108 3982.6 4370.8 4420.0 3664.1 5036.9 3982.6 4587.7 4034.9 4724.1 4228.7 4639.0 4150.8 4495.4 4000.5
109 3979.0 4372.3 4414.4 3658.5 5029.2 3971.8 4580.0 4037.4 4723.6 4233.8 4639.5 4154.9 4492.8 3997.4
110 3979.0 4374.4 4418.5 3658.5 5027.7 3970.3 4571.3 4029.7 4712.3 4225.6 4640.0 4155.4 4496.9 3998.5
111 3986.2 4381.0 4428.2 3663.1 5037.4 3980.5 4580.0 4025.6 4705.1 4217.9 4643.6 4157.9 4504.1 4003.1
112 3991.3 4383.6 4430.3 3661.5 5042.6 3985.6 4585.6 4027.2 4708.7 4225.6 4644.6 4166.7 4508.2 4007.2
113 3983.6 4378.5 4432.8 3659.0 5034.4 3976.9 4573.8 4032.8 4725.6 4236.9 4643.6 4165.6 4504.1 4005.1
114 3976.4 4380.0 4443.6 3661.0 5028.2 3968.7 4572.8 4037.4 4735.4 4247.2 4649.7 4168.2 4507.7 4008.2
115 3973.8 4378.5 4441.5 3661.5 5033.3 3974.4 4585.6 4028.2 4713.3 4236.9 4650.8 4170.8 4508.2 4004.1
116 3971.8 4370.3 4431.8 3661.0 5036.4 3983.6 4588.7 4019.0 4696.4 4212.3 4639.0 4159.0 4496.9 3991.8
117 3972.3 4371.8 4437.4 3661.0 5031.3 3982.1 4585.1 4032.3 4720.5 4218.5 4637.4 4155.9 4496.9 3994.9
118 3973.8 4379.0 4444.1 3660.5 5032.3 3980.0 4587.2 4041.0 4730.8 4236.9 4646.7 4166.7 4506.2 4006.7
119 3982.1 4385.1 4447.2 3661.5 5040.5 3984.1 4586.7 4024.6 4708.2 4230.3 4648.2 4168.7 4506.7 4010.3
120 3991.3 4390.8 4452.8 3663.1 5043.1 3985.1 4576.4 4019.0 4710.8 4228.2 4650.3 4168.7 4505.6 4011.8
121 3989.2 4386.7 4451.3 3660.5 5041.0 3981.5 4568.2 4032.3 4733.3 4237.9 4657.4 4172.8 4508.2 4011.3
122 3983.6 4384.1 4448.7 3658.5 5040.0 3982.6 4574.4 4036.9 4730.8 4237.4 4656.4 4172.3 4505.6 4008.7
123 3987.7 4391.3 4455.4 3661.0 5038.5 3984.6 4585.6 4029.7 4716.4 4231.3 4655.4 4171.3 4504.1 4012.8
124 3990.8 4392.8 4460.0 3660.0 5038.5 3983.6 4583.1 4026.2 4714.4 4231.3 4656.9 4172.3 4506.2 4013.8
125 3988.7 4390.8 4456.4 3657.9 5040.0 3984.6 4576.4 4025.1 4715.9 4231.3 4651.8 4167.2 4505.1 4012.8
126 3990.3 4393.8 4455.9 3659.0 5040.0 3983.1 4577.4 4026.7 4720.5 4231.8 4647.2 4167.2 4505.6 4018.5
127 3988.2 4392.8 4453.3 3660.0 5040.5 3976.9 4581.5 4033.8 4732.8 4235.4 4649.2 4170.8 4506.2 4015.9
[128 rows x 14 columns]]
</code></pre>
<p>I am trying to get the minimum number column-wise for every sample. How should I do it?</p>
<p>I tried using <code>min()</code>, by doing <code>data[0][0].min()</code> but I get the following as the output:</p>
<pre><code>[[ 3985.1 4393.3 4439.5 ..., 4157.9 4496.4 4007.7]
[ 3998.5 4398.5 4447.2 ..., 4167.2 4509.2 4022.6]
[ 3995.4 4397.9 4442.1 ..., 4161.5 4506.2 4014.9]
...,
[ 3988.7 4390.8 4456.4 ..., 4167.2 4505.1 4012.8]
[ 3990.3 4393.8 4455.9 ..., 4167.2 4505.6 4018.5]
[ 3988.2 4392.8 4453.3 ..., 4170.8 4506.2 4015.9]]
</code></pre>
<p>It's the same as the sample. I don't know what the problem is here.</p>
| 2 | 2016-09-28T06:21:28Z | 39,739,495 | <p>I think you need:</p>
<pre><code>print (data[0].min(axis=1))
0 3662.1
1 3660.0
2 3659.5
3 3660.0
4 3661.5
5 3662.6
6 3661.5
7 3660.0
8 3661.5
9 3662.1
10 3660.0
11 3661.0
12 3664.1
13 3664.1
14 3661.5
15 3661.0
...
...
</code></pre>
<p>Maybe it is better to omit <code>flow = pd.DataFrame(data)</code> and use:</p>
<pre><code>data = [pd.read_csv(f, index_col=None, header=None) for f in temp]
mins = [df.min(axis=1) for df in data]
print (mins[0])
print (mins[1])
</code></pre>
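<p>A small sketch of the two axis conventions (toy data, not the poster's samples): <code>axis=1</code> takes the minimum across each row, while <code>axis=0</code> takes the minimum of each column:</p>

```python
import pandas as pd

# A tiny frame standing in for one of the samples.
df = pd.DataFrame({'a': [3.0, 1.0], 'b': [2.0, 4.0]})

row_mins = df.min(axis=1)  # minimum across the columns of each row
col_mins = df.min(axis=0)  # minimum of each column

print(row_mins.tolist())  # [2.0, 1.0]
print(col_mins.tolist())  # [1.0, 2.0]
```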
| 1 | 2016-09-28T06:25:09Z | [
"python",
"pandas"
]
|
Python - assign value and check condition in IF statement | 39,739,510 | <p>I have this code:</p>
<pre><code>str = func(parameter)
if not str:
do something.
</code></pre>
<p>the function <code>func()</code> returns a <code>string</code> on success and <code>''</code> on failure.
The <code>do something</code> should happen only if <code>str</code> actually contains a string.</p>
<p>Is it possible to do the assignment to <code>str</code> in the <code>if</code> statement itself?</p>
| 1 | 2016-09-28T06:25:51Z | 39,739,567 | <p>Why not try it yourself?</p>
<pre><code>if not (a = some_func()):
do something
^
SyntaxError: invalid syntax
</code></pre>
<p><strong>So, no.</strong></p>
| 1 | 2016-09-28T06:28:59Z | [
"python"
]
|
Python - assign value and check condition in IF statement | 39,739,510 | <p>I have this code:</p>
<pre><code>str = func(parameter)
if not str:
do something.
</code></pre>
<p>the function <code>func()</code> returns a <code>string</code> on success and <code>''</code> on failure.
The <code>do something</code> should happen only if <code>str</code> actually contains a string.</p>
<p>Is it possible to do the assignment to <code>str</code> in the <code>if</code> statement itself?</p>
| 1 | 2016-09-28T06:25:51Z | 39,739,578 | <p>With the code snippet below, the <code>do something</code> part will happen only if <code>func</code> returns a string of length > 0:</p>
<pre><code>str = func(parameter)
if str:
do something
</code></pre>
| 0 | 2016-09-28T06:29:39Z | [
"python"
]
|
Python - assign value and check condition in IF statement | 39,739,510 | <p>I have this code:</p>
<pre><code>str = func(parameter)
if not str:
do something.
</code></pre>
<p>the function <code>func()</code> returns a <code>string</code> on success and <code>''</code> on failure.
The <code>do something</code> should happen only if <code>str</code> actually contains a string.</p>
<p>Is it possible to do the assignment to <code>str</code> in the <code>if</code> statement itself?</p>
| 1 | 2016-09-28T06:25:51Z | 39,739,579 | <p>You can try this, it's the shortest version I could come up with: </p>
<pre><code>if func(parameter):
do something.
</code></pre>
| 0 | 2016-09-28T06:29:42Z | [
"python"
]
|
Python - assign value and check condition in IF statement | 39,739,510 | <p>I have this code:</p>
<pre><code>str = func(parameter)
if not str:
do something.
</code></pre>
<p>the function <code>func()</code> returns a <code>string</code> on success and <code>''</code> on failure.
The <code>do something</code> should happen only if <code>str</code> actually contains a string.</p>
<p>Is it possible to do the assignment to <code>str</code> in the <code>if</code> statement itself?</p>
| 1 | 2016-09-28T06:25:51Z | 39,739,616 | <p>In one word: no. In Python, assignment is a statement, not an expression.</p>
| 1 | 2016-09-28T06:31:52Z | [
"python"
]
|
Stack the labels of an axis with matplotlib | 39,739,644 | <p>I'm creating a bar graph and showing multiple values on the x-axis. By default they are shown in series with a "," separating them as shown below. Instead of a comma, how could I show the values stacked on top of each other as drawn on the image below? This would save space on the x-axis to allow for bigger graphs when I want to show multiple values. </p>
<pre><code>import pandas as pd
import matplotlib as plt
dfex = pd.DataFrame({'City': ['LA', 'SF', 'Dallas'],
'Lakes': [3, 9, 6],
'Rivers': [1, 0, 0],
'State': ['CA', 'CA', 'TX'],
'Waterfalls': [2, 4, 5]})
myplot = dfex.plot(x=['City','State'],kind='bar',stacked='True')
</code></pre>
<p><a href="http://i.stack.imgur.com/RdNuX.png" rel="nofollow"><img src="http://i.stack.imgur.com/RdNuX.png" alt="enter image description here"></a></p>
| 2 | 2016-09-28T06:32:59Z | 39,744,519 | <p>You can simply hack the x-axis tick labels to achieve what you want.</p>
<pre><code>ticks = myplot.xaxis.get_ticklabels()
new_ticks = ['\n'.join(t.get_text()[1:-1].split(', ')) for t in ticks]
myplot.xaxis.set_ticklabels(new_ticks)
</code></pre>
<p><a href="http://i.stack.imgur.com/VQ1XE.png" rel="nofollow"><img src="http://i.stack.imgur.com/VQ1XE.png" alt="enter image description here"></a></p>
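<p>The label transformation itself can be checked without a figure. Assuming the composite tick labels render as <code>(City, State)</code> strings, as in the plot above:</p>

```python
# Hypothetical tick-label strings as matplotlib renders the (City, State) tuples.
labels = ['(LA, CA)', '(SF, CA)', '(Dallas, TX)']

# Strip the surrounding parentheses and stack the parts with a newline.
stacked = ['\n'.join(t[1:-1].split(', ')) for t in labels]

print(stacked)  # ['LA\nCA', 'SF\nCA', 'Dallas\nTX']
```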
| 2 | 2016-09-28T10:13:39Z | [
"python",
"matplotlib"
]
|
Interactive System Arguments in Python | 39,739,686 | <p>I have written a script which is to be run from the command prompt, taking arguments from the command line.</p>
<p>What I have tried:</p>
<pre><code>start_date = sys.argv[1]
end_date = sys.argv[2]
end_date2 = datetime.strptime(end_date, "%d %b %Y").date()
start_date2 = datetime.strptime(start_date, "%d %b %Y").date()
--- Do Something ---
</code></pre>
<p>and I am running the script thus:</p>
<pre><code>python myscript.py "19 Aug 2016" "19 Sep 2016"
</code></pre>
<p>This runs smoothly. However, what I am doing in the script requires a condition: <code>start_date2 > end_date2</code></p>
<p>So I am using an if statement to enforce this and <code>exit()</code> the code if the above condition is not met.</p>
<p>My question:</p>
<p>Is it possible to reset the arguments if the above condition is not met?</p>
<p>Something like:</p>
<pre><code>if start_date2 < end_date2 :
re enter the arguments from the command prompt
without stopping the code and take the new arguments and ignore the wrong ones.
</code></pre>
<p>Kind of like an interactive script.</p>
| 0 | 2016-09-28T06:34:50Z | 39,739,734 | <p>Yes, it is possible. After detecting that the dates are wrong, simply use <code>print</code>/<code>raw_input</code> to ask the user for new dates, overwrite the old ones, and check the dates again.</p>
<p>For example (pseudocode a bit):</p>
<pre><code>start_date = sys.argv[1]
while start_date.isWrong():
print "start_date is wrong, gimme new one and press enter: "
start_date = parse_from_string(raw_input())
</code></pre>
<p>Anyway, you should try <a href="https://docs.python.org/2.7/library/argparse.html" rel="nofollow">argparse</a> to read command-line options.</p>
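<p>A runnable sketch of such a re-prompt loop (the prompt wording and the injectable <code>read_line</code> parameter are illustrative assumptions, not part of the question):</p>

```python
from datetime import datetime

def read_dates(start_str, end_str, read_line=input):
    """Parse the two dates; re-prompt until start_date2 > end_date2 holds."""
    fmt = "%d %b %Y"
    while True:
        try:
            start = datetime.strptime(start_str, fmt).date()
            end = datetime.strptime(end_str, fmt).date()
            if start > end:
                return start, end
            print("start date must be after end date, re-enter both:")
        except ValueError:
            print("unparseable date, re-enter both:")
        start_str = read_line()
        end_str = read_line()

# Simulated user input: the first pair is rejected, the swapped pair accepted.
fake = iter(["19 Sep 2016", "19 Aug 2016"])
start, end = read_dates("19 Aug 2016", "19 Sep 2016",
                        read_line=lambda: next(fake))
print(start > end)  # True
```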
| 0 | 2016-09-28T06:37:15Z | [
"python",
"python-2.7",
"sys"
]
|
What does it mean when the cumtimes of callees don't add up to the total cumtime of a function? | 39,739,697 | <p>I'm using <code>cProfile.Profile</code>.</p>
<p>In the output of <code>print_callees</code>, I can see that a function takes about 2 seconds of <code>cumtime</code>.</p>
<p>But when I check the output of the functions it calls, their <code>cumtime</code> don't sum up to that of the caller. Actually it's much smaller.</p>
<p>What could be the reason of this?</p>
| 2 | 2016-09-28T06:35:27Z | 39,739,842 | <p>This means that the function performs computations (and most probably contains a loop).</p>
<p>Example:</p>
<pre><code>def foo(n):
s = 0
for i in range(n):
s += i
print(s)
return s
</code></pre>
<p><code>print()</code> is a called by <code>foo()</code>, but for a large enough <code>n</code> more time is spent in the <code>for</code> loop (and thus in <code>foo()</code> proper) than in <code>print()</code>. Thus <code>cumtime</code> of <code>foo()</code> should significantly exceed <code>cumtime</code> of <code>print()</code>.</p>
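<p>A minimal profiling run illustrates the gap (the function names and loop sizes here are arbitrary): most of <code>outer</code>'s cumulative time is spent in its own loop rather than in its callee:</p>

```python
import cProfile
import io
import pstats

def helper():
    return sum(range(100))

def outer():
    total = 0
    for i in range(200000):  # work done in outer() itself
        total += i
    for _ in range(50):      # a comparatively cheap callee
        total += helper()
    return total

pr = cProfile.Profile()
pr.enable()
outer()
pr.disable()

# Render the stats; outer's cumtime exceeds the sum of its callees' cumtimes.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats()
report = buf.getvalue()
print('outer' in report and 'helper' in report)  # True
```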
| 0 | 2016-09-28T06:42:45Z | [
"python",
"performance"
]
|
What does it mean when the cumtimes of callees don't add up to the total cumtime of a function? | 39,739,697 | <p>I'm using <code>cProfile.Profile</code>.</p>
<p>In the output of <code>print_callees</code>, I can see that a function takes about 2 seconds of <code>cumtime</code>.</p>
<p>But when I check the output of the functions it calls, their <code>cumtime</code> don't sum up to that of the caller. Actually it's much smaller.</p>
<p>What could be the reason of this?</p>
| 2 | 2016-09-28T06:35:27Z | 39,739,858 | <p>That actually makes a lot of sense: the difference is the time taken by the code in the function itself, excluding the calls it makes. For example:</p>
<pre><code>def bar(): pass
def baz(): pass

def foo():
    for i in range(99999):
        print('hello')
    bar()
    baz()
</code></pre>
<p>The cumulative time of <code>foo</code> will be much larger than the sum of the times of <code>bar()</code> and <code>baz()</code> - it also has to run the loop.</p>
| 1 | 2016-09-28T06:43:27Z | [
"python",
"performance"
]
|
How to keep the number fraction format in Python3.x while performing simple operations? | 39,739,706 | <p>In code, I would like to keep numbers in fraction format while performing simple operations (e.g. the multiplication 1/6 * 1/6) in Python 3.x.</p>
<p>I want to have:</p>
<pre><code>{
2: 1/36,
3: 2/36,
4: 3/36,
5: 4/36,
6: 5/36,
7: 6/36,
8: 5/36,
9: 4/36,
10: 3/36,
11: 2/36,
12: 1/36
}
</code></pre>
<p>but i get this:</p>
<pre><code>{
2: 0.027777777777777776,
3: 0.05555555555555555,
4: 0.08333333333333333,
5: 0.1111111111111111,
6: 0.1388888888888889,
7: 0.16666666666666669,
8: 0.1388888888888889,
9: 0.1111111111111111,
10: 0.08333333333333333,
11: 0.05555555555555555,
12: 0.027777777777777776
}
</code></pre>
<p>Any suggestions?</p>
| 0 | 2016-09-28T06:35:57Z | 39,739,853 | <p>I'm not aware of any programming language <strong>not</strong> evaluating such an expression to a float unless told otherwise.</p>
<p>In the case of Python you should use the <a href="https://docs.python.org/3/library/fractions.html?highlight=fraction#module-fractions" rel="nofollow">fractions</a> module.</p>
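<p>For instance, a sketch of the dice-probability arithmetic from the question:</p>

```python
from fractions import Fraction

# 1/6 * 1/6 stays an exact rational number instead of collapsing to a float.
p = Fraction(1, 6) * Fraction(1, 6)

print(p)      # 1/36
print(p + p)  # 1/18  (results stay exact and auto-reduce)
```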
| 0 | 2016-09-28T06:43:06Z | [
"python"
]
|
How to keep the number fraction format in Python3.x while performing simple operations? | 39,739,706 | <p>In code, I would like to keep numbers in fraction format while performing simple operations (e.g. the multiplication 1/6 * 1/6) in Python 3.x.</p>
<p>I want to have:</p>
<pre><code>{
2: 1/36,
3: 2/36,
4: 3/36,
5: 4/36,
6: 5/36,
7: 6/36,
8: 5/36,
9: 4/36,
10: 3/36,
11: 2/36,
12: 1/36
}
</code></pre>
<p>but i get this:</p>
<pre><code>{
2: 0.027777777777777776,
3: 0.05555555555555555,
4: 0.08333333333333333,
5: 0.1111111111111111,
6: 0.1388888888888889,
7: 0.16666666666666669,
8: 0.1388888888888889,
9: 0.1111111111111111,
10: 0.08333333333333333,
11: 0.05555555555555555,
12: 0.027777777777777776
}
</code></pre>
<p>Any suggestions?</p>
| 0 | 2016-09-28T06:35:57Z | 39,740,025 | <p>Using the <a href="https://docs.python.org/3/library/fractions.html" rel="nofollow">fractions</a> module and string formatting you can print rational numbers like this:</p>
<pre><code>import fractions
print("{}".format(fractions.Fraction(0.027777777777777776).limit_denominator()))
</code></pre>
<p>this will output</p>
<pre><code>1/36
</code></pre>
| 0 | 2016-09-28T06:53:10Z | [
"python"
]
|
Column as alias is ambiguous in Postgresql | 39,739,716 | <p>I am migrating a database from MySQL to PostgreSQL. In MySQL I created a view like this:</p>
<pre><code> create or replace view translated_attributes_with_attribute_templatevalues as
select
concat_ws('',
translated_attributevalues.attribute_id,
translated_attributevalues.languagecode,
attribute_template_value.id
) AS id,
...
GROUP BY id
</code></pre>
<p>But in PostgreSQL I got the message:</p>
<pre><code>column reference "id" is ambiguous
LINE 1: ...GROUP BY id</code></pre>
<p>How can I use the alias "id"?</p>
<p>I renamed it, but then other parts of the code break, because they assume the column is named id.</p>
| 0 | 2016-09-28T06:36:21Z | 39,740,399 | <p>Either repeat the expression in the <code>GROUP BY</code> clause:</p>
<pre><code>GROUP BY concat_ws('', ...)
</code></pre>
<p>or use the result column number:</p>
<pre><code>GROUP BY 1
</code></pre>
<p>The only advantage of the first solution is that it complies with the SQL standard.</p>
| 1 | 2016-09-28T07:10:40Z | [
"python",
"django",
"postgresql"
]
|
Generate two loops with different ranges | 39,739,876 | <p>I am trying to generate two loops and I want the second loop to count from the first number of my first loop in range of 10 until 80. It looks like this:</p>
<pre><code>0
0
1
12345678910
2
234567891011121314151617181920
3
3456789101112131415161718192021222324252627282930
</code></pre>
<p>Continue like this until 8 in my first loop and 80 in my second.
My code so far:</p>
<pre><code>count = 1
for i in range(9):
for j in range(0, i,10):
print(count, end=80)
count = count i+1
print()
</code></pre>
<p>Any suggestions on what I am doing wrong will be very helpful.</p>
| -1 | 2016-09-28T06:44:29Z | 39,740,555 | <p>You have a number of syntax errors in your code. I think I have a working version for you here:</p>
<pre><code>for i in range(9):
end_second_loop = (i * 10) + 1
print(i)
for j in range(i, end_second_loop):
print(j, end='')
print()
</code></pre>
<p>I'll also go over what was wrong syntactically with your first code. </p>
<p>In order to get your code to run, an extra space before your last <code>print()</code> call had to be removed, the <code>end=80</code> had to be changed (<code>end</code> in that case must be a string, such as <code>end=''</code> to force <code>print()</code> not to add a newline), and a <code>+</code> was missing from <code>count = count i+1</code>. I imagine you wanted:</p>
<pre><code>count = count + i + 1
</code></pre>
<p>That gets it running, but not yet producing the right output. Based on the way you formatted your desired output, 3 <code>print()</code> statements are necessary.</p>
<ol>
<li>The first one prints the number from the outer loop</li>
<li>The next one prints the count from the first number up to that number multiplied by 10 (using <code>end=''</code> so it all prints on on line)</li>
<li>The last one is an empty statement, in order to add a newline character</li>
</ol>
<p>Inside the outer loop, but before entering the inner loop, I defined <code>end_second_loop</code> to let the second loop know when to stop, and it updates on each iteration of the outer loop. Then the inner loop gets to do its work. Importantly, the 3rd <code>print()</code> statement must be called <strong>outside</strong> the inner loop, but <strong>inside</strong> the outer loop. As you had it at first, it was being called inside the inner loop, preventing it from printing all on one line. By moving it out of that loop, and to the end of the first loop, it only prints a newline once on each iteration of the outer loop, and only after the inner loop is done printing all of its numbers to one line.</p>
<p>I hope that helps.</p>
| 1 | 2016-09-28T07:17:55Z | [
"python",
"loops",
"nested"
]
|
Generate two loops with different ranges | 39,739,876 | <p>I am trying to generate two loops and I want the second loop to count from the first number of my first loop in range of 10 until 80. It looks like this:</p>
<pre><code>0
0
1
12345678910
2
234567891011121314151617181920
3
3456789101112131415161718192021222324252627282930
</code></pre>
<p>Continue like this until 8 in my first loop and 80 in my second.
My code so far</p>
<pre><code>count = 1
for i in range(9):
for j in range(0, i,10):
print(count, end=80)
count = count i+1
print()
</code></pre>
<p>Any suggestion on what I am doing wrong will be very helpful.</p>
| -1 | 2016-09-28T06:44:29Z | 39,741,061 | <p>You don't need an inner <code>for</code> loop to print the run of integers: use the <code>str.join</code> method. Here is an example:</p>
<pre><code>for x in range(9):
print('{}\n{}'.format(str(x), ''.join(map(str, range(x, 10 * x + 1)))))
</code></pre>
<p>The trick is to map the integers into strings before performing the join operation: <code>map(str, range(x, 10 * x + 1))</code>.</p>
| 0 | 2016-09-28T07:42:15Z | [
"python",
"loops",
"nested"
]
|
heroku-django-uploaded image is not displayed if DEBUG=False | 39,739,904 | <p>I deployed my Django app on heroku. Every thing is working fine except displaying images. Any uploaded image is not displayed if DEBUG=False.</p>
<p>settings.py</p>
<pre><code>DEBUG = False
ALLOWED_HOSTS = ['salma-blog.herokuapp.com']
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
STATIC_ROOT = os.path.dirname(myblog.__file__)+'/static/'
STATIC_URL = '/static/'
#upload images
MEDIA_ROOT= os.path.dirname(myblog.__file__)+'/static/myblog/images'
MEDIA_URL='/images/'
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns=[
...
]+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>image tag in my template</p>
<pre><code><img alt="img" src="/blog{{image}}"></a>
</code></pre>
| 0 | 2016-09-28T06:46:10Z | 39,741,989 | <p>Whitenoise uses <a href="http://whitenoise.evans.io/en/stable/django.html#WHITENOISE_USE_FINDERS" rel="nofollow">static file finders</a> when <code>DEBUG=True</code>, so you most likely have a problem with your static file collection process.</p>
<p>Update your <code>STATIC_ROOT</code> and <code>MEDIA_ROOT</code> settings to a recommended method, such as:</p>
<pre><code>STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'images')
</code></pre>
| 0 | 2016-09-28T08:29:18Z | [
"python",
"django",
"heroku"
]
|
IO error in Python | 39,739,912 | <p>I am trying to run this code:</p>
<pre><code> import urllib
htmlfile = urllib.urlopen("https://www.google.co.in/?gfe_rd=cr&ei=7VzrV6WWG8KC0ATxor_IDw")
htmltext = htmlfile.read()
print htmltext
</code></pre>
<p>But the following error is shown, when I run the code:</p>
<pre><code> Traceback (most recent call last):
File "C:\Python27\newscrap.py", line 2, in <module>
htmlfile = urllib.urlopen("https://www.google.co.in/?gfe_rd=cr&ei=7VzrV6WWG8KC0ATxor_IDw")
File "C:\Python27\lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 213, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 443, in open_https
h.endheaders(data)
File "C:\Python27\lib\httplib.py", line 997, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 850, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 812, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 1208, in connect
HTTPConnection.connect(self)
File "C:\Python27\lib\httplib.py", line 793, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 553, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno 11004] getaddrinfo failed
</code></pre>
<p>Can anyone tell why this error occurs?</p>
| 0 | 2016-09-28T06:46:32Z | 39,739,982 | <p>It's a connection problem (your code works on my computer). Check your firewall, proxy settings, DNS server, and other connection settings.</p>
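<p>The <code>getaddrinfo failed</code> line in the traceback means the hostname could not be resolved at all. As a hedged sketch (not part of the original answer), you can isolate just the name-resolution step with the standard <code>socket</code> module:</p>

```python
import socket

def can_resolve(host, port=80):
    """Return True if name resolution succeeds -- the step that raises
    errno 11004 in the traceback above."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))  # True when the resolver is working
```

<p>If this returns <code>False</code> for the host you are trying to reach, the problem is DNS or proxy configuration, not your Python code.</p>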
| 1 | 2016-09-28T06:50:33Z | [
"python"
]
|
Outputting two graphs at once using matplotlib | 39,740,109 | <p>In short, I want to write a function that will output a <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html#visualization-scatter-matrix" rel="nofollow">scatter matrix</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html" rel="nofollow">boxplot</a> at one time in python.
I figured I would do this by creating a figure with a 2x1 plotting array. However, when I run the code in a Jupyter notebook: </p>
<pre><code>def boxplotscatter(data):
f, ax = plt.subplots(2, 1, figsize = (12,6))
ax[0] = scatter_matrix(data, alpha = 0.2, figsize = (6,6), diagonal = 'kde')
ax[1] = data.boxplot()
</code></pre>
<p>I get, using data called <code>pdf</code>: </p>
<p><a href="http://i.stack.imgur.com/39v84.png" rel="nofollow"><img src="http://i.stack.imgur.com/39v84.png" alt="enter image description here"></a></p>
<p>That isn't exactly what I expected -- I wanted to output the scatter matrix and below it the boxplot, not two empty grids and a boxplot embedded in a scatter matrix. </p>
<p>Thoughts on fixing this code?</p>
| 3 | 2016-09-28T06:57:35Z | 39,740,629 | <p>The issue here is that you are not actually plotting on the created axes; you are just replacing the contents of your list <code>ax</code>.
I'm not very familiar with the object-oriented interface for matplotlib, but the right syntax would look more like <code>ax[i,j].plot()</code> rather than <code>ax[i] = plot()</code>.
Now the only real solution I can provide uses the functional interface, like this:</p>
<pre><code>def boxplotscatter(data):
    plt.figure(figsize = (12,6))  # one figure; plt.subplot() below selects the panels
plt.subplot(211)
scatter_matrix(data, alpha = 0.2, figsize = (6,6), diagonal = 'kde')
plt.subplot(212)
data.boxplot()
</code></pre>
| 0 | 2016-09-28T07:21:33Z | [
"python",
"matplotlib",
"jupyter-notebook"
]
|
Outputting two graphs at once using matplotlib | 39,740,109 | <p>In short, I want to write a function that will output a <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html#visualization-scatter-matrix" rel="nofollow">scatter matrix</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html" rel="nofollow">boxplot</a> at one time in python.
I figured I would do this by creating a figure with a 2x1 plotting array. However, when I run the code in a Jupyter notebook: </p>
<pre><code>def boxplotscatter(data):
f, ax = plt.subplots(2, 1, figsize = (12,6))
ax[0] = scatter_matrix(data, alpha = 0.2, figsize = (6,6), diagonal = 'kde')
ax[1] = data.boxplot()
</code></pre>
<p>I get, using data called <code>pdf</code>: </p>
<p><a href="http://i.stack.imgur.com/39v84.png" rel="nofollow"><img src="http://i.stack.imgur.com/39v84.png" alt="enter image description here"></a></p>
<p>That isn't exactly what I expected -- I wanted to output the scatter matrix and below it the boxplot, not two empty grids and a boxplot embedded in a scatter matrix. </p>
<p>Thoughts on fixing this code?</p>
| 3 | 2016-09-28T06:57:35Z | 39,746,080 | <p>I think you just need to pass the axis as an argument of your plotting function. </p>
<pre><code>f, ax = plt.subplots(2, 1, figsize = (12,6))
def boxplotscatter(data, ax):
    scatter_matrix(data, alpha = 0.2, diagonal = 'kde', ax = ax[0])
    data.boxplot(ax = ax[1])
</code></pre>
| 1 | 2016-09-28T11:25:00Z | [
"python",
"matplotlib",
"jupyter-notebook"
]
|
If I write two pieces of software that compute the same thing but one does it twice as fast. Does that imply different orders of complexity? | 39,740,155 | <p>For a given balance and interest rate my programs compute the minimum monthly payment to pay off the debt in a year.
However, one computes it on average in ~0.000150s and the other in ~0.000300s. Does that imply different degrees of asymptotic complexity?</p>
<p>These are the code samples: </p>
<p>The slower one:</p>
<pre><code>import time
start_time = time.time()
balance = 999999
annualInterestRate = 0.18
mRate = annualInterestRate/12
high = (((mRate+1)**12)*balance)/12
low = balance/12
guessed = False
def balanceLeft(balance,mRate,minPayment):
monthsLeft = 12
while monthsLeft > 0:
unpaidBalance = balance - minPayment
interest = mRate * unpaidBalance
balance = unpaidBalance
balance += interest
monthsLeft -= 1
return balance
while guessed == False:
minPayment = (high + low) / 2
if round(balanceLeft(balance,mRate,minPayment),2) < 0:
high = minPayment
elif round(balanceLeft(balance,mRate,minPayment),2)> 0:
low = minPayment
else:
if abs(round(balanceLeft(balance,mRate,minPayment),2) - 0) < 0.01:
guessed = True
print('Lowest Payment: ',end='')
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
<p>The faster one</p>
<pre><code>import time
start_time = time.time()
annualInterestRate = 0.18
rate = annualInterestRate / 12
monthsLeftr = 12
xCoefficent = 1 + rate
ConstantTerm = 1 + rate
while monthsLeftr > 1:
xCoefficent = (xCoefficent + 1) * ConstantTerm
monthsLeftr -= 1
balance = 999999
monthsLeft = 12
while monthsLeft > 0:
balance = balance * ConstantTerm
monthsLeft -= 1
minPayment = balance / xCoefficent
print('Lowest Payment: ', end="")
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
| -1 | 2016-09-28T06:59:13Z | 39,740,235 | <p>No, they are not the same program. The first one has a while loop that calls a function containing another while loop, so the two programs have different complexity.</p>
<p>The first one is clearly the slower one (a quadratic-complexity program); the second one has no such inner loops and is a linear-complexity program. </p>
| 0 | 2016-09-28T07:02:54Z | [
"python",
"algorithm",
"python-3.x",
"complexity-theory"
]
|
If I write two pieces of software that compute the same thing but one does it twice as fast. Does that imply different orders of complexity? | 39,740,155 | <p>For a given balance and interest rate my programs compute the minimum monthly payment to pay off the debt in a year.
However, one computes it on average in ~0.000150s and the other in ~0.000300s. Does that imply different degrees of asymptotic complexity?</p>
<p>These are the code samples: </p>
<p>The slower one:</p>
<pre><code>import time
start_time = time.time()
balance = 999999
annualInterestRate = 0.18
mRate = annualInterestRate/12
high = (((mRate+1)**12)*balance)/12
low = balance/12
guessed = False
def balanceLeft(balance,mRate,minPayment):
monthsLeft = 12
while monthsLeft > 0:
unpaidBalance = balance - minPayment
interest = mRate * unpaidBalance
balance = unpaidBalance
balance += interest
monthsLeft -= 1
return balance
while guessed == False:
minPayment = (high + low) / 2
if round(balanceLeft(balance,mRate,minPayment),2) < 0:
high = minPayment
elif round(balanceLeft(balance,mRate,minPayment),2)> 0:
low = minPayment
else:
if abs(round(balanceLeft(balance,mRate,minPayment),2) - 0) < 0.01:
guessed = True
print('Lowest Payment: ',end='')
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
<p>The faster one</p>
<pre><code>import time
start_time = time.time()
annualInterestRate = 0.18
rate = annualInterestRate / 12
monthsLeftr = 12
xCoefficent = 1 + rate
ConstantTerm = 1 + rate
while monthsLeftr > 1:
xCoefficent = (xCoefficent + 1) * ConstantTerm
monthsLeftr -= 1
balance = 999999
monthsLeft = 12
while monthsLeft > 0:
balance = balance * ConstantTerm
monthsLeft -= 1
minPayment = balance / xCoefficent
print('Lowest Payment: ', end="")
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
| -1 | 2016-09-28T06:59:13Z | 39,740,251 | <p>Absolutely not.</p>
<p>Asymptotic complexity never describes absolute running times, but the trend when the problem size increases.</p>
<p>In practice, it is extremely frequent that algorithms with a better asymptotic complexity run slower for small problem instances.</p>
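<p>To make the point concrete, here is a small illustrative sketch (hypothetical step counts, not the poster's programs): an O(n) algorithm with a large constant factor loses to an O(n²) algorithm on small inputs, even though the asymptotic trend favors it.</p>

```python
def linear_steps(n):
    return 100 * n   # O(n), but with a large constant factor

def quadratic_steps(n):
    return n * n     # O(n^2), with a small constant factor

# For small n the asymptotically worse algorithm does less work...
print(quadratic_steps(10) < linear_steps(10))      # True
# ...but the trend reverses past the crossover point n = 100.
print(quadratic_steps(1000) < linear_steps(1000))  # False
```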
| 5 | 2016-09-28T07:03:41Z | [
"python",
"algorithm",
"python-3.x",
"complexity-theory"
]
|
If I write two pieces of software that compute the same thing but one does it twice as fast. Does that imply different orders of complexity? | 39,740,155 | <p>For a given balance and interest rate my programs compute the minimum monthly payment to pay off the debt in a year.
However, one computes it on average in ~0.000150s and the other in ~0.000300s. Does that imply different degrees of asymptotic complexity?</p>
<p>These are the code samples: </p>
<p>The slower one:</p>
<pre><code>import time
start_time = time.time()
balance = 999999
annualInterestRate = 0.18
mRate = annualInterestRate/12
high = (((mRate+1)**12)*balance)/12
low = balance/12
guessed = False
def balanceLeft(balance,mRate,minPayment):
monthsLeft = 12
while monthsLeft > 0:
unpaidBalance = balance - minPayment
interest = mRate * unpaidBalance
balance = unpaidBalance
balance += interest
monthsLeft -= 1
return balance
while guessed == False:
minPayment = (high + low) / 2
if round(balanceLeft(balance,mRate,minPayment),2) < 0:
high = minPayment
elif round(balanceLeft(balance,mRate,minPayment),2)> 0:
low = minPayment
else:
if abs(round(balanceLeft(balance,mRate,minPayment),2) - 0) < 0.01:
guessed = True
print('Lowest Payment: ',end='')
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
<p>The faster one</p>
<pre><code>import time
start_time = time.time()
annualInterestRate = 0.18
rate = annualInterestRate / 12
monthsLeftr = 12
xCoefficent = 1 + rate
ConstantTerm = 1 + rate
while monthsLeftr > 1:
xCoefficent = (xCoefficent + 1) * ConstantTerm
monthsLeftr -= 1
balance = 999999
monthsLeft = 12
while monthsLeft > 0:
balance = balance * ConstantTerm
monthsLeft -= 1
minPayment = balance / xCoefficent
print('Lowest Payment: ', end="")
print(round(minPayment,2))
print("time elapsed: {:.6f}s".format(time.time() - start_time))
</code></pre>
| -1 | 2016-09-28T06:59:13Z | 39,745,326 | <p>Translated into a verbal idea, you are trying to solve the linear equation <code>a-b*x=0</code>, in the first variant via the bisection method, in the second variant via the direct formula <code>x = a/b</code>. Purely from the description, the second variant will always be faster, since the first also needs to compute <code>a</code> and <code>b</code> (resp. <code>b*x</code> in every step).</p>
<p>You will often find that for equations where a direct solution formula exists, that running this formula is much faster than any iterative approximation method.</p>
<p>Unfortunately, there are not that many problem classes with direct solution formulas. But, as here, a linear problem can always be solved directly (the efficiency argument changes for high-dimensional linear systems).</p>
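<p>A hedged sketch of the two strategies on the toy equation <code>a - b*x = 0</code> (the bisection loop mirrors the structure of the first program's guessing; the closed form mirrors the second; the bracket values are illustrative):</p>

```python
def solve_direct(a, b):
    # Direct formula: one division, independent of any tolerance.
    return a / b

def solve_bisect(a, b, lo, hi, tol=1e-9):
    # Bisection on f(x) = a - b*x; assumes f(lo) and f(hi) bracket the root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (a - b * lo) * (a - b * mid) <= 0:
            hi = mid     # root lies in [lo, mid]
        else:
            lo = mid     # root lies in [mid, hi]
    return (lo + hi) / 2

print(solve_direct(3.0, 2.0))                       # 1.5
print(round(solve_bisect(3.0, 2.0, 0.0, 10.0), 6))  # 1.5
```

<p>Both find the same root, but the iterative version pays for roughly 30 halvings of the interval where the formula pays for a single division.</p>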
| 0 | 2016-09-28T10:48:25Z | [
"python",
"algorithm",
"python-3.x",
"complexity-theory"
]
|
How can I take out(or slice) the elements in a rank-2 tensor , whose first element is unique? | 39,740,318 | <p>My title might be ambiguous due to my awkward English. But I mean this:
suppose I have a tensor <code>a</code> like this:</p>
<pre><code>array([[1, 2, 3],
[2, 2, 3],
[2, 2, 4],
[3, 2, 3],
[4, 2, 3]], dtype=int32)
</code></pre>
<p><strong>the 'first column' of this tensor could contain duplicate elements (e.g. [1, 2, 2, 3, 4] or [1, 1, 2, 3, 3, 4, 5, 5]), and which element is duplicated is not known beforehand.</strong></p>
<p>and i wanna take out a tensor this:</p>
<pre><code>array([[1, 2, 3],
[2, 2, 3],
[3, 2, 3],
[4, 2, 3]], dtype=int32)
</code></pre>
<p>As you can see, I take out, for each distinct value in the first column of <code>a</code>, the first row in which it occurs. </p>
<p>I first wanted to use the function <code>tf.unique()</code> . BUT the <code>idx</code> value returned by it doesn't indicate the first index of each value of output tensor in the original tensor.</p>
<p><code>tf.unique()</code> works like this:</p>
<pre><code># tensor 'x' is [1, 1, 2, 3, 3, 3, 7, 8, 8]
y, idx = tf.unique(x)
y ==> [1, 2, 3, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
</code></pre>
<p>The function <code>tf.unique(x, name=None)</code> finds the unique elements in a 1-D tensor. It currently returns two values: <code>y</code> and <code>idx</code>. <code>y</code> contains all of the unique elements of <code>x</code>, sorted in the same order that they occur in <code>x</code>. <code>idx</code> contains the index of each value of <code>x</code> in the unique output <code>y</code>.</p>
<p>I wish it had a third return value containing the first index of each value of <code>y</code> in the original tensor <code>x</code>. It might work like this:</p>
<pre><code># tensor 'x' is [1, 1, 2, 3, 3, 3, 7, 8, 8]
y, idx, idx_ori = tf.unique(x)
y ==> [1, 2, 3, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
idx_ori ==> [0, 2, 3, 6, 7]
</code></pre>
<p>Just like its equivalent in Numpy does:</p>
<pre><code>array 'x' is [1, 1, 2, 3, 3, 3, 7, 8, 8]
y, idx_ori = np.unique(x, return_index=True)
y ==> [1, 2, 3, 7, 8]
idx_ori ==> [0, 2, 3, 6, 7]
</code></pre>
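<p>As a runnable sanity check (plain NumPy, no TensorFlow), <code>return_index</code> really does select the desired rows from the example at the top:</p>

```python
import numpy as np

a = np.array([[1, 2, 3],
              [2, 2, 3],
              [2, 2, 4],
              [3, 2, 3],
              [4, 2, 3]])

# For each unique value in the first column, return_index gives the
# index of its first occurrence; indexing with those rows keeps one
# row per distinct first element.
_, first_idx = np.unique(a[:, 0], return_index=True)
print(a[first_idx].tolist())
# [[1, 2, 3], [2, 2, 3], [3, 2, 3], [4, 2, 3]]
```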
<p>If I had this <code>idx_ori</code>, I could solve my problem with <code>tf.gather()</code>:</p>
<pre><code>_, _1, idx_ori = tf.unique(a[:, 0])
result = tf.gather(a, idx_ori)
</code></pre>
<p>Any idea how to work around this problem, or how to get the indices that I want?</p>
<p>P.S. I know my description is tediously long ... :-p</p>
| 0 | 2016-09-28T07:07:04Z | 39,748,459 | <p>This is a bit gross, but you could do:</p>
<pre><code>print a
y, idx = tf.unique(a[:,0])
z = tf.one_hot(idx, tf.shape(y)[0])
s = tf.cumsum(z)
e = tf.equal(s, 1) # only seen once so far
ss = tf.to_int32(e) * tf.to_int32(z) # and we equal the thing
m = tf.reduce_max(ss, reduction_indices=1)
out = tf.boolean_mask(a, tf.equal(m, 1))
sess = tf.Session()
print sess.run(out)
[[1 2 3]
[2 2 3]
[2 2 4]
[3 2 3]
[4 2 3]]
[[1 2 3]
[2 2 3]
[3 2 3]
[4 2 3]]
</code></pre>
| 0 | 2016-09-28T13:06:57Z | [
"python",
"numpy",
"tensorflow",
"unique"
]
|
Naming a set of equality equations Python | 39,740,475 | <p>I am currently working on solving a system of equations. </p>
<p>A subset of the equations are: </p>
<pre><code>eq1 = pi1 * q[0+1] == pi0 * r[0+1]
eq2 = pi2 * q[0+1] == pi0 * r[1+1] + pi1 * r[1+1]
eq3 = pi3 * q[0+1] == pi0 * r[2+1] + pi1 * r[2+1] + pi2 * r[1+1]
eq4 = pi4 * q[0+1] == pi0 * r[3+1] + pi1 * r[3+1] + pi2 * r[2+1] + pi3 * r[1+1]
eq5 = pi5 * q[0+1] == pi0 * r[4+1] + pi1 * r[4+1] + pi2 * r[3+1] + pi3 * r[2+1] + pi4 * r[1+1]
eq6 = pi6 * q[0+1] == pi0 * r[5+1] + pi1 * r[5+1] + pi2 * r[4+1] + pi3 * r[3+1] + pi4 * r[2+1] + pi5 * r[1+1]
eq7 = pi7 * q[0+1] == pi0 * r[6+1] + pi1 * r[6+1] + pi2 * r[5+1] + pi3 * r[4+1] + pi4 * r[3+1] + pi5 * r[2+1] + pi6 * r[1+1]
</code></pre>
<p>Unfortunately, this is not working the way I want it to. I want it to be read as follows: the first equation has the name 'eq1' and contains a certain equality equation. The other lines should be read similarly. In my code I have 14 more equations, which are even longer. I want to give them a name to avoid really long expressions in "solve([], [])". </p>
<p>Is this possible? And if so, how should it be done?</p>
| 2 | 2016-09-28T07:14:02Z | 39,740,955 | <p>You can store the equations in a dictionary - you will be mapping the index of the equation (1, 2, etc.) to a tuple, which will contain two items representing the two sides of the equation. </p>
<pre><code>equations = dict()
equations[1] = (pi1 * q[0+1], pi0 * r[0+1])
</code></pre>
<p>You can then call <code>solve(equations[1])</code> and in the <code>solve()</code> function do something like this:</p>
<pre><code>def solve(equation):
left_side_of_equation = equation[0]
right_side_of_equation = equation[1]
# Here do everything you need with this equation values.
</code></pre>
<p>Or call function like this: <code>solve(equations[1][0], equations[1][1])</code> and have two parameters in the <code>solve()</code> function.</p>
<p>Edit to answer comment:</p>
<p>If you want to have more than just equations with <code>==</code> (greater, greater or equal, etc.), you need to save this information in the tuple as well. It can look like this:</p>
<pre><code>equations[42] = (pi1, 0, "ge")
</code></pre>
<p>and implement <code>solve()</code> function like this:</p>
<pre><code>def solve(equation):
left_side_of_equation = equation[0]
right_side_of_equation = equation[1]
operator = equation[2]
if operator == "ge": # Greater or equal
# Do your comparison
elif operator == "g": # Greater
# Do your comparison
# Add all remaining possibilities.
</code></pre>
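<p>As a variant of the dispatch above, the standard <code>operator</code> module can replace the <code>if/elif</code> chain; the numbers here are hypothetical stand-ins for terms like <code>pi1 * q[1]</code>:</p>

```python
import operator

# Map each operator tag to the corresponding comparison function.
OPS = {"eq": operator.eq, "ge": operator.ge, "g": operator.gt}

equations = {
    1: (6.0, 6.0, "eq"),   # stands in for: pi1 * q[1] == pi0 * r[1]
    42: (3.0, 0.0, "ge"),  # stands in for: pi1 >= 0
}

def solve(equation):
    left, right, op = equation
    return OPS[op](left, right)

print(solve(equations[1]), solve(equations[42]))  # True True
```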
| 0 | 2016-09-28T07:36:38Z | [
"python",
"equality",
"sympy",
"equation"
]
|
How to retrieve all data blocks from database at a certain entry in Django? | 39,740,498 | <p>I am very sorry for this newbie question...
I have data stored in the database (mysql) as tuples.
These data look like <strong>(datetime, frequency, current)</strong> ... etc.</p>
<p>How to retrieve all data from database at a certain time?
For example, I want to get all the saved data at "2015-9-9 8:23:10". </p>
| -1 | 2016-09-28T07:14:51Z | 39,740,682 | <p>Can you show your model?</p>
<p>You can add created and updated fields to your model and query them to retrieve your data:</p>
<pre><code>class Test(models.Model):
created = models.DateTimeField(
default=django.utils.timezone.now, blank=True)
updated = models.DateTimeField(
default=django.utils.timezone.now, blank=True)
</code></pre>
<p>You can then query the Test model using the time you want:</p>
<pre><code>Test.objects.filter(created=datetime.now())
</code></pre>
| 0 | 2016-09-28T07:24:02Z | [
"python",
"mysql",
"django"
]
|
How to retrieve all data blocks from database at a certain entry in Django? | 39,740,498 | <p>I am very sorry for this newbie question...
I have data stored in the database (mysql) as tuples.
These data look like <strong>(datetime, frequency, current)</strong> ... etc.</p>
<p>How to retrieve all data from database at a certain time?
For example, I want to get all the saved data at "2015-9-9 8:23:10". </p>
| -1 | 2016-09-28T07:14:51Z | 39,742,662 | <p>If the datetime field is the date the data was created, you can try </p>
<p><code>Test.objects.filter(datetime__gte=timezone.now())</code></p>
| 0 | 2016-09-28T09:01:23Z | [
"python",
"mysql",
"django"
]
|
How to use reportlab on GAE(Google App Engine)? | 39,740,557 | <p>I have read <a href="http://stackoverflow.com/questions/22606060/how-to-use-reportlab-with-google-app-engine">how to use reportlab with google app engine</a> but it's not help for me. Currently, I have a working version on my local environment and try to deploy to GAE.</p>
<p>But there is error when the deployment try to install pillow:</p>
<p><code>ValueError: zlib is required unless explicitly disabled using --disable-zlib, aborting</code></p>
<p>I looked for some solutions (<a href="http://stackoverflow.com/questions/34631806/fail-during-installation-of-pillow-python-module-in-linux">Fail during installation of Pillow (Python module) in Linux</a>), but I cannot install zlib by either installing <code>zlib1g-dev</code> or `pip install CFLAGS="--disable-zlab"</p>
<p>How should I install reportlab on GAE properly? Thanks</p>
| 0 | 2016-09-28T07:18:02Z | 39,870,767 | <p>Finally I downgraded reportlab to version 2.7 to make it work.</p>
| 0 | 2016-10-05T09:51:26Z | [
"python",
"google-app-engine",
"zlib",
"pillow",
"reportlab"
]
|
TypeError: argument 1 must be convertible to a buffer, not BeautifulSoup | 39,740,560 | <pre><code>from bs4 import BeautifulSoup
import requests
import csv
page=requests.get("http://www.gigantti.fi/catalog/tietokoneet/fi_kannettavat/kannettavat-tietokoneet")
data=BeautifulSoup(page.content)
h=open("test.csv","wb+")
h.write(data)
h.close()
print (data)
</code></pre>
<p>I have tried running the code as it is, without writing to the CSV file, and it runs perfectly, but the moment I try to save it to CSV I get the error: argument 1 must be convertible to a buffer, not BeautifulSoup. PLEASE HELP and thanks in advance</p>
| -1 | 2016-09-28T07:18:08Z | 39,740,886 | <p>I don't know whether someone was able to solve it or not, but my trial and error worked. The problem was that I was not converting the content to a string.</p>
<pre><code>#what i needed to add was:
#after line data=BeautifulSoup(page.content)
a=str(data)
</code></pre>
<p>Hopefully this helps</p>
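<p>A self-contained sketch of the fix (the parser name and sample HTML are illustrative): <code>str()</code> turns the <code>BeautifulSoup</code> object into ordinary text that <code>file.write()</code> accepts.</p>

```python
from bs4 import BeautifulSoup

html = "<html><head><title>Hi</title></head><body></body></html>"
soup = BeautifulSoup(html, "html.parser")

text = str(soup)            # plain string, safe to pass to file.write()
print(type(text).__name__)  # str
```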
| 1 | 2016-09-28T07:33:37Z | [
"python",
"csv",
"web-scraping"
]
|
TypeError: argument 1 must be convertible to a buffer, not BeautifulSoup | 39,740,560 | <pre><code>from bs4 import BeautifulSoup
import requests
import csv
page=requests.get("http://www.gigantti.fi/catalog/tietokoneet/fi_kannettavat/kannettavat-tietokoneet")
data=BeautifulSoup(page.content)
h=open("test.csv","wb+")
h.write(data)
h.close()
print (data)
</code></pre>
<p>I have tried running the code as it is, without writing to the CSV file, and it runs perfectly, but the moment I try to save it to CSV I get the error: argument 1 must be convertible to a buffer, not BeautifulSoup. PLEASE HELP and thanks in advance</p>
| -1 | 2016-09-28T07:18:08Z | 39,740,924 | <p>What you are trying to do doesn't make any sense.</p>
<p>As mentioned on <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">Beautiful Soup Documentation</a>:</p>
<blockquote>
<p>Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.</p>
</blockquote>
<p>You do not seem to be pulling any data but you are trying to write a <code>BeautifulSoup</code> object into a file which doesn't make sense.</p>
<pre><code>>>> type(data)
<class 'bs4.BeautifulSoup'>
</code></pre>
<p>What you should be using <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow"><code>BeautifulSoup</code></a> for is to search the data for some information, and then use that information, here's a useless example:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
page = requests.get("http://www.gigantti.fi/catalog/tietokoneet/fi_kannettavat/kannettavat-tietokoneet")
data = BeautifulSoup(page.content)
with open("test.txt", "wb+") as f:
# find the first `<title>` tag and retrieve its value
value = data.findAll('title')[0].text
f.write(value)
</code></pre>
<p>It seems like you should be using <code>BeautifulSoup</code> to be retreiving all the information on each product in the product listing and putting them into columns in a csv file if I'm guessing correctly, but I will leave that work up to you. You must use <code>BeautifulSoup</code> to find each product in the <code>html</code> and then retrieve all of its details and print to a csv</p>
| 0 | 2016-09-28T07:35:10Z | [
"python",
"csv",
"web-scraping"
]
|
Python type hinting without cyclic imports | 39,740,632 | <p>I'm trying to split my huge class into two; well, basically into the "main" class and a mixin with additional functions, like so:</p>
<pre><code># main.py
import mymixin.py
class Main(object, MyMixin):
def func1(self, xxx):
...
# mymixin.py
class MyMixin(object):
def func2(self: Main, xxx): # <--- note the type hint
...
</code></pre>
<p>Now, while this works just fine, the type hint in MyMixin.func2 of course can't work. I can't import main.py, because I'd get a cyclic import and without the hint, my editor (PyCharm) can't tell what <code>self</code> is.</p>
<p>Using Python 3.4, willing to move to 3.5 if a solution is available there.</p>
<p>Is there any way I can split my class into two files and keep all the "connections" so that my IDE still offers me auto completion & all the other goodies that come from it knowing the types?</p>
| 2 | 2016-09-28T07:21:35Z | 39,741,253 | <p>The bigger issue is that your types aren't sane to begin with. <code>MyMixin</code> makes a hardcoded assumption that it will be mixed into <code>Main</code>, whereas it could be mixed into any number of other classes, in which case it would probably break. If your mixin is hardcoded to be mixed into one specific class, you may as well write the methods directly into that class instead of separating them out.</p>
<p>To properly do this with sane typing, <code>MyMixin</code> should be coded against an <em>interface</em>, or abstract class in Python parlance:</p>
<pre><code>import abc
class MixinDependencyInterface(abc.ABC):
@abc.abstractmethod
def foo(self):
pass
class MyMixin:
def func2(self: MixinDependencyInterface, xxx):
        self.foo()  # <-- the mixin only depends on the interface
class Main(MixinDependencyInterface, MyMixin):
def foo(self):
print('bar')
</code></pre>
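<p>A self-contained variant of the pattern above (with <code>foo()</code> returning a value instead of printing, purely so the behavior is easy to check): the mixin works against any implementer of the interface, and forgetting to implement <code>foo()</code> fails loudly at instantiation time.</p>

```python
import abc

class MixinDependencyInterface(abc.ABC):
    @abc.abstractmethod
    def foo(self):
        ...

class MyMixin:
    def func2(self):
        return self.foo()  # depends only on the interface

class Main(MixinDependencyInterface, MyMixin):
    def foo(self):
        return 'bar'

print(Main().func2())  # bar

class Broken(MixinDependencyInterface, MyMixin):
    pass  # foo() not implemented

try:
    Broken()
except TypeError:
    print("abstract method missing")  # raised at instantiation
```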
| 1 | 2016-09-28T07:52:05Z | [
"python",
"pycharm",
"python-3.4",
"python-3.5",
"type-hinting"
]
|
Python type hinting without cyclic imports | 39,740,632 | <p>I'm trying to split my huge class into two; well, basically into the "main" class and a mixin with additional functions, like so:</p>
<pre><code># main.py
import mymixin.py
class Main(object, MyMixin):
def func1(self, xxx):
...
# mymixin.py
class MyMixin(object):
def func2(self: Main, xxx): # <--- note the type hint
...
</code></pre>
<p>Now, while this works just fine, the type hint in MyMixin.func2 of course can't work. I can't import main.py, because I'd get a cyclic import and without the hint, my editor (PyCharm) can't tell what <code>self</code> is.</p>
<p>Using Python 3.4, willing to move to 3.5 if a solution is available there.</p>
<p>Is there any way I can split my class into two files and keep all the "connections" so that my IDE still offers me auto completion & all the other goodies that come from it knowing the types?</p>
| 2 | 2016-09-28T07:21:35Z | 39,757,388 | <p>There isn't a hugely elegant way to handle import cycles in general, I'm afraid. Your choices are to either redesign your code to remove the cyclic dependency, or if it isn't feasible, do something like this:</p>
<pre><code># some_file.py
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from main import Main
class MyObject(object):
def func2(self, some_param: 'Main'):
...
</code></pre>
<p>The <code>TYPE_CHECKING</code> constant is always <code>False</code> at runtime, so the import won't be evaluated, but mypy (and other type-checking tools) will evaluate the contents of that block.</p>
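<p>A quick, self-contained demonstration of that property (the module name in the guarded import is made up; it only illustrates that the block never runs):</p>

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Never executed at runtime, so even a cyclic or missing
    # module here cannot crash the program.
    import hypothetical_module_that_need_not_exist

print(TYPE_CHECKING)  # False
```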
<p>We also need to make the <code>Main</code> type annotation into a string, effectively forward declaring it since the <code>Main</code> symbol isn't available at runtime.</p>
<p>All that said, using mixins with mypy will likely require a bit more structure than you currently have. Mypy <a href="https://github.com/python/mypy/issues/1996" rel="nofollow">recommends an approach</a> that's basically what <code>deceze</code> is describing -- to create an ABC that both your <code>Main</code> and <code>MyMixin</code> classes inherit. I wouldn't be surprised if you ended up needing to do something similar in order to make PyCharm's checker happy.</p>
| 1 | 2016-09-28T20:48:54Z | [
"python",
"pycharm",
"python-3.4",
"python-3.5",
"type-hinting"
]
|
How to find parenthesis bound strings in python | 39,740,814 | <p>I'm learning Python and wanted to automate one of my assignments in a cybersecurity class.
I'm trying to figure out how I would look for the contents of a file that are bound by a set of parentheses. The contents of the (.txt) file look like:</p>
<pre><code>cow.jpg : jphide[v5](asdfl;kj88876)
fish.jpg : jphide[v5](65498ghjk;0-)
snake.jpg : jphide[v5](poi098*/8!@#)
test_practice_0707.jpg : jphide[v5](sJ*=tT@&Ve!2)
test_practice_0101.jpg : jphide[v5](nKFdFX+C!:V9)
test_practice_0808.jpg : jphide[v5](!~rFX3FXszx6)
test_practice_0202.jpg : jphide[v5](X&aC$|mg!wC2)
test_practice_0505.jpg : jphide[v5](pe8f%yC$V6Z3)
dog.jpg : negative`
</code></pre>
<p>And here is my code so far:
</p>
<pre><code>import sys, os, subprocess, glob, shutil
# Finding the .jpg files that will be copied.
sourcepath = os.getcwd() + '\\imgs\\'
destpath = 'stegdetect'
rawjpg = glob.glob(sourcepath + '*.jpg')
# Copying the said .jpg files into the destpath variable
for filename in rawjpg:
shutil.copy(filename, destpath)
# Asks user for what password file they want to use.
passwords = raw_input("Enter your password file with the .txt extension:")
shutil.copy(passwords, 'stegdetect')
# Navigating to stegdetect. Feel like this could be abstracted.
os.chdir('stegdetect')
# Preparing the arguments then using subprocess to run
args = "stegbreak.exe -r rules.ini -f " + passwords + " -t p *.jpg"
# Uses open to open the output file, and then write the results to the file.
with open('cracks.txt', 'w') as f: # opens cracks.txt and prepares to w
subprocess.call(args, stdout=f)
# Processing whats in the new file.
f = open('cracks.txt')
</code></pre>
| 3 | 2016-09-28T07:30:35Z | 39,740,912 | <p>If it should just be bound by ( and ), you can use the following regex, which requires an opening ( and a closing ) and allows letters, digits, and spaces between them. You can also add any other symbols you want to include.</p>
<p><code>[\(][a-z A-Z 0-9]*[\)]</code></p>
<pre><code>[\(] - starts the bracket
[a-z A-Z 0-9]* - all text inside bracket
[\)] - closes the bracket
</code></pre>
<p>So for input <code>sdfsdfdsf(sdfdsfsdf)sdfsdfsdf</code> , the output will be <code>(sdfdsfsdf)</code>
Test this regex here: <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a></p>
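<p>A quick sanity check of the same pattern from Python itself (a hedged sketch using <code>re.findall</code>; extend the character class if your data contains other symbols):</p>

```python
import re

# The character class deliberately includes a space, matching the
# pattern suggested above; add any other symbols your data needs.
pattern = r'[\(][a-z A-Z 0-9]*[\)]'

text = "sdfsdfdsf(sdfdsfsdf)sdfsdfsdf"
print(re.findall(pattern, text))  # ['(sdfdsfsdf)']
```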
| 1 | 2016-09-28T07:34:26Z | [
"python"
]
|
How to find parenthesis bound strings in python | 39,740,814 | <p>I'm learning Python and wanted to automate one of my assignments in a cybersecurity class.
I'm trying to figure out how I would look for the contents of a file that are bound by a set of parenthesis. The contents of the (.txt) file look like:</p>
<pre><code>cow.jpg : jphide[v5](asdfl;kj88876)
fish.jpg : jphide[v5](65498ghjk;0-)
snake.jpg : jphide[v5](poi098*/8!@#)
test_practice_0707.jpg : jphide[v5](sJ*=tT@&Ve!2)
test_practice_0101.jpg : jphide[v5](nKFdFX+C!:V9)
test_practice_0808.jpg : jphide[v5](!~rFX3FXszx6)
test_practice_0202.jpg : jphide[v5](X&aC$|mg!wC2)
test_practice_0505.jpg : jphide[v5](pe8f%yC$V6Z3)
dog.jpg : negative`
</code></pre>
<p>And here is my code so far:
</p>
<pre><code>import sys, os, subprocess, glob, shutil
# Finding the .jpg files that will be copied.
sourcepath = os.getcwd() + '\\imgs\\'
destpath = 'stegdetect'
rawjpg = glob.glob(sourcepath + '*.jpg')
# Copying the said .jpg files into the destpath variable
for filename in rawjpg:
shutil.copy(filename, destpath)
# Asks user for what password file they want to use.
passwords = raw_input("Enter your password file with the .txt extension:")
shutil.copy(passwords, 'stegdetect')
# Navigating to stegdetect. Feel like this could be abstracted.
os.chdir('stegdetect')
# Preparing the arguments then using subprocess to run
args = "stegbreak.exe -r rules.ini -f " + passwords + " -t p *.jpg"
# Uses open to open the output file, and then write the results to the file.
with open('cracks.txt', 'w') as f: # opens cracks.txt and prepares to w
subprocess.call(args, stdout=f)
# Processing whats in the new file.
f = open('cracks.txt')
</code></pre>
| 3 | 2016-09-28T07:30:35Z | 39,741,169 | <blockquote>
<p>I'm learning Python</p>
</blockquote>
<p>If you are learning you should consider alternative implementations, not only regexps. </p>
<p>To iterate over a text file line by line, you just open the file and loop over the file handle:</p>
<pre><code>with open('file.txt') as f:
for line in f:
do_something(line)
</code></pre>
<p>Each line is a string with the line contents, including the end-of-line char '\n'. To find the start index of a specific substring in a string you can use <code>find</code>:</p>
<pre><code>>>> A = "hello (world)"
>>> A.find('(')
6
>>> A.find(')')
12
</code></pre>
<p>To get a substring from the string you can use the slice notation in the form:</p>
<pre><code>>>> A[6:12]
'(world'
</code></pre>
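<p>Putting the two ideas together, iterating over the lines and slicing between <code>find('(')</code> and <code>find(')')</code>, a minimal sketch (the sample lines come from the question's .txt file):</p>

```python
# Stand-in for iterating over the open file handle as shown above.
lines = [
    "cow.jpg : jphide[v5](asdfl;kj88876)",
    "dog.jpg : negative",
]

for line in lines:
    start = line.find('(')
    end = line.find(')')
    if start != -1 and end != -1:
        # slice from just after '(' up to (not including) ')'
        print(line[start + 1:end])  # asdfl;kj88876
```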
| 0 | 2016-09-28T07:47:55Z | [
"python"
]
|
How to find parenthesis bound strings in python | 39,740,814 | <p>I'm learning Python and wanted to automate one of my assignments in a cybersecurity class.
I'm trying to figure out how I would look for the contents of a file that are bound by a set of parenthesis. The contents of the (.txt) file look like:</p>
<pre><code>cow.jpg : jphide[v5](asdfl;kj88876)
fish.jpg : jphide[v5](65498ghjk;0-)
snake.jpg : jphide[v5](poi098*/8!@#)
test_practice_0707.jpg : jphide[v5](sJ*=tT@&Ve!2)
test_practice_0101.jpg : jphide[v5](nKFdFX+C!:V9)
test_practice_0808.jpg : jphide[v5](!~rFX3FXszx6)
test_practice_0202.jpg : jphide[v5](X&aC$|mg!wC2)
test_practice_0505.jpg : jphide[v5](pe8f%yC$V6Z3)
dog.jpg : negative`
</code></pre>
<p>And here is my code so far:
</p>
<pre><code>import sys, os, subprocess, glob, shutil
# Finding the .jpg files that will be copied.
sourcepath = os.getcwd() + '\\imgs\\'
destpath = 'stegdetect'
rawjpg = glob.glob(sourcepath + '*.jpg')
# Copying the said .jpg files into the destpath variable
for filename in rawjpg:
shutil.copy(filename, destpath)
# Asks user for what password file they want to use.
passwords = raw_input("Enter your password file with the .txt extension:")
shutil.copy(passwords, 'stegdetect')
# Navigating to stegdetect. Feel like this could be abstracted.
os.chdir('stegdetect')
# Preparing the arguments then using subprocess to run
args = "stegbreak.exe -r rules.ini -f " + passwords + " -t p *.jpg"
# Uses open to open the output file, and then write the results to the file.
with open('cracks.txt', 'w') as f: # opens cracks.txt and prepares to w
subprocess.call(args, stdout=f)
# Processing whats in the new file.
f = open('cracks.txt')
</code></pre>
| 3 | 2016-09-28T07:30:35Z | 39,741,286 | <p>You should use regular expressions which are implemented in the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">Python re module</a></p>
<p>a simple regex like <code>\(.*\)</code> could match your "parenthesis string",
but it would be better with a group <code>\((.*)\)</code>, which lets you get only the content inside the parentheses.</p>
<pre><code>import re
test_string = """cow.jpg : jphide[v5](asdfl;kj88876)
fish.jpg : jphide[v5](65498ghjk;0-)
snake.jpg : jphide[v5](poi098*/8!@#)
test_practice_0707.jpg : jphide[v5](sJ*=tT@&Ve!2)
test_practice_0101.jpg : jphide[v5](nKFdFX+C!:V9)
test_practice_0808.jpg : jphide[v5](!~rFX3FXszx6)
test_practice_0202.jpg : jphide[v5](X&aC$|mg!wC2)
test_practice_0505.jpg : jphide[v5](pe8f%yC$V6Z3)
dog.jpg : negative`"""
REGEX = re.compile(r'\((.*)\)', re.MULTILINE)
print(REGEX.findall(test_string))
# ['asdfl;kj88876', '65498ghjk;0-', 'poi098*/8!@#', 'sJ*=tT@&Ve!2', 'nKFdFX+C!:V9' , '!~rFX3FXszx6', 'X&aC$|mg!wC2', 'pe8f%yC$V6Z3']
</code></pre>
| 0 | 2016-09-28T07:53:47Z | [
"python"
]
|
How can I package my python module and its dependencies such as Numpy/Scipy into one file like dll for C# calls? | 39,740,884 | <p>In my project, some of the algorithms are implemented in python as Modules for calling and libraries such as Numpy/Scipy are used . Now I need to embed these Modules into my UI(implemented by C# and run in Windows 7). There are two reasons I need to package my python modules into a file like DLL(I don't want to package as an .exe because it is not friendly to embed). The first reason is for easily calling, and the second reason is to protect my algorithm source code. Does anyone have any idea?</p>
| 1 | 2016-09-28T07:33:30Z | 39,901,534 | <p>Try pythonnet to embed CPython in .NET:</p>
<p><a href="http://www.python4.net" rel="nofollow">python4.net</a></p>
| 0 | 2016-10-06T16:46:36Z | [
"c#",
"python",
".net",
"pyinstaller",
"python.net"
]
|
How can I package my python module and its dependencies such as Numpy/Scipy into one file like dll for C# calls? | 39,740,884 | <p>In my project, some of the algorithms are implemented in python as Modules for calling and libraries such as Numpy/Scipy are used . Now I need to embed these Modules into my UI(implemented by C# and run in Windows 7). There are two reasons I need to package my python modules into a file like DLL(I don't want to package as an .exe because it is not friendly to embed). The first reason is for easily calling, and the second reason is to protect my algorithm source code. Does anyone have any idea?</p>
| 1 | 2016-09-28T07:33:30Z | 39,952,006 | <p>Finally, I solved my problem by using nuitka. It compiles my module into a .pyd file, and I can import the module using pythonnet by adding a reference to Python.Runtime.dll.
My compile command is:<code>nuitka --module --nofreeze-stdlib --recurse-all --recurse-not-to=numpy --recurse-not-to=sklearn --recurse-not-to=pywt --recurse-not-to=matplotlib --recurse-not-to=matplotlib.pyplot --recurse-not-to=scipy.io myfile.py</code>
and my C# code is:</p>
<pre><code>PythonEngine.Initialize();
IntPtr pythonLock = PythonEngine.AcquireLock();
PythonEngine.RunSimpleString("import sys\n");
PythonEngine.RunSimpleString("sys.path.append('C:\\path\\to\\pyd')\n");
PyObject pModule = PythonEngine.ImportModule("modulename");
pModule.InvokeMethod("funcitonname", para1, para2);
</code></pre>
| 1 | 2016-10-10T05:52:22Z | [
"c#",
"python",
".net",
"pyinstaller",
"python.net"
]
|
Reformat table in Python | 39,740,989 | <p>I have a table in a Python script with numpy in the following shape:</p>
<pre><code>[array([[a1, b1, c1], ..., [x1, y1, z1]]),
array([a2, b2, c2, ..., x2, y2, z2])
]
</code></pre>
<p>I would like to reshape it to a format like this:</p>
<pre><code>(array([[a2],
[b2],
.
.
.
[z2]],
dtype = ...),
array([[a1],
[b1],
.
.
.
[z1]])
)
</code></pre>
<p>To be honest, I'm also quite confused about the different parentheses. array1, array2] is a list of arrays, right? What is (array1, array2), then?</p>
| 1 | 2016-09-28T07:38:32Z | 39,741,808 | <p>Round brackets <code>(1, 2)</code> are <a href="https://docs.python.org/3/library/stdtypes.html#tuple" rel="nofollow">tuples</a>, square brackets <code>[1, 2]</code> are <a href="https://docs.python.org/3/library/stdtypes.html#list" rel="nofollow">lists</a>. To convert your data structure, use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.expand_dims.html" rel="nofollow"><code>expand_dims</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html" rel="nofollow"><code>flatten</code></a>.</p>
<pre><code>import numpy as np
a = [
np.array([[1, 2, 3], [4, 5, 6]]),
np.array([10, 11, 12, 13, 14])
]
print(a)
b = (
np.expand_dims(a[1], axis=1),
np.expand_dims(a[0].flatten(), axis=1)
)
print(b)
</code></pre>
| 1 | 2016-09-28T08:20:12Z | [
"python",
"numpy"
]
|
Reformat table in Python | 39,740,989 | <p>I have a table in a Python script with numpy in the following shape:</p>
<pre><code>[array([[a1, b1, c1], ..., [x1, y1, z1]]),
array([a2, b2, c2, ..., x2, y2, z2])
]
</code></pre>
<p>I would like to reshape it to a format like this:</p>
<pre><code>(array([[a2],
[b2],
.
.
.
[z2]],
dtype = ...),
array([[a1],
[b1],
.
.
.
[z1]])
)
</code></pre>
<p>To be honest, I'm also quite confused about the different parentheses. [array1, array2] is a list of arrays, right? What is (array1, array2), then?</p>
| 1 | 2016-09-28T07:38:32Z | 39,742,308 | <pre><code> #[array1,array2] is a python list of two numpy tables(narray)
#(array1,array2) is a python tuple of two numpy tables(narray)
tuple([array.reshape((-1,1)) for array in your_list.reverse()])
</code></pre>
| 1 | 2016-09-28T08:45:00Z | [
"python",
"numpy"
]
|
Execute entire DAG using Airflow UI | 39,741,067 | <p>I am a newbie to airflow. We have a DAG with 3 tasks. Currently we are using Celery Executor as we need the flexibility to run an individual task. We don't want to schedule the workflow; for now it will be manually triggered. Is there any way to execute the entire workflow using the Airflow UI (same as we have in oozie)?</p>
<p>Executing one task at a time is a pain.</p>
| 0 | 2016-09-28T07:42:34Z | 39,742,181 | <p>I'll take a stab at it and hopefully you can adapt your code to work with this.</p>
<pre><code>default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2015, 6, 1),
'email': ['airflow@airflow.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
# 'queue': 'bash_queue',
# 'pool': 'backfill',
# 'priority_weight': 10,
# 'end_date': datetime(2016, 1, 1),
}
dag = DAG('your_dag', default_args=default_args)
#start of your tasks
# note: each task needs a unique task_id within the DAG
first_task = BashOperator(task_id='first_task',
                          bash_command='python script1_name args',
                          dag=dag)

second_task = BashOperator(task_id='second_task',
                          bash_command='python script2_name args',
                          dag=dag)

third_task = BashOperator(task_id='third_task',
                          bash_command='python script_name args',
                          dag=dag)
#then set the dependencies
second_task.set_upstream(first_task)
third_task.set_upstream(second_task)
</code></pre>
<p>Then when you trigger the DAG, all three tasks will run in order. If these tasks are <em>not</em> dependent on each other, you can remove the <code>set_upstream()</code> lines from the script. Note that all of these tasks must be in the same script to run with one command.</p>
| 0 | 2016-09-28T08:39:13Z | [
"python",
"airflow"
]
|
Filter data frame from values in different columns Pandas | 39,741,152 | <p>I am very new to Pandas and hope that somebody at least can point me in the right direction. </p>
<p>Here comes the actual question: </p>
<p>df: </p>
<pre><code> time Area lon lat mode ID
1993-08-01 00:34:28 A 45.627800 34.733400 false 3183
1993-08-01 00:34:28 A 45.699600 34.639300 false 3183
1993-08-01 00:34:28 A 45.603800 34.730600 false 3183
1992-03-21 01:13:18 A 45.686400 34.548100 false 3184
1992-03-21 01:13:18 A 45.702400 34.554300 false 3184
1992-03-21 01:13:18 B 45.304784 34.626540 NaN 3184
1992-03-21 16:13:20 A 45.633800 34.709700 false 3185
1992-03-21 16:13:20 A 45.643400 34.709000 true 3185
1992-03-21 16:13:20 A 45.634600 34.959500 true 3185
</code></pre>
<p>I want to filter out all instances of <strong>'ID'</strong> that only have data from one <strong>'Area'</strong> (<em>either A or B</em>). The <strong>'ID'</strong>s I want must therefore have at least one instance of <strong>'A'</strong> <em>AND</em> <strong>'B'</strong> to be stored in a new data frame. </p>
<p>From the df presented above, only the entries presented below fit the constraint: </p>
<pre><code> 1992-03-21 01:13:18 A 45.686400 34.548100 false 3184
1992-03-21 01:13:18 A 45.702400 34.554300 false 3184
1992-03-21 01:13:18 B 45.304784 34.626540 NaN 3184
</code></pre>
<p>Right now I'm about to try a regular for loop with if statements and a list to temporarily store <strong>'Area'</strong> attributes for each <strong>'ID'</strong>. This feels like a very bad approach and there must be some idiomatic pandas way of doing this. </p>
| 1 | 2016-09-28T07:47:13Z | 39,741,318 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna</code></a> for removing all values which are not in all groups:</p>
<pre><code>print (df.pivot_table(index='Area', columns='ID', values='lat').dropna(axis=1))
ID 3184
Area
A 34.55120
B 34.62654
vals = df.pivot_table(index='Area', columns='ID', values='lat').dropna(axis=1).columns
print (vals)
Int64Index([3184], dtype='int64', name='ID')
</code></pre>
<p>Last use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>print (df[df.ID.isin(vals)])
time Area lon lat mode ID
3 1992-03-21 01:13:18 A 45.686400 34.54810 False 3184
4 1992-03-21 01:13:18 A 45.702400 34.55430 False 3184
5 1992-03-21 01:13:18 B 45.304784 34.62654 NaN 3184
</code></pre>
| 0 | 2016-09-28T07:55:19Z | [
"python",
"pandas"
]
|
Filter data frame from values in different columns Pandas | 39,741,152 | <p>I am very new to Pandas and hope that somebody at least can point me in the right direction. </p>
<p>Here comes the actual question: </p>
<p>df: </p>
<pre><code> time Area lon lat mode ID
1993-08-01 00:34:28 A 45.627800 34.733400 false 3183
1993-08-01 00:34:28 A 45.699600 34.639300 false 3183
1993-08-01 00:34:28 A 45.603800 34.730600 false 3183
1992-03-21 01:13:18 A 45.686400 34.548100 false 3184
1992-03-21 01:13:18 A 45.702400 34.554300 false 3184
1992-03-21 01:13:18 B 45.304784 34.626540 NaN 3184
1992-03-21 16:13:20 A 45.633800 34.709700 false 3185
1992-03-21 16:13:20 A 45.643400 34.709000 true 3185
1992-03-21 16:13:20 A 45.634600 34.959500 true 3185
</code></pre>
<p>I want to filter out all instances of <strong>'ID'</strong> that only have data from one <strong>'Area'</strong> (<em>either A or B</em>). The <strong>'ID'</strong>s I want must therefore have at least one instance of <strong>'A'</strong> <em>AND</em> <strong>'B'</strong> to be stored in a new data frame. </p>
<p>From the df presented above, only the entries presented below fit the constraint: </p>
<pre><code> 1992-03-21 01:13:18 A 45.686400 34.548100 false 3184
1992-03-21 01:13:18 A 45.702400 34.554300 false 3184
1992-03-21 01:13:18 B 45.304784 34.626540 NaN 3184
</code></pre>
<p>Right now I'm about to try a regular for loop with if statements and a list to temporarily store <strong>'Area'</strong> attributes for each <strong>'ID'</strong>. This feels like a very bad approach and there must be some idiomatic pandas way of doing this. </p>
| 1 | 2016-09-28T07:47:13Z | 39,741,437 | <p>You can take a look at the following :</p>
<pre><code>In [24]: df
Out[24]:
area id
0 A 3183
1 A 3183
2 A 3184
3 B 3184
4 A 3185
5 A 3185
In [25]: df[df.groupby('id')['area'].transform('nunique') > 1]
Out[25]:
area id
2 A 3184
3 B 3184
</code></pre>
<p>I reduced my example to only the 2 relevant columns (id and area), but this would work without problems on your full DataFrame.</p>
<p>I basically count the number of different areas for every ID, and filter out those with only one area.</p>
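<p>An equivalent formulation with <code>groupby().filter()</code>, in case it reads more naturally (a sketch on the same reduced example):</p>

```python
import pandas as pd

# The same reduced two-column example as above.
df = pd.DataFrame({'area': ['A', 'A', 'A', 'B', 'A', 'A'],
                   'id':   [3183, 3183, 3184, 3184, 3185, 3185]})

# keep only the groups whose 'area' column has more than one distinct value
result = df.groupby('id').filter(lambda g: g['area'].nunique() > 1)
print(result)
```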
| 0 | 2016-09-28T08:01:01Z | [
"python",
"pandas"
]
|
django It seems the view func do not work | 39,741,232 | <p>I'm trying to write a FBV to delete a subject, but there are some problems I can't figure out. It's Django 1.7.1. Below is the related code.</p>
<p>The model Communication:</p>
<pre><code>...
@models.permalink
def get_delete_url(self):
return 'comm_delete', [self.uuid]
</code></pre>
<p>the URLconf:</p>
<pre><code>url(r'^(?P<uuid>[\w-]+)/delete/$', views.comm_delete, name='comm_delete'),
</code></pre>
<p>the views:</p>
<pre><code>def comm_delete(request, uuid):
obj = get_object_or_404(Communication, uuid=uuid)
account = Account.objects.get(id=obj.account.id)
if request.method == 'POST':
obj.delete()
return HttpResponseRedirect(reverse('crmapp.accounts.views.account_detail', args=(account.uuid,)))
return render(request, 'subject_confirm_delete.html', {'object_name': 'Communication', 'object': obj})
</code></pre>
<p>when I click</p>
<pre><code><a class="cancel" href="{{ comm.get_delete_url }}"></a>
</code></pre>
<p>the page moves to the uuid/delete/ url and the form displays the Communication object. But if I then click the cancel button, the page just refreshes and nothing changes.
So how can I fix it? Help me please!</p>
<p>The object_confirm_delete.html:
<img src="http://i.stack.imgur.com/vnpV5.png" alt="object_confirm_delete.html"></p>
<p>the page when clicked cancel button:
<img src="http://i.stack.imgur.com/7UWqS.png" alt="clicked_move_to_page"></p>
<p>the urls.py in app Communications like this:</p>
<pre><code>url(r'^(?P<uuid>[\w-]+)/', views.comm_detail, name='comm_detail'),
url(r'^(?P<uuid>[\w-]+)/delete/$', views.comm_delete, name='comm_delete'),
</code></pre>
<p>As you can see, the first one has no '$' at the end, so when I request the URL of the second one, the first one's regex matches and the comm_detail view is executed instead. After I modified it, it works well.</p>
| -1 | 2016-09-28T07:50:51Z | 39,775,654 | <p>the urls.py in app Communications like this:</p>
<pre><code>url(r'^(?P<uuid>[\w-]+)/', views.comm_detail, name='comm_detail'),
url(r'^(?P<uuid>[\w-]+)/delete/$', views.comm_delete, name='comm_delete'),
</code></pre>
<p>As you can see, the first one has no '$' at the end, so when I request the URL of the second one, the first one's regex matches and the comm_detail view is executed instead. After I modified it, it works well. This means the FBV itself is correct.</p>
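<p>The effect is easy to reproduce with plain <code>re</code> (a small sketch; the patterns are copied from the URLconf, and 'some-uuid' is a placeholder):</p>

```python
import re

detail = r'^(?P<uuid>[\w-]+)/'          # missing the trailing $
delete = r'^(?P<uuid>[\w-]+)/delete/$'

path = 'some-uuid/delete/'

# Without the $ anchor the detail pattern also matches the delete URL,
# so Django dispatches to comm_detail before ever trying comm_delete.
print(bool(re.match(detail, path)))        # True
print(bool(re.match(delete, path)))        # True
print(bool(re.match(detail + '$', path)))  # False once anchored
```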
| 0 | 2016-09-29T16:34:49Z | [
"python",
"django"
]
|
Tool for google maps using python | 39,741,287 | <p>I need to develop a tool (eg: calculate polygon area) and integrate it with Google Maps. I am not familiar with java. Can I do this using python? If yes, how can I go about integrating my code with Maps?</p>
| 0 | 2016-09-28T07:53:51Z | 39,742,040 | <p>You can do it, using <strong>OpenStreetMap</strong> instead of Google map, in <strong>IPython/Jupyter Notebook</strong>, through <strong>ipyleaflet</strong> package.
Just write(or import) your script in Ipython Notebook(a python based env.) and then take a look at here;</p>
<p><a href="https://github.com/ellisonbg/ipyleaflet/tree/master/examples" rel="nofollow">https://github.com/ellisonbg/ipyleaflet/tree/master/examples</a></p>
<p>you will be able to draw whatever you want defining new <strong>Layer</strong> and so on...</p>
<h1>Here an example:</h1>
<h3>Open your Ipython Notebook and import these modules;</h3>
<pre><code>from ipyleaflet import (
Map,
Marker,
TileLayer, ImageOverlay,
Polyline, Polygon, Rectangle, Circle, CircleMarker,
GeoJSON,
DrawControl
)
m = Map(zoom=0)
dc = DrawControl()
def handle_draw(self, action, geo_json):
print(action)
print(geo_json)
dc.on_draw(handle_draw)
m.add_control(dc)
m
</code></pre>
<h3>The map will appear</h3>
<p><a href="https://i.stack.imgur.com/e5XhK.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/e5XhK.jpg" alt="OpenStreetMap with drawing tools on the left hand side"></a></p>
<h3>Zoom by double-clicking on your spot of interest, then draw your polygon using the "Draw a polygon" tool.</h3>
<p><a href="https://i.stack.imgur.com/OeZhj.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/OeZhj.jpg" alt="enter image description here"></a></p>
<h3>This is just a suggestion, you can use other methods to calculate the polygon's area</h3>
<pre><code>import pyproj
import shapely
import shapely.ops as ops
from shapely.geometry.polygon import Polygon
from functools import partial
my_poly = dc.last_draw['geometry']['coordinates'][0]
geom = Polygon(my_poly)
geom_area = ops.transform(
partial(
pyproj.transform,
pyproj.Proj(init='EPSG:4326'),
pyproj.Proj(
proj='aea',
lat1=geom.bounds[1],
lat2=geom.bounds[3])),
geom)
print (geom_area.area, 'square meters, which is equal to',geom_area.area/1000000, 'square kilometers')
</code></pre>
<p>2320899322382.008 square meters, which is equal to 2320899.3223820077 square kilometers</p>
| 1 | 2016-09-28T08:31:52Z | [
"python"
]
|
How can I get the <select> tag option chosen by the user using Flask? | 39,741,302 | <p>I have a form whereby a user uploads a csv file, that csv file is then converted to a pandas dataframe and the column headers automatically populate the <code><select></code> tag options in the form.</p>
<p>I then want the user to select the column that they want and I want that column choice saved in a variable using Flask.</p>
<p>However, I'm having trouble retrieving the option that the user then selects from this select tag. </p>
<p>My template code:</p>
<pre><code><form class="form-horizontal" action="" method="post" enctype="multipart/form-data">
<fieldset>
<legend>Text Analytics</legend>
<div class="form-group form-group-lg">
<label for="inputData" class="col-lg-2 control-label">Choose Data File:</label>
<div class="col-lg-5">
<input type="file" class="form-control" required="required" autofocus="autofocus" name="inputData"/>
</div>
<div class="col-lg-3">
<button class="btn btn-primary" type="submit" name="upload">Upload</button>
</div>
{% if success %}
<div class="alert alert-success">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
<strong>Success!</strong> {{success}}.
</div>
{% endif %}
{% if error %}
<div class="alert alert-warning">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
<strong>Error:</strong> {{error}}.
</div>
{% endif %}
</div>
<div class="form-group form-group-lg">
<label for="colSelect" class="col-lg-2 control-label">Select column for analysis:</label>
<div class="col-lg-5">
<select class="form-control" name="colSelect" id="colSelect">
{% for column in columns %}
<option id="{{column}}">{{column}}</option>
{% endfor %}
</select>
</div>
</code></pre>
<p>My Flask code:</p>
<pre><code>@app.route('/textanalytics', methods = ['GET', 'POST'])
def upload_file():
error = None
success = None
columns = []
col = None
if request.method == "POST":
if request.files['inputData'] == '':
error = "No file selected"
else:
file = request.files['inputData']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
success = 'File uploaded'
data = pd.read_csv(os.path.join(app.config['UPLOAD_FOLDER'], filename), header = 0, low_memory = False)
columns = [i for i in data.columns]
col = request.form.get('colSelect')
return render_template('textanalytics.html', success = success, columns = columns, col = col)
elif file and not allowed_file(file.filename):
error = 'Incorrect file type, .csv only'
return render_template('textanalytics.html', error = error)
return render_template('textanalytics.html', error = error, success = success, columns = columns, col = col)
app.add_url_rule('/uploads/<filename>', 'uploaded_file', build_only=True)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {'/uploads': app.config['UPLOAD_FOLDER']})
</code></pre>
<p>As you can see, I am using <code>request.form.get('colSelect')</code> to retrieve the option but without any luck. It just returns <code>None</code> which is its initialised value.</p>
<p>I have a feeling it has to do with where I am placing that code but I am new to Flask and so could do with some help.</p>
| 0 | 2016-09-28T07:54:30Z | 39,746,075 | <p>You have to give each <code>option</code> a <code>value</code>. </p>
<pre><code><option value="{{ column }}">{{ column }}</option>
</code></pre>
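<p>For illustration, this is roughly the markup the fixed Jinja loop renders (a sketch with two hypothetical column names; the <code>value</code> attribute is what ends up in <code>request.form.get('colSelect')</code>):</p>

```python
# Hypothetical column names standing in for the uploaded CSV's headers.
columns = ['title', 'body']

options = '\n'.join(
    '<option value="{0}">{0}</option>'.format(c) for c in columns
)
print(options)
# <option value="title">title</option>
# <option value="body">body</option>
```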
| 0 | 2016-09-28T11:24:55Z | [
"javascript",
"python",
"html",
"flask"
]
|
How can I get the <select> tag option chosen by the user using Flask? | 39,741,302 | <p>I have a form whereby a user uploads a csv file, that csv file is then converted to a pandas dataframe and the column headers automatically populate the <code><select></code> tag options in the form.</p>
<p>I then want the user to select the column that they want and I want that column choice saved in a variable using Flask.</p>
<p>However, I'm having trouble retrieving the option that the user then selects from this select tag. </p>
<p>My template code:</p>
<pre><code><form class="form-horizontal" action="" method="post" enctype="multipart/form-data">
<fieldset>
<legend>Text Analytics</legend>
<div class="form-group form-group-lg">
<label for="inputData" class="col-lg-2 control-label">Choose Data File:</label>
<div class="col-lg-5">
<input type="file" class="form-control" required="required" autofocus="autofocus" name="inputData"/>
</div>
<div class="col-lg-3">
<button class="btn btn-primary" type="submit" name="upload">Upload</button>
</div>
{% if success %}
<div class="alert alert-success">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
<strong>Success!</strong> {{success}}.
</div>
{% endif %}
{% if error %}
<div class="alert alert-warning">
<a href="#" class="close" data-dismiss="alert" aria-label="close">&times;</a>
<strong>Error:</strong> {{error}}.
</div>
{% endif %}
</div>
<div class="form-group form-group-lg">
<label for="colSelect" class="col-lg-2 control-label">Select column for analysis:</label>
<div class="col-lg-5">
<select class="form-control" name="colSelect" id="colSelect">
{% for column in columns %}
<option id="{{column}}">{{column}}</option>
{% endfor %}
</select>
</div>
</code></pre>
<p>My Flask code:</p>
<pre><code>@app.route('/textanalytics', methods = ['GET', 'POST'])
def upload_file():
error = None
success = None
columns = []
col = None
if request.method == "POST":
if request.files['inputData'] == '':
error = "No file selected"
else:
file = request.files['inputData']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
success = 'File uploaded'
data = pd.read_csv(os.path.join(app.config['UPLOAD_FOLDER'], filename), header = 0, low_memory = False)
columns = [i for i in data.columns]
col = request.form.get('colSelect')
return render_template('textanalytics.html', success = success, columns = columns, col = col)
elif file and not allowed_file(file.filename):
error = 'Incorrect file type, .csv only'
return render_template('textanalytics.html', error = error)
return render_template('textanalytics.html', error = error, success = success, columns = columns, col = col)
app.add_url_rule('/uploads/<filename>', 'uploaded_file', build_only=True)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {'/uploads': app.config['UPLOAD_FOLDER']})
</code></pre>
<p>As you can see, I am using <code>request.form.get('colSelect')</code> to retrieve the option but without any luck. It just returns <code>None</code> which is its initialised value.</p>
<p>I have a feeling it has to do with where I am placing that code but I am new to Flask and so could do with some help.</p>
| 0 | 2016-09-28T07:54:30Z | 39,750,538 | <p>I managed to figure it out. I took the form and separated it over two pages. So, the <strong>File Upload</strong> part was in its own form on a page called <code>textform1.html</code>. The uploaded file would be converted to a pandas dataframe and then a new page called <code>textform2.html</code> would be rendered. This page contained just the <code><select></code> part in its own form and thus I was then able to capture the selection from the user.</p>
| 0 | 2016-09-28T14:33:41Z | [
"javascript",
"python",
"html",
"flask"
]
|
regex to parse out certain value that i want | 39,741,419 | <p>Using <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a></p>
<ul>
<li>MY current regex Expression: <code>^.*'(\d\s*.*)'*$</code></li>
</ul>
<p>which doesn't seem to be working. What is the right combination/formula that I should use?</p>
<p>I want to be able to parse out <strong>4</strong> variables, namely
<strong>items, quantity, cost and Total</strong></p>
<p>MY CODE:</p>
<pre><code>import re
str = "xxxxxxxxxxxxxxxxxx"
match = re.match(r"^.*'(\d\s*.*)'*$",str)
print match.group(1)
</code></pre>
| -1 | 2016-09-28T08:00:12Z | 39,742,249 | <p>The following regex matches each ingredient string and stores the wanted information into groups: <code>r'^(\d+)\s+([A-Za-z ]+)\s+(\d+(?:\.\d*))$'</code></p>
<p>It defines 3 groups, each separated from the others by whitespace:</p>
<ul>
<li><code>^</code> marks the string start</li>
<li><code>(\d+)</code> is the first group and looks for at least one digit</li>
<li><code>\s+</code> is the first separation between groups and looks for at least one white character</li>
<li><code>([A-Za-z ]+)</code> is the second group and looks for at least one alphabetical character or space</li>
<li><code>\s+</code> is the second separation beween groups and looks for at least one white character</li>
<li><code>(\d+(?:\.\d*))</code> is the third group and looks for at least one digit, optionally followed by a decimal point and more digits</li>
<li><code>$</code> marks the string end</li>
</ul>
<p>A regex to obtain the total does not need to be explained I think.</p>
<p>Here is a test code using your test data. Is should be a good starting point:</p>
<pre><code>import re
TEST_DATA = ['Table: Waiter: kenny',
'======================================',
'1 SAUSAGE WRAPPED WITH B 10.00',
'1 ESCARGOT WITH GARLIC H 12.00',
'1 PAN SEARED FOIE GRAS 15.00',
'1 SAUTE FIELD MUSHROOM W 9.00',
'1 CRISPY CHICKEN WINGS 7.00',
'1 ONION RINGS 6.00',
'----------------------------------',
'TOTAL 59.00',
'CASH 59.00',
'CHANGE 0.00',
'Signature:__________________________',
'Thank you & see you again soon!']
INGREDIENT_RE = re.compile(r'^(\d+)\s+([A-Za-z ]+)\s+(\d+(?:\.\d*))$')
TOTAL_RE = re.compile(r'^TOTAL\s+(.+)$')
ingredients = []
total = None
for string in TEST_DATA:
match = INGREDIENT_RE.match(string)
if match:
ingredients.append(match.groups())
continue
match = TOTAL_RE.match(string)
if match:
total = match.groups()[0]
break
print(ingredients)
print(total)
</code></pre>
<p>this prints:</p>
<pre><code>[('1', 'SAUSAGE WRAPPED WITH B', '10.00'), ('1', 'ESCARGOT WITH GARLIC H', '12.00'), ('1', 'PAN SEARED FOIE GRAS', '15.00'), ('1', 'SAUTE FIELD MUSHROOM W', '9.00'), ('1', 'CRISPY CHICKEN WINGS', '7.00'), ('1', 'ONION RINGS', '6.00')]
59.00
</code></pre>
<hr>
<p>Edit on Python raw strings:</p>
<p>The <code>r</code> character before a Python string indicates that it is a raw string, which means that special characters (like <code>\t</code>, <code>\n</code>, etc.) are not interpreted.</p>
<p>To be clear, for example, in a standard string <code>\t</code> is one tab character. In a raw string it is two characters: <code>\</code> and <code>t</code>.</p>
<p><code>r'\t'</code> is equivalent to <code>'\\t'</code>.</p>
<p><a href="https://docs.python.org/2/reference/lexical_analysis.html#string-literals" rel="nofollow">more details in the doc</a></p>
| 1 | 2016-09-28T08:42:37Z | [
"python",
"regex"
]
|
Python: subprocess not writing output files | 39,741,422 | <p>Using Python's <code>subprocess.call</code>, I am trying to invoke a C program that reads an input file on disk, and creates multiple output files. Running the C program from terminal gives the expected results, though <code>subprocess.call</code> does not.</p>
<p>In the minimal example, the program should read the input file in a scratch folder, and create an output file in the same folder. The locations and names of the input & output files are hardcoded into the C program.</p>
<pre><code>import subprocess
subprocess.call('bin/program.exe') # parses input file 'scratch/input.txt'
with open('scratch/output.txt') as f:
print(f.read())
</code></pre>
<p>This returns:</p>
<p><code>FileNotFoundError: [Errno 2] No such file or directory: 'scratch/output.txt'</code></p>
<p>What am I doing wrong?</p>
<p>Using <code>subprocess.check_output</code>, I see no errors. </p>
<p>EDIT:
So I see the subprocess working directory is involved. The C executable has hardcoded paths for input/output relative to the exe (ie, '../scratch/input.txt'), but the <code>subprocess.call()</code> invocation required paths <em>relative to the python script</em>, not the exe. This is unexpected, and produces very different results from invoking the exe from the terminal.</p>
| 0 | 2016-09-28T08:00:15Z | 39,741,463 | <pre><code>import os
import subprocess

subprocess.call('bin/program.exe') # parses input file 'scratch/input.txt'
if not os.path.isdir('scratch'):
os.mkdir('scratch')
with open(os.path.join('scratch','output.txt'), 'w') as f:
f.write('Your message')
</code></pre>
<p>You have to open the file with an explicit mode, for example <code>'w'</code> to write. You will need to join the path with os.path.join().</p>
<p>You can create the folder and the file if they don't exist, but then there will be nothing to read from. If you want to write to them, that can be achieved as shown above.</p>
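Given the questioner's EDIT, the root cause here is usually the child process's working directory rather than the file mode: relative paths inside the executable resolve against wherever the parent launches it from. <code>subprocess.call</code> accepts a <code>cwd</code> argument that pins this down. A minimal, self-contained sketch (a Python one-liner stands in for the question's <code>bin/program.exe</code>; that substitution is an assumption for illustration):

```python
import os
import subprocess
import sys
import tempfile

# The child writes 'output.txt' relative to ITS working directory.
# Passing cwd= makes that directory explicit, so behaviour matches
# launching the program from a terminal inside that folder.
workdir = tempfile.mkdtemp()
child_code = "open('output.txt', 'w').write('done')"
subprocess.call([sys.executable, '-c', child_code], cwd=workdir)

with open(os.path.join(workdir, 'output.txt')) as f:
    print(f.read())  # done
```

With the real executable this would be something like <code>subprocess.call('bin/program.exe', cwd='bin')</code>, so its hardcoded <code>'../scratch/...'</code> paths resolve as they do from the terminal.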
| 1 | 2016-09-28T08:02:31Z | [
"python",
"subprocess"
]
|
Pandas replace a character in all column names | 39,741,429 | <p>I have data frames with column names (coming from .csv files) containing <code>(</code> and <code>)</code> and I'd like to replace them with <code>_</code>.</p>
<p>How can I do that in place for all columns?</p>
| 1 | 2016-09-28T08:00:31Z | 39,741,442 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>:</p>
<pre><code>df.columns = df.columns.str.replace("[()]", "_")
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'(A)':[1,2,3],
'(B)':[4,5,6],
'C)':[7,8,9]})
print (df)
(A) (B) C)
0 1 4 7
1 2 5 8
2 3 6 9
df.columns = df.columns.str.replace(r"[()]", "_")
print (df)
_A_ _B_ C_
0 1 4 7
1 2 5 8
2 3 6 9
</code></pre>
| 1 | 2016-09-28T08:01:15Z | [
"python",
"pandas"
]
|
Issue with unit test for tiny Flask app | 39,741,983 | <p>I have a very primitive Flask application which works as I'm expecting, but I'm failing to write a unit test for it. The code of the app is as follows (I omit the insignificant parts):</p>
<pre><code>app.py
from flask import *
import random
import string
app = Flask(__name__)
keys = []
app.testing = True
@app.route('/keygen/api/keys', methods=['POST'])
def create():
symbol = string.ascii_letters + string.digits
key = ''.join(random.choice(symbol) for _ in range(4))
key_instance = {'key': key, 'is_used': False}
keys.append(key_instance)
return jsonify({'keys': keys}), 201
</code></pre>
<p>The test is:</p>
<pre><code>tests.py
import unittest
from flask import *
import app
class TestCase(unittest.TestCase):
def test_number_one(self):
test_app = Flask(app)
with test_app.test_client() as client:
rv = client.post('/keygen/api/keys')
...something...
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>The traceback:</p>
<pre><code>ERROR: test_number_one (__main__.TestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 12, in test_number_one
test_app = Flask(app)
File "/Users/bulrathi/Yandex.Disk.localized/Virtualenvs/ailove/lib/python3.5/site-packages/flask/app.py", line 346, in __init__
root_path=root_path)
File "/Users/bulrathi/Yandex.Disk.localized/Virtualenvs/ailove/lib/python3.5/site-packages/flask/helpers.py", line 807, in __init__
root_path = get_root_path(self.import_name)
File "/Users/bulrathi/Yandex.Disk.localized/Virtualenvs/ailove/lib/python3.5/site-packages/flask/helpers.py", line 668, in get_root_path
filepath = loader.get_filename(import_name)
File "<frozen importlib._bootstrap_external>", line 384, in _check_name_wrapper
ImportError: loader for app cannot handle <module 'app' from '/Users/bulrathi/Yandex.Disk.localized/Обучение/Code/Django practice/ailove/keygen/app.py'>
----------------------------------------------------------------------
Ran 1 test in 0.003s
FAILED (errors=1)
</code></pre>
<p>Thanks for your time.</p>
| 0 | 2016-09-28T08:28:57Z | 39,743,443 | <p>You have a few issues with the posted code (indentation aside):</p>
<p>First, in <code>tests.py</code> you <code>import app</code> and use it, but <code>app</code> is the module rather than the <code>app</code> object from <code>app.py</code>. You should import the actual <code>app</code> object using</p>
<pre><code>from app import app
</code></pre>
<p>and secondly, you are using that <code>app</code> object (assuming we fix the import) as a parameter to another <code>Flask()</code> constructor, essentially saying:</p>
<pre><code>app = Flask(Flask(app))
</code></pre>
<p>Since we've imported <code>app</code> from <code>app.py</code>, we can just use it directly, so we remove the <code>app = Flask(app)</code> line (and the associated <code>import</code> statement as we don't need it anymore) and your test file becomes:</p>
<pre><code>import unittest
from app import app
class TestCase(unittest.TestCase):
def test_number_one(self):
with app.test_client() as client:
rv = client.post('/keygen/api/keys')
...something...
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>You should note also that the <code>from flask import *</code> form is discouraged in favour of importing specific parts of the module, so</p>
<pre><code>from flask import Flask, jsonify
</code></pre>
<p>would be better in your <code>app.py</code>.</p>
| 2 | 2016-09-28T09:32:38Z | [
"python",
"unit-testing",
"flask"
]
|
Spyder can not import plyfile | 39,742,082 | <p>I have installed plyfile in the Scripts subdirectory of Anaconda3 (I run Windows 10), using pip3. When I enter the command
import plyfile
in the Python 3.5 Shell, the command is executed without problems.<br>
But when I move to Spyder and I enter the same command in the console I receive an error message:</p>
<pre><code> import plyfile
Traceback (most recent call last):
File "<ipython-input-3-db7ef797d821>", line 1, in <module>
import plyfile
ImportError: No module named 'plyfile'
</code></pre>
<p>I tried to add the path of the file to the environment using the Windows command prompt, entering at the C:\ prompt the command:</p>
<pre><code> sys.path.append(C:\Users\Alexandros_7\Downloads\plyfile-0.4)
</code></pre>
<p>but I received the error message:</p>
<pre><code> "sys.path.append" is not recognized as an internal or external command, operable program or batch file
</code></pre>
<p>I also tried the following commands from the console of Spyder: </p>
<pre><code> import sys
sys.path.append("C:\\Users\\Alexandros\\Downloads\\plyfile")
</code></pre>
<p>These commands were executed without a problem. Then I entered, import plyfile and I received the same error message -- i.e. that "No module named "plyfile"".</p>
<p>Could you please help me?</p>
 | 0 | 2016-09-28T08:34:08Z | 39,745,055 | <p>Try giving Spyder the same environment paths as Anaconda, in your case the path to the installed plyfile, e.g.:</p>
<p>your_module is located in: "C:\Users\Alexandros\your_module.py"</p>
<p>In Spyder, let's add the located path to the system path:</p>
<pre><code>import sys
sys.path.append("C:\Users\Alexandros\") #need string
</code></pre>
<p>Now let's import your module:</p>
<pre><code>import your_module
reload(your_module)
</code></pre>
| 0 | 2016-09-28T10:37:10Z | [
"python"
]
|
High performance all-to-all comparison of vectors in Python | 39,742,127 | <p>First, some background: several methods for comparing clusterings rely on so-called pair counting. We have two vectors of flat clusterings <code>a</code> and <code>b</code> over the same <code>n</code> entities. In pair counting, for all possible pairs of entities we check whether they belong to the same cluster in both, or to the same in <code>a</code> but different in <code>b</code>, or the opposite, or to different clusters in both. This way we get 4 counts, let's call them <code>n11, n10, n01, n00</code>. These are input for different metrics.</p>
<p>When the number of entities is around 10,000, and the number of clusterings to compare is dozens or more, the performance becomes an issue, as the number of pairs is <code>10^8</code> for each comparison, and for an all-to-all comparison of clusterings this needs to be performed <code>10^4</code> times.</p>
<p>With a naive Python implementation it took forever, so I turned to Cython and numpy. This way I could push the time for one comparison down to around <code>0.9-3.0</code> seconds, which still means half a day of runtime in my case. I am wondering if you see any possibility for a performance improvement, with some clever algorithm or C library, or whatever.</p>
<p>Here are my attempts:</p>
<p>1) This counts without allocating huge arrays for all the pairs, takes 2 membership vectors <code>a1, a2</code> of length <code>n</code>, and returns the counts:</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
ctypedef np.int32_t DTYPE_t
@cython.boundscheck(False)
def pair_counts(
np.ndarray[DTYPE_t, ndim = 1] a1,
np.ndarray[DTYPE_t, ndim = 1] a2,
):
cdef unsigned int a1s = a1.shape[0]
cdef unsigned int a2s = a2.shape[0]
cdef unsigned int n11, n10, n01, n00
n11 = n10 = n01 = n00 = 0
cdef unsigned int j0
for i in range(0, a1s - 1):
j0 = i + 1
for j in range(j0, a2s):
if a1[i] == a1[j] and a2[i] == a2[j]:
n11 += 1
elif a1[i] == a1[j]:
n10 += 1
elif a2[i] == a2[j]:
n01 += 1
else:
n00 += 1
return n11, n10, n01, n00
</code></pre>
<p>2) This first calculates comembership vectors (length <code>n * (n-1) / 2</code>, one element for each entity pair) for each of the 2 clusterings, then calculates the counts from these vectors. It allocates ~20-40M memory at each comparison, but interestingly, faster than the previous. Note: <code>c</code> is a wrapper class around a clustering, having the usual membership vector, but also a <code>c.members</code> dict which contains the indices of entities for each cluster in numpy arrays.</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
def comembership(c):
"""
Returns comembership vector, where each value tells if one pair
of entites belong to the same group (1) or not (0).
"""
cdef int n = len(c.memberships)
cdef int cnum = c.cnum
cdef int ri, i, ii, j, li
cdef unsigned char[:] cmem = \
np.zeros((int(n * (n - 1) / 2), ), dtype = np.uint8)
for ci in xrange(cnum):
# here we use the members dict to have the indices of entities
# in cluster (ci), as a numpy array (mi)
mi = c.members[ci]
for i in xrange(len(mi) - 1):
ii = mi[i]
# this is only to convert the indices of an n x n matrix
# to the indices of a 1 x (n x (n-1) / 2) vector:
ri = n * ii - 3 * ii / 2 - ii ** 2 / 2 - 1
for j in mi[i+1:]:
# also here, adding j only for having the correct index
li = ri + j
cmem[li] = 1
return np.array(cmem)
def pair_counts(c1, c2):
p1 = comembership(c1)
p2 = comembership(c2)
n = len(c1.memberships)
a11 = p1 * p2
n11 = a11.sum()
n10 = (p1 - a11).sum()
n01 = (p2 - a11).sum()
n00 = n - n10 - n01 - n11
return n11, n10, n01, n00
</code></pre>
<p>3) This is a pure numpy based solution with creating an <code>n x n</code> boolean array of comemberships of entity pairs. The inputs are the membership vectors (<code>a1, a2</code>).</p>
<pre><code>def pair_counts(a1, a2):
n = len(a1)
cmem1 = a1.reshape([n,1]) == a1.reshape([1,n])
cmem2 = a2.reshape([n,1]) == a2.reshape([1,n])
n11 = int(((cmem1 == cmem2).sum() - n) / 2)
n10 = int((cmem1.sum() - n) / 2) - n11
n01 = int((cmem2.sum() - n) / 2) - n11
n00 = n - n11 - n10 - n01
return n11, n10, n01, n00
</code></pre>
<p><strong>Edit:</strong> example data</p>
<pre><code>import numpy as np
a1 = np.random.randint(0, 1868, 14440, dtype = np.int32)
a2 = np.random.randint(0, 484, 14440, dtype = np.int32)
# to have the members dicts used in example 2:
def get_cnum(a):
"""
Returns number of clusters.
"""
return len(np.unique(a))
def get_members(a):
"""
Returns a dict with cluster numbers as keys and member entities
as sorted numpy arrays.
"""
members = dict(map(lambda i: (i, []), range(max(a) + 1)))
list(map(lambda m: members[m[1]].append(m[0]),
enumerate(a)))
members = dict(map(lambda m:
(m[0], np.array(sorted(m[1]), dtype = np.int)),
members.items()))
return members
members1 = get_members(a1)
members2 = get_members(a2)
cnum1 = get_cnum(a1)
cnum2 = get_cnum(a2)
</code></pre>
| 3 | 2016-09-28T08:36:43Z | 39,746,217 | <p>The approach based on sorting has merit, but can be done a lot simpler:</p>
<pre><code>def pair_counts(a, b):
n = a.shape[0] # also b.shape[0]
counts_a = np.bincount(a)
counts_b = np.bincount(b)
sorter_a = np.argsort(a)
n11 = 0
same_a_offset = np.cumsum(counts_a)
for indices in np.split(sorter_a, same_a_offset):
b_check = b[indices]
n11 += np.count_nonzero(b_check == b_check[:,None])
n11 = (n11-n) // 2
n10 = (np.sum(counts_a**2) - n) // 2 - n11
n01 = (np.sum(counts_b**2) - n) // 2 - n11
n00 = n**2 - n - n11 - n10 - n01
return n11, n10, n01, n00
</code></pre>
<p>If this method is coded efficiently in Cython there's another speedup (probably ~20x) to be gained.</p>
<hr>
<p>Edit:</p>
<p>I found a way to completely vectorize the procedure and lower the time complexity:</p>
<pre><code>def sizes2count(a, n):
return (np.inner(a, a) - n) // 2
def pair_counts_vec_nlogn(a, b):
# Size of "11" clusters (a[i]==a[j] & b[i]==b[j])
    ab = a * (b.max() + 1) + b # make sure max(a)*(max(b)+1) fits the dtype!
_, sizes = np.unique(ab, return_counts=True)
# Calculate the counts for each type of pairing
n = len(a) # also len(b)
n11 = sizes2count(sizes, n)
n10 = sizes2count(np.bincount(a), n) - n11
n01 = sizes2count(np.bincount(b), n) - n11
n00 = n**2 - n - n11 - n10 - n01
return n11, n10, n01, n00
def pair_counts_vec_linear(a, b):
# Label "11" clusters (a[i]==a[j] & b[i]==b[j])
    ab = a * (b.max() + 1) + b
# Calculate the counts for each type of pairing
n = len(a) # also len(b)
n11 = sizes2count(np.bincount(ab), n)
n10 = sizes2count(np.bincount(a), n) - n11
n01 = sizes2count(np.bincount(b), n) - n11
n00 = n**2 - n - n11 - n10 - n01
return n11, n10, n01, n00
</code></pre>
<p>Sometimes the O(n log(n)) algorithm is faster than the linear one, because the linear one uses <code>max(a)*max(b)</code> storage. Naming can probably be improved, I'm not really familiar with the terminology.</p>
| 1 | 2016-09-28T11:32:04Z | [
"python",
"algorithm",
"numpy",
"cluster-analysis",
"cython"
]
|
High performance all-to-all comparison of vectors in Python | 39,742,127 | <p>First, some background: several methods for comparing clusterings rely on so-called pair counting. We have two vectors of flat clusterings <code>a</code> and <code>b</code> over the same <code>n</code> entities. In pair counting, for all possible pairs of entities we check whether they belong to the same cluster in both, or to the same in <code>a</code> but different in <code>b</code>, or the opposite, or to different clusters in both. This way we get 4 counts, let's call them <code>n11, n10, n01, n00</code>. These are input for different metrics.</p>
<p>When the number of entities is around 10,000, and the number of clusterings to compare is dozens or more, the performance becomes an issue, as the number of pairs is <code>10^8</code> for each comparison, and for an all-to-all comparison of clusterings this needs to be performed <code>10^4</code> times.</p>
<p>With a naive Python implementation it took forever, so I turned to Cython and numpy. This way I could push the time for one comparison down to around <code>0.9-3.0</code> seconds, which still means half a day of runtime in my case. I am wondering if you see any possibility for a performance improvement, with some clever algorithm or C library, or whatever.</p>
<p>Here are my attempts:</p>
<p>1) This counts without allocating huge arrays for all the pairs, takes 2 membership vectors <code>a1, a2</code> of length <code>n</code>, and returns the counts:</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
ctypedef np.int32_t DTYPE_t
@cython.boundscheck(False)
def pair_counts(
np.ndarray[DTYPE_t, ndim = 1] a1,
np.ndarray[DTYPE_t, ndim = 1] a2,
):
cdef unsigned int a1s = a1.shape[0]
cdef unsigned int a2s = a2.shape[0]
cdef unsigned int n11, n10, n01, n00
n11 = n10 = n01 = n00 = 0
cdef unsigned int j0
for i in range(0, a1s - 1):
j0 = i + 1
for j in range(j0, a2s):
if a1[i] == a1[j] and a2[i] == a2[j]:
n11 += 1
elif a1[i] == a1[j]:
n10 += 1
elif a2[i] == a2[j]:
n01 += 1
else:
n00 += 1
return n11, n10, n01, n00
</code></pre>
<p>2) This first calculates comembership vectors (length <code>n * (n-1) / 2</code>, one element for each entity pair) for each of the 2 clusterings, then calculates the counts from these vectors. It allocates ~20-40M memory at each comparison, but interestingly, faster than the previous. Note: <code>c</code> is a wrapper class around a clustering, having the usual membership vector, but also a <code>c.members</code> dict which contains the indices of entities for each cluster in numpy arrays.</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
def comembership(c):
"""
Returns comembership vector, where each value tells if one pair
of entites belong to the same group (1) or not (0).
"""
cdef int n = len(c.memberships)
cdef int cnum = c.cnum
cdef int ri, i, ii, j, li
cdef unsigned char[:] cmem = \
np.zeros((int(n * (n - 1) / 2), ), dtype = np.uint8)
for ci in xrange(cnum):
# here we use the members dict to have the indices of entities
# in cluster (ci), as a numpy array (mi)
mi = c.members[ci]
for i in xrange(len(mi) - 1):
ii = mi[i]
# this is only to convert the indices of an n x n matrix
# to the indices of a 1 x (n x (n-1) / 2) vector:
ri = n * ii - 3 * ii / 2 - ii ** 2 / 2 - 1
for j in mi[i+1:]:
# also here, adding j only for having the correct index
li = ri + j
cmem[li] = 1
return np.array(cmem)
def pair_counts(c1, c2):
p1 = comembership(c1)
p2 = comembership(c2)
n = len(c1.memberships)
a11 = p1 * p2
n11 = a11.sum()
n10 = (p1 - a11).sum()
n01 = (p2 - a11).sum()
n00 = n - n10 - n01 - n11
return n11, n10, n01, n00
</code></pre>
<p>3) This is a pure numpy based solution with creating an <code>n x n</code> boolean array of comemberships of entity pairs. The inputs are the membership vectors (<code>a1, a2</code>).</p>
<pre><code>def pair_counts(a1, a2):
n = len(a1)
cmem1 = a1.reshape([n,1]) == a1.reshape([1,n])
cmem2 = a2.reshape([n,1]) == a2.reshape([1,n])
n11 = int(((cmem1 == cmem2).sum() - n) / 2)
n10 = int((cmem1.sum() - n) / 2) - n11
n01 = int((cmem2.sum() - n) / 2) - n11
n00 = n - n11 - n10 - n01
return n11, n10, n01, n00
</code></pre>
<p><strong>Edit:</strong> example data</p>
<pre><code>import numpy as np
a1 = np.random.randint(0, 1868, 14440, dtype = np.int32)
a2 = np.random.randint(0, 484, 14440, dtype = np.int32)
# to have the members dicts used in example 2:
def get_cnum(a):
"""
Returns number of clusters.
"""
return len(np.unique(a))
def get_members(a):
"""
Returns a dict with cluster numbers as keys and member entities
as sorted numpy arrays.
"""
members = dict(map(lambda i: (i, []), range(max(a) + 1)))
list(map(lambda m: members[m[1]].append(m[0]),
enumerate(a)))
members = dict(map(lambda m:
(m[0], np.array(sorted(m[1]), dtype = np.int)),
members.items()))
return members
members1 = get_members(a1)
members2 = get_members(a2)
cnum1 = get_cnum(a1)
cnum2 = get_cnum(a2)
</code></pre>
| 3 | 2016-09-28T08:36:43Z | 39,747,414 | <p>To compare two clusterings <code>A</code> and <code>B</code> in linear time:</p>
<ol>
<li>Iterate through clusters in <code>A</code>. Let the size of each cluster be <code>a_i</code>. The total number of pairs in the same cluster in <code>A</code> is the total of all <code>a_i*(a_i-1)/2</code>.</li>
<li>Partition each A-cluster according to its cluster in <code>B</code>. Let the size of each partition be <code>b_j</code>. The total number of pairs in the same cluster in both <code>A</code> and <code>B</code> is the total of all <code>b_j *(b_j-1)/2</code>.</li>
<li>The difference between the two is the total number of pairs that are in the same cluster in A but not B</li>
<li>Iterate through the clusters in <code>B</code> to get the total number of pairs in the same cluster in <code>B</code>, and subtract the result of (2) from it to get pairs in the same cluster in <code>B</code> but not <code>A</code>.</li>
<li>The sum of the above 3 results is the number of pairs that are the same in either A or B. Subtract from n*(n-1)/2 to get the total number of pairs that are in different clusters in A and B</li>
</ol>
<p>The partitioning in step (2) is done by making a dictionary mapping item -> cluster for B and then looking up each item in each A-cluster. If you're cross-comparing lots of clusterings, you can save a lot of time by computing these maps just once for each clustering and keeping them around.</p>
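A sketch of these five steps (my own illustration, not the answerer's code): with <code>collections.Counter</code>, zipping the two membership vectors performs the step-(2) partition in a single pass.

```python
from collections import Counter

def pair_counts(a, b):
    """Linear-time pair counting via cluster sizes (steps 1-5 above)."""
    def pairs(k):               # number of pairs inside a group of size k
        return k * (k - 1) // 2
    n = len(a)
    same_a = sum(pairs(c) for c in Counter(a).values())       # step 1
    n11 = sum(pairs(c) for c in Counter(zip(a, b)).values())  # step 2
    n10 = same_a - n11                                        # step 3
    same_b = sum(pairs(c) for c in Counter(b).values())
    n01 = same_b - n11                                        # step 4
    n00 = pairs(n) - n11 - n10 - n01                          # step 5
    return n11, n10, n01, n00

print(pair_counts([0, 0, 1, 1], [0, 0, 0, 1]))  # (1, 1, 2, 2)
```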
| 1 | 2016-09-28T12:23:22Z | [
"python",
"algorithm",
"numpy",
"cluster-analysis",
"cython"
]
|
High performance all-to-all comparison of vectors in Python | 39,742,127 | <p>First, some background: several methods for comparing clusterings rely on so-called pair counting. We have two vectors of flat clusterings <code>a</code> and <code>b</code> over the same <code>n</code> entities. In pair counting, for all possible pairs of entities we check whether they belong to the same cluster in both, or to the same in <code>a</code> but different in <code>b</code>, or the opposite, or to different clusters in both. This way we get 4 counts, let's call them <code>n11, n10, n01, n00</code>. These are input for different metrics.</p>
<p>When the number of entities is around 10,000, and the number of clusterings to compare is dozens or more, the performance becomes an issue, as the number of pairs is <code>10^8</code> for each comparison, and for an all-to-all comparison of clusterings this needs to be performed <code>10^4</code> times.</p>
<p>With a naive Python implementation it took forever, so I turned to Cython and numpy. This way I could push the time for one comparison down to around <code>0.9-3.0</code> seconds, which still means half a day of runtime in my case. I am wondering if you see any possibility for a performance improvement, with some clever algorithm or C library, or whatever.</p>
<p>Here are my attempts:</p>
<p>1) This counts without allocating huge arrays for all the pairs, takes 2 membership vectors <code>a1, a2</code> of length <code>n</code>, and returns the counts:</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
ctypedef np.int32_t DTYPE_t
@cython.boundscheck(False)
def pair_counts(
np.ndarray[DTYPE_t, ndim = 1] a1,
np.ndarray[DTYPE_t, ndim = 1] a2,
):
cdef unsigned int a1s = a1.shape[0]
cdef unsigned int a2s = a2.shape[0]
cdef unsigned int n11, n10, n01, n00
n11 = n10 = n01 = n00 = 0
cdef unsigned int j0
for i in range(0, a1s - 1):
j0 = i + 1
for j in range(j0, a2s):
if a1[i] == a1[j] and a2[i] == a2[j]:
n11 += 1
elif a1[i] == a1[j]:
n10 += 1
elif a2[i] == a2[j]:
n01 += 1
else:
n00 += 1
return n11, n10, n01, n00
</code></pre>
<p>2) This first calculates comembership vectors (length <code>n * (n-1) / 2</code>, one element for each entity pair) for each of the 2 clusterings, then calculates the counts from these vectors. It allocates ~20-40M memory at each comparison, but interestingly, faster than the previous. Note: <code>c</code> is a wrapper class around a clustering, having the usual membership vector, but also a <code>c.members</code> dict which contains the indices of entities for each cluster in numpy arrays.</p>
<pre><code>cimport cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
def comembership(c):
"""
Returns comembership vector, where each value tells if one pair
of entites belong to the same group (1) or not (0).
"""
cdef int n = len(c.memberships)
cdef int cnum = c.cnum
cdef int ri, i, ii, j, li
cdef unsigned char[:] cmem = \
np.zeros((int(n * (n - 1) / 2), ), dtype = np.uint8)
for ci in xrange(cnum):
# here we use the members dict to have the indices of entities
# in cluster (ci), as a numpy array (mi)
mi = c.members[ci]
for i in xrange(len(mi) - 1):
ii = mi[i]
# this is only to convert the indices of an n x n matrix
# to the indices of a 1 x (n x (n-1) / 2) vector:
ri = n * ii - 3 * ii / 2 - ii ** 2 / 2 - 1
for j in mi[i+1:]:
# also here, adding j only for having the correct index
li = ri + j
cmem[li] = 1
return np.array(cmem)
def pair_counts(c1, c2):
p1 = comembership(c1)
p2 = comembership(c2)
n = len(c1.memberships)
a11 = p1 * p2
n11 = a11.sum()
n10 = (p1 - a11).sum()
n01 = (p2 - a11).sum()
n00 = n - n10 - n01 - n11
return n11, n10, n01, n00
</code></pre>
<p>3) This is a pure numpy based solution with creating an <code>n x n</code> boolean array of comemberships of entity pairs. The inputs are the membership vectors (<code>a1, a2</code>).</p>
<pre><code>def pair_counts(a1, a2):
n = len(a1)
cmem1 = a1.reshape([n,1]) == a1.reshape([1,n])
cmem2 = a2.reshape([n,1]) == a2.reshape([1,n])
n11 = int(((cmem1 == cmem2).sum() - n) / 2)
n10 = int((cmem1.sum() - n) / 2) - n11
n01 = int((cmem2.sum() - n) / 2) - n11
n00 = n - n11 - n10 - n01
return n11, n10, n01, n00
</code></pre>
<p><strong>Edit:</strong> example data</p>
<pre><code>import numpy as np
a1 = np.random.randint(0, 1868, 14440, dtype = np.int32)
a2 = np.random.randint(0, 484, 14440, dtype = np.int32)
# to have the members dicts used in example 2:
def get_cnum(a):
"""
Returns number of clusters.
"""
return len(np.unique(a))
def get_members(a):
"""
Returns a dict with cluster numbers as keys and member entities
as sorted numpy arrays.
"""
members = dict(map(lambda i: (i, []), range(max(a) + 1)))
list(map(lambda m: members[m[1]].append(m[0]),
enumerate(a)))
members = dict(map(lambda m:
(m[0], np.array(sorted(m[1]), dtype = np.int)),
members.items()))
return members
members1 = get_members(a1)
members2 = get_members(a2)
cnum1 = get_cnum(a1)
cnum2 = get_cnum(a2)
</code></pre>
| 3 | 2016-09-28T08:36:43Z | 39,762,726 | <p>You do <strong>not</strong> need to <em>enumerate and count</em> the pairs.</p>
<p>Instead, compute a <em>confusion matrix</em> containing the intersection sizes of each cluster from the first clustering with every cluster of the second clustering (this is one loop over all objects), then compute the number of pairs from this matrix using the equation <code>n*(n-1)/2</code>.</p>
<p>This reduces your runtime from O(n^2) to O(n), so it should give you a <em>considerable</em> speedup.</p>
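A sketch of that idea with numpy (my illustration of the answer, not the answerer's code): one indexed-add pass builds the contingency table, and <code>k*(k-1)/2</code> applied to its cells and margins yields the four counts.

```python
import numpy as np

def pair_counts_confusion(a, b):
    """Pair counts from the confusion (contingency) matrix of two labelings."""
    n = a.size
    C = np.zeros((a.max() + 1, b.max() + 1), dtype=np.int64)
    np.add.at(C, (a, b), 1)  # one pass over all objects

    def pairs(x):            # sum of k*(k-1)/2 over the entries of x
        return int((x * (x - 1) // 2).sum())

    n11 = pairs(C)                    # same cluster in both
    n10 = pairs(C.sum(axis=1)) - n11  # same in a only (row margins)
    n01 = pairs(C.sum(axis=0)) - n11  # same in b only (column margins)
    n00 = n * (n - 1) // 2 - n11 - n10 - n01
    return n11, n10, n01, n00

a = np.array([0, 0, 1, 1])
b = np.array([0, 0, 0, 1])
print(pair_counts_confusion(a, b))  # (1, 1, 2, 2)
```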
| 1 | 2016-09-29T06:16:36Z | [
"python",
"algorithm",
"numpy",
"cluster-analysis",
"cython"
]
|
Retrieving the correct .json for wunderground | 39,742,176 | <p>So far I have produced the following code:</p>
<pre><code>import requests
def weatherSearch():
Search = raw_input('Enter your location: ')
r = requests.get("http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/{}.json".format(Search))
weatherData = r.json()
print weatherData
weatherSearch()
</code></pre>
<p>For example, if <code>Search</code> was set to <code>London</code>, it would produce:</p>
<p><a href="http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/London.json" rel="nofollow">http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/London.json</a></p>
<p>However, this .json doesn't contain the temperature which is what I'm trying to find: <code>"temp_c":</code></p>
<p>Whereas on the following link, <code>"temp_c":</code> can be found:</p>
<p><a href="http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/CA/San_Francisco.json" rel="nofollow">http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/CA/San_Francisco.json</a></p>
<p>I'm struggling to understand what I'm doing wrong in order to retrieve the weather data.</p>
| 0 | 2016-09-28T08:38:58Z | 39,742,412 | <p>Looks like your query returns a list of possible matches, each of which has an <code>l</code> key which contains a link. Using that link brings you back the full data for that location. So, for instance, the full data for London, UK is at <a href="http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/zmw:00000.1.03772.json" rel="nofollow">http://api.wunderground.com/api/a8c3e5ce8970ae66/conditions/q/zmw:00000.1.03772.json</a>.</p>
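In code, the disambiguation case can be detected before looking for <code>temp_c</code>. A hedged sketch using a hard-coded sample instead of a live request (the <code>l</code> key comes from the API response described above; the other field names are illustrative):

```python
# When Wunderground cannot resolve a location it returns a list of
# candidate matches instead of a "current_observation" block; each
# candidate carries an 'l' key with a link to the unambiguous resource.
sample = {
    "response": {
        "results": [
            {"city": "London", "country": "UK", "l": "/q/zmw:00000.1.03772"},
            {"city": "London", "country": "CA", "l": "/q/zmw:00000.1.71623"},
        ]
    }
}

def candidate_links(data):
    """Return follow-up links for an ambiguous response, else None."""
    results = data.get("response", {}).get("results")
    if not results:
        return None
    return [r["l"] for r in results]

print(candidate_links(sample)[0])  # /q/zmw:00000.1.03772
```

Each returned link can then be spliced into the API URL (in place of the <code>/q/...</code> part, keeping the <code>.json</code> suffix) and fetched again with <code>requests.get</code> to obtain the record that contains <code>temp_c</code>.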
| 1 | 2016-09-28T08:49:30Z | [
"python",
"json",
"python-2.7",
"python-requests"
]
|
Parsing the SLA from Jira API | 39,742,177 | <p>I am currently in the process of using the Jira API to pull some data on tickets created into a separate database.</p>
<p>Tickets have been designed to follow ITIL standards and have a 'Time to first response' and a 'Time to resolution'.</p>
<p>Both of these are retrieved in JSON as follows:</p>
<pre><code> <customfield_10110>com.atlassian.servicedesk.internal.sla.model.SLAValue@619c2aec</customfield_10110>
<customfield_10111>com.atlassian.servicedesk.internal.sla.model.SLAValue@705770b9</customfield_10111>
</code></pre>
<p>It looks like there is a Hex value, but how do I get that to show the actual TTR and TTFR value I see in the ticket?</p>
<p><a href="http://i.stack.imgur.com/xh6UP.png" rel="nofollow"><img src="http://i.stack.imgur.com/xh6UP.png" alt="enter image description here"></a></p>
| 0 | 2016-09-28T08:38:59Z | 39,743,985 | <p>It seems this was a bug that was fixed in version 2.0</p>
| 0 | 2016-09-28T09:53:37Z | [
"python",
"jira",
"jira-rest-api"
]
|
How do I get the information that the user has changed in a table in PyQT with Python and SQLite3 | 39,742,199 | <p>I have a table that comes up on my GUI. The user can edit this table from the GUI. how do I get all of the information that has been edited and update it in the database? The user checks the checkbox for each row they want to have updated to the database, so I have a list of all rows that require updating. I want to have a list of tuples, where each tuple is a row of new values that need to be updated, given that the ID field remains unchanged (I also want to know how to make the user unable to edit some fields).</p>
<pre><code>def click_btn_mailouts(self):
self.screen_name = "mailouts"
self.cur.execute("""SELECT s.StudentID, s.FullName, m.PreviouslyMailed, m.nextMail, m.learnersDate, m.RestrictedDate, m.DefensiveDate FROM
StudentProfile s LEFT JOIN Mailouts m ON s.studentID=m.studentID""")
self.all_data = self.cur.fetchall()
self.table.setRowCount(len(self.all_data))
self.tableFields = ["Check","Full name","Previously mailed?","Next mail","learnersDate","Restricted date","Defensive driving date"]
self.table.setColumnCount(len(self.tableFields))
self.table.setHorizontalHeaderLabels(self.tableFields)
self.checkbox_list = []
for i, item in enumerate(self.all_data):
FullName = QtGui.QTableWidgetItem(str(item[1]))
PreviouslyMailed = QtGui.QTableWidgetItem(str(item[2]))
LearnersDate = QtGui.QTableWidgetItem(str(item[3]))
RestrictedDate = QtGui.QTableWidgetItem(str(item[4]))
DefensiveDate = QtGui.QTableWidgetItem(str(item[5]))
NextMail = QtGui.QTableWidgetItem(str(item[6]))
self.table.setItem(i, 1, FullName)
self.table.setItem(i, 2, PreviouslyMailed)
self.table.setItem(i, 3, LearnersDate)
self.table.setItem(i, 4, RestrictedDate)
self.table.setItem(i, 5, DefensiveDate)
self.table.setItem(i, 6, NextMail)
chkBoxItem = QtGui.QTableWidgetItem()
chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
chkBoxItem.setCheckState(QtCore.Qt.Unchecked)
self.checkbox_list.append(chkBoxItem)
self.table.setItem(i, 0, self.checkbox_list[i])
"""here is the format that I have for the edit function"""
def click_btn_edit(self):
checkedRows = []
for i, checkbox in enumerate(self.checkbox_list):
if checkbox.checkState() == QtCore.Qt.Checked:
checkedRows.append(i)
"""as the list itterates, if the checkbox item is ticked,
it passes through the if statement, otherwise it is ignored.
checkedRows becomes a list of all the indexes in the table where
an edit needs to be made"""
</code></pre>
<p>So basically I need to know how to get the changes made in the QTableWidget in the GUI given a list of indexes where changes have been made, and somehow get those changes updated into the database. It would also be helpful to know how to stop the user from editing some of the fields, as that would mess up the database.</p>
| 1 | 2016-09-28T08:40:25Z | 39,755,269 | <p>You can do a few different things.</p>
<p>To prevent editing, you can just remove the edit flag for the items you don't want the user to edit</p>
<pre><code>FullName.setFlags(FullName.flags() & ~Qt.ItemIsEditable)
</code></pre>
<p>It looks like you're storing the original data (i.e. <code>self.all_data</code>). You could just compare the data in the selected table cells with the original data and only update fields that have changed.</p>
<p>You could also connect to the <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtablewidget.html#itemChanged" rel="nofollow"><code>itemChanged</code></a> signal for the table widget and keep a running list of all the indexes that have changed since the last refresh</p>
<pre><code> ...
self.changed_items = set()
self.table.itemChanged.connect(self.log_change)
def log_change(self, item):
self.changed_items.add(item)
</code></pre>
<p>Alternatively, depending on how much control you want, you can also create a <code>QItemDelegate</code> to do all of this. </p>
| 0 | 2016-09-28T18:39:06Z | [
"python",
"sqlite3",
"pyqt"
]
|
Python) Combining three text files (lists) into one with sorting | 39,742,260 | <p>I'm trying to make a new text file which is combined from 3 different text files. I'll give more detail with code.</p>
<pre><code>text a file = z
a
y
text b file = s
x
d
text c file = 1
3
2
</code></pre>
<p>so there are a,b,c text files, and I want to make :</p>
<pre><code>text newfile: z s 1
y d 2
a x 3
</code></pre>
<p>As you can see above, I want the new file to be in the order of file 'C'.
Below is what I've made.</p>
<pre><code>def main():
a = open("text1","r")
b = open("text2","r")
c = open("text3","r")
text1list = []
text2list = []
text3list = []
for line1 in a:
line1 = line1.strip()
text1list.append(line1)
for line2 in b:
line2 = line2.strip()
text2list.append(line2)
for line3 in c:
line3 = line3.strip()
text3list.append(line3)
aa,bb,cc = zip(*sorted(zip(text3list, text1list, text2list)))
combine = list(zip(bb,cc,aa))
with open("finalfiles", 'w') as zzzz:
for item in combine:
zzzz.write("{}\n".format(item))
</code></pre>
<p>problem is, right now, my output is</p>
<pre><code>('z','s','1')
('y','d','2')
('a','x','3')
</code></pre>
<p>my sorting is working, but it's different from what I expect.
I don't want those ' ' and (). I think it's because those are lists...?
I'm stuck at this point.
Plus, please tell me my sorting looks fine !</p>
| 0 | 2016-09-28T08:43:06Z | 39,742,633 | <p>Your <code>combine</code> variable is a list of tuples. Join each tuple (<code>item</code>) and write to file</p>
<pre><code>zzzz.write(" ".join(item))
</code></pre>
<p>A somewhat shorter and DRY version of your code:</p>
<pre><code>files = ["text1", "text2", "text3"]
groups = []
for each_file in files:
with open(each_file, 'r') as fo:
groups.append(sorted(fo.read().split('\n')))
with open("finalfiles", 'w') as out_file:
for each_group in zip(*groups):
out_file.write("{} {} {}\n".format(*each_group))
</code></pre>
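<p>Note that the shorter version above sorts each file's lines independently, while the question orders every column by file C's values. A plain-Python sketch of that pairing (shown on in-memory lists for brevity):</p>

```python
def combine_by_third(list1, list2, list3):
    """Pair up lines from three lists, ordered by the values in list3."""
    triples = sorted(zip(list3, list1, list2))  # sort on list3's values
    return ["{} {} {}".format(b, c, a) for a, b, c in triples]

a = ['z', 'a', 'y']   # text a
b = ['s', 'x', 'd']   # text b
c = ['1', '3', '2']   # text c
print('\n'.join(combine_by_third(a, b, c)))
# z s 1
# y d 2
# a x 3
```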
| 1 | 2016-09-28T08:59:54Z | [
"python"
]
|
Efficiently collect links in dataframe | 39,742,275 | <p>Say I have a dataframe of type</p>
<pre><code>individual, location, food
1 A a
1 A b
1 B a
1 A c
2 C a
2 C b
</code></pre>
<p>where individuals are creating links between location and food. I would like to collect all links on the individual basis. That is, if an individual was observed at locations <code>A</code> and <code>B</code> and had (eventually) food at <code>a</code>, <code>b</code>, and <code>c</code>, I want to link <em>all</em> these locations and food types against each other:</p>
<pre><code> location food
A a
A b
A c
B a
B b
B c
C a
C b
</code></pre>
<p>One - extremely inefficient - way of doing so is</p>
<pre><code>import itertools
def foo(group):
list1 = group.location.unique()
list2 = group.food.unique()
return pd.DataFrame(data=list(itertools.product(list1, list2)), columns=['location', 'food'])
df.groupby(df.individual).apply(foo)
</code></pre>
<p>Is there any better way to get this done?</p>
| 0 | 2016-09-28T08:43:39Z | 39,744,079 | <p>You can pick up some efficiency by using numpy's <code>meshgrid</code>. </p>
<pre><code>import itertools
import numpy as np
def foo(group):
list1 = group.location.unique()
list2 = group.food.unique()
return pd.DataFrame(data=list(itertools.product(list1, list2)), columns=['location', 'food'])
def bar(group):
list1 = group.location.unique()
list2 = group.food.unique()
product = np.meshgrid(list1, list2)
# reversing the order is necessary to get the same output as foo
list3 = np.dstack([product[1], product[0]]).reshape(-1, 2)
return pd.DataFrame(data=list3, columns=['location', 'food'])
</code></pre>
<p>On my machine there was a small (~20%) speedup</p>
<pre><code>In [66]: %timeit df.groupby(df.individual).apply(foo)
100 loops, best of 3: 2.57 ms per loop
In [67]: %timeit df.groupby(df.individual).apply(bar)
100 loops, best of 3: 2.16 ms per loop
</code></pre>
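<p>For a sanity check without pandas or numpy at all, the same per-individual cross product can be sketched with stdlib <code>itertools</code> on the question's sample data:</p>

```python
import itertools

rows = [(1, 'A', 'a'), (1, 'A', 'b'), (1, 'B', 'a'),
        (1, 'A', 'c'), (2, 'C', 'a'), (2, 'C', 'b')]

# collect the unique locations and foods seen per individual
by_ind = {}
for ind, loc, food in rows:
    locs, foods = by_ind.setdefault(ind, (set(), set()))
    locs.add(loc)
    foods.add(food)

# cross product per individual, like foo()/bar() above, then merge
links = sorted(set().union(*[
    set(itertools.product(locs, foods)) for locs, foods in by_ind.values()
]))
print(links)
# [('A', 'a'), ('A', 'b'), ('A', 'c'), ('B', 'a'), ('B', 'b'),
#  ('B', 'c'), ('C', 'a'), ('C', 'b')]
```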
| 2 | 2016-09-28T09:56:19Z | [
"python",
"pandas"
]
|
Python check if any part of string in list | 39,742,438 | <p>I have a list containing synonyms for the word 'Good' (this list here is shortened)</p>
<pre><code>good_synonym = ['Good','good','Well','well']
</code></pre>
<p>And the program asks how the user is feeling</p>
<pre><code>print('Hello, ' + name + ', How are you?')
status = raw_input('')
</code></pre>
<p>But sometimes, the user may respond to the question with "I am good" (or similar)</p>
<p>If the answer contains a word in the good synonym list, I want the program to reply</p>
<pre><code>if status contains a word in good_synonym:
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>(note that the first line is not real Python)</p>
<p>But I don't know which phrase to use to do the action.</p>
| 0 | 2016-09-28T08:50:46Z | 39,742,512 | <p>Instead of a list with mixed-case words, use <a href="https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset" rel="nofollow"><em>set</em> objects</a>; sets make membership testing and intersection testing much easier. Store <em>lowercase text</em> only, and simply lowercase the input string:</p>
<pre><code>good_synonym = {'good', 'well'}
# good_synonym = set(['good', 'well']) # Python 2.6
</code></pre>
<p>Now test if the input string, lowercased and split on whitespace, is a <a href="https://en.wikipedia.org/wiki/Disjoint_sets" rel="nofollow"><em>disjoint set</em></a> with <a href="https://docs.python.org/3/library/stdtypes.html#set.isdisjoint" rel="nofollow"><code>set.isdisjoint()</code></a>. If it is not a disjoint set, there is overlap between the two sets and that means at least <code>'good'</code> or <code>'well'</code> is present:</p>
<pre><code>if not good_synonym.isdisjoint(status.lower().split()):
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>Testing if a set is disjoint is efficient; it only has to test words up to the first one that is in the <code>good_synonym</code> set to return <code>False</code> quickly. You could calculate the intersection instead, but that would always test <em>all</em> words in the status to build a new set object.</p>
<p>Other solutions you may have seen, use the <a href="https://docs.python.org/3/library/functions.html#any" rel="nofollow"><code>any()</code> function</a>; given a <a href="https://docs.python.org/3/tutorial/classes.html#generator-expressions" rel="nofollow">generator expression</a> it too can be efficient as it would return <code>True</code> early if any of the outputs is true:</p>
<pre><code>if any(word in good_synonym for word in status.lower().split()):
</code></pre>
<p>This, however, does all the looping and testing in Python code, while <code>set.isdisjoint()</code> is implemented entirely in C code.</p>
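<p>A quick demonstration of the disjoint test on two sample replies:</p>

```python
good_synonym = {'good', 'well'}

for status in ('I am GOOD', 'Feeling terrible'):
    words = status.lower().split()
    if not good_synonym.isdisjoint(words):
        print('That is good')
    else:
        print('That is not so good')
# That is good
# That is not so good
```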
| 7 | 2016-09-28T08:54:06Z | [
"python"
]
|
Python check if any part of string in list | 39,742,438 | <p>I have a list containing synonyms for the word 'Good' (this list here is shortened)</p>
<pre><code>good_synonym = ['Good','good','Well','well']
</code></pre>
<p>And the program asks how the user is feeling</p>
<pre><code>print('Hello, ' + name + ', How are you?')
status = raw_input('')
</code></pre>
<p>But sometimes, the user may respond to the question with "I am good" (or similar)</p>
<p>If the answer contains a word in the good synonym list, I want the program to reply</p>
<pre><code>if status contains a word in good_synonym:
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>(note that the first line is not real Python)</p>
<p>But I don't know which phrase to use to do the action.</p>
| 0 | 2016-09-28T08:50:46Z | 39,742,553 | <p>There are many ways you could try to do this. Since you are a beginner, let's just go for something that will work - efficiency should NOT be your first consideration.</p>
<pre><code>status = status.split() # breaks response into words
if any(s in good_synonyms for s in status):
print('That is good')
</code></pre>
<p>Of course it won't stop your program from acting as though "not good" is a reply deserving a happy answer, but this is a programming site.</p>
| 1 | 2016-09-28T08:56:14Z | [
"python"
]
|
Python check if any part of string in list | 39,742,438 | <p>I have a list containing synonyms for the word 'Good' (this list here is shortened)</p>
<pre><code>good_synonym = ['Good','good','Well','well']
</code></pre>
<p>And the program asks how the user is feeling</p>
<pre><code>print('Hello, ' + name + ', How are you?')
status = raw_input('')
</code></pre>
<p>But sometimes, the user may respond to the question with "I am good" (or similar)</p>
<p>If the answer contains a word in the good synonym list, I want the program to reply</p>
<pre><code>if status contains a word in good_synonym:
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>(note that the first line is not real Python)</p>
<p>But I don't know which phrase to use to do the action.</p>
| 0 | 2016-09-28T08:50:46Z | 39,742,645 | <p>This is an NLP question; the following code is a simple version of synonym detection:</p>
<pre><code>def is_contains_synonym(sentence, synonym):
token = sentence.split(' ')
return len(filter(lambda x: x in synonym, token)) > 0
if is_contains_synonym(status, good_synonym):
print ('That is good')
else:
print ('That is not so good')
</code></pre>
| -1 | 2016-09-28T09:00:37Z | [
"python"
]
|
Python check if any part of string in list | 39,742,438 | <p>I have a list containing synonyms for the word 'Good' (this list here is shortened)</p>
<pre><code>good_synonym = ['Good','good','Well','well']
</code></pre>
<p>And the program asks how the user is feeling</p>
<pre><code>print('Hello, ' + name + ', How are you?')
status = raw_input('')
</code></pre>
<p>But sometimes, the user may respond to the question with "I am good" (or similar)</p>
<p>If the answer contains a word in the good synonym list, I want the program to reply</p>
<pre><code>if status contains a word in good_synonym:
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>(note that the first line is not real Python)</p>
<p>But I don't know which phrase to use to do the action.</p>
| 0 | 2016-09-28T08:50:46Z | 39,743,032 | <p>Simple!</p>
<p>We can iterate over the good_synonyms list and check if <em>any</em> of them are present in the input string.</p>
<pre><code>if any(synonym in status for synonym in good_synonyms):
print('That is good')
else:
print('That is not so good')
</code></pre>
<p>PS: To save memory, you could perhaps store the synonyms only in lower-case, as ['good', 'well'], and when you check if these are in the 'status' variable, you could just apply the .lower() on it, which just converts the entire string into lower-case, as:</p>
<pre><code>good_synonyms = ['good', 'well']
if any(synonym in status.lower() for synonym in good_synonyms):
print('That is good')
</code></pre>
<p>Hope this helps!</p>
<p><strong>Note</strong>: holdenweb's answer works too, but applying the split function on status isn't really required as you can check whether a word is present in a string(provided the words in the string are separated by a space) or not using the 'in' keyword as described above.</p>
| 0 | 2016-09-28T09:17:29Z | [
"python"
]
|
Python check if any part of string in list | 39,742,438 | <p>I have a list containing synonyms for the word 'Good' (this list here is shortened)</p>
<pre><code>good_synonym = ['Good','good','Well','well']
</code></pre>
<p>And the program asks how the user is feeling</p>
<pre><code>print('Hello, ' + name + ', How are you?')
status = raw_input('')
</code></pre>
<p>But sometimes, the user may respond to the question with "I am good" (or similar)</p>
<p>If the answer contains a word in the good synonym list, I want the program to reply</p>
<pre><code>if status contains a word in good_synonym:
print ('That is good')
else:
print ('That is not so good')
</code></pre>
<p>(note that the first line is not real Python)</p>
<p>But I don't know which phrase to use to do the action.</p>
| 0 | 2016-09-28T08:50:46Z | 39,743,088 | <p>A short and simple solution would be to use regular expressions for pattern matching like this:</p>
<pre><code>import re
line = "it is good"
good_synonyms = ["good","well"]
line = line.lower()
if any(re.search(synonym,line) for synonym in good_synonyms):
print "That is good"
else:
print "nope"
</code></pre>
<p>The <code>search</code> function of <code>re</code> looks for a match of pattern anywhere in the string and returns a boolean which then can be used in an if statement with <code>any</code></p>
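<p>One caveat with plain <code>re.search</code>: it matches substrings too, so a line like "say goodbye" would still count as containing "good". Word boundaries (<code>\b</code>) restrict it to whole words — a small sketch:</p>

```python
import re

line = "say goodbye"
print(bool(re.search("good", line)))        # True  (substring match)
print(bool(re.search(r"\bgood\b", line)))   # False (whole word only)
```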
| -1 | 2016-09-28T09:19:07Z | [
"python"
]
|
Database migration from Sybase to MySQL: " error calling Python module function DbSQLAnywhereRE.reverseEngineer" | 39,742,624 | <p>I'm trying to migrate a database from <a href="https://en.wikipedia.org/wiki/Sybase" rel="nofollow">Sybase</a> to <a href="http://en.wikipedia.org/wiki/MySQL" rel="nofollow">MySQL</a> with the <a href="https://en.wikipedia.org/wiki/MySQL_Workbench" rel="nofollow">MySQL Workbench</a> migration tool.</p>
<p>I have no problem connecting the datasource and the target database, but when it starts migrating I get the following issue from the log message.</p>
<blockquote>
<p>Starting...<br>
Connect to source DBMS...<br>
- Connecting...<br>
Connect to source DBMS done<br>
Reverse engineer selected schemas....<br>
Reverse engineering DBA, SYS, dbo, rs_systabgroup from corsi<br>
- Reverse engineering catalog information<br>
- Preparing...<br>
Traceback (most recent call last):<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 <br>CE\modules\db_sqlanywhere_re_grt.py", line 489, in reverseEngineer<br>
return SQLAnywhereReverseEngineering.reverseEngineer(connection, catalog_name, schemata_list, context)<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\db_sqlanywhere_re_grt.py", line 169, in reverseEngineer
catalog = super(SQLAnywhereReverseEngineering, cls).reverseEngineer(connection, '', schemata_list, context)<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\db_generic_re_grt.py", line 258, in reverseEngineer
table_count_per_schema[schema_name] = len(cls.getTableNames(connection, catalog_name, schema_name)) if get_tables else 0<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\db_sqlanywhere_re_grt.py", line 41, in wrapped_method
res = method(cls, connection, *args)<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\db_sqlanywhere_re_grt.py", line 145, in getTableNames
return [row[0] for row in cls.execute_query(connection, query)]<br>
File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\db_generic_re_grt.py", line 76, in execute_query
return cls.get_connection(connection_object).cursor().execute(query, *args, **kwargs)
pyodbc.ProgrammingError: ('42S02', "[42S02] [Sybase][ODBC Driver][Adaptive Server Anywhere]Table or view not found: Table 'SYSTAB' not found (-141) (SQLExecDirectW)")<br></p>
<p>Traceback (most recent call last):<br> File "C:\Program Files
(x86)\MySQL\MySQL Workbench 6.3
CE\workbench\wizard_progress_page_widget.py", line 192, in thread_work
self.func()<br> File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3 CE\modules\migration_schema_selection.py", line 175, in
task_reveng
self.main.plan.migrationSource.reverseEngineer()<br> File "C:\Program Files (x86)\MySQL\MySQL Workbench 6.3
CE\modules\migration.py", line 369, in reverseEngineer
self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection,
self.selectedCatalogName, self.selectedSchemataNames,
self.state.applicationData) SystemError: ProgrammingError("('42S02',
"[42S02] [Sybase][ODBC Driver][Adaptive Server Anywhere]Table or view
not found: Table 'SYSTAB' not found (-141) (SQLExecDirectW)")"): error
calling Python module function DbSQLAnywhereRE.reverseEngineer<br>
ERROR: Reverse engineer selected schemas: ProgrammingError("('42S02',
"[42S02] [Sybase][ODBC Driver][Adaptive Server Anywhere]Table or view
not found: Table 'SYSTAB' not found (-141) (SQLExecDirectW)")"): error
calling Python module function DbSQLAnywhereRE.reverseEngineer<br>
Failed</p>
</blockquote>
<p>How do I solve this issue?</p>
| 2 | 2016-09-28T08:59:34Z | 39,786,003 | <p>That's because of SQL to get table names. Look at db_sqlanywhere_re_grt.py:142, there is:</p>
<pre><code>SELECT st.table_name
FROM SYSTAB st LEFT JOIN SYSUSER su ON st.creator=su.user_id
WHERE su.user_name = '%s' AND st.table_type = 1
</code></pre>
<p>This is valid sql for sql anywhere version < 10, and I guess you have newest version. So you can edit and update that file with compatible SQL.
Keep in mind there is much more SQL's that also need to be updated in that file.
In meantime, please fill the bug report at <a href="http://bugs.mysql.com" rel="nofollow">bugs.mysql.com</a>.</p>
| 0 | 2016-09-30T07:29:12Z | [
"python",
"mysql",
"mysql-workbench",
"database-migration"
]
|
Sort tenors in finance notation | 39,742,758 | <p>I have an array of tenors</p>
<pre><code>Tenors = np.array(['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '5Y', '6M', '9M'])
</code></pre>
<p>where <code>M</code> stands for month and <code>Y</code> stands for years. The correctly sorted order (ascending) would then be</p>
<pre><code>['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
</code></pre>
<p>How do I achieve that using python with scipy/numpy? As the <code>tenors</code> originate from a <code>pandas</code> dataframe a solution based on <code>pandas</code> would be fine as well.</p>
| 1 | 2016-09-28T09:05:22Z | 39,742,934 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow"><code>str.extract</code></a> for parsing numbers and values, then convert to <code>int</code> and <code>categories</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a>:</p>
<pre><code>df = pd.DataFrame({'a':Tenors})
df[['b','c']] = df.a.str.extract("(\d+)([MY])", expand=True)
df.b = df.b.astype(int)
df.c = df.c.astype('category', ordered=True, categories=['M','Y'])
df = df.sort_values(['c','b'])
print (df)
a b c
2 1M 1 M
7 3M 3 M
9 6M 6 M
10 9M 9 M
3 1Y 1 Y
5 2Y 2 Y
8 5Y 5 Y
0 10Y 10 Y
1 15Y 15 Y
4 20Y 20 Y
6 30Y 30 Y
print (df.a.tolist())
['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
</code></pre>
| 1 | 2016-09-28T09:13:02Z | [
"python",
"sorting",
"pandas",
"numpy",
"finance"
]
|
Sort tenors in finance notation | 39,742,758 | <p>I have an array of tenors</p>
<pre><code>Tenors = np.array(['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '5Y', '6M', '9M'])
</code></pre>
<p>where <code>M</code> stands for month and <code>Y</code> stands for years. The correctly sorted order (ascending) would then be</p>
<pre><code>['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
</code></pre>
<p>How do I achieve that using python with scipy/numpy? As the <code>tenors</code> originate from a <code>pandas</code> dataframe a solution based on <code>pandas</code> would be fine as well.</p>
| 1 | 2016-09-28T09:05:22Z | 39,742,986 | <pre><code>print sorted(Tenors, key=lambda t: (t[-1], int(t[:-1])))
</code></pre>
<p>Sorts by the last character and then by the integer value up to the last character</p>
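<p>A quick check of that key on the question's data (one limitation worth noting: because the unit letter is compared first, a mixed value like <code>'18M'</code> would sort before <code>'1Y'</code> even though 18 months is longer than 1 year):</p>

```python
tenors = ['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '5Y', '6M', '9M']
ordered = sorted(tenors, key=lambda t: (t[-1], int(t[:-1])))
print(ordered)
# ['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
```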
| 1 | 2016-09-28T09:15:29Z | [
"python",
"sorting",
"pandas",
"numpy",
"finance"
]
|
Sort tenors in finance notation | 39,742,758 | <p>I have an array of tenors</p>
<pre><code>Tenors = np.array(['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '5Y', '6M', '9M'])
</code></pre>
<p>where <code>M</code> stands for month and <code>Y</code> stands for years. The correctly sorted order (ascending) would then be</p>
<pre><code>['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
</code></pre>
<p>How do I achieve that using python with scipy/numpy? As the <code>tenors</code> originate from a <code>pandas</code> dataframe a solution based on <code>pandas</code> would be fine as well.</p>
| 1 | 2016-09-28T09:05:22Z | 39,743,085 | <p><strong>Approach #1</strong> Here's a NumPy based approach using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.core.defchararray.replace.html#numpy.core.defchararray.replace" rel="nofollow"><code>np.core.defchararray.replace</code></a> -</p>
<pre><code>repl = np.core.defchararray.replace
out = Tenors[repl(repl(Tenors,'M','00'),'Y','0000').astype(int).argsort()]
</code></pre>
<hr>
<p><strong>Approach #2</strong> If you are working with strings like <code>'18M'</code>, we need to do a bit more of work, like so -</p>
<pre><code>def generic_case_vectorized(Tenors):
# Get shorter names for functions
repl = np.core.defchararray.replace
isalph = np.core.defchararray.isalpha
# Get scaling values
TS1 = Tenors.view('S1')
scale = repl(repl(TS1[isalph(TS1)],'Y','12'),'M','1').astype(int)
# Get the numeric values
vals = repl(repl(Tenors,'M',''),'Y','').astype(int)
# Finally scale numeric values and use sorted indices for sorting input arr
return Tenors[(scale*vals).argsort()]
</code></pre>
<p><strong>Approach #3</strong> Here's another approach, though a loopy one to again handle generic cases -</p>
<pre><code>def generic_case_loopy(Tenors):
arr = np.array([[i[:-1],i[-1]] for i in Tenors])
return Tenors[(arr[:,0].astype(int)*((arr[:,1]=='Y')*11+1)).argsort()]
</code></pre>
<p>Sample run -</p>
<pre><code>In [84]: Tenors
Out[84]:
array(['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '25M', '5Y',
'6M', '18M'],
dtype='|S3')
In [85]: generic_case_vectorized(Tenors)
Out[85]:
array(['1M', '3M', '6M', '1Y', '18M', '2Y', '25M', '5Y', '10Y', '15Y',
'20Y', '30Y'],
dtype='|S3')
In [86]: generic_case_loopy(Tenors)
Out[86]:
array(['1M', '3M', '6M', '1Y', '18M', '2Y', '25M', '5Y', '10Y', '15Y',
'20Y', '30Y'],
dtype='|S3')
</code></pre>
| 1 | 2016-09-28T09:18:58Z | [
"python",
"sorting",
"pandas",
"numpy",
"finance"
]
|
Sort tenors in finance notation | 39,742,758 | <p>I have an array of tenors</p>
<pre><code>Tenors = np.array(['10Y', '15Y', '1M', '1Y', '20Y', '2Y', '30Y', '3M', '5Y', '6M', '9M'])
</code></pre>
<p>where <code>M</code> stands for month and <code>Y</code> stands for years. The correctly sorted order (ascending) would then be</p>
<pre><code>['1M', '3M', '6M', '9M', '1Y', '2Y', '5Y', '10Y', '15Y', '20Y', '30Y']
</code></pre>
<p>How do I achieve that using python with scipy/numpy? As the <code>tenors</code> originate from a <code>pandas</code> dataframe a solution based on <code>pandas</code> would be fine as well.</p>
| 1 | 2016-09-28T09:05:22Z | 39,752,239 | <p>I opted for the long solution since I needed <code>convert_tenors</code> anyway. This also solves <a href="http://stackoverflow.com/questions/39742758/sort-tenors-in-finance-notation/39752239#comment66797667_39742758">Jim's objection</a>.</p>
<pre><code>import scipy
def convert_tenors(tenors):
#convert tenors to years
new_tenors = scipy.zeros_like(tenors,dtype=float)
for i,o in enumerate(tenors):
if(o[-1]=='M'):
new_tenors[i] = int(o[:-1])/12
else:
new_tenors[i] = int(o[:-1])
return new_tenors
def sort_tenors(tenors):
#sort tenors in ascending order
idx = scipy.argsort(convert_tenors(tenors))
return tenors[idx]
Tenors = scipy.array(['10Y', '15Y', '1M', '1Y', '20Y', '18M', '2Y', '30Y', '3M', '5Y', '6M', '9M'])
print(sort_tenors(Tenors))
</code></pre>
<p>returns</p>
<pre><code>['1M' '3M' '6M' '9M' '1Y' '18M' '2Y' '5Y' '10Y' '15Y' '20Y' '30Y']
</code></pre>
| 0 | 2016-09-28T15:51:15Z | [
"python",
"sorting",
"pandas",
"numpy",
"finance"
]
|
Calculating weighted moving average using pandas Rolling method | 39,742,797 | <p>I calculate simple moving average:</p>
<pre><code>def sma(data_frame, length=15):
# TODO: Be sure about default values of length.
smas = data_frame.Close.rolling(window=length, center=False).mean()
return smas
</code></pre>
<p>Using the rolling function, is it possible to calculate a weighted moving average? As I read <a href="http://pandas.pydata.org/pandas-docs/stable/computation.html#rolling-windows" rel="nofollow">in the documentation</a>, I think that I have to pass the <strong>win_type</strong> parameter. But I'm not sure which one I have to choose.</p>
<p>Here is a <a href="http://www.investopedia.com/ask/answers/071414/whats-difference-between-moving-average-and-weighted-moving-average.asp" rel="nofollow">definition</a> for weighted moving average.</p>
<p>Thanks in advance,</p>
| 0 | 2016-09-28T09:07:06Z | 39,744,105 | <p>Yeah, that part of pandas really isn't very well documented. I think you might have to use rolling.apply() if you aren't using one of the standard window types. I poked at it and got this to work:</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>> d = pd.DataFrame({'a':range(10), 'b':np.random.random(size=10)})
>>> d.b = d.b.round(2)
>>> d
a b
0 0 0.28
1 1 0.70
2 2 0.28
3 3 0.99
4 4 0.72
5 5 0.43
6 6 0.71
7 7 0.75
8 8 0.61
9 9 0.14
>>> wts = np.array([-1, 2])
>>> def f(w):
def g(x):
return (w*x).mean()
return g
>>> d.rolling(window=2).apply(f(wts))
a b
0 NaN NaN
1 1.0 0.560
2 1.5 -0.070
3 2.0 0.850
4 2.5 0.225
5 3.0 0.070
6 3.5 0.495
7 4.0 0.395
8 4.5 0.235
9 5.0 -0.165
</code></pre>
<p>I think that is correct. The reason for the closure there is that the signature for rolling.apply is <code>rolling.apply(func, *args, **kwargs)</code>, so the weights get tuple-unpacked if you just send them to the function directly, unless you send them as a 1-tuple <code>(wts,)</code>, but that's weird.</p>
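<p>If pandas isn't strictly required, a normalized weighted moving average (matching the linked definition — note this differs from the un-normalized <code>(w*x).mean()</code> above) can be sketched with plain NumPy:</p>

```python
import numpy as np

def wma(x, weights):
    """Weighted moving average; the newest point in each window gets weights[-1]."""
    w = np.asarray(weights, dtype=float)
    # np.convolve flips the kernel, so reverse the weights to align them in time
    return np.convolve(x, w[::-1], mode='valid') / w.sum()

x = np.array([1.0, 2.0, 3.0, 4.0])
print(wma(x, [1, 2]))  # approx [1.667, 2.667, 3.667]
```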
| 1 | 2016-09-28T09:57:29Z | [
"python",
"pandas",
"moving-average",
"weighted-average",
"technical-indicator"
]
|
How to limit number of digits after decimal point in python random | 39,742,893 | <p>I am using this code: <code>newAgent = [random.gauss(0, 1), random.gauss(0, 1)]</code>
and I want to limit the number of digits after the decimal point. Right now it is about 20 digits.</p>
| -5 | 2016-09-28T09:11:42Z | 39,800,725 | <p>The answer is quite simple. Just use <code>round</code> and specify the number of decimal places as the second argument. In this example, I rounded to 5 decimal places.</p>
<pre><code>import random
newAgent = [random.gauss(0, 1), random.gauss(0, 1)]
print(round(newAgent[0], 5), round(newAgent[1], 5))
</code></pre>
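<p>One caveat: <code>round</code> returns a float, so trailing zeros are not kept. If only the displayed text matters, string formatting keeps a fixed number of places — a small sketch:</p>

```python
x = 0.123456789
print(round(x, 5))           # 0.12346
print("{:.5f}".format(x))    # 0.12346
print("{:.5f}".format(0.5))  # 0.50000  (round(0.5, 5) would just show 0.5)
```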
| 0 | 2016-09-30T22:06:13Z | [
"python"
]
|
Python keeps crashing when I'm deploying with google app engine | 39,743,002 | <p>I'm running a static server on google app engine. I have been able to deploy a couple of times, but now it keeps crashing and it says</p>
<p><strong>Python quit unexpectedly.</strong></p>
<p>You can find my log on this URL: <a href="https://gist.github.com/rickbrunstedt/39a949016ca8bae46bad07395175a3e5" rel="nofollow">https://gist.github.com/rickbrunstedt/39a949016ca8bae46bad07395175a3e5</a></p>
<p>the short version:</p>
<pre><code>105 org.python.python 0x00000001000eef7a
PyRun_SimpleFileExFlags + 458
106 org.python.python 0x000000010010614d Py_Main + 3101
107 org.python.python 0x0000000100000f14 0x100000000 + 3860
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x00007fff5fbf7600 rbx: 0x00000001025188a0 rcx: 0x0030000000000203 rdx: 0x00007fff5fbf7630
rdi: 0x0000000104940830 rsi: 0x00000001025188a0 rbp: 0x00007fff5fbf75a0 rsp: 0x00007fff5fbf7568
r8: 0x0000000000000001 r9: 0x0000000000000048 r10: 0x00000000ffffffff r11: 0xffffffff846a3210
r12: 0x00007ffff287bf68 r13: 0x00007fffe9dbc94a r14: 0x00007fff5fbf7630 r15: 0x00000001025188b0
rip: 0x00007fffe9dbc94e rfl: 0x0000000000010206 cr2: 0x0000000104940832
Logical CPU: 2
Error Code: 0x00000004
Trap Number: 14
Binary Images:
0x100000000 - 0x100000fff +org.python.python (2.7.12 - 2.7.12) <C2055A43-D803-3CC5-FFF0-363A42F24F4E> /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
</code></pre>
<p>I'm not sure where to begin to be able to solve this problem.</p>
<p>UPDATE:
I think the problem is in the Python version on my machine.
Here is the log message where I found the crash, but it doesn't tell me much; it's just too advanced for me.
<a href="https://gist.github.com/rickbrunstedt/b0c6524b427e7f1b57561205bdcf146f" rel="nofollow">https://gist.github.com/rickbrunstedt/b0c6524b427e7f1b57561205bdcf146f</a></p>
<p>I tried to install the newer version of python, but that didn't work either.</p>
| 1 | 2016-09-28T09:16:15Z | 39,882,370 | <p>It appears that this might be related to <a href="https://code.google.com/p/google-cloud-sdk/issues/detail?id=1168" rel="nofollow">a known issue in the gcloud Public Issue Tracker</a>. Please follow the thread there for updates, and attempt the suggested workaround: </p>
<pre><code>gcloud config set app/num_file_upload_processes 1
</code></pre>
| 1 | 2016-10-05T19:42:49Z | [
"python",
"google-app-engine",
"gcloud"
]
|
Sphinx matlab documentation error: missing module 'std' | 39,743,024 | <p>I'm trying to document my MATLAB classes using sphinx. But whenever I want to run <code>make html</code> I get the following error:</p>
<pre><code>% make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.4.6
Extension error:
Could not import extension sphinxcontrib.matlab (exception: No module named 'std')
make: *** [Makefile:53: html] Error 1
</code></pre>
<p>I'm on ArchLinux and tried the following installation ways, but all result in the same problem:</p>
<p>Try 1:</p>
<pre><code>yaourt -S python-sphinx # (was already installed by default, just to show that the package came from arch repo)
sudo pip install -U sphinxcontrib-matlabdomain
</code></pre>
<p>Try 2:</p>
<pre><code>yaourt -R python-sphinx # (I also removed all dependencies)
sudo pip -U install sphinx
sudo pip -U install -U sphinxcontrib-matlabdomain
</code></pre>
<p>In neither of the cases it worked (always the error from above). In each try I also verified that the <code>std</code> module is there via</p>
<pre><code># ll /usr/lib/python3.5/site-packages/sphinxcontrib*
-rw-r--r-- 1 root root 326 Sep 28 11:02 /usr/lib/python3.5/site-packages/sphinxcontrib_blockdiag-1.5.5-py2.7-nspkg.pth
-rw-r--r-- 1 root root 326 Sep 28 11:00 /usr/lib/python3.5/site-packages/sphinxcontrib_matlabdomain-0.2.7-py3.5-nspkg.pth
/usr/lib/python3.5/site-packages/sphinxcontrib:
total 152
-rw-r--r-- 1 root root 11457 Sep 28 11:02 blockdiag.py
-rw-r--r-- 1 root root 37815 Jun 20 2015 mat_documenters.py
-rw-r--r-- 1 root root 27529 Oct 7 2014 matlab.py
-rw-r--r-- 1 root root 46088 Jun 20 2015 mat_types.py
drwxr-xr-x 1 root root 126 Sep 28 11:03 __pycache__
-rw-r--r-- 1 root root 22278 Feb 7 2014 std.py
/usr/lib/python3.5/site-packages/sphinxcontrib_blockdiag-1.5.5.dist-info:
total 32
-rw-r--r-- 1 root root 1033 Sep 28 11:02 DESCRIPTION.rst
-rw-r--r-- 1 root root 4 Sep 28 11:03 INSTALLER
-rw-r--r-- 1 root root 2127 Sep 28 11:02 METADATA
-rw-r--r-- 1 root root 1193 Sep 28 11:02 metadata.json
-rw-r--r-- 1 root root 14 Sep 28 11:02 namespace_packages.txt
-rw-r--r-- 1 root root 1054 Sep 28 11:03 RECORD
-rw-r--r-- 1 root root 14 Sep 28 11:02 top_level.txt
-rw-r--r-- 1 root root 110 Sep 28 11:02 WHEEL
/usr/lib/python3.5/site-packages/sphinxcontrib_matlabdomain-0.2.7-py3.5.egg-info:
total 40
-rw-r--r-- 1 root root 1 Sep 28 11:00 dependency_links.txt
-rw-r--r-- 1 root root 487 Sep 28 11:00 installed-files.txt
-rw-r--r-- 1 root root 14 Sep 28 11:00 namespace_packages.txt
-rw-r--r-- 1 root root 1 Jun 20 2015 not-zip-safe
-rw-r--r-- 1 root root 8547 Sep 28 11:00 PKG-INFO
-rw-r--r-- 1 root root 28 Sep 28 11:00 requires.txt
-rw-r--r-- 1 root root 549 Sep 28 11:00 SOURCES.txt
-rw-r--r-- 1 root root 14 Sep 28 11:00 top_level.txt
</code></pre>
<p>P.S.: my default python has version 3.5.2</p>
<p>Edit 1:</p>
<pre><code>% head $(which sphinx-build)
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from sphinx import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
% which sphinx-build
/usr/bin/sphinx-build
% sphinx-build --version
Sphinx (sphinx-build) 1.4.6
% python --version
Python 3.5.2
% /usr/bin/python --version
Python 3.5.2
</code></pre>
| 1 | 2016-09-28T09:16:58Z | 39,800,016 | <p>The extensions in the sphinx-contrib repository seem to be meant for Python 2. The import rules (implicit relative imports) changed in Python 3, therefore such errors can occur when Python 2 code is run with a Python 3 interpreter.</p>
<p>The solution is to install Sphinx and all its dependencies for Python 2. Your distribution might have <code>python2-sphinx</code>. On Fedora and Ubuntu the packages <code>python-*</code> are always Python 2 or both, <code>python3-*</code> is the Python 3 package.</p>
<p>On Arch Linux I know that <code>python</code> is symlinked to <code>python3</code>. So there might be additional <code>python2-*</code> packages. Install <code>pip2</code> (because <code>pip</code> is likely to be <code>pip3</code> on Arch Linux) and use that to install Sphinx.</p>
<p>You can find out which interpreter is called by running <code>head -n 1 $(which sphinx-build)</code> and then check the path.</p>
<ul>
<li><code>/usr/bin/python</code>: That is Python 2 on Ubuntu or Fedora, Python 3 on Arch Linux</li>
<li><code>/usr/bin/python3</code>: Definitely Python 3</li>
<li><code>/usr/bin/python2</code>: Definitely Python 2</li>
<li><code>/usr/bin/env python</code>: Similar to the others.</li>
</ul>
<p>Otherwise one could do a <code>print</code> of the Python version inside of the <code>conf.py</code> such that it is printed explicitly.</p>
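<p>For instance, a minimal (hypothetical) snippet that could be dropped near the top of <code>conf.py</code> to make the interpreter explicit:</p>

```python
# Hypothetical addition to a Sphinx conf.py: report which interpreter
# is executing the configuration (and therefore the build).
import sys

interpreter_note = "conf.py is being executed by Python %d.%d (%s)" % (
    sys.version_info[0], sys.version_info[1], sys.executable)
print(interpreter_note)
```

<p>If that prints a Python 3 interpreter while the extension needs Python 2, the mismatch is confirmed.</p>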
| 1 | 2016-09-30T20:59:40Z | [
"python",
"matlab",
"python-3.x",
"python-sphinx"
]
|
How to get quotas of AWS EC2 via boto3? | 39,743,071 | <p>I'm working with boto3, the AWS SDK for Python.</p>
<p>How can I get the AWS service limits listed below via the boto3 library:</p>
<p><a href="http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html" rel="nofollow">http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html</a></p>
| 1 | 2016-09-28T09:18:50Z | 39,743,928 | <p>Why not use <a href="https://awslimitchecker.readthedocs.io/en/latest/" rel="nofollow">awslimitchecker</a>?
I use it to create a report every morning and send it to a Slack group, where I monitor our current limits. It works really well.</p>
| 0 | 2016-09-28T09:51:38Z | [
"python",
"amazon-web-services",
"boto3"
]
|
How to get quotas of AWS EC2 via boto3? | 39,743,071 | <p>I'm working on boto3 - SDK python for AWS.</p>
<p>How can I get AWS Service Limits via boto3 library like bellow:</p>
<p><a href="http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html" rel="nofollow">http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html</a></p>
| 1 | 2016-09-28T09:18:50Z | 39,744,135 | <p>Some of those limits can be queried through <a href="http://boto3.readthedocs.io/en/latest/reference/services/ec2.html?highlight=describeaccountattributes#EC2.Client.describe_account_attributes" rel="nofollow">EC2.Client.describe_account_attributes()</a></p>
<blockquote>
<p><code>describe_account_attributes(**kwargs)</code></p>
<p>Describes attributes of your
AWS account. The following are the supported account attributes:</p>
<p>...</p>
<ul>
<li><p><code>max-instances</code> : The maximum number of On-Demand instances that you can
run.</p></li>
<li><p><code>vpc-max-security-groups-per-interface</code> : The maximum number of
security groups that you can assign to a network interface.</p></li>
<li><p><code>max-elastic-ips</code> : The maximum number of Elastic IP addresses that you
can allocate for use with EC2-Classic.</p></li>
<li><p><code>vpc-max-elastic-ips</code> : The
maximum number of Elastic IP addresses that you can allocate for use
with EC2-VPC.</p></li>
</ul>
</blockquote>
| 2 | 2016-09-28T09:58:41Z | [
"python",
"amazon-web-services",
"boto3"
]
|
how to obtain your own channel ID on youtube, using python and youtube API? | 39,743,136 | <p>I was wondering how you can obtain your own channel ID using the YouTube API, or print a list of the channel IDs for your user, since you can have multiple channels on one account. (I'm using client_secrets.)</p>
<p>I've been watching a lot of the YouTube documentation, but I'm not finding anything relevant for just this (maybe I'm wrong).</p>
<p>I was looking at this:
<a href="http://stackoverflow.com/questions/13572657/how-to-retrieve-a-channel-id-from-a-channel-name-or-url">How to retrieve a channel id from a channel name or url</a></p>
<p>That was a search across every channel, but there should be an easier solution for just your own user (tell me if I'm wrong).</p>
<p>And is this the right path to go?:</p>
<pre><code>channels_list = youtube.channels().list(
part="id",
mine=True
).execute()
channelID = channel_list["items"]["id"]
</code></pre>
<p>I'm going to use the channel ID to upload a specific video to the channel.
I hope someone can help!</p>
| 0 | 2016-09-28T09:21:02Z | 39,743,677 | <p>Judging from <a href="https://developers.google.com/youtube/v3/docs/channels/list" rel="nofollow">the docs</a>, I'd say you're on the right track.</p>
<pre><code>channels_list = youtube.channels().list(part='id', mine=True).execute()
</code></pre>
<p>Should return your owned channels, if you're sending an authenticated request (note that <code>part</code> is required and the request must be executed with <code>.execute()</code>).</p>
<p>You can then simply access the list directly by calling</p>
<pre><code>channels_list['items']
</code></pre>
<p>Note that the <code>ChannelItem</code> is a dict within a list, so you'll have to access the channel item's <code>index</code>, and then the <code>key</code></p>
<pre><code>channels_list['items'][0]['id']
</code></pre>
<p>If you'd like to get your channel ids in a single step, this might be what you're looking for:</p>
<pre><code>chan_ids = [chan['id'] for chan in youtube.channels().list(part='id', mine=True).execute()['items']]
</code></pre>
<p><a href="https://developers.google.com/youtube/v3/docs/channels" rel="nofollow">This section here</a> might be of help to you, as well.</p>
| 1 | 2016-09-28T09:42:16Z | [
"python",
"api",
"youtube",
"user",
"channel"
]
|
How to create a list of functions for call in python | 39,743,194 | <p>My question is like the one linked here, but I don't understand the answer.</p>
<p><a href="http://stackoverflow.com/questions/26881396/how-to-add-a-function-call-to-a-list#">How to add a function call to a list?</a></p>
<p>Say I have approximately 10 commands, each serving a specific purpose so they can't be modified, and I want to put them in a list without calling them.</p>
<pre><code>def print_hello():
print("hello")
command_list=[print_hello()]
</code></pre>
<p>This would only print <code>"hello"</code>, then leave <code>command_list</code> equal to <code>[None]</code></p>
<p>How would I get it so that when I type <code>command_list[0]</code>, it would perform <code>print_hello()</code>?</p>
| 0 | 2016-09-28T09:23:13Z | 39,743,358 | <p>If you want to add it to the list without calling them, just refrain from calling them:</p>
<pre><code>command_list=[print_hello]
</code></pre>
<p>At the time you want to call them, call them:</p>
<pre><code>command_list[0]()
</code></pre>
<p>If you want something to happen by just doing <code>command_list[0]</code>, you could subclass <code>list</code> and give it a</p>
<pre><code>def __getitem__(self, index):
item = list.__getitem__(self, index)
return item()
</code></pre>
<p>(untested). Then the item getting operation on the lists causes the function to be called.</p>
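<p>Put together, a runnable sketch of that idea (the class name <code>CallingList</code> and the return value are just illustrative):</p>

```python
class CallingList(list):
    """List subclass that calls the stored function when an item is indexed."""
    def __getitem__(self, index):
        item = list.__getitem__(self, index)
        return item()

def print_hello():
    print("hello")
    return "hello"

command_list = CallingList([print_hello])
result = command_list[0]   # invokes print_hello() and returns its value
```

<p>Here <code>command_list[0]</code> both prints <code>"hello"</code> and evaluates to the function's return value.</p>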
| 4 | 2016-09-28T09:29:43Z | [
"python",
"list"
]
|
Delete repeated columns of array keeping the order | 39,743,315 | <p>Is there a relatively simple way of removing repeated columns of a (numpy) array while keeping the order of the columns?</p>
<p>As an example, consider this array:</p>
<pre><code>a = np.array([[2, 1, 1, 3],
[2, 1, 1, 3]])
</code></pre>
<p>where I would like column three to be removed such that:</p>
<pre><code>a = np.array([[2, 1, 3],
[2, 1, 3]])
</code></pre>
| 1 | 2016-09-28T09:28:12Z | 39,743,586 | <p><strong>Approach #1</strong> Here's an approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p>
<pre><code>a[:,~np.triu((a[:,None,:] == a[...,None]).all(0),1).any(0)]
</code></pre>
<p>Sample run -</p>
<pre><code>In [115]: a
Out[115]:
array([[2, 1, 3, 5, 1, 3, 7],
[6, 5, 4, 6, 5, 4, 8]])
In [116]: a[:,~np.triu((a[:,None,:] == a[...,None]).all(0),1).any(0)]
Out[116]:
array([[2, 1, 3, 5, 7],
[6, 5, 4, 6, 8]])
</code></pre>
<p><strong>Explanation</strong></p>
<p>1) Input array -</p>
<pre><code>In [156]: a
Out[156]:
array([[2, 1, 3, 5, 1, 3, 7],
[6, 5, 4, 6, 5, 4, 8]])
</code></pre>
<p>2) Use broadcasting to perform elementwise equality comparison keeping the first axis aligned, which would correspond to the column axis from original 2D array -</p>
<pre><code>In [157]: a[:,None,:] == a[...,None]
Out[157]:
array([[[ True, False, False, False, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[False, False, False, True, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[False, False, False, False, False, False, True]],
[[ True, False, False, True, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[ True, False, False, True, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[False, False, False, False, False, False, True]]], dtype=bool)
</code></pre>
<p>3) Since we are looking for duplicate cols, let's look for ALL matches along the first axis -</p>
<pre><code>In [158]: (a[:,None,:] == a[...,None]).all(0)
Out[158]:
array([[ True, False, False, False, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[False, False, False, True, False, False, False],
[False, True, False, False, True, False, False],
[False, False, True, False, False, True, False],
[False, False, False, False, False, False, True]], dtype=bool)
</code></pre>
<p>4) We are looking to keep the first occurrence only, so we can use a upper triangular matrix to set all diagonal and lower triangular elems as <code>False</code> -</p>
<pre><code>In [163]: np.triu((a[:,None,:] == a[...,None]).all(0),1)
Out[163]:
array([[False, False, False, False, False, False, False],
[False, False, False, False, True, False, False],
[False, False, False, False, False, True, False],
[False, False, False, False, False, False, False],
[False, False, False, False, False, False, False],
[False, False, False, False, False, False, False],
[False, False, False, False, False, False, False]], dtype=bool)
</code></pre>
<p>5) Next up, we look for ANY matches along the first axis indicating the duplicates -</p>
<pre><code>In [164]: (np.triu((a[:,None,:] == a[...,None]).all(0),1)).any(0)
Out[164]: array([False, False, False, False, True, True, False], dtype=bool)
</code></pre>
<p>6) We are looking to remove those duplicates, so invert the mask -</p>
<pre><code>In [165]: ~(np.triu((a[:,None,:] == a[...,None]).all(0),1)).any(0)
Out[165]: array([ True, True, True, True, False, False, True], dtype=bool)
</code></pre>
<p>7) Finally, we index into the columns of input array with the mask for final output -</p>
<pre><code>In [166]: a[:,~(np.triu((a[:,None,:] == a[...,None]).all(0),1)).any(0)]
Out[166]:
array([[2, 1, 3, 5, 7],
[6, 5, 4, 6, 8]])
</code></pre>
<hr>
<p><strong>Approach #2</strong> With focus on memory efficiency and might even be faster, here's an approach considering each column as an indexing tuple -</p>
<pre><code>lidx = np.ravel_multi_index(a,a.max(1)+1)
out = a[:,np.sort(np.unique(lidx,return_index=1)[1])]
</code></pre>
<p><strong>Explanation</strong></p>
<p>1) Input array -</p>
<pre><code>In [203]: a
Out[203]:
array([[2, 1, 3, 5, 1, 3, 7],
[6, 5, 4, 6, 5, 4, 8]])
</code></pre>
<p>2) Calculate linear index equivalents for each column -</p>
<pre><code>In [207]: lidx = np.ravel_multi_index(a,a.max(1)+1)
In [208]: lidx
Out[208]: array([24, 14, 31, 51, 14, 31, 71])
</code></pre>
<p>3) Get the first occurence of each unique linear index</p>
<pre><code>In [209]: np.unique(lidx,return_index=1)[1]
Out[209]: array([1, 0, 2, 3, 6])
</code></pre>
<p>4) Sort those and index into cols of input array for final o/p -</p>
<pre><code>In [210]: np.sort(np.unique(lidx,return_index=1)[1])
Out[210]: array([0, 1, 2, 3, 6])
In [211]: a[:,np.sort(np.unique(lidx,return_index=1)[1])]
Out[211]:
array([[2, 1, 3, 5, 7],
[6, 5, 4, 6, 8]])
</code></pre>
<p>For detailed info on the considerations related to converting to indexing tuples, please refer to <a href="http://stackoverflow.com/a/38674038/3293881"><code>this post</code></a>.</p>
| 1 | 2016-09-28T09:38:19Z | [
"python",
"arrays",
"numpy"
]
|
Writing lines from input file to output file based on the order in a list | 39,743,368 | <p>I have an input data file <code>input.dat</code> that looks like this:</p>
<pre><code>0.00 0.00
0.00 0.00
0.00 0.00
-0.28 1.39
-0.49 1.24
-0.57 1.65
-0.61 2.11
-0.90 1.73
-0.87 2.29
</code></pre>
<p>I have have a list denoting line numbers as follows:</p>
<pre><code>linenum = [7, 2, 6]
</code></pre>
<p>I need to write to a file <code>output_veloc_max.dat</code> the rows in <code>input.dat</code> that correspond to <code>linenum</code> values in the same order.</p>
<p>The result should look like this:</p>
<pre><code>-0.61 2.11
0.00 0.00
-0.57 1.65
</code></pre>
<p>I have written the following code:</p>
<pre><code>linenum=[7,2,6]
i=1
with open('inputv.dat', 'r') as f5, open('output_veloc_max.dat', 'w') as out:
for line1 in f5:
if i in linenum:
print(line1, end=' ', file=out)
print(i,line1)
i+=1
</code></pre>
<p>But, it gives me output that looks like this:</p>
<pre><code>2 0.00 0.00
6 -0.57 1.65
7 -0.61 2.11
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-09-28T09:30:02Z | 39,743,762 | <p>Store the values as you encounter them in a dictionary <code>d</code> with the keys denoting the line number and the value holding the line contents. Write them to the file with <code>writelines</code> according to the order of <code>linenum</code>. Use <code>enumerate(fileobj, 1)</code> to get a line number for each line instead of an explicit counter like <code>i</code>:</p>
<pre><code>linenum=[7,2,6]
d = {}
with open('inputv.dat', 'r') as f5, open('output_veloc_max.dat', 'w') as out:
for num, line1 in enumerate(f5, 1):
if num in linenum:
d[num] = line1
out.writelines([d[i] for i in linenum])
</code></pre>
<p>Of course, you can further trim this down with a dictionary comprehension:</p>
<pre><code>linenum = [7, 2, 6]
with open('inputv.dat', 'r') as f5, open('output_veloc_max.dat', 'w') as out:
d = {k: v for k, v in enumerate(f5, 1) if k in linenum}
out.writelines([d[i] for i in linenum])
</code></pre>
| 1 | 2016-09-28T09:45:35Z | [
"python",
"python-3.x",
"for-loop"
]
|
Loop through Microsoft Access queries with Python | 39,743,376 | <p>I need to loop through some queries of a Microsoft Access database (mdb).
Is it possible to do this with Python? (I am not familiar with Python.)</p>
<p>I was thinking of creating a list with the query names and then loop through it.</p>
<p>So far I have this:</p>
<pre><code># Import system modules (ArcGIS, Excel, Microsoft Access)
import arcpy
from arcpy import env
import csv
import pyodbc
# Set workspace
arcpy.env.workspace = r"\\mars\Skript\Connection to ERDE.XYZ.XX.sde"
MDB = r"\\mars\Konzept\auswertung_gdm.mdb"
</code></pre>
| 0 | 2016-09-28T09:30:12Z | 39,749,327 | <blockquote>
<p>I was thinking of creating a list with the query names and then loop through it.</p>
</blockquote>
<p>That's certainly possible. For example.</p>
<pre class="lang-python prettyprint-override"><code>query_names = ['Query1', 'Query2']
for query_name in query_names:
sql = "SELECT * FROM [{}]".format(query_name)
print(sql)
</code></pre>
<p>prints</p>
<pre class="lang-none prettyprint-override"><code>SELECT * FROM [Query1]
SELECT * FROM [Query2]
</code></pre>
<p>For your code the loop could execute each SELECT statement using a pyodbc <code>cursor</code> ...</p>
<pre class="lang-python prettyprint-override"><code>conn = pyodbc.connect(your_connection_string) # e.g. "DRIVER=...;DBQ=...;"
crsr = conn.cursor()
crsr.execute(sql)
# do stuff with the results
crsr.close()
conn.close()
</code></pre>
<p>... retrieving the query results (via <code>crsr.fetchall()</code> or <code>crsr.fetchone()</code>) and processing them as required.</p>
| 1 | 2016-09-28T13:43:15Z | [
"python",
"loops",
"ms-access"
]
|
Use of scipy sparse in ode solver | 39,743,402 | <p>I am trying to solve a differential equation system </p>
<p>x′ = Ax with x(0) = f(x)</p>
<p>in Python, where A is a complex sparse matrix.</p>
<p>For now i have been solving the system using the scipy.integrate.complex_ode class in the following way.</p>
<pre><code>def to_solver_function(time,vector):
sendoff = np.dot(A, np.transpose(vector))
return sendoff
solver = complex_ode(to_solver_function)
solver.set_initial_value(f(x),0)
solution = [f(x)]
for time in time_grid:
next = solver.integrate(time)
solution.append(next)
</code></pre>
<p>This has been working OK, but I need to "tell the solver" that my matrix is sparse. I figured out that i should use</p>
<pre><code>Asparse = sparse.lil_matrix(A)
</code></pre>
<p>but how do i change my solver to work with this?</p>
| 0 | 2016-09-28T09:31:08Z | 39,753,110 | <p>How large and sparse is <code>A</code>?</p>
<p>It looks like <code>A</code> is just a constant in this function:</p>
<pre><code>def to_solver_function(time,vector):
sendoff = np.dot(A, np.transpose(vector))
return sendoff
</code></pre>
<p>Is <code>vector</code> 1d? Then <code>np.transpose(vector)</code> does nothing.</p>
<p>For calculation purposes you want</p>
<pre><code>Asparse = sparse.csr_matrix(A)
</code></pre>
<p>Does <code>np.dot(Asparse, vector)</code> work? <code>np.dot</code> is supposed to be sparse aware. If not, try <code>Asparse*vector</code>. This probably produces a dense matrix, so you may need <code>(Asparse*vector).A1</code> to produce a 1d array.</p>
<p>But check the timings. <code>Asparse</code> needs to be quite large and very sparse to perform faster than <code>A</code> in a dot product.</p>
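<p>A small sketch of the dense versus sparse product (the tiny diagonal matrix here is only for illustration; real gains need a much larger, sparser <code>A</code>):</p>

```python
import numpy as np
from scipy import sparse

# Toy diagonal matrix standing in for a large, very sparse A.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
Asparse = sparse.csr_matrix(A)

v = np.array([1.0, 1.0, 1.0])
dense_result = A.dot(v)            # plain ndarray product
sparse_result = Asparse.dot(v)     # sparse-aware product, returns a 1d ndarray
```

<p>Both products agree; only the timing (and memory use) differs as <code>A</code> grows.</p>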
| 0 | 2016-09-28T16:35:56Z | [
"python",
"scipy",
"sparse-matrix",
"scientific-computing"
]
|
Convert a ctypes int** to numpy 2 dimensional array | 39,743,432 | <p>I have a c++ implementation wrapped with SWIG and compiled to a module which can be used by python.</p>
<p>I am using ctypes to call the function with ctype arguments, int double etc.
The output of my_function(ctype args) is an int**, i.e. it's a multidimensional array.</p>
<p>How can I cast this into a 2D numpy array inside the python script? I have been looking at ctypes pointers but so far I have had no luck. I have spent many, many hours reading the C-API of python and numpy for use with SWIG, and implementing on the c++ side to return a numpy array has so far been incredibly hard and completely unsuccessful.</p>
| 1 | 2016-09-28T09:32:09Z | 39,749,730 | <p>I don't think this can be done on the Python side; it will have to be done within the C/C++ layer using NumPy's C-API interface (<code>PyArray_SimpleNewFromData</code> is the relevant function â see <a href="http://stackoverflow.com/questions/30357115/pyarray-simplenewfromdata-example">this answer</a> for some details). <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/neighbors/dist_metrics.pyx#L37" rel="nofollow">Here</a> is an example of this in a Cython script.</p>
<p>Note that handling of deallocation is complicated in this case: as far as I know, there's no mechanism to allow numpy to handle it automatically. You'll just have to make sure that whatever script deallocates the array doesn't do so when the numpy wrapper is still being used.</p>
<p>Edit: if your <code>int**</code> does not point to a contiguous memory block, I don't believe this will work. NumPy can only (easily) handle contiguous data buffers.</p>
| 0 | 2016-09-28T13:58:39Z | [
"python",
"c++",
"arrays",
"numpy",
"swig"
]
|
Convert a ctypes int** to numpy 2 dimensional array | 39,743,432 | <p>I have a c++ implementation wrapped with SWIG and compiled to a module which can be used by python.</p>
<p>I am using ctypes to call the function with ctype arguments, int double etc.
The output of my_function(ctype args) is an int**, i.e. it's a multidimensional array.</p>
<p>How can I cast this into a 2D numpy array inside the python script? I have been looking at ctypes pointers but so far I have had no luck. I have spent many, many hours reading the C-API of python and numpy for use with SWIG, and implementing on the c++ side to return a numpy array has so far been incredibly hard and completely unsuccessful.</p>
| 1 | 2016-09-28T09:32:09Z | 39,754,263 | <p>Using NumPy and <code>numpy.i</code>, this is quite easy</p>
<p>Interface header</p>
<pre><code>#pragma once
void fun(int** outArray, int* nRows, int* nCols);
</code></pre>
<p>Implementation</p>
<pre><code>#include "test.h"
#include <malloc.h>
void fun(int** outArray, int* nRows, int* nCols) {
int _nRows = 100;
int _nCols = 150;
int* _outArray = (int*)malloc(sizeof(int)*_nRows*_nCols);
*outArray = _outArray;
*nRows = _nRows;
*nCols = _nCols;
}
</code></pre>
<p>SWIG interface header</p>
<pre><code>%module example
%{
#define SWIG_FILE_WITH_INIT
#include "test.h"
%}
%include "numpy.i"
%init
%{
import_array();
%}
%apply (int** ARGOUTVIEWM_ARRAY2, int* DIM1, int* DIM2) {(int** outArray, int* nRows, int* nCols)}
%include "test.h"
</code></pre>
<p>The typemap <code>ARGOUTVIEWM_ARRAY2</code> creates a managed NumPy array and free is automatically called, when the NumPy object is destroyed in Python.</p>
<p>If you want to create the wrapper yourself using the Python C API, you can look into the generated code made by SWIG using <code>numpy.i</code></p>
| 0 | 2016-09-28T17:39:41Z | [
"python",
"c++",
"arrays",
"numpy",
"swig"
]
|
Does importing a Python module affect performance? | 39,743,441 | <p>When searching for a solution, it's common to come across several methods. I often use the solution that most closely aligns with syntax I'm familiar with. But sometimes the far-and-away most upvoted solution involves importing a module new to me, as in <a href="http://stackoverflow.com/questions/613183/sort-a-python-dictionary-by-value">this thread</a>.</p>
<p>I'm already importing various modules in a large script that will be looping 50K times. Does importing additional modules affect processing time, or otherwise affect the script's efficiency? Do I need to worry about the size of the module being called? Seeking guidance on whether, generally, it's worth the extra time/effort to find solutions using methods contained in modules I'm already using.</p>
| 0 | 2016-09-28T09:32:32Z | 39,743,737 | <p>Every bytecode instruction executed in Python affects performance. However, unless that code is on a critical path and repeated a high number of times, the effect is so small as to not matter.</p>
<p>Using <code>import</code> consists of two distinct steps: loading the module (done just <em>once</em>), and binding names (where the imported name is added to your namespace to refer to something loaded by the module, or the module object itself). Binding names is almost costless. Because loading a module happens just once, it won't affect your performance.</p>
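<p>The load-once behaviour is easy to observe via <code>sys.modules</code>, the cache of loaded modules (using the stdlib <code>json</code> module as an example):</p>

```python
import sys

import json                    # first import: the module is loaded and cached
cached = sys.modules['json']   # loaded modules live in sys.modules

import json                    # repeated import: only re-binds the name
same_object = sys.modules['json'] is cached   # True, no second load happened
```

<p>A repeated <code>import</code> just looks the module up in that cache, so it costs almost nothing.</p>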
<p>Focus instead on what the module functionality can do to help you solve your problem efficiently.</p>
| 1 | 2016-09-28T09:44:31Z | [
"python",
"import",
"module"
]
|
Searching for two or more words in string - Python Troubleshooting Program | 39,743,497 | <p>I know the code for searching for words in a string that match another string.</p>
<pre><code>if any(word in problem for word in keyword_virus):
#Some code here, e.g. pc_virus()
</code></pre>
<p>But is there any code that would allow me to check whether two or more words matched, or any modification to this code that would?</p>
<p>Thank you :)</p>
| 0 | 2016-09-28T09:34:38Z | 39,743,785 | <p>If I understand your question correctly, I'd rewrite the for loop to check every word in your checklist and append each match.</p>
<pre><code>matches = []
for word in check_list:
if word in problem_list:
matches.append(word)
</code></pre>
<p>You'll end up with a list of words that match, from which you can count the occurrence of each word.</p>
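<p>For example, counting the matched keywords could look like this (the sample <code>problem</code> string and keyword list are made up):</p>

```python
from collections import Counter

problem = "my pc has a virus and the virus keeps coming back"
keyword_virus = ["virus", "malware", "slow"]

words = problem.split()
# Keywords that appear in the problem text at least once.
matches = [word for word in keyword_virus if word in words]
# How often each matching keyword occurs.
counts = Counter(word for word in words if word in keyword_virus)
```

<p>With this sample input, <code>matches</code> contains only <code>"virus"</code>, and <code>counts</code> records that it occurred twice.</p>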
| 0 | 2016-09-28T09:46:25Z | [
"python",
"variables",
"if-statement",
"search"
]
|
Searching for two or more words in string - Python Troubleshooting Program | 39,743,497 | <p>I know the code for searching for words in a string that match another string.</p>
<pre><code>if any(word in problem for word in keyword_virus):
#Some code here, e.g. pc_virus()
</code></pre>
<p>But is there any code that would allow me to check whether two or more words matched, or any modification to this code that would?</p>
<p>Thank you :)</p>
| 0 | 2016-09-28T09:34:38Z | 39,743,961 | <pre><code>keyword_virus = 'the brown fox jumps'
print([x for x in ['brown', 'jumps', 'notinstring'] if x in keyword_virus.split()])
#['brown', 'jumps']
</code></pre>
<p>That will return all matched words in keyword_virus.</p>
| 1 | 2016-09-28T09:52:52Z | [
"python",
"variables",
"if-statement",
"search"
]
|
Unable to return the list that is read from a file(.csv) | 39,743,572 | <p>I have been learning and practicing python and during which
I found an error in my program that I'm unable to resolve. I want to return a list that is read from a CSV file. I tried the code below, and it returns an error.</p>
<pre><code>import csv
def returnTheRowsInTheFile(fileName):
READ = 'r'
listOfRows = []
try:
with open(fileName, READ) as myFile:
listOfRows = csv.reader(myFile)
return listOfRows
except FileNotFoundError:
print('The file ' + fileName + ' is not found')
except:
print('Something went wrong')
finally:
#myFile.close()
print()
def main():
fullString = returnTheRowsInTheFile('ABBREVATIONS.CSV')
for eachRow in fullString:
print(eachRow)
return
main()
</code></pre>
<p>And the error is</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 36,
in
main() File "C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 32,
in main
for eachRow in fullString: ValueError: I/O operation on closed file.</p>
</blockquote>
| 1 | 2016-09-28T09:37:56Z | 39,743,764 | <p>When you use <code>with open</code>, the file is closed when the context ends. <code>listOfRows</code> is a <code>csv.reader</code> object (not a list), and so is <code>fullString</code>; iterating over it tries to read from the underlying file object, which is already closed.</p>
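<p>One way around this is to materialize the rows into a real list while the file is still open. A minimal sketch (using <code>io.StringIO</code> to stand in for an opened file):</p>

```python
import csv
import io

def return_rows(csv_file):
    reader = csv.reader(csv_file)
    return list(reader)   # read everything while the file is still open

# io.StringIO plays the role of a file opened with `with open(...)`.
rows = return_rows(io.StringIO("a,b\nc,d\n"))
```

<p>Because the rows are consumed before the file is closed, the returned list can be iterated safely later.</p>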
| 0 | 2016-09-28T09:45:42Z | [
"python",
"csv"
]
|
Unable to return the list that is read from a file(.csv) | 39,743,572 | <p>I have been learning and practicing python and during which
I found an error in my program that I'm unable to resolve. I want to return a list that is read from a CSV file. I tried the code below, and it returns an error.</p>
<pre><code>import csv
def returnTheRowsInTheFile(fileName):
READ = 'r'
listOfRows = []
try:
with open(fileName, READ) as myFile:
listOfRows = csv.reader(myFile)
return listOfRows
except FileNotFoundError:
print('The file ' + fileName + ' is not found')
except:
print('Something went wrong')
finally:
#myFile.close()
print()
def main():
fullString = returnTheRowsInTheFile('ABBREVATIONS.CSV')
for eachRow in fullString:
print(eachRow)
return
main()
</code></pre>
<p>And the error is</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 36,
in
main() File "C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 32,
in main
for eachRow in fullString: ValueError: I/O operation on closed file.</p>
</blockquote>
| 1 | 2016-09-28T09:37:56Z | 39,743,883 | <p>As JulienD already pointed out, the file is already closed when you try to read the rows from it. You can get rid of this exception like this, for example:</p>
<pre><code> with open(fileName, READ) as myFile:
listOfRows = csv.reader(myFile)
for row in listOfRows:
yield row
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Btw the way you handle exceptions makes it pretty hard to debug. I'd suggest something like this.</p>
<pre><code>except Exception as e:
    print('Something went wrong: "%s"' % e)
</code></pre>
<p>This way you can at least see the error message.</p>
| 0 | 2016-09-28T09:50:13Z | [
"python",
"csv"
]
|
Unable to return the list that is read from a file(.csv) | 39,743,572 | <p>I have been learning and practicing Python, and I found an error in my program that I'm unable to resolve. I want to return a list that is retrieved from a CSV file. I tried the code below and it returns an error. </p>
<pre><code>import csv
def returnTheRowsInTheFile(fileName):
READ = 'r'
listOfRows = []
try:
with open(fileName, READ) as myFile:
listOfRows = csv.reader(myFile)
return listOfRows
except FileNotFoundError:
print('The file ' + fileName + ' is not found')
except:
print('Something went wrong')
finally:
#myFile.close()
print()
def main():
fullString = returnTheRowsInTheFile('ABBREVATIONS.CSV')
for eachRow in fullString:
print(eachRow)
return
main()
</code></pre>
<p>And the error is</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 36,
in
main() File "C:\Users\santo\workspace\PyProject\hello\FinalChallenge.py", line 32,
in main
for eachRow in fullString: ValueError: I/O operation on closed file.</p>
</blockquote>
| 1 | 2016-09-28T09:37:56Z | 39,745,590 | <p>The easy way to solve this problem is to return a <em>list</em> from your function. I know you assigned <code>listOfRows = []</code> but this was overwritten when you did <code>listOfRows = csv.reader(myFile)</code>.</p>
<p>So, the easy solution is:</p>
<pre><code>def returnTheRowsInTheFile(fileName):
READ = 'r'
try:
with open(fileName, READ) as myFile:
listOfRows = csv.reader(myFile)
return list(listOfRows) # convert to a list
except FileNotFoundError:
print('The file ' + fileName + ' is not found')
except:
print('Something went wrong')
</code></pre>
<p>You should also read <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">pep8</a> which is the style guide for Python; in order to understand how to name your variables and functions.</p>
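<p>A short sketch of why the <code>list(...)</code> call matters (using <code>io.StringIO</code> in place of a real file): <code>csv.reader</code> returns a lazy, one-shot iterator tied to its underlying stream, and <code>list()</code> copies the rows out while the stream is still usable.</p>

```python
import csv
import io

# csv.reader is a lazy iterator tied to its underlying stream; list()
# walks it once and copies the rows out into an independent list.
buf = io.StringIO("x,y\n1,2\n")
reader = csv.reader(buf)
rows = list(reader)      # materialize while the stream is still usable
print(rows)              # [['x', 'y'], ['1', '2']]
print(list(reader))      # [] -- the reader is exhausted after one pass
```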
| 1 | 2016-09-28T11:01:02Z | [
"python",
"csv"
]
|
Tkinter checkbox 'Name not defined' | 39,743,579 | <p>I am currently writing an application that uses Tkinter to provide a graphical interface for the user.</p>
<p>The app has been working very well and recently I decided to add some checkboxes, the idea being that when the user checks one of the boxes, another set of text is sent through an API.</p>
<p>I have input boxes that I am able to get to work perfectly, however for some reason whenever I try and retrieve the value of the checkbox, I get the following error:</p>
<pre><code> if check.get():
NameError: name 'check' is not defined
</code></pre>
<p>For the life of me I can't figure out why this error is popping up, here is the rest of my code, to make it clearer I have removed the working code for the input boxes. </p>
<pre><code>from tkinter import *
class GUI:
def __init__(self, master):
check = IntVar()
self.e = Checkbutton(root, text="check me", variable=check)
self.e.grid(row=4, column=2)
self.macro_button = Button(master, text="Test Button", command=self.test)
self.macro_button.grid(row=11, column=1)
def test(self):
if check.get():
print('its on')
else:
print('its off')
root = Tk()
root.resizable(width=False, height=False)
my_gui = GUI(root)
root.mainloop()
</code></pre>
<p>When I run this code, and press the button labeled 'test button', that is when the error appears in my terminal.</p>
<p>Anyone have any idea why this is happening for my checkboxes, and not for my inputboxes?</p>
<p>EDIT:</p>
<p>What's even odder is that this code I found online, made to teach you how to use a tkinter checkbox, works like a charm, and it's almost identical to mine:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
var = tk.IntVar()
cb = tk.Checkbutton(root, text="the lights are on", variable=var)
cb.pack()
def showstate():
if var.get():
print ("the lights are on")
else:
print ("the lights are off")
button = tk.Button(root, text="show state", command=showstate)
button.pack()
root.mainloop()
</code></pre>
| 0 | 2016-09-28T09:38:02Z | 39,743,685 | <p>You just need to make <code>check</code> an instance variable with <code>self</code>.</p>
<p>I.E</p>
<pre><code>class GUI:
def __init__(self, master):
self.check = IntVar()
self.e = Checkbutton(root, text="check me", variable=self.check)
self.e.grid(row=4, column=2)
self.macro_button = Button(master, text="Test Button", command=self.test)
self.macro_button.grid(row=11, column=1)
def test(self):
if self.check.get():
print('its on')
else:
print('its off')
root = Tk()
root.resizable(width=False, height=False)
my_gui = GUI(root)
root.mainloop()
</code></pre>
<p>The example you found online is written in an 'inline' style - which is fine until your GUI becomes larger and you require many methods and variables to be used / passed.</p>
| 1 | 2016-09-28T09:42:31Z | [
"python",
"checkbox",
"tkinter"
]
|
How do you make a leaderboard in python? | 39,743,789 | <p>I am working on a project for my school and I was wondering if there is a way to make a leaderboard in python? I am fairly new to python and so far, what I have done is get the user's input and store it in a text file. I'm not sure how to continue. Any help is appreciated, thanks!</p>
<pre><code>x = 0
Name = [None]*1000
Class = [None]*1000
Score = [0]*100
# opens the text file called text_file
text_file = open("write.txt","a")
# puts in the values of the highest scores and "saves it" by closing and opening the file
def write_in_file():
global text_file
text_file.write(Name[x])
text_file.write("\n")
text_file.write(Class[x])
text_file.write("\n")
text_file.write(Score[x])
text_file.write("\n")
text_file.write("\n")
text_file.close()
text_file = open("write.txt","a")
# asks for player data and puts highest value in a file
for i in Name:
Name[x] = input("Name:")
Class[x] = input("Class:")
Score[x] = input("Score:")
write_in_file()
print(Score)
x += 1
</code></pre>
| 1 | 2016-09-28T09:46:29Z | 39,744,679 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/version/0.18.1/tutorials.html" rel="nofollow">pandas</a> for making a leaderboard table. Here is a sample:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Name': ['x','y','z'],
'Class': ['B','A','C'],
'Score' : [75,92,56]})
print (df)
Out[3]:
Class Name Score
0 B x 75
1 A y 92
2 C z 56
# changing order of columns
df = df.reindex(columns=['Name','Class','Score'])
# appending
df.loc[3] = ['a','A', 96]
print (df)
Out[15]:
Name Class Score
1 y A 92
3 a A 96
0 x B 75
2 z C 56
# sorting
df = df.sort_values(['Class', 'Score'], ascending=[True, False])
print (df)
Out[16]:
Name Class Score
3 a A 96
1 y A 92
0 x B 75
2 z C 56
</code></pre>
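<p>If pandas is not available, the standard library alone can also build the leaderboard. The sample rows below are invented; the (name, class, score) layout mirrors the text file from the question:</p>

```python
import csv
import io

# Read the saved rows back and sort by score, highest first.
saved = "x,B,75\ny,A,92\nz,C,56\na,A,96\n"
rows = list(csv.reader(io.StringIO(saved)))
leaderboard = sorted(rows, key=lambda r: int(r[2]), reverse=True)
for rank, (name, cls, score) in enumerate(leaderboard, start=1):
    print(rank, name, cls, score)
# 1 a A 96
# 2 y A 92
# 3 x B 75
# 4 z C 56
```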
| 1 | 2016-09-28T10:20:17Z | [
"python"
]
|
How to define namespace in python? | 39,743,847 | <p>It seems that Python does have namespaces, but all I can find is people telling me what a namespace or scope is. So how do you define a namespace in Python? All I need is the syntax (an example would be even better). </p>
| -1 | 2016-09-28T09:48:53Z | 39,744,275 | <p>A "namespace" in Python is defined more by the layout of the code on disk than it is with any particular syntax. Given a directory structure of:</p>
<pre><code>my_code/
module_a/
__init__.py
a.py
b.py
module_b/
__init__.py
a.py
b.py
__init__.py
main.py
</code></pre>
<p>and assuming each <code>a.py</code> and <code>b.py</code> file contains a function, <code>fn()</code>, the import syntax to resolve the namespace works like so (from <code>main.py</code>):</p>
<pre><code>from module_a.a import fn # fn() from module_a/a.py
from module_a.b import fn # fn() from module_a/b.py
from module_b.a import fn # fn() from module_b/a.py
from module_b.b import fn # fn() from module_b/b.py
</code></pre>
<p>At this point, <code>fn()</code> is available within <code>main.py</code> and will call whichever implementation you imported.</p>
<p>It's also possible to use the <code>from module import *</code> syntax, but this is discouraged in favour of being more specific:</p>
<pre><code>from module_a.a import *
</code></pre>
<p>Here, <code>fn()</code> is available within <code>main.py</code>, and also <em>any other symbol defined in <code>module_a/a.py</code></em>.</p>
<p>If we want to have access to both <code>module_a/a.py</code>'s <code>fn()</code> and also the <code>fn()</code> from <code>module_b/b.py</code>, we can do one of two things: either we use the <code>from module import thing as something</code> syntax:</p>
<pre><code>from module_a.a import fn as module_a_fn
from module_b.b import fn as module_b_fn
</code></pre>
<p>and use them in our <code>main.py</code> as <code>module_a_fn()</code> and <code>module_b_fn()</code>, or we can just import the module and reference it directly in the code, so in <code>main.py</code>:</p>
<pre><code>import module_a.a
import module_a.b
module_a.a.fn() # call fn() from module_a/a.py
module_a.b.fn() # call fn() from module_a/b.py
</code></pre>
<p>I hope that helps elucidate the usage a bit more.</p>
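<p>Here is a self-contained sketch of the layout described above: two packages, each with its own <code>a.py</code> defining <code>fn()</code>, imported under distinct names. The files are generated in a temporary directory purely for the demo.</p>

```python
import os
import sys
import tempfile

# Build module_a/ and module_b/, each a package containing a.py with fn().
root = tempfile.mkdtemp()
for pkg, result in [("module_a", "A"), ("module_b", "B")]:
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
    with open(os.path.join(root, pkg, "a.py"), "w") as f:
        f.write("def fn():\n    return %r\n" % result)

sys.path.insert(0, root)
from module_a.a import fn as fn_a  # fn() from module_a/a.py
from module_b.a import fn as fn_b  # fn() from module_b/a.py
print(fn_a(), fn_b())  # A B
```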
| 0 | 2016-09-28T10:04:06Z | [
"python"
]
|
Sending long JSON via POST Python | 39,743,884 | <p>I know this seems fundamental, but how do we do it? For example,</p>
<pre><code>import urllib2
import ssl
import requests
import json
SetProxy={'https':'https://sample.sample.com'}
proxy = urllib2.ProxyHandler(SetProxy)
url="https://something.com/getsomething"
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
payload = {
"uaaURL": "https://com-example.something.com",
"sampleID": "admin",
"sampleSecret": "password",
"sampleID2": "example-sample-el",
"sampleSecret2": "ssenjsoemal/+11=",
"username": "test",
"someAttributes": {
"Groups": [
"example_com-abc"
],
"attribute": [
"value1"
]
}
}
req = urllib2.Request(url)
req.add_header('Content-Type','application/json')
response = urllib2.urlopen(req, json=json.dumps(payload))
</code></pre>
<p>The above runs and gives me a 400 which means my JSON is bad. I was not able to find a post that sent large JSON data via a post request. Any advice? Thanks! </p>
| 0 | 2016-09-28T09:50:13Z | 39,744,351 | <p>Try the code below:</p>
<pre><code>import json
import urllib2
import ssl
import requests
SetProxy={'https':'https://sample.sample.com'}
proxy = urllib2.ProxyHandler(SetProxy)
url='https://something.com/getsomething'
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
data = {
'uaaURL' : 'https://com-example.something.com',
'sampleID': 'admin',
'sampleSecret': 'password',
'sampleID2': 'example-sample-el',
'sampleSecret2': 'ssenjsoemal/+11=',
'username': 'test',
'someAttributes': {
'Groups': ['example_com-abc' ],
'attribute': [ 'value1' ]
    }
}
req = urllib2.Request('https://something.com/getsomething')
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(data))
</code></pre>
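<p>Note that <code>urllib2</code> is Python 2 only. A Python 3 sketch of the same POST follows; nothing is sent here, we only build the <code>Request</code> object to show how the JSON body and header are attached (the URL is the question's placeholder):</p>

```python
import json
import urllib.request

# Build (but do not send) a JSON POST request with urllib.request.
payload = {
    "sampleID": "admin",
    "someAttributes": {"Groups": ["example_com-abc"], "attribute": ["value1"]},
}
req = urllib.request.Request(
    "https://something.com/getsomething",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(req.get_method())                # POST (implied by data=...)
print(req.get_header("Content-type"))  # application/json
# response = urllib.request.urlopen(req)  # uncomment to actually send
```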
| -1 | 2016-09-28T10:07:01Z | [
"python",
"json"
]
|
Automatically creating a list | 39,743,916 | <p>I am trying to automatically create the following list:</p>
<pre><code>[Slot('1A', '0', 0), Slot('2A', '0', 0),
Slot('1B', '0', 0), Slot('2B', '0', 0), ....]
</code></pre>
<p>By defining slot as:</p>
<pre><code>class Slot:
def __init__(self, address , card, stat):
self.address = address
self.card = card
self.stat = stat
board = []
for i in range(1, 13):
for j in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']:
board.append(Slot ((str(i)+j,'0', 0)))
print(board)
</code></pre>
<p>Using Python 3.5 on Windows. What is wrong, and how can I fix it? Thanks.</p>
| 0 | 2016-09-28T09:51:14Z | 39,744,178 | <p>You have enclosed all arguments to <code>Slot</code> in a single parenthesis thereby passing a single argument to an <code>__init__</code> that expects three (and the <code>TypeError</code> raised hints to that). Remove the unnecessary set of parenthesis:</p>
<pre><code>board.append(Slot(str(i)+j,'0', 0))
</code></pre>
<p>and it works fine.</p>
<p>As an addendum, <code>print(board)</code> will return a quite unpleasant view of the objects, I'd suggest overloading <code>__str__</code> and <code>__repr__</code> to get a better view of the created objects: </p>
<pre><code>class Slot:
def __init__(self, address , card, stat):
self.address = address
self.card = card
self.stat = stat
def __str__(self):
return "Slot: ({0}, {1}, {2})".format(self.address, self.card, self.stat)
def __repr__(self):
return str(self)
</code></pre>
<p>Now <code>print(board)</code> prints:</p>
<pre><code>print(board)
[Slot: (1A, 0, 0), Slot: (1B, 0, 0),..., Slot: (12H, 0, 0), Slot: (12I, 0, 0)]
</code></pre>
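<p>With the extra parentheses removed, the whole 12 x 9 board can also be built in one list comprehension; a self-contained sketch (the <code>Slot</code> class is repeated so the snippet runs on its own):</p>

```python
# Build all 12 * 9 = 108 slot addresses in a single comprehension.
class Slot:
    def __init__(self, address, card, stat):
        self.address, self.card, self.stat = address, card, stat

    def __repr__(self):
        return "Slot: ({0}, {1}, {2})".format(self.address, self.card, self.stat)

board = [Slot(str(i) + j, '0', 0)
         for i in range(1, 13)
         for j in 'ABCDEFGHI']
print(len(board))  # 108
print(board[0])    # Slot: (1A, 0, 0)
```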
| 2 | 2016-09-28T10:00:21Z | [
"python",
"python-3.x"
]
|
Automatically creating a list | 39,743,916 | <p>I am trying to automatically create the following list:</p>
<pre><code>[Slot('1A', '0', 0), Slot('2A', '0', 0),
Slot('1B', '0', 0), Slot('2B', '0', 0), ....]
</code></pre>
<p>By defining slot as:</p>
<pre><code>class Slot:
def __init__(self, address , card, stat):
self.address = address
self.card = card
self.stat = stat
board = []
for i in range(1, 13):
for j in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']:
board.append(Slot ((str(i)+j,'0', 0)))
print(board)
</code></pre>
<p>Using Python 3.5 on Windows. What is wrong, and how can I fix it? Thanks.</p>
| 0 | 2016-09-28T09:51:14Z | 39,744,485 | <p>You are passing a single tuple to the constructor. If you remove the extra set of parentheses, your code works as intended.</p>
<pre><code>From:
board.append(Slot ((str(i)+j,'0', 0)))
To:
board.append(Slot(str(i)+j,'0', 0))
</code></pre>
| 0 | 2016-09-28T10:12:25Z | [
"python",
"python-3.x"
]
|
I'm trying to use 2 data types on one variable in python? | 39,743,934 | <p>I'm trying to make the float data type available in my code. My code so far is:</p>
<pre><code>age = int (input("Age of the dog: "))
if age <= 0:
print ("This can not be true!")
elif age == 1:
print ("He/she is about 14 in human years.")
elif age == 2:
print ("He/she is about 22 in human years.")
elif age > 2:
human = 22 + (age - 2) * 5
print ("He/she is about", human, "human years")
</code></pre>
| -2 | 2016-09-28T09:52:00Z | 39,744,075 | <p>If you change your first line to</p>
<pre><code>age = float(input("Age of the dog: "))
</code></pre>
<p>your program will show a floating point age.</p>
| 0 | 2016-09-28T09:56:09Z | [
"python"
]
|
I'm trying to use 2 data types on one variable in python? | 39,743,934 | <p>I'm trying to make the float data type available in my code. My code so far is:</p>
<pre><code>age = int (input("Age of the dog: "))
if age <= 0:
print ("This can not be true!")
elif age == 1:
print ("He/she is about 14 in human years.")
elif age == 2:
print ("He/she is about 22 in human years.")
elif age > 2:
human = 22 + (age - 2) * 5
print ("He/she is about", human, "human years")
</code></pre>
| -2 | 2016-09-28T09:52:00Z | 39,744,081 | <p>Try to use something like</p>
<pre><code> human = 22 + (age - 2) * 5.0
</code></pre>
<p>Multiplying by a float automatically converts to float.</p>
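<p>A quick check of that conversion rule: mixing an int with a float yields a float, so changing only one operand is enough.</p>

```python
# Mixed int/float arithmetic promotes the whole result to float.
age = 3
human_int = 22 + (age - 2) * 5      # all ints -> stays an int
human_float = 22 + (age - 2) * 5.0  # one float -> result is a float
print(type(human_int).__name__, human_int)      # int 27
print(type(human_float).__name__, human_float)  # float 27.0
```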
| -1 | 2016-09-28T09:56:21Z | [
"python"
]
|
I'm trying to use 2 data types on one variable in python? | 39,743,934 | <p>I'm trying to make the float data type available in my code. My code so far is:</p>
<pre><code>age = int (input("Age of the dog: "))
if age <= 0:
print ("This can not be true!")
elif age == 1:
print ("He/she is about 14 in human years.")
elif age == 2:
print ("He/she is about 22 in human years.")
elif age > 2:
human = 22 + (age - 2) * 5
print ("He/she is about", human, "human years")
</code></pre>
| -2 | 2016-09-28T09:52:00Z | 39,744,206 | <p>Since you didn't mention whether the input age or the output should be a float, I added both here :-)</p>
<p>For just "getting" a floating point input, a small change suffices: <code>int(input(...))</code> will actually raise a <code>ValueError</code> on input such as <code>2.5</code>, so use <code>float(input(...))</code> instead and you are good to go.</p>
<p>If you want to process the age as a floating point value, you can change the calculation logic a little to make that explicit. Adding exception handling won't hurt either.</p>
<p>The code with exception handling as well:</p>
<pre><code>try:
    age = float(input("Age of the dog: "))
except ValueError:
    print("Age given is not a valid number")
    age = 0
if age <= 0:
    print("This can not be true!")
elif age == 1:
    human_age = 14
elif age == 2:
    human_age = 22
else:
    human_age = 22.0 + ((age - 2) * 5.0)
if age > 0:
    print("He/she is about %f human years" % human_age)
</code></pre>
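<p>Factoring the conversion into a function (the function name below is my own, not from the question) keeps the logic testable without <code>input()</code>:</p>

```python
# Pure conversion function: dog years -> approximate human years.
def dog_to_human(age):
    if age <= 0:
        raise ValueError("This can not be true!")
    if age == 1:
        return 14.0
    if age == 2:
        return 22.0
    return 22.0 + (age - 2) * 5.0

print(dog_to_human(1))    # 14.0
print(dog_to_human(2.5))  # 24.5
```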
| 1 | 2016-09-28T10:01:19Z | [
"python"
]
|