title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Fast lookup from floating point index into array | 39,721,958 | <p>Suppose I have <code>n</code> intervals of length <code>len</code> on an axis. Each of them is assigned a value in an <code>ndarray</code>. Now I'd like to look-up some values at positions <code>query_pos</code> given as floating point numbers. Currently, I plan to do it the following way:</p>
<pre><code>n = 100
len = 0.39483
data = ... # creates ndarray with data of length n = 100
query_pos = ... # creates an ndarray with positions to query
values_at_query_pos = data[(query_pos / len).astype(int)] # questionable line
</code></pre>
<p><strong>Is that a good way to do it or are there more efficient ways to convert a floating point query position into an integer index and then read from an array?</strong> I particularly wonder whether <code>astype(int)</code> is a cheap or an expensive operation, compared for example to the division or the memory read.</p>
<p>Some more remarks:</p>
<ul>
<li><p>Finally, it will be used in 2 and 3 dimensions. Currently, I plan to
catch positions that would lead to illegal indices <em>before</em> they go
into the lookup stage.</p></li>
<li><p>The <code>data</code> array will have a high-enough resolution so that I don't
need any filtering or interpolation. That will be done in previous
stages.</p></li>
</ul>
| 0 | 2016-09-27T10:11:28Z | 39,722,216 | <p>Instead of dividing each element of <code>query_pos</code> by that scalar, we can pre-calculate the reciprocal of the scalar and use multiplication instead for some speedup there. The intuition is that division is a more costly affair than multiplication.</p>
<p>Here's a quick runtime test on it -</p>
<pre><code>In [177]: # Setup inputs
...: n = 100
...: len1 = 0.39483
...: data = np.random.rand(100)
...: query_pos = np.random.randint(0,25,(100000))
...:
In [178]: %timeit query_pos / len1
1000 loops, best of 3: 612 µs per loop
In [179]: %timeit query_pos * (1/len1)
10000 loops, best of 3: 173 µs per loop
</code></pre>
<p>Secondly, if there are many repeated indices, just like in the setup used for the runtime test shown earlier, we can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.take.html" rel="nofollow"><code>np.take</code></a> for some further marginal improvement, as shown below -</p>
<pre><code>In [196]: %timeit data[(query_pos *(1/len1)).astype(int)]
1000 loops, best of 3: 538 µs per loop
In [197]: %timeit np.take(data,(query_pos * (1/len1)).astype(int))
1000 loops, best of 3: 526 µs per loop
</code></pre>
<p>If you are planning to use it on generic ndarrays, we would need to use <code>axis</code> param with <code>np.take</code>.</p>
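<p>A minimal sketch of what that could look like on a 2D array (the array shape here is an illustrative assumption, not from the question):</p>
<pre><code>data2d = np.random.rand(100, 3)              # one row per interval, 3 channels
idx = (query_pos * (1/len1)).astype(int)
out = np.take(data2d, idx, axis=0)           # same result as data2d[idx, :]
</code></pre>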
<p>Comparing it with the original approach -</p>
<pre><code>In [202]: %timeit data[(query_pos / len1).astype(int)]
1000 loops, best of 3: 967 µs per loop
</code></pre>
<p>Finally, on the question of how the division operation stacks up against converting to <code>int</code>, they seem comparable on the big dataset. But, it seems you can't avoid the conversion as needed for indexing. Here's a timing test on it -</p>
<pre><code>In [210]: idx = query_pos * (1/len1)
In [211]: %timeit query_pos * (1/len1)
10000 loops, best of 3: 165 µs per loop
In [212]: %timeit idx.astype(int)
10000 loops, best of 3: 110 µs per loop
</code></pre>
| 4 | 2016-09-27T10:24:15Z | [
"python",
"numpy",
"optimization",
"floating-point",
"indices"
]
|
Fast lookup from floating point index into array | 39,721,958 | <p>Suppose I have <code>n</code> intervals of length <code>len</code> on an axis. Each of them is assigned a value in an <code>ndarray</code>. Now I'd like to look-up some values at positions <code>query_pos</code> given as floating point numbers. Currently, I plan to do it the following way:</p>
<pre><code>n = 100
len = 0.39483
data = ... # creates ndarray with data of length n = 100
query_pos = ... # creates an ndarray with positions to query
values_at_query_pos = data[(query_pos / len).astype(int)] # questionable line
</code></pre>
<p><strong>Is that a good way to do it or are there more efficient ways to convert a floating point query position into an integer index and then read from an array?</strong> I particularly wonder whether <code>astype(int)</code> is a cheap or an expensive operation, compared for example to the division or the memory read.</p>
<p>Some more remarks:</p>
<ul>
<li><p>Finally, it will be used in 2 and 3 dimensions. Currently, I plan to
catch positions that would lead to illegal indices <em>before</em> they go
into the lookup stage.</p></li>
<li><p>The <code>data</code> array will have a high-enough resolution so that I don't
need any filtering or interpolation. That will be done in previous
stages.</p></li>
</ul>
| 0 | 2016-09-27T10:11:28Z | 39,723,171 | <p>You can output the result of the division straight to an <code>int</code> array:</p>
<pre><code>idx = np.empty_like(query_pos, int)
np.divide(query_pos, len, out=idx, casting='unsafe')
</code></pre>
<p>This will be noticeably faster only for large arrays. But this code is harder to read, so only optimize if it's a bottleneck!</p>
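<p>Putting it together with the lookup itself, a minimal sketch using the question's variable names (the array sizes are just an assumption for illustration; note that <code>len</code> shadows the built-in):</p>
<pre><code>import numpy as np

len = 0.39483
data = np.random.rand(100)
query_pos = np.random.rand(100000) * 39.0    # positions that stay inside the 100 intervals

idx = np.empty_like(query_pos, int)
np.divide(query_pos, len, out=idx, casting='unsafe')
values_at_query_pos = data[idx]              # or np.take(data, idx)
</code></pre>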
| 1 | 2016-09-27T11:11:47Z | [
"python",
"numpy",
"optimization",
"floating-point",
"indices"
]
|
Pandas: count some values in a column | 39,722,185 | <p>I have a dataframe; this is part of it:</p>
<pre><code> ID,"url","app_name","used_at","active_seconds","device_connection","device_os","device_type","device_usage"
e990fae0f48b7daf52619b5ccbec61bc,"",Phone,2015-05-01 09:29:11,13,3g,android,smartphone,home
e990fae0f48b7daf52619b5ccbec61bc,"",Phone,2015-05-01 09:33:00,3,unknown,android,smartphone,home
e990fae0f48b7daf52619b5ccbec61bc,"",Phone,2015-06-01 09:33:07,1,unknown,android,smartphone,home
e990fae0f48b7daf52619b5ccbec61bc,"",Phone,2015-06-01 09:34:30,5,unknown,android,smartphone,home
e990fae0f48b7daf52619b5ccbec61bc,"",Messaging,2015-06-01 09:36:22,133,3g,android,smartphone,home
e990fae0f48b7daf52619b5ccbec61bc,"",Messaging,2015-05-02 09:38:40,5,3g,android,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",Yandex.Navigator,2015-05-01 11:04:48,70,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",VK Client,2015-6-01 12:02:27,248,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",Viber,2015-07-01 12:06:35,7,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",VK Client,2015-08-01 12:23:26,86,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",Talking Angela,2015-08-02 12:24:52,0,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",My Talking Angela,2015-08-03 12:24:52,167,3g,ios,smartphone,home
574c4969b017ae6481db9a7c77328bc3,"",Talking Angela,2015-08-04 12:27:39,34,3g,ios,smartphone,home
</code></pre>
<p>I need to count the number of days in every month for every <code>ID</code>.</p>
<p>If I try <code>df.groupby('ID')['used_at'].count()</code> I get the number of visits; how can I count <code>days</code> per <code>month</code>? </p>
| 1 | 2016-09-27T10:22:48Z | 39,722,273 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by <code>ID</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.month.html" rel="nofollow"><code>month</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.day.html" rel="nofollow"><code>day</code></a> and aggregate <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p>
<pre><code>df1 = df.used_at.groupby([df['ID'], df.used_at.dt.month,df.used_at.dt.day ]).size()
print (df1)
ID used_at used_at
574c4969b017ae6481db9a7c77328bc3 5 1 1
6 1 1
7 1 1
8 1 1
2 1
3 1
4 1
e990fae0f48b7daf52619b5ccbec61bc 5 1 2
2 1
6 1 3
dtype: int64
</code></pre>
<p>Or by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>date</code></a> - it is the same as grouping by <code>year</code>, <code>month</code> and <code>day</code>:</p>
<pre><code>df1 = df.used_at.groupby([df['ID'], df.used_at.dt.date]).size()
print (df1)
ID used_at
574c4969b017ae6481db9a7c77328bc3 2015-05-01 1
2015-06-01 1
2015-07-01 1
2015-08-01 1
2015-08-02 1
2015-08-03 1
2015-08-04 1
e990fae0f48b7daf52619b5ccbec61bc 2015-05-01 2
2015-05-02 1
2015-06-01 3
dtype: int64
</code></pre>
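<p>If the goal is literally the <em>number of distinct days</em> per month for every <code>ID</code> (rather than a per-day count), a minimal sketch along the same lines could be:</p>
<pre><code>df['used_at'] = pd.to_datetime(df['used_at'])
days = df.groupby(['ID', df['used_at'].dt.month])['used_at'].apply(lambda s: s.dt.date.nunique())
print (days)
</code></pre>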
<p>Differences between <code>count</code> and <code>size</code> in <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1822/grouping-data/6874/aggregating-by-size-and-count#t=201609271029599912108">SO documentation</a>.</p>
| 1 | 2016-09-27T10:27:08Z | [
"python",
"datetime",
"pandas",
"aggregate",
"days"
]
|
Unable to serve the nested css & js files via nginx? | 39,722,236 | <p>I've read some documentation and tried a number of alternatives from previous answers on this site but cannot get nginx to serve my css & js files from my flask app. In my code I have for example:</p>
<pre><code><link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}"/>
<script type="text/javascript" src=" ../static/js/jquery.tablesorter.js"></script>
</code></pre>
<p>and my file structure looks like this:
<a href="http://i.stack.imgur.com/4g8G5.png" rel="nofollow"><img src="http://i.stack.imgur.com/4g8G5.png" alt="file structure"></a></p>
<p>In my nginx config '/etc/nginx/sites-enabled/src' I am able to get the html files within templates to serve ok and also able to get the 'log_big.png' file within the static folder to serve ok... but I cannot get the 3rd location entry to work and serve these nested files:</p>
<pre><code>server {
location / {
proxy_pass http://localhost:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location /templates {
alias /home/www/src/templates/; # This works ok for html files
}
location /static/ {
alias /home/www/src/static/; # This works ok for the .png file
}
location /CSS/ {
autoindex on;
root /home/www/src/static/; # This doesn't work ?
}
include /etc/nginx/mime.types;
}
</code></pre>
<p>Can anyone advise what I am doing wrong please?</p>
| -1 | 2016-09-27T10:25:23Z | 39,730,495 | <p>Not only do you mix the case (<code>CSS</code> vs <code>css</code>), you include <code>../static/</code> in the JavaScript path. Fix the capitalization and remove the <code>../static/</code>.</p>
| 0 | 2016-09-27T17:03:52Z | [
"python",
"nginx",
"flask",
"gunicorn",
"nginx-location"
]
|
Getting Django function to return and render HTML page | 39,722,274 | <p>I have a function (inside a <a href="https://docs.python.org/3/tutorial/classes.html#classes" rel="nofollow">class</a>) that returns <code>True</code> if a user <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/auth/#django.contrib.auth.models.User.is_authenticated" rel="nofollow">is_authenticated()</a> and <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/auth/#django.contrib.auth.models.User.is_active" rel="nofollow">is_active</a>, but it is not returning <strong>HTML</strong> as I thought it would.</p>
<pre><code>class LoginDetails:
def __init__(self, request):
self.request = request
self.template = 'myapp/errorpage.html'
self.context = {}
def user_is_logged_in(self):
if self.request.user.is_authenticated() and self.request.user.is_active:
print 'USER IS AUTHENTICATED!'
return True
print 'USER IS NOT AUTHENTICATED!'
return render(self.request, self.template, self.context)
</code></pre>
<p><strong>views.py</strong>:</p>
<pre><code>def authUsers(request):
logindetails = LoginDetails(request).user_is_logged_in()
return HttpResponse("HTML Error Page Not Rendered")
</code></pre>
<p>How do I get the function to display/render error page without me doing it in <em>views.py</em>? Example of what I want to achieve is Stackoverflow's <a href="https://stackoverflow.com/questions/397288888888888882033/android-retrotfit-modify-fields-of-object-after-receive">Page Not Found</a>. </p>
 | -1 | 2016-09-27T10:27:11Z | 39,722,485 | <p>First of all, your class really is redundant: calling it takes one line, and modifying your view is also one line.</p>
<pre><code>from django.http import HttpResponse, Http404
def authUsers(request):
if request.user.is_authenticated() and request.user.is_active:
raise Http404
else :
return render(...)
</code></pre>
<p>Using that class merely complicates matters. But if you really wanted to, you could modify it like this: </p>
<pre><code>class LoginDetails:
def __init__(self, request):
self.request = request
self.template = 'myapp/errorpage.html'
self.context = {}
def user_is_logged_in(self):
if self.request.user.is_authenticated() and self.request.user.is_active:
return False
return render(self.request, self.template, self.context)
def authUsers(request):
logindetails = LoginDetails(request).user_is_logged_in()
if not logindetails:
return HttpResponse("HTML Error Page Not Rendered")
else:
return logindetails
</code></pre>
| 0 | 2016-09-27T10:38:54Z | [
"python",
"django"
]
|
Is it a good idea to apply ML libraries on pandas data frame? | 39,722,279 | <p>I am building a cognitive miner AI bot. My bot has two tasks: one is to train and the other is to predict. I'm using a few ML techniques. I have lots of documents (~200,000) that I'm training on, and then, when predicting for a query, I follow some steps to find the most accurately matched document (by looking at the score and confidence for each document) from training. Some well-known functions I'm using are TF-IDF, n-grams and cosine similarity over the tokens of the asked query. For this I am using core Python, Python third-party libraries and a NoSQL database for keeping the training data. </p>
<p>NOTE: all performance improvements are handled in core Python as much as possible. (Please don't suggest Elasticsearch or Python Whoosh, because I just want to use my silly code for another decade. :) )</p>
<p>I'm facing a performance issue: scoring takes 2-3 seconds, which is not good. I want the result to come back in milliseconds.</p>
<p>So my question to you: if I use pandas and try to apply all of the above functionality to it, will it give better performance? Or will numpy matrix calculations give better performance? </p>
<p>I don't think code needs to be pasted here; I just need experienced people's views on my problem, and of course the solution should be scalable. </p>
| 1 | 2016-09-27T10:27:28Z | 39,722,582 | <p>It probably won't make much of a difference either way, in terms of performance.</p>
<p>Pandas is extremely efficient for loading data and munging it (grouping it in different ways, pivoting, creating new columns from existing columns, and so forth). </p>
<p>Once your data is ready for passing to a machine learning algorithm (say, in <code>sklearn</code>), then, basically, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.as_matrix.html" rel="nofollow"><code>pd.DataFrame.as_matrix()</code></a> can transform it into a numpy array, without fundamentally affecting overall performance. It's hard to conceive of any <code>sklearn</code> prediction/classification stage whose cost doesn't dominate this.</p>
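<p>As a rough illustration (the column names below are made up), the hand-off is a one-liner:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'tfidf_a': [0.1, 0.0, 0.7], 'tfidf_b': [0.3, 0.9, 0.2]})
X = df.as_matrix()   # plain numpy array, ready to pass to an sklearn estimator
</code></pre>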
<p>The <a href="https://pypi.python.org/pypi/sklearn-pandas" rel="nofollow"><code>sklearn-pandas</code> package</a> facilitates this even further.</p>
<p>If your performance isn't satisfactory at this point, the solution lies elsewhere.</p>
| 1 | 2016-09-27T10:43:29Z | [
"python",
"pandas",
"numpy",
"artificial-intelligence"
]
|
Python how to reference variable | 39,722,409 | <p>I'm getting a problem when referencing variables in a Python file. Here is the code:</p>
<pre><code>FG_E = 9
FG_R = 8
START = 7
READY = 9
MC = 3
BRAKE = 5
ERROR = 6
a = 2
b = 3
position = 0
def build_message(signal):
message = position
message = message | (0b1<<signal)
s = bin(message)
s = s[2:len(s)]
s = (16-len(s))*'0' + s
s0 = s[0:len(s)/2]
s1 = s[len(s)/2:len(s)]
s0 = s0[::-1]
s1 = s1[::-1]
s_final = int(s0 + s1, 2)
position = s_final
print bin(s_final)
return s_final
build_message(FG_R)
</code></pre>
<p>The error I get is:
UnboundLocalError: local variable 'position' referenced before assignment</p>
| 0 | 2016-09-27T10:34:36Z | 39,722,503 | <p>The problematic line is actually <code>position = s_final</code> in the function <code>build_message</code>.</p>
<p>If it wasn't there then <code>message = position</code> would work because the Python interpreter would know to which <code>position</code> variable you are referring. </p>
<p>But in this case it is ambiguous because you're are later reassigning to <code>position</code> (<code>position = s_final</code>). </p>
<p>You should either rethink the design of the code, or add <code>global position</code> as the first line in <code>build_message</code>. Keep in mind that, as it says, this makes <code>position</code> a global variable and <code>build_message</code> will change the value of <code>position</code> everywhere throughout your code.</p>
<p><strong>EDIT</strong> A quick demo:</p>
<pre><code>global_var = 0
def foo1():
print(global_var)
def foo2():
print(global_var)
global_var = 1
def foo3():
global global_var
print(global_var)
global_var = 1
print(global_var)
foo1()
>> 0
foo2()
>> UnboundLocalError: local variable 'global_var' referenced before assignment
foo3()
>> 0
1
</code></pre>
| 3 | 2016-09-27T10:39:52Z | [
"python"
]
|
Python how to reference variable | 39,722,409 | <p>I'm getting a problem when referencing variables in a Python file. Here is the code:</p>
<pre><code>FG_E = 9
FG_R = 8
START = 7
READY = 9
MC = 3
BRAKE = 5
ERROR = 6
a = 2
b = 3
position = 0
def build_message(signal):
message = position
message = message | (0b1<<signal)
s = bin(message)
s = s[2:len(s)]
s = (16-len(s))*'0' + s
s0 = s[0:len(s)/2]
s1 = s[len(s)/2:len(s)]
s0 = s0[::-1]
s1 = s1[::-1]
s_final = int(s0 + s1, 2)
position = s_final
print bin(s_final)
return s_final
build_message(FG_R)
</code></pre>
<p>The error I get is:
UnboundLocalError: local variable 'position' referenced before assignment</p>
| 0 | 2016-09-27T10:34:36Z | 39,722,515 | <p>You need to use <code>global</code> keyword to access global variable. </p>
<pre><code>def build_message(signal):
global position
message = position
</code></pre>
| 1 | 2016-09-27T10:40:25Z | [
"python"
]
|
Python how to reference variable | 39,722,409 | <p>I'm getting a problem when referencing variables in a Python file. Here is the code:</p>
<pre><code>FG_E = 9
FG_R = 8
START = 7
READY = 9
MC = 3
BRAKE = 5
ERROR = 6
a = 2
b = 3
position = 0
def build_message(signal):
message = position
message = message | (0b1<<signal)
s = bin(message)
s = s[2:len(s)]
s = (16-len(s))*'0' + s
s0 = s[0:len(s)/2]
s1 = s[len(s)/2:len(s)]
s0 = s0[::-1]
s1 = s1[::-1]
s_final = int(s0 + s1, 2)
position = s_final
print bin(s_final)
return s_final
build_message(FG_R)
</code></pre>
<p>The error I get is:
UnboundLocalError: local variable 'position' referenced before assignment</p>
| 0 | 2016-09-27T10:34:36Z | 39,722,567 | <p>If you are using an outside variable into a function maybe you should consider passing it as an argument, like:</p>
<pre><code>def build_message(signal,position):
pass
</code></pre>
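<p>A simplified sketch of how that could be used (the bit-reshuffling from the question is left out; only the pass-in/return pattern is shown):</p>
<pre><code>FG_R = 8        # as defined in the question
position = 0

def build_message(signal, position):
    message = position | (0b1 << signal)
    return message

position = build_message(FG_R, position)   # the caller keeps the updated state
print bin(position)
</code></pre>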
| 0 | 2016-09-27T10:43:03Z | [
"python"
]
|
Django error while saving Foreign Key - Can not assign none , Doesn't allow null values | 39,722,410 | <p>I have two models - Events and Tenants. </p>
<pre><code>class EventsModel(models.Model):
sys_id = models.AutoField(primary_key=True, null=False, blank=True)
tenant_sys_id = models.ForeignKey('tenant.TenantModel', on_delete = models.CASCADE, null=False, blank=True)
name = models.CharField(max_length=80, null=False, blank=False)
start_date_time = models.DateTimeField(null=False, blank=False)
end_date_time = models.DateTimeField(null=False, blank=False)
created_when = models.DateTimeField(null=False, blank=True)
def save(self, *args, **kwargs):
self.created_when = timezone.now()
self.tenant_sys_id = 1 #for-testing-only. actual current_logged_in_user.tenant_sys_id
return super(EventsModel, self).save(*args, **kwargs)
</code></pre>
<p>This model has a FK tenant_sys_id to the tenant model. If I set null = False for this field, the error below is thrown.</p>
<pre><code>ValueError('Cannot assign None: "EventsModel.tenant_sys_id" does not allow null values.',)
</code></pre>
<p>I am setting the value of this field in the overridden save method from the logged-in user details.
Setting null = True doesn't throw this error.</p>
<p>However, if I do the same for the created_when field this behavior is not shown. Irrespective of null=True/False, the value is set from the save method and saved.</p>
<p>I am getting this error in View File.</p>
<pre><code>form = EventsForm(request.POST)
print(request.POST) # print
print(form) # do not print. error here <--
if form.is_valid(): # error here if above line is commented.
form.save()
</code></pre>
<p>ModelForm class</p>
<pre><code>class EventsForm(ModelForm):
class Meta:
model = EventsModel
fields = '__all__'
</code></pre>
<p>So here I am not able to understand:<br>
1. Why must I set the FK to null=True, but not created_when, in order for this to work?<br>
2. Why is it throwing the error while creating the form and not while saving the data?</p>
<p><strong>Update</strong>:
Anybody downvoting the question - please at least mention in the comments why you think this question is not asked in a proper way.</p>
| -1 | 2016-09-27T10:34:39Z | 39,722,792 | <p>Your ForeignKey should be called <code>tenant_sys</code>, not <code>tenant_sys_id</code>. Django will automatically add the <code>_id</code> suffix to the underlying database field, but the value you have in Python is actually an instance of the target model, not an ID.</p>
<p>With the code you have, you are still not setting the db value; you set the object, which Django will actually convert into <code>tenant_sys_id_id</code>, but the db column remains empty, hence the integrity error. The reason why there is no form validation error is because you also have <code>blank=True</code> on the field definition.</p>
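<p>A minimal sketch of the renamed field and the save override (fields trimmed to the relevant ones, keeping the hard-coded test value from the question):</p>
<pre><code>class EventsModel(models.Model):
    sys_id = models.AutoField(primary_key=True)
    tenant_sys = models.ForeignKey('tenant.TenantModel', on_delete=models.CASCADE, blank=True)
    created_when = models.DateTimeField(blank=True)

    def save(self, *args, **kwargs):
        self.created_when = timezone.now()
        self.tenant_sys_id = 1  # sets the raw FK column; tenant_sys itself would take a TenantModel instance
        return super(EventsModel, self).save(*args, **kwargs)
</code></pre>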
| 0 | 2016-09-27T10:54:11Z | [
"python",
"django",
"django-models",
"foreign-keys"
]
|
Error logged while trying to run Robot Test in IE11 - WebDriverException: 404 - File or directory not found | 39,722,647 | <p>I'm trying to execute certain tests in Robot framework - tests that are working fine with Chrome using the Chrome Driver.</p>
<p>Once execution starts, I get an exception in the command prompt and execution stops without the browser even opening (Attaching the log.html file screenshot and snippet)</p>
<pre><code>Suite setup failed:
WebDriverException: Message: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">;
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>404 - File or directory not found.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;
background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>404 - File or directory not found.</h2>
<h3>The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.</h3>
</fieldset></div>
</div>
</body>
</html>
</code></pre>
<hr>
<p>Specifications:</p>
<ol>
<li>IE 11</li>
<li>IEDriverServer.exe for 64 bit machine</li>
<li>Windows 7 64 bit system</li>
<li><strong>Selenium version</strong> 2.53</li>
<li><strong>Robot version</strong> 3.0</li>
<li><strong>Python version</strong> 2.7.11</li>
<li><strong>IEDriverServer</strong> - 2.53.1</li>
</ol>
<hr>
<p>Been stuck with this issue for the past 4 days. I tried searching many forums and pages on the internet with all possible combinations of keywords, but no luck. :(</p>
<p>Please, tell me how I can proceed with this issue.</p>
<p>Thanks.</p>
<p><a href="http://i.stack.imgur.com/geymk.jpg" rel="nofollow">WebdriverException - IEDriverServer</a></p>
<p>I don't have a separate Test Setup. I just use a Suite Setup as below: </p>
<pre><code>*** Settings ***
Documentation Execution: Performance Validations
Resource Performance_Tests_Resource.txt
Suite Setup Open Web Browser
Suite Teardown Close Browser
</code></pre>
<p>where my <em>Open Web Browser</em> keyword has the following: </p>
<pre><code>Open Web Browser
Open Browser ${URLGoogle} ${BROWSER}
Maximize Browser Window
</code></pre>
<p>and I pass ${BROWSER} as a command line variable</p>
<p><strong>EDIT</strong> - The problem seems to be with IE11, because if I uninstall the IE11 and IE10 updates and try to start execution with IE8, it works just fine!</p>
| -1 | 2016-09-27T10:46:38Z | 39,738,517 | <blockquote>
<p>For IE 11 only, you will need to set a registry entry on the target computer so that the driver can maintain a connection to the instance of Internet Explorer it creates. For 32-bit Windows installations, the key you must examine in the registry editor is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BFCACHE. For 64-bit Windows installations, the key is HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BFCACHE. Please note that the FEATURE_BFCACHE subkey may or may not be present, and should be created if it is not present. Important: Inside this key, create a DWORD value named iexplore.exe with the value of 0.</p>
</blockquote>
<p>Source <a href="https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver#required-configuration" rel="nofollow">https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver#required-configuration</a></p>
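<p>If you prefer to script that registry change, a hedged sketch for Python 2.7 on 64-bit Windows could look like this (it must be run with administrator rights, and the key path is the Wow6432Node one quoted above):</p>
<pre><code>import _winreg as winreg

key_path = r"SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BFCACHE"
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path)   # creates the key if missing
winreg.SetValueEx(key, "iexplore.exe", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)
</code></pre>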
| 0 | 2016-09-28T05:11:32Z | [
"python",
"selenium",
"internet-explorer-11",
"robotframework",
"selenium-iedriver"
]
|
Python error: Index out of range | 39,722,762 | <pre><code>seq_sum = []
for i in range(len(sequence)):
seq_sum[i] = sequence[i] + inv_sequence[i]
print (seq_sum)
</code></pre>
<p>When I try to run this code it returns an error: list assignment index out of range. How can I fix the problem?
sequence and inv_sequence are arrays of integers.</p>
| -1 | 2016-09-27T10:52:40Z | 39,722,790 | <p><code>seq_sum[i]</code> will raise an <code>IndexError</code> as the <code>seq_sum</code> list is empty. You should use <code>append</code> instead:</p>
<pre><code>seq_sum = []
for i in range(len(sequence)):
seq_sum.append(sequence[i] + inv_sequence[i])
print(seq_sum)
</code></pre>
<p>You can achieve the same result with a prettier code using list comprehension:</p>
<pre><code>seq_sum = [seq_elem + inv_elem for seq_elem, inv_elem in zip(sequence, inv_sequence)]
</code></pre>
<p>You could also use <code>map</code> but some would argue its readability:</p>
<pre><code>import operator
seq_sum = list(map(operator.add, sequence, inv_sequence))
</code></pre>
| 8 | 2016-09-27T10:54:07Z | [
"python",
"python-3.x"
]
|
Python error: Index out of range | 39,722,762 | <pre><code>seq_sum = []
for i in range(len(sequence)):
seq_sum[i] = sequence[i] + inv_sequence[i]
print (seq_sum)
</code></pre>
<p>When I try to run this code it returns an error: list assignment index out of range. How can I fix the problem?
sequence and inv_sequence are arrays of integers.</p>
| -1 | 2016-09-27T10:52:40Z | 39,722,819 | <p>You've declared <code>seq_sum</code> to be an empty list. You then try and index in a position other than <code>0</code> which results in an <code>IndexError</code>.</p>
<p>Expanding a list to make it larger is essentially done with <code>append</code>ing, <code>extend</code>ing or <code>slice</code> assignments. Since you <em>sequentially</em> access elements, <code>seq_num.append</code> is the best way to go about this. </p>
<p>That is:</p>
<pre><code>seq_sum[i] = sequence[i] + inv_sequence[i]
</code></pre>
<p>Should be instead changed to:</p>
<pre><code>seq_sum.append(sequence[i] + inv_sequence[i])
</code></pre>
| 1 | 2016-09-27T10:55:09Z | [
"python",
"python-3.x"
]
|
Merging all rows with common value | 39,722,874 | <p>Here's what I want to do using Python:</p>
<p><code>file1.csv</code> contains:</p>
<pre><code>Code,Expenditure
1,Meal
2,Taxi
3,Apartment
4,Laundry
</code></pre>
<p><code>file2.csv</code> contains:</p>
<pre><code>Code,Amount
1,150
2,90
2,100
2,85
3,5000
</code></pre>
<p>Now I want to merge them into another file (<code>output.csv</code>) that will look like this:</p>
<pre><code>Code,Expenditure,Amount
1,Meal,150
2,Taxi,90
2,Taxi,100
2,Taxi,85
3,Apartment,5000
4,Laundry,
</code></pre>
<p>Any help or suggestions will be greatly appreciated!</p>
| -1 | 2016-09-27T10:57:21Z | 39,723,771 | <p>Read about file handling and python basics. As @Scott hunter told, you should try something, if you face any problems you can ask your question with your code here.</p>
<p>Anyway, <strong>Here is what you need,</strong></p>
<pre><code>a = open('file1.csv', 'r').readlines()
b = open('file2.csv', 'r').readlines()
for i in a:
    i = i.rstrip().split(',')      # e.g. ['1', 'Meal']
    matched = False
    for j in b:
        j = j.rstrip().split(',')
        if i[0] == j[0]:
            matched = True
            print ','.join((i[0], i[1], j[1]))
    if not matched:                # codes with no amount, e.g. Laundry
        print ','.join((i[0], i[1], ''))
</code></pre>
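<p>If you are open to using pandas, a left merge does the same join and also keeps codes with no matching amount, such as Laundry (a minimal sketch):</p>
<pre><code>import pandas as pd

df1 = pd.read_csv('file1.csv')
df2 = pd.read_csv('file2.csv')
df1.merge(df2, on='Code', how='left').to_csv('output.csv', index=False)
</code></pre>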
| -1 | 2016-09-27T11:41:59Z | [
"python",
"csv",
"merge"
]
|
Are Python modules compiled? | 39,722,984 | <p>Trying to understand whether python libraries are compiled because I want to know if the interpreted code I write will perform the same or worse.</p>
<p>e.g. I saw it mentioned somewhere that numpy and scipy are efficient because they are compiled. I don't think this means byte code compiled so how was this done? Was it compiled to c using something like cython? Or was it written using a language like c and compiled in a compatible way?</p>
<p>Does this apply to all modules or is it on a case-by-case basis?</p>
| 1 | 2016-09-27T11:02:46Z | 39,723,231 | <p>NumPy and several other libraries are partly wrappers for code written in C and other languages like FORTRAN, which when compiled will run faster than Python. This helps by avoiding the cost of loops, pointer indirection and per-element dynamic type checking in Python. This is explained in <a href="http://stackoverflow.com/questions/8385602/why-are-numpy-arrays-so-fast">this question</a>:</p>
<blockquote>
<p>Numpy arrays are densely packed arrays of homogeneous type. Python lists, by contrast, are arrays of pointers to objects, even when all of them are of the same type. So, you get the benefits of locality of reference.</p>
<p>Also, many Numpy operations are implemented in C, avoiding the general cost of loops in Python, pointer indirection and per-element dynamic type checking. The speed boost depends on which operations you're performing, but a few orders of magnitude isn't uncommon in number crunching programs.</p>
</blockquote>
<p>Python code that is compiled to bytecode (.pyc files) is a separate topic, in which python scripts are compiled to increase startup performance (see <a href="http://stackoverflow.com/questions/471191/why-compile-python-code">this question</a>).</p>
| 4 | 2016-09-27T11:15:23Z | [
"python",
"numpy",
"compilation"
]
|
Are Python modules compiled? | 39,722,984 | <p>Trying to understand whether python libraries are compiled because I want to know if the interpreted code I write will perform the same or worse.</p>
<p>e.g. I saw it mentioned somewhere that numpy and scipy are efficient because they are compiled. I don't think this means byte code compiled so how was this done? Was it compiled to c using something like cython? Or was it written using a language like c and compiled in a compatible way?</p>
<p>Does this apply to all modules or is it on a case-by-case basis?</p>
| 1 | 2016-09-27T11:02:46Z | 39,728,900 | <p>Python can execute functions written in Python (interpreted) and compiled functions. There are whole API docs about writing code for integration with Python. <code>cython</code> is one of the easier tools for doing this. </p>
<p>Libraries can be any combination - pure Python, Python plus interfaces to compiled code, or all compiled. The interpreted files end with <code>.py</code>, the compiled stuff usually is <code>.so</code> or <code>.dll</code> (depending on the operating system). It's easy to install pure Python code - just load, unzip if needed, and put the right directory. Mixed code requires a compilation step (and hence a c compiler, etc), or downloading a version with binaries.</p>
<p>Typically developers get the code working in Python, and then rewrite speed sensitive portions in <code>c</code>. Or they find some external library of working <code>c</code> or <code>Fortran</code> code, and link to that.</p>
<p><code>numpy</code> and <code>scipy</code> are mixed. They have lots of Python code, core compiled portions, and use external libraries. And the <code>c</code> code can be extraordinarily hard to read.</p>
<p>As a <code>numpy</code> user, you should first try to get as much clarity and performance with Python code. Most of the optimization SO questions discuss ways of making use of the compiled functionality of <code>numpy</code> - all the operations that work on whole arrays. It's only when you can't express your operations in efficient numpy code that you need to resort to using a tool like <code>cython</code> or <code>numba</code>.</p>
<p>In general if you have to iterate extensively then you are using low level operations. Either replace the loops with array operations, or rewrite the loop in cython.</p>
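<p>A small illustration of that last point (a toy example, not from any particular question):</p>
<pre><code>import numpy as np

a = np.random.rand(10**6)

def python_sum(arr):
    total = 0.0
    for x in arr:      # interpreted loop: one Python-level iteration per element
        total += x
    return total

total_fast = a.sum()   # a single call into numpy's compiled C loop
</code></pre>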
| 1 | 2016-09-27T15:40:30Z | [
"python",
"numpy",
"compilation"
]
|
Python: coursera assignments - Dictionary chapter | 39,722,987 | <p>I'm trying to solve the assignments from coursera - Python.</p>
<p>Write a program to read through the mbox-short.txt and figure out who has sent the greatest number of mail messages. The program looks for 'From ' lines and takes the second word of those lines as the person who sent the mail. The program creates a Python dictionary that maps the sender's mail address to a count of the number of times they appear in the file. After the dictionary is produced, the program reads through the dictionary using a maximum loop to find the most prolific committer.</p>
<pre><code>name = raw_input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
handle = open(name)
lst = list()
for line in handle:
line = line.strip()
if line.startswith("From"):
words = line.split()
email = words[1]
lst.append(email)
dct = dict()
for email in lst:
dct[email] = dct.get(email,0)+1
bigcount = None
email_address = None
for key,value in dct.items():
if bigcount is None or value > bigcount:
bigcount = value
email_address = key
print email_address, bigcount
</code></pre>
<p>My code runs, but the desired output should be: cwen@iupui.edu 5 - yet somehow I get a "doubled" output: cwen@iupui.edu 10</p>
<p>Has anyone solved this problem? Could you please give me a hint about what I have missed? Thank you a lot!</p>
| 0 | 2016-09-27T11:03:00Z | 39,723,564 | <p>Check line 3726 and 3763. There is a colon after <strong>From</strong>. I think you are missing that while searching manually.</p>
<blockquote>
<p>Line 3726 From cwen@iupui.edu Thu Jan 3 16:23:48 2008 </p>
<p>Line 3763 From: cwen@iupui.edu</p>
</blockquote>
<p>Otherwise the code is correct - it is showing the output its logic produces: <code>startswith("From")</code> matches both the <code>From </code> and <code>From:</code> lines, which is why the count is doubled.</p>
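<p>If only the <code>From </code> envelope lines should be counted (which is what the assignment describes), a one-character change in the test is enough - note the trailing space:</p>
<pre><code>    if line.startswith("From "):   # the trailing space skips the "From:" header lines
</code></pre>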
| 0 | 2016-09-27T11:32:27Z | [
"python",
"dictionary"
]
|
Python: coursera assignments - Dictionary chapter | 39,722,987 | <p>I'm trying to solve the assignments from coursera - Python.</p>
<p>Write a program to read through the mbox-short.txt and figure out who has sent the greatest number of mail messages. The program looks for 'From ' lines and takes the second word of those lines as the person who sent the mail. The program creates a Python dictionary that maps the sender's mail address to a count of the number of times they appear in the file. After the dictionary is produced, the program reads through the dictionary using a maximum loop to find the most prolific committer.</p>
<pre><code>name = raw_input("Enter file:")
if len(name) < 1 : name = "mbox-short.txt"
handle = open(name)
lst = list()
for line in handle:
line = line.strip()
if line.startswith("From"):
words = line.split()
email = words[1]
lst.append(email)
dct = dict()
for email in lst:
dct[email] = dct.get(email,0)+1
bigcount = None
email_address = None
for key,value in dct.items():
if bigcount is None or value > bigcount:
bigcount = value
email_address = key
print email_address, bigcount
</code></pre>
<p>My code runs, but the desired output should be: cwen@iupui.edu 5 - yet somehow I get a "doubled" output: cwen@iupui.edu 10</p>
<p>Has anyone solved this problem? Could you please give me a hint about what I have missed? Thank you a lot!</p>
 | 0 | 2016-09-27T11:03:00Z | 39,779,835 | <p>Thanks to all for the hints. I checked manually and the correct answer is 10, which means my code is OK. The desired output of "5" was indicated by the assignment from the Coursera course (which I am following).
Thanks a lot for the help! </p>
| 0 | 2016-09-29T20:53:43Z | [
"python",
"dictionary"
]
|
How to extract date from data frame column? | 39,723,061 | <p>I have a data frame like this:</p>
<pre><code> month items
0 1962-01-01 589
1 1962-02-01 561
2 1962-03-01 640
3 1962-04-01 656
4 1962-05-01 723
</code></pre>
<p>I need to get the year or month from this data frame and create arrays, but I don't know how to do that.</p>
<p>expected result:</p>
<pre><code>years = [1962, 1962, 1962....]
monthes = [1, 2, 3, 4, 5.....]
</code></pre>
<p>Can you help me?</p>
| 2 | 2016-09-27T11:06:11Z | 39,723,167 | <p>Assuming this is <code>pandas</code> you may need to convert the month column to dtype <code>datetime</code> and then you can use <code>.dt</code> accessor for the year and month attributes:</p>
<pre><code>In [33]:
df['month'] = pd.to_datetime(df['month'])
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 5 entries, 0 to 4
Data columns (total 2 columns):
month 5 non-null datetime64[ns]
items 5 non-null int64
dtypes: datetime64[ns](1), int64(1)
memory usage: 120.0 bytes
In [35]:
years = df['month'].dt.year.tolist()
months = df['month'].dt.month.tolist()
print(years)
print(months)
[1962, 1962, 1962, 1962, 1962]
[1, 2, 3, 4, 5]
</code></pre>
| 2 | 2016-09-27T11:11:20Z | [
"python",
"python-2.7",
"pandas",
"dataframe"
]
|
Can't see image from project directory | 39,723,072 | <p>I tried to handle this but I gave up. I have a folder with images and I want to display an image in an HTML view but it won't work.</p>
<p>I followed this tutorial <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-files-uploaded-by-a-user-during-development" rel="nofollow">enter link description here</a></p>
<p>this my project tree:</p>
<p><a href="http://i.stack.imgur.com/nbegK.png" rel="nofollow"><img src="http://i.stack.imgur.com/nbegK.png" alt="enter image description here"></a></p>
<p>As you can see I tried to create plenty of directories to make it work. </p>
<p>This is my settings:</p>
<pre><code>STATIC_URL = '/static/'
# STATICFILES_DIRS = [
# os.path.join(BASE_DIR, "media"),
# '/webstore/',
# ]
MEDIA_ROOT = '/webstore/media/'
MEDIA_URL = '/media/'
</code></pre>
<p>this is my html view where I try to display my image</p>
<pre><code><img src="/media/example.jpg" />
</code></pre>
<p>this is my <code>urls.py</code> file</p>
<pre><code> from django.conf.urls import url, include
from django.contrib import admin
from djangoproject import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^', include('webstore.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
| 0 | 2016-09-27T11:06:39Z | 39,723,634 | <p><code>MEDIA_ROOT</code> is the absolute filesystem path to the directory that will hold user-uploaded files.</p>
<p><code>STATIC_ROOT</code> is the absolute filesystem path to the directory from which youâd like to serve these files.</p>
<p>Since, you would like to serve images, give the absolute path of your static directory to STATIC_ROOT.<br>
Give relative path with respect to your STATIC_ROOT in STATIC_URL.
Also, change your <code>urls.py</code> with the static_url and static_root.</p>
<p>Another suggestion: an easier way to display an image is to upload it to an image server like imgur and give the URL of the image in the HTML.
For example: </p>
<pre><code><img src="http://i.stack.imgur.com/nbegK.png" />
</code></pre>
| 1 | 2016-09-27T11:36:01Z | [
"python",
"django"
]
|
Can't see image from project directory | 39,723,072 | <p>I tried to handle this but I gave up. I have a folder with images and I want to display an image in an HTML view but it won't work.</p>
<p>I followed this tutorial <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-files-uploaded-by-a-user-during-development" rel="nofollow">enter link description here</a></p>
<p>this my project tree:</p>
<p><a href="http://i.stack.imgur.com/nbegK.png" rel="nofollow"><img src="http://i.stack.imgur.com/nbegK.png" alt="enter image description here"></a></p>
<p>As you can see I tried to create plenty of directories to make it work. </p>
<p>This is my settings:</p>
<pre><code>STATIC_URL = '/static/'
# STATICFILES_DIRS = [
# os.path.join(BASE_DIR, "media"),
# '/webstore/',
# ]
MEDIA_ROOT = '/webstore/media/'
MEDIA_URL = '/media/'
</code></pre>
<p>this is my html view where I try to display my image</p>
<pre><code><img src="/media/example.jpg" />
</code></pre>
<p>this is my <code>urls.py</code> file</p>
<pre><code> from django.conf.urls import url, include
from django.contrib import admin
from djangoproject import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^', include('webstore.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
| 0 | 2016-09-27T11:06:39Z | 39,724,404 | <p>The only thing you have to do is</p>
<pre><code>MEDIA_ROOT = '/absolutepath/to/djangoproject/webstore/media/'
</code></pre>
<p>Then you have already</p>
<pre><code>MEDIA_URL = '/media/'
</code></pre>
<p>and try </p>
<pre><code><img src="/media/example.jpg" />
</code></pre>
| 1 | 2016-09-27T12:15:35Z | [
"python",
"django"
]
|
Can't see image from project directory | 39,723,072 | <p>I tried to handle this but I gave up. I have a folder with images and I want to display an image in an HTML view but it won't work.</p>
<p>I followed this tutorial <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-files-uploaded-by-a-user-during-development" rel="nofollow">enter link description here</a></p>
<p>this my project tree:</p>
<p><a href="http://i.stack.imgur.com/nbegK.png" rel="nofollow"><img src="http://i.stack.imgur.com/nbegK.png" alt="enter image description here"></a></p>
<p>As you can see I tried to create plenty of directories to make it work. </p>
<p>This is my settings:</p>
<pre><code>STATIC_URL = '/static/'
# STATICFILES_DIRS = [
# os.path.join(BASE_DIR, "media"),
# '/webstore/',
# ]
MEDIA_ROOT = '/webstore/media/'
MEDIA_URL = '/media/'
</code></pre>
<p>this is my html view where I try to display my image</p>
<pre><code><img src="/media/example.jpg" />
</code></pre>
<p>this is my <code>urls.py</code> file</p>
<pre><code> from django.conf.urls import url, include
from django.contrib import admin
from djangoproject import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^', include('webstore.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
| 0 | 2016-09-27T11:06:39Z | 39,724,655 | <p><code>MEDIA_URL</code> is the base URL to serve the media files uploaded by users, and <code>MEDIA_ROOT</code> is the local path where they reside.</p>
<p>so try to use it on your <code>setting.py</code></p>
<pre><code>MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
</code></pre>
<p>and on your main <code>urls.py</code></p>
<pre><code>urlpatterns = [
....
]
if settings.DEBUG: urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>and should the entire image will be saved in your <code>djangoproject/media/</code></p>
<p>one more thing don't forget to add <code>{% load staticfiles %}</code> on top your <code>html</code> file. <code>{% load staticfiles %}</code> tells Django to load the staticfiles template tags that are provided by the <code>django.contrib.staticfiles</code> application.</p>
| 1 | 2016-09-27T12:25:38Z | [
"python",
"django"
]
|
Python->BeautifulSoup->Webscraping->Drop Down Menu | 39,723,135 | <p>So I am trying to dump all of the reports from this website:</p>
<p><a href="https://www.treasurydirect.gov/govt/reports/tfmp/tfmp_utf.htm" rel="nofollow">https://www.treasurydirect.gov/govt/reports/tfmp/tfmp_utf.htm</a></p>
<p>State: All States (Not the Reed Act Benefit or Reed Act Admin)</p>
<p>Report: Transaction Statement</p>
<p>Month: All Months</p>
<p>Year: All Years</p>
<p>Looking at the Source Code of the website, I know that the state variables:</p>
<pre><code><form action="get" name="UtfReport">
<fieldset>
<table>
<tr>
<td>
<label for="states">State</label><br />
<select name="states" id="states" size="01">
<option value="al" selected>Alabama</option>
<option value="b2">Alabama Reed Act Benefit</option>
<option value="b3">Alabama Reed Act Admin</option>
<option value="ak">Alaska</option>
<option value="a2">Alaska Reed Act Benefit</option>
</code></pre>
<p>So I know that I need to create a list of strings like this:</p>
<pre><code>https://www.treasurydirect.gov/govt/reports/tfmp/utf/[a1]/dfiw00[116]tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/[a1]/dfiw00[216]tsar.txt
</code></pre>
<p>....</p>
<p>So here is my current approach: </p>
<pre><code>import requests, bs4
for i in range(1,13):
print('https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw0'+str(i).zfill(2),'16tsar.txt')
res = requests.get('https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw00216tsar.txt')
res.raise_for_status()
states = bs4.BeautifulSoup(res.text, 'lxml')
result1.append(res.text)
</code></pre>
<p>My effort to create the URL string ran into a problem as well, as this is the output from the code above (there is a space between dfiw00X and 16tsar.txt and I don't know why):</p>
<pre><code>https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw001 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw002 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw003 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw004 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw005 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw006 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw007 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw008 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw009 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw010 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw011 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw012 16tsar.txt
</code></pre>
<p>So My Question is: There must be a better way of doing this than the way I am currently trying, so if anyone can show me how, I would really appreciate it.</p>
<p>Thank you for your time,</p>
| 1 | 2016-09-27T11:09:55Z | 39,723,455 | <h1>A good first approach to scraping: Pattern Matching Heuristic</h1>
<p>What you want to do, at a high level, is this:</p>
<ol>
<li>Identify a pattern in the source.</li>
<li>Represent the nature of the pattern in the code.</li>
<li>Scrape according to that code.</li>
</ol>
<h3>I won't code the entire thing here, but outline the general approach I would take.</h3>
<p>A. Notice that there's a pattern to the way the reports are named. If there is a pattern, then we can assume it is possible to represent it in code.</p>
<p>B. Of primary interest is the last part of the url, <code>'/ar/dfiw00216tsar.txt'.</code></p>
<ol>
<li>/ar/ references the state</li>
<li>dfiw appears constant, at first glance</li>
<li>00216 references the date</li>
<li>tsar references the type of report</li>
<li>.txt appears constant, at first glance</li>
</ol>
<p>From here, we can know to build a dictionary of all the possible states, and a dictionary of all the possible report types, and iterate through all of those combinations, including date, in each iteration of the for loop, getting the url, then saving it or otherwise processing it as needed.</p>
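<p>A minimal sketch of that idea (the state codes, report suffixes and the exact filename template below are illustrative assumptions - the real values should be read off the site's form and its utfnav.js, as the other answer does):</p>
<pre><code>import requests

template = 'https://www.treasurydirect.gov/govt/reports/tfmp/utf/{state}/dfiw0{month:02d}{year}{report}.txt'
states = ['al', 'ak', 'ar']        # hypothetical subset of the dropdown values
reports = ['tsar']                 # transaction statement
years = ['15', '16']

for state in states:
    for report in reports:
        for year in years:
            for month in range(1, 13):
                url = template.format(state=state, month=month, year=year, report=report)
                res = requests.get(url)
                if res.ok:
                    pass           # save or process res.text as needed
</code></pre>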
| -1 | 2016-09-27T11:26:52Z | [
"python",
"drop-down-menu",
"web-scraping",
"beautifulsoup"
]
|
Python->BeautifulSoup->Webscraping->Drop Down Menu | 39,723,135 | <p>So I am trying to dump all of the reports from this website:</p>
<p><a href="https://www.treasurydirect.gov/govt/reports/tfmp/tfmp_utf.htm" rel="nofollow">https://www.treasurydirect.gov/govt/reports/tfmp/tfmp_utf.htm</a></p>
<p>State: All States (Not the Reed Act Benefit or Reed Act Admin)</p>
<p>Report: Transaction Statement</p>
<p>Month: All Months</p>
<p>Year: All Years</p>
<p>Looking at the Source Code of the website, I know that the state variables:</p>
<pre><code><form action="get" name="UtfReport">
<fieldset>
<table>
<tr>
<td>
<label for="states">State</label><br />
<select name="states" id="states" size="01">
<option value="al" selected>Alabama</option>
<option value="b2">Alabama Reed Act Benefit</option>
<option value="b3">Alabama Reed Act Admin</option>
<option value="ak">Alaska</option>
<option value="a2">Alaska Reed Act Benefit</option>
</code></pre>
<p>So I know that I need to create a list of strings like this:</p>
<pre><code>https://www.treasurydirect.gov/govt/reports/tfmp/utf/[a1]/dfiw00[116]tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/[a1]/dfiw00[216]tsar.txt
</code></pre>
<p>....</p>
<p>So here is my current approach: </p>
<pre><code>import requests, bs4
for i in range(1,13):
print('https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw0'+str(i).zfill(2),'16tsar.txt')
res = requests.get('https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw00216tsar.txt')
res.raise_for_status()
states = bs4.BeautifulSoup(res.text, 'lxml')
result1.append(res.text)
</code></pre>
<p>My effort to create the URL string ran into a problem as well, as this is the output from the code above (there is a space between dfiw00X and 16tsar.txt and I don't know why):</p>
<pre><code>https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw001 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw002 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw003 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw004 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw005 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw006 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw007 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw008 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw009 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw010 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw011 16tsar.txt
https://www.treasurydirect.gov/govt/reports/tfmp/utf/ar/dfiw012 16tsar.txt
</code></pre>
<p>So My Question is: There must be a better way of doing this than the way I am currently trying, so if anyone can show me how, I would really appreciate it.</p>
<p>Thank you for your time,</p>
| 1 | 2016-09-27T11:09:55Z | 39,724,429 | <p>You will have to do a little hardcoding, the requests is put together with the code in <a href="https://www.treasurydirect.gov/js/utfnav.js" rel="nofollow">utfnav.js</a>, the main part we are interested in is below:</p>
<pre><code>//assembles path to reports
ReportPath = "/govt/reports/tfmp/utf/"+StateName+"/dfi";
LinkData = (ReportPath+WeekName+MonthName+YearName+ReportName+StateName+".txt");
return true;
}
}
else
{
//displays when dates are not valid for report type selection
alert ("The requested report is not available at this time.");
return false;
}
}
function create(form){
//state selection
var index;
index = document.UtfReport.states.selectedIndex;
StateName = document.UtfReport.states.options[index].value;
//report selection
index = document.UtfReport.report.selectedIndex;
ReportName = document.UtfReport.report.options[index].value;
//Month selection
index = document.UtfReport.month.selectedIndex;
MonthName = document.UtfReport.month.options[index].value;
//Year selection
index = document.UtfReport.year.selectedIndex;
YearName = document.UtfReport.year.options[index].value;
//Week selection
WeekName = "w0"; # this is hardcoded even in Js
</code></pre>
<p>So we need to recreate that logic:</p>
<pre><code>import requests
# ReportPath = .. + LinkData = ...
temp = "https://www.treasurydirect.gov/govt/reports/tfmp/utf/{state}/dfiw0{mn:0>2}{yr}{rep_name}{state}.txt"
with requests.Session() as s:
soup = BeautifulSoup(s.get("https://www.treasurydirect.gov/govt/reports/tfmp/tfmp_utf.htm").content)
# StateName = document.UtfReport.states.options[index].value;
states = [opt["value"] for opt in soup.select("#states option") if " Reed " not in opt.text]
#YearName = document.UtfReport.year.options[index].value;
available_years = [opt["value"] for opt in soup.select("#year option")]
# ReportName = document.UtfReport.report.options[index].value;
report_name = soup.find(id="report").find("option", text="Transaction Statement")["value"]
for state in states:
for year in available_years:
# could do [opt["value"] for opt in soup.select("#month option")]
# but always 12 months in a year
for mnth in range(1, 13):
url = temp.format(state=state, rep_name=report_name, yr=year, mn=mnth)
print(s.get(url).text)
</code></pre>
<p>If you run it you will see output like:</p>
<pre><code>Final Report
Transaction Location
Effective Date Shares/Par Description Code Memo Number Code Account Number
--------------- ------------------------ ------------------------- ------------- -------- -------------------------
11-10 STATE DEPOSITS
01/04/2016 17,000.0000 11-10 STATE DEPOSITS 3308616 AL
01/05/2016 57,000.0000 11-10 STATE DEPOSITS 3308619 AL
01/06/2016 118,000.0000 11-10 STATE DEPOSITS 3308638 AL
01/07/2016 129,000.0000 11-10 STATE DEPOSITS 3308657 AL
01/08/2016 145,000.0000 11-10 STATE DEPOSITS 3308675 AL
01/11/2016 260,000.0000 11-10 STATE DEPOSITS 3308720 AL
01/12/2016 566,000.0000 11-10 STATE DEPOSITS 3308743 AL
01/13/2016 307,000.0000 11-10 STATE DEPOSITS 3308764 AL
01/14/2016 240,000.0000 11-10 STATE DEPOSITS 3308783 AL
01/15/2016 340,000.0000 11-10 STATE DEPOSITS 3308802 AL
01/19/2016 345,000.0000 11-10 STATE DEPOSITS 3308832 AL
01/20/2016 510,000.0000 11-10 STATE DEPOSITS 3308859 AL
01/21/2016 533,000.0000 11-10 STATE DEPOSITS 3308889 AL
01/22/2016 262,000.0000 11-10 STATE DEPOSITS 3308916 AL
01/25/2016 377,000.0000 11-10 STATE DEPOSITS 3308942 AL
01/26/2016 778,000.0000 11-10 STATE DEPOSITS 3308968 AL
01/27/2016 873,000.0000 11-10 STATE DEPOSITS 3308997 AL
01/28/2016 850,000.0000 11-10 STATE DEPOSITS 3309019 AL
01/29/2016 1,388,000.0000 11-10 STATE DEPOSITS 3309045 AL
01/29/2016 -6,997.0000 11-10 STATE DEPOSITS 3309069 AL AL
------------------------
8,088,003.0000
21-10 STATE UI WITHDRAWAL
01/04/2016 -183,550.0000 21-10 STATE UI WITHDRAWAL 3308617 AL AL
01/05/2016 -3,528,550.0000 21-10 STATE UI WITHDRAWAL 3308636 AL AL
01/06/2016 -333,800.0000 21-10 STATE UI WITHDRAWAL 3308655 AL AL
01/07/2016 -404,700.0000 21-10 STATE UI WITHDRAWAL 3308674 AL AL
01/08/2016 -276,600.0000 21-10 STATE UI WITHDRAWAL 3308717 AL AL
01/11/2016 -177,600.0000 21-10 STATE UI WITHDRAWAL 3308741 AL AL
01/12/2016 -3,207,250.0000 21-10 STATE UI WITHDRAWAL 3308760 AL AL
01/13/2016 -288,450.0000 21-10 STATE UI WITHDRAWAL 3308781 AL AL
01/14/2016 -192,050.0000 21-10 STATE UI WITHDRAWAL 3308800 AL AL
01/15/2016 -184,650.0000 21-10 STATE UI WITHDRAWAL 3308825 AL AL
01/19/2016 -3,115,900.0000 21-10 STATE UI WITHDRAWAL 3308855 AL AL
01/20/2016 -343,100.0000 21-10 STATE UI WITHDRAWAL 3308876 AL AL
01/21/2016 -187,750.0000 21-10 STATE UI WITHDRAWAL 3308906 AL AL
01/22/2016 -135,950.0000 21-10 STATE UI WITHDRAWAL 3308937 AL AL
01/25/2016 -136,000.0000 21-10 STATE UI WITHDRAWAL 3308963 AL AL
01/26/2016 -3,186,100.0000 21-10 STATE UI WITHDRAWAL 3308985 AL AL
01/27/2016 -310,500.0000 21-10 STATE UI WITHDRAWAL 3309014 AL AL
01/28/2016 -250,500.0000 21-10 STATE UI WITHDRAWAL 3309036 AL AL
01/29/2016 -147,300.0000 21-10 STATE UI WITHDRAWAL 3309066 AL AL
------------------------
-16,590,300.0000
34-10 BT FROM UI
01/22/2016 -63,394.0000 34-10 BT FROM UI 3308938 AL AL
01/29/2016 -19,169.0000 34-10 BT FROM UI 3309067 AL AL
------------------------
-82,563.0000
34-60 CWC OUT
01/08/2016 -2,577.9500 34-60 CWC OUT 3308718 HI AL
01/12/2016 -29,354.7300 34-60 CWC OUT 3308761 WY AL
01/12/2016 -4,186.2000 34-60 CWC OUT 3308762 NH AL
01/15/2016 -7,390.5700 34-60 CWC OUT 3308826 MT AL
01/15/2016 -34,003.1200 34-60 CWC OUT 3308827 WV AL
01/15/2016 -2,674.2900 34-60 CWC OUT 3308828 RI AL
01/15/2016 -12,695.3300 34-60 CWC OUT 3308829 NE AL
01/15/2016 -30,307.5600 34-60 CWC OUT 3308830 IN AL
01/20/2016 -115,833.7900 34-60 CWC OUT 3308879 VA AL
01/20/2016 -6,549.9200 34-60 CWC OUT 3308880 AK AL
01/20/2016 -10,316.4900 34-60 CWC OUT 3308881 ME AL
01/20/2016 -89,399.3900 34-60 CWC OUT 3308882 CA AL
01/25/2016 -10,015.5900 34-60 CWC OUT 3308966 MO AL
01/26/2016 -117.6100 34-60 CWC OUT 3308988 VT AL
01/26/2016 -17,058.7500 34-60 CWC OUT 3308989 NV AL
01/26/2016 -23,359.8400 34-60 CWC OUT 3308990 UT AL
01/26/2016 -21,240.3200 34-60 CWC OUT 3308991 OK AL
01/26/2016 -110,025.5800 34-60 CWC OUT 3308992 OH AL
01/26/2016 -87,745.5400 34-60 CWC OUT 3308993 MN AL
01/26/2016 -1,747.0500 34-60 CWC OUT 3308994 DE AL
01/28/2016 -439,500.8500 34-60 CWC OUT 3309039 TX AL
01/28/2016 -22,375.9600 34-60 CWC OUT 3309040 NC AL
01/28/2016 -49,726.7300 34-60 CWC OUT 3309041 MS AL
01/28/2016 -54,329.9400 34-60 CWC OUT 3309042 MA AL
01/28/2016 -221,805.0100 34-60 CWC OUT 3309043 GA AL
------------------------
-1,404,338.1100
</code></pre>
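<p>Not every state/month combination is guaranteed to have a report, so it may be worth guarding the request before printing. A minimal sketch (assuming the server answers missing files with a non-200 status), replacing the last line of the loop above:</p>
<pre><code>resp = s.get(url)
if resp.status_code == 200 and resp.text.strip():
    print(resp.text)
</code></pre>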
| 2 | 2016-09-27T12:16:17Z | [
"python",
"drop-down-menu",
"web-scraping",
"beautifulsoup"
]
|
Django Standalone Script | 39,723,310 | <p>I am trying to access my Django (v1.10) app DB from another python script and having some trouble doing so.</p>
<p>This is my file and folder structure:</p>
<pre><code>store
    store
        __init__.py
        settings.py
        urls.py
        wsgi.py
    store_app
        __init__.py
        admin.py
        apps.py
        models.py
        ...
    db.sqlite3
    manage.py
    other_script.py
</code></pre>
<p>In accordance with <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage" rel="nofollow">Django's documentations</a> my <code>other_script.py</code> looks like this:</p>
<pre><code>import django
from django.conf import settings
settings.configure(DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>But it generates a runtime error:</p>
<pre><code>RunTimeError: Model class store.store_app.models.MyModel doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
</code></pre>
<p>I should note that my <code>INSTALLED_APPS</code> list contains <code>store_app</code> as its last element.</p>
<p>If instead I try to pass a config like this:</p>
<pre><code>import django
from django.conf import settings
from store.store_app.apps import StoreAppConfig
settings.configure(StoreAppConfig, DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>I get:</p>
<pre><code>AttributeError: type object 'StoreAppConfig has no attribute 'LOGGING_CONFIG'.
</code></pre>
<p>If I edit <code>settings.py</code> and add <code>LOGGING_CONFIG=None</code> I get another error about another missing attribute, and so on.</p>
<p>Any suggestions will be appreciated. </p>
 | 1 | 2016-09-27T11:19:54Z | 39,723,527 | <p>Try importing from <code>store_app.models</code> instead: the surrounding <code>store</code> folder (the project root) is not a Python package and should not be part of the import path.</p>
<pre><code>import django
from django.conf import settings
settings.configure(DEBUG=True)
django.setup()
from store_app.models import MyModel
</code></pre>
<p>Update: I just noticed that you put the script next to your project folder; you should move it inside the project for this import to work.</p>
| 0 | 2016-09-27T11:30:38Z | [
"python",
"django"
]
|
Django Standalone Script | 39,723,310 | <p>I am trying to access my Django (v1.10) app DB from another python script and having some trouble doing so.</p>
<p>This is my file and folder structure:</p>
<pre><code>store
    store
        __init__.py
        settings.py
        urls.py
        wsgi.py
    store_app
        __init__.py
        admin.py
        apps.py
        models.py
        ...
    db.sqlite3
    manage.py
    other_script.py
</code></pre>
<p>In accordance with <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage" rel="nofollow">Django's documentations</a> my <code>other_script.py</code> looks like this:</p>
<pre><code>import django
from django.conf import settings
settings.configure(DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>But it generates a runtime error:</p>
<pre><code>RunTimeError: Model class store.store_app.models.MyModel doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
</code></pre>
<p>I should note that my <code>INSTALLED_APPS</code> list contains <code>store_app</code> as its last element.</p>
<p>If instead I try to pass a config like this:</p>
<pre><code>import django
from django.conf import settings
from store.store_app.apps import StoreAppConfig
settings.configure(StoreAppConfig, DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>I get:</p>
<pre><code>AttributeError: type object 'StoreAppConfig has no attribute 'LOGGING_CONFIG'.
</code></pre>
<p>If I edit <code>settings.py</code> and add <code>LOGGING_CONFIG=None</code> I get another error about another missing attribute, and so on.</p>
<p>Any suggestions will be appreciated. </p>
 | 1 | 2016-09-27T11:19:54Z | 39,723,977 | <p>This sounds like a great use case for <a href="https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/" rel="nofollow">Django management commands</a>, which have the added bonus that you can run them on a schedule from cron, directly from the command line, or call them from inside Django. This gives the script full access to the same settings and environment variables as your main project. </p>
<p>If you move this into an appropriate directory - using store here as an example, not a suggestion - it should work. Note that management commands must live inside a <code>management/commands/</code> package: </p>
<pre><code>store
    management
        __init__.py
        commands
            __init__.py
            otherscript.py
</code></pre>
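<p>A minimal command skeleton might then look like this (a sketch; the command body here is purely illustrative, reusing <code>MyModel</code> from the question):</p>
<pre><code>from django.core.management.base import BaseCommand

from store_app.models import MyModel


class Command(BaseCommand):
    help = "Example command that works with the app's models"

    def handle(self, *args, **options):
        for obj in MyModel.objects.all():
            self.stdout.write(str(obj))
</code></pre>
<p>After that it can be run with <code>python manage.py otherscript</code>.</p>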
| 2 | 2016-09-27T11:53:56Z | [
"python",
"django"
]
|
Django Standalone Script | 39,723,310 | <p>I am trying to access my Django (v1.10) app DB from another python script and having some trouble doing so.</p>
<p>This is my file and folder structure:</p>
<pre><code>store
    store
        __init__.py
        settings.py
        urls.py
        wsgi.py
    store_app
        __init__.py
        admin.py
        apps.py
        models.py
        ...
    db.sqlite3
    manage.py
    other_script.py
</code></pre>
<p>In accordance with <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage" rel="nofollow">Django's documentations</a> my <code>other_script.py</code> looks like this:</p>
<pre><code>import django
from django.conf import settings
settings.configure(DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>But it generates a runtime error:</p>
<pre><code>RunTimeError: Model class store.store_app.models.MyModel doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
</code></pre>
<p>I should note that my <code>INSTALLED_APPS</code> list contains <code>store_app</code> as its last element.</p>
<p>If instead I try to pass a config like this:</p>
<pre><code>import django
from django.conf import settings
from store.store_app.apps import StoreAppConfig
settings.configure(StoreAppConfig, DEBUG=True)
django.setup()
from store.store_app.models import MyModel
</code></pre>
<p>I get:</p>
<pre><code>AttributeError: type object 'StoreAppConfig has no attribute 'LOGGING_CONFIG'.
</code></pre>
<p>If I edit <code>settings.py</code> and add <code>LOGGING_CONFIG=None</code> I get another error about another missing attribute, and so on.</p>
<p>Any suggestions will be appreciated. </p>
| 1 | 2016-09-27T11:19:54Z | 39,724,171 | <p>Try this</p>
<pre><code>import sys, os, django
sys.path.append("/path/to/store")  # "store" here is the outer project folder, i.e. the parent that contains manage.py
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "store.settings")
django.setup()
from store_app.models import MyModel
</code></pre>
<p>You can then run this script from anywhere on your system.</p>
| 0 | 2016-09-27T12:03:52Z | [
"python",
"django"
]
|
How to inherit Base class with singleton in python | 39,723,398 | <p>I have a base class which is a singleton. I need to inherit from it in another class, but I get the error message:
TypeError: Error when calling the metaclass bases
function() argument 1 must be code, not str</p>
<p>Can someone help with this? Below is sample code.</p>
<pre><code>def singleton(cls):
instances = {}
def getinstance():
if cls not in instances:
instances[cls] = cls()
return instances[cls]
return getinstance
@singleton
class ClassOne(object):
def methodOne(self):
print "Method One"
def methodTwo(self):
print "Method Two"
class ClassTwo(ClassOne):
pass
</code></pre>
| 1 | 2016-09-27T11:23:56Z | 39,726,451 | <p>You must make the <code>singleton</code> a class instead of a function for derivation to work. Here is an example that has been tested on both Python 2.7 and 3.5:</p>
<pre><code>class singleton(object):
instances = {}
def __new__(cls, clz = None):
if clz is None:
# print ("Creating object for", cls)
if not cls.__name__ in singleton.instances:
singleton.instances[cls.__name__] = \
object.__new__(cls)
return singleton.instances[cls.__name__]
# print (cls.__name__, "creating", clz.__name__)
singleton.instances[clz.__name__] = clz()
singleton.first = clz
return type(clz.__name__, (singleton,), dict(clz.__dict__))
</code></pre>
<p>If you use this with your example classes:</p>
<pre><code>@singleton
class ClassOne(object):
def methodOne(self):
print "Method One"
def methodTwo(self):
print "Method Two"
class ClassTwo(ClassOne):
pass
</code></pre>
<p>classes <code>ClassOne</code> and <code>ClassTwo</code> will both be singletons.</p>
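<p>A quick check (not part of the original answer) illustrates the behaviour:</p>
<pre><code>a1 = ClassOne()
a2 = ClassOne()
b1 = ClassTwo()
print(a1 is a2)          # True - ClassOne always hands back the same instance
print(b1 is ClassTwo())  # True - so does ClassTwo
print(a1 is b1)          # False - each class keeps its own single instance
</code></pre>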
<p>Beware, it is uncommon to inherit from a singleton class.</p>
| 0 | 2016-09-27T13:48:56Z | [
"python",
"python-2.7"
]
|
python subprocess: order of output changes when using subprocess.PIPE | 39,723,437 | <p>When I write a python script called <code>outer.py</code> containing</p>
<pre><code>p = subprocess.Popen(['./inner.py'])
print('Called inner.py without options, waiting for process...')
p.wait()
print('Waited for inner.py without options')
p = subprocess.Popen(['./inner.py'], stdout=subprocess.PIPE)
print('Called inner.py with PIPE, communicating...')
b_out, b_err = p.communicate()
out = b_out.decode('utf8')
print('out is "{}"'.format(out))
</code></pre>
<p>And an <code>inner.py</code> containing</p>
<pre><code>print("inner: Echoing Hallo")
p = subprocess.Popen(['echo', 'hallo'])
print("inner: Waiting for Echo to finish...")
p.wait()
print("inner: Waited for Echo")
</code></pre>
<p>I get the following when calling <code>outer.py</code> from a terminal:</p>
<pre><code>Called inner.py without options, waiting for process...
inner: Echoing Hallo
inner: Waiting for Echo to finish...
hallo
inner: Waited for Echo
Waited for inner.py without options
Called inner.py with PIPE, communicating...
out is "hallo
inner: Echoing Hallo
inner: Waiting for Echo to finish...
inner: Waited for Echo
"
</code></pre>
<p>Why, when calling <code>inner.py</code> with <code>stdout=subprocess.PIPE</code>, does the "hallo" appear before the "inner: Echoing Hallo" in the captured output?</p>
| 2 | 2016-09-27T11:25:58Z | 39,724,806 | <p>I would guess that, for some reason (related to pipes vs. ttys, see <a href="http://stackoverflow.com/questions/107705/disable-output-buffering/107717#comment24604506_107717">this comment</a>), the output of the <code>inner.py</code> Python process is unbuffered the first time you call it, and buffered the second time you call it. The first time, with unbuffered output, you get the result in the expected order written to your tty. The second time, with buffering, the output from the <code>echo</code> command gets flushed first (because <code>echo</code> runs and terminates), and then all of the output from the <code>inner.py</code> process shows up at once, when <code>python</code> terminates. If you disable output buffering for <code>inner.py</code>, you should get the same output in both cases.</p>
<p>Disable output buffering by setting the <code>PYTHONUNBUFFERED</code> environment variable, or by calling python with the <code>-u</code> switch, or by explicitly calling <code>sys.stdout.flush()</code> after every <code>print</code> (or <code>print(..., flush=True)</code> on Python 3).</p>
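<p>For example, in <code>outer.py</code> either of these variants should give you unbuffered output from the child (a sketch):</p>
<pre><code>import os
import subprocess
import sys

# option 1: run the child interpreter with -u
p = subprocess.Popen([sys.executable, '-u', 'inner.py'], stdout=subprocess.PIPE)

# option 2: set PYTHONUNBUFFERED in the child's environment
env = dict(os.environ, PYTHONUNBUFFERED='1')
p = subprocess.Popen(['./inner.py'], stdout=subprocess.PIPE, env=env)
</code></pre>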
<p>The difference between the behaviour of pipes and ttys seems to be a <a href="https://www.turnkeylinux.org/blog/unix-buffering" rel="nofollow">general behaviour of <code>stdio</code></a>: output to ttys is line-buffered (so, in your code, which reads line by line, it will seem to be unbuffered), whereas output to pipes is buffered.</p>
| 1 | 2016-09-27T12:33:26Z | [
"python",
"order",
"pipe",
"subprocess",
"python-3.5"
]
|
Unable to install Xvfb on Suse Linux | 39,723,439 | <p>I am trying to run the selenium-webdriver using the python library on Suse 11.4 (64-bit)
For it to run headlessly, it requires another python package "pyvirtualdisplay" to run. I have been able to install both perfectly.
The problem now is that pyvirtualdisplay requires a system level package called Xvfb which is not installed.
Zypper is unavailable and i need to install this using a tarball or an rpm.
Please guide me as to how i can install this library and where i can find it. I have been unable to find a suitable Xvfb package that I can install on this distribution of Linux.</p>
| -1 | 2016-09-27T11:25:59Z | 39,729,149 | <p>According to <a href="http://unix.stackexchange.com/questions/11364/installing-xvfb-on-suse">this</a> post Xvfb is provided by the <code>xorg-x11-server</code> package. Since you dont have access to zypper you can download that package directly from <a href="https://software.opensuse.org/package/xorg-x11-server" rel="nofollow">here</a> (<a href="http://download.opensuse.org/repositories/openSUSE:/11.4/standard/x86_64/xorg-x11-server-7.6_1.9.3-15.18.4.x86_64.rpm" rel="nofollow">direct link to 64 bit RPM for Suse 11.4</a>) and install it manually.</p>
| 0 | 2016-09-27T15:52:21Z | [
"python",
"selenium",
"suse",
"xvfb",
"pyvirtualdisplay"
]
|
How to select list item with dynamically generated ids - Selenium test Python | 39,723,473 | <p>I have a problem here where I am unable to locate the list items. </p>
<pre><code><ul id="select2-id_faculty_advisor-results" class="select2-results__options" role="tree" aria-multiselectable="true" aria-expanded="true" aria-hidden="false">
<li id="select2-id_faculty_advisor-result-0pu4-1" class="select2-results__option select2-results__option--highlighted" role="treeitem" aria-selected="true">Alice</li>
<li id="select2-id_faculty_advisor-result-cayw-2" class="select2-results__option" role="treeitem" aria-selected="false">Bob</li>
<li id="select2-id_faculty_advisor-result-4h8e-3" class="select2-results__option" role="treeitem" aria-selected="false">Candy</li>
<li id="select2-id_faculty_advisor-result-el4l-4" class="select2-results__option" role="treeitem" aria-selected="false">Dark</li>
</ul>
</code></pre>
<p>As seen above I'll not be able to find elements by id as the characters in between are generated dynamically. Any idea of how to locate a particular element??</p>
| 0 | 2016-09-27T11:27:44Z | 39,723,592 | <p>Use the part of the id that does not change in a css selector like:<br>
<code>li[id*='faculty_advisor-result']</code> </p>
<p>You can also add extra attributes if needed, for example aria-selected like:<br>
<code>li[id*='faculty_advisor-result'][aria-selected='false']</code> or any other attribute.</p>
<p>If you don't have any specific attribute for a particular option then you could use xpath to select the element that has that option like:<br>
<code>//li[contains(@id, 'faculty_advisor-result')][text()='Alice']</code>
the same way you can add extra filters to narrow the results like <code>[@role='tree']</code></p>
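<p>Used from Python that could look like this (a sketch, assuming <code>driver</code> is an already initialised WebDriver):</p>
<pre><code>option = driver.find_element_by_css_selector(
    "li[id*='faculty_advisor-result'][aria-selected='false']")
option.click()
</code></pre>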
| 0 | 2016-09-27T11:33:48Z | [
"python",
"selenium",
"selenium-webdriver"
]
|
How to select list item with dynamically generated ids - Selenium test Python | 39,723,473 | <p>I have a problem here where I am unable to locate the list items. </p>
<pre><code><ul id="select2-id_faculty_advisor-results" class="select2-results__options" role="tree" aria-multiselectable="true" aria-expanded="true" aria-hidden="false">
<li id="select2-id_faculty_advisor-result-0pu4-1" class="select2-results__option select2-results__option--highlighted" role="treeitem" aria-selected="true">Alice</li>
<li id="select2-id_faculty_advisor-result-cayw-2" class="select2-results__option" role="treeitem" aria-selected="false">Bob</li>
<li id="select2-id_faculty_advisor-result-4h8e-3" class="select2-results__option" role="treeitem" aria-selected="false">Candy</li>
<li id="select2-id_faculty_advisor-result-el4l-4" class="select2-results__option" role="treeitem" aria-selected="false">Dark</li>
</ul>
</code></pre>
<p>As seen above I'll not be able to find elements by id as the characters in between are generated dynamically. Any idea of how to locate a particular element??</p>
 | 0 | 2016-09-27T11:27:44Z | 39,724,825 | <p>The best way to do it is to make a collection of the list items with the XPath "//ul[@id='select2-id_faculty_advisor-results']/li".</p>
<p>After that you can select them by index from the collection.</p>
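<p>A minimal sketch of that in Python (assuming <code>driver</code> is an existing WebDriver instance):</p>
<pre><code>options = driver.find_elements_by_xpath(
    "//ul[@id='select2-id_faculty_advisor-results']/li")
options[0].click()  # e.g. the "Alice" entry
</code></pre>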
| 0 | 2016-09-27T12:34:49Z | [
"python",
"selenium",
"selenium-webdriver"
]
|
GitPython List all commits for a specific file | 39,723,521 | <p>I am writing a Python script and there I need to know all commits for a specific file. In my code I use <a href="http://gitpython.readthedocs.io/en/stable/intro.html" rel="nofollow">GitPython</a> for other tasks but for this problem I can't find something.</p>
<p>In cmd line I use:</p>
<pre><code>git log --pretty='%H' file-path
</code></pre>
| 0 | 2016-09-27T11:30:19Z | 39,723,678 | <p>What we are looking for in Git is:</p>
<pre><code>git log --follow filename
</code></pre>
<p>Not sure GitPython has an exact equivalent of <code>--follow</code>, though.</p>
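<p>Two things worth trying (sketches, not from the original answer): GitPython exposes raw git commands through <code>repo.git</code>, and <code>iter_commits</code> accepts a <code>paths</code> argument (it does not follow renames, however):</p>
<pre><code>from git import Repo

repo = Repo('/path/to/repo')

# mirror the command line, including --follow
hashes = repo.git.log('--follow', '--pretty=%H', '--', 'file-path').splitlines()

# or use the object API (no rename following)
commits = list(repo.iter_commits(paths='file-path'))
</code></pre>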
| 0 | 2016-09-27T11:37:48Z | [
"python",
"git",
"gitpython"
]
|
how to create a text file through python | 39,723,570 | <p>Looking to create a program that needs to take the users sentence, list its positions, and also identify each individual positions of word and then save a list of the position numbers to a file, I've got as far as:</p>
<pre><code> text = input('Please type your sentence: ')
sentence = text.split()
word= input('Thank-you, now type your word: ')
if word in sentence:
print ('This word occurs in the places:', sentence.index(word)+1)
elif word not in sentence:
print ('Sorry, '+word+' does not appear in the sentence.')
text=text.lower()
words=text.split()
place=[]
for c,a in enumerate(words):
if words.count(a)>2 :
place.append(words.index(a+1))
else:
place.append(c+1)
print(text)
print(place)
</code></pre>
<p>this is what I have so far but can't seem to find anything that creates any files, I don't really know how to go about that, any help with the direction to head in would be much appreciated.</p>
| -2 | 2016-09-27T11:32:54Z | 39,723,639 | <p>Use the open function with the 'w' or 'a' access modifier.</p>
<p>e.g.</p>
<pre><code>fo = open("foo.txt", "w")
</code></pre>
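<p>For the question's case, a small sketch that writes the collected positions and closes the file automatically (the file name is just an example):</p>
<pre><code>with open('positions.txt', 'w') as f:
    f.write(' '.join(str(p) for p in place))
</code></pre>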
| 0 | 2016-09-27T11:36:12Z | [
"python",
"file"
]
|
how to create a text file through python | 39,723,570 | <p>Looking to create a program that needs to take the users sentence, list its positions, and also identify each individual positions of word and then save a list of the position numbers to a file, I've got as far as:</p>
<pre><code> text = input('Please type your sentence: ')
sentence = text.split()
word= input('Thank-you, now type your word: ')
if word in sentence:
print ('This word occurs in the places:', sentence.index(word)+1)
elif word not in sentence:
print ('Sorry, '+word+' does not appear in the sentence.')
text=text.lower()
words=text.split()
place=[]
for c,a in enumerate(words):
if words.count(a)>2 :
place.append(words.index(a+1))
else:
place.append(c+1)
print(text)
print(place)
</code></pre>
<p>this is what I have so far but can't seem to find anything that creates any files, I don't really know how to go about that, any help with the direction to head in would be much appreciated.</p>
| -2 | 2016-09-27T11:32:54Z | 39,723,680 | <p>Refer article - <a href="https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow">File Operation In Python</a></p>
<p>Example to create a file</p>
<pre><code>f = open('file_path', 'w')
f.write('0123456789abcdef')
f.close()
</code></pre>
<p>EDIT: added mode</p>
| -1 | 2016-09-27T11:37:53Z | [
"python",
"file"
]
|
how to create a text file through python | 39,723,570 | <p>Looking to create a program that needs to take the users sentence, list its positions, and also identify each individual positions of word and then save a list of the position numbers to a file, I've got as far as:</p>
<pre><code> text = input('Please type your sentence: ')
sentence = text.split()
word= input('Thank-you, now type your word: ')
if word in sentence:
print ('This word occurs in the places:', sentence.index(word)+1)
elif word not in sentence:
print ('Sorry, '+word+' does not appear in the sentence.')
text=text.lower()
words=text.split()
place=[]
for c,a in enumerate(words):
if words.count(a)>2 :
place.append(words.index(a+1))
else:
place.append(c+1)
print(text)
print(place)
</code></pre>
<p>this is what I have so far but can't seem to find anything that creates any files, I don't really know how to go about that, any help with the direction to head in would be much appreciated.</p>
 | -2 | 2016-09-27T11:32:54Z | 39,723,840 | <p>If you use the open function with "w+" and there is no such file, "open" will create a new file. </p>
<pre><code>file = open("Max.txt", "w+")
</code></pre>
<p>for more infos about r,r+,w,w+... <a href="http://stackoverflow.com/questions/1466000/python-open-built-in-function-difference-between-modes-a-a-w-w-and-r">here</a></p>
| 0 | 2016-09-27T11:46:12Z | [
"python",
"file"
]
|
Compare two csv files and color code the difference in csv | 39,723,619 | <p>I want to compare two CSV files. If there is difference in a particular cell (Ex: 5th row and 3rd column) then give red color to that cell.</p>
<p>I am able to compare the two files but unable to give a red color to the differing cell. I have tried this code:</p>
<pre><code>def compare():
try:
assert_frame_equal(df_sort_sas, df_sort_py)
return True
    except: # apparently AssertionError doesn't catch all
return False
compare()
</code></pre>
<p>I want output like this:
A red colored cell means that the value is not equal to the corresponding cell in the first csv </p>
<p><a href="http://i.stack.imgur.com/Cwyqq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Cwyqq.png" alt="enter image description here"></a></p>
 | -1 | 2016-09-27T11:35:06Z | 39,724,334 | <p>You can't set a colour in a csv file. What you can do is do it in Excel:
Have a look at those two questions: <a href="http://stackoverflow.com/q/10532367/3659824">How to change background color of excel cell with python xlwt library?</a> and <a href="http://stackoverflow.com/q/11444207/3659824">Setting a cell's fill RGB color with pywin32 in excel</a></p>
<p>To sum up the <a href="http://stackoverflow.com/a/10542220/3659824">answers</a>: </p>
<pre><code>from xlwt import Workbook
import xlwt
book = Workbook()
sheet1 = book.add_sheet('Sheet 1')
book.add_sheet('Sheet 2')
for i in range(0, 100):
st = xlwt.easyxf('pattern: pattern solid;')
st.pattern.pattern_fore_colour = i
sheet1.write(i % 24, i / 24, 'Test text',st)
book.save('simple.xls')
</code></pre>
<p>or <a href="http://stackoverflow.com/a/11445155/3659824">here</a>: </p>
<pre><code>def rgb_to_hex(rgb):
strValue = '%02x%02x%02x' % rgb
iValue = int(strValue, 16)
return iValue
xl.ActiveSheet.Cells(row, column).interior.color = rgb_to_hex((255,255,0))
</code></pre>
<p>Credits to the authors not me </p>
| 0 | 2016-09-27T12:12:07Z | [
"python",
"csv",
"pandas"
]
|
Compare two csv files and color code the difference in csv | 39,723,619 | <p>I want to compare two CSV files. If there is difference in a particular cell (Ex: 5th row and 3rd column) then give red color to that cell.</p>
<p>I am able to compare the two files but unable to give a red color to the differing cell. I have tried this code:</p>
<pre><code>def compare():
try:
assert_frame_equal(df_sort_sas, df_sort_py)
return True
    except: # apparently AssertionError doesn't catch all
return False
compare()
</code></pre>
<p>I want output like this:
A red colored cell means that the value is not equal to the corresponding cell in the first csv </p>
<p><a href="http://i.stack.imgur.com/Cwyqq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Cwyqq.png" alt="enter image description here"></a></p>
| -1 | 2016-09-27T11:35:06Z | 39,724,400 | <p>You can only apply conditional formatting to excel files (e.g. xlsx), not .csv</p>
<p>I would organise the code in the following way:</p>
<ul>
<li>Use pandas to compare the two csv and create a dataframe with a flag for the differences</li>
<li>Use the xlsxwriter module to write an xlsx file, coloring in red the cells where you have a mismatch, as in the sketch below (<a href="http://xlsxwriter.readthedocs.io/example_conditional_format.html?highlight=conditional%20formatting" rel="nofollow">http://xlsxwriter.readthedocs.io/example_conditional_format.html?highlight=conditional%20formatting</a>)</li>
</ul>
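<p>A rough sketch of that second step (assuming both frames have the same shape and column order; the output file name is illustrative):</p>
<pre><code>import xlsxwriter

workbook = xlsxwriter.Workbook('diff.xlsx')
worksheet = workbook.add_worksheet()
red = workbook.add_format({'bg_color': '#FFC7CE'})

for i, row in enumerate(df_sort_py.values):
    for j, value in enumerate(row):
        fmt = red if value != df_sort_sas.iat[i, j] else None
        worksheet.write(i, j, value, fmt)

workbook.close()
</code></pre>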
<p>Hope this helps</p>
| 0 | 2016-09-27T12:15:18Z | [
"python",
"csv",
"pandas"
]
|
Python Queue double threading | 39,723,702 | <p>I am trying to run 5 threads which run another 10 threads each. Something like:</p>
<pre><code>dictionaryFileQueue = Queue.Queue()
with open(dictionaryFile, "r") as wordsToCheck:
for word in wordsToCheck:
dictionaryFileQueue.put(word)
for wordToScan in wordsList:
#...some code
dictionaryFileOpen = dictionaryFileQueue
for i in range(10):
i_thread = Thread(target=words_Scan_Start, args=(dictionaryFileOpen, wordToScan))
threads.append(i_thread)
for thread in threads:
for wordThread in thread:
wordThread.start()
def words_Scan_Start(dictionaryFile, wordToScan):
# Here I need to make an action with "wordToScan".
# But in 10 threads "range(10)" and use the same "dictionaryFileOpen" for every thread.
#...............
</code></pre>
<p>BUT, I need that each <code>wordToSacn</code> thread uses its own <code>dictionaryFileQueue</code>, which is comes from common source (<code>dictionaryFileQueue</code> at first line). Because when I'm starting all the threads, it all use just one Queue. But I need copy of Queue for every <code>wordToScan</code>. How can I make each <code>wordToScan</code> uses only copy of Queue and start its enumeration from the first line? How to make it use its own Queue which is coming from the common source?</p>
 | 0 | 2016-09-27T11:38:41Z | 39,726,658 | <p>It turns out that this was easy to do.
I just needed to paste this part:</p>
<pre><code>dictionaryFileQueue = Queue.Queue()
for word in linesOfDictionary:
dictionaryFileQueue.put(word.strip())
</code></pre>
<p>Inside of this part:</p>
<pre><code>for wordToScan in wordsList:
#...some code
# INSTEAD OF THIS LINE BELOW
#dictionaryFileOpen = dictionaryFileQueue
</code></pre>
<p>And add this code at the beginning of the script:</p>
<pre><code>openDictionaryFile = open(dictionaryFile, "r")
linesOfDictionary = openDictionaryFile.readlines()
openDictionaryFile.close()
</code></pre>
<p>instead of:</p>
<pre><code>#dictionaryFileQueue = Queue.Queue()
#with open(dictionaryFile, "r") as wordsToCheck:
#for word in wordsToCheck:
#dictionaryFileQueue.put(word)
</code></pre>
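<p>Putting the pieces together, the relevant part of the script ends up looking roughly like this (a sketch reusing the names from the question):</p>
<pre><code>import Queue  # "queue" on Python 3
from threading import Thread

openDictionaryFile = open(dictionaryFile, "r")
linesOfDictionary = openDictionaryFile.readlines()
openDictionaryFile.close()

for wordToScan in wordsList:
    # every word gets its own queue, filled from the shared list
    dictionaryFileQueue = Queue.Queue()
    for word in linesOfDictionary:
        dictionaryFileQueue.put(word.strip())

    threads = [Thread(target=words_Scan_Start,
                      args=(dictionaryFileQueue, wordToScan))
               for _ in range(10)]
    for t in threads:
        t.start()
</code></pre>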
| 0 | 2016-09-27T13:57:02Z | [
"python",
"multithreading",
"queue"
]
|
How to separate a CSV file when there are "" lines? | 39,723,925 | <p>When I am reading in a CSV file that looks like this:</p>
<pre><code>To, ,New York ,Norfolk ,Charleston ,Savannah
Le Havre (Fri), ,15 ,18 ,22 ,24
Rotterdam (Sun) ,"",13 ,16 ,20 ,22
Hamburg (Thu) ,"",11 ,14 ,18 ,20
Southampton (Fri) , "" ,8 ,11 ,15 ,17
</code></pre>
<p>using pandas, as follows:</p>
<pre><code>duration_route1 = pd.read_csv(file_name, sep = ',')
</code></pre>
<p>I get the following result (I use Sublime Text to run my Python code):</p>
<p><a href="http://i.stack.imgur.com/jO08M.png" rel="nofollow"><img src="http://i.stack.imgur.com/jO08M.png" alt="enter image description here"></a></p>
<p>You see that when there is a <code>""</code>, it doesn't separate the string. Why does it not do this?</p>
 | 2 | 2016-09-27T11:51:01Z | 39,724,505 | <p>Just use the csv library in Python:
import it and iterate over the rows.</p>
<pre><code>import csv
file_obj = open(file_name, "r")  # your csv file, opened for reading
rows = file_obj.readlines()
for row in csv.DictReader(rows, delimiter=","):
    print(row)  # each row is a dictionary, so you can use it for any need
</code></pre>
<p>Each row will look like, for example:</p>
<pre><code>{'number3': '88', 'number2': '22', 'name': 'vipul', 'number1': '23'}
</code></pre>
<p>I think this solves your problem; just give it a try. </p>
| 0 | 2016-09-27T12:19:44Z | [
"python",
"csv",
"pandas"
]
|
How to separate a CSV file when there are "" lines? | 39,723,925 | <p>When I am reading in a CSV file that looks like this:</p>
<pre><code>To, ,New York ,Norfolk ,Charleston ,Savannah
Le Havre (Fri), ,15 ,18 ,22 ,24
Rotterdam (Sun) ,"",13 ,16 ,20 ,22
Hamburg (Thu) ,"",11 ,14 ,18 ,20
Southampton (Fri) , "" ,8 ,11 ,15 ,17
</code></pre>
<p>using pandas, as follows:</p>
<pre><code>duration_route1 = pd.read_csv(file_name, sep = ',')
</code></pre>
<p>I get the following result (I use Sublime Text to run my Python code):</p>
<p><a href="http://i.stack.imgur.com/jO08M.png" rel="nofollow"><img src="http://i.stack.imgur.com/jO08M.png" alt="enter image description here"></a></p>
<p>You see that when there is a <code>""</code>, it doesn't separate the string. Why does it not do this?</p>
 | 2 | 2016-09-27T11:51:01Z | 39,724,573 | <p>You need <code>quoting=csv.QUOTE_NONE</code> (which requires <code>import csv</code>) because there are quote characters in the file:</p>
<pre><code>df = pd.read_csv('TAT_AX1_westbound_style3.csv', quoting=csv.QUOTE_NONE)
print (df)
To New York Norfolk Charleston Savannah
0 Le Havre (Fri) 15 18 22 24
1 "Rotterdam (Sun) """" 13 16 20 22 "
2 "Hamburg (Thu) """" 11 14 18 20 "
3 "Southampton (Fri) """" 8 11 15 17 "
</code></pre>
<pre><code>#remove first column
df = df.drop(df.columns[0], axis=1)
#remove all " values to empty string, convert to int
df = df.replace({'"':''}, regex=True).astype(int)
print (df)
New York Norfolk Charleston Savannah
To
Le Havre (Fri) 15 18 22 24
"Rotterdam (Sun) 13 16 20 22
"Hamburg (Thu) 11 14 18 20
"Southampton (Fri) 8 11 15 17 15 17
</code></pre>
| 3 | 2016-09-27T12:22:24Z | [
"python",
"csv",
"pandas"
]
|
How to separate a CSV file when there are "" lines? | 39,723,925 | <p>When I am reading in a CSV file that looks like this:</p>
<pre><code>To, ,New York ,Norfolk ,Charleston ,Savannah
Le Havre (Fri), ,15 ,18 ,22 ,24
Rotterdam (Sun) ,"",13 ,16 ,20 ,22
Hamburg (Thu) ,"",11 ,14 ,18 ,20
Southampton (Fri) , "" ,8 ,11 ,15 ,17
</code></pre>
<p>using pandas, as follows:</p>
<pre><code>duration_route1 = pd.read_csv(file_name, sep = ',')
</code></pre>
<p>I get the following result (I use Sublime Text to run my Python code):</p>
<p><a href="http://i.stack.imgur.com/jO08M.png" rel="nofollow"><img src="http://i.stack.imgur.com/jO08M.png" alt="enter image description here"></a></p>
<p>You see that when there is a <code>""</code>, it doesn't separate the string. Why does it not do this?</p>
| 2 | 2016-09-27T11:51:01Z | 39,724,665 | <p>From your sample you have provided, it is clear that the problem is with the data set and pandas is working correctly.</p>
<p>Only the first data row is separated correctly; the quoted rows are each read as one single string in a single column (pay attention to the <code>"</code>). If I replace the <code>,</code> with <code>|</code>, your problem becomes a bit clearer:</p>
<pre><code>To | |New York |Norfolk |Charleston |Savannah
Le Havre (Fri) | |15 |18 |22 |24
"Rotterdam (Sun) ,"""",13 ,16 ,20 ,22 " |
"Hamburg (Thu) ,"""",11 ,14 ,18 ,20 " |
"Southampton (Fri) , """" ,8 ,11 ,15 ,17 "|
</code></pre>
<p>Now you have to manually split the second row in order to create the correct data set.</p>
<pre><code>>>> with open('sample2.txt') as f:
... headers = next(f).split(',')
... rows = [i.split(',') for i in f]
...
>>> rows = [list(map(str.strip, list(map(lambda x: x.replace('"', ''), i)))) for i in rows]
>>> pd.DataFrame(rows, columns=headers)
To New York Norfolk Charleston Savannah
0 Le Havre (Fri) 15 18 22 24
1 Rotterdam (Sun) 13 16 20 22
2 Hamburg (Thu) 11 14 18 20
3 Southampton (Fri) 8 11 15 17
</code></pre>
| 0 | 2016-09-27T12:26:03Z | [
"python",
"csv",
"pandas"
]
|
Pandas and apply function to match a string | 39,724,182 | <p>I have a df column containing various links, some of them containing the string <code>"search"</code>.</p>
<p>I want to create a function that - being applied to the column - returns a column containing <code>"search"</code> or <code>"other"</code>.</p>
<p>I write a function like:</p>
<pre><code>search = 'search'
def page_type(x):
if x.str.contains(search):
return 'Search'
else:
return 'Other'
df['link'].apply(page_type)
</code></pre>
<p>but it gives me an error like:</p>
<blockquote>
<p>AttributeError: 'unicode' object has no attribute 'str'</p>
</blockquote>
<p>I guess I'm missing something when calling the str.contains().</p>
| 1 | 2016-09-27T12:04:36Z | 39,724,270 | <p>I think you need <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>df = pd.DataFrame({'link':['search','homepage d','login dd', 'profile t', 'ff']})
print (df)
link
0 search
1 homepage d
2 login dd
3 profile t
4 ff
</code></pre>
<pre><code>search = 'search'
profile = 'profile'
homepage = 'homepage'
login = "login"
def page_type(x):
if search in x:
return 'Search'
elif profile in x:
return 'Profile'
elif homepage in x:
return 'Homepage'
elif login in x:
return 'Login'
else:
return 'Other'
df['link_new'] = df['link'].apply(page_type)
df['link_type'] = np.where(df.link.str.contains(search),'Search',
np.where(df.link.str.contains(profile),'Profile',
np.where(df.link.str.contains(homepage), 'Homepage',
np.where(df.link.str.contains(login),'Login','Other'))))
print (df)
link link_new link_type
0 search Search Search
1 homepage d Homepage Homepage
2 login dd Login Login
3 profile t Profile Profile
4 ff Other Other
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#[5000 rows x 1 columns]
df = pd.DataFrame({'link':['search','homepage d','login dd', 'profile t', 'ff']})
df = pd.concat([df]*1000).reset_index(drop=True)
In [346]: %timeit df['link'].apply(page_type)
1000 loops, best of 3: 1.72 ms per loop
In [347]: %timeit np.where(df.link.str.contains(search),'Search', np.where(df.link.str.contains(profile),'Profile', np.where(df.link.str.contains(homepage), 'Homepage', np.where(df.link.str.contains(login),'Login','Other'))))
100 loops, best of 3: 11.7 ms per loop
</code></pre>
| 0 | 2016-09-27T12:08:59Z | [
"python",
"string",
"pandas",
"condition",
"contains"
]
|
Pandas and apply function to match a string | 39,724,182 | <p>I have a df column containing various links, some of them containing the string <code>"search"</code>.</p>
<p>I want to create a function that - being applied to the column - returns a column containing <code>"search"</code> or <code>"other"</code>.</p>
<p>I write a function like:</p>
<pre><code>search = 'search'
def page_type(x):
if x.str.contains(search):
return 'Search'
else:
return 'Other'
df['link'].apply(page_type)
</code></pre>
<p>but it gives me an error like:</p>
<blockquote>
<p>AttributeError: 'unicode' object has no attribute 'str'</p>
</blockquote>
<p>I guess I'm missing something when calling the str.contains().</p>
| 1 | 2016-09-27T12:04:36Z | 39,724,774 | <p><code>.str</code> applies to the whole Series but here you are dealing with the value inside the Series.</p>
<p>You can either do : <code>df['link'].str.contains(search)</code><br>
Or like you want : <code>df['link'].apply(lambda x: 'Search' if search in x else 'Other')</code></p>
<p><strong>Edit</strong> </p>
<p>More generic way:</p>
<pre><code>def my_filter(x, val, c_1, c_2):
return c_1 if val in x else c_2
df['link'].apply(lambda x: my_filter(x, 'homepage', 'homepage', 'other'))
</code></pre>
| 1 | 2016-09-27T12:31:26Z | [
"python",
"string",
"pandas",
"condition",
"contains"
]
|
Pandas and apply function to match a string | 39,724,182 | <p>I have a df column containing various links, some of them containing the string <code>"search"</code>.</p>
<p>I want to create a function that - being applied to the column - returns a column containing <code>"search"</code> or <code>"other"</code>.</p>
<p>I write a function like:</p>
<pre><code>search = 'search'
def page_type(x):
if x.str.contains(search):
return 'Search'
else:
return 'Other'
df['link'].apply(page_type)
</code></pre>
<p>but it gives me an error like:</p>
<blockquote>
<p>AttributeError: 'unicode' object has no attribute 'str'</p>
</blockquote>
<p>I guess I'm missing something when calling the str.contains().</p>
 | 1 | 2016-09-27T12:04:36Z | 39,726,510 | <p>You can also use a <code>list comprehension</code> if you want to find the word search within a link:</p>
<p>For example:</p>
<pre><code>df['Search'] = [('search' if 'search' in item else 'other') for item in df['link']]
</code></pre>
<p>The output:</p>
<pre><code> ColumnA link Search
0 a http://word/12/word other
1 b https://search-125.php search
2 c http://news-8282.html other
3 d http://search-hello-1.html search
</code></pre>
<p>Create function:</p>
<pre><code>def page_type(x, y):
df[x] = [('search' if 'search' in item else 'other') for item in df[y]]
page_type('Search', 'link')
In [6]: df
Out[6]:
ColumnA link Search
0 a http://word/12/word other
1 b https://search-125.php search
2 c http://news-8282.html other
3 d http://search-hello-1.html search
</code></pre>
| 0 | 2016-09-27T13:51:17Z | [
"python",
"string",
"pandas",
"condition",
"contains"
]
|
Comparing strings in a list | 39,724,216 | <p>I'm trying to filter out search results from an API by trying to find and exclude dictionary entries which have 'affiliation names' which are all the same.</p>
<p>To cut a long story short, in the code below, entry2 is a list of 20 dictionaries all of which have nested dictionaries within them, one of which is 'affiliation'. Within this nested dictionary 'affiliation' for each element of entry2, I want to compare the 'affilnames' and if they are not all equal pass the entry2 dictionary element in question to a new list, entry3.</p>
<p>So far, I have the following (since all entry2 dictionaries only have 2 list elements within 'affiliation'):</p>
<pre><code>entry3 = [s for s in entry2 if s['affiliation'][0]['affilname'] != s['affiliation'][1]['affilname']]
</code></pre>
<p>which works fine (and returns entry3 having 9 dictionary entries). However, it may not always be the case that there are only 2 list entries within 'affiliation' and so I want to find a way to compare all of the strings within 'affiliation'. I have the following line of code which logically makes sense to me but is returning entry3 as having the same number of dictionary elements as entry2:</p>
<pre><code>entry3 = [s for s in entry2 if any(s['affiliation'][i]['affilname'] for i in range(1,len(s['affiliation'])-1)) != s['affiliation'][0]['affilname']]
</code></pre>
<p>Can anyone help me with what is going on?</p>
<p>Thanks</p>
| 1 | 2016-09-27T12:06:29Z | 39,724,340 | <p>The <em>filter</em> condition of your <em>list comprehension</em> is not properly structured. <code>any</code> returns a boolean which you're comparing with the <code>affilname</code> entry - a string. That would return all the entries since a string will never be equal to a boolean.</p>
<p>You can instead check whether there is any entry whose <code>affilname</code> does not match the first <code>affilname</code> at that category/sub-dict level:</p>
<pre><code>entry3 = [s for s in entry2 if any(dct['affilname'] != s['affiliation'][0]['affilname'] for dct in s['affiliation'])]
</code></pre>
<p>Once there is a mismatch at that subdict level, any <em>breaks</em> and returns <code>True</code>, which will add that entry to <code>entry3</code></p>
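<p>An equivalent and arguably more readable way to express the same check (not from the original answer) is to collect the names into a set:</p>
<pre><code>entry3 = [s for s in entry2
          if len({d['affilname'] for d in s['affiliation']}) > 1]
</code></pre>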
| 2 | 2016-09-27T12:12:29Z | [
"python",
"python-3.5"
]
|
HTTP/1.1 400 Bad Request. Bad number of command parts | 39,724,225 | <p>Im trying to execute the following code from Chapter 12 of the book "Python for Informatics". </p>
<pre><code>import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')
while True:
data = mysock.recv(512)
if ( len(data) < 1 ) :
break
print data
mysock.close()
</code></pre>
<p>According to the book, the script should print the following:</p>
<blockquote>
<p>HTTP/1.1 200 OK
Date: Sun, 14 Mar 2010 23:52:41 GMT
Server: Apache
Last-Modified: Tue, 29 Dec 2009 01:31:22 GMT
ETag: "143c1b33-a7-4b395bea"
Accept-Ranges: bytes
Content-Length: 167
Connection: close
Content-Type: text/plain
But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief</p>
</blockquote>
<p>Unfortunately the variable data is filled with the following string: </p>
<blockquote>
<p>'HTTP/1.1 400 Bad Request. Bad number of command parts ['GET', '<a href="http://www.py4inf.com/code/romeo.txt" rel="nofollow">http://www.py4inf.com/code/romeo.txt</a>', 'HTTP/1.0', 'X-WS-Ver:', '1.0']'</p>
</blockquote>
<p>I cannot find any good explanation for this error. I hope someone can help!</p>
| 1 | 2016-09-27T12:06:46Z | 39,724,680 | <p>You need to change
mysock.send('GET <a href="http://www.py4inf.com/code/romeo.txt" rel="nofollow">http://www.py4inf.com/code/romeo.txt</a> HTTP/1.0\n\n')
to</p>
<pre><code>mysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\r\n\r\n')
</code></pre>
<p>HTTP lines have to end with "\r\n", and the header section is terminated by an empty line, hence the "\r\n\r\n". Also: shouldn't you use a bytes object when sending on a socket in Python 3?</p>
| -1 | 2016-09-27T12:26:33Z | [
"python",
"sockets",
"http",
"web",
"bad-request"
]
|
HTTP/1.1 400 Bad Request. Bad number of command parts | 39,724,225 | <p>Im trying to execute the following code from Chapter 12 of the book "Python for Informatics". </p>
<pre><code>import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\n')
while True:
data = mysock.recv(512)
if ( len(data) < 1 ) :
break
print data
mysock.close()
</code></pre>
<p>According to the book, the script should print the following:</p>
<blockquote>
<p>HTTP/1.1 200 OK
Date: Sun, 14 Mar 2010 23:52:41 GMT
Server: Apache
Last-Modified: Tue, 29 Dec 2009 01:31:22 GMT
ETag: "143c1b33-a7-4b395bea"
Accept-Ranges: bytes
Content-Length: 167
Connection: close
Content-Type: text/plain
But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief</p>
</blockquote>
<p>Unfortunately the variable data is filled with the following string: </p>
<blockquote>
<p>'HTTP/1.1 400 Bad Request. Bad number of command parts ['GET', '<a href="http://www.py4inf.com/code/romeo.txt" rel="nofollow">http://www.py4inf.com/code/romeo.txt</a>', 'HTTP/1.0', 'X-WS-Ver:', '1.0']'</p>
</blockquote>
<p>I cannot find any good explanation for this error. I hope someone can help!</p>
| 1 | 2016-09-27T12:06:46Z | 39,724,949 | <p>On your Python installation, machine, or your network, something is rewriting requests and injecting its own code. Prime suspects are</p>
<ul>
<li>Anything marketed as "Anti-Virus"</li>
<li>Anything marketed as "Network Security Solution"</li>
<li>Malware on your computer or router</li>
<li>A network-wide (transparent) proxy</li>
<li>A library in your Python program or installation</li>
<li>Your ISP or network administrator</li>
</ul>
<p>This service wants, in your case, to inject a totally useless header <code>X-WS-Ver</code>. However, this service's interpretation of HTTP is more strict than yours and that of the py4inf.com server; HTTP lines are supposed to end with <code>\r\n</code>, but you use <code>\n</code> only. This service modifies the data you sent to</p>
<pre><code>GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\n\nX-WS-Ver: 1.0\r\n\r\n
</code></pre>
<p>or something similar. This is of course a very buggy behavior by this service. Since the new request is not valid HTTP anymore, py4inf.com will return an error message, correctly indicating that the request it received is malformed (<code>400 Bad request</code>).</p>
<p>To make your program work, you can take any of these options:</p>
<ul>
<li>If something on your local machine is the problem, disable the offending service (if it uselessly and incorrectly modifies connections, it's not likely to provide much, if any, security anyways), or use another machine</li>
<li>Get unrestricted network access, maybe with the help of a VPN provider</li>
<li>Send proper HTTP, i.e.</li>
</ul>
<pre><code>mysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\r\n\r\n')
# ^^ ^^
</code></pre>
<ul>
<li>Use an encrypted connection (with Python's <a href="https://docs.python.org/dev/library/ssl.html#ssl.wrap_socket" rel="nofollow">ssl</a> module), i.e. URLs under the <code>https://</code> scheme.</li>
</ul>
| 1 | 2016-09-27T12:40:13Z | [
"python",
"sockets",
"http",
"web",
"bad-request"
]
|
Pandas extract comment lines | 39,724,298 | <p>I have a data file containing a first few lines of comments and then the actual data.</p>
<pre><code>#param1 : val1
#param2 : val2
#param3 : val3
12
2
1
33
12
0
12
...
</code></pre>
<p>I can read the data as <code>pandas.read_csv(filename, comment='#',header=None)</code>. However I also wish to separately read the comment lines in order to extract the parameter values. So far I have only come across skipping or removing the comment lines, but how can I also separately extract the comment lines?</p>
| 0 | 2016-09-27T12:10:28Z | 39,724,905 | <p>In the call to <code>read_csv</code> you can't really. If you're just processing a header you can open the file, extract the commented lines and process them, then read in the data in a separate call.</p>
<pre><code>from itertools import takewhile
with open(filename, 'r') as fobj:
# takewhile returns an iterator over all the lines
# that start with the comment string
headiter = takewhile(lambda s: s.startswith('#'), fobj)
# you may want to process the headers differently,
# but here we just convert it to a list
header = list(headiter)
df = pandas.read_csv(filename, comment='#', header=None)
</code></pre>
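<p>If the goal is the parameter values themselves, the collected header lines can then be parsed into a dict (a sketch assuming the "#name : value" layout shown in the question):</p>
<pre><code>params = {}
for line in header:
    name, _, value = line.lstrip('#').partition(':')
    params[name.strip()] = value.strip()
# e.g. {'param1': 'val1', 'param2': 'val2', 'param3': 'val3'}
</code></pre>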
| 2 | 2016-09-27T12:38:27Z | [
"python",
"pandas"
]
|
Pandas extract comment lines | 39,724,298 | <p>I have a data file containing a first few lines of comments and then the actual data.</p>
<pre><code>#param1 : val1
#param2 : val2
#param3 : val3
12
2
1
33
12
0
12
...
</code></pre>
<p>I can read the data as <code>pandas.read_csv(filename, comment='#',header=None)</code>. However I also wish to separately read the comment lines in order to extract the parameter values. So far I have only come across skipping or removing the comment lines, but how can I also separately extract the comment lines?</p>
 | 0 | 2016-09-27T12:10:28Z | 39,725,204 | <p>Maybe you can read this file again in the normal way and read each line to get your parameters.</p>
<pre><code>def get_param( filename):
para_dic = {}
with open(filename,'r') as cmt_file: # open file
for line in cmt_file: # read each line
if line[0] == '#': # check the first character
line = line[1:] # remove first '#'
para = line.split(':') # seperate string by ':'
if len(para) == 2:
para_dic[ para[0].strip()] = para[1].strip()
return para_dic
</code></pre>
<p>This function will return a dictionary containing the parameters:</p>
<pre><code>{'param3': 'val3', 'param2': 'val2', 'param1': 'val1'}
</code></pre>
| 1 | 2016-09-27T12:52:12Z | [
"python",
"pandas"
]
|
Tree view in django template | 39,724,358 | <p>I try to create simple TODOList app. Where you can create Project, then create tasks for project, subtasks for tasks and subtasks. I create a template to show task:</p>
<pre><code><li class='task'>
<div class="collapsible-header" id="task-name"> {{task.title}}</div>
<div class="collapsible-body" data-task-pk='{{task.pk}}' id="task-details">
{% include 'ProjectManager/views/control-block.html' %}
<p>{{task.description}}</p>
<ul class="collapsible popout" data-collapsible="expandable" id="subtasks">
{% for sub_task in task.subtasks.all %}
{% include "ProjectManager/views/task_view.html" with task=sub_task %}
{% endfor %}
</ul>
</div>
</li>
</code></pre>
<p>You can see i try to create a list of subtask, by using this template recursively, but I got an error:</p>
<pre><code>'RecursionError' object has no attribute 'token'
</code></pre>
<p>I found some informations, that i should use variable to store template name, like this:</p>
<pre><code><li class='task'>
<div class="collapsible-header" id="task-name"> {{task.title}}</div>
<div class="collapsible-body" data-task-pk='{{task.pk}}' id="task-details">
{% include 'ProjectManager/views/control-block.html' %}
<p>{{task.description}}</p>
<ul class="collapsible popout" data-collapsible="expandable" id="subtasks">
{% for sub_task in task.subtasks.all %}
{% with node=sub_task template_name="ProjectManager/views/task_view.html" %}
{% include template_name with task=node%}
{% endwith %}
{% endfor %}
</ul>
</div>
</li>
</code></pre>
<p>I got an error:</p>
<pre><code>maximum recursion depth exceeded
</code></pre>
<p>But at start I wrote wrong:</p>
<pre><code> {% with node=**subtask** template_name="ProjectManager/views/task_view.html" %}
</code></pre>
<p>And template display list of subtasks with empty elements (without task.title and description).</p>
<p>Then I tried to put some if condition:</p>
<pre><code><li class='task'>
<div class="collapsible-header" id="task-name"> {{task.title}}</div>
<div class="collapsible-body" data-task-pk='{{task.pk}}' id="task-details">
{% include 'ProjectManager/views/control-block.html' %}
<p>{{task.description}}</p>
<ul class="collapsible popout" data-collapsible="expandable" id="subtasks">
{% if task.subtasks.all|length %}
{% for sub_task in task.subtasks.all %}
{% with node=sub_task template_name="ProjectManager/views/task_view.html" %}
{% include template_name with task=node%}
{% endwith %}
{% endfor %}
{% endif %}
</ul>
</div>
</li>
</code></pre>
<p>But I got new error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>How can I do this with Django templates?
Full <a href="http://dpaste.com/0JKT0QF" rel="nofollow">traceback</a></p>
<p>In this way I show tasks list:</p>
<pre><code><div class="card-content">
<ul class="collapsible popout" data-collapsible="expandable" id="main-tasks">
{% for task in project.tasks.all %}
{% include 'ProjectManager/views/task_view.html' with task=task%}
{%endfor%}
</ul>
</div>
</code></pre>
| 0 | 2016-09-27T12:13:18Z | 39,729,832 | <p>The <em>global</em> var's name is <code>task</code>. But your <em>local</em> var is <strong>also</strong> called <code>task</code>.</p>
<pre><code>{% for task in project.tasks.all %}
{% include 'ProjectManager/views/task_view.html' with task=task%}
{%endfor%}
</code></pre>
<p>So I guess what you were <em>trying to do</em> is:</p>
<pre><code>{% for task_local in project.tasks.all %}
{% include 'ProjectManager/views/task_view.html' with task_global_of_next_inheritance=task_local%}
{%endfor%}
</code></pre>
<p>but what happened is</p>
<pre><code>{% for task_local in project.tasks.all %}
{% include 'ProjectManager/views/task_view.html' with task_global_of_next_inheritance=task_global%}
{%endfor%}
</code></pre>
<p>(using the global instead of the local var), so you are just making the same call over and over again. If I am right, fix it with:</p>
<pre><code>{% for task_local in project.tasks.all %}
{% include 'ProjectManager/views/task_view.html' with task=task_local%}
{%endfor%}
</code></pre>
| 0 | 2016-09-27T16:26:59Z | [
"python",
"django",
"django-templates"
]
|
Form wizard with ModelForms having parameters in __init__ | 39,724,387 | <p>I am using python 2.7, Django 1.9.4 on Ubuntu 14.04.</p>
<p>I have been struggling with django-formtools (specifically form wizard) for quite a few days now. The scenario is as follows:</p>
<p>The form wizard is just a 2 step process:</p>
<ol>
<li>1st step: I have a ModelForm based on a Model. The form's <code>__init()__</code> requires a parameter (the id of logged in user which is an integer)</li>
<li>2nd step: A simple check box that asks user of he/she wants to submit the form.</li>
</ol>
<p>The source for forms.py:</p>
<pre><code>from django import forms
from publishermanagement import models
from localemanagement import models as locale_models
from usermanagement import models as user_models
class AddPublisherForm(forms.ModelForm):
def __init__(self, user_id, *args, **kwargs):
super(AddPublisherForm, self).__init__(*args, **kwargs)
permitted_locale_ids = (
user_models
.PublisherPermission
.objects
.filter(user=user_id)
.values_list('locale', flat=True))
self.fields['locale'].queryset = (
locale_models
.Locale
.objects
.filter(pk__in=permitted_locale_ids))
class Meta:
model = models.Information
fields = (
'channel_type',
'current_deal_type',
'locale',
'name',
'contact_primary',
'contact_auxiliary',
'website',
'phone',
'is_active',)
class ConfirmPublisherForm(forms.Form):
confirmation = forms.BooleanField(
label="Check to confirm provided publisher data")
</code></pre>
<p>I overwrote the <code>get_form_instance()</code> in line with the suggestions in various forums including Stack Overflow. <code>Information</code> is the Model class based on which <code>AddPublisherForm</code> is created.</p>
<p>views.py</p>
<pre><code>from django.shortcuts import render_to_response
from django.contrib.auth.decorators import login_required
from publishermanagement import models as publisher_models
from formtools.wizard.views import SessionWizardView
class CreatePublisherWizard(SessionWizardView):
@login_required(login_url='/account/login/')
def done(self, form_list, **kwargs):
# code for saving form data to be done here if user confirms the data
# else redirect to the main form.
return render_to_response(
'publishermanagement/wiz.html',
{'form_data': [form.cleaned_data for form in form_list]})
def get_form_instance(self, step):
if step == u'0':
info = publisher_models.Information()
return info
# the default implementation
return self.instance_dict.get(step, None)
</code></pre>
<p>However, upon execution, when I call the URL in Firefox, I am getting the error <code>__init__() takes at least 2 arguments (1 given)</code>. When I remove the <code>__init__()</code> from my forms.py, the code runs fine.</p>
<p>Any ideas on how the creation of this ModelForm can be integrated with django-formtools?</p>
<p>Some relevant post: <a href="http://stackoverflow.com/questions/36241020/access-request-object-in-wizardview-sublcass">Access Request Object in WizardView Sublcass</a></p>
| 0 | 2016-09-27T12:14:46Z | 39,725,194 | <p>You need to override <a href="https://django-formtools.readthedocs.io/en/latest/wizard.html#formtools.wizard.views.WizardView.get_form_kwargs" rel="nofollow"><code>get_form_kwargs</code></a> and include the <code>user_id</code>.</p>
<pre><code>class CreatePublisherWizard(SessionWizardView):
...
def get_form_kwargs(self, step):
kwargs = super(CreatePublisherWizard, self).get_form_kwargs(step)
if step == u'0':
kwargs['user_id'] = self.request.user.id
return kwargs
</code></pre>
| 0 | 2016-09-27T12:51:35Z | [
"python",
"django",
"python-2.7",
"django-formwizard"
]
|
Sqlite date field error for transactions | 39,724,411 | <p>trying to build two databases, one for houses, and another for the dates and prices that they were sold for in the last 16 years </p>
<pre><code>conn = sqlite3.connect('houses_in_london.db')
database = conn.cursor()
database.execute('CREATE TABLE houses (id INTEGER PRIMARY KEY, address TEXT,'
' area TEXT NOT NULL, postcode TEXT, bedrooms TEXT)')
database.execute('CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER, '
'FOREIGN KEY(house_id) REFERENCES houses(id), date TEXT, sale_price INTEGER )')
database.commit()
database.close()
</code></pre>
<p>If you guys notice, I am trying to put a date field in the transactions table to mark each sale and have it as a text field, but it returns </p>
<pre><code>Traceback (most recent call last):
File "/Users/saminahbab/Documents/House_Prices/final_spider.py", line 14, in <module>
database.execute('CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER, '
sqlite3.OperationalError: near "DATE": syntax error
</code></pre>
<p>which does not make sense to me, as I am just trying to create a text date field, which should pass, and I can do the analysis with pandas, which can read in that field and turn it into date objects.
Can anyone help me make the date field?</p>
| 1 | 2016-09-27T12:15:45Z | 39,724,551 | <p>This more concise syntax does what you need:</p>
<pre><code>database.execute(
'CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER '
'REFERENCES houses(id), date TEXT, sale_price INTEGER )')
</code></pre>
<p>Alternatively, you need to move the constraints to the end of the create statement</p>
<pre><code>database.execute(
'CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER, '
'date TEXT, sale_price INTEGER, '
'FOREIGN KEY(house_id) REFERENCES houses(id) )')
</code></pre>
| 1 | 2016-09-27T12:21:56Z | [
"python",
"database",
"sqlite"
]
|
Sqlite date field error for transactions | 39,724,411 | <p>trying to build two databases, one for houses, and another for the dates and prices that they were sold for in the last 16 years </p>
<pre><code>conn = sqlite3.connect('houses_in_london.db')
database = conn.cursor()
database.execute('CREATE TABLE houses (id INTEGER PRIMARY KEY, address TEXT,'
' area TEXT NOT NULL, postcode TEXT, bedrooms TEXT)')
database.execute('CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER, '
'FOREIGN KEY(house_id) REFERENCES houses(id), date TEXT, sale_price INTEGER )')
database.commit()
database.close()
</code></pre>
<p>If you guys notice, I am trying to put a date field in the transactions table to mark each sale and have it as a text field, but it returns </p>
<pre><code>Traceback (most recent call last):
File "/Users/saminahbab/Documents/House_Prices/final_spider.py", line 14, in <module>
database.execute('CREATE TABLE transactions (transaction_id INTEGER, house_id INTEGER, '
sqlite3.OperationalError: near "DATE": syntax error
</code></pre>
<p>which does not make sense to me, as I am just trying to create a text date field, which should pass, and I can do the analysis with pandas, which can read in that field and turn it into date objects.
Can anyone help me make the date field?</p>
| 1 | 2016-09-27T12:15:45Z | 39,724,912 | <p>From the docs: </p>
<blockquote>
<p>CREATE TABLE includes one or more column definitions, optionally followed by a list of table constraints.</p>
</blockquote>
<p><a href="https://www.sqlite.org/lang_createtable.html" rel="nofollow">https://www.sqlite.org/lang_createtable.html</a></p>
<p>This is syntactically a table constraint and MUST go after all column definitions:</p>
<pre><code>FOREIGN KEY(house_id) REFERENCES houses(id)
</code></pre>
<p>This is a column definition and can be between other column definitions:</p>
<pre><code>house_id INTEGER REFERENCES houses(id)
</code></pre>
| 0 | 2016-09-27T12:38:36Z | [
"python",
"database",
"sqlite"
]
|
matlab to python port optimization | 39,724,538 | <p>I have ported a piece of MATLAB code to Python and have run into efficiency problems.</p>
<p>For instance, here comes a snippet : </p>
<pre><code>G = np.vstack((Gx.toarray(), Gy.toarray(), Gd1.toarray(), Gd2.toarray()))
</code></pre>
<p>Here all the elements to be stacked are 22500 by 22500 sparse matrices. It dies directly on my Windows 64-bit machine with the following error: </p>
<pre><code>return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
MemoryError
</code></pre>
<p>I'm quite new to Python, is there any good article on best practices for such optimization? Any information on how numpy works with memory? </p>
<p>As far as I know, sparse matrices are stored in some kind of compressed format and take much less space, but take much more time to work with.</p>
<p>Thx! </p>
| 1 | 2016-09-27T12:21:02Z | 39,725,023 | <p>For stacking sparse matrices, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.vstack.html" rel="nofollow">Scipy sparse's vstack function</a> instead of NumPy's <code>vstack</code> one, like so -</p>
<pre><code>import scipy.sparse as sp
Gout = sp.vstack((Gx,Gy,Gd1,Gd2))
</code></pre>
<p>Sample run -</p>
<pre><code>In [364]: # Generate random sparse matrices
...: Gx = sp.coo_matrix(3*(np.random.rand(10,10)>0.7).astype(int))
...: Gy = sp.coo_matrix(4*(np.random.rand(10,10)>0.7).astype(int))
...: Gd1 = sp.coo_matrix(5*(np.random.rand(10,10)>0.7).astype(int))
...: Gd2 = sp.coo_matrix(6*(np.random.rand(10,10)>0.7).astype(int))
...:
In [365]: # Run original and proposed approaches
...: G = np.vstack((Gx.toarray(), Gy.toarray(), Gd1.toarray(), Gd2.toarray()))
...: Gout = sp.vstack((Gx,Gy,Gd1,Gd2))
...:
In [366]: # Finally verify results
...: np.allclose(G,Gout.toarray())
Out[366]: True
</code></pre>
| 1 | 2016-09-27T12:43:01Z | [
"python",
"matlab",
"numpy"
]
|
Pandas broadcasting values given logical condition in group by | 39,724,586 | <p>I have a data frame according to the example below:</p>
<pre>
key1 key2 value1
1 201501 NaN
1 201502 NaN
1 201503 201503
1 201504 NaN
2 201507 NaN
2 201508 NaN
2 201509 NaN
3 201509 NaN
3 201510 201509
3 201511 NaN
3 201512 NaN
3 201513 NaN
</pre>
<p>and I want the following output... </p>
<pre>
key1 key2 value1 value2
1 201501 NaN 0
1 201502 NaN 0
1 201503 201503 1
1 201504 NaN 1
2 201507 NaN 0
2 201508 NaN 0
2 201509 NaN 0
3 201509 NaN 0
3 201510 201509 1
3 201511 NaN 1
3 201512 NaN 1
3 201601 NaN 1
</pre>
<p>The output is simply a binary flag that becomes 1 once there is a yyyymm-stamp in <em>value1</em>, and it then keeps that value for the remainder of its key1-group. In the rows preceding it, it should be 0. If the <em>key1</em> group only has <em>np.NaN</em>, then it should be 0, as for <em>key1</em> = 2.</p>
<p>I have tried a version with an apply using a lambda, but it's really slow. I was hoping someone could give me a tip on how to broadcast this using a more vectorized approach to save some execution time.</p>
<p>code for df below!</p>
<p>Many thanks in advance for time and input! </p>
<p>Best regards,</p>
<p>/swepab</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'key1' : [1,1,1,1,2,2,2,3,3,3,3,3]
,'key2' : [201501, 201502,201503,201504,201507,201508,201509,201509,201510,201511,201512,201601]
,'value1' : [np.nan,np.nan,'201503',np.nan,np.nan,np.nan,np.nan,np.nan,'201509',np.nan,np.nan,np.nan]
,'value2' : [0,0,1,1,0,0,0,0,1,1,1,1]})
</code></pre>
| 1 | 2016-09-27T12:22:55Z | 39,724,878 | <p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.ffill.html" rel="nofollow"><code>ffill</code></a>:</p>
<pre><code>df['value2'] = df.groupby('key1')['value1'].ffill()
df.value2 = np.where(df.value2.notnull(),1,0)
print (df)
key1 key2 value1 value2
0 1 201501 NaN 0
1 1 201502 NaN 0
2 1 201503 201503 1
3 1 201504 NaN 1
4 2 201507 NaN 0
5 2 201508 NaN 0
6 2 201509 NaN 0
7 3 201509 NaN 0
8 3 201510 201509 1
9 3 201511 NaN 1
10 3 201512 NaN 1
11 3 201601 NaN 1
</code></pre>
| 0 | 2016-09-27T12:36:29Z | [
"python",
"pandas",
"transform"
]
|
Pandas broadcasting values given logical condition in group by | 39,724,586 | <p>I have a data frame according to the example below:</p>
<pre>
key1 key2 value1
1 201501 NaN
1 201502 NaN
1 201503 201503
1 201504 NaN
2 201507 NaN
2 201508 NaN
2 201509 NaN
3 201509 NaN
3 201510 201509
3 201511 NaN
3 201512 NaN
3 201513 NaN
</pre>
<p>and I want the following output... </p>
<pre>
key1 key2 value1 value2
1 201501 NaN 0
1 201502 NaN 0
1 201503 201503 1
1 201504 NaN 1
2 201507 NaN 0
2 201508 NaN 0
2 201509 NaN 0
3 201509 NaN 0
3 201510 201509 1
3 201511 NaN 1
3 201512 NaN 1
3 201601 NaN 1
</pre>
<p>The output is simply a binary flag that becomes 1 once there is a yyyymm-stamp in <em>value1</em>, and it then keeps that value for the remainder of its key1-group. In the rows preceding it, it should be 0. If the <em>key1</em> group only has <em>np.NaN</em>, then it should be 0, as for <em>key1</em> = 2.</p>
<p>I have tried a version with an apply using a lambda, but it's really slow. I was hoping someone could give me a tip on how to broadcast this using a more vectorized approach to save some execution time.</p>
<p>code for df below!</p>
<p>Many thanks in advance for time and input! </p>
<p>Best regards,</p>
<p>/swepab</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'key1' : [1,1,1,1,2,2,2,3,3,3,3,3]
,'key2' : [201501, 201502,201503,201504,201507,201508,201509,201509,201510,201511,201512,201601]
,'value1' : [np.nan,np.nan,'201503',np.nan,np.nan,np.nan,np.nan,np.nan,'201509',np.nan,np.nan,np.nan]
,'value2' : [0,0,1,1,0,0,0,0,1,1,1,1]})
</code></pre>
| 1 | 2016-09-27T12:22:55Z | 39,724,927 | <p>You can do:</p>
<pre><code>df['value2'] = df.groupby('key1')['value1'].apply(lambda x: (~pd.isnull(x)).cumsum())
In [50]: df
Out[50]:
key1 key2 value1 value2
0 1 201501 NaN 0
1 1 201502 NaN 0
2 1 201503 201503 1
3 1 201504 NaN 1
4 2 201507 NaN 0
5 2 201508 NaN 0
6 2 201509 NaN 0
7 3 201509 NaN 0
8 3 201510 201509 1
9 3 201511 NaN 1
10 3 201512 NaN 1
11 3 201601 NaN 1
</code></pre>
| 0 | 2016-09-27T12:39:20Z | [
"python",
"pandas",
"transform"
]
|
Bold text with asterisks | 39,724,626 | <p>In my Django project I want to make text bold if asterisks <code>*</code> are there at the start and end of text, the same feature we have here on Stack Overflow. Although I convert <code>**</code> to <code><b></code>, due to output escaping it becomes <code>&lt;b&gt;</code>. What is the right approach to achieve this?</p>
<p>template file contains <code>{{ anidea.description|format_text}}</code></p>
<p><code>format_text</code> is custom template filter</p>
<p>code..</p>
<pre><code>from django import template
from django.utils.safestring import mark_safe
register = template.Library()
@register.filter(name='format_text')
def custom_formating(value):
for word in value.split():
start = word[:2]
end = word[-2:]
if start == '**' and end == '**':
word = word[2:-2]
word = '<b>' +word+ '</b>'
mark_safe(word)
return value
</code></pre>
| -1 | 2016-09-27T12:24:38Z | 39,724,740 | <p>if you want the full suite of all markdown features, go with an existing markdown library.</p>
<p>if you just want <b> to print directly to the source code w/o escaping, use</p>
<pre><code> {{ some_var|safe }}
</code></pre>
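<p>A sketch of the markdown route, assuming the third-party <code>markdown</code> package is installed (the filter name simply mirrors the one from the question):</p>
<pre><code>import markdown

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter(name='format_text')
def format_text(value):
    # Converts **bold** (and the rest of markdown) to HTML, then marks it safe for output
    return mark_safe(markdown.markdown(value))
</code></pre>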
| 0 | 2016-09-27T12:29:16Z | [
"python",
"django"
]
|
Bold text with asterisks | 39,724,626 | <p>In my Django project I want to make text bold if asterisks <code>*</code> are there at the start and end of text, the same feature we have here on Stack Overflow. Although I convert <code>**</code> to <code><b></code>, due to output escaping it becomes <code>&lt;b&gt;</code>. What is the right approach to achieve this?</p>
<p>template file contains <code>{{ anidea.description|format_text}}</code></p>
<p><code>format_text</code> is custom template filter</p>
<p>code..</p>
<pre><code>from django import template
from django.utils.safestring import mark_safe
register = template.Library()
@register.filter(name='format_text')
def custom_formating(value):
for word in value.split():
start = word[:2]
end = word[-2:]
if start == '**' and end == '**':
word = word[2:-2]
word = '<b>' +word+ '</b>'
mark_safe(word)
return value
</code></pre>
| -1 | 2016-09-27T12:24:38Z | 39,727,294 | <p>I did it in following way.</p>
<p>views.py</p>
<pre><code>i.description = i.description.split() #use of split()
</code></pre>
<p>template file (format_text is <code>custom template filter</code>)</p>
<pre><code>{% for text in anidea.description %}
{{ text|format_text }}
{% endfor %}
</code></pre>
<p>filter</p>
<pre><code>@register.filter(name='format_text')
def custom_formating(value):
start = value[:2]
end = value[-2:]
if start == '**' and end == '**':
value = value[2:-2]
value = '<b>' +value+ '</b>'
return mark_safe(value)
else:
return value
</code></pre>
<p>with this way I can achieve output escaping for description and desired text formatting.</p>
| 0 | 2016-09-27T14:25:23Z | [
"python",
"django"
]
|
Get the range of colors in an image using Python OpenCV | 39,724,764 | <p>I'm trying to build a portable green screen photo booth. There's no way to know the lighting conditions ahead of time so I can't hard code the color values for chroma key.</p>
<p>I thought the easiest way to get around this issue would be to build a calibration script that will take a picture of the blank background, get the "highest" and "lowest" colors from it, and use those to produce the background mask.</p>
<p>I'm running into trouble because while I can get the highest or lowest value in each of the channels, there's no guarantee that when the three are combined they match the actual color range of the image.</p>
<p>UPDATE: I have changed to use only the hue channel. This helped a lot but still isn't perfect. I think better lighting will make a difference but if you can see any way to help I would be grateful.</p>
<p>Here's what I have (edited with updates)</p>
<pre><code>import cv2
import numpy as np
screen = cv2.imread("screen.jpg")
test = cv2.imread("test.jpg")
hsv_screen = cv2.cvtColor(screen, cv2.COLOR_BGR2HSV)
hsv_test = cv2.cvtColor(test, cv2.COLOR_BGR2HSV)
hueMax = hsv_screen[:,:,0].max()
hueMin = hsv_screen[:,:,0].min()
lowerBound = np.array([hueMin-10,100,100], np.uint8)
upperBound = np.array([hueMax+10,255,255], np.uint8)
mask = cv2.inRange(hsv_test,lowerBound,upperBound)
cv2.imwrite("mask.jpg",mask)
output_img = cv2.bitwise_and(test,test,mask=inv_mask)
cv2.imwrite("output.jpg",output_img)
</code></pre>
<p>screen and test images</p>
<p>screen.jpg
<a href="http://i.stack.imgur.com/BGFQI.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/BGFQI.jpg" alt="screen"></a></p>
<p>test.jpg
<a href="http://i.stack.imgur.com/ALJpc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/ALJpc.jpg" alt="test.jpg"></a></p>
| 1 | 2016-09-27T12:31:12Z | 39,732,275 | <p>Hue values fall into well-known intervals for each color. Take the green interval and do an <code>inRange</code> on it, ignoring saturation and value; that gives you every shade of green. To minimize noise, limit the value range to roughly 20% to 80%, which avoids the regions that are much too bright or much too dark and still ensures you catch whatever is green anywhere in the screen.
Detecting using only the hue channel is dependable.</p>
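<p>A minimal, untested sketch of that approach; the hue bounds (roughly 35-85 for green on OpenCV's 0-179 hue scale) and the 20%-80% value window are assumptions to tune for your lighting:</p>
<pre><code>import cv2
import numpy as np

test = cv2.imread("test.jpg")
hsv_test = cv2.cvtColor(test, cv2.COLOR_BGR2HSV)

# Assumed green hue interval; saturation left open, value limited to ~20%-80% of 255
lower_green = np.array([35, 0, 51], np.uint8)
upper_green = np.array([85, 255, 204], np.uint8)

mask = cv2.inRange(hsv_test, lower_green, upper_green)
cv2.imwrite("mask.jpg", mask)
</code></pre>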
| 1 | 2016-09-27T18:52:10Z | [
"python",
"opencv"
]
|
I get a linear regression using the SVR by python scikit-learn when the data is not linear | 39,724,999 | <pre><code>train.sort_values(by=['mass'], ascending=True, inplace=True)
x = train['mass']
y = train['pa']
# Fit regression model
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
x_train = x.reshape(x.shape[0], 1)
x = x_train
y_rbf = svr_rbf.fit(x, y).predict(x)
y_lin = svr_lin.fit(x, y).predict(x)
y_poly = svr_poly.fit(x, y).predict(x)
# look at the results
plt.scatter(x, y, c='k', label='data')
plt.hold('on')
plt.plot(x, y_rbf, c='g', label='RBF model')
plt.plot(x, y_lin, c='r', label='Linear model')
plt.plot(x, y_poly, c='b', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
</code></pre>
<p>The code is copied from <a href="http://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html</a>.
And what I change is only the dataset. I do not know what is the matter.</p>
| 0 | 2016-09-27T12:41:54Z | 39,733,330 | <p>This most likely has to do with the scale of your data. You are using the same penalty hyper-parameter as they are in the example, but your y values are orders of magnitude greater. Thus, the SVR algorithm will favor simplicity over accuracy, since your penalty for error is now small compared to your y values. You need to increase C to, say, <code>1e6</code> (or normalize your y values). </p>
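<p>A minimal sketch of that change, reusing the <code>x</code> and <code>y</code> from your code (the exact value of <code>C</code> is just an assumed starting point to tune):</p>
<pre><code>from sklearn.svm import SVR

# A much larger penalty so fitting errors matter relative to the large y values
svr_rbf = SVR(kernel='rbf', C=1e6, gamma=0.1)
y_rbf = svr_rbf.fit(x, y).predict(x)
</code></pre>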
<p>You can see that this is the case if you make C very small in their example code, say <code>C=.00001</code>. Then you get the same kind of results that you are getting in your code. </p>
<p>(More on the algorithm <a href="http://scikit-learn.org/stable/modules/svm.html#svm-regression" rel="nofollow">here</a>.)</p>
<p>As a side note, a huge part of Machine Learning practice is hyper-parameter tuning. This is a good example of how even a good base model can yield bad results if provided with the wrong hyper-parameters. </p>
| 0 | 2016-09-27T20:01:22Z | [
"python",
"machine-learning",
"scikit-learn",
"svm"
]
|
How do I execute multiple shell commands with a single python subprocess call? | 39,725,120 | <p>Ideally it should be like a list of commands that I want to execute and execute all of them using a single subprocess call. I was able to do something similar by storing all the commands as a shell script and calling that script using subprocess, but I want a pure python solution.I will be executing the commands with shell=True and yes I understand the risks. </p>
| 0 | 2016-09-27T12:47:25Z | 39,725,358 | <p>Use semicolon to chain them if they're independent.</p>
<p>For example, (Python 3)</p>
<pre><code>>>> import subprocess
>>> result = subprocess.run('echo Hello ; echo World', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
>>> result
CompletedProcess(args='echo Hello ; echo World', returncode=0, stdout=b'Hello\nWorld\n')
</code></pre>
<p>But technically that's not a pure Python solution, because of <code>shell=True</code>. The argument processing is actually done by the shell. (You may think of it as executing <code>/bin/sh -c "$your_arguments"</code>.)</p>
<p>If you want a somewhat purer solution, you'll have to use <code>shell=False</code> and loop over your commands. As far as I know, there is no way to start multiple subprocesses with a single call to the <code>subprocess</code> module.</p>
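<p>A rough illustration of that loop (the command list here is only an example):</p>
<pre><code>import subprocess

commands = [['echo', 'Hello'], ['echo', 'World']]
for cmd in commands:
    # Each command runs as its own subprocess; no shell is involved
    subprocess.run(cmd, check=True)
</code></pre>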
| 0 | 2016-09-27T12:59:44Z | [
"python",
"subprocess"
]
|
insertion sort in python not working | 39,725,226 | <p>I have tried the following code for insertion sort in python</p>
<pre><code>a=[int(x) for x in input().split()]
for i in range(1,len(a)):
temp=a[i]
for k in range (i,1,-1):
a[k]=a[k-1]
if a[k]<temp:
a[k]=temp
break
print(a)
</code></pre>
<p>input: 6 4 3 2 5 8 1</p>
<p>output: [6, 4, 4, 4, 4, 5, 8]</p>
| -4 | 2016-09-27T12:52:58Z | 39,725,917 | <p>It <em>does not work</em> because your implementation is faulty.<br>
When trying to shift the partially sorted list, you overwrite existing numbers by assigning <code>a[k] = a[k-1]</code> -- but where's the former value of <code>a[k]</code> then?</p>
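<p>For comparison, a minimal sketch of an in-place fix that saves the value before shifting (this is just the classic textbook form, not the approach used below):</p>
<pre><code>a = [int(x) for x in input().split()]
for i in range(1, len(a)):
    temp = a[i]                      # remember the value before anything is shifted
    k = i
    while k > 0 and a[k - 1] > temp:
        a[k] = a[k - 1]              # shift the larger element one slot to the right
        k -= 1
    a[k] = temp                      # drop the saved value into the gap
print(a)
</code></pre>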
<p>A very basic solution (yet not in-place as the original definition on a single list is defined) could look like this.</p>
<pre><code>inp = '1 4 6 3 1 6 3 5 8 1'
# 'a' is the input list
a = [int(x) for x in inp.split()]
# 'r' is the sorted list
r = []
# In the original descriptions, insertion sort operates
# on a single list while iterating over it. However, this
# may lead to major failures, thus you'd better carry the
# sorted list in a separate variable if memory is not
# a limiting factor (which it can hardly be for lists that
# are typed in by the user).
for e in a:
if not len(r):
# The first item is the inialization
r.append(e)
else:
# For each subsequent item, find the spot in 'r'
# where it has to go.
idx = 0
while idx < len(r) and r[idx] < e: idx += 1
# We are lazy and use insert() instead of manually
# extending the list by 1 place and copying over
# all subsequent items [idx:] to the right
r.insert(idx, e)
print(r)
</code></pre>
| 0 | 2016-09-27T13:26:51Z | [
"python",
"insertion-sort"
]
|
Nested defaultdicts | 39,725,255 | <p>Why does the below work</p>
<pre><code>x = defaultdict(dict)
for a,b,c in [('eg', 'ef', 'ee'), ('eg', 'eu', 'e4'), ('kk', 'nn', 'bb')]:
x[a][b] = c
</code></pre>
<p>And the below throws an error ?</p>
<pre><code>x = defaultdict(dict)
for a,b,c,d in [('eg', 'ef', 'ee', 'gg'), ('eg', 'eu', 'e4', 'hh'),
('kk', 'nn', 'bb', 'ff')]:
x[a][b][c] = d
</code></pre>
| 1 | 2016-09-27T12:54:11Z | 39,735,596 | <p>The issue here is that <code>defaultdict</code> accepts a callable, which is used as a factory to generate the value when a key is missing. Once you understand that, the behaviour is clear:</p>
<pre><code>x = defaultdict(dict)
x # it's a default dict
x['a'] # it's just a dict()
x['a']['b'] = 'c' # it's just setting the 'b' item in the dict x['a']
x['a']['b']['z'] # oops, error, because x['a']['b'] is not a dict!
</code></pre>
<p>If you only require a finite level of nesting, using a plain old <code>dict</code> with <code>tuple</code> keys is usually a much easier data structure to work with. That will work fine for both the 2-d and 3-d examples shown in your question. </p>
<p>If you require arbitrary levels of nesting, however, you can consider the recursive <code>defaultdict</code> example shown <a href="http://stackoverflow.com/a/19189356/674039">here</a>.</p>
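<p>For reference, a short sketch of both options (the second is the usual recursive trick from that linked answer):</p>
<pre><code>from collections import defaultdict

# Option 1: a plain dict keyed by tuples -- fine for a fixed depth
flat = {}
flat['eg', 'ef', 'ee'] = 'gg'

# Option 2: a defaultdict whose factory builds more of the same, for arbitrary depth
def tree():
    return defaultdict(tree)

nested = tree()
nested['eg']['ef']['ee'] = 'gg'
</code></pre>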
| 1 | 2016-09-27T22:58:32Z | [
"python",
"python-2.7"
]
|
DeprecationWarning while using knn algorithm in scikit-learn | 39,725,403 | <p>I am trying my hands on scikit-learn library. I imported the iris dataset, and tried to train knn algorithm to predict some outcomes. Here is the code:</p>
<pre><code>from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
iris = datasets.load_iris()
knn = KNeighborsClassifier(n_neighbors=1)
X = iris.data
y = iris.target
print X.shape
print y.shape
#training the model
knn.fit(X, y)
knn.predict([3, 4, 5, 2])
</code></pre>
<p>But I get the following error:</p>
<pre><code>(150L, 4L)
(150L,)
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
DeprecationWarning)
</code></pre>
<p>I searched on google and found some workarounds. I tried using <code>X = X.reshape(-1, 1)</code> and also <code>X = X.reshape(1, -1)</code>, but then I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "E:/Analytics Practice/Social Media Analytics/Python Services/DataAnalysis/sk-learn-dir/test.py", line 13, in <module>
knn.fit(X, y)
File "C:\python-venv-test-2.7.10\lib\site-packages\sklearn\neighbors\base.py", line 778, in fit
X, y = check_X_y(X, y, "csr", multi_output=True)
File "C:\python-venv-test-2.7.10\lib\site-packages\sklearn\utils\validation.py", line 520, in check_X_y
check_consistent_length(X, y)
File "C:\python-venv-test-2.7.10\lib\site-packages\sklearn\utils\validation.py", line 176, in check_consistent_length
"%s" % str(uniques))
ValueError: Found arrays with inconsistent numbers of samples: [150 600]
</code></pre>
<p>What is the correct format of dimensions that the knn algorithm requires to be trained in scikit-learn?</p>
| -2 | 2016-09-27T13:02:02Z | 39,738,702 | <p>Thank you @tttthomasssss for the help. Here is what I was doing wrong:</p>
<p>When I pass <code>[3, 4, 5, 2]</code>, NumPy interprets it as a 1-d array of shape (4,), but when I pass <code>[[3, 4, 5, 2]]</code> it becomes a 2-d array of shape (1, 4). Since it is one data point with 4 different feature values, I have to call predict with <code>[[3, 4, 5, 2]]</code>. Here is the code which helped me to figure out the dimensions of both arrays:</p>
<pre><code>import numpy as np

predict_array = [3, 4, 5, 2]
predict_array = np.asarray(predict_array)
print predict_array.shape
predict_array = [[3, 4, 5, 2]]
predict_array = np.asarray(predict_array)
print predict_array.shape
</code></pre>
<p>And here is the output:</p>
<pre><code>(4L,)
(1L, 4L)
</code></pre>
| 0 | 2016-09-28T05:27:10Z | [
"python",
"scikit-learn",
"knn"
]
|
Checking if element in list by substring | 39,725,411 | <p>I have a list of urls (<code>unicode</code>), and there is a lot of repetition.
For example, urls <code>http://www.myurlnumber1.com</code> and <code>http://www.myurlnumber1.com/foo+%bar%baz%qux</code> lead to the same place.</p>
<p>So I need to weed out all of those duplicates.</p>
<p>My first idea was to check if the element's substring is in the list, like so:</p>
<pre><code>for url in list:
if url[:30] not in list:
print(url)
</code></pre>
<p>However, it tries to match the literal <code>url[:30]</code> against each list element and obviously returns all of them, since there is no element that exactly matches <code>url[:30]</code>. </p>
<p>Is there an easy way to solve this problem?</p>
<p>EDIT:</p>
<p>Often the host and path in the urls stays the same, but the parameters are different. For my purposes, a url with the same hostname and path, but different parameters are still the same url and constitute a duplicate.</p>
| 5 | 2016-09-27T13:02:35Z | 39,725,573 | <p>You can try adding another for loop, if you are fine with that.
Something like:</p>
<pre><code>for url in list:
for i in range(len(list)):
if url[:30] not in list[i]:
print(url)
</code></pre>
<p>That will compare every word with every other word to check for sameness. That's just an example, I'm sure you could make it more robust.</p>
| 0 | 2016-09-27T13:10:53Z | [
"python",
"list"
]
|
Checking if element in list by substring | 39,725,411 | <p>I have a list of urls (<code>unicode</code>), and there is a lot of repetition.
For example, urls <code>http://www.myurlnumber1.com</code> and <code>http://www.myurlnumber1.com/foo+%bar%baz%qux</code> lead to the same place.</p>
<p>So I need to weed out all of those duplicates.</p>
<p>My first idea was to check if the element's substring is in the list, like so:</p>
<pre><code>for url in list:
if url[:30] not in list:
print(url)
</code></pre>
<p>However, it tries to match the literal <code>url[:30]</code> against each list element and obviously returns all of them, since there is no element that exactly matches <code>url[:30]</code>. </p>
<p>Is there an easy way to solve this problem?</p>
<p>EDIT:</p>
<p>Often the host and path in the urls stays the same, but the parameters are different. For my purposes, a url with the same hostname and path, but different parameters are still the same url and constitute a duplicate.</p>
| 5 | 2016-09-27T13:02:35Z | 39,725,580 | <p>If you consider any netloc's to be the same you can parse with <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlparse" rel="nofollow"><code>urllib.parse</code></a></p>
<pre><code>from urllib.parse import urlparse # python2 from urlparse import urlparse
u = "http://www.myurlnumber1.com/foo+%bar%baz%qux"
print(urlparse(u).netloc)
</code></pre>
<p>Which would give you:</p>
<pre><code>www.myurlnumber1.com
</code></pre>
<p>So to get unique netlocs you could do something like:</p>
<pre><code>unique = {urlparse(u).netloc for u in urls}
</code></pre>
<p>If you wanted to keep the url scheme:</p>
<pre><code>urls = ["http://www.myurlnumber1.com/foo+%bar%baz%qux", "http://www.myurlnumber1.com"]
unique = {"{}://{}".format(u.scheme, u.netloc) for u in map(urlparse, urls)}
print(unique)
</code></pre>
<p>Presuming they all have schemes and you don't have http and https for the same netloc and consider them to be the same.</p>
<p>If you also want to add the path:</p>
<pre><code>unique = {(u.netloc, u.path) for u in map(urlparse, urls)}
</code></pre>
<p>The table of attributes is listed in the docs:</p>
<pre><code>Attribute Index Value Value if not present
scheme 0 URL scheme specifier scheme parameter
netloc 1 Network location part empty string
path 2 Hierarchical path empty string
params 3 Parameters for last path element empty string
query 4 Query component empty string
fragment 5 Fragment identifier empty string
username User name None
password Password None
hostname Host name (lower case) None
port Port number as integer, if present None
</code></pre>
<p>You just need to use whatever you consider to be the unique parts.</p>
<pre><code>In [1]: from urllib.parse import urlparse
In [2]: urls = ["http://www.url.com/foo-bar", "http://www.url.com/foo-bar?t=baz", "www.url.com/baz-qux", "www.url.com/foo-bar?t=baz"]
In [3]: unique = {"".join((u.netloc, u.path)) for u in map(urlparse, urls)}
In [4]:
In [4]: print(unique)
{'www.url.com/baz-qux', 'www.url.com/foo-bar'}
</code></pre>
| 6 | 2016-09-27T13:11:02Z | [
"python",
"list"
]
|
Solving sets of non-linear equations as an array | 39,725,419 | <p>I'm trying to solve for the intersection of the two equations: <code>y=Rx^1.75</code> and <code>y=ax^2+bx+c</code> for all rows in my dataframe (about 100K rows). Each value of <code>R,a,b,c</code> is different for each row. I can solve them one by one by iterating through the dataframe and calling <code>fsolve()</code> for each row (as done below), but I'm wondering if there is a better way to do this. </p>
<p><strong>My question is</strong>: is it possible to turn this into an array calculation, that is, to solve all the rows at once? Any ideas on how to get this calculation done faster would be really helpful. </p>
<p>Here is an example dataframe with coefficients</p>
<pre><code> R a b c
0 0.5 -0.01 -0.50 32.42
1 0.6 0.00 0.07 14.12
2 0.7 -0.01 -0.50 32.42
</code></pre>
<p>And here is the working example code that I'm using to test methods:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.optimize import *
# The fSolve function
def myFunction(zGuess,*Params):
# Get the coefficients
R,a,b,c = Params
# Get the initial guess
x,y = zGuess
F = np.empty((2))
F[0] = R*x**1.75-y
F[1] = a*x**2+b*x+c-y
return F
# Example Dataframe that is 10K rows of different coefficients
df = pd.DataFrame({"R":[0.500, 0.600,0.700],
"a":[-0.01, 0.000,-0.01],
"b":[-0.50, 0.070,-0.50],
"c":[32.42, 14.12,32.42]})
# Initial guess
zGuess = np.array([50,50])
# Make a place to store the answers
df["x"] = None
df["y"] = None
# Loop through the rows?
for index, coeffs in df.iterrows():
# Get the coefficients
Params = (coeffs["R"],coeffs["a"],coeffs["b"],coeffs["c"])
# fSolve
z = fsolve(myFunction,zGuess,args=Params)
# Set the answers
df.loc[index,"x"] = z[0]
df.loc[index,"y"] = z[1]
print df
</code></pre>
<p>============================================</p>
<h2>Solution (whose answer is faster):</h2>
<p>I got two answers below that both gave mathematically correct results. So at this point, it's all about whose calculation is faster! The test dataframe will be 3K rows. </p>
<p><strong>Answer #1 (Newton's Method)</strong></p>
<pre><code># Solution 1
import numpy as np
import pandas as pd
Count = 1000
df = pd.DataFrame({"R":[0.500, 0.600,0.700]*Count,
"a":[-0.01, 0.000,-0.01]*Count,
"b":[-0.50, 0.070,-0.50]*Count,
"c":[32.42, 14.12,32.42]*Count})
from datetime import datetime
t_start = datetime.now()
#---------------------------------
InitialGuess = 50.0
Iterations = 20
x = np.full(df["a"].shape, InitialGuess)
for i in range(Iterations):
x = x - (-df["R"]*x**1.75 + df["a"]*x**2 + df["b"]*x + df["c"])/(-1.75*df["R"]*x**0.75 + 2*df["a"]*x + df["b"])
df["x"] = x
df["y"] = df["R"]*x**1.75
df["x Error"] = df["a"]*x**2 + df["b"]*x + df["c"] - df["R"]*x**1.75
#---------------------------------
t_end = datetime.now()
print ('\n\n\nTime spent running this was:')
print(t_end - t_start)
print df
</code></pre>
<p>And the time spent was:</p>
<pre><code>Time spent running this was:
0:00:00.015000
</code></pre>
<p><strong>Answer #2 (fSolve)</strong></p>
<pre><code># Solution 2
import numpy as np
import pandas as pd
from scipy.optimize import *
Count = 1000
df = pd.DataFrame({"R":[0.500, 0.600,0.700]*Count,
"a":[-0.01, 0.000,-0.01]*Count,
"b":[-0.50, 0.070,-0.50]*Count,
"c":[32.42, 14.12,32.42]*Count})
from datetime import datetime
t_start = datetime.now()
#---------------------------------
coefs = df.values[:, 0:4]
def mfun(x, *args):
args = np.array(args[0], dtype=np.float64)
return args[:,1] * x**2 + args[:,2] * x + args[:,3] - args[:,0] * x**1.75
nrows = coefs.shape[0]
df["x"] = fsolve(mfun, np.ones(nrows) * 50, args=coefs)
df["y"] = coefs[:, 0] * df["x"]**1.75
#---------------------------------
t_end = datetime.now()
print ('\n\n\nTime spent running this was:')
print(t_end - t_start)
print df
</code></pre>
<p>And the time spent was:</p>
<pre><code>Time spent running this was:
0:00:35.786000
</code></pre>
<h2>Final thoughts:</h2>
<p>For this particular case, Newton's method was much faster (I can run 300K rows in <code>0:00:01.139000</code>!). Thank you both!</p>
| 1 | 2016-09-27T13:02:56Z | 39,726,868 | <p>Maybe you can use Newton's method:</p>
<pre><code>import numpy as np
data = np.array(
[[0.5, -0.01, -0.50, 32.42],
[0.6, 0.00, 0.07, 14.12],
[0.7, -0.01, -0.50, 32.42]])
R, a, b, c = data.T
x = np.full(a.shape, 10.0)
m = 1.0
for i in range(20):
x = x - m * (-R*x**1.75 + a*x**2 + b*x + c)/(-1.75*R*x**0.75 + 2*a*x + b)
print(a*x**2 + b*x + c - R * x**1.75)
</code></pre>
<p>output:</p>
<pre><code>[ 0.00000000e+00 1.77635684e-15 3.55271368e-15]
</code></pre>
<p>be careful to choose the iteration count and initial value of x.</p>
| 0 | 2016-09-27T14:06:36Z | [
"python",
"pandas",
"numpy",
"scipy"
]
|
Solving sets of non-linear equations as an array | 39,725,419 | <p>I'm trying to solve for the intersection of the two equations: <code>y=Rx^1.75</code> and <code>y=ax^2+bx+c</code> for all rows in my dataframe (about 100K rows). Each value of <code>R,a,b,c</code> is different for each row. I can solve them one by one by iterating through the dataframe and calling <code>fsolve()</code> for each row (as done below), but I'm wondering if there is a better way to do this. </p>
<p><strong>My question is</strong>: is it possible to turn this into an array calculation, that is, to solve all the rows at once? Any ideas on how to get this calculation done faster would be really helpful. </p>
<p>Here is an example dataframe with coefficients</p>
<pre><code> R a b c
0 0.5 -0.01 -0.50 32.42
1 0.6 0.00 0.07 14.12
2 0.7 -0.01 -0.50 32.42
</code></pre>
<p>And here is the working example code that I'm using to test methods:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.optimize import *
# The fSolve function
def myFunction(zGuess,*Params):
# Get the coefficients
R,a,b,c = Params
# Get the initial guess
x,y = zGuess
F = np.empty((2))
F[0] = R*x**1.75-y
F[1] = a*x**2+b*x+c-y
return F
# Example Dataframe that is 10K rows of different coefficients
df = pd.DataFrame({"R":[0.500, 0.600,0.700],
"a":[-0.01, 0.000,-0.01],
"b":[-0.50, 0.070,-0.50],
"c":[32.42, 14.12,32.42]})
# Initial guess
zGuess = np.array([50,50])
# Make a place to store the answers
df["x"] = None
df["y"] = None
# Loop through the rows?
for index, coeffs in df.iterrows():
# Get the coefficients
Params = (coeffs["R"],coeffs["a"],coeffs["b"],coeffs["c"])
# fSolve
z = fsolve(myFunction,zGuess,args=Params)
# Set the answers
df.loc[index,"x"] = z[0]
df.loc[index,"y"] = z[1]
print df
</code></pre>
<p>============================================</p>
<h2>Solution (whose answer is faster):</h2>
<p>I got two answers below that both gave mathematically correct results. So at this point, it's all about whose calculation is faster! The test dataframe will be 3K rows. </p>
<p><strong>Answer #1 (Newton's Method)</strong></p>
<pre><code># Solution 1
import numpy as np
import pandas as pd
Count = 1000
df = pd.DataFrame({"R":[0.500, 0.600,0.700]*Count,
"a":[-0.01, 0.000,-0.01]*Count,
"b":[-0.50, 0.070,-0.50]*Count,
"c":[32.42, 14.12,32.42]*Count})
from datetime import datetime
t_start = datetime.now()
#---------------------------------
InitialGuess = 50.0
Iterations = 20
x = np.full(df["a"].shape, InitialGuess)
for i in range(Iterations):
x = x - (-df["R"]*x**1.75 + df["a"]*x**2 + df["b"]*x + df["c"])/(-1.75*df["R"]*x**0.75 + 2*df["a"]*x + df["b"])
df["x"] = x
df["y"] = df["R"]*x**1.75
df["x Error"] = df["a"]*x**2 + df["b"]*x + df["c"] - df["R"]*x**1.75
#---------------------------------
t_end = datetime.now()
print ('\n\n\nTime spent running this was:')
print(t_end - t_start)
print df
</code></pre>
<p>And the time spent was:</p>
<pre><code>Time spent running this was:
0:00:00.015000
</code></pre>
<p><strong>Answer #2 (fSolve)</strong></p>
<pre><code># Solution 2
import numpy as np
import pandas as pd
from scipy.optimize import *
Count = 1000
df = pd.DataFrame({"R":[0.500, 0.600,0.700]*Count,
"a":[-0.01, 0.000,-0.01]*Count,
"b":[-0.50, 0.070,-0.50]*Count,
"c":[32.42, 14.12,32.42]*Count})
from datetime import datetime
t_start = datetime.now()
#---------------------------------
coefs = df.values[:, 0:4]
def mfun(x, *args):
args = np.array(args[0], dtype=np.float64)
return args[:,1] * x**2 + args[:,2] * x + args[:,3] - args[:,0] * x**1.75
nrows = coefs.shape[0]
df["x"] = fsolve(mfun, np.ones(nrows) * 50, args=coefs)
df["y"] = coefs[:, 0] * df["x"]**1.75
#---------------------------------
t_end = datetime.now()
print ('\n\n\nTime spent running this was:')
print(t_end - t_start)
print df
</code></pre>
<p>And the time spent was:</p>
<pre><code>Time spent running this was:
0:00:35.786000
</code></pre>
<h2>Final thoughts:</h2>
<p>For this particular case, Newton's method was much faster (I can run 300K rows in <code>0:00:01.139000</code>!). Thank you both!</p>
| 1 | 2016-09-27T13:02:56Z | 39,730,719 | <p>You could get rid of one variable, then use Numpy's array broadcasting:</p>
<pre><code># Your `df`:
#R a b c x y
#0 0.5 -0.01 -0.50 32.42 9.69483 26.6327
#1 0.6 0.00 0.07 14.12 6.18463 14.5529
#2 0.7 -0.01 -0.50 32.42 8.17467 27.6644
# Solved in one go
coefs = df.values[:, 0:4]
def mfun(x, *args):
args = np.array(args[0], dtype=np.float64)
return args[:,1] * x**2 + args[:,2] * x + args[:,3] - args[:,0] * x**1.75
nrows = coefs.shape[0]
x = fsolve(mfun, np.ones(nrows) * 50, args=coefs)
y = coefs[:, 0] * x**1.75
x, y
#(array([ 9.69482605, 6.18462999, 8.17467496]),
#array([26.632690454652423, 14.552924099681404, 27.66440941242009], dtype=object))
</code></pre>
| 0 | 2016-09-27T17:17:53Z | [
"python",
"pandas",
"numpy",
"scipy"
]
|
Only allowing a loop of odd numbers in a function | 39,725,498 | <p>I am teaching myself python but am getting really stuck on this question. It asks to "Write a function which accepts as input a list of odd numbers. Loop over the list of odd numbers and turn each into an even number. Store each even number in a new list and return that new list." </p>
<p>I'm happy with the latter part of the question but struggling with only allowing the input to be odd numbers. </p>
<p>Here's what I've written so far, it works for any odd list you submit e.g. ([1,3,5]) and it works when you start with an even number e.g. ([2,3,5]) but I can't get it to work for when the even number is mid way through the list e.g. ([1,2,3]) - I want it to print this can't be done. </p>
<pre><code>def odd_to_even(x):
for i in x:
if i %2 == 0:
print('This is not an odd number')
break
else:
list = []
for n in x:
list.append(n -1)
return list
</code></pre>
| 1 | 2016-09-27T13:07:15Z | 39,725,727 | <p>I would agree with the comments by @jonrsharpe, @ShadowRanger, and @deceze that you probably don't need to include testing, but it wouldn't hurt. I'll use @deceze's line for that check here. Remember, you must declare your list <strong>outside</strong> the loop using it, or the loop will reset it each iteration. Even better to change the names to make things clearer.</p>
<pre><code>def odd_to_even(input_list):
if any(i % 2 == 0 for i in input_list): raise ValueError
output_list = []
for i in input_list:
output_list.append(i - 1)
return output_list
</code></pre>
<p>To incorporate @deceze's good one-liner and keep the validation:</p>
<pre><code>def odd_to_even(input_list):
if any(i % 2 == 0 for i in input_list): raise ValueError
return [i - 1 for i in input_list]
</code></pre>
<p>You asked in a comment what is weird about the double looping, so I'd like to add a small explanation for that here. Sometimes you want to have a loop within a loop, but this was not one of those cases. You have a single list, and looping over it one time is sufficient in this case for you to:</p>
<ol>
<li>Determine whether all numbers inside the list are odd (validate the list)</li>
<li>Take each number, modify it, and add it to your output list</li>
</ol>
<p>By looping a second time inside your first loop, you would end up looping over the list for each time you looped over the list. Maybe that phrasing is confusing. Let's say your input list was <code>[1, 3, 5]</code>. By using a loop inside a loop, you'd end up creating a new list to output 3 times, because you'd create your output one time for each item in the input. I hope that helps clear it up for you.</p>
| 2 | 2016-09-27T13:18:05Z | [
"python",
"list",
"function",
"for-loop"
]
|
Extract string from regex | 39,725,519 | <p>For the below python code, I am using regex to parse the string. However I am struggling to extract the string from the matched pattern.</p>
<pre><code>import re
rx = re.compile(
r'^(?P<interesting>.+?)-(?P<uid>\b\w{8}-(?:\w{4}-){3}\w{12}\b)(?P<junk>.+)$',
re.MULTILINE | re.VERBOSE)
test_str = u"00000 Gin-12-a19ea68e-64bf-4471-b4d1-44f6bd9c1708-62fa6ae2-599c-4ff1-8249-bf6411ce3be7-83930e63-2149-40f0-b6ff-0838596a9b89 Kin\n00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708 Kin\ntest123 test 12345678-1234-1234-1234-123456789012 junk afterwards\n"
tmp = re.findall(rx, test_str)
print(tmp[0])
</code></pre>
<p>I get the below output</p>
<pre><code>('00000 Gin-12', 'a19ea68e-64bf-4471-b4d1-44f6bd9c1708', '-62fa6ae2-599c-4ff1-8249-bf6411ce3be7-83930e63-2149-40f0-b6ff-0838596a9b89 Kin')
</code></pre>
<p>My expected output is</p>
<pre><code>00000 Gin-12
</code></pre>
| 0 | 2016-09-27T13:08:15Z | 39,725,826 | <p>You have a named group in your regex so just use it:</p>
<pre><code>import re
rx = re.compile(r'^(?P<interesting>.+?)-(?P<uid>\b\w{8}-(?:\w{4}-){3}\w{12}\b)(?P<junk>.+)$', re.MULTILINE | re.VERBOSE)
test_str = u"00000 Gin-12-a19ea68e-64bf-4471-b4d1-44f6bd9c1708-62fa6ae2-599c-4ff1-8249-bf6411ce3be7-83930e63-2149-40f0-b6ff-0838596a9b89 Kin\n00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708 Kin\ntest123 test 12345678-1234-1234-1234-123456789012 junk afterwards\n"
tmp = re.match(rx, test_str)
print(tmp.groupdict()["interesting"])
</code></pre>
| 1 | 2016-09-27T13:23:07Z | [
"python",
"regex"
]
|
How to convert to json file format in python | 39,725,906 | <p>Here is my output:</p>
<pre><code>xyz information
+-----+------+------+
| A | B | C |
+-----+------+------+
| 23 | 76 | 87 |
| 76 | 36 | 37 |
| 83 | 06 | 27 |
+-----+------+------+
</code></pre>
<p>I want to convert this output to JSON format in Python.
Can anybody suggest how to do that?</p>
| -2 | 2016-09-27T13:26:21Z | 39,726,666 | <p>Given</p>
<pre><code>xyz = '''+-----+------+------+
| A | B | C |
+-----+------+------+
| 23 | 76 | 87 |
| 76 | 36 | 37 |
| 83 | 06 | 27 |
+-----+------+------+'''
</code></pre>
<p>Do</p>
<pre><code>import json
import collections
xyz_rows = [map(str.strip, row.split('|')[1:-1]) for row in xyz.split('\n') if '|' in row]
xyz_cols = collections.OrderedDict() # OrderedDict to preserve column order
for column in zip(*xyz_rows): # rows to columns
xyz_cols[column[0]] = column[1:]
xyz_json = json.dumps(xyz_cols)
</code></pre>
<p><code>xyz_json</code> contains</p>
<pre><code>'{"A": ["23", "76", "83"], "B": ["76", "36", "06"], "C": ["87", "37", "27"]}'
</code></pre>
| 0 | 2016-09-27T13:57:26Z | [
"python",
"json",
"python-2.7"
]
|
Django 1.9 pushing our code to go live today but have static directory issue | 39,725,934 | <p>We are pushing our code up to go live today, and before we do I need to figure out where to put my static files. I have in the project directory a folder called static. Inside I have an admin and an image folder. When looking through the docs it looks like these should not be placed inside the actual project. But instead should be outside the project. These are the files that come with django when running the code <code>python manage.py collectstatic</code>. But for the css I have used on the site itself, it looks like I should have another folder called static, to place it all in. So my question is: Should I have a folder in my project directory called static where I house my css, and should I also have the <code>collectstatic</code> files folder, but held elsewhere? </p>
<p>I was told also to put my css inside a media directory. This doesn't sound correct, and I couldn't find anything in the django docs regarding static files about this. </p>
| 0 | 2016-09-27T13:27:22Z | 39,726,772 | <p>Basically, what you have to do is tell the Django <code>STATICFILES_DIRS</code> setting where all your static folders live:</p>
<pre><code>STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
os.path.join(BASE_DIR, 'mycssfolder'),
os.path.join(BASE_DIR, 'otherstaticfolder')
]
</code></pre>
<p>so when you run the <code>django-admin collectstatic</code> command it looks into all those folders for static files.</p>
| 0 | 2016-09-27T14:01:47Z | [
"python",
"django",
"django-admin",
"django-staticfiles"
]
|
Django 1.9 pushing our code to go live today but have static directory issue | 39,725,934 | <p>We are pushing our code up to go live today, and before we do I need to figure out where to put my static files. I have in the project directory a folder called static. Inside I have an admin and an image folder. When looking through the docs it looks like these should not be placed inside the actual project. But instead should be outside the project. These are the files that come with django when running the code <code>python manage.py collectstatic</code>. But for the css I have used on the site itself, it looks like I should have another folder called static, to place it all in. So my question is: Should I have a folder in my project directory called static where I house my css, and should I also have the <code>collectstatic</code> files folder, but held elsewhere? </p>
<p>I was told also to put my css inside a media directory. This doesnt sound correct, and I couldnt find anything in the django docs regarding static files about this. </p>
| 0 | 2016-09-27T13:27:22Z | 39,726,869 | <p>You can store your static files in multiple folders which can reside anywhere</p>
<p>It will look like this in settings</p>
<pre><code>STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
#You can have multiple directories here
)
</code></pre>
<p>Set the STATIC_ROOT like this in settings</p>
<pre><code>STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
</code></pre>
<p>When you run <code>$ python manage.py collectstatic</code>
This will copy all files from your static folders into the STATIC_ROOT directory.</p>
<p>The purpose of this to gather all static files in a single directory so you can serve them easily.</p>
<p>**Note</p>
<p>Set the base directory in settings</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
</code></pre>
| 1 | 2016-09-27T14:06:37Z | [
"python",
"django",
"django-admin",
"django-staticfiles"
]
|
Python - In-line boolean evaluation without IF statements | 39,726,028 | <p>I am trying to assess the value of a column of a dataframe to determine the value of another column. I did this by using an <code>if</code> statement and <code>.apply()</code> function successfully. I.e. </p>
<pre><code>if Col x < 0.3:
return y
elif Col x > 0.6:
return z
</code></pre>
<p>Etc. The problem is this takes quite a while to run with a lot of data. Instead I am trying to use the following logic to determine the new column value:</p>
<p>(x<0.3)*y + (x>0.6)*z</p>
<p>So Python evaluates TRUE/FALSE and applies the correct value. This seems to work much faster, the only thing is Python says:
"UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&' instead
unsupported[op_str]))"</p>
<p>Is this a problem? Should I be using "&"? I feel using "&" would be incorrect when multiplying.</p>
<p>Thank you!</p>
| 0 | 2016-09-27T13:31:19Z | 39,740,389 | <p>From what I have read so far, the performance gap is caused by the <em>parser</em> backend chosen by <code>pandas</code>. There's the regular python parser as a backend and, additionally, a pandas parsing backend.<br>
The docs say that there is no performance gain from using plain old python over pandas here: <a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html#pandas-eval-backends" rel="nofollow">Pandas eval Backends</a></p>
<p>However, you obviously hit a blind spot in the pandas backend; i.e. you formed an expression that cannot be evaluated using pandas. The result is that pandas falls back to the original python parsing backend, as stated in the resulting UserWarning:</p>
<blockquote>
<p>UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&' instead
unsupported[op_str]))</p>
</blockquote>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html#technical-minutia-regarding-expression-evaluation" rel="nofollow">(More on this topic)</a></p>
<h2>Timing evaluations</h2>
<p>So, as we now know about different parsing backends, it's time to check a few options provided by <code>pandas</code> that are suitable for your desired dataframe operation (complete script below):</p>
<pre><code>expr_a = '''(a < 0.3) * 1 + (a > 0.6) * 3 + (a >= 0.3) * (a <= 0.6) * 2'''
</code></pre>
<ol>
<li>Evaluate the expression as a string using the <code>pandas</code> backend</li>
<li>Evaluate the <strong>same</strong> string using the <code>python</code> backend</li>
<li>Evaluate the expression string with external variable reference using <code>pandas</code></li>
<li>Solve the problem using <code>df.apply()</code></li>
<li>Solve the problem using <code>df.applymap()</code></li>
<li>Direct submission of the expression (no string evaluation)</li>
</ol>
<p>The results on my machine for a dataframe with 10,000,000 random float values in one column are:</p>
<pre><code>(1) Eval (pd) 0.240498406269
(2) Eval (py) 0.197919774926
(3) Eval @ (pd) 0.200814546686
(4) Apply 3.242620778595
(5) ApplyMap 6.542354086152
(6) Direct 0.140075372736
</code></pre>
<p>The major points explaining the performance differences are most likely the following:</p>
<ul>
<li>Using a python function (as in <code>apply()</code> and <code>applymap()</code>) is (<strong>of course!</strong>) much slower than using functionality completely implemented in C</li>
<li>String evaluation is expensive (see (6) vs (2))</li>
<li>The overhead (1) has over (2) is probably the backend choice and fallback to also using the <code>python</code> backend, because <code>pandas</code> does not evaluate <code>bool * int</code>.</li>
</ul>
<p>Nothing new, eh?</p>
<h2>How to proceed</h2>
<p>We basically just proved what our gut feeling was telling us before (namely: pandas chooses the right backend for a task).</p>
<p>As a consequence, I think it is <strong>totally okay</strong> to ignore the UserWarning, as long as you know the underlying <em>hows and whys</em>.</p>
<p>Thus: Keep going and have <code>pandas</code> use the fastest of all implementations, which is, as usual, the C functions.</p>
<h2>The Test Script</h2>
<pre><code>from __future__ import print_function
import sys
import random
import pandas as pd
import numpy as np
from timeit import default_timer as timer
def conditional_column(val):
if val < 0.3:
return 1
elif val > 0.6:
return 3
return 2
if __name__ == '__main__':
nr = 10000000
df = pd.DataFrame({
'a': [random.random() for _ in range(nr)]
})
print(nr, 'rows')
expr_a = '''(a < 0.3) * 1 + (a > 0.6) * 3 + (a >= 0.3) * (a <= 0.6) * 2'''
expr_b = '''(@df.a < 0.3) * 1 + (@df.a > 0.6) * 3 + (@df.a >= 0.3) * (@df.a <= 0.6) * 2'''
fmt = '{:16s} {:.12f}'
# Evaluate the string expression using pandas parser
t0 = timer()
b = df.eval(expr_a, parser='pandas')
print(fmt.format('(1) Eval (pd)', timer() - t0))
# Evaluate the string expression using python parser
t0 = timer()
c = df.eval(expr_a, parser='python')
print(fmt.format('(2) Eval (py)', timer() - t0))
# Evaluate the string expression using pandas parser with external variable access (@)
t0 = timer()
d = df.eval(expr_b, parser='pandas')
print(fmt.format('(3) Eval @ (pd)', timer() - t0))
# Use apply to map the if/else function to each row of the df
t0 = timer()
d = df['a'].apply(conditional_column)
print(fmt.format('(4) Apply', timer() - t0))
# Use element-wise apply (WARNING: requires a dataframe and walks ALL cols AND rows)
t0 = timer()
e = df.applymap(conditional_column)
print(fmt.format('(5) ApplyMap', timer() - t0))
# Directly access the pandas series objects returned by boolean expressions on columns
t0 = timer()
f = (df['a'] < 0.3) * 1 + (df['a'] > 0.6) * 3 + (df['a'] >= 0.3) * (df['a'] <= 0.6) * 2
print(fmt.format('(6) Direct', timer() - t0))
</code></pre>
| 1 | 2016-09-28T07:10:14Z | [
"python",
"performance",
"if-statement",
"boolean",
"apply"
]
|
difference between range / len etc. when iterating over tuples | 39,726,103 | <p>One thing upfront: I am fairly new to the coding world so maybe my question is a bit stupid ... I was trying to write a function that returns every other element of a tuple. The easiest way obviously is </p>
<pre><code>def oddTuples(aTup):
return aTup[::2]
</code></pre>
<p>I tried to solve it differently by using the following code</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in len(aTup):
if i%2 != 0:
newTup = newTup + (i,)
return newTup
</code></pre>
<p>But that doesn't give me back anything at all.</p>
<p>I thought <code>i</code> (if used over <code>len</code>) gives back the position, so if <code>aTup = ((12, 34, 'abc', 'dfdf', 2340))</code> the return would be <code>newTup = ((12, 'abc', 2340))</code>.</p>
<p>What's the <code>i</code> iterating over when used with <code>range</code>, <code>len</code> or -in that case- while iterating over <code>for i in aTup:</code>?</p>
| 0 | 2016-09-27T13:34:18Z | 39,726,269 | <pre><code>newTup = ()
b = True
for i in aTup:
if b:
newTup = newTup + (i,)
b = not b
return newTup
</code></pre>
<p>Try that on for size. The <code>for</code> statement gives us the tuple values one by one, and the <code>boolean</code> b lets us skip every other one. </p>
<p>The best way to do this is the way you put at the top of your post. If you don't understand what it's doing, then I suggest researching list slicing.</p>
| 0 | 2016-09-27T13:41:32Z | [
"python",
"for-loop",
"range",
"iteration",
"tuples"
]
|
difference between range / len etc. when iterating over tuples | 39,726,103 | <p>One thing upfront: I am fairly new to the coding world so maybe my question is a bit stupid ... I was trying to write a function that returns every other element of a tuple. The easiest way obviously is </p>
<pre><code>def oddTuples(aTup):
return aTup[::2]
</code></pre>
<p>I tried to solve it differently by using the following code</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in len(aTup):
if i%2 != 0:
newTup = newTup + (i,)
return newTup
</code></pre>
<p>But that doesn't give me back anything at all.</p>
<p>I thought <code>i</code> (if used over <code>len</code>) gives back the position, so if <code>aTup = ((12, 34, 'abc', 'dfdf', 2340))</code> the return would be <code>newTup = ((12, 'abc', 2340))</code>.</p>
<p>What's the <code>i</code> iterating over when used with <code>range</code>, <code>len</code> or -in that case- while iterating over <code>for i in aTup:</code>?</p>
| 0 | 2016-09-27T13:34:18Z | 39,726,284 | <pre><code>for i in len(aTup):
</code></pre>
<p>Will raise an error because <code>len()</code> returns an integer which can not be iterated over in a <code>for</code> loop.</p>
<p>In the case of:</p>
<pre><code>for i in range(len(aTup)):
</code></pre>
<p>In each iteration of the loop <code>i</code> will be an integer starting from 0 and up to the length of your tuple <strong>- 1</strong>.</p>
<p>In the case of:</p>
<pre><code>for i in aTup:
</code></pre>
<p>Each <code>i</code> will be a member of the tuple. The best way to get used to how these things work is to just pop open an interactive interpreter and do some experiments!</p>
<pre><code>>>> aTup = ('hello', 'world')
>>> for i in len(aTup):
... print i
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
>>> for i in range(len(aTup)):
... print i
...
0
1
>>> for i in aTup:
... print i
...
hello
world
>>>
</code></pre>
| 0 | 2016-09-27T13:41:46Z | [
"python",
"for-loop",
"range",
"iteration",
"tuples"
]
|
difference between range / len etc. when iterating over tuples | 39,726,103 | <p>One thing upfront: I am fairly new to the coding world so maybe my question is a bit stupid ... I was trying to write a function that returns every other element of a tuple. The easiest way obviously is </p>
<pre><code>def oddTuples(aTup):
return aTup[::2]
</code></pre>
<p>I tried to solve it differently by using the following code</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in len(aTup):
if i%2 != 0:
newTup = newTup + (i,)
return newTup
</code></pre>
<p>But that doesn't give me back anything at all.</p>
<p>I thought <code>i</code> (when used with <code>len</code>) gives back the position, so if <code>aTup = ((12, 34, 'abc', 'dfdf', 2340))</code> the return would be <code>newTup = ((12, 'abc', 2340))</code>.</p>
<p>What's the <code>i</code> iterating over when used with <code>range</code>, <code>len</code> or -in that case- while iterating over <code>for i in aTup:</code>?</p>
| 0 | 2016-09-27T13:34:18Z | 39,726,287 | <p>The <code>i</code> will not give you the index; instead it will return the value.</p>
<p>You can solve your problem simply by defining a local variable that is incremented on each iteration, and using that to get the odd-indexed values.</p>
<p>Summary: use a local variable to act as an index, since Python's <code>for</code> loop gives you the value itself.</p>
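<p>A minimal sketch of that idea (the counter name <code>idx</code> is my own assumption, not from the original post):</p>
<pre><code>def oddTuples(aTup):
    newTup = ()
    idx = 0                      # manually maintained index
    for value in aTup:
        if idx % 2 == 0:         # positions 0, 2, 4, ...
            newTup = newTup + (value,)
        idx += 1
    return newTup

print oddTuples((12, 34, 'abc', 'dfdf', 2340))   # (12, 'abc', 2340)
</code></pre>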
| 0 | 2016-09-27T13:42:02Z | [
"python",
"for-loop",
"range",
"iteration",
"tuples"
]
|
difference between range / len etc. when iterating over tuples | 39,726,103 | <p>One thing upfront: I am fairly new to the coding world so maybe my question is a bit stupid ... I was trying to write a function that returns every other element of a tuple. The easiest way obviously is </p>
<pre><code>def oddTuples(aTup):
return aTup[::2]
</code></pre>
<p>I tried to solve it differently by using the following code</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in len(aTup):
if i%2 != 0:
newTup = newTup + (i,)
return newTup
</code></pre>
<p>But that doesn't give me back anything at all.</p>
<p>I thought <code>i</code> (when used with <code>len</code>) gives back the position, so if <code>aTup = ((12, 34, 'abc', 'dfdf', 2340))</code> the return would be <code>newTup = ((12, 'abc', 2340))</code>.</p>
<p>What's the <code>i</code> iterating over when used with <code>range</code>, <code>len</code> or -in that case- while iterating over <code>for i in aTup:</code>?</p>
| 0 | 2016-09-27T13:34:18Z | 39,726,292 | <p>Here is an example you can use to finish the program.</p>
<pre><code>myTup = (1,2,3,4,5,6,7,8)
for i in range(len(myTup)):
if i%2 != 0:
print("Tuple items: " ,myTup[i])
print("i here is: " , i)
</code></pre>
<p><strong>len</strong> -> gives you the length of an object; e.g. <code>len(myTup)</code> is 8.</p>
<p><strong>range</strong> -> produces the sequence of indices (0 up to the length minus 1) that the loop iterates over.</p>
<p><strong>i</strong> -> is the index.</p>
<p>Here is the output </p>
<pre><code>Tuple items: 2
i here is: 1
Tuple items: 4
i here is: 3
Tuple items: 6
i here is: 5
Tuple items: 8
i here is: 7
</code></pre>
<p>You can get the same output by running the code below without <code>range</code> and <code>len</code>, but only because in this particular tuple each element happens to equal its index plus one, so <code>myTup[i]</code> still lines up; with arbitrary values it would not.</p>
<pre><code>for i in myTup:
if i%2 != 0:
print("Tuple items: " ,myTup[i])
print("i here is: " , i)
</code></pre>
| 1 | 2016-09-27T13:42:17Z | [
"python",
"for-loop",
"range",
"iteration",
"tuples"
]
|
difference between range / len etc. when iterating over tuples | 39,726,103 | <p>One thing upfront: I am fairly new to the coding world so maybe my question is a bit stupid ... I was trying to write a function that returns every other element of a tuple. The easiest way obviously is </p>
<pre><code>def oddTuples(aTup):
return aTup[::2]
</code></pre>
<p>I tried to solve it differently by using the following code</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in len(aTup):
if i%2 != 0:
newTup = newTup + (i,)
return newTup
</code></pre>
<p>But that doesn't give me back anything at all.</p>
<p>I thought <code>i</code> (when used with <code>len</code>) gives back the position, so if <code>aTup = ((12, 34, 'abc', 'dfdf', 2340))</code> the return would be <code>newTup = ((12, 'abc', 2340))</code>.</p>
<p>What's the <code>i</code> iterating over when used with <code>range</code>, <code>len</code> or -in that case- while iterating over <code>for i in aTup:</code>?</p>
| 0 | 2016-09-27T13:34:18Z | 39,726,296 | <p>Python's <code>for</code> loop is a <a href="https://en.wikipedia.org/wiki/Foreach_loop" rel="nofollow"><em>foreach</em> construct</a>; it'll loop over a sequence or iterable and bind the target variable (<code>i</code> in your case) to each element in that sequence one by one.</p>
<p>So for <code>for i in aTuple:</code>, with each iteration, <code>i</code> is bound to the next value from the tuple. If you used a <code>range()</code> object, then looping over that object would produce integers in the range, from start (defaulting to <code>0</code>) up to the end value minus 1 (the end value is excluded).</p>
<p>Your code, however, doesn't loop over <code>range()</code>; you try to loop over the result of <code>len(aTuple)</code>, which will be a <em>single integer</em>. That gives a <code>TypeError: 'int' object is not iterable</code> exception.</p>
<p>If you want to use the <a href="https://docs.python.org/3/library/stdtypes.html#ranges" rel="nofollow"><code>range()</code> type</a>, that's fine, but then you'll have to translate the index back into a value from <code>aTuple</code> by using indexing:</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i in range(len(aTup)):
if i%2 != 0:
newTup = newTup + (aTup[i],)
return newTup
</code></pre>
<p>Here <code>aTup[i]</code> produces the value at index <code>i</code>, where <code>i</code> is index <code>1</code>, <code>3</code>, etc., so you get every odd element. Note that this <em>differs</em> from <code>aTup[::2]</code>, which starts at <code>0</code> and includes every even-numbered element! Python starts counting at <code>0</code>, so take that into account when counting out elements.</p>
<p>You can avoid having to index back in by using the <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow"><code>enumerate()</code> function</a>; for every element in a sequence it'll produce a tuple with an ever-increasing index number. Let's use that to fix the odd-even issue, mixing in some <code>+=</code> augmented assignment too:</p>
<pre><code>def oddTuples(aTup):
newTup = ()
for i, value in enumerate(aTup):
if i % 2 == 0:
newTup += value,
return newTup
</code></pre>
<p>You don't really need the <code>(...)</code> parentheses here either; tuples are formed by commas (except for the empty tuple), and you use parentheses when the comma could mean something else, like in a function call.</p>
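<p>A tiny illustration of that point (my own example, not from the original answer):</p>
<pre><code>t = 1, 2, 3         # the commas make the tuple; parentheses are optional here
empty = ()          # the empty tuple does need parentheses
single = ('a',)     # a one-element tuple: the trailing comma is what matters
</code></pre>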
| 0 | 2016-09-27T13:42:37Z | [
"python",
"for-loop",
"range",
"iteration",
"tuples"
]
|
Dictionary of dictionaries vs dictionary of class instances | 39,726,124 | <p>I understand what a class is: a bundle of attributes and methods stored together in one object. However, I don't think I have ever really grasped their full power. I taught myself to manipulate large volumes of data by using 'dictionary of dictionary' data structures. I'm now thinking that if I want to fit in with the rest of the world then I need to implement classes in my code, but I just don't get how to make the transition.</p>
<p>I have a script which gets information about sales orders from a SQL query, performs operations on the data, and outputs it to a csv.</p>
<p>1) (the way I currently do it, store all the orders in a dictionary of dictionaries)</p>
<pre><code>cursor.execute(querystring)
# create empty dictionary to hold orders
orders = {}
# define list of columns returned by query
columns = [col[0] for col in cursor.description]
for row in cursor:
# create dictionary of dictionaries where key is order_id
# this allows easy access of attributes given the order_id
orders[row.order_id] = {}
for i, v in enumerate(columns):
# add each field to each order
orders[row.order_id][v] = row[i]
# example operation
for order, fields in orders.iteritems():
fields['long'], fields['lat'] = getLongLat(fields['post_code'])
# example of another operation
cancelled_orders = getCancelledOrders()
for order_id in cancelled_orders:
orders[order_id]['status'] = 'cancelled'
# Other similar operations go here...
# write to file here...
</code></pre>
<p>2) (the way I THINK I would do it if I was using classes)</p>
<pre><code>class salesOrder():
def __init__(self, cursor_row):
for i, v in enumerate(columns):
setattr(self, v, cursor_row[i])
def getLongLat(self, long_lat_dict):
self.long, self.lat = long_lat_dict[self.post_code]['long'], long_lat_dict[self.post_code]['lat']
def cancelOrder(self):
self.status = 'cancelled'
# more methods here
cursor.execute(querystring)
# create empty dictionary to hold orders
orders = {}
# define list of columns returned by query
columns = [col[0] for col in cursor.description]
for row in cursor:
orders[row.order_id] = salesOrder(row)
orders[row.order_id].getLongLat()
# example of another operation
cancelled_orders = getCancelledOrders()
for order_id in cancelled_orders:
orders[order_id].cancelOrder()
# other similar operations go here
# write to file here
</code></pre>
<p>I just get the impression that I'm not quite understanding the best way to use classes. Have I got the completely wrong idea about how to use classes? Is there some sense to what I'm doing but it needs refactoring? Or am I trying to use classes for the wrong purpose?</p>
| 3 | 2016-09-27T13:35:32Z | 39,728,328 | <p>I am trying to guess what you are trying to do since I have no idea what your "row" looks like. I assume you have the variable <code>columns</code> which is a list of column names. If that is the case, please consider this code snippet:</p>
<pre><code>class SalesOrder(object):
def __init__(self, columns, row):
""" Transfer all the columns from row to this object """
for name in columns:
value = getattr(row, name)
setattr(self, name, value)
self.long, self.lat = getLongLat(self.post_code)
def cancel(self):
self.status = 'cancelled'
def as_row(self):
return [getattr(self, name) for name in columns]
def __repr__(self):
return repr(self.as_row())
# Create the dictionary of class
orders = {row.order_id: SalesOrder(columns, row) for row in cursor}
# Cancel
cancelled_orders = getCancelledOrders()
for order_id in cancelled_orders:
orders[order_id].cancel()
# Print all sales orders
for sales_order in orders.itervalues():
print(sales_order)
</code></pre>
<p>At the lowest level, we need to be able to create a new <code>SalesOrder</code> object from the <code>row</code> object by copying all the attributes listed in <code>columns</code> over. When initializing a <code>SalesOrder</code> object, we also calculate the longitude and latitude as well.</p>
<p>With that, the task of creating the dictionary of class objects becomes easier:</p>
<pre><code>orders = {row.order_id: SalesOrder(columns, row) for row in cursor}
</code></pre>
<p>Our <code>orders</code> is a dictionary with <code>order_id</code> as keys and <code>SalesOrder</code> objects as values. Finally, the task of cancelling the orders is the same as in your code.</p>
<p>In addition to what you have, I created a method called <code>as_row()</code> which is handy if later you wish to write a <code>SalesOrder</code> object into a CSV or database. For now, I use it to display the "raw" row. Normally, the <code>print</code> statement/function will invoke the <code>__str__()</code> method to get a string representation of an object; if that is not found, it will attempt to invoke the <code>__repr__()</code> method, which is what we have here.</p>
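<p>As a hedged sketch of the CSV idea (the file name is my own assumption; <code>columns</code> and <code>orders</code> are the names used above):</p>
<pre><code>import csv

with open('orders.csv', 'wb') as f:          # 'wb' for the csv module on Python 2
    writer = csv.writer(f)
    writer.writerow(columns)                 # header row
    for sales_order in orders.itervalues():
        writer.writerow(sales_order.as_row())
</code></pre>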
| 1 | 2016-09-27T15:14:47Z | [
"python",
"class",
"dictionary"
]
|
Dictionary of dictionaries vs dictionary of class instances | 39,726,124 | <p>I understand what a class is: a bundle of attributes and methods stored together in one object. However, I don't think I have ever really grasped their full power. I taught myself to manipulate large volumes of data by using 'dictionary of dictionary' data structures. I'm now thinking that if I want to fit in with the rest of the world then I need to implement classes in my code, but I just don't get how to make the transition.</p>
<p>I have a script which gets information about sales orders from a SQL query, performs operations on the data, and outputs it to a csv.</p>
<p>1) (the way I currently do it, store all the orders in a dictionary of dictionaries)</p>
<pre><code>cursor.execute(querystring)
# create empty dictionary to hold orders
orders = {}
# define list of columns returned by query
columns = [col[0] for col in cursor.description]
for row in cursor:
# create dictionary of dictionaries where key is order_id
# this allows easy access of attributes given the order_id
orders[row.order_id] = {}
for i, v in enumerate(columns):
# add each field to each order
orders[row.order_id][v] = row[i]
# example operation
for order, fields in orders.iteritems():
fields['long'], fields['lat'] = getLongLat(fields['post_code'])
# example of another operation
cancelled_orders = getCancelledOrders()
for order_id in cancelled_orders:
orders[order_id]['status'] = 'cancelled'
# Other similar operations go here...
# write to file here...
</code></pre>
<p>2) (the way I THINK I would do it if I was using classes)</p>
<pre><code>class salesOrder():
def __init__(self, cursor_row):
for i, v in enumerate(columns):
setattr(self, v, cursor_row[i])
def getLongLat(self, long_lat_dict):
self.long, self.lat = long_lat_dict[self.post_code]['long'], long_lat_dict[self.post_code]['lat']
def cancelOrder(self):
self.status = 'cancelled'
# more methods here
cursor.execute(querystring)
# create empty dictionary to hold orders
orders = {}
# define list of columns returned by query
columns = [col[0] for col in cursor.description]
for row in cursor:
orders[row.order_id] = salesOrder(row)
orders[row.order_id].getLongLat()
# example of another operation
cancelled_orders = getCancelledOrders()
for order_id in cancelled_orders:
orders[order_id].cancelOrder()
# other similar operations go here
# write to file here
</code></pre>
<p>I just get the impression that I'm not quite understanding the best way to use classes. Have I got the completely wrong idea about how to use classes? Is there some sense to what I'm doing but it needs refactoring? Or am I trying to use classes for the wrong purpose?</p>
| 3 | 2016-09-27T13:35:32Z | 39,730,180 | <p>Classes are mostly useful for coupling data to behaviour, and for providing structure (naming and documenting the association of certain properties, for example).</p>
<p>You're not doing either of those here - there's no real behaviour in your class (it doesn't <em>do</em> anything to the data), and all the structure is provided externally. The class instances are just used for their attribute dictionaries, so they're just a fancy wrapper around your old dictionary.</p>
<p>If you <em>do</em> add some real behaviour (above <code>getLongLat</code> and <code>cancelOrder</code>), or some real structure (other than a list of arbitrary column names and field values passed in from outside), then it makes sense to use a class.</p>
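<p>As an illustrative sketch (my own example, not from the original answer) of what "real behaviour" could look like, here is a class whose methods actually do something with its own data:</p>
<pre><code>class SalesOrder(object):
    def __init__(self, order_id, post_code, status='open'):
        self.order_id = order_id
        self.post_code = post_code
        self.status = status

    def cancel(self):
        # behaviour: the object changes its own state
        self.status = 'cancelled'

    def is_deliverable(self, allowed_post_codes):
        # behaviour: a decision computed from the object's own data
        return self.status != 'cancelled' and self.post_code in allowed_post_codes
</code></pre>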
| 1 | 2016-09-27T16:46:12Z | [
"python",
"class",
"dictionary"
]
|
How to find Bragg reflexes fast with numpy? | 39,726,139 | <p>For x-ray diffraction one needs to find the solutions to the so called Laue-Equation</p>
<p>G_hkl - k_in + |k_in|*(sin(theta) cos(phi) , sin(theta) sin(phi) , cos(theta))=0</p>
<p>where G_hkl is a given 3 dimensional vector, k_in can be chosen as (0,0,1) and theta and phi are free parameters to fulfill the equation. In a typical experiment G_hkl is then rotated around the x-axis and for every step in the rotation one needs to find the solution to the equation. The equation cannot have more than one solution at a given rotation.</p>
<p>I have written this python script to find these solution but for my application it is not fast enough.</p>
<pre><code>import numpy as np
import time
# Just initialization of variables. This is fast enough
res_list = []
N_rot = 100
Ghkl = np.array([0,0.7,0.7])
Ghkl_norm = np.linalg.norm(Ghkl)
kin = np.array([0,0,1])
kin_norm = np.linalg.norm(kin)
alpha_step = 2*np.pi/(N_rot-1)
rot_mat = np.array([[1,0,0],[0,np.cos(alpha_step),-np.sin(alpha_step)],[0,np.sin(alpha_step),np.cos(alpha_step)]])
# You can deduce theta from the norm of the vector equation
theta = np.arccos(1-(Ghkl_norm**2/(2*kin_norm**2)))
sint = np.sin(theta)
cost = np.cos(theta)
# This leaves only phi as paramter to find
# I find phi by introducing a finite test vector
# and testing whether the norm of the vector equation is close
# to zero for any of those test phis
phi_test = np.linspace(0,2*np.pi,200)
kout = kin_norm * np.array([sint * np.cos(phi_test), sint * np.sin(phi_test), cost + 0*phi_test]).T
##############
start_time = time.time()
for j in range(100): # just to make it longer to measure the time
res_list = []
for i in range(N_rot): # This loop is too slow
# Here the norm of the vector equation is calculated for all phi_test
norm_vec = np.linalg.norm(Ghkl[np.newaxis, :] - kin[np.newaxis, :] + kout, axis=1)
if (norm_vec < 0.01 * kin_norm).any(): # check whether any fulfills the criterion
minarg = np.argmin(norm_vec)
res_list.append([theta, phi_test[minarg]])
Ghkl = np.dot(rot_mat,Ghkl)
print('Time was {0:1.2f} s'.format( (time.time()-start_time)))
# On my machine it takes 0.3s and res_list should be
# [[1.0356115365192968, 1.578689775673263]]
</code></pre>
<p>Do you know a faster way to calculate this, either conceptually by solving the equation totally different or by just making it faster with my existing method?</p>
| 1 | 2016-09-27T13:36:13Z | 39,727,260 | <p>There's a dependency as <code>Ghkl</code> is being updated at each iteration and re-used at the next. The corresponding closed form might be difficult to trace out. So, I would focus on improving the performance on the rest of the code inside that innermost loop.</p>
<p>Now, there we are calculating <code>norm_vec</code>, which I think could be sped up with two methods as listed next.</p>
<p><strong>Approach #1</strong> Using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html#scipy.spatial.distance.cdist" rel="nofollow"><code>Scipy's cdist</code></a> -</p>
<pre><code>from scipy.spatial.distance import cdist
norm_vec = cdist(kout,(kin-Ghkl)[None]) # Rest of code stays the same
</code></pre>
<p><strong>Approach #2</strong> Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> -</p>
<pre><code>sums = (Ghkl-kin)+kout
norm_vec_sq = np.einsum('ij,ij->i',sums,sums)
if (norm_vec_sq < (0.01 * kin_norm)**2 ).any():
minarg = np.argmin(norm_vec_sq) # Rest of code stays the same
</code></pre>
<p>Runtime test -</p>
<p>Using the inputs listed in the question, we have these results :</p>
<pre><code>In [91]: %timeit np.linalg.norm(Ghkl- kin + kout, axis=1)
10000 loops, best of 3: 31.1 µs per loop
In [92]: sums = (Ghkl-kin)+kout
...: norm_vec_sq = np.einsum('ij,ij->i',sums,sums)
...:
In [93]: %timeit (Ghkl-kin)+kout # Approach2 - step1
100000 loops, best of 3: 7.09 µs per loop
In [94]: %timeit np.einsum('ij,ij->i',sums,sums) # Approach2 - step2
100000 loops, best of 3: 3.82 µs per loop
In [95]: %timeit cdist(kout,(kin-Ghkl)[None]) # Approach1
10000 loops, best of 3: 44.1 µs per loop
</code></pre>
<p>So, approach #1 isn't showing any improvement for those input sizes, but approach #2 is getting around <code>3x</code> speedup there on calculating <code>norm_vec</code>.</p>
<p><strong>Short note on why <code>np.einsum</code> is beneficial here :</strong> Well, <code>einsum</code> is great for elementwise multiplication and sum-reduction. We are exploiting that very nature of problem here. <code>np.linalg.norm</code> gives us summations along the second axis on the squared version of the input. So, the <code>einsum</code> counterpart would be to just feed the same input twice as the two inputs to it, thus taking care of the squaring and then lose the second axis with its sum-reduction that is expressed with its string notation. It does both these in one go and that's probably why it's so fast here.</p>
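<p>A quick sanity check of that equivalence (my own snippet, reusing the <code>sums</code> array from above):</p>
<pre><code>norm_direct = np.linalg.norm(sums, axis=1)
norm_einsum = np.sqrt(np.einsum('ij,ij->i', sums, sums))
print(np.allclose(norm_direct, norm_einsum))   # True
</code></pre>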
| 3 | 2016-09-27T14:23:33Z | [
"python",
"performance",
"numpy",
"scientific-computing"
]
|
NetworkX: How do I iteratively apply a network layout like spring_layout? | 39,726,267 | <p>I have a graph <code>G</code> and I want to layout the graph using the function </p>
<p><code>node_positions=nx.spring_layout(G, iterations=5)</code></p>
<p>However, I want to apply this function say 10 times and see how the layout changes with each application. Seems like every time I apply it, it starts from scratch giving me 10 layouts with 5 iterations each.</p>
<p>What I tried so far:</p>
<pre><code>for i in range(10):
node_positions=nx.spring_layout(G, iterations=5)
nx.set_node_attributes(G,'pos',node_positions)
# draw network
plt.figure()
ns = nx.draw_networkx_nodes(G, pos=node_positions, node_color=node_colors, cmap = cm.PuRd, vmin=0, vmax = 0.035, node_size=70, alpha=.9)
es = nx.draw_networkx_edges(G, pos=node_positions, alpha=.2, edge_color='#1a1a1a')
plt.axis('off')
plt.show()
</code></pre>
<p>I'd like to see how the spring layout works by visualizing its results every 5 iterations. Is there a way to achieve this? Thanks!</p>
| 0 | 2016-09-27T13:41:22Z | 39,727,780 | <p><code>spring_layout</code> takes an argument <code>pos</code> which serves as the initial condition.</p>
<p>So <code>pos = nx.spring_layout(G, pos= pos, iterations=5)</code> will work. For the first time through, just set <code>pos=None</code>.</p>
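<p>A minimal sketch of how that could look in the loop from the question (the drawing calls are simplified here):</p>
<pre><code>pos = None
for i in range(10):
    pos = nx.spring_layout(G, pos=pos, iterations=5)   # continue from the previous layout
    plt.figure()
    nx.draw_networkx_nodes(G, pos=pos, node_size=70, alpha=.9)
    nx.draw_networkx_edges(G, pos=pos, alpha=.2)
    plt.axis('off')
    plt.show()
</code></pre>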
| 1 | 2016-09-27T14:49:08Z | [
"python",
"graph",
"networkx"
]
|
How to authenticate user for a specific object rather than whole class in django? | 39,726,291 | <p>I am working on making an app to add clubs in website. This is my model.py file</p>
<pre><code>from django.db import models
from stdimage import StdImageField
# Create your models here.
class Club(models.Model):
ClubName = models.CharField(max_length=200)
ClubLogo = StdImageField(upload_to='club_logo', variations={'thumbnail':(150, 200, True)})
ClubDetails = models.TextField()
ClubStartDate = models.DateField()
def __str__(self):
return self.ClubName
class Notice(models.Model):
NOTICE = 'NOTICE'
UPDATES = 'UPDATES'
EVENTS = 'EVENTS'
NOTICE_IN_CHOICES = (
(NOTICE, 'Notice'),
(UPDATES, 'Updates'),
(EVENTS, 'Events'),)
NoticeType = models.CharField(
max_length=20, choices=NOTICE_IN_CHOICES, default=NOTICE)
NoticeTag = models.CharField(max_length=30)
NoticeStartDate = models.DateField(auto_now_add=True)
NoticeEndDate = models.DateField()
NoticeFile = models.FileField(default='#', upload_to='notice/%Y/%m/%d')
NoticeContent = models.TextField(default='NA')
NoticeClub = models.ForeignKey(Club)
def __str__(self):
return self.NoticeTag
class Members(models.Model):
MemeberName = models.CharField(max_length=200)
MemberImage = StdImageField(upload_to='member_photo', variations={'thumbnail':(150, 120, True)})
MemberEmail = models.EmailField()
MemberClub = models.ForeignKey(Club)
def __str__(self):
return self.MemeberName
</code></pre>
<p>Now when I am making users via Django's inbuilt admin panel I have the option to give users permission to change the members of any club, but I want to give access to change the members of only the particular club that the user belongs to.
As you can see in this <a href="http://i.stack.imgur.com/pCrm3.png" rel="nofollow">picture</a>, all clubs appear in the dropdown when someone who has access to add notices is adding a notice. Instead, I want only the one club that the admin user is associated with to appear in the dropdown.</p>
<p>this is my admin.py file</p>
<pre><code>from django.contrib import admin
# Register your models here.
from club.models import Club, Members, Notice
admin.site.register(Club),
admin.site.register(Members),
admin.site.register(Notice),
</code></pre>
| -3 | 2016-09-27T13:42:08Z | 39,726,397 | <p>This is a problem with which many users have been struggling.</p>
<p>I have been using a couple of external packages and a couple of self-made solutions, but the best one I have found so far is <a href="https://github.com/django-guardian/django-guardian" rel="nofollow">Django Guardian</a>. It is an implementation of per-object permissions, which means you can manage which specific objects each user has access to.</p>
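<p>A minimal, hedged sketch of the per-object idea with django-guardian (the variable names are assumptions for illustration, the app label is assumed to be <code>club</code>, and guardian's object-permission backend must be added to <code>AUTHENTICATION_BACKENDS</code>):</p>
<pre><code>from guardian.shortcuts import assign_perm

# give this user change rights on one specific club only
assign_perm('change_club', member_user, that_club)

member_user.has_perm('club.change_club', that_club)      # True for this club
member_user.has_perm('club.change_club', another_club)   # False
</code></pre>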
| 1 | 2016-09-27T13:46:50Z | [
"python",
"django"
]
|
Dynamic renormalize in matplotlib after set_data | 39,726,334 | <p>I am making an interactive display of 3d data in 2d via the .imshow() method. I let the user change the mode between viewing a single 2d layer and viewing the sum along all 2d layers. This results in large changes of the range of the displayed values. For this reason, keeping the same color mapping all the time results in the image becoming oversaturated and unreadable. I use the .set_data() method of the AxesImage class for changing the displayed data, and I need a way of recalculating the color mapping at the same time. The closest I got to this goal is this function:</p>
<pre><code>def blit_data(self, data):
c_norm = cs.Normalize(vmin=np.nanmin(data), vmax=np.nanmax(data))
cmap = plt.get_cmap('viridis')
scalar_map = cmx.ScalarMappable(norm=c_norm, cmap=cmap)
cmapped = scalar_map.to_rgba(data)
self.display.set_data(cmapped)
(cmx = matplotlib.cm, cs = matplotlib.colors, plt = matplotlib.pyplot)
</code></pre>
<p>However this has an unwanted side effect: mousing over a pixel in the displayed image now displays [r g b] tuple as tooltip, instead of the original float64 value, which hinders exploration of this data. For this reason I am looking for another method to achieve the same effect. A follow up question will be how to communicate this renormalization to a colorbar, so it stays relevant.</p>
| 0 | 2016-09-27T13:44:02Z | 39,731,470 | <pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, (ax1, ax2) = plt.subplots(1, 2)
data = np.random.rand(10, 10)
im1 = ax1.imshow(data, interpolation='none', cmap='viridis')
im2 = ax2.imshow(data, interpolation='none', cmap='viridis')
im2.set_clim(0, .5)  # rescale only the colour mapping; the stored float data (and hover values) are unchanged
</code></pre>
<p><a href="http://i.stack.imgur.com/3DAYq.png" rel="nofollow"><img src="http://i.stack.imgur.com/3DAYq.png" alt="example output"></a></p>
| 0 | 2016-09-27T18:03:56Z | [
"python",
"matplotlib"
]
|
Counting occurrence of a gender in a dataframe by age | 39,726,335 | <p>I have a dataframe like</p>
<pre><code>age gender
69 M
29 F
61 F
66 F
52 M
</code></pre>
<p>What I would like to know is how many males and females there are in each age group. Is there somehow I could use <code>groupby</code> to group the data by age and then use <code>agg</code> to count the instances of male and female?</p>
| 1 | 2016-09-27T13:44:03Z | 39,726,358 | <p>you can groupby age AND gender:</p>
<pre><code>counts = df.groupby(['age', 'gender']).size().reset_index(name='count')
print(counts)
</code></pre>
<p>will give</p>
<pre><code>   age gender  count
0   29      F      1
1   52      M      1
2   61      F      1
3   66      F      1
4   69      M      1
</code></pre>
| 3 | 2016-09-27T13:45:02Z | [
"python",
"pandas"
]
|
How to subtract QPainterPaths, QPolygon? | 39,726,477 | <p>I'm trying to understand how <code>path1.subtracted(path2)</code> works. </p>
<ul>
<li><p>I have path1 and path2: <a href="http://i.stack.imgur.com/HcyA0.png" rel="nofollow"><img src="http://i.stack.imgur.com/HcyA0.png" alt="image1"></a></p></li>
<li><p>And I'm getting path3 using <code>path3=path1.subtracted(path2)</code>.<br>
Why I'm not getting a path I want? Image: <a href="http://i.stack.imgur.com/h5TxB.png" rel="nofollow"><img src="http://i.stack.imgur.com/h5TxB.png" alt="image2"></a></p></li>
</ul>
<p>Here is the code:</p>
<pre><code>from PyQt5.QtCore import QPointF
from PyQt5.QtCore import QRectF, Qt
from PyQt5.QtGui import QPainterPath, QPen
from PyQt5.QtGui import QPolygonF
from PyQt5.QtWidgets import QApplication, QGraphicsScene, \
QGraphicsView, QPushButton, QWidget, \
QVBoxLayout, QGraphicsItem, QGraphicsPathItem, QGraphicsRectItem
class Window(QWidget):
scene = None
def __init__(self):
QWidget.__init__(self)
self.view = View(self)
self.button = QPushButton('Clear View', self)
self.button.clicked.connect(self.handleClearView)
layout = QVBoxLayout(self)
layout.addWidget(self.view)
layout.addWidget(self.button)
def handleClearView(self):
self.view.scene.clear()
class View(QGraphicsView):
def __init__(self, parent):
self.scribing = False
self.erasing = False
QGraphicsView.__init__(self, parent)
self.scene = QGraphicsScene()
self.setScene(self.scene)
def resizeEvent(self, QResizeEvent):
self.setSceneRect(QRectF(self.viewport().rect()))
def mousePressEvent(self, event):
if event.buttons() == Qt.LeftButton:
self.scribing = True
self.path1 = QPainterPath()
self.path2 = QPainterPath()
self.polygon1 = QPolygonF()
self.polygon1.append(QPointF(100,100))
self.polygon1.append(QPointF(100, 300))
self.polygon1.append(QPointF(300, 300))
self.polygon1.append(QPointF(300, 100))
self.polygon2 = QPolygonF()
self.polygon2.append(QPointF(300,100))
self.polygon2.append(QPointF(300, 300))
self.polygon2.append(QPointF(100, 300))
self.path1.addPolygon(self.polygon1)
self.path2.addPolygon(self.polygon2)
path3 = self.path1.subtracted(self.path2)
# self.scene.addPath(self.path1, QPen(Qt.blue))
# self.scene.addPath(self.path2, QPen(Qt.green))
self.scene.addPath(path3, QPen(Qt.red))
if event.buttons() == Qt.RightButton:
self.erasing = True
def mouseMoveEvent(self, event):
if (event.buttons() & Qt.LeftButton) and self.scribing:
if self.free_draw_item:
pass
if event.buttons() & Qt.RightButton and self.erasing:
pass
def mouseReleaseEvent(self, event):
self.scribing = False
self.erasing = False
# if self.eraser_item != None:
# self.scene.removeItem(self.eraser_item)
# if self.free_draw_item != None:
# self.free_draw_item.setSelected(True)
if __name__ == '__main__':
import sys
app = QApplication(sys.argv)
window = Window()
window.resize(640, 480)
window.show()
sys.exit(app.exec_())
</code></pre>
<p>In this sample I'm working with <code>QPolygonF</code>. Also I've tried to create <code>p1=QPainterPath()</code>, <code>p2=QPainterPath()</code> and subtracted to get <code>p3</code>. But, without success, getting the same result.</p>
| 2 | 2016-09-27T13:50:07Z | 39,733,789 | <p><code>QPainterPath.subtracted()</code> doesn't subtract path elements but path areas,
<a href="http://doc.qt.io/qt-5/qpainterpath.html#subtracted" rel="nofollow">see the documentation</a>.</p>
<p>The same effect occurs if <code>QPainterPath::operator-()</code> is used:</p>
<pre><code> # path3 = self.path1.subtracted(self.path2)
 path3 = self.path1 - self.path2
</code></pre>
<p>You can identify the elements of a path by something like this</p>
<pre><code> c = path3.elementCount()
for i in range(c):
e = path3.elementAt(i)
print('Element-nr.: ', i, 'Type: ', e.type, 'x: ', e.x, 'y: ', e.y) # type: 0 = MoveTo, 1 = LineTo
</code></pre>
<p>I think you have to write your own method, which creates path3 from the elements of path1 and path2.</p>
| 1 | 2016-09-27T20:33:02Z | [
"python",
"qt",
"pyqt",
"pyqt5"
]
|
How can I output an Excel file as an Email attachment in SAP CMC? | 39,726,495 | <p>I have been trying to schedule a report in SAP BO CMC. This report was initially written in Python and built into a .exe file. This .exe application runs to save the report into an .xlsx file in a local folder.
I want to utilize the convenient scheduling functions in SAP BO CMC to send the report in Emails. I tried and created a "Local Program" in CMC and linked it to the .exe file, but you can easily imagine the problem I am faced with -- the application puts the file in the folder as usual but CMC won't be able to grab the Excel file generated.
Is there a way to re-write the Python program a bit so that the output is not a file in some folder, <strong>but an object that CMC can get as an attachment to the Emails?</strong>
I have been scheduling Crystal reports in CMC and this happens naturally. The Crystal output can be sent as an attachment to the Email. Wonder if the similar could happen for a .exe , and how?
Kindly share your thoughts. Thank you very much!</p>
<p>P.S. Don't think it possible to re-write the report in Crystal though, as the data needs to be manipulated based on inputs from different data sources. That's where Python comes in to help. And I hope I don't need to write the program as to cover the Emailing stuff and schedule it in windows' scheduled tasks. Last option... This would be too inconvenient to maintain. We don't get access to the server easily. </p>
| 0 | 2016-09-27T13:50:54Z | 39,727,668 | <p>It's kind of hack-ish, but it can be done. Have the program (exe) write out the bytes of the Excel file to standard output. Then configure the program object for email destination, and set the filename to a specific name (ex. "whatever.xlsx").</p>
<p>When emailing a program object, the attached file will contain the standard output/error of the program. Generally this will just be text but it works for binary output as well.</p>
<p>As this is a hack, if the program generates any other text (such as error message) to standard out, it will be included in the .xlsx file, which will make the file invalid. I'd suggest managing program errors such that they get logged to a file and NOT to standard out/error.</p>
<p>I've tested this with a <strong>Java</strong> program object; but an exe should work just as well.</p>
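<p>A hedged sketch of what the Python side could look like (the report path is an assumption, and the <code>msvcrt</code> call applies to Windows, where stdout must be switched to binary mode so the bytes are not mangled):</p>
<pre><code>import os
import sys
import msvcrt

msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)   # stop newline translation on Windows
out = getattr(sys.stdout, 'buffer', sys.stdout)    # Python 3 needs the binary buffer
with open(r'C:\reports\report.xlsx', 'rb') as f:
    out.write(f.read())
</code></pre>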
| 1 | 2016-09-27T14:42:26Z | [
"python",
"excel",
"email",
"sap",
"business-objects"
]
|
How to determine what fields are required by a REST API, from the API? | 39,726,577 | <p>I'm working with a networking appliance that has vague API documentation. I'm able to execute PATCH and GET requests fine, but POST isn't working. I receive HTTP status error 422 as a response, I'm missing a field in the JSON request, but I am providing the required fields as specified in the documentation. I have tried the Python Requests module and the vendor-provided PyCurl module in their sample code, but have encountered the same error.</p>
<p>Does the REST API have a debug method that returns the required fields, and its value types, for a specific POST? I'm speaking more of what the template is configured to see in the request (such as JSON <code>{str(ServerName) : int(ServerID)}</code>, not what the API developer may have created.</p>
| 3 | 2016-09-27T13:54:04Z | 39,726,781 | <p>No this does not exist in general. Some services support an OPTIONS request to the route in question, which should return you documentation about the route. If you are lucky this is machine generated from the same source code that implements the route, so is more accurate than static documentation. However, it may just return a very simple summary, such as which HTTP verbs are supported, which you already know.</p>
<p>Even better, some services may support a machine description of the API using WSDL or WADL, although you probably will only find that if the service also supports XML. This can be better because you will be able to find a library that can parse the description and generate a local object model of the service to use to interact with the API.</p>
<p>However, even if you have OPTIONS or WADL file, the kind of error you are facing could still happen. If the documents are not helping, you probably need to contact the service support team with a demonstration of your problem and request assistance.</p>
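<p>A small, hedged example of probing a route with OPTIONS using the requests library mentioned in the question (the URL is a placeholder):</p>
<pre><code>import requests

resp = requests.options('https://appliance.example.com/api/servers')
print(resp.status_code)
print(resp.headers.get('Allow'))   # e.g. 'GET, POST, PATCH' if the service reports it
print(resp.text)                   # some services return route documentation here
</code></pre>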
| 1 | 2016-09-27T14:02:13Z | [
"python",
"rest",
"pycurl",
"http-status-code-422"
]
|
Plot is behaving weird | 39,726,583 | <p>I am trying to plot some trajectories in 3D. I noticed that the plot function is behaving weird. </p>
<p>I defined a variable named <code>pos</code>, which is a 2 dimensional matrix. It has 3 columns, where each column represents a coordinate axis. Please see the complete code below-</p>
<pre><code>import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
max = 1.0
min = -1.0
cols = 3
goals = 4
timesteps = 20
#pos = np.zeros((timesteps, cols)) # this doesn't works hence commented
fig = plt.figure()
ax = fig.gca(projection='3d')
for i in range(goals):
pos = np.zeros((timesteps, cols)) # this works as expected
for t in range(timesteps):
pos[t] = np.random.uniform(low=min, high=max, size=cols)
ax.plot(pos[:, 0], pos[:, 1], pos[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
</code></pre>
<p>The plot doesn't draw everything when <code>pos</code> is defined globally. I noticed that defining <code>pos</code> inside the <code>for</code> loop solves the problem. It looks weird to me.</p>
<p>Below is the plot, generated from global <code>pos</code> variable (after commenting the <code>pos</code> defined inside <code>for</code> loop and keeping the global <code>pos</code> variable enabled)-
<a href="http://i.stack.imgur.com/dfbzx.png" rel="nofollow"><img src="http://i.stack.imgur.com/dfbzx.png" alt="enter image description here"></a></p>
<p>Below is the plot, generated from inner <code>pos</code> variable (after commenting the global <code>pos</code> variable and keeping the <code>pos</code> defined inside <code>for</code> loop enabled)-
<a href="http://i.stack.imgur.com/iaw2u.png" rel="nofollow"><img src="http://i.stack.imgur.com/iaw2u.png" alt="enter image description here"></a></p>
<p>What is the reason for this kind of behavior?</p>
| 3 | 2016-09-27T13:54:21Z | 39,728,895 | <h2>The fix</h2>
<p>Replace the line</p>
<pre><code>ax.plot(pos[:, 0], pos[:, 1], pos[:, 2])
</code></pre>
<p>with</p>
<pre><code>ax.plot(list(pos[:, 0]), list(pos[:, 1]), list(pos[:, 2]))
</code></pre>
<p>and it will work as expected for global <code>pos</code>.</p>
<h2>Explanation</h2>
<p>The problem is that <code>ax.plot(xlist, ylist, zlist)</code> doesn't immediately plot the data. It merely stores references to <code>xlist</code>, <code>ylist</code>, and <code>zlist</code> and uses the data to construct the plot when <code>plt.show()</code> is called. Next, <code>pos[:, 0]</code>, <code>pos[:, 1]</code>, and <code>pos[:, 2]</code> do not return the corresponding columns by value. Instead they return some proxy objects that reference the original matrix.</p>
<p>As a result, actual plotting is performed using the data that ends up in the matrix after the last iteration and all the plots coincide. By wrapping each of <code>pos[:, 0]</code>, <code>pos[:, 1]</code>, and <code>pos[:, 2]</code> in a <code>list()</code> we force the column data to be copied, decoupling the plots from each other.</p>
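<p>A tiny check (my own snippet) that illustrates the "proxy object" point: slicing a NumPy array returns a view, so later changes to the array show up in it, while <code>list()</code> forces an independent copy of the values.</p>
<pre><code>import numpy as np

pos = np.zeros((3, 3))
col = pos[:, 0]           # a view, not a copy
pos[0, 0] = 42
print(col[0])             # 42.0 - the view reflects the change
print(list(pos[:, 0]))    # list() copies the current values out
</code></pre>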
| 1 | 2016-09-27T15:40:15Z | [
"python",
"matplotlib"
]
|
How can I travel through the words of a file in PYTHON? | 39,726,737 | <p>I have a file .txt and I want to travel through the words of it. I have a problem, I need to remove the punctuation marks before travelling through the words. I have tried this, but it isn't removing the punctuation marks.</p>
<pre><code>file=open(file_name,"r")
for word in file.read().strip(",;.:- '").split():
print word
file.close()
</code></pre>
| 2 | 2016-09-27T14:00:15Z | 39,726,900 | <p>The problem with your current method is that <code>.strip()</code> doesn't really do what you want. It removes leading and trailing characters (and you want to remove ones within the text), and if you want to specify characters in addition to whitespace, they need to be in a list.</p>
<p>Another problem is that there are many more potential punctuation characters (question marks, exclamations, unicode ellipses, em dashes) that wouldn't get filtered out by your list. Instead, you can use <code>string.punctuation</code> to get a wide range of characters (note that <code>string.punctuation</code> doesn't include some non-English characters, so its viability may depend on the source of your input):</p>
<pre><code>import string
punctuation = set(string.punctuation)
text = ''.join(char for char in text if char not in punctuation)
</code></pre>
<p>An even faster method (shown in <a href="http://stackoverflow.com/questions/265960/best-way-to-strip-punctuation-from-a-string-in-python">other</a> <a href="http://stackoverflow.com/questions/36464160/what-does-python-string-maketrans">answers</a> on SO) uses <code>string.translate()</code> to replace the characters:</p>
<pre><code>import string
text = text.translate(string.maketrans('', ''), string.punctuation)
</code></pre>
| 2 | 2016-09-27T14:07:42Z | [
"python",
"python-2.7",
"text",
"punctuation"
]
|
How can I travel through the words of a file in PYTHON? | 39,726,737 | <p>I have a file .txt and I want to travel through the words of it. I have a problem, I need to remove the punctuation marks before travelling through the words. I have tried this, but it isn't removing the punctuation marks.</p>
<pre><code>file=open(file_name,"r")
for word in file.read().strip(",;.:- '").split():
print word
file.close()
</code></pre>
| 2 | 2016-09-27T14:00:15Z | 39,726,956 | <p>I would remove the punctuation marks with the <code>replace</code> function after storing the words in a list like so:</p>
<pre><code>words = []
with open(file_name, "r") as f_r:
    for row in f_r:
        words.extend(row.split())

punctuation = [',', ';', '.', ':', '-']
for p in punctuation:
    words = [x.replace(p, '') for x in words]
</code></pre>
| 0 | 2016-09-27T14:10:27Z | [
"python",
"python-2.7",
"text",
"punctuation"
]
|
How can I travel through the words of a file in PYTHON? | 39,726,737 | <p>I have a file .txt and I want to travel through the words of it. I have a problem, I need to remove the punctuation marks before travelling through the words. I have tried this, but it isn't removing the punctuation marks.</p>
<pre><code>file=open(file_name,"r")
for word in file.read().strip(",;.:- '").split():
print word
file.close()
</code></pre>
| 2 | 2016-09-27T14:00:15Z | 39,727,047 | <p>You can try using the <code>re</code> module:</p>
<pre><code>import re
with open(file_name) as f:
for word in re.split('\W+', f.read()):
print word
</code></pre>
<p>See the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">re documentation</a> for more details.</p>
<p>Edit: In case of non-ASCII characters, the previous code ignores them. In that case the following code can help:</p>
<pre><code>import re
with open(file_name) as f:
    for word in re.compile('\W+', re.UNICODE).split(f.read().decode('utf8')):
print word
</code></pre>
| 1 | 2016-09-27T14:14:27Z | [
"python",
"python-2.7",
"text",
"punctuation"
]
|
How can I travel through the words of a file in PYTHON? | 39,726,737 | <p>I have a file .txt and I want to travel through the words of it. I have a problem, I need to remove the punctuation marks before travelling through the words. I have tried this, but it isn't removing the punctuation marks.</p>
<pre><code>file=open(file_name,"r")
for word in file.read().strip(",;.:- '").split():
print word
file.close()
</code></pre>
| 2 | 2016-09-27T14:00:15Z | 39,727,067 | <p><code>strip()</code> only removes characters found at the beginning or end of a string.
So <code>split()</code> first to cut into words, then <code>strip()</code> to remove punctuation.</p>
<pre><code>import string
with open(file_name, "rt") as finput:
for line in finput:
for word in line.split():
print word.strip(string.punctuation)
</code></pre>
<p>Or use a natural language aware library like <code>nltk</code>: <a href="http://www.nltk.org/" rel="nofollow">http://www.nltk.org/</a></p>
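<p>A hedged sketch of the nltk route (my own example; it requires the punkt tokenizer data, e.g. via <code>nltk.download('punkt')</code>):</p>
<pre><code>import nltk

with open(file_name) as f:
    for word in nltk.word_tokenize(f.read()):
        print word
</code></pre>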
| 2 | 2016-09-27T14:15:06Z | [
"python",
"python-2.7",
"text",
"punctuation"
]
|
How can I travel through the words of a file in PYTHON? | 39,726,737 | <p>I have a file .txt and I want to travel through the words of it. I have a problem, I need to remove the punctuation marks before travelling through the words. I have tried this, but it isn't removing the punctuation marks.</p>
<pre><code>file=open(file_name,"r")
for word in file.read().strip(",;.:- '").split():
print word
file.close()
</code></pre>
| 2 | 2016-09-27T14:00:15Z | 39,754,324 | <p>The following code preserves apostrophes and blanks, and could easily be modified to preserve double quotations marks, if desired. It works by using a translation table based on a subclass of the string object. I think the code is fairly easy to understand. It might be made more efficient if necessary.</p>
<pre><code>class SpecialTable(str):
def __getitem__(self, chr):
if chr==32 or chr==39 or 48<=chr<=57 \
or 65<=chr<=90 or 97<=chr<=122:
return chr
else:
return None
specialTable = SpecialTable()
with open('temp2.txt') as inputText:
for line in inputText:
print (line)
convertedLine=line.translate(specialTable)
print (convertedLine)
print (convertedLine.split(' '))
</code></pre>
<p>Here's typical output.</p>
<pre><code>This! is _a_ single (i.e. 1) English sentence that won't cause any trouble, right?
This is a single ie 1 English sentence that won't cause any trouble right
['This', 'is', 'a', 'single', 'ie', '1', 'English', 'sentence', 'that', "won't", 'cause', 'any', 'trouble', 'right']
'nother one.
'nother one
["'nother", 'one']
</code></pre>
| 1 | 2016-09-28T17:43:41Z | [
"python",
"python-2.7",
"text",
"punctuation"
]
|
Generating regex string to be used in re.match() | 39,726,805 | <p>I am trying to generate a string to be used as a regex string.<br>
In the following code: <br>
<code>_pattern</code> is a pattern like <code>abba</code> and I am trying to check <code>_string</code> follows the <code>_pattern</code> (eg. <code>catdogdogcat</code>)<br></p>
<p><code>rxp</code> in the following code is the regular expression that I am trying to create to match to <code>_string</code> (eg. for above example it will be <code>(.+)(.+)\\2\\1</code> ). Which is being successfully generated. But the <code>re.match()</code> is returning <code>None</code>. <br><br>
I want to understand why it is not working and how to correct it ? </p>
<pre><code>import re
_pattern = "abba" #raw_input().strip()
_string = "catdogdogcat" #raw_input().strip()
hm = {}
rxp = ""
c = 1
for x in _pattern:
if hm.has_key(x):
rxp += hm[x]
continue
else:
rxp += "(.+)"
hm[x]="\\\\"+str(c)
c+=1
print rxp
#print re.match(rxp,_string) -> (Tried) Not working
#print re.match(r'rxp', _string) -> (Tried) Not working
print re.match(r'%s' %rxp, _string) # (Tried) Not working
</code></pre>
<p><strong>Output</strong> <br>
<code>(.+)(.+)\\2\\1
None</code></p>
<p><strong>Expected Output</strong><br>
<code>(.+)(.+)\\2\\1
<_sre.SRE_Match object at 0x000000000278FE88>
</code></p>
| 1 | 2016-09-27T14:03:23Z | 39,726,843 | <p>You should use string formatting, and not hard-code <code>rxp</code> into the string:</p>
<pre><code>print re.match(r'%s'%rxp, _string)
</code></pre>
| 0 | 2016-09-27T14:05:28Z | [
"python",
"regex"
]
|
Generating regex string to be used in re.match() | 39,726,805 | <p>I am trying to generate a string to be used as a regex string.<br>
In the following code: <br>
<code>_pattern</code> is a pattern like <code>abba</code> and I am trying to check <code>_string</code> follows the <code>_pattern</code> (eg. <code>catdogdogcat</code>)<br></p>
<p><code>rxp</code> in the following code is the regular expression that I am trying to create to match to <code>_string</code> (eg. for above example it will be <code>(.+)(.+)\\2\\1</code> ). Which is being successfully generated. But the <code>re.match()</code> is returning <code>None</code>. <br><br>
I want to understand why it is not working and how to correct it ? </p>
<pre><code>import re
_pattern = "abba" #raw_input().strip()
_string = "catdogdogcat" #raw_input().strip()
hm = {}
rxp = ""
c = 1
for x in _pattern:
if hm.has_key(x):
rxp += hm[x]
continue
else:
rxp += "(.+)"
hm[x]="\\\\"+str(c)
c+=1
print rxp
#print re.match(rxp,_string) -> (Tried) Not working
#print re.match(r'rxp', _string) -> (Tried) Not working
print re.match(r'%s' %rxp, _string) # (Tried) Not working
</code></pre>
<p><strong>Output</strong> <br>
<code>(.+)(.+)\\2\\1
None</code></p>
<p><strong>Expected Output</strong><br>
<code>(.+)(.+)\\2\\1
<_sre.SRE_Match object at 0x000000000278FE88>
</code></p>
| 1 | 2016-09-27T14:03:23Z | 39,727,628 | <p>The thing is that your regex string variable has double <code>\\</code> instead of a single one.</p>
<p>You can use </p>
<pre><code>rxp.replace("\\\\", "\\")
</code></pre>
<p>in <code>.match</code> like this:</p>
<pre><code>>>> print re.match(rxp.replace("\\\\", "\\"), _string)
<_sre.SRE_Match object at 0x10bf87c68>
>>> print re.match(rxp.replace("\\\\", "\\"), _string).groups()
('cat', 'dog')
</code></pre>
<hr>
<p><strong>EDIT:</strong></p>
<p>You can also avoid getting double <code>\\</code> like this:</p>
<pre><code>import re

_pattern = "abba" #raw_input().strip()
_string = "catdogdogcat" #raw_input().strip()
hm = {}
rxp = ""
c = 1
for x in _pattern:
if x in hm:
rxp += hm[x]
continue
else:
rxp += "(.+)"
hm[x]="\\" + str(c)
c+=1
print rxp
print re.match(rxp,_string)
</code></pre>
| 1 | 2016-09-27T14:40:42Z | [
"python",
"regex"
]
|
401 error for tastypie for api_client | 39,726,850 | <p>Hi, I am trying to write a test case for my app. URL = '/api/project/'. I have enabled the get, post and put methods along with authentication, but I still get a 401 error for the POST request.</p>
<pre><code>class EntryResourceTest(ResourceTestCaseMixin, TestCase):
class Meta:
queryset = Project.objects.all()
resource_name = 'project'
allowed_methods = ['get', 'post', 'put']
authentication = Authentication()
authorization = Authorization()
def setUp(self):
super(EntryResourceTest, self).setUp()
# Create a user.
self.username = 'daniel'
self.password = 'pass'
self.user = User.objects.create_user(self.username, 'daniel@example.com', self.password)
def login(self):
return self.api_client.client.login(
username=self.username, password=self.password)
def get_credentials(self):
return self.create_basic(username=self.username, password=self.password)
def test_post_list(self):
self.login()
req_get = self.api_client.get('/api/project/', format='json', authentication=self.get_credentials()) # -> get works and i get 200 status code
req_post = self.api_client.post('/api/project/', format='json', data=self.post_data, authentication=self.get_credentials())
</code></pre>
<p>I run the test case using the following command. The GET request works fine but the POST does not. The GET request works even if I don't pass the authentication parameter, as it uses the default login that I have defined in self.login().</p>
<pre><code>django-admin test myapp.api.project.tests
</code></pre>
| 0 | 2016-09-27T14:05:46Z | 39,742,007 | <p>Use <code>BasicAuthentication</code> instead of <code>Authentication</code>.</p>
<p><code>self.create_basic(...)</code> creates the headers for <code>BasicAuthentication</code>.</p>
<pre><code>def create_basic(self, username, password):
"""
Creates & returns the HTTP ``Authorization`` header for use with BASIC
Auth.
"""
</code></pre>
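<p>A minimal sketch of that change on the resource itself (the resource class name <code>ProjectResource</code> is my own assumption; the Meta options mirror the ones shown in the question):</p>
<pre><code>from tastypie.authentication import BasicAuthentication
from tastypie.authorization import Authorization
from tastypie.resources import ModelResource

class ProjectResource(ModelResource):
    class Meta:
        queryset = Project.objects.all()
        resource_name = 'project'
        allowed_methods = ['get', 'post', 'put']
        authentication = BasicAuthentication()
        authorization = Authorization()
</code></pre>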
| 0 | 2016-09-28T08:30:13Z | [
"python",
"django",
"tastypie",
"django-unittest"
]
|
Split CSV file using Python shows not all data in Excel | 39,726,884 | <p>I am trying to dump the values in my Django database to a csv, then write the contents of the csv to an Excel spreadsheet which looks like a table (one value per cell), so that my users can export a spreadsheet of all records in the database from Django admin. Right now when I export the file, I get this (only one random value out of many and not formatted correctly):</p>
<p><a href="http://i.stack.imgur.com/yGdrK.png" rel="nofollow"><img src="http://i.stack.imgur.com/yGdrK.png" alt="enter image description here"></a></p>
<p>What am I doing wrong? Not sure if I am using list comprehensions wrong, reading the file incorrectly, or if there is something wrong with my <code>for</code> loop. Please help! </p>
<pre><code>def dump_table_to_csv(db_table, io):
with connection.cursor() as cursor:
cursor.execute("SELECT * FROM %s" % db_table, [])
row = cursor.fetchall()
writer = csv.writer(io)
writer.writerow([i[0] for i in cursor.description])
writer.writerow(row)
with open('/Users/nicoletorek/emarshal/myfile.csv', 'w') as f:
dump_table_to_csv(Attorney._meta.db_table, f)
with open('/Users/nicoletorek/emarshal/myfile.csv', 'r') as f:
db_list = f.read()
split_db_list = db_list.split(',')
output = BytesIO()
workbook = xlsxwriter.Workbook(output)
worksheet_s = workbook.add_worksheet("Summary")
header = workbook.add_format({
'bg_color': '#F7F7F7',
'color': 'black',
'align': 'center',
'valign': 'top',
'border': 1
})
row = 0
col = 0
for x in split_db_list:
worksheet_s.write(row + 1, col + 1, x, header)
</code></pre>
| 0 | 2016-09-27T14:07:10Z | 39,728,754 | <p>Your CSV file could be read in and written as follows:</p>
<pre><code>import csv
workbook = xlsxwriter.Workbook('output.xlsx')
worksheet_s = workbook.add_worksheet("Summary")
with open(r'\Users\nicoletorek\emarshal\myfile.csv', 'rb') as f_input:
csv_input = csv.reader(f_input)
for row_index, row_data in enumerate(csv_input):
worksheet_s.write_row(row_index, 0, row_data)
workbook.close()
</code></pre>
<p>This uses the <code>csv</code> library to ensure the rows are correctly read in, and the <code>write_row</code> function to allow the whole row to be written using a single call. The <code>enumerate()</code> function is used to provide a running <code>row_index</code> value.</p>
| 0 | 2016-09-27T15:33:15Z | [
"python",
"django",
"csv",
"split",
"list-comprehension"
]
|