title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How to solve "table "auth_permission" already exists" error when the database is shared among two Django projects | 39,890,835 | <p>in <a href="http://stackoverflow.com/questions/39774580/how-to-make-two-django-projects-share-the-same-database/39781404?noredirect=1#comment66877315_39781404">this</a> question I learned how to make two Django projects use the same database.
I have:</p>
<pre><code>projects
project_1
settings.py
...
project_2
settings.py
...
</code></pre>
<p>and</p>
<pre><code># project_1/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
# project_2/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(
os.path.dirname(os.path.dirname(PROJECT_ROOT)),
'project_1',
'development.db'
),
},
}
</code></pre>
<p>In <code>project_2/</code>, when I run:</p>
<pre><code>python manage.py syncdb
</code></pre>
<p>I get:</p>
<pre><code>django.db.utils.OperationalError: table "auth_permission" already exists
</code></pre>
<p>I guess this happens because Python fails when trying to add <code>project_2</code> tables that already exist in the shared db.</p>
<p>How can I add to the shared db only those <code>project_2</code> tables not already existing in the common database?</p>
<p><strong>EDIT:</strong>
After telling project_2/ to use project_1/ db, I run the <code>syncdb</code> and get the existing table error. I do not have a migration file.
Shall I run a different command before syncing?</p>
| 0 | 2016-10-06T08:11:24Z | 39,890,888 | <p>You can open the <code>migrations</code> file and comment out the SQL that tries to create the table.
Then run migrations again.</p>
<p>(another possibility would be to delete the table in the database, but you'd lose the data in the table.)</p>
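<p>For instance, a minimal sketch of what that edit might look like (the file path and model name here are hypothetical - check your own app's <code>migrations</code> directory):</p>
<pre><code># third_party_app/migrations/0001_initial.py (hypothetical file)
operations = [
    # migrations.CreateModel(   # commented out: the table already exists
    #     name='SomeModel',
    #     fields=[...],
    # ),
]
</code></pre>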
| 2 | 2016-10-06T08:14:47Z | [
"python",
"django"
]
|
How to solve "table "auth_permission" already exists" error when the database is shared among two Django projects | 39,890,835 | <p>in <a href="http://stackoverflow.com/questions/39774580/how-to-make-two-django-projects-share-the-same-database/39781404?noredirect=1#comment66877315_39781404">this</a> question I learned how to make two Django projects use the same database.
I have:</p>
<pre><code>projects
project_1
settings.py
...
project_2
settings.py
...
</code></pre>
<p>and</p>
<pre><code># project_1/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
# project_2/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(
os.path.dirname(os.path.dirname(PROJECT_ROOT)),
'project_1',
'development.db'
),
},
}
</code></pre>
<p>In <code>project_2/</code>, when I run:</p>
<pre><code>python manage.py syncdb
</code></pre>
<p>I get:</p>
<pre><code>django.db.utils.OperationalError: table "auth_permission" already exists
</code></pre>
<p>I guess this happens because Python fails when trying to add <code>project_2</code> tables that already exist in the shared db.</p>
<p>How can I add to the shared db only those <code>project_2</code> tables not already existing in the common database?</p>
<p><strong>EDIT:</strong>
After telling project_2/ to use project_1/ db, I run the <code>syncdb</code> and get the existing table error. I do not have a migration file.
Shall I run a different command before syncing?</p>
| 0 | 2016-10-06T08:11:24Z | 39,891,803 | <p>If you use a Django version newer than 1.7, you should use migrations (<a href="https://docs.djangoproject.com/en/1.8/topics/migrations/" rel="nofollow">Django Migrations</a> - make sure the docs version matches yours).
Then migrations behave more consistently (no conflicts). Even if conflicts happen (only under very specific conditions), you can always apply the migrations with '--fake' on the second host.</p>
| 1 | 2016-10-06T09:00:02Z | [
"python",
"django"
]
|
How to solve "table "auth_permission" already exists" error when the database is shared among two Django projects | 39,890,835 | <p>in <a href="http://stackoverflow.com/questions/39774580/how-to-make-two-django-projects-share-the-same-database/39781404?noredirect=1#comment66877315_39781404">this</a> question I learned how to make two Django projects use the same database.
I have:</p>
<pre><code>projects
project_1
settings.py
...
project_2
settings.py
...
</code></pre>
<p>and</p>
<pre><code># project_1/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
# project_2/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(
os.path.dirname(os.path.dirname(PROJECT_ROOT)),
'project_1',
'development.db'
),
},
}
</code></pre>
<p>In <code>project_2/</code>, when I run:</p>
<pre><code>python manage.py syncdb
</code></pre>
<p>I get:</p>
<pre><code>django.db.utils.OperationalError: table "auth_permission" already exists
</code></pre>
<p>I guess this happens because Python fails when trying to add <code>project_2</code> tables that already exist in the shared db.</p>
<p>How can I add to the shared db only those <code>project_2</code> tables not already existing in the common database?</p>
<p><strong>EDIT:</strong>
After telling project_2/ to use project_1/ db, I run the <code>syncdb</code> and get the existing table error. I do not have a migration file.
Shall I run a different command before syncing?</p>
| 0 | 2016-10-06T08:11:24Z | 39,893,524 | <blockquote>
<p>Django 1.8.15 for project_2/. I've just checked project_1/ django version and it is 1.6. I was convinced that both projects where using the same django version.. Is this the main problem?</p>
</blockquote>
<p>Yes, because Django 1.6 and Django 1.8 use different <code>syncdb</code> commands. <code>syncdb</code> in 1.8 is <code>migrate</code>, so when you run <code>syncdb</code> in 1.8 you are applying migrations, not just creating tables. Use the same Django version and the problem should be solved.</p>
| 1 | 2016-10-06T10:25:06Z | [
"python",
"django"
]
|
How to add and reverse a new column according to an id column | 39,890,857 | <p>This is my first question at Stack Overflow and I'm pretty new to programming with Python.</p>
<p><a href="http://i.stack.imgur.com/1bKY7.jpg" rel="nofollow">image of what I'm trying to do</a></p>
<p>As you can see in the picture, columns "id" and "period" are given. It's a CSV dataset and I want to add a new column named "newColumn", where the numbers from "period" are in reversed order within each "id" value. I hope you understand my problem.</p>
<p>Thank you in advance.</p>
| -2 | 2016-10-06T08:12:39Z | 39,890,911 | <p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a>:</p>
<pre><code>print (df)
id period
0 1 1
1 1 2
2 1 3
3 2 1
4 2 2
5 2 3
6 3 1
7 3 2
8 3 3
print (df.sort_values(by=['id','period'], ascending=[True, False]))
id period
2 1 3
1 1 2
0 1 1
5 2 3
4 2 2
3 2 1
8 3 3
7 3 2
6 3 1
</code></pre>
<p>Then, if you need to create a new column, convert the sorted column to a <code>numpy array</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> so it does not align by index:</p>
<pre><code>df['new'] = df.sort_values(by=['id','period'], ascending=[True, False])['period'].values
print (df)
id period new
0 1 1 3
1 1 2 2
2 1 3 1
3 2 1 3
4 2 2 2
5 2 3 1
6 3 1 3
7 3 2 2
8 3 3 1
</code></pre>
| 1 | 2016-10-06T08:16:04Z | [
"python",
"pandas",
"numpy",
"order",
"reverse"
]
|
How to add and reverse a new column according to an id column | 39,890,857 | <p>This is my first question at Stack Overflow and I'm pretty new to programming with Python.</p>
<p><a href="http://i.stack.imgur.com/1bKY7.jpg" rel="nofollow">image of what I'm trying to do</a></p>
<p>As you can see in the picture, columns "id" and "period" are given. It's a CSV dataset and I want to add a new column named "newColumn", where the numbers from "period" are in reversed order within each "id" value. I hope you understand my problem.</p>
<p>Thank you in advance.</p>
| -2 | 2016-10-06T08:12:39Z | 39,891,189 | <p>something like this:</p>
<pre><code>def f(x, y):
if x[0] < y[0] or x[0] == y[0] and x[1] > y[1]:
return -1
return 1
d = [1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4]
o = [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3]
new = zip(o, d)
new.sort(f)
print new
# [(1, 4), (1, 3), (1, 2), (1, 1), (2, 7), (2, 6), (2, 5), (2, 4), (2, 3), (2, 2), (2, 1), (3, 4), (3, 3), (3, 2), (3, 1)]
print zip(*new)[1]
# (4, 3, 2, 1, 7, 6, 5, 4, 3, 2, 1, 4, 3, 2, 1)
</code></pre>
| 1 | 2016-10-06T08:28:54Z | [
"python",
"pandas",
"numpy",
"order",
"reverse"
]
|
Skip a list of migrations in Django | 39,890,923 | <p>I have migrations <code>0001_something</code>, <code>0002_something</code>, <code>0003_something</code> in a third-party app and all of them are applied to the database by my own app. I simply want to skip these three migrations. One option is to run the following command</p>
<p><code>python manage.py migrate <third_party_app_name> 0003 --fake</code></p>
<p>But I don't want to run this command manually. I was wondering if there is any method by which I can specify something in settings to skip these migrations, so that I could simply run <code>python manage.py migrate</code> and it would automatically recognize that 3 migrations need to be faked. Or if there is any way to always fake <code>0001</code>, <code>0002</code> and <code>0003</code>.</p>
<p>If this were in my own app, I could simply remove the migration files, but it is a third-party app installed via <code>pip</code> and I don't want to change that.</p>
| 2 | 2016-10-06T08:16:40Z | 39,891,657 | <p>If you really want to do that, try:</p>
<ul>
<li><p>Add entries in <code>django_migrations</code> table like</p>
<pre><code>app                name             applied
<thirdpartyname>   0003_something   2014-04-16 14:12:30.839899+08
# name is without the .py extension; applied can be any date before now
</code></pre></li>
</ul>
| 1 | 2016-10-06T08:53:24Z | [
"python",
"django",
"django-models",
"django-migrations"
]
|
Skip a list of migrations in Django | 39,890,923 | <p>I have migrations <code>0001_something</code>, <code>0002_something</code>, <code>0003_something</code> in a third-party app and all of them are applied to the database by my own app. I simply want to skip these three migrations. One option is to run the following command</p>
<p><code>python manage.py migrate <third_party_app_name> 0003 --fake</code></p>
<p>But I don't want to run this command manually. I was wondering if there is any method by which I can specify something in settings to skip these migrations, so that I could simply run <code>python manage.py migrate</code> and it would automatically recognize that 3 migrations need to be faked. Or if there is any way to always fake <code>0001</code>, <code>0002</code> and <code>0003</code>.</p>
<p>If this were in my own app, I could simply remove the migration files, but it is a third-party app installed via <code>pip</code> and I don't want to change that.</p>
| 2 | 2016-10-06T08:16:40Z | 39,891,704 | <p>Django knows about applied migrations only through the migration history table. So if there is no record of an applied migration, it will think that the migration has not been applied. Django does not check the real DB state against the migration files.</p>
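<p>If you prefer to script those history-table inserts rather than write SQL by hand, here is a hedged sketch using Django's internal <code>MigrationRecorder</code> (an internal API, so it may change between versions; the app label is a placeholder):</p>
<pre><code>from django.db import connection
from django.db.migrations.recorder import MigrationRecorder

recorder = MigrationRecorder(connection)
for name in ('0001_something', '0002_something', '0003_something'):
    # record the migration as applied without actually running it
    recorder.record_applied('third_party_app_name', name)
</code></pre>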
| 1 | 2016-10-06T08:55:33Z | [
"python",
"django",
"django-models",
"django-migrations"
]
|
Skip a list of migrations in Django | 39,890,923 | <p>I have migrations <code>0001_something</code>, <code>0002_something</code>, <code>0003_something</code> in a third-party app and all of them are applied to the database by my own app. I simply want to skip these three migrations. One option is to run the following command</p>
<p><code>python manage.py migrate <third_party_app_name> 0003 --fake</code></p>
<p>But I don't want to run this command manually. I was wondering if there is any method by which I can specify something in settings to skip these migrations, so that I could simply run <code>python manage.py migrate</code> and it would automatically recognize that 3 migrations need to be faked. Or if there is any way to always fake <code>0001</code>, <code>0002</code> and <code>0003</code>.</p>
<p>If this were in my own app, I could simply remove the migration files, but it is a third-party app installed via <code>pip</code> and I don't want to change that.</p>
| 2 | 2016-10-06T08:16:40Z | 39,893,239 | <p>The <a href="https://docs.djangoproject.com/en/1.10/ref/settings/#migration-modules" rel="nofollow"><code>MIGRATION_MODULES</code></a> setting lets you specify an alternative module for an app's migrations. You could set this for your app, then leave out the migrations you wish to skip, or replace them with empty migrations. </p>
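<p>A minimal sketch of that setting (the app label and module path are placeholders for your own project):</p>
<pre><code># settings.py
MIGRATION_MODULES = {
    'third_party_app_name': 'myproject.third_party_migrations',
}
</code></pre>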
| 1 | 2016-10-06T10:11:12Z | [
"python",
"django",
"django-models",
"django-migrations"
]
|
Replace special characters with words in python | 39,890,943 | <p>For the following string:</p>
<p><code>s = The \r\n sun shines, that's fine [latex]not\r\nt for \r\n everyone[/latex] and if it rains, \r\nit Will Be better.</code>.</p>
<p>If I want to replace <code>\r\n</code> by <code>' '</code> between <code>[latex]</code> and <code>[/latex]</code>, I can use:</p>
<pre><code>re.sub("\[latex\][^]]*\[/latex\]", lambda x:x.group(0).replace('\r\n',' '), s)
</code></pre>
<p>which works fine.</p>
<p>However, if the input is:</p>
<pre><code>s = some\r\nthing\r\n[latex]\\[\x08egin{array}{*{20}{l}}\r\n{{\rm{dA}} = {\rm{wdy}}:}\\\r\n{{\rm{dF}} = {\rm{P}}\\;{\rm{dA}} = \rho {\rm{g}}\\left( {{\rm{H}}-{\rm{y}}} \right)\\;\\omega \\;{\rm{dy}}}\r\n\\end{array}\\][/latex]\r\n
</code></pre>
<p>and I use the same expression, nothing gets replaced.
Any idea what I'm doing wrong ?</p>
| 2 | 2016-10-06T08:17:26Z | 39,891,068 | <p>The problem is due to the presence of <code>]</code> before <code>[/latex]</code> in the second input. It is also better to use raw strings for your input and regex.</p>
<p>You can use this regex for search:</p>
<pre><code>\[latex\].*?\[/latex\]
</code></pre>
<p><a href="https://regex101.com/r/DXv0ep/2" rel="nofollow">RegEx Demo</a></p>
<p><strong>Code:</strong></p>
<pre><code>>>> s = r"some\r\nthing\r\n[latex]\\[\x08egin{array}{*{20}{l}}\r\n{{\rm{dA}} = {\rm{wdy}}:}\\\r\n{{\rm{dF}} = {\rm{P}}\\;{\rm{dA}} = \rho {\rm{g}}\\left( {{\rm{H}}-{\rm{y}}} \right)\\;\\omega \\;{\rm{dy}}}\r\n\\end{array}\\][/latex]\r\n"
>>> print re.sub(r"\[latex\].*?\[/latex\]", lambda x:x.group(0).replace(r'\r\n', ' '), s)
some\r\nthing\r\n[latex]\\[\x08egin{array}{*{20}{l}} {{\rm{dA}} = {\rm{wdy}}:}\\ {{\rm{dF}} = {\rm{P}}\\;{\rm{dA}} = \rho {\rm{g}}\\left( {{\rm{H}}-{\rm{y}}} \right)\\;\\omega \\;{\rm{dy}}} \\end{array}\\][/latex]\r\n
</code></pre>
<p><a href="http://ideone.com/yYmiKn" rel="nofollow">Code Demo</a></p>
| 1 | 2016-10-06T08:23:17Z | [
"python",
"regex"
]
|
NameError: name 'self' is not defined" when using inside with statement | 39,890,999 | <p>Can someone help me with using 'self' inside a 'with' statement?
The code below throws "NameError: name 'self' is not defined".</p>
<pre><code>class Versions:
def __init__(self):
self.module_a = '1.0.0'
self.module_b = '1.0.0'
if os.path.exists('config.json'):
with open('config.json') as f:
configs = json.load(f)
for config in configs:
setattr(self, config, configs[config])
</code></pre>
<p>Traceback</p>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-5-ed6a4ca551d4> in <module>()
3 configs = json.load(f)
4 for config in configs:
----> 5 setattr(self, config, configs[config])
6
NameError: name 'self' is not defined
</code></pre>
<p>Thanks.</p>
| 1 | 2016-10-06T08:20:07Z | 39,891,331 | <p>Please check your indentation - if you mix spaces and tabs, this can happen; use <code>python -tt</code> to verify. Your code snippet works fine for me.</p>
| 2 | 2016-10-06T08:36:51Z | [
"python",
"class",
"nameerror",
"contextmanager"
]
|
Python directory conflicts with python executable location | 39,891,023 | <p>I have a python executable I wish to run from PowerShell using the command <code>python [executable].py</code>.</p>
<p>First I changed the directory in PowerShell to the location of the executable using <code>cd path\to\my\directory</code> which worked fine. However, whenever I try to use Python to execute my code, PowerShell immediately searches for the <code>[executable].py</code> in Python's installation folder - fails to find it - and gives an error that it cannot find the appropriate file.</p>
<p>How do I make sure that Powershell looks for the executable in the directory I indicated as opposed to the default Python installation folder?</p>
<p><a href="http://i.stack.imgur.com/jcOWD.png" rel="nofollow"><img src="http://i.stack.imgur.com/jcOWD.png" alt="Screenshot Attached"></a></p>
| -1 | 2016-10-06T08:21:20Z | 39,893,553 | <p>If you want to run <code>python.exe</code> from a location other than the installation directory you'd call it with its full path:</p>
<pre><code>& 'C:\path\to\python.exe' 'your.py'
</code></pre>
<p>If you want to run it from the current directory, prepend the filename with the relative path <code>.\</code>:</p>
<pre><code>& .\python.exe 'your.py'
</code></pre>
<p>If you call an executable without a path like this:</p>
<pre><code>& python.exe 'your.py'
</code></pre>
<p>PowerShell will look for a matching file in the directories listed in the <code>$env:PATH</code> environment variable, and execute the first match (or report an error if no matching file can be found).</p>
<p>With that said, the error you got in your screenshot is not because of the Python interpreter, but because of the file you want the interpreter to run. You're calling</p>
<pre><code>python conditions
</code></pre>
<p>when you actually want to run</p>
<pre><code>python conditions.py
</code></pre>
<p>Neither PowerShell nor Python magically add the extension for you. Instead they report an error because a file <code>conditions</code> (without an extension) simply doesn't exist.</p>
| 0 | 2016-10-06T10:26:16Z | [
"python",
"python-3.x",
"powershell",
"directory"
]
|
Loading data from text file from only a part of the file name | 39,891,035 | <p>I have a lot of data in different text files. Each file name contains a word that is chosen by me, but they also include a lot of "gibberish". So for example I have a text file called <code>datapoints-(my chosen name)-12iu8w9e8v09wr-140-ad92-dw9</code></p>
<p>So the <code>datapoints</code> string is in all text files, the <code>(my chosen name)</code> is what I define and know how to extract in my code, but the last bit is random. And I don't want to go and delete that part in every text file I have, that would be a bit time consuming. </p>
<p>I just want to load these text files, but I'm unsure of how to target each file without using the "gibberish" in the end. I just want to say something like: "load file that includes (my chosen name)" and then not worry about the rest.</p>
| -1 | 2016-10-06T08:21:57Z | 39,891,316 | <p>This returns a list of all your files using the <a href="https://docs.python.org/2/library/glob.html" rel="nofollow">glob module</a>:</p>
<pre><code>import glob
your_words = ['word1', 'word2']
files = []
# find files matching 'datapoints-<word>-*.txt'
for word in your_words:
    # The * is a wildcard; your words are filled into the {} one by one
    files.extend(glob.glob('datapoints-{}-*.txt'.format(word)))
print files
</code></pre>
| 2 | 2016-10-06T08:35:46Z | [
"python"
]
|
Django 1.10 shows a square bracket in display of the form | 39,891,074 | <p>I'm quite new to django and i'm now creating my forms for uploading data in the database. I'm using django 1.10 and python 2.7</p>
<p>I have an upload.html in my templates, and then I have:</p>
<pre><code><div class="form">]
<form method="post" action="{% url 'upload' %}" enctype=multipart/form-data >
{% csrf_token %}
<!-- This line inserts a CSRF token. -->
<table>
{{ form.as_table }}
<!-- This line displays lines of the form. -->
</table>
<p><input type="submit" value="Create" /></p>
</form>
</div>
</code></pre>
<p>A part of my forms.py</p>
<pre><code>class Form_inscription(forms.Form):
study = forms.ModelChoiceField(label="Choose the study of the database file", queryset=Study.objects.all(),
initial=Study.objects.all()[:1].get().id)
databasefile = forms.FileField(label="Database file")
assay = forms.ModelChoiceField(label="Choose the assay", queryset=LookUpAssay.objects.all(),
initial=LookUpAssay.objects.all()[:1].get().id)
readout = forms.ChoiceField(label="Choose the readout, choose --- if assay contains all readouts", choices=readouttuple)
rawdatafile = forms.FileField(label ="Choose the raw data file")
</code></pre>
<p>It then displays this with the square bracket, and I have no idea why.</p>
<p>]</p>
<p>Choose the study of the database file:<br>
Database file:<br>
Choose the assay:<br>
Choose the readout, choose --- if assay contains all readouts:<br>
Choose the raw data file:</p>
<p>Could someone shed some light on this weird problem?
Thanks in advance,
Dani</p>
| 0 | 2016-10-06T08:23:34Z | 39,891,119 | <p>It's not your form, it's your HTML. There's a square bracket lurking in there:</p>
<pre><code><div class="form">]
# ^
</code></pre>
| 1 | 2016-10-06T08:25:16Z | [
"python",
"django",
"django-forms"
]
|
Split string > list of sublists of words and characters | 39,891,137 | <p>No imports allowed (it's a school assignment).</p>
<p>I wish to split a random string into a list of sublists: words go in a sublist, and every other character (including whitespace) goes in a sublist containing only one item. Does anyone have advice on how to do this?</p>
<pre><code>part = "Hi! Goodmorning, I'm fine."
list = [[H,i],[!],[_],[G,o,o,d,m,o,r,n,i,n,g],[,],[_],[I],['],[m],[_],[f,i,n,e],[.]]
</code></pre>
| -4 | 2016-10-06T08:26:14Z | 39,891,364 | <p>Use this:</p>
<pre><code>list1 = []
part = "Hi! Goodmorning, I'm fine."
part = part.split()
for item in part:
    list2 = []
    for ch in item:
        list2.append(ch)
    list1.append(list2)
    list1.append(['_'])
list1.pop()  # drop the trailing '_' separator
print list1
</code></pre>
| 0 | 2016-10-06T08:38:25Z | [
"python",
"python-2.7"
]
|
Split string > list of sublists of words and characters | 39,891,137 | <p>No imports allowed (it's a school assignment).</p>
<p>I wish to split a random string into a list of sublists: words go in a sublist, and every other character (including whitespace) goes in a sublist containing only one item. Does anyone have advice on how to do this?</p>
<pre><code>part = "Hi! Goodmorning, I'm fine."
list = [[H,i],[!],[_],[G,o,o,d,m,o,r,n,i,n,g],[,],[_],[I],['],[m],[_],[f,i,n,e],[.]]
</code></pre>
| -4 | 2016-10-06T08:26:14Z | 39,891,600 | <p>This does the trick:</p>
<pre><code>globalList = []
letters = "abcdefghijklmnopqrstuvwxyz"
message = "Hi! Goodmorning, I'm fine."
sublist = []
for char in message:
#if the character is in the list of letters, append it to the current substring
if char.lower() in letters:
sublist.append(char)
else:
#add the previous sublist (aka word) to globalList, if it is not empty
if sublist:
globalList.append(sublist)
#adds the single non-letter character to the globalList
globalList.append([char])
#initiates a fresh new sublist
sublist = []
print(globalList)
#output is [['H', 'i'], ['!'], [' '], ['G', 'o', 'o', 'd', 'm', 'o', 'r', 'n', 'i', 'n', 'g'], [','], [' '], ['I'], ["'"], ['m'], [' '], ['f', 'i', 'n', 'e'], ['.']]
</code></pre>
| 0 | 2016-10-06T08:50:26Z | [
"python",
"python-2.7"
]
|
Split string > list of sublists of words and characters | 39,891,137 | <p>No imports allowed (it's a school assignment).</p>
<p>I wish to split a random string into a list of sublists: words go in a sublist, and every other character (including whitespace) goes in a sublist containing only one item. Does anyone have advice on how to do this?</p>
<pre><code>part = "Hi! Goodmorning, I'm fine."
list = [[H,i],[!],[_],[G,o,o,d,m,o,r,n,i,n,g],[,],[_],[I],['],[m],[_],[f,i,n,e],[.]]
</code></pre>
| -4 | 2016-10-06T08:26:14Z | 39,892,424 | <p>Try this out:</p>
<pre><code>part = "Hi! Goodmorning, I'm fine."
n = part.count(" ")
part = part.split()
k = 0
# Add spaces to the list
for i in range(1,n+1):
part.insert(i+k, "_")
k += 1
new = [] # list to return
for s in part:
new.append([letter for letter in s])
</code></pre>
| 0 | 2016-10-06T09:31:37Z | [
"python",
"python-2.7"
]
|
Split string > list of sublists of words and characters | 39,891,137 | <p>No imports allowed (it's a school assignment).</p>
<p>I wish to split a random string into a list of sublists: words go in a sublist, and every other character (including whitespace) goes in a sublist containing only one item. Does anyone have advice on how to do this?</p>
<pre><code>part = "Hi! Goodmorning, I'm fine."
list = [[H,i],[!],[_],[G,o,o,d,m,o,r,n,i,n,g],[,],[_],[I],['],[m],[_],[f,i,n,e],[.]]
</code></pre>
| -4 | 2016-10-06T08:26:14Z | 39,893,456 | <pre><code>part = "Hi! Goodmorning, I'm fine."
a = []
b = []
c = 0
for i in part:
if i.isalpha():
if c == 1:
a.append(b)
b=[]
b.append(i)
c = 0
else:
b.append(i)
else:
a.append(b)
b=[]
b.append(i)
c = 1
a.append(b)
print a
</code></pre>
| 0 | 2016-10-06T10:21:30Z | [
"python",
"python-2.7"
]
|
Merge two dictionaries | 39,891,195 | <p>I have something like this: </p>
<pre><code>Dict1 = {'a': "blabla", 'b': "gugu"}
Dict2 = {'a': "tadaa", 'b': "duduu"}
</code></pre>
<p>what I want to have is:</p>
<pre><code>Dict3 = {'tadaa': "blabla", 'duduu': "gugu"}
</code></pre>
| -3 | 2016-10-06T08:29:06Z | 39,891,294 | <p>Would something like this work?</p>
<pre><code>Dict3 = dict(zip(Dict2.values(), Dict1.values()))
</code></pre>
| -2 | 2016-10-06T08:34:29Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Merge two dictionaries | 39,891,195 | <p>I have something like this: </p>
<pre><code>Dict1 = {'a': "blabla", 'b': "gugu"}
Dict2 = {'a': "tadaa", 'b': "duduu"}
</code></pre>
<p>what I want to have is:</p>
<pre><code>Dict3 = {'tadaa': "blabla", 'duduu': "gugu"}
</code></pre>
| -3 | 2016-10-06T08:29:06Z | 39,891,336 | <p>Iterate over one dictionary's keys and values, use the value as key in a new dictionary and use the key to pull the associated value out of the second dictionary. In <a href="https://docs.python.org/3.6/tutorial/datastructures.html#dictionaries" rel="nofollow">a single expression</a>:</p>
<pre><code>>>> Dict1 = {'a': "blabla", 'b': "gugu"}
>>> Dict2 = {'a': "tadaa", 'b': "duduu"}
>>> {v: Dict1[k] for k, v in Dict2.items()}
{'duduu': 'gugu', 'tadaa': 'blabla'}
</code></pre>
| 3 | 2016-10-06T08:37:00Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Merge two dictionaries | 39,891,195 | <p>I have something like this: </p>
<pre><code>Dict1 = {'a': "blabla", 'b': "gugu"}
Dict2 = {'a': "tadaa", 'b': "duduu"}
</code></pre>
<p>what I want to have is:</p>
<pre><code>Dict3 = {'tadaa': "blabla", 'duduu': "gugu"}
</code></pre>
| -3 | 2016-10-06T08:29:06Z | 39,891,476 | <p>To some extent, this depends on what the edge cases are, and what you want to do about them (for example, what if the keys in <code>Dict1</code> and <code>Dict2</code> can be different?)</p>
<p>Here's a solution that discards keys which only occur in one of the two dicts:</p>
<pre><code>>>> Dict1 = {'a': "blabla", 'b': "gugu", 'x': 'nope'}
>>> Dict2 = {'a': "tadaa", 'b': "duduu", 'y': 'nuh-uh'}
>>> {Dict2[k]: Dict1[k] for k in set(Dict1) & set(Dict2)}
{'tadaa': 'blabla', 'duduu': 'gugu'}
</code></pre>
| 1 | 2016-10-06T08:43:54Z | [
"python",
"python-2.7",
"python-3.x"
]
|
How to send raw string to a dotmatrix printer using python in ubuntu? | 39,891,202 | <p>I have a dot-matrix printer LX-300 connected to my computer through the network. How do I send a raw string with ESCP characters directly to my printer in Python?</p>
<p>The computer is connected to the printer through another computer. I need to send a raw string because LX-300 image printing result is blurry.</p>
| 5 | 2016-10-06T08:29:28Z | 40,046,832 | <p>Ultimately, you will need and want to write your own wrapper/script to do this. And since you are using a distribution of Linux, this is relatively easy.</p>
<p>On a Linux OS, the simplest way to issue a print job is to open a <a href="http://docs.python.org/3/library/subprocess.html" rel="nofollow" title="subprocess">subprocess</a> to the <a href="http://www.computerhope.com/unix/ulpr.htm" rel="nofollow">lpr</a>. Generally, using <code>lpr</code> lets you access the printer without the need to be logged in as root (being a superuser), which is desirable considering the amount of damage that can be done while logged in as a "superuser".</p>
<p>Code like the following:</p>
<pre><code>import subprocess

lpr = subprocess.Popen("/usr/bin/lpr", stdin=subprocess.PIPE)
lpr.stdin.write(data_to_send_to_printer)
lpr.stdin.close()  # close stdin so lpr sees EOF and spools the job
</code></pre>
<p>Should be a good jumping off point for you. Essentially, this code should allow you to accomplish what you need.</p>
<p><strong>Be careful though; depending on your privilege levels, a call to open a subprocess might need root level/Superuser permissions.</strong></p>
<p>Subprocesses generally <a href="http://stackoverflow.com/questions/22233454/will-a-python-subprocess-popen-call-inherit-root-privs-if-the-calling-script-i" title="inherit">inherit</a> the user ID and access rights of the user that is running the command. For example, if the subprocess is created by a root user, then you will need root user/Superuser rights to access that subprocess.</p>
<p>For more information, check out the hyperlinks I've included in the post.</p>
<p>Good luck!</p>
| 2 | 2016-10-14T15:27:37Z | [
"python",
"epson",
"dot-matrix",
"escpos"
]
|
How to send raw string to a dotmatrix printer using python in ubuntu? | 39,891,202 | <p>I have a dot-matrix printer LX-300 connected to my computer through the network. How do I send a raw string with ESCP characters directly to my printer in Python?</p>
<p>The computer is connected to the printer through another computer. I need to send a raw string because LX-300 image printing result is blurry.</p>
| 5 | 2016-10-06T08:29:28Z | 40,047,237 | <h2>The Problem</h2>
<p>To send data down this route:</p>
<p>Client computer ---> Server (Windows machine) ---> printer (dot-matrix)</p>
<p>...and to <em>not</em> let Windows mess with the data; instead to send the raw data, including printer control codes, straight from the client computer.</p>
<h2>My Solution</h2>
<p>Here's how I solved a near-identical problem for a small in-house database application:</p>
<p>Step 1) Make the printer network-accessible without Windows getting its fingers in the data routed to it. I accomplished this by installing the printer using the "Generic/Text Only" driver, then installing
<a href="https://sourceforge.net/projects/rawprintserver/" rel="nofollow">RawPrintServer</a> on the Windows machine connected to the printer.</p>
<p>Step 2) Send raw data over the network to the TCP/IP port specified when you set up RawPrintServer (default is 9100). There are various ways to do that, here's what I did:</p>
<pre><code>import socket

data = b"\x1B@A String To Print\x1B@" # be sure to use the right codes for your printer
ip_addr = '123.123.123.123' # address of the machine with the printer
port = 9100 # or whatever you set it to
s = socket.socket()
try:
s.connect((ip_addr, port))
s.send(data)
except:
# deal with the error
finally:
s.close()
</code></pre>
<h2>Background</h2>
<p>I thought about the problem in two parts:</p>
<ol>
<li>Client machine: spitting out the data I need from Python with the correct formatting/control codes for my printer, and sending it across the network</li>
<li>Print server machine: transmitting the data to the locally connected printer</li>
</ol>
<p>Number 1 is the easy part. There are actually <a href="https://pypi.python.org/pypi?%3Aaction=search&term=escpos&submit=search" rel="nofollow">some libraries in PyPI</a> that may help with all the printer codes, but I found most of them are aimed at the little point-of-sale label printers, and were of limited use to me. So I just hard-coded what I needed into my Python program.</p>
<p>Of course, the way you choose to solve number 2 will affect how you send the data from Python. I chose the TCP/IP route to avoid dealing with Samba and Windows print issues.</p>
<p>As you probably discovered, Windows normally tries very hard to convert whatever you want to print to a bitmap and run the printer in graphics mode. We can use the generic driver and dump the data straight into the (local) printer port in order to prevent this.</p>
<p>The missing link, then, is getting from the network to the local printer port on the machine connected to the printer. Again, there are various ways to solve this. You could attempt to access the Windows printer share in some way. If you go the TCP/IP route like I did, you could write your own print server in Python. In my case, the RawPrintServer program "just worked" so I didn't investigate any further. Apparently all it does is grab incoming data from TCP port 9100 and shove it into the local printer port. Obviously you'll have to be sure the firewall isn't blocking the incoming connections on the print server machine. This method does not require the printer to be "shared" as far as Windows is concerned.</p>
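<p>For reference, a bare-bones sketch of such a DIY print server (untested, and the printer device path is an assumption you would adjust for your machine):</p>
<pre><code>import socket

PRINTER_DEV = '/dev/usb/lp0'  # placeholder: the local printer port/device

srv = socket.socket()
srv.bind(('', 9100))
srv.listen(1)
while True:
    conn, addr = srv.accept()
    # dump everything received straight into the printer port, unmodified
    with open(PRINTER_DEV, 'wb') as printer:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            printer.write(chunk)
    conn.close()
</code></pre>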
<p>Depending on your situation (if you use DHCP), you might need to do some extra work to get the server's IP address in Python. In my case, I got the IP for free because of the peculiarity of my application.</p>
<p>This solution seems to be working out very well for me. I've got an old Panasonic printer running in Epson ESC/P compatibility mode connected to a Windows 7 machine, which I can print to from any other computer on the local network. Incidentally, this general idea should work regardless of what OS the client computer is running.</p>
| 3 | 2016-10-14T15:47:44Z | [
"python",
"epson",
"dot-matrix",
"escpos"
]
|
unable to fetch json attribute in POST request | 39,891,234 | <p>This JSON is received as a POST request. Now
I want to get the value of the <code>text</code> key of each entry in the <code>actions</code> array.</p>
<p>I am using Python's Bottle to receive the request.
To fetch the value of the required attribute, I did this:</p>
<pre><code>word = request.forms.get('[attachments][actions][0][text]')
</code></pre>
<p>But this doesn't print the required value.</p>
<pre><code>{
"attachments": [
{
"title": "XYZ",
"title_link": "EDWE",
"text": "dxjhvgebndm",
"fields": [
{
"title": "Food",
"value": "$20",
"short": true
}
],
"actions": [
{
"name": "chess",
"text": "Approve",
"type": "button",
"value": "chess",
"style": "primary"
},
{
"name": "maze",
"text": "Decline",
"style": "danger",
"type": "button",
"value": "maze"
},
{
"name": "war",
"text": "More details",
"style": "default",
"type": "button",
"value": "war",
"confirm": {
"title": "Are you sure?",
"text": "Would you like to see more details of your expense?",
"ok_text": "Yes",
"dismiss_text": "No"
}
}
],
"image_url": "",
"thumb_url": "https://i.imgsafe.org/cf40eef.png",
"footer": "fghj",
"footer_icon": "https://i.imgsafe.org/cf2e0eef.png",
"ts": 1475057533
}
]
}
</code></pre>
<p>Note: I am receiving the complete JSON; the problem is in fetching the correct attribute.</p>
<p><strong>EDIT</strong>
This is how I am receiving the POST request:</p>
<pre><code>import json
from bottle import route, run, request
import urllib
@route('/ocr_response', method='POST')
def ocr_response():
body = request.body.read()
word = request.forms.get('[attachments][actions][0][text]')
print word
print body
if __name__ == "__main__":
run(host='0.0.0.0', port=80, debug=True)
</code></pre>
| 0 | 2016-10-06T08:31:01Z | 39,891,319 | <p>That's not how you access items in a dictionary at all.</p>
<p>Firstly, the JSON data is available via <code>request.json</code>. Secondly, I'm not sure what you're doing with that string you're passing to get, but you need to use normal dictionary/array syntax. And thirdly, attachments is a list just like actions, so you'd need to add an index there too.</p>
<pre><code>request.json['attachments'][0]['actions'][0]['text']
</code></pre>
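<p>And to collect the <code>text</code> of every entry in <code>actions</code> (what the question asks for), a sketch building on the same structure:</p>
<pre><code>texts = [action['text']
         for attachment in request.json['attachments']
         for action in attachment['actions']]
# -> ['Approve', 'Decline', 'More details'] for the JSON shown above
</code></pre>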
| 3 | 2016-10-06T08:36:14Z | [
"python",
"json",
"post",
"bottle"
]
|
Better way to convert json to SQLAlchemy object | 39,891,387 | <p>These days I am learning SQLAlchemy. When I want to load an object from JSON and save it to MySQL, things get difficult because there are more than 20 fields in my model, and I wonder whether there are better ways to do this.</p>
<p>My original code follows as an example:</p>
<pre><code>import json

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
name = Column(String)
json_str = """{"id": 1, "name": "yetship"}"""
obj = json.loads(json_str)
user = User(id=obj.get("id"), name=obj.get("name"))
</code></pre>
<p>It can work but as I add more fields, it gets awful.</p>
| -1 | 2016-10-06T08:39:41Z | 39,891,537 | <p>If your JSON contains only fields that you can initialize your <code>User</code> from, then you can simply do:</p>
<pre><code>user = User(**obj)
</code></pre>
<p><code>**obj</code> will unpack your <code>dict</code> object, so if you have <code>obj = {'id': 1, 'name': 'Awesome'}</code>, <code>User(**obj)</code> will behave like <code>User(id=1, name='Awesome')</code>.</p>
<p>You can see <a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">the docs</a></p>
<p>NOTE: it's better to avoid using <code>id</code> as a variable/field name, because <code>'id' in dir(__builtin__)</code> holds, i.e. <code>id</code> is a builtin.</p>
<p><strong>UPD</strong></p>
<p>If your JSON has fields that don't belong to that model, you can filter them out with a dict comprehension:</p>
<pre><code>user = User(**{k:v for k, v in obj.items() if k in {'id', 'name'}})
</code></pre>
| -1 | 2016-10-06T08:47:21Z | [
"python",
"json",
"sqlalchemy"
]
|
How can I simplify my code in an efficient way? | 39,891,501 | <p>So currently I'm stuck on a question in my assignment.
The assignment question is:
Define the print_most_frequent() function which is passed two parameters, a dictionary containing words and their corresponding frequencies (how many times they occurred in a string of text), e.g.,</p>
<pre><code>{"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
</code></pre>
<p>and, an integer, the length of the keywords in the dictionary which are to be considered. </p>
<p>The function prints the keyword length, followed by " letter keywords: ", then prints a sorted list of all the dictionary keywords of the required length, which have the highest frequency, followed by the frequency. For example, the following code:</p>
<pre><code>word_frequencies = {"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
print_most_frequent(word_frequencies,3)
print_most_frequent(word_frequencies,4)
print_most_frequent(word_frequencies,5)
print_most_frequent(word_frequencies,6)
print_most_frequent(word_frequencies, 7)
</code></pre>
<p>prints the following:</p>
<pre><code>3 letter keywords: ['bat', 'cat'] 9
4 letter keywords: ['fish', 'frog'] 9
5 letter keywords: ['stork'] 1
6 letter keywords: ['parrot'] 8
7 letter keywords: [] 0
</code></pre>
<p>I have written code to get the answer above, however it says I'm wrong. Maybe it needs simplifying, but I'm struggling with how to do that. Could someone help? Thank you.</p>
<pre><code>def print_most_frequent(words_dict, word_len):
word_list = []
freq_list = []
for word,freq in words_dict.items():
if len(word) == word_len:
word_list += [word]
freq_list += [freq]
new_list1 = []
new_list2 = []
if word_list == [] and freq_list == []:
new_list1 += []
new_list2 += [0]
return print(new_list1, max(new_list2))
else:
maximum_value = max(freq_list)
for i in range(len(freq_list)):
if freq_list[i] == maximum_value:
new_list1 += [word_list[i]]
new_list2 += [freq_list[i]]
new_list1.sort()
return print(new_list1, max(new_list2))
</code></pre>
| 0 | 2016-10-06T08:45:29Z | 39,891,801 | <p>You can use:</p>
<pre><code>def print_most_frequent(words_dict, word_len):
max_freq = 0
words = list()
for word, frequency in words_dict.items():
if len(word) == word_len:
if frequency > max_freq:
max_freq = frequency
words = [word]
elif frequency == max_freq:
words.append(word)
print("{} letter keywords:".format(word_len), sorted(words), max_freq)
</code></pre>
<p>It just iterates over the words dictionary, considering only the words whose length is the wanted one and builds the list of the most frequent words, resetting it as soon as a greater frequency is found.</p>
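<p>For example, with the dictionary from the question:</p>
<pre><code>word_frequencies = {"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
print_most_frequent(word_frequencies, 3)
# prints: 3 letter keywords: ['bat', 'cat'] 9
</code></pre>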
| 0 | 2016-10-06T08:59:55Z | [
"python",
"list",
"dictionary"
]
|
How can I simplify my code in an efficient way? | 39,891,501 | <p>So currently I'm stuck on a question in my assignment.
The assignment question is:
Define the print_most_frequent() function which is passed two parameters, a dictionary containing words and their corresponding frequencies (how many times they occurred in a string of text), e.g.,</p>
<pre><code>{"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
</code></pre>
<p>and, an integer, the length of the keywords in the dictionary which are to be considered. </p>
<p>The function prints the keyword length, followed by " letter keywords: ", then prints a sorted list of all the dictionary keywords of the required length, which have the highest frequency, followed by the frequency. For example, the following code:</p>
<pre><code>word_frequencies = {"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
print_most_frequent(word_frequencies,3)
print_most_frequent(word_frequencies,4)
print_most_frequent(word_frequencies,5)
print_most_frequent(word_frequencies,6)
print_most_frequent(word_frequencies, 7)
</code></pre>
<p>prints the following:</p>
<pre><code>3 letter keywords: ['bat', 'cat'] 9
4 letter keywords: ['fish', 'frog'] 9
5 letter keywords: ['stork'] 1
6 letter keywords: ['parrot'] 8
7 letter keywords: [] 0
</code></pre>
<p>I have written code to get the answer above, however it says I'm wrong. Maybe it needs simplifying, but I'm struggling with how to do that. Could someone help? Thank you.</p>
<pre><code>def print_most_frequent(words_dict, word_len):
word_list = []
freq_list = []
for word,freq in words_dict.items():
if len(word) == word_len:
word_list += [word]
freq_list += [freq]
new_list1 = []
new_list2 = []
if word_list == [] and freq_list == []:
new_list1 += []
new_list2 += [0]
return print(new_list1, max(new_list2))
else:
maximum_value = max(freq_list)
for i in range(len(freq_list)):
if freq_list[i] == maximum_value:
new_list1 += [word_list[i]]
new_list2 += [freq_list[i]]
new_list1.sort()
return print(new_list1, max(new_list2))
</code></pre>
| 0 | 2016-10-06T08:45:29Z | 39,891,998 | <p>One way you can do this is to map the values as keys and vice versa; this way you can easily get the most frequent words:</p>
<pre><code>a = {"fish":9, "parrot":8, "frog":9, "cat":9, "stork":1, "dog":4, "bat":9, "rat":4}
getfunc = lambda x, dct: [i for i in dct if dct[i] == x]
new_dict = { k : getfunc(k, a) for k in a.values() }
print (new_dict)
</code></pre>
<p>output:</p>
<pre><code>{8: ['parrot'], 1: ['stork'], 4: ['rat', 'dog'], 9: ['bat', 'fish', 'frog', 'cat']}
</code></pre>
<p>So, now if you want the words with frequency <code>9</code>, simply say:</p>
<pre><code>b = new_dict[9]
print (b, len(b))
</code></pre>
<p>which will give:</p>
<pre><code>['cat', 'fish', 'bat', 'frog'] 4
</code></pre>
<p>You get to use the dictionary instead of calling the function over and over. This is faster as you loop over the frequencies just once, but if you still need a function, you can just use a one-liner lambda:</p>
<pre><code>print_most_frequent = lambda freq, x: print (freq[x])
print_most_frequent(new_dict, 9)
print_most_frequent(new_dict, 4)
</code></pre>
<p>which gives:</p>
<pre><code>['fish', 'bat', 'frog', 'cat']
['rat', 'dog']
</code></pre>
| 0 | 2016-10-06T09:10:17Z | [
"python",
"list",
"dictionary"
]
|
Python - How to move the file after complete writing | 39,891,584 | <p>How do I get Python to move the file after writing completes on the server?</p>
<p>Below is my sample to lock the file after writing completes, but it doesn't work on the Linux server.</p>
<pre><code>try:
fcntl.lockf(file2,fcntl.LOCK_EX|fcntl.LOCK_NB)
print "Yes Locked"
time.sleep(20)
except:
print "No Lock"
file.close()
</code></pre>
<p>Any suggestions? Thank you.</p>
| 0 | 2016-10-06T08:49:40Z | 39,909,206 | <p>You could use the os.rename method:</p>
<pre><code>import os
os.rename('oldPath/Name', 'newPath/Name')
</code></pre>
<p>Check out this answer for additional info:
<a href="http://stackoverflow.com/questions/8858008/how-to-move-a-file-in-python">How to move a file in Python</a></p>
| 0 | 2016-10-07T04:06:19Z | [
"python",
"linux"
]
|
List comprehension including elements of different sizes | 39,891,632 | <p>I have the following <code>for/if-elif-else</code> loop which extracts department information from tuples <code>t</code> in a list, based on the size of <code>t[0]</code>:</p>
<pre><code>for t in filt:
if len(t[0]) == 1:
pass
elif len(t[0]) == 2:
if 'organization' in t[0][0]['affiliation']:
depA = t[0][0]['affiliation']['organization']
else: depA = 'Not Available'
if 'organization' in t[0][1]['affiliation']:
depB = t[0][1]['affiliation']['organization']
else: depB = 'Not Available'
depC = 'None'
else:
if 'organization' in t[0][0]['affiliation']:
depA = t[0][0]['affiliation']['organization']
else: depA = 'Not Available'
if 'organization' in t[0][1]['affiliation']:
depB = t[0][1]['affiliation']['organization']
else: depB = 'Not Available'
if 'organization' in t[0][2]['affiliation']:
            depC = t[0][2]['affiliation']['organization']
else: depC = 'Not Available'
</code></pre>
<p>Is there a way to do things like this in a single line even though the sizes of <code>t[0]</code> may be different? The reason I ask is that I may be incorrectly assuming that there is a maximum of 3 departments in <code>t[0]</code> when there may in fact be more, and I'd like to save lines of code if possible.</p>
<p>In essence what I'd really like is something like having a list of a sensible number of the maximum possible departments based on my data ie 6 and then have something like</p>
<pre><code>for t in filt:
depA = [t[0][0]['affiliation']['organization'] if 'organization' in t[0][0]['affiliation'] else 'Not Available']
</code></pre>
<p>which is fine because <code>t[0]</code> is always of size at least 1. But here's where it gets tricky and the line of code below won't make pythonic sense:</p>
<pre><code>depB = [t[0][1]['affiliation']['organization'] if t[0][1] exists AND 'organization' in t[0][1]['organization'] else 'Not Available']
</code></pre>
<p>and so forth...</p>
<p>If I haven't worded the question title right, please change as required! Thanks!</p>
| 0 | 2016-10-06T08:52:05Z | 39,892,617 | <p>Turns out the following line of code will work, checking the index against the size of the list in question and using if else in the list comprehension:</p>
<pre><code>depB = [t[0][1]['affiliation']['organization'] if 2<=len(t[0]) and 'organization' in t[0][1]['affiliation'] else 'Not Available' for t in filt]
</code></pre>
| 0 | 2016-10-06T09:41:06Z | [
"python"
]
|
Python usage of regular expressions | 39,891,640 | <p>How can I extract <em>string1#string2</em> from the line below?</p>
<pre><code><![CDATA[<html><body><p style="margin:0;">string1#string2</p></body></html>]]>
</code></pre>
<p>The # character and the structure of the line is always the same.</p>
| -1 | 2016-10-06T08:52:22Z | 39,891,685 | <p>I would like to refer you to this <a href="http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags">gem</a>:</p>
<p>In short, a regex is not the appropriate tool for this job.<br>
Also, have you tried an <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow">XML</a> parser instead?</p>
<p>EDIT: </p>
<pre><code>import xml.etree.ElementTree as ET
a = "<html><body><p style=\"margin:0;\">string1#string2</p></body></html>"
root = ET.fromstring(a)
c = root[0][0].text
# c == 'string1#string2'
d = c.replace('#', ' ').split()
# d == ['string1', 'string2']
</code></pre>
| 1 | 2016-10-06T08:54:35Z | [
"python",
"regex"
]
|
Python usage of regular expressions | 39,891,640 | <p>How can I extract <em>string1#string2</em> from the line below?</p>
<pre><code><![CDATA[<html><body><p style="margin:0;">string1#string2</p></body></html>]]>
</code></pre>
<p>The # character and the structure of the line is always the same.</p>
| -1 | 2016-10-06T08:52:22Z | 39,891,707 | <p>Simple, buggy, not reliable:</p>
<pre><code>line.replace('<![CDATA[<html><body><p style="margin:0;">', "").replace('</p></body></html>]]>', "").split("#")
</code></pre>
| 1 | 2016-10-06T08:55:42Z | [
"python",
"regex"
]
|
Python usage of regular expressions | 39,891,640 | <p>How can I extract <em>string1#string2</em> from the line below?</p>
<pre><code><![CDATA[<html><body><p style="margin:0;">string1#string2</p></body></html>]]>
</code></pre>
<p>The # character and the structure of the line is always the same.</p>
| -1 | 2016-10-06T08:52:22Z | 39,891,807 | <p>If you wish to use a regex:</p>
<pre><code>>>> re.search(r"<p.*?>(.+?)</p>", txt).group(1)
'string1#string2'
</code></pre>
| 0 | 2016-10-06T09:00:13Z | [
"python",
"regex"
]
|
Python usage of regular expressions | 39,891,640 | <p>How can I extract <em>string1#string2</em> from the line below?</p>
<pre><code><![CDATA[<html><body><p style="margin:0;">string1#string2</p></body></html>]]>
</code></pre>
<p>The # character and the structure of the line is always the same.</p>
| -1 | 2016-10-06T08:52:22Z | 39,892,048 | <pre><code>re.search(r'[^>]+#[^<]+',s).group()
</code></pre>
| 1 | 2016-10-06T09:12:54Z | [
"python",
"regex"
]
|
wxpython treectrl show bitmap picture on hover | 39,891,681 | <p>So I'm writing a Python program that uses wxPython for the UI, with a wx.TreeCtrl widget for selecting pictures (.png) in a selected directory. I would like to add a hover on tree items that works like a tooltip, but instead of text it shows a bitmap picture.</p>
<p>Is there something that already allows this, or would I have to create something with wxWidgets?</p>
<p>I am not too familiar with wxWidgets, so if I have to create something like that, how hard would it be? A lot of code is already using the TreeCtrl, so it needs to be able to work the same way.</p>
<p>So how would I go about doing this? And if there is something I might be missing, I'd be happy to know.</p>
| 0 | 2016-10-06T08:54:28Z | 39,901,257 | <p>Take a look at the <code>wx.lib.agw.supertooltip</code> module. It should help you to create a tooltip-like window that displays custom rich content.</p>
<p>As for triggering the display of the tooltip, you can catch mouse events for the tree widget (be sure to call <code>Skip</code> so the tree widget can see the events too) and reset a timer each time the mouse moves. If the timer expires because the mouse hasn't been moved in that long then you can use <code>tree.HitTest</code> to find the item that the cursor is on and then show the appropriate image for that item.</p>
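<p>A rough sketch of that timer logic, as a helper you would mix into your frame (untested; <code>ShowImageTip</code> is a placeholder for your own tooltip code):</p>
<pre><code>import wx

HOVER_DELAY_MS = 500  # assumption: half a second feels tooltip-like

class HoverHelper:
    def BindHover(self, tree):
        self.tree = tree
        self.hoverTimer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.OnHoverTimer, self.hoverTimer)
        tree.Bind(wx.EVT_MOTION, self.OnMotion)

    def OnMotion(self, event):
        self.hoverTimer.Start(HOVER_DELAY_MS, oneShot=True)  # reset on every move
        event.Skip()  # let the tree see the event too

    def OnHoverTimer(self, event):
        pos = self.tree.ScreenToClient(wx.GetMousePosition())
        item, flags = self.tree.HitTest(pos)
        if item.IsOk():
            self.ShowImageTip(item)  # placeholder: build a SuperToolTip here
</code></pre>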
| 1 | 2016-10-06T16:31:04Z | [
"python",
"bitmap",
"wxpython"
]
|
How to add outer tag to BeautifulSoup object | 39,891,983 | <p>I am trying to replace the content of an iframe in a BeautifulSoup object. Let's say this</p>
<pre><code> s="""
<!DOCTYPE html>
<html>
<body>
<iframe src="http://www.w3schools.com">
<p>Your browser does not support iframes.</p>
</iframe>
</body>
</html>
"""
</code></pre>
<p>is the original html being parsed with</p>
<pre><code>dom = BeautifulSoup(s, 'html.parser')
</code></pre>
<p>and I get the iframe with <code>f = dom.find('iframe')</code></p>
<p>Now I want to replace only the content of the iframe with another BeautifulSoup object, e.g. the object newBO. If I do <code>f.replace_with(newBO)</code>
it works but I lose the hierarchy of the original file because the iframe tag is gone. If instead of a BeautifulSoup object I had just a string I could do <code>f.string = 'just a string'</code> and that would replace the content, but if I do <code>f.string = newBO</code></p>
<p>I get</p>
<blockquote>
<p>TypeError: 'NoneType' object is not callable</p>
</blockquote>
<p>So I am trying to use the <code>replace_with</code> but add an <code>iframe</code> tag to the newBO. How can I do that? Can you suggest some other way?</p>
| 2 | 2016-10-06T09:09:15Z | 39,892,989 | <p><a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#extract" rel="nofollow"><em>extract</em></a> the content then <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#insert" rel="nofollow">insert</a>:</p>
<pre><code>from bs4 import BeautifulSoup
dom = BeautifulSoup(s, 'html.parser')
f = dom.find('iframe')
for ele in f.find_all():
ele.extract()
new = BeautifulSoup("<div>foo</div>", "html.parser").find("div")
f.insert(0, new)
print(dom)
</code></pre>
<p>Which would give you:</p>
<pre><code> <!DOCTYPE html>
<html>
<body>
<iframe src="http://www.w3schools.com"><div>foo</div>
</iframe>
</body>
</html>
</code></pre>
<p>To also remove any string set <code>f.string=""</code>:</p>
<pre><code>f = dom.find('iframe')
for ele in f.find_all():
print(type(ele))
ele.extract()
f.string = ""
new = BeautifulSoup("<div>foo</div>","html.parser").find("div")
f.insert(0, new)
print(dom)
</code></pre>
<p>Which would then give you:</p>
<pre><code><!DOCTYPE html>
<html>
<body>
<iframe src="http://www.w3schools.com"><div>foo</div></iframe>
</body>
</html>
</code></pre>
<p>In this case you could also use <code>f.append(new)</code> as it is going to be the only element.</p>
| 2 | 2016-10-06T09:59:14Z | [
"python",
"html",
"iframe",
"beautifulsoup"
]
|
Append and delete elements from a list using property | 39,891,988 | <p>In my base class, <em>_mylist</em> is defined as a <em>list</em>:</p>
<pre><code>class Foo(object):
def __init__(self):
self._mylist = list()
@property
def mylist(self):
return self._mylist
@mylist.setter
def mylist(self, value):
self._mylist = value
</code></pre>
<p>In my derived class Boo </p>
<pre><code>class Boo(Foo):
def __init__(self):
""" """
def add_element(self,value):
        Foo.mylist.append(value)
</code></pre>
<p>I would like to add and delete elements to <em>mylist</em> from <em>Boo</em>.
I tried the following:</p>
<pre><code>boo = Boo()
boo.add_element(5)
</code></pre>
<p>The following exception came up:</p>
<pre><code>AttributeError: 'property' object has no attribute 'append'
</code></pre>
<p>At present I modified the setter property of <em>mylist</em> to look like:</p>
<pre><code> @mylist.setter
def mylist(self, value):
self._mylist.append( value )
</code></pre>
<p>This allows me to add elements to <em>mylist</em>, but I don't know how to delete elements from it.</p>
<p>Is there a better way of modifying a list in the base class from the derived class?</p>
| 0 | 2016-10-06T09:09:33Z | 39,892,120 | <p>Try something like this:</p>
<pre><code>def del_element(self, value):
    self._mylist.remove(value)

def add_element(self, value):
    self._mylist.append(value)
</code></pre>
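<p>Defined as plain methods on <code>Foo</code> (a property getter cannot take an extra <code>value</code> argument), and assuming <code>Boo.__init__</code> calls <code>super().__init__()</code> so that <code>_mylist</code> exists, usage from the derived class would look like:</p>
<pre><code>b = Boo()
b.add_element(5)
b.del_element(5)
</code></pre>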
| -2 | 2016-10-06T09:16:10Z | [
"python",
"python-3.x"
]
|
Append and delete elements from a list using property | 39,891,988 | <p>In my base class is <em>_mylist</em> defined as a <em>list</em></p>
<pre><code>class Foo(object):
def __init__(self):
self._mylist = list()
@property
def mylist(self):
return self._mylist
@mylist.setter
def mylist(self, value):
self._mylist = value
</code></pre>
<p>In my derived class Boo </p>
<pre><code>class Boo(Foo):
def __init__(self):
""" """
def add_element(self,value):
        Foo.mylist.append(value)
</code></pre>
<p>I would like to add and delete elements to <em>mylist</em> from <em>Boo</em>.
I tried the following:</p>
<pre><code>boo = Boo()
boo.add_element(5)
</code></pre>
<p>The following exception came up:</p>
<pre><code>AttributeError: 'property' object has no attribute 'append'
</code></pre>
<p>At present I modified the setter property of <em>mylist</em> to look like:</p>
<pre><code> @mylist.setter
def mylist(self, value):
self._mylist.append( value )
</code></pre>
<p>This allows me to add elements to <em>mylist</em>, but I don't know how to delete elements from it.</p>
<p>Is there a better way of modifying a list in the base class from the derived class?</p>
| 0 | 2016-10-06T09:09:33Z | 39,892,355 | <p>Why change the semantics of the setter?</p>
<p>What about just manipulating the list using the list's methods?</p>
<pre><code>class Foo(object):
    def __init__(self):
        self._mylist = list()

    @property
    def mylist(self):
        return self._mylist

    @mylist.setter
    def mylist(self, value):
        self._mylist = value


class Boo(Foo):
    pass


b = Boo()
b.mylist.append(1)  # append directly
b.mylist.append(2)
b.mylist.append(3)
b.mylist.remove(2)  # remove directly
</code></pre>
| 1 | 2016-10-06T09:28:13Z | [
"python",
"python-3.x"
]
|
Python3 - Download file from google docs private and public url | 39,892,017 | <p>I want to download files from Google Docs URLs, including private files, which requires authenticating to Google. I found this <a href="https://github.com/uid/gdoc-downloader" rel="nofollow">github project</a>, but it is very old and its authentication doesn't work, so I can only download public files. Using urllib3 doesn't work either and gives me JS instead of HTML.</p>
<p>In the Google Docs API I've found code that connects to Google but only knows how to get recent docs (of the authenticated user) and things like that.</p>
<p>I need code (preferably in Python 3) that knows how to authenticate and download files straight from Google Docs.</p>
| 0 | 2016-10-06T09:11:41Z | 39,912,030 | <p>The user has to authenticate the app, which will most likely call a browser which will bring up the user consent screen. You can use the official <a href="https://developers.google.com/drive/v3/web/quickstart/python" rel="nofollow">Python Quickstart</a> to handle this.</p>
<p>As for downloading the docs, the documentation also discusses this (<a href="https://developers.google.com/drive/v3/web/manage-downloads" rel="nofollow">Download Files</a>) with some code snippets for python you can use.</p>
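<p>A minimal sketch along the lines of those docs, assuming <code>service</code> is the authorized Drive client built in the quickstart and <code>file_id</code> identifies the document you want:</p>
<pre><code>import io
from googleapiclient.http import MediaIoBaseDownload

# export a Google Doc; for ordinary binary files use service.files().get_media(...)
request = service.files().export_media(fileId=file_id,
                                       mimeType='application/pdf')
fh = io.BytesIO()
downloader = MediaIoBaseDownload(fh, request)
done = False
while not done:
    status, done = downloader.next_chunk()

with open('mydoc.pdf', 'wb') as f:
    f.write(fh.getvalue())
</code></pre>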
| 2 | 2016-10-07T07:43:41Z | [
"python",
"python-3.x",
"google-docs",
"google-docs-api"
]
|
How to merge two pandas data frames? | 39,892,028 | <p>I have two pandas data frames (see below). I want to merge them based on the id (Dataframe1) and localid (Dataframe2).
This code is not working; it creates additional rows in dfmerged because Dataframe2 may contain multiple rows with the same localid (e.g., D3). How can I merge these two dataframes and set the value of the 'color' column to NaN when an id has no matching localid?</p>
<pre><code>dfmerged = pd.merge(df1, df2, left_on='id', right_on='localid')
</code></pre>
<p><a href="http://i.stack.imgur.com/5Jr1M.png" rel="nofollow"><img src="http://i.stack.imgur.com/5Jr1M.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T09:12:19Z | 39,892,174 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> and <code>sum</code> values in <code>list</code> in <code>df2</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> column <code>localid</code>:</p>
<pre><code>df1 = pd.DataFrame({'id':['D1','D2','D3','D4','D5','D6'],
                    'Field1':[12,15,11,7,55,8.8]})
print (df1)
   Field1  id
0    12.0  D1
1    15.0  D2
2    11.0  D3
3     7.0  D4
4    55.0  D5
5     8.8  D6

df2 = pd.DataFrame({'localid':['D1','D2','D3','D3','D9'],
                    'color':[['b'],['a'],['a','b'],['s','d'], ['a']]})
print (df2)
    color localid
0     [b]      D1
1     [a]      D2
2  [a, b]      D3
3  [s, d]      D3
4     [a]      D9
</code></pre>
<pre><code>df2 = df2.groupby('localid', as_index=False)['color'].sum()
print (df2)
  localid         color
0      D1           [b]
1      D2           [a]
2      D3  [a, b, s, d]
3      D9           [a]

dfmerged = pd.merge(df1,
                    df2,
                    left_on='id',
                    right_on='localid',
                    how='left').drop('localid', axis=1)
print (dfmerged)
   Field1  id         color
0    12.0  D1           [b]
1    15.0  D2           [a]
2    11.0  D3  [a, b, s, d]
3     7.0  D4           NaN
4    55.0  D5           NaN
5     8.8  D6           NaN
</code></pre>
| 2 | 2016-10-06T09:19:09Z | [
"python",
"pandas",
"merge"
]
|
How to merge two pandas data frames? | 39,892,028 | <p>I have two pandas data frames (see below). I want to merge them based on the id (Dataframe1) and localid (Dataframe2).
This code is not working; it creates additional rows in dfmerged because Dataframe2 may contain multiple rows with the same localid (e.g., D3). How can I merge these two dataframes and set the value of the 'color' column to NaN when an id has no matching localid?</p>
<pre><code>dfmerged = pd.merge(df1, df2, left_on='id', right_on='localid')
</code></pre>
<p><a href="http://i.stack.imgur.com/5Jr1M.png" rel="nofollow"><img src="http://i.stack.imgur.com/5Jr1M.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T09:12:19Z | 39,892,509 | <p>You should probably simplify <code>df2</code> to <strong>have no repeating keys</strong>, and then tell <code>pd.merge</code> to use <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#brief-primer-on-merge-methods-relational-algebra" rel="nofollow">union of keys from both frames</a> (with <code>how:'outer'</code>):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'id':['D1','D2','D3','D4','D5','D6'],
                    'Field1':[12, 15, 11, 7, 55, 8.8]})
df2 = pd.DataFrame({'localid':['D1','D2','D3','D3','D9'],
                    'color':[['blue','grey'],
                             ['yellow'],
                             ['black','red','green'],
                             ['white'],
                             ['blue']]})

dfmerged  = pd.merge(df1, df2, left_on='id', right_on='localid')
dfmerged2 = pd.merge(df1, df2, left_on='id', right_on='localid', how='outer')
</code></pre>
<p>Which results in:</p>
<pre><code>>>> dfmerged2
   Field1   id                color localid
0    12.0   D1         [blue, grey]      D1
1    15.0   D2             [yellow]      D2
2    11.0   D3  [black, red, green]      D3
3    11.0   D3              [white]      D3
4     7.0   D4                  NaN     NaN
5    55.0   D5                  NaN     NaN
6     8.8   D6                  NaN     NaN
7     NaN  NaN               [blue]      D9
</code></pre>
| 0 | 2016-10-06T09:36:04Z | [
"python",
"pandas",
"merge"
]
|
Python 3.4 does not read more than one line from a text file | 39,892,150 | <p>I am writing Python (3.4) code which uses basic authentication. I have stored the credentials, i.e. the username &amp; password, in a text file (abc.txt).<br>
Whenever I log in, the code accepts only the first line of the text file, ignores the rest of the credentials, and gives an incorrect-credentials error.</p>
<p>My code:</p>
<pre><code>with open('abc.txt') as f:
    credentials = [x.strip().split(':') for x in f.readlines()]

for username, password in credentials:
    user_input = input('Please Enter username: ')
    if user_input != username:
        sys.exit('Incorrect username, terminating... \n')
    user_input = input('Please Enter Password: ')
    if user_input != password:
        sys.exit('Incorrect Password, terminating... \n')
    print ('User is logged in!\n')
</code></pre>
<p>abc.txt:</p>
<pre><code>Sil:xyz123
smith:abc321
</code></pre>
| 1 | 2016-10-06T09:17:50Z | 39,892,390 | <p>That is happening because you are only checking the first line. Currently the user can only enter credentials that match the first line in the text file, otherwise the program will exit. You should create a dictionary with the username and password and then check if a username is in that dictionary instead of iterating over the list of credentials.</p>
<pre><code>with open('abc.txt') as f:
    credentials = dict([x.strip().split(':') for x in f.readlines()])  # Created a dictionary with username:password items

username_input = input('Please Enter username: ')
if username_input not in credentials:  # Check if username is in the credentials dictionary
    sys.exit('Incorrect username, terminating... \n')

password_input = input('Please Enter Password: ')
if password_input != credentials[username_input]:  # Check if the password entered matches the password in the dictionary
    sys.exit('Incorrect Password, terminating... \n')

print ('User is logged in!\n')
</code></pre>
| 1 | 2016-10-06T09:29:53Z | [
"python",
"python-3.4"
]
|
Regular expression to find string with iterating letters on the end | 39,892,170 | <p>Can someone help me with this kind of regular expression matching?</p>
<p>For example, I'm searching through a list containing different strings, each with a letter iterating at the end of the string:</p>
<ul>
<li>MonsterA</li>
<li>MonsterB</li>
<li>MonsterC</li>
<li>HeroA</li>
<li>HeroB</li>
<li>HeroC</li>
<li>...</li>
</ul>
<p>What I need this script to return is only the preceding part of the string, in this example <strong>Monster</strong> and <strong>Hero</strong>. </p>
| -1 | 2016-10-06T09:19:01Z | 39,892,360 | <p>If you absolutely need a regex:</p>
<pre><code>re.match(r"(.*)[A-Z]", word).group(1)
</code></pre>
<p>But it is not the most efficient if you just want to remove the last character.</p>
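<p>If that is really all you need, plain slicing avoids the regex entirely:</p>
<pre><code>word = "MonsterA"
word[:-1]  # 'Monster'
</code></pre>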
| 0 | 2016-10-06T09:28:27Z | [
"python",
"regex",
"string",
"loops",
"iterator"
]
|
Regular expression to find string with iterating letters on the end | 39,892,170 | <p>Can someone help me with this kind of regular expression matching?</p>
<p>For example, I'm searching through a list containing different strings, each with a letter iterating at the end of the string:</p>
<ul>
<li>MonsterA</li>
<li>MonsterB</li>
<li>MonsterC</li>
<li>HeroA</li>
<li>HeroB</li>
<li>HeroC</li>
<li>...</li>
</ul>
<p>What I need this script to return is only the preceding part of the string, in this example <strong>Monster</strong> and <strong>Hero</strong>. </p>
 | -1 | 2016-10-06T09:19:01Z | 39,892,528 | <p>You could use a <em>positive lookahead assertion</em> <code>(?=...)</code> to check the word ends in a single uppercase character, and then use word boundaries <code>\b...\b</code> to ensure it does not match patterns that aren't whole words:</p>
<pre><code>>>> text = "This re will match MonsterA and HeroB but not heroC or MonsterCC"
>>> re.findall(r"\b[A-Z][a-z]+(?=[A-Z]\b)", text)
['Monster', 'Hero']
</code></pre>
<p><code>re.findall</code> returns all such matches in a list. </p>
| 0 | 2016-10-06T09:36:49Z | [
"python",
"regex",
"string",
"loops",
"iterator"
]
|
Python: How to make a string to a list using a substring? | 39,892,262 | <p>For example I have a string like this...</p>
<pre><code>StringA = "a city a street a room a bed"
</code></pre>
<p>I want to cut this string using a substring <code>"a "</code> and make a <code>list</code> from it. So the result looks like...</p>
<pre><code>ListA = ["city ", "street " ,"room " ,"bed"]
</code></pre>
<p>It would be OK if there are some empty spaces left. How can I do this? Thanks!</p>
| 0 | 2016-10-06T09:24:22Z | 39,892,302 | <p>You can use <code>split</code>:</p>
<pre><code>list_a = string_a.split('a ')
</code></pre>
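<p>Note that because the string starts with <code>"a "</code>, the first element of the result will be an empty string:</p>
<pre><code>>>> "a city a street a room a bed".split('a ')
['', 'city ', 'street ', 'room ', 'bed']
</code></pre>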
| 1 | 2016-10-06T09:25:55Z | [
"python"
]
|
Python: How to make a string to a list using a substring? | 39,892,262 | <p>For example I have a string like this...</p>
<pre><code>StringA = "a city a street a room a bed"
</code></pre>
<p>I want to cut this string using a substring <code>"a "</code> and make a <code>list</code> from it. So the result looks like...</p>
<pre><code>ListA = ["city ", "street " ,"room " ,"bed"]
</code></pre>
<p>It would be OK if there are some empty spaces left. How can I do this? Thanks!</p>
| 0 | 2016-10-06T09:24:22Z | 39,892,361 | <p>You can do it in one line:</p>
<pre><code>filter(len, "a city a street a room a bed".split("a "))
</code></pre>
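<p>Note that in Python 3 <code>filter</code> returns an iterator, so wrap it in <code>list</code> if you need an actual list:</p>
<pre><code>>>> list(filter(len, "a city a street a room a bed".split("a ")))
['city ', 'street ', 'room ', 'bed']
</code></pre>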
| 1 | 2016-10-06T09:28:28Z | [
"python"
]
|
Python: How to make a string to a list using a substring? | 39,892,262 | <p>For example I have a string like this...</p>
<pre><code>StringA = "a city a street a room a bed"
</code></pre>
<p>I want to cut this string using a substring <code>"a "</code> and make a <code>list</code> from it. So the result looks like...</p>
<pre><code>ListA = ["city ", "street " ,"room " ,"bed"]
</code></pre>
<p>It would be OK if there are some empty spaces left. How can I do this? Thanks!</p>
| 0 | 2016-10-06T09:24:22Z | 39,892,440 | <p>This should also work:</p>
<pre><code>ListA = StringA.split("a ")[1:]
</code></pre>
<p>The [1:] after the split statement has been added because the part before the first 'a' (which is '') will be treated as the first element of ListA and you don't want that.</p>
<p>Your output will look exactly like the one you desire:</p>
<pre><code>ListA = ['city ', 'street ', 'room ', 'bed']
</code></pre>
| 1 | 2016-10-06T09:32:10Z | [
"python"
]
|
Python: How to make a string to a list using a substring? | 39,892,262 | <p>For example I have a string like this...</p>
<pre><code>StringA = "a city a street a room a bed"
</code></pre>
<p>I want to cut this string using a substring <code>"a "</code> and make a <code>list</code> from it. So the result looks like...</p>
<pre><code>ListA = ["city ", "street " ,"room " ,"bed"]
</code></pre>
<p>It would be OK if there are some empty spaces left. How can I do this? Thanks!</p>
 | 0 | 2016-10-06T09:24:22Z | 39,892,819 | <p>This will make a list from the string:</p>
<pre><code>re.split(r'\s*\ba\b\s*', StringA)
</code></pre>
<p>The output should be something like this:</p>
<pre><code>['', 'city', 'street', 'room', 'bed']
</code></pre>
<p>So, to remove empty strings from the list you can use:</p>
<pre><code>[item for item in re.split(r'\s*\ba\b\s*', StringA) if item]
</code></pre>
<p>Which will produce:</p>
<pre><code>['city', 'street', 'room', 'bed']
</code></pre>
| 0 | 2016-10-06T09:51:18Z | [
"python"
]
|
Variable and parameter in a method | 39,892,270 | <p>I am new to Python and I have an issue with it when I am creating a function:</p>
<p>I created a dictionary where the keys are parameters like <code>n_estimators</code>, <code>C</code>, <code>max_depths</code> for instance.</p>
<p>I have a loop where I would like to set the parameters for a given estimator (which is an input of my function), but I ran into an issue.</p>
<p>For instance let's say my estimator is a <code>RandomForestClassifier</code>,</p>
<p>The code will be:</p>
<pre><code>key = 'n_estimators'
estimator = estimator.set_params(key=100)
</code></pre>
<p>I got the error:</p>
<pre><code>ValueError: Invalid parameter key for estimator RandomForestClassifier.
</code></pre>
<p>I understand the problem, which is that <code>set_params</code> considers <code>key</code> to be a literal parameter name (and not <code>'n_estimators'</code>), but I don't know how to solve it yet.</p>
<p>I would really appreciate any advice here.</p>
| 0 | 2016-10-06T09:24:44Z | 39,892,340 | <p>Use a dict with the <code>**</code> operator:</p>
<pre><code>key = 'n_estimators'
estimator = estimator.set_params(**{key: 100})
</code></pre>
| 0 | 2016-10-06T09:27:37Z | [
"python",
"machine-learning"
]
|
Variable and parameter in a method | 39,892,270 | <p>I am new to Python and I have an issue with it when I am creating a function:</p>
<p>I created a dictionary where the keys are parameters like <code>n_estimators</code>, <code>C</code>, <code>max_depths</code> for instance.</p>
<p>I have a loop where I would like to set the parameters for a given estimator (which is an input of my function), but I ran into an issue.</p>
<p>For instance let's say my estimator is a <code>RandomForestClassifier</code>,</p>
<p>The code will be:</p>
<pre><code>key = 'n_estimators'
estimator = estimator.set_params(key=100)
</code></pre>
<p>I got the error:</p>
<pre><code>ValueError: Invalid parameter key for estimator RandomForestClassifier.
</code></pre>
<p>I understand the problem, which is that <code>set_params</code> considers <code>key</code> to be a literal parameter name (and not <code>'n_estimators'</code>), but I don't know how to solve it yet.</p>
<p>I would really appreciate any advice here.</p>
| 0 | 2016-10-06T09:24:44Z | 39,892,384 | <p>or you know, just </p>
<pre><code>estimator.set_params(n_estimators=100)
</code></pre>
| 0 | 2016-10-06T09:29:45Z | [
"python",
"machine-learning"
]
|
Variable and parameter in a method | 39,892,270 | <p>I am new to Python and I have an issue with it when I am creating a function:</p>
<p>I created a dictionary where the keys are parameters like <code>n_estimators</code>, <code>C</code>, <code>max_depths</code> for instance.</p>
<p>I have a loop where I would like to set the parameters for a given estimator (which is an input of my function), but I ran into an issue.</p>
<p>For instance let's say my estimator is a <code>RandomForestClassifier</code>,</p>
<p>The code will be:</p>
<pre><code>key = 'n_estimators'
estimator = estimator.set_params(key=100)
</code></pre>
<p>I got the error:</p>
<pre><code>ValueError: Invalid parameter key for estimator RandomForestClassifier.
</code></pre>
<p>I understand the problem, which is that <code>set_params</code> considers <code>key</code> to be a literal parameter name (and not <code>'n_estimators'</code>), but I don't know how to solve it yet.</p>
<p>I would really appreciate any advice here.</p>
| 0 | 2016-10-06T09:24:44Z | 39,913,780 | <p>You could consider using <code>GridSearchCV</code> to evaluate a model over multiple hyperparameter values, and for different hyperparameters.</p>
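<p>A minimal sketch of that approach (the parameter values here are placeholders, not recommendations):</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older versions

param_grid = {'n_estimators': [50, 100, 200],
              'max_depth': [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(), param_grid)
search.fit(X, y)  # X, y: your training data
print(search.best_params_)
</code></pre>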
| 0 | 2016-10-07T09:21:27Z | [
"python",
"machine-learning"
]
|
Twisted - Twistar calling lastval() causing psycopg2 error | 39,892,296 | <p>At the end of an insertion query, twistar calls lastval() and causes the postgres driver to fail.</p>
<pre><code>2016-10-06 11:08:02+0200 [-] Log opened.
2016-10-06 11:08:02+0200 [-] MAIN: Starting the reactor
2016-10-06 11:08:02+0200 [-] TWISTAR query: SELECT * FROM my_user WHERE user_id = %s LIMIT 1
2016-10-06 11:08:02+0200 [-] TWISTAR args: 009a65e7-a6a8-4de4-ad1a-87ac20e4073e
2016-10-06 11:08:02+0200 [-] TWISTAR query: SELECT * FROM my_user LIMIT 1
2016-10-06 11:08:02+0200 [-] TWISTAR query: INSERT INTO my_user ("username","user_id") VALUES (%s,%s)
2016-10-06 11:08:02+0200 [-] TWISTAR args: myusername,009a65e7-a6a8-4de4-ad1a-87ac20e4073e
2016-10-06 11:08:02+0200 [-] TWISTAR query: SELECT lastval()
2016-10-06 11:08:02+0200 [-] Unhandled error in Deferred:
2016-10-06 11:08:02+0200 [-] Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/site-packages/twisted/_threads/_threadworker.py", line 46, in work
task()
File "/usr/lib/python2.7/site-packages/twisted/_threads/_team.py", line 190, in doWork
task()
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/twisted/python/threadpool.py", line 246, in inContext
result = inContext.theWork()
File "/usr/lib/python2.7/site-packages/twisted/python/threadpool.py", line 262, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/usr/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/usr/lib/python2.7/site-packages/twisted/enterprise/adbapi.py", line 477, in _runInteraction
compat.reraise(excValue, excTraceback)
File "/usr/lib/python2.7/site-packages/twisted/enterprise/adbapi.py", line 467, in _runInteraction
result = interaction(trans, *args, **kw)
File "/usr/lib/python2.7/site-packages/twistar/dbconfig/base.py", line 348, in _doinsert
self.insert(tablename, vals, txn)
File "/usr/lib/python2.7/site-packages/twistar/dbconfig/base.py", line 192, in insert
return self.getLastInsertID(txn)
File "/usr/lib/python2.7/site-packages/twistar/dbconfig/postgres.py", line 9, in getLastInsertID
self.executeTxn(txn, q)
File "/usr/lib/python2.7/site-packages/twistar/dbconfig/base.py", line 78, in executeTxn
return txn.execute(query, *args, **kwargs)
psycopg2.OperationalError: ERRORE: lastval non è stato ancora definito in questa sessione
</code></pre>
<p>last line says "lastval not yet defined in this session"</p>
<p>How can I avoid that? I have no control over how Twistar calls lastval.</p>
<p>Here's the code that caused it:</p>
<pre><code>def __user_done(self, user):
    if len(user.errors) > 0:
        print '%s errors in user creation' % len(user.errors)
        print user.errors
    else:
        logging.debug("My user created. uuid is %s and username is %s" % (user.user_id, user.username))

def insert_my_user(self, name):
    """Inserts our own user with the given name and a random uuid"""
    extras.register_uuid()
    my_uuid = uuid4()
    extensions.adapt(my_uuid).getquoted()
    me = My_user(user_id=my_uuid, username=name)
    me.save().addCallback(self.__user_done)
</code></pre>
 | 0 | 2016-10-06T09:25:41Z | 39,897,567 | <p>If somebody else has the same problem, here's the explanation the developer gave me: </p>
<blockquote>
<p>I think the issue is that you're explicitly setting the id column. Twistar is designed to use autoincrementing id values at the DB level (in the case of Postgres, this would be a SERIAL PRIMARY KEY column type), which is why you don't have a lastval defined.</p>
</blockquote>
| 0 | 2016-10-06T13:37:01Z | [
"python",
"postgresql",
"twisted.web"
]
|
Python pandas: Find a value in another dataframe and replace it | 39,892,399 | <p>I have two dataframes, df_l (with 3000 rows) and df_s (with 100 rows):
df_l</p>
<pre><code>version|update_date
2.3.4| date1
3.4.5|date2
</code></pre>
<p>and df_s</p>
<pre><code>version|release_date
2.3.4| date1
3.4.5|date2
3.3.3|date3
</code></pre>
<p>I want to check if a version in df_l is in df_s, then I want to update the values in df_l.update_date to df_s.release_date. Here is my code</p>
<pre><code>df_l.ix[df_l['version'].isin(df_s['version']),'update_date'] = df_s['release_date']
</code></pre>
<p>but the updated values in df_l.update_date are wrong. I am guessing that the matching is not taking place correctly. Can anybody help?</p>
 | 2 | 2016-10-06T09:30:19Z | 39,892,478 | <p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with an inner join (<code>how='inner'</code>, which is the default). Also, you can omit <code>on</code> when both <code>DataFrames</code> have only 2 columns and one of them is the same in both:</p>
<pre><code>print (pd.merge(df_l, df_s))
version update_date release_date
0 2.3.4 date1 date1
1 3.4.5 date2 date2
</code></pre>
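<p>If you then want to write the matched dates back into <code>df_l</code> itself, a map-based sketch could look like this (assuming <code>version</code> is unique in <code>df_s</code>):</p>
<pre><code>lookup = df_s.set_index('version')['release_date']

# overwrite update_date only where the version has a match in df_s
df_l['update_date'] = df_l['version'].map(lookup).fillna(df_l['update_date'])
</code></pre>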
| 1 | 2016-10-06T09:34:14Z | [
"python",
"string",
"pandas",
"matching"
]
|
Obtain the difference between two files | 39,892,460 | <p><em>First of all, I searched the web and Stack Overflow for around 3 days and haven't found anything like what I'm looking for.</em></p>
<p>I am doing a weekly security audit where I get back a .csv file with the IPs and the open ports. They look like this:</p>
<p><strong>20160929.csv</strong></p>
<pre><code>10.4.0.23;22
10.12.7.8;23
10.18.3.192;23
</code></pre>
<p><strong>20161006.csv</strong></p>
<pre><code>10.4.0.23;22
10.18.3.192;23
10.24.0.2;22
10.75.1.0;23
</code></pre>
<p>The difference is:
<strong>10.12.7.8:23</strong> got closed.
<strong>10.24.0.2:22</strong> and <strong>10.75.1.0:23</strong> got opened.</p>
<p>I want a script which prints me out:</p>
<pre><code>[-] 10.12.7.8:23
[+] 10.24.0.2:22
[+] 10.75.1.0:23
</code></pre>
<p><strong>How can I make a script like this?</strong> I tried difflib, but that isn't what I need. I also need to be able to write the result to files later or send it as mail, which I already have a script for.</p>
<p>I can't use Unix, because in our company we have a Windows environment and are not allowed to use another OS. So I can't use <code>diff</code> or other such tools.</p>
<p>This is my first attempt:</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
<p>This is my second attempt</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
| 4 | 2016-10-06T09:33:21Z | 39,893,259 | <p>Using your code as a base, you could do the following:</p>
<pre><code>old = set(line.strip() for line in open('1.txt'))
new = set(line.strip() for line in open('2.txt'))

with open('diff.txt', 'w') as diff:
    for line in old:
        if line not in new:
            diff.write('[-] {}\n'.format(line))
    for line in new:
        if line not in old:
            diff.write('[+] {}\n'.format(line))
</code></pre>
<p>There's a couple of tweaks in here:</p>
<ol>
<li>We want to read the individual lines of both the old and new
files to compare.</li>
<li>We don't have to <code>strip</code> each individual line as we have done that while reading the file.</li>
<li>We use <code>{}</code> and <code>.format()</code> to build text strings.</li>
<li>Using <code>\n</code> ensures we put each entry on a new line of our output file.</li>
<li>Using <code>with</code> for the file we are writing to lets us open it without having to call <code>close</code> and (if my knowledge is correct) allows for better handling of any program crashes once the file has been opened.</li>
</ol>
| 1 | 2016-10-06T10:12:17Z | [
"python",
"python-2.7",
"csv",
"difference",
"differentiation"
]
|
Obtain the difference between two files | 39,892,460 | <p><em>First of all, I searched the web and Stack Overflow for around 3 days and haven't found anything like what I'm looking for.</em></p>
<p>I am doing a weekly security audit where I get back a .csv file with the IPs and the open ports. They look like this:</p>
<p><strong>20160929.csv</strong></p>
<pre><code>10.4.0.23;22
10.12.7.8;23
10.18.3.192;23
</code></pre>
<p><strong>20161006.csv</strong></p>
<pre><code>10.4.0.23;22
10.18.3.192;23
10.24.0.2;22
10.75.1.0;23
</code></pre>
<p>The difference is:
<strong>10.12.7.8:23</strong> got closed.
<strong>10.24.0.2:22</strong> and <strong>10.75.1.0:23</strong> got opened.</p>
<p>I want a script which prints me out:</p>
<pre><code>[-] 10.12.7.8:23
[+] 10.24.0.2:22
[+] 10.75.1.0:23
</code></pre>
<p><strong>How can I make a script like this?</strong> I tried difflib, but that isn't what I need. I also need to be able to write the result to files later or send it as mail, which I already have a script for.</p>
<p>I can't use Unix, because in our company we have a Windows environment and are not allowed to use another OS. So I can't use <code>diff</code> or other such tools.</p>
<p>This is my first attempt:</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
<p>This is my second attempt</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
| 4 | 2016-10-06T09:33:21Z | 39,893,267 | <p>You can try this one:</p>
<pre><code>old_f = open('1.txt')
new_f = open('2.txt')
diff = open('diff.txt', 'w')

old = [line.strip() for line in old_f]
new = [line.strip() for line in new_f]

for line in old:
    if line not in new:
        print '[-] ' + str(line)
        diff.write('[-] ' + str(line) + '\n')

for line in new:
    if line not in old:
        print '[+] ' + str(line)
        diff.write('[+] ' + str(line) + '\n')

old_f.close()
new_f.close()
diff.close()
</code></pre>
| 0 | 2016-10-06T10:12:46Z | [
"python",
"python-2.7",
"csv",
"difference",
"differentiation"
]
|
Obtain the difference between two files | 39,892,460 | <p><em>First of all, I searched the web and Stack Overflow for around 3 days and haven't found anything like what I'm looking for.</em></p>
<p>I am doing a weekly security audit where I get back a .csv file with the IPs and the open ports. They look like this:</p>
<p><strong>20160929.csv</strong></p>
<pre><code>10.4.0.23;22
10.12.7.8;23
10.18.3.192;23
</code></pre>
<p><strong>20161006.csv</strong></p>
<pre><code>10.4.0.23;22
10.18.3.192;23
10.24.0.2;22
10.75.1.0;23
</code></pre>
<p>The difference is:
<strong>10.12.7.8:23</strong> got closed.
<strong>10.24.0.2:22</strong> and <strong>10.75.1.0:23</strong> got opened.</p>
<p>I want a script which prints me out:</p>
<pre><code>[-] 10.12.7.8:23
[+] 10.24.0.2:22
[+] 10.75.1.0:23
</code></pre>
<p><strong>How can I make a script like this?</strong> I tried difflib, but that isn't what I need. I also need to be able to write the result to files later or send it as mail, which I already have a script for.</p>
<p>I can't use Unix, because in our company we have a Windows environment and are not allowed to use another OS. So I can't use <code>diff</code> or other such tools.</p>
<p>This is my first attempt:</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
<p>This is my second attempt</p>
<pre><code>old = set((line.strip() for line in open('1.txt', 'r+')))
new = open('2.txt', 'r+')
diff = open('diff.txt', 'w')

for line in new:
    if line.strip() not in old:
        diff.write(line)

new.close()
diff.close()
</code></pre>
| 4 | 2016-10-06T09:33:21Z | 39,893,322 | <p>In the following solution I've used sets, so the order doesn't matter and we can do direct subtraction with the old and new to see what has changed.</p>
<p>I've also used the <code>with</code> context manager pattern for opening files, which is a neat way of ensuring they are closed again.</p>
<pre><code>def read_items(filename):
    with open(filename) as fh:
        return {line.strip() for line in fh}

def diff_string(old, new):
    return "\n".join(
        ['[-] %s' % gone for gone in old - new] +
        ['[+] %s' % added for added in new - old]
    )

with open('diff.txt', 'w') as fh:
    fh.write(diff_string(read_items('1.txt'), read_items('2.txt')))
</code></pre>
<p>Obviously you could print out the diff string if you wanted to.</p>
| 3 | 2016-10-06T10:14:50Z | [
"python",
"python-2.7",
"csv",
"difference",
"differentiation"
]
|
csv writer expected byte like and space between rows | 39,892,463 | <p>I'm trying to combine several CSV files into one. But I'm getting a space between each row.</p>
<p>I read somewhere I had to change w to wb in <code>filewriter = csv.writer(open("output.csv", "w"))</code></p>
<p>but this gives me the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\Rasmus\workspace\Portfolio\src\Companies.py", line 33, in <module>
collect()
File "C:\Users\Rasmus\workspace\Portfolio\src\Companies.py", line 31, in collect
Write.UpdateCSV(lst)
File "C:\Users\Rasmus\workspace\Portfolio\src\Write.py", line 11, in __init__
filewriter.writerows(lst)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>I'm not sure what to make of this, as the only solution I could find gave me another error.</p>
<p>Is there any other way to get rid of the space between rows?</p>
<p>My code:</p>
<pre><code>import csv
import Write

paths = [
    'Finance.CSV',
    'Energy.CSV',
    'Basic Industries Companies.CSV',
    'Consumer Durables Companies.CSV',
    'Consumer Non-Durables Companies.CSV',
    'Consumer Services Companies.CSV',
    'Health Care Companies.CSV',
    'Public Utilities Companies.CSV',
    'Technology Companies.CSV',
    'Transportation Companies.CSV'
]

def collect():
    lst = []
    for path in paths:
        file = open('C:/Users/Rasmus/Desktop/Sectors/' + path, "r")
        reader = csv.reader(file)
        for line in reader:
            tmpLst = []
            tmpLst.append(line[0])
            tmpLst.append(line[7])
            lst.append(tmpLst)
            #print(line[0] + ", " + line[7])
    Write.UpdateCSV(lst)

collect()


import csv

class UpdateCSV():
    def __init__(self, lst):
        #print(lst)
        #resultFile = open("output.csv",'w')
        #wr = csv.writer(resultFile, dialect='excel')
        filewriter = csv.writer(open("output.csv", "wb"))
        filewriter.writerows(lst)
</code></pre>
| 0 | 2016-10-06T09:33:44Z | 39,892,543 | <p>Your method works in Python 2.</p>
<p>But in python 3 you cannot open a text file in binary mode or the write method will expect bytes.</p>
<p>A correct way of doing it in Python 3 (note the added <code>with</code> statement, which ensures that the file is closed when exiting the <code>with</code> block):</p>
<pre><code>with open("output.csv", "w", newline='') as f:
filewriter = csv.writer(f)
</code></pre>
<p>(omitting the <code>newline</code> parameter works but inserts a "blank" line in windows systems because of extra carriage return, that's probably the problem you're describing)</p>
<p>EDIT: I checked csv module documentation, and there's an even better way to do this, works with python 2 <em>and</em> 3, which is to change csv <code>lineterminator</code>:</p>
<pre><code>with open("output.csv","w") as f:
filewriter = csv.writer(f,lineterminator="\n")
</code></pre>
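<p>Applied to the <code>UpdateCSV</code> class from the question, a sketch could look like this:</p>
<pre><code>import csv

class UpdateCSV():
    def __init__(self, lst):
        with open("output.csv", "w") as f:
            filewriter = csv.writer(f, lineterminator="\n")
            filewriter.writerows(lst)
</code></pre>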
<p>I had asked a question some time ago and just updated my answer to reflect that: <a href="http://stackoverflow.com/questions/38808284/portable-way-to-write-csv-file-in-python-2-or-python-3">portable way to write csv file in python 2 or python 3</a></p>
<p>EDIT 2: Python 2.7.12 and 3.5.2 don't need all this; it seems someone fixed the problem for good. But not everyone can upgrade even minor versions of Python (because of company policies, qualified tools depending on a given version...).</p>
| 2 | 2016-10-06T09:37:41Z | [
"python",
"python-3.x",
"csv"
]
|
Fail to transfer the list in data frame to numpy array with python-pandas | 39,892,464 | <p>For a data frame <code>df</code>:</p>
<pre><code>name list1 list2
a [1, 3, 10, 12, 20..] [2, 6, 23, 29...]
b [2, 10, 14, 3] [4, 7, 8, 13...]
c [] [98, 101, 200]
...
</code></pre>
<p>I want to convert <code>list1</code> and <code>list2</code> to <code>np.array</code> and then <code>hstack</code> them. Here is what I did:</p>
<pre><code>df.pv = df.apply(lambda row: np.hstack((np.asarray(row.list1), np.asarray(row.list2))), axis=1)
</code></pre>
<p>And I got such an error:</p>
<pre><code>ValueError: Shape of passed values is (138493, 175), indices imply (138493, 4)
</code></pre>
<p>Where <code>138493==len(df)</code></p>
<p>Please note that some values in <code>list1</code> and <code>list2</code> are empty lists, <code>[]</code>, and the lengths of the lists differ between rows. Do you know what the reason is and how I can fix the problem? Thanks in advance!</p>
<p>EDIT:</p>
<p>When I just try to convert one list to array:</p>
<pre><code>df.apply(lambda row: np.asarray(row.list1), axis=1)
</code></pre>
<p>An error also occurs:</p>
<pre><code>ValueError: Empty data passed with indices specified.
</code></pre>
 | 1 | 2016-10-06T09:33:45Z | 39,894,100 | <p>Your apply function is <em>almost</em> correct. All you have to do is convert the output of the <code>np.hstack()</code> function back to a Python list.</p>
<pre><code>df.apply(lambda row: list(np.hstack((np.asarray(row.list1), np.asarray(row.list2)))), axis=1)
</code></pre>
<p>The code is shown below (including the df creation):</p>
<pre><code>df = pd.DataFrame([('a', [1, 3, 10, 12, 20], [2, 6, 23, 29]),
                   ('b', [2, 10, 1.4, 3], [4, 7, 8, 13]),
                   ('c', [], [98, 101, 200])],
                  columns=['name', 'list1', 'list2'])
df['list3'] = df.apply(lambda row: list(np.hstack((np.asarray(row.list1), np.asarray(row.list2)))), axis=1)
print(df)
</code></pre>
<p>Output:</p>
<pre><code>0 [1, 3, 10, 12, 20, 2, 6, 23, 29]
1 [2.0, 10.0, 1.4, 3.0, 4.0, 7.0, 8.0, 13.0]
2 [98.0, 101.0, 200.0]
Name: list3, dtype: object
</code></pre>
<p>If you want a numpy array, the only way I could get it to work is:</p>
<pre><code>df['list3'] = df['list3'].apply(lambda x: np.array(x))
print(type(df['list3'].ix[0]))
Out[] : numpy.ndarray
</code></pre>
| 1 | 2016-10-06T10:54:11Z | [
"python",
"pandas",
"numpy"
]
|
multiprocessing freeze computer | 39,892,551 | <p>I improved my execution time by using multiprocessing, but I am not sure whether the PC's behavior is correct: it freezes the system until all processes are done.
I am using Windows 7 and Python 2.7.</p>
<p>Perhaps I am making a mistake; here is what I did:</p>
<pre><code>def do_big_calculation(sub_list, b, c):
    # do some calculations here with the sub_list
    pass

if __name__ == '__main__':
    list = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
    jobs = []
    for sub_l in list:
        j = multiprocessing.Process(target=do_big_calculation, args=(sub_l, b, c))
        jobs.append(j)
    for j in jobs:
        j.start()
</code></pre>
 | 2 | 2016-10-06T09:37:58Z | 39,896,352 | <p>Here, you are creating 1 <code>Process</code> per task. This will run all your tasks in parallel, but it imposes a heavy overhead on your computer, as your scheduler will need to manage many processes. This can cause a system freeze when too many resources are used by your program. </p>
<p>A solution here could be to use the <code>multiprocessing.Pool</code> to run a given number of processes simultaneously performing some tasks:</p>
<pre><code>import multiprocessing as mp

def do_big_calculation(args):
    sub_list, b, c = args
    return 1

if __name__ == '__main__':
    b, c = 1, 1
    ll = [([1, 2, 3, 4], b, c),
          ([5, 6, 7, 8], b, c),
          ([9, 10, 11, 12], b, c)]

    pool = mp.Pool(4)
    result = pool.map(do_big_calculation, ll)
    pool.terminate()
    print(result)
</code></pre>
<p>If you are ready to use a third-party library, you could also take a look at <a href="https://github.com/agronholm/pythonfutures" rel="nofollow"><code>concurrent.futures</code></a> (on python2.7 you need to install the backport, but it is in the standard library on python3) or <a href="http://pythonhosted.org/joblib/" rel="nofollow"><code>joblib</code></a> (available with pip):</p>
<pre><code>from joblib import Parallel, delayed

def do_big_calculation(sub_list, b, c):
    return 1

if __name__ == '__main__':
    b, c = 1, 1
    ll = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

    result = Parallel(n_jobs=-1)(
        delayed(do_big_calculation)(l, b, c) for l in ll)
    print(result)
</code></pre>
<p>The main advantage of such a library is that it is actively developed, whereas <code>multiprocessing</code> in python2.7 is frozen, so bug fixes and improvements land relatively often.<br>
It also implements some clever tools to reduce the overhead of the computation. For instance, it uses memory mapping for big numpy arrays (reducing the memory footprint of starting all the jobs). </p>
| 2 | 2016-10-06T12:43:49Z | [
"python",
"windows",
"python-multiprocessing"
]
|
Feather install error with pip3 | 39,892,605 | <p>Hey everyone, while trying to <code>pip3 install feather</code> this error pops up on the screen.</p>
<pre><code>During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-mijuoz_z/feather/setup.py", line 3, in <module>
distribute_setup.use_setuptools()
File "/tmp/pip-build-mijuoz_z/feather/distribute_setup.py", line 145, in use_setuptools
return _do_download(version, download_base, to_dir, download_delay)
File "/tmp/pip-build-mijuoz_z/feather/distribute_setup.py", line 125, in _do_download
_build_egg(egg, tarball, to_dir)
File "/tmp/pip-build-mijuoz_z/feather/distribute_setup.py", line 99, in _build_egg
_extractall(tar)
File "/tmp/pip-build-mijuoz_z/feather/distribute_setup.py", line 467, in _extractall
self.chown(tarinfo, dirpath)
TypeError: chown() missing 1 required positional argument: 'numeric_owner'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-mijuoz_z/feather/
</code></pre>
 | 0 | 2016-10-06T09:40:20Z | 39,893,511 | <p>There seems to be one big error shown underneath it:</p>
<pre><code>TypeError: chown() missing 1 required positional argument: 'numeric_owner'
</code></pre>
<p>What are your arguments to the <code>chown()</code> function? </p>
| 0 | 2016-10-06T10:24:34Z | [
"python",
"numpy",
"matplotlib",
"scikit-learn",
"feather"
]
|
Using pandas to scrape weather data from wunderground | 39,892,710 | <p>I came across a very useful set of scripts on Shane Lynn's blog for the
<a href="http://www.shanelynn.ie/analysis-of-weather-data-using-pandas-python-and-seaborn/" rel="nofollow">Analysis of Weather Data</a>. The first script, used to scrape data from Weather Underground, is as follows:</p>
<pre><code>import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time

def getRainfallData(station, day, month, year):
    """
    Function to return a data frame of minute-level weather data for a single Wunderground PWS station.

    Args:
        station (string): Station code from the Wunderground website
        day (int): Day of month for which data is requested
        month (int): Month for which data is requested
        year (int): Year for which data is requested

    Returns:
        Pandas Dataframe with weather data for specified station and date.
    """
    url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
    full_url = url.format(station=station, day=day, month=month, year=year)
    # Request data from wunderground data
    response = requests.get(full_url, headers={'User-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'})
    data = response.text
    # remove the excess <br> from the text data
    data = data.replace('<br>', '')
    # Convert to pandas dataframe (fails if issues with weather station)
    try:
        dataframe = pd.read_csv(io.StringIO(data), index_col=False)
        dataframe['station'] = station
    except Exception as e:
        print("Issue with date: {}-{}-{} for station {}".format(day, month, year, station))
        return None
    return dataframe

# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))

# Create a list of stations here to download data for
stations = ["ILONDON28"]
# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}

# Gather data for each station in turn and save to CSV.
for station in stations:
    print("Working on {}".format(station))
    data[station] = []
    for date in dates:
        # Print period status update messages
        if date.day % 10 == 0:
            print("Working on date: {} for station {}".format(date, station))
        done = False
        while done == False:
            try:
                weather_data = getRainfallData(station, date.day, date.month, date.year)
                done = True
            except ConnectionError as e:
                # May get rate limited by Wunderground.com, backoff if so.
                print("Got connection error on {}".format(date))
                print("Will retry in {} seconds".format(backoff_time))
                time.sleep(10)
        # Add each processed date to the overall data
        data[station].append(weather_data)
    # Finally combine all of the individual days and output to CSV for analysis.
    pd.concat(data[station]).to_csv("data/{}_weather.csv".format(station))
</code></pre>
<p>However, I get the error:</p>
<pre><code>Working on ILONDONL28
Issue with date: 1-8-2016 for station ILONDONL28
Issue with date: 2-8-2016 for station ILONDONL28
Issue with date: 3-8-2016 for station ILONDONL28
Issue with date: 4-8-2016 for station ILONDONL28
Issue with date: 5-8-2016 for station ILONDONL28
Issue with date: 6-8-2016 for station ILONDONL28
</code></pre>
<p>Can anyone help me with this error?</p>
<p>The data for the chosen station and the time period is available, as shown at this <a href="https://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=ILONDONL28&day=01&month=08&year=2016&graphspan=day&format=1" rel="nofollow">link</a>.</p>
| 0 | 2016-10-06T09:46:06Z | 39,893,297 | <p>The output you are getting is because an exception is being raised. If you added a <code>print e</code> you would see that this is because <code>import io</code> was missing from the top of the script. Secondly, the station name you gave was out by one character. Try the following:</p>
<pre><code>import io
import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time

def getRainfallData(station, day, month, year):
    """
    Function to return a data frame of minute-level weather data for a single Wunderground PWS station.

    Args:
        station (string): Station code from the Wunderground website
        day (int): Day of month for which data is requested
        month (int): Month for which data is requested
        year (int): Year for which data is requested

    Returns:
        Pandas Dataframe with weather data for specified station and date.
    """
    url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
    full_url = url.format(station=station, day=day, month=month, year=year)
    # Request data from wunderground data
    response = requests.get(full_url)
    data = response.text
    # remove the excess <br> from the text data
    data = data.replace('<br>', '')
    # Convert to pandas dataframe (fails if issues with weather station)
    try:
        dataframe = pd.read_csv(io.StringIO(data), index_col=False)
        dataframe['station'] = station
    except Exception as e:
        print("Issue with date: {}-{}-{} for station {}".format(day, month, year, station))
        return None
    return dataframe

# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))

# Create a list of stations here to download data for
stations = ["ILONDONL28"]
# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}

# Gather data for each station in turn and save to CSV.
for station in stations:
    print("Working on {}".format(station))
    data[station] = []
    for date in dates:
        # Print period status update messages
        if date.day % 10 == 0:
            print("Working on date: {} for station {}".format(date, station))
        done = False
        while done == False:
            try:
                weather_data = getRainfallData(station, date.day, date.month, date.year)
                done = True
            except ConnectionError as e:
                # May get rate limited by Wunderground.com, backoff if so.
                print("Got connection error on {}".format(date))
                print("Will retry in {} seconds".format(backoff_time))
                time.sleep(10)
        # Add each processed date to the overall data
        data[station].append(weather_data)
    # Finally combine all of the individual days and output to CSV for analysis.
    pd.concat(data[station]).to_csv(r"data/{}_weather.csv".format(station))
</code></pre>
<p>Giving you an output CSV file starting as follows:</p>
<pre class="lang-none prettyprint-override"><code>,Time,TemperatureC,DewpointC,PressurehPa,WindDirection,WindDirectionDegrees,WindSpeedKMH,WindSpeedGustKMH,Humidity,HourlyPrecipMM,Conditions,Clouds,dailyrainMM,SoftwareType,DateUTC,station
0,2016-08-01 00:05:00,17.8,11.6,1017.5,ESE,120,0.0,0.0,67,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:05:00,ILONDONL28
1,2016-08-01 00:20:00,17.7,11.0,1017.5,SE,141,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:20:00,ILONDONL28
2,2016-08-01 00:35:00,17.5,10.8,1017.5,South,174,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:35:00,ILONDONL28
</code></pre>
<p>If you are not getting a CSV file, I suggest you add a full path to the output filename.</p>
| 1 | 2016-10-06T10:14:07Z | [
"python",
"pandas",
"import",
"weather-api"
]
|
Mocking submodules in python | 39,892,746 | <p>I am trying to generate autodocumentation of my project through sphinx. However, I will run the generation of the autodocs in an environment that won't have all the modules I am importing. Hence I would like to mock the import statements.</p>
<p>On <a href="http://read-the-docs.readthedocs.io/en/latest/faq.html" rel="nofollow">http://read-the-docs.readthedocs.io/en/latest/faq.html</a> I found this trick for C modules:</p>
<pre><code>import sys
from unittest.mock import MagicMock

class Mock(MagicMock):
    @classmethod
    def __getattr__(cls, name):
        return Mock()

MOCK_MODULES = ['pygtk', 'gtk', 'gobject', 'argparse', 'numpy', 'pandas']
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
</code></pre>
<p>However, mocking <code>__getattr__</code> does not solve cases like this:</p>
<pre><code>from foo.bar import blah
</code></pre>
<p>that is, when there is a dot [.] involved in the import statement. </p>
<p>Anyone any idea how to get all my imports mocked for a specific list of modules?</p>
| 0 | 2016-10-06T09:47:57Z | 39,892,851 | <p>The import</p>
<pre><code>from foo.bar import blah
</code></pre>
<p>will look for <code>sys.modules['foo.bar']</code>. Just insert that:</p>
<pre><code>>>> from foo.bar import blah
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'foo'
>>> import sys
>>> from unittest import mock
>>> sys.modules['foo.bar'] = mock.Mock()
>>> from foo.bar import blah
>>> blah
<Mock name='mock.blah' id='4362289896'>
</code></pre>
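<p>So for the sphinx trick from the question, it's enough to list the dotted submodules explicitly in <code>MOCK_MODULES</code> (here <code>foo</code> and <code>foo.bar</code> stand in for whatever packages your project imports):</p>
<pre><code>MOCK_MODULES = ['numpy', 'pandas', 'foo', 'foo.bar']
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
</code></pre>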
| 1 | 2016-10-06T09:53:14Z | [
"python",
"python-3.x",
"mocking",
"python-sphinx",
"autodoc"
]
|
LSTM implementation in keras Using specific dataset | 39,892,774 | <p>I am trying to understand how LSTM RNNs work and how they can be implemented in Keras in order to solve a binary classification problem. My code and the dataset I use are visible below. When I compile the code I get the error <code>TypeError: __init__() got multiple values for keyword argument 'input_dim'</code>. Can anybody help?</p>
<pre><code> from keras.models import Sequential
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
from sklearn.preprocessing import StandardScaler # data normalization
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
X = dataset[:,0:4]
scaler = StandardScaler(copy=True, with_mean=True, with_std=True ) #data normalization
X = scaler.fit_transform(X) #data normalization
Y = dataset[:4]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Embedding(12,input_dim=4,init='uniform',activation='relu'))
model.add(Dense(4, init='uniform', activation='relu'))
model.add(LSTM(100))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=150, batch_size=10)
</code></pre>
<p><a href="http://i.stack.imgur.com/70UWR.png" rel="nofollow"><img src="http://i.stack.imgur.com/70UWR.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T09:49:32Z | 39,893,254 | <p>Looks like two separate questions here. </p>
<p>Regarding how to use LSTMs / Keras, there are some good tutorials around. Try <a href="http://machinelearningmastery.com/binary-classification-tutorial-with-the-keras-deep-learning-library/" rel="nofollow">this one</a> which also describes a binary classification problem. If you have a specific issue or area that you don't understand, let me know.</p>
<p>Regarding the file opening issue, perhaps the whitespace in the filename is causing an issue. Check out <a href="http://stackoverflow.com/questions/14852140/whitespaces-in-the-path-of-windows-filepath">this answer</a> to see if it helps.</p>
| 0 | 2016-10-06T10:12:04Z | [
"python",
"neural-network",
"theano",
"keras"
]
|
LSTM implementation in keras Using specific dataset | 39,892,774 | <p>I am trying to understand how LSTM RNNs work and how they can be implemented in Keras in order to solve a binary classification problem. My code and the dataset I use are visible below. When I compile the code I get the error <code>TypeError: __init__() got multiple values for keyword argument 'input_dim'</code>. Can anybody help?</p>
<pre><code> from keras.models import Sequential
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
from sklearn.preprocessing import StandardScaler # data normalization
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
X = dataset[:,0:4]
scaler = StandardScaler(copy=True, with_mean=True, with_std=True ) #data normalization
X = scaler.fit_transform(X) #data normalization
Y = dataset[:4]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
# create model
model = Sequential()
model.add(Embedding(12,input_dim=4,init='uniform',activation='relu'))
model.add(Dense(4, init='uniform', activation='relu'))
model.add(LSTM(100))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=150, batch_size=10)
</code></pre>
<p><a href="http://i.stack.imgur.com/70UWR.png" rel="nofollow"><img src="http://i.stack.imgur.com/70UWR.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T09:49:32Z | 39,907,711 | <p>This is in fact a case where the error message you are getting is perfectly to-the-point. (I wish this would always be the case with Python and Keras...)</p>
<p>Keras' <strong>Embedding</strong> layer constructor has <a href="https://keras.io/layers/embeddings/" rel="nofollow">this signature</a>:</p>
<pre><code>keras.layers.embeddings.Embedding(input_dim, output_dim, ...)
</code></pre>
<p>However, you are constructing it using:</p>
<pre><code>Embedding(12, input_dim=4, ...)
</code></pre>
<p>So figure out which is the input and output dimension, respectively, and fix your parameter order and names. Based on the table you included in the question, I'm guessing 4 is your input dimension and 12 is your output dimension; then it'd be <code>Embedding(input_dim=4, output_dim=12, ...)</code>.</p>
| 0 | 2016-10-07T00:44:19Z | [
"python",
"neural-network",
"theano",
"keras"
]
|
Embedded document filtration in mongodb | 39,892,799 | <p>{</p>
<pre><code>"_id" : ObjectId("57f5ee94536d1a50ed3337c6"),
"details" : [
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470283450),
"call_letter" : 72,
"fid" : "3273681"
},
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470205742),
"call_letter" : 72,
"fid" : "3308438"
},
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470396778),
"call_letter" : 72,
"fid" : "3539451"
},
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470283450),
"call_letter" : 75,
"fid" : "3273681"
},
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470205742),
"call_letter" : 75,
"fid" : "3308438"
},
{
"status" : NumberLong(0),
"batch_id" : null,
"applied_date" : NumberLong(1470396778),
"call_letter" : 75,
"fid" : "3539451"
}
],
"job_id" : "92854"
}
</code></pre>
<p>This is my document format of <code>mongodb</code>.</p>
<p>I need to select all <strong>fid</strong> values where <strong>call_letter = 75</strong>, both for a particular <code>job_id</code> and for a <code>list of job ids</code> (using pymongo).</p>
<p>How can I filter the data based on these conditions? Please help me.</p>
<p>As I am new to mongoDB, I would like to get the answer for my particular query </p>
 | -1 | 2016-10-06T09:50:33Z | 39,895,193 | <p>Try using an aggregate query like the one below (after <code>$unwind</code>, the field is addressed as <code>details.call_letter</code>, and its value is a number, not a string):</p>
<pre><code>db.jobs.aggregate([{"$match": {"job_id": "92854"}},
                   {"$unwind": "$details"},
                   {"$match": {"details.call_letter": 75}}])
</code></pre>
<p>Add '$project' and '$group' to the pipeline to format your output as required (use '$push' to make it a list).</p>
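<p>Since the question asks for pymongo, here is a minimal sketch of the full pipeline (the connection, database, and collection names are assumptions; adjust to your setup):</p>
<pre><code>from pymongo import MongoClient

coll = MongoClient().mydb.jobs  # assumed database/collection names

job_ids = ["92854"]  # a single id or a whole list
pipeline = [
    {"$match": {"job_id": {"$in": job_ids}}},
    {"$unwind": "$details"},
    {"$match": {"details.call_letter": 75}},
    {"$group": {"_id": "$job_id", "fids": {"$push": "$details.fid"}}},
]
for doc in coll.aggregate(pipeline):
    print(doc["_id"], doc["fids"])
</code></pre>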
| 0 | 2016-10-06T11:48:01Z | [
"python"
]
|
How can I get around memory limitation in this script? | 39,892,920 | <p>I'm trying to normalize my dataset which is <code>1.7 Gigabyte</code>. I have <code>14Gig of RAM</code> and I hit my limit very quickly. </p>
<p>This happens when computing the <code>mean/std</code> of the training data. The training data takes up the majority of the memory when loaded into <code>RAM (13.8Gig)</code>, so the mean gets calculated, but when it reaches the next line, calculating the <code>std</code>, it crashes. </p>
<p>The script follows:</p>
<pre><code>import caffe
import leveldb
import numpy as np
from caffe.proto import caffe_pb2
import cv2
import sys
import time

direct = 'examples/svhn/'
db_train = leveldb.LevelDB(direct+'svhn_train_leveldb')
db_test = leveldb.LevelDB(direct+'svhn_test_leveldb')
datum = caffe_pb2.Datum()

#using the whole dataset for training which is 604,388
size_train = 604388 #normal training set is 73257
size_test = 26032
data_train = np.zeros((size_train, 3, 32, 32))
label_train = np.zeros(size_train, dtype=int)

print 'Reading training data...'
i = -1
for key, value in db_train.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_train:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_train[i] = data
    label_train[i] = label

print 'Computing statistics...'
print 'calculating mean...'
mean = np.mean(data_train, axis=(0,2,3))
print 'calculating std...'
std = np.std(data_train, axis=(0,2,3))

#np.savetxt('mean_svhn.txt', mean)
#np.savetxt('std_svhn.txt', std)

print 'Normalizing training'
for i in range(3):
    print i
    data_train[:, i, :, :] = data_train[:, i, :, :] - mean[i]
    data_train[:, i, :, :] = data_train[:, i, :, :]/std[i]

print 'Outputting training data'
leveldb_file = direct + 'svhn_train_leveldb_normalized'
batch_size = size_train

# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()

for i in range(size_train):
    if i % 1000 == 0:
        print i

    # save in datum
    datum = caffe.io.array_to_datum(data_train[i], label_train[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put( keystr, datum.SerializeToString() )

    # write batch
    if(i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)

# write last batch
if (i+1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)

#explicitly freeing memory to avoid hitting the limit!
#del data_train
#del label_train

print 'Reading test data...'
data_test = np.zeros((size_test, 3, 32, 32))
label_test = np.zeros(size_test, dtype=int)

i = -1
for key, value in db_test.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_test:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_test[i] = data
    label_test[i] = label

print 'Normalizing test'
for i in range(3):
    print i
    data_test[:, i, :, :] = data_test[:, i, :, :] - mean[i]
    data_test[:, i, :, :] = data_test[:, i, :, :]/std[i]

#Zero Padding
#print 'Padding...'
#npad = ((0,0), (0,0), (4,4), (4,4))
#data_train = np.pad(data_train, pad_width=npad, mode='constant', constant_values=0)
#data_test = np.pad(data_test, pad_width=npad, mode='constant', constant_values=0)

print 'Outputting test data'
leveldb_file = direct + 'svhn_test_leveldb_normalized'
batch_size = size_test

# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()

for i in range(size_test):
    # save in datum
    datum = caffe.io.array_to_datum(data_test[i], label_test[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put( keystr, datum.SerializeToString() )

    # write batch
    if(i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)

# write last batch
if (i+1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)
</code></pre>
<p>How can I make it consume less memory so that I can get to run the script?</p>
| 3 | 2016-10-06T09:55:59Z | 39,893,540 | <p>Why not compute the statistics on a subset of the original data? For example, here we compute the mean and std for just 100 points:</p>
<pre><code>sample_size = 100
data_train = np.random.rand(1000, 20, 10, 10)
# Take subset of training data
idxs = np.random.choice(data_train.shape[0], sample_size)
data_train_subset = data_train[idxs]
# Compute stats
mean = np.mean(data_train_subset, axis=(0,2,3))
std = np.std(data_train_subset, axis=(0,2,3))
</code></pre>
<p>If your data is 1.7Gb, it is highly unlikely that you need all the data to get an accurate estimation of the mean and std.</p>
<p>In addition, could you get away with fewer bits in your datatype? I'm not sure what datatype <code>caffe.io.datum_to_array</code> returns, but you could do:</p>
<pre><code>data = caffe.io.datum_to_array(datum).astype(np.float32)
</code></pre>
<p>to ensure the data is <code>float32</code> format. (If the data is currently <code>float64</code>, then this will save you half the space).</p>
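<p>And if an exact mean/std over the full training set is needed, another option is to accumulate per-channel running sums while streaming over the LevelDB, so the whole array never has to sit in RAM. A sketch reusing <code>db_train</code>/<code>datum</code>/<code>caffe</code> from the question (note: the E[x^2] - E[x]^2 form can lose precision for very large counts; Welford's algorithm is the numerically safer variant):</p>
<pre><code>count = 0
s = np.zeros(3)    # per-channel running sum
s2 = np.zeros(3)   # per-channel running sum of squares

for key, value in db_train.RangeIter():
    datum.ParseFromString(value)
    data = caffe.io.datum_to_array(datum).astype(np.float64)  # (3, 32, 32)
    s += data.sum(axis=(1, 2))
    s2 += (data ** 2).sum(axis=(1, 2))
    count += data.shape[1] * data.shape[2]

mean = s / count
std = np.sqrt(s2 / count - mean ** 2)
</code></pre>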
| 0 | 2016-10-06T10:25:48Z | [
"python"
]
|
extract RGB values from linear colourmap made using LinearSegmentedColormap | 39,893,017 | <p>I would like to create a linear colourmap from a list of discrete colors, and extract the underlying RGB values. I've managed to do the first step using the example script in the matplotlib document.</p>
<pre><code>from matplotlib import cm
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
colormap = LinearSegmentedColormap.from_list('colormapX', colors, N=100)
x = np.arange(0, np.pi, 0.1)
y = np.arange(0, 2*np.pi, 0.1)
X, Y = np.meshgrid(x, y)
Z = np.cos(X) * np.sin(Y) * 10
fig, ax = plt.subplots()
im = ax.imshow(Z, interpolation='nearest', origin='lower', cmap=colormap)
fig.colorbar(im, ax=ax)
plt.show()
</code></pre>
<p>This colormap is based on 100 colours that were derived from the interpolation of the original three colours. How do I extract an ndarray with the RGB values of these 100 colours?</p>
| 0 | 2016-10-06T10:00:50Z | 39,895,534 | <p>Don't know if it's available directly somewhere, or if one can ask the <code>colormap</code> to evaluate a given value (return the colour corresponding to a given number), but you can make the list yourself in this simple case:</p>
<pre><code>def color_interpolation(c1, c2, fraction=0.5):
return ((1.-fraction)*c1[0] + fraction*c2[0],
(1.-fraction)*c1[1] + fraction*c2[1],
(1.-fraction)*c1[2] + fraction*c2[2],)
def make_color_interpolation_list(colors, N):
n_colors = len(colors)
    n_steps_between_colors = N // (n_colors-1)  # integer division, so it can also be used as an index on Python 3
fraction_step = 1. / n_steps_between_colors
color_array = np.zeros((N,3))
color_index = 0
while color_index < n_colors-1:
fraction_index = 0
while fraction_index < n_steps_between_colors:
index = color_index*n_steps_between_colors+fraction_index
color_array[index]= color_interpolation(c1=colors[color_index],
c2=colors[color_index+1],
fraction=fraction_index*fraction_step)
fraction_index += 1
color_index += 1
if index != len(color_array)-1:
color_array[-1] = colors[-1]
return color_array
</code></pre>
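<p>For what it's worth, Matplotlib colormaps are also callable, so the interpolated colours can be pulled straight from the <code>colormap</code> object built in the question (it returns RGBA rows; drop the last column if only RGB is wanted):</p>
<pre><code>rgba = colormap(np.linspace(0, 1, 100))  # shape (100, 4): R, G, B, A
rgb = rgba[:, :3]                        # keep only the RGB columns
</code></pre>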
| 1 | 2016-10-06T12:05:33Z | [
"python",
"matplotlib",
"colormap"
]
|
time taken by code is an issue for python dataframe | 39,893,142 | <p>I need help with the time taken by the following dataframe code;
it takes around 20 seconds to complete on a dataset of around 2000 records.</p>
<pre><code>def findRe(leaddatadf, keyAttributes, datadf):
for combs in itertools.combinations(atrList,
len(atrList)-1):
v_by =(set(atrList) - set(combs)) # varrying
grpdatapf=datadf.groupby(combs)
for name, group in grpdatapf:
if(group.shape[0]>1):
tmpgdf = leaddatadf[leaddatadf['unique_id'].astype(float).\
isin(group['unique_id'].astype(float))]
if(tmpgdf.shape[0]>1):
tmpgdf['mprice']=tmpgdf['mprice'].astype(float)
tmpgdf=tmpgdf.sort('mprice')
tmpgdf['id'] = tmpgdf['id']
tmpgdf['desc'] = tmpgdf['description']
tmpgdf['related_id'] = tmpgdf['id'].shift(-1)
tmpgdf['related_desc'] = tmpgdf['description'].shift(-1)
tmpgdf['related_mprice'] = tmpgdf['mprice'].shift(-1)
tmpgdf['pld'] = np.where(
(tmpgdf['related_price'].astype(float) > \
tmpgdf['mprice'].astype(float)),
(tmpgdf['related_price'].astype(float) - \
tmpgdf['mprice'].astype(float)) ,
(tmpgdf['mprice'].astype(float) - \
tmpgdf['related_mprice'].astype(float)))
tmpgdf['pltxt'] = np.where(
tmpgdf['related_mprice'].astype(float) - \
tmpgdf['mprice'].astype(float)>0.0,'<',
np.where(tmpgdf['related_mprice'].astype(float)\
- tmpgdf['mprice'].astype(float)<0,'>','='))
tmpgdf['prc_rlt_dif_nbr_p'] = abs(
(tmpgdf['pld'].astype(float) / \
((tmpgdf['mprice'].astype(float)))) )
tmpgdf['keyatr'] = str(atrList)
tmpgdf['varying'] = np.where(1==1,
"".join(v_by ),'')# varrying
temp = tmpgdf[['id',
'desc', 'related_id',
'related_desc', 'pltxt', 'pld',
'prc_rlt_dif_nbr_p', 'mprice', 'related_mprice',
'keyatr', 'varying']]
temp = temp[temp['related_mprice'].astype(float)>=0.0]
reldf.extend(list(temp.T.to_dict().values()))
return pd.DataFrame(
reldf, columns = ['id',
'desc', 'related_id',
'related_desc', 'pltxt', 'pld',
'prc_rlt_dif_nbr_p', 'mprice', 'related_mprice',
'keyatr', 'varying'])
</code></pre>
 | -2 | 2016-10-06T10:06:24Z | 39,893,260 | <p>Please print, after every line, how many milliseconds it takes.</p>
<p>Use this: <a href="http://stackoverflow.com/a/1557584/2655092">http://stackoverflow.com/a/1557584/2655092</a></p>
<p>Then come back with the lines that take the most time.</p>
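<p>As a rough alternative, the standard-library profiler reports where the time goes without editing the function at all (argument names taken from the question):</p>
<pre><code>import cProfile

# Profile one call and sort the report by cumulative time
cProfile.run('findRe(leaddatadf, keyAttributes, datadf)', sort='cumulative')
</code></pre>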
| 0 | 2016-10-06T10:12:17Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
]
|
time taken by code is an issue for python dataframe | 39,893,142 | <p>I need help with the time taken by the following dataframe code;
it takes around 20 seconds to complete on a dataset of around 2000 records.</p>
<pre><code>def findRe(leaddatadf, keyAttributes, datadf):
for combs in itertools.combinations(atrList,
len(atrList)-1):
v_by =(set(atrList) - set(combs)) # varrying
grpdatapf=datadf.groupby(combs)
for name, group in grpdatapf:
if(group.shape[0]>1):
tmpgdf = leaddatadf[leaddatadf['unique_id'].astype(float).\
isin(group['unique_id'].astype(float))]
if(tmpgdf.shape[0]>1):
tmpgdf['mprice']=tmpgdf['mprice'].astype(float)
tmpgdf=tmpgdf.sort('mprice')
tmpgdf['id'] = tmpgdf['id']
tmpgdf['desc'] = tmpgdf['description']
tmpgdf['related_id'] = tmpgdf['id'].shift(-1)
tmpgdf['related_desc'] = tmpgdf['description'].shift(-1)
tmpgdf['related_mprice'] = tmpgdf['mprice'].shift(-1)
tmpgdf['pld'] = np.where(
(tmpgdf['related_price'].astype(float) > \
tmpgdf['mprice'].astype(float)),
(tmpgdf['related_price'].astype(float) - \
tmpgdf['mprice'].astype(float)) ,
(tmpgdf['mprice'].astype(float) - \
tmpgdf['related_mprice'].astype(float)))
tmpgdf['pltxt'] = np.where(
tmpgdf['related_mprice'].astype(float) - \
tmpgdf['mprice'].astype(float)>0.0,'<',
np.where(tmpgdf['related_mprice'].astype(float)\
- tmpgdf['mprice'].astype(float)<0,'>','='))
tmpgdf['prc_rlt_dif_nbr_p'] = abs(
(tmpgdf['pld'].astype(float) / \
((tmpgdf['mprice'].astype(float)))) )
tmpgdf['keyatr'] = str(atrList)
tmpgdf['varying'] = np.where(1==1,
"".join(v_by ),'')# varrying
temp = tmpgdf[['id',
'desc', 'related_id',
'related_desc', 'pltxt', 'pld',
'prc_rlt_dif_nbr_p', 'mprice', 'related_mprice',
'keyatr', 'varying']]
temp = temp[temp['related_mprice'].astype(float)>=0.0]
reldf.extend(list(temp.T.to_dict().values()))
return pd.DataFrame(
reldf, columns = ['id',
'desc', 'related_id',
'related_desc', 'pltxt', 'pld',
'prc_rlt_dif_nbr_p', 'mprice', 'related_mprice',
'keyatr', 'varying'])
</code></pre>
 | -2 | 2016-10-06T10:06:24Z | 39,896,337 | <p>You're using <code>astype(float)</code> very often. Every time you use it, a copy of the series is created. You could instead set <code>dtype=float</code> at the very beginning, when you load the dataframe - that way you only convert each series to float once, not on every iteration :)</p>
<p>Let me know if this helps.</p>
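<p>A sketch of the one-time conversion, using the column names that appear in the question:</p>
<pre><code># Convert the numeric columns a single time, right after loading,
# instead of calling .astype(float) inside the loops
for col in ['unique_id', 'mprice']:
    leaddatadf[col] = leaddatadf[col].astype(float)
datadf['unique_id'] = datadf['unique_id'].astype(float)
</code></pre>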
| 0 | 2016-10-06T12:42:57Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
]
|
How to calculate count and percentage in groupby in Python | 39,893,148 | <p>I have the following output after grouping by</p>
<pre><code>Publisher.groupby('Category')['Title'].count()
Category
Coding 5
Hacking 7
Java 1
JavaScript 5
LEGO 43
Linux 7
Networking 5
Others 123
Python 8
R 2
Ruby 4
Scripting 4
Statistics 2
Web 3
</code></pre>
<p>In the above output I also want the percentage, i.e. for the first row <code>5*100/219</code> and so on. I am doing the following:</p>
<pre><code> Publisher.groupby('Category')['Title'].agg({'Count':'count','Percentage':lambda x:x/x.sum()})
</code></pre>
<p>But it gives me an error. Please help</p>
| 0 | 2016-10-06T10:06:37Z | 39,893,317 | <p>I think you can use:</p>
<pre><code>P = Publisher.groupby('Category')['Title'].count().reset_index()
P['Percentage'] = 100 * P['Title'] / P['Title'].sum()
</code></pre>
<p>Sample:</p>
<pre><code>Publisher = pd.DataFrame({'Category':['a','a','s'],
'Title':[4,5,6]})
print (Publisher)
Category Title
0 a 4
1 a 5
2 s 6
P = Publisher.groupby('Category')['Title'].count().reset_index()
P['Percentage'] = 100 * P['Title'] / P['Title'].sum()
print (P)
Category Title Percentage
0 a 2 66.666667
1 s 1 33.333333
</code></pre>
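<p>Equivalently, without <code>reset_index</code>, the percentage can be computed straight off the counts Series (same <code>Publisher</code> frame as above):</p>
<pre><code>counts = Publisher.groupby('Category')['Title'].count()
percentage = 100 * counts / counts.sum()
print (percentage)
Category
a    66.666667
s    33.333333
Name: Title, dtype: float64
</code></pre>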
| 1 | 2016-10-06T10:14:42Z | [
"python",
"pandas",
"group-by"
]
|
Why does TensorFlow only find one CPU device despite having multiple cores? | 39,893,161 | <p>as far as I understood TensorFlow creates one device per core. (source: <a href="https://github.com/samjabrahams/tensorflow-white-paper-notes" rel="nofollow">https://github.com/samjabrahams/tensorflow-white-paper-notes</a>: <em>NOTE: To reiterate- in this context, "single device" means using a single CPU core or single GPU, not a single machine. Similarly, "multi-device" does not refer to multiple machines, but to multiple CPU cores and/or GPUs. See "3.3 Distributed Execution" for multiple machine discussion.</em>)</p>
<p>My computer has four cores but it only recognises one: </p>
<pre><code>>>> from tensorflow.python.client import device_lib
>>> print(device_lib.list_local_devices())
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
bus_adjacency: BUS_ANY
incarnation: 13835232998165214133
]
</code></pre>
<p>Do you have any idea why?</p>
| 0 | 2016-10-06T10:07:23Z | 39,904,672 | <p>By default <code>cpu:0</code> represents all cores available to the process. You can create devices <code>cpu:0</code>, <code>cpu:1</code> which represent 1 logical core each by doing something like this</p>
<pre><code>config = tf.ConfigProto(device_count={"CPU": 2},
inter_op_parallelism_threads=2,
intra_op_parallelism_threads=1)
sess = tf.Session(config=config)
</code></pre>
<p>Then you can assign ops to devices as:</p>
<pre><code>with tf.device("/cpu:0"):
# ...
with tf.device("/cpu:1"):
# ...
</code></pre>
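<p>Putting the two pieces together, a minimal end-to-end sketch (written against the 2016-era TF 1.x-style API used above):</p>
<pre><code>import tensorflow as tf

config = tf.ConfigProto(device_count={"CPU": 2},
                        inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=1)

with tf.device("/cpu:0"):
    a = tf.constant(3.0)
with tf.device("/cpu:1"):
    b = a * 2.0

with tf.Session(config=config) as sess:
    print(sess.run(b))   # 6.0
</code></pre>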
| 0 | 2016-10-06T19:58:59Z | [
"python",
"tensorflow"
]
|
multiply even numbers, add odd numbers | 39,893,178 | <p>I'm trying to work out how to multiply all even numbers in a list by 2 and add 7 to all odd numbers, and then present the list in descending order. It has to be done with a function that takes the list as an argument.</p>
<p>I found this here on stackoverflow, but it's not really what I'm after, since the example sums up the even numbers to one product.</p>
<p>This is my code:</p>
<pre><code>L = [45, 22, 2, 498, 78]
def EvenOdd(L):
product = 2
resp = 7
elem = None
for elem, val in enumerate(L):
elem += 1
if elem % 2 == 0:
product *= elem
if elem % 2 == 1:
resp += elem
result = L[elem]
result.sort()
result.reverse()
print(result)
</code></pre>
 | -3 | 2016-10-06T10:08:10Z | 39,893,357 | <p>You can create a new list using:</p>
<pre><code>new_list = [item * 2 if item % 2 == 0 else item + 7 for item in L]
</code></pre>
<p>and then sort it using:</p>
<pre><code>new_list.sort(reverse=True)
</code></pre>
<p>Output should look like this:</p>
<pre><code>[996, 156, 52, 44, 4]
</code></pre>
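<p>Wrapped up as the required function that takes the list as an argument (a small sketch combining both steps):</p>
<pre><code>def even_odd(numbers):
    new_list = [n * 2 if n % 2 == 0 else n + 7 for n in numbers]
    new_list.sort(reverse=True)
    return new_list

print(even_odd([45, 22, 2, 498, 78]))  # [996, 156, 52, 44, 4]
</code></pre>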
| 0 | 2016-10-06T10:16:19Z | [
"python",
"list",
"function"
]
|
multiply even numbers, add odd numbers | 39,893,178 | <p>I'm trying to work out how to multiply all even numbers in a list by 2 and add 7 to all odd numbers, and then present the list in descending order. It has to be done with a function that takes the list as an argument.</p>
<p>I found this here on stackoverflow, but it's not really what I'm after, since the example sums up the even numbers to one product.</p>
<p>This is my code:</p>
<pre><code>L = [45, 22, 2, 498, 78]
def EvenOdd(L):
product = 2
resp = 7
elem = None
for elem, val in enumerate(L):
elem += 1
if elem % 2 == 0:
product *= elem
if elem % 2 == 1:
resp += elem
result = L[elem]
result.sort()
result.reverse()
print(result)
</code></pre>
| -3 | 2016-10-06T10:08:10Z | 39,893,504 | <p>You can go through the list and check whether the number is even or not. Then do your multiplication/addition depending on the result. An example is shown below:</p>
<pre><code>original_list = [45, 22, 2, 498, 78]
new_list = []
for number in original_list:
if number % 2 == 0: #check to see if the number is even
new_list.append(number*2)
else:
new_list.append(number+7)
sort_list = sorted(new_list)
descending_list = sort_list[::-1]
print (original_list)
print (descending_list)
</code></pre>
<p>The output of which gives:</p>
<pre><code>[45, 22, 2, 498, 78]
[996, 156, 52, 44, 4]
</code></pre>
| 0 | 2016-10-06T10:24:17Z | [
"python",
"list",
"function"
]
|
To read data from Putty and store it in another file by Python | 39,893,274 | <p>I need a Python script that will collect the logs/information from PuTTY and then store this information in another file saved on the drive.</p>
| -4 | 2016-10-06T10:12:59Z | 39,893,343 | <pre><code>import subprocess
import sys
HOST="127.0.0.1"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND="uname -a"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()  # blocks until the remote command completes
if result == []:
error = ssh.stderr.readlines()
print >>sys.stderr, "ERROR: %s" % error
else:
print result
</code></pre>
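<p>To also store the collected output in a file, as the question asks, a small addition (the file name is just an example):</p>
<pre><code>with open('session_log.txt', 'w') as f:
    f.writelines(result)   # readlines() keeps the newlines, so this round-trips cleanly
</code></pre>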
| 0 | 2016-10-06T10:15:30Z | [
"python"
]
|
How to navigate a webpage with python | 39,893,382 | <p>I made a python script for bruteforce (pen testing), but before the bruteforce starts I need to click through some links and log in, so I want to do that with python as well. Basically, when I start the script it should log in, click some links, and then start the bruteforce.</p>
<p>So, is there any way I can make my python script do those basic steps for me?</p>
| 0 | 2016-10-06T10:17:39Z | 39,895,021 | <p>You might like to check these:</p>
<ul>
<li><a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow">mechanize</a></li>
<li><a href="http://seleniumhq.org/" rel="nofollow">Selenium</a></li>
<li><a href="http://code.google.com/p/python-spidermonkey/" rel="nofollow">Spidermonkey</a></li>
<li><a href="http://www.webkit.org/" rel="nofollow">webkit</a></li>
</ul>
<p>These tools will help you emulate a browser from a script.</p>
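<p>For example, a login-then-click flow with mechanize might look roughly like this (the URL, form index and field names are assumptions that depend entirely on the target page):</p>
<pre><code>import mechanize

br = mechanize.Browser()
br.open("http://example.com/login")
br.select_form(nr=0)            # first form on the page
br["username"] = "user"         # field names depend on the site
br["password"] = "secret"
br.submit()
resp = br.follow_link(text="Some link")   # "click" a link by its text
print(resp.read())
</code></pre>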
| 2 | 2016-10-06T11:39:39Z | [
"python",
"website"
]
|
python - pandas - check if date exists in dataframe | 39,893,420 | <p>I have a dataframe like this:</p>
<pre><code> category date number
0 Cat1 2010-03-01 1
1 Cat2 2010-09-01 1
2 Cat3 2010-10-01 1
3 Cat4 2010-12-01 1
4 Cat5 2012-04-01 1
5 Cat2 2013-02-01 1
6 Cat3 2013-07-01 1
7 Cat4 2013-11-01 2
8 Cat5 2014-11-01 5
9 Cat2 2015-01-01 1
10 Cat3 2015-03-01 1
</code></pre>
<p>I would like to check if a date exists in this dataframe, but I am unable to. I tried various ways, as below, but with no luck:</p>
<pre><code>if pandas.Timestamp("2010-03-01 00:00:00", tz=None) in df['date'].values:
print 'date exist'
if datetime.strptime('2010-03-01', '%Y-%m-%d') in df['date'].values:
print 'date exist'
if '2010-03-01' in df['date'].values:
print 'date exist'
</code></pre>
<p>The 'date exist' never gets printed. How can I check whether the date exists? I want to insert the non-existent dates with number equal to 0 for all the categories, so that I can plot a continuous line chart (one category per line). Help is appreciated. Thanks in advance. </p>
<p>The last one gives me this:
<code>FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison</code>
And <code>date exist</code> does not get printed. </p>
 | 2 | 2016-10-06T10:19:34Z | 39,893,467 | <p>I think you need to convert to datetime first with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> and then, if you need to select all matching rows, use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>df.date = pd.to_datetime(df.date)
print (df.date == pd.Timestamp("2010-03-01 00:00:00"))
0 True
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
Name: date, dtype: bool
print (df[df.date == pd.Timestamp("2010-03-01 00:00:00")])
category date number
0 Cat1 2010-03-01 1
</code></pre>
<p>To get a plain <code>True</code>, check the value against the underlying <code>numpy array</code> obtained via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a>:</p>
<pre><code>if ('2010-03-01' in df['date'].values):
print ('date exist')
</code></pre>
<p>Or test for at least one <code>True</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html" rel="nofollow"><code>any</code></a>, as <a href="http://stackoverflow.com/questions/39893420/python-pandas-check-if-date-exists-in-dataframe#comment67072665_39893420">Edchum</a> comments:</p>
<pre><code>if (df.date == pd.Timestamp("2010-03-01 00:00:00")).any():
print ('date exist')
</code></pre>
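<p>And for the stated end goal - filling in the missing dates with <code>number = 0</code> for every category - one sketch builds the full category/date grid and reindexes onto it:</p>
<pre><code>full_idx = pd.MultiIndex.from_product(
    [df['category'].unique(), df['date'].unique()],
    names=['category', 'date'])
filled = (df.set_index(['category', 'date'])
            .reindex(full_idx, fill_value=0)   # missing pairs get number = 0
            .reset_index())
</code></pre>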
| 1 | 2016-10-06T10:22:39Z | [
"python",
"datetime",
"pandas",
"dataframe"
]
|
Returning multiple lists from pool.map processes? | 39,893,635 | <p>Win 7, x64, Python 2.7.12</p>
<p>In the following code I am setting off some pool processes to do a trivial multiplication via the <code>multiprocessing.Pool.map()</code> method. The output data is collected in <code>List_1</code>.</p>
<p>NOTE: this is a stripped down simplification of my actual code. There are multiple lists involved in the real application, all huge.</p>
<pre><code>import multiprocessing
import numpy as np
def createLists(branches):
firstList = branches[:] * node
return firstList
def init_process(lNodes):
global node
node = lNodes
print 'Starting', multiprocessing.current_process().name
if __name__ == '__main__':
mgr = multiprocessing.Manager()
nodes = mgr.list()
pool_size = multiprocessing.cpu_count()
branches = [i for i in range(1, 21)]
lNodes = 10
splitBranches = np.array_split(branches, int(len(branches)/pool_size))
pool = multiprocessing.Pool(processes=pool_size, initializer=init_process, initargs=[lNodes])
myList_1 = pool.map(createLists, splitBranches)
pool.close()
pool.join()
</code></pre>
<p>I now add an extra calculation to <code>createLists()</code> & try to pass back both lists.</p>
<pre><code>import multiprocessing
import numpy as np
def createLists(branches):
firstList = branches[:] * node
secondList = branches[:] * node * 2
return firstList, secondList
def init_process(lNodes):
global node
node = lNodes
print 'Starting', multiprocessing.current_process().name
if __name__ == '__main__':
mgr = multiprocessing.Manager()
nodes = mgr.list()
pool_size = multiprocessing.cpu_count()
branches = [i for i in range(1, 21)]
lNodes = 10
splitBranches = np.array_split(branches, int(len(branches)/pool_size))
pool = multiprocessing.Pool(processes=pool_size, initializer=init_process, initargs=[lNodes])
myList_1, myList_2 = pool.map(createLists, splitBranches)
pool.close()
pool.join()
</code></pre>
<p>This raises the follow error & traceback..</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-6-ff188034c708>", line 1, in <module>
runfile('C:/Users/nr16508/Local Documents/Inter Trab Angle/Parallel/scratchpad.py', wdir='C:/Users/nr16508/Local Documents/Inter Trab Angle/Parallel')
File "C:\Users\nr16508\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\nr16508\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/nr16508/Local Documents/Inter Trab Angle/Parallel/scratchpad.py", line 36, in <module>
myList_1, myList_2 = pool.map(createLists, splitBranches)
ValueError: too many values to unpack
</code></pre>
<p>When I tried to put both list into one to pass back ie... </p>
<pre><code>return [firstList, secondList]
......
myList = pool.map(createLists, splitBranches)
</code></pre>
<p>...the output becomes too jumbled for further processing.</p>
<p>Is there an method of collecting more than one list from pooled processes?</p>
| 1 | 2016-10-06T10:30:05Z | 39,893,715 | <p>This question has nothing to do with multiprocessing or threadpooling. It is simply about how to unzip lists, which can be done with the standard <code>zip(*...)</code> idiom.</p>
<pre><code>myList_1, myList_2 = zip(*pool.map(createLists, splitBranches))
</code></pre>
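<p>To see why, <code>pool.map</code> returns one <code>(firstList, secondList)</code> tuple per task, and <code>zip(*...)</code> regroups them by position:</p>
<pre><code>pairs = [([1, 2], [2, 4]), ([3, 4], [6, 8])]   # shape of pool.map's result
first, second = zip(*pairs)
print(first)    # ([1, 2], [3, 4])
print(second)   # ([2, 4], [6, 8])
</code></pre>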
| 1 | 2016-10-06T10:33:46Z | [
"python",
"threadpool",
"python-multiprocessing"
]
|
How to make a moving circle delete on impact with another circle in Python | 39,893,693 | <pre><code>from tkinter import *
import time
root = Tk()
canvas =Canvas(root, width=1000, height=1000, bg= '#2B2B2B')
canvas.pack()
#id1 white
x = 400
y = 400
r = 50
#rc2 orange
f = 100
d = 400
r2 =100
id1 = canvas.create_oval(x - r, y - r, x + r, y + r, outline='white')
id2 = canvas.create_oval(f - r2, d - r2, f + r2, d + r2, outline='#CC7832')
def move_circle():
for k in range(31):
time.sleep(0.025)
canvas.move(id2, 5, 0)
canvas.update()
def get_coords(id_num):
pos = canvas.coords(id_num)
x = (pos[0] + pos[2])/2
y = (pos[1] + pos[3])/2
print(x, y)
return x, y
from math import sqrt
def distance(id1, id2):
x1, y1 = get_coords(id1)
x2, y2 = get_coords(id2)
print("distance", x1, y1, x2, y2)
return sqrt((x2 - x1)**2 + (y2 - y1)**2)
def delete_circle(): #12
if distance(id1, id2) < (r+r2):
canvas.delete(id2)
move_circle()
delete_circle()
root.mainloop()
</code></pre>
<p>Hi guys, I am trying to get circle id2 to delete when it hits id1. At the moment this only works if I have my range
set to between 31-89 in the move_circle function. I understand that this is the case because the delete function only executes once the move_circle function has completed its loop. I think the solution is to somehow give the delete function access to the coordinate changes as they occur during the loop, but I have hit a wall, to say the least, trying to do this. Any help would be much appreciated. Thanks a million in advance.</p>
 | 0 | 2016-10-06T10:32:33Z | 39,899,324 | <p>You just need the <code>find_overlapping</code> method.</p>
<p>Basic example :</p>
<pre><code>id1 = canvas.create_oval(x - r, y - r, x + r, y + r, outline='white')
id2 = canvas.create_oval(f - r2, d - r2, f + r2, d + r2, outline='#CC7832')
id1_coords = canvas.coords(id1)
def move_circle(x, y):
canvas.move(id2, x, y)
if id2 not in canvas.find_overlapping(*id1_coords) :
canvas.after(100, move_circle, x, y)
else :
canvas.delete(id2)
move_circle(5, 0)
root.mainloop()
</code></pre>
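<p>A note on the design: <code>canvas.after</code> schedules each step on Tk's event loop instead of blocking it with <code>time.sleep</code>, so the overlap test runs after every single move while the window stays responsive. <code>id1_coords</code> can be captured once here because the white circle never moves.</p>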
| 0 | 2016-10-06T14:57:09Z | [
"python",
"tkinter"
]
|
How to close tkinter window without a button? | 39,893,719 | <p>I'm an A-level computing student and I am the only one in my class who can code using Python. Even my teachers have not learned the language. I'm attempting to code a login program that exits when the info is put in correctly and displays a welcome screen image (I haven't coded that part yet). It has to close and display a fail message after 3 failed login attempts. I've encountered many logic errors when attempting to alter my attempts variable for the elif statement to work after many failed logins as well as getting the tkinter window to terminate/close based on the relevant if/elif statement. This is failing to work and I've looked at many code examples on this site and cannot find anything, could I please get some help with fixing my code?</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import * #Importing graphics
attempts = 0 #Defining attempts variable
def OperatingProgram(): #Defining active program
class Application(Frame):
global attempts
def __init__(self,master):
super(Application, self).__init__(master) #Set __init__ to the master class
self.grid()
self.InnerWindow() #Creates function
def InnerWindow(self): #Defining the buttons and input boxes within the window
global attempts
print("Booted log in screen")
self.title = Label(self, text=" Please log in, you have " + str(attempts) + " incorrect attempts.") #Title
self.title.grid(row=0, column=2)
self.user_entry_label = Label(self, text="Username: ") #Username box
self.user_entry_label.grid(row=1, column=1)
self.user_entry = Entry(self) #Username entry box
self.user_entry.grid(row=1, column=2)
self.pass_entry_label = Label(self, text="Password: ") #Password label
self.pass_entry_label.grid(row=2, column=1)
self.pass_entry = Entry(self) #Password entry box
self.pass_entry.grid(row=2, column=2)
self.sign_in_butt = Button(self, text="Log In",command = self.logging_in) #Log in button
self.sign_in_butt.grid(row=5, column=2)
def logging_in(self):
global attempts
print("processing")
user_get = self.user_entry.get() #Retrieve Username
pass_get = self.pass_entry.get() #Retrieve Password
if user_get == 'octo' and pass_get == 'burger': #Statement for successful info
import time
time.sleep(2) #Delays for 2 seconds
print("Welcome!")
QuitProgram()
elif user_get != 'octo' or pass_get != 'burger': #Statement for any failed info
if attempts >= 2: #Statement if user has gained 3 failed attempts
import time
time.sleep(2)
print("Sorry, you have given incorrect details too many times!")
print("This program will now end itself")
QuitProgram()
else: #Statement if user still has enough attempts remaining
import time
time.sleep(2)
print("Incorrect username, please try again")
attempts += 1
else: #Statement only exists to complete this if statement block
print("I don't know what you did but it is very wrong.")
root = Tk() #Window format
root.title("Log in screen")
root.geometry("320x100")
app = Application(root) #The frame is inside the widget
root.mainloop() #Keeps the window open/running
def QuitProgram(): #Defining program termination
import sys
sys.exit()
OperatingProgram()
</code></pre>
| -1 | 2016-10-06T10:34:14Z | 39,894,033 | <p>Consider for a moment the following two lines in your logging_in method:</p>
<pre><code>if user_get == 'octo' and pass_get == 'burger':
elif user_get != 'octo' or pass_get != 'burger':
</code></pre>
<p>so if the login credentials are correct, the code after the first test is executed. If they are incorrect, the code after the second test is executed.</p>
<p>However, the code that you want to see executed after multiple failures is under a third test clause:</p>
<pre><code>elif attempts >= 3:
</code></pre>
<p>The thing is, the thread of execution will never see this test, as either the first or second one will have already evaluated to true (login credentials are correct, login credentials are incorrect) - it would need to be possible for them both to evaluate to false before the value of attempts would ever be checked.</p>
<p>The easiest way to fix this would be to change your</p>
<pre><code>elif attempts >= 3:
</code></pre>
<p>line to read</p>
<pre><code>if attempts >= 3:
</code></pre>
<p>adjusting the else clause/adding a new one if you feel it is necessary.</p>
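<p>A sketch of <code>logging_in</code> with the checks reordered along those lines (names taken from the question's code):</p>
<pre><code>def logging_in(self):
    global attempts
    user_get = self.user_entry.get()
    pass_get = self.pass_entry.get()
    if user_get == 'octo' and pass_get == 'burger':
        print("Welcome!")
        QuitProgram()
    elif attempts >= 2:   # this is already the third failed attempt
        print("Sorry, you have given incorrect details too many times!")
        QuitProgram()
    else:
        print("Incorrect username, please try again")
        attempts += 1
</code></pre>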
| 1 | 2016-10-06T10:50:37Z | [
"python",
"python-3.x",
"tkinter"
]
|
How can I add a new column, choose the x-lowest values from another column, and use 1 and 0 to differentiate? (According to id number) | 39,893,819 | <p>I have a csv data file and want to add a new column ("new"). In this new column I would like to have 1's for the two lowest values in "cycle" and 0's for the rest. This procedure should be done for each group of numbers in "id". The result should look like the following. (This is just an example; in my case I have much more data.) Hopefully, someone can help me.</p>
<pre><code>
id cycle new
1 1 1
1 2 1
1 3 0
2 1 1
2 2 1
2 3 0
3 1 1
3 2 1
3 3 0
</code></pre>
| 2 | 2016-10-06T10:39:25Z | 39,894,161 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nsmallest.html" rel="nofollow"><code>SeriesGroupBy.nsmallest</code></a> with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>idx = df.groupby('id')['cycle'].nsmallest(2).reset_index(drop=True, level=0).index
print (idx)
Int64Index([0, 1, 3, 4, 6, 7], dtype='int64')
df['new1'] = np.where(df.index.isin(idx), 1, 0)
print (df)
id cycle new new1
0 1 1 1 1
1 1 2 1 1
2 1 3 0 0
3 2 1 1 1
4 2 2 1 1
5 2 3 0 0
6 3 1 1 1
7 3 2 1 1
8 3 3 0 0
</code></pre>
| 0 | 2016-10-06T10:57:01Z | [
"python",
"pandas",
"numpy"
]
|
How can I add a new column, choose the x-lowest values from another column, and use 1 and 0 to differentiate? (According to id number) | 39,893,819 | <p>I have a csv data file and want to add a new column ("new"). In this new column I would like to have 1's for the two lowest values in "cycle" and 0's for the rest. This procedure should be done for each group of numbers in "id". The result should look like the following. (This is just an example; in my case I have much more data.) Hopefully, someone can help me.</p>
<pre><code>
id cycle new
1 1 1
1 2 1
1 3 0
2 1 1
2 2 1
2 3 0
3 1 1
3 2 1
3 3 0
</code></pre>
| 2 | 2016-10-06T10:39:25Z | 39,894,389 | <p>Here's an approach assuming <code>a</code> as the input array with those two columns -</p>
<pre><code>sorted_idx = np.lexsort(a[:,::-1].T)
idx = np.unique(a[sorted_idx,0],return_index=1)[1]
bin_arr = np.convolve(np.in1d(np.arange(a.shape[0]),idx),[1,1],'same')
out = bin_arr[sorted_idx.argsort()]
</code></pre>
<p>Few possible improvements (on performance) :</p>
<p>1) At the first step, we could alternatively have :</p>
<pre><code>sorted_idx = np.ravel_multi_index(a.T,a.max(0)+1).argsort()
</code></pre>
<p>2) Alternative way to calculate <code>idx</code> could be like so :</p>
<pre><code>a0 = a[sorted_idx,0]
idx = np.append(0,np.nonzero(a0[1:] > a0[:-1])[0]+1)
</code></pre>
<p>3) Last two steps could be replaced with something like this -</p>
<pre><code>out = np.zeros(a.shape[0],dtype=int)
out[sorted_idx[(idx[:,None] + [0,1])]] = 1
</code></pre>
<p>Sample run -</p>
<pre><code>In [79]: a # Input array
Out[79]:
array([[ 1, 4],
[ 1, 3],
[ 1, 5],
[ 2, 6],
[ 2, 9],
[ 2, 5],
[ 2, 11],
[ 3, 3],
[ 3, 4],
[ 3, 0]])
In [80]: np.column_stack((a,out)) # Input stacked with output
Out[80]:
array([[ 1, 4, 1],
[ 1, 3, 1],
[ 1, 5, 0],
[ 2, 6, 1],
[ 2, 9, 0],
[ 2, 5, 1],
[ 2, 11, 0],
[ 3, 3, 1],
[ 3, 4, 0],
[ 3, 0, 1]])
</code></pre>
| 0 | 2016-10-06T11:09:19Z | [
"python",
"pandas",
"numpy"
]
|
Python Threading, list of 10 links taken as 10 positional arguments | 39,893,852 | <p>I've got a spider that crawls and adds to a database, and I thought I would use threading to speed things up a little bit. </p>
<p>here is the code:</p>
<pre><code>def final_function(link_set):
root = 'http://www.rightmove.co.uk'
pages = []
for link in link_set:
try:
links = forty_page_getter(link)
pages.append(links)
except:
print('not possible for:' + str(link))
pass
flattened = [item for sublist in pages for item in sublist]
print('flattened done')
for page in flattened:
print(len(flattened))
try:
page_stripper(link=(root+page))
except:
print('couldnt do it for')
pass
</code></pre>
<p>so that is the final function that takes in a list of links as an argument.
My problem is here:</p>
<pre><code>if __name__ == "__main__":
areas = pd.read_csv('postcodes.csv')
areas = areas['0']
result_list = split_list(flattened=areas, chunk_size=10)
threads = []
outer_count = 1
# here ten postcode links
for i in result_list:
print('Started thread No. ' + str(outer_count))
t = threading.Thread(target=final_function, args=i)
threads.append(t)
t.start()
outer_count += 1
</code></pre>
<p>i is a sublist of links from which I can get housing data; its length is ten, which is why I get this exception: </p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
TypeError: final_function() takes 1 positional argument but 10 were given
</code></pre>
<p>Is there any way I can get past this? I'm out of ideas, as I thought simply passing the list as an argument would make sense.</p>
<p>EDIT: solved it myself. I don't know why, but all you need to do is </p>
<pre><code>t = threading.Thread(target=final_function, args=(i,))
</code></pre>
<p>which solves it </p>
 | 0 | 2016-10-06T10:41:03Z | 39,894,065 | <p><code>args</code> in <code>threading.Thread</code> is supposed to be a tuple of arguments, which means that when you pass an iterable (a list) to it, it considers every list element a separate argument.</p>
<p>It can be avoided by passing a tuple containing the list to <code>args</code>, like:</p>
<pre><code>for i in result_list:
t = threading.Thread(target=final_function, args=(i,))
</code></pre>
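<p>The trailing comma is what makes <code>(i,)</code> a one-element tuple; plain <code>(i)</code> would just be <code>i</code> in parentheses and would be unpacked element by element again.</p>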
| 2 | 2016-10-06T10:52:05Z | [
"python",
"multithreading"
]
|
"The view polls.views.index didn't return an HttpResponse object. It returned None instead." Error in Django | 39,894,047 | <p>I am following the tutorial in Django documentation. I did exactly as it has said in the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">tutorial</a>.</p>
<p>ValueError at /polls/</p>
<p>When I load on browser "<a href="http://localhost:8000/polls/" rel="nofollow">http://localhost:8000/polls/</a>" I get this error:</p>
<pre><code>The view polls.views.index didn't return an HttpResponse object. It returned None instead.
Request Method: GET
Request URL: http://localhost:8000/polls/
Django Version: 1.10.1
Exception Type: ValueError
Exception Value:
The view polls.views.index didn't return an HttpResponse object. It returned None instead.
Exception Location: /home/jack/anaconda2/envs/py3k/lib/python3.4/site-packages/django/core/handlers/base.py in _get_response, line 198
Python Executable: /home/jack/anaconda2/envs/py3k/bin/python
Python Version: 3.4.5
Python Path:
['/home/jack/Documents/Django Learning/mysite',
'/home/jack/anaconda2/envs/py3k/lib/python34.zip',
'/home/jack/anaconda2/envs/py3k/lib/python3.4',
'/home/jack/anaconda2/envs/py3k/lib/python3.4/plat-linux',
'/home/jack/anaconda2/envs/py3k/lib/python3.4/lib-dynload',
'/home/jack/anaconda2/envs/py3k/lib/python3.4/site-packages/Sphinx-1.4.1-py3.4.egg',
'/home/jack/anaconda2/envs/py3k/lib/python3.4/site-packages/setuptools-23.0.0-py3.4.egg',
'/home/jack/anaconda2/envs/py3k/lib/python3.4/site-packages']
Server time: Thu, 6 Oct 2016 10:32:20 +0000
</code></pre>
<p>Code Of mysite/polls/views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
def index(request):
HttpResponse("Welcome to poll's index!")
# Create your views here.
</code></pre>
 | 0 | 2016-10-06T10:51:26Z | 39,894,163 | <p><code>HttpResponse("Welcome to poll's index!")</code> should be </p>
<pre><code>return HttpResponse("Welcome to poll's index!")
</code></pre>
<p>You are not returning anything from your view.</p>
| 3 | 2016-10-06T10:57:02Z | [
"python",
"django",
"anaconda"
]
|
Read all the data from one row of matrix | 39,894,069 | <p>I am reading a .mat file using python </p>
<pre><code>mat= sio.loadmat('C:/Users/machine-learning-ex3/ex3/ex3data1')
print(mat['X'])
print(mat['X'].shape)
</code></pre>
<p>The out put looks like </p>
<pre><code>[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
(5000, 400)
</code></pre>
<p>How can I view all the data of just one row?<br>
When I try, it gives an error like: </p>
<pre><code>TypeError: unhashable type: 'slice'
</code></pre>
 | 0 | 2016-10-06T10:52:16Z | 39,894,229 | <p>From the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html" rel="nofollow">reference</a>, loadmat returns a dictionary with variable names as keys and the loaded matrices as values. From your print statements,
mat['X'] is a 2-d array.</p>
<p>For showing ith row, simply write</p>
<pre><code>mat['X'][i]
</code></pre>
<p>Where i is the index of the row.</p>
<p>PS: you didn't mention what you tried that produced that error. If you still have problems, mention that too.</p>
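<p>As an aside, <code>TypeError: unhashable type: 'slice'</code> is what you get from slicing the dictionary itself rather than the array, e.g.:</p>
<pre><code>mat[0:5]         # TypeError: unhashable type: 'slice' (mat is a dict)
mat['X'][0:5]    # fine: the first five rows
mat['X'][0, :]   # fine: all 400 values of the first row
</code></pre>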
| 1 | 2016-10-06T11:01:00Z | [
"python"
]
|
Project Euler - Largest Palindrome #4 Python | 39,894,123 | <p>I'm new to python and decided it would be a good idea to improve my coding (in general) by doing some of the challenges on project Euler. I'm currently stuck on problem 4 and i'm not sure what is going wrong (for those not in the know, problem 4 is as follows):</p>
<blockquote>
<p>A palindromic number reads the same both ways.</p>
<p>The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 x 99.</p>
</blockquote>
<p>Find the largest palindrome made from the product of two 3-digit numbers.</p>
<pre><code>x, y = 999, 999
palindrome = []
while True:
palindrome = [i for i in str(x * y)]
r_palindrome = palindrome[::-1]
if palindrome == r_palindrome:
break
else:
y -= 1
if y < 100:
y = x
x -= 1
print x, y, palindrome
</code></pre>
<p>I seem to get the answer <code>987 * 286 = 282282</code> which feels awfully low.
Can someone explain the best way of doing this and what my current code is doing wrong rather than just a simple "here is the code" answer please. </p>
| 0 | 2016-10-06T10:55:26Z | 39,894,642 | <p>Starting at <code>x = 999</code> and <code>y = 999</code> and then decrementing only <code>y</code> until you restart with a decremented <code>x</code> and a reset <code>y</code> does not guarantee you that you will hit the largest palindrome first.</p>
<p>Just think of the following example: Let's imagine a different requirement for the product (so as not to spoil the palindrome result here). Imagine you start with <code>x = 999</code> and then get all the way down to <code>y = 101</code> before you hit the first 'valid' result.</p>
<p>In your case, you would accept <code>999 * 101 = 100899</code> as the largest result. But there may actually be a different solution <code>998 * 998 = 996004</code> which you never looked at but is obviously much larger.</p>
<p>So you need to change how you decide when to stop looking and know that you reached the largest number.</p>
<hr>
<p>Btw. as a general hint for Project Euler: Especially the first problems can be easily solved with brute force (i.e. trying out every possible solution). While this will probably not give you a satisfying feeling that you solved the problem in a smart way, it does give you an idea on how to get there. You can always work out a better solution then, but keep in mind that Project Euler has actually <em>a lot</em> problems which are completely impossible using brute force anyway, so you have enough to worry about later ;)</p>
<p>For this particular problem, you could write a list comprehension that would give you all palindromes and then get the maximum of it. That's a one-liner; it's very inefficient but for this particularly small input domain, it's fast enough to still give you an answer instantly:</p>
<pre><code>max([(x * y, x, y) for x in range(100, 1000) for y in range(100, 1000) if str(x * y) == str(x * y)[::-1]])
</code></pre>
| 3 | 2016-10-06T11:21:07Z | [
"python"
]
|
Project Euler - Largest Palindrome #4 Python | 39,894,123 | <p>I'm new to python and decided it would be a good idea to improve my coding (in general) by doing some of the challenges on project Euler. I'm currently stuck on problem 4 and i'm not sure what is going wrong (for those not in the know, problem 4 is as follows):</p>
<blockquote>
<p>A palindromic number reads the same both ways.</p>
<p>The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 x 99.</p>
</blockquote>
<p>Find the largest palindrome made from the product of two 3-digit numbers.</p>
<pre><code>x, y = 999, 999
palindrome = []
while True:
palindrome = [i for i in str(x * y)]
r_palindrome = palindrome[::-1]
if palindrome == r_palindrome:
break
else:
y -= 1
if y < 100:
y = x
x -= 1
print x, y, palindrome
</code></pre>
<p>I seem to get the answer <code>987 * 286 = 282282</code> which feels awfully low.
Can someone explain the best way of doing this and what my current code is doing wrong rather than just a simple "here is the code" answer please. </p>
| 0 | 2016-10-06T10:55:26Z | 39,895,037 | <p>OK, just to find the solution, let's use brute force:</p>
<pre><code>>>> prod = itertools.product(range(999,99,-1),range(999,99,-1))
>>> palindromes = [(x*y,x,y) for x,y in prod if str(x*y) == str(x*y)[::-1]]
>>> max(palindromes, key=lambda t: t[0])
(906609, 993, 913)
</code></pre>
<p>Took a bit of time, but at least we have the answer. I've worked out a solution that works very quickly, and I think it takes the essence of what you were trying to achieve:</p>
<pre><code>>>> x = 999
>>> palindromes = []
>>> floor = 99
>>> while x > floor:
... for i in range(x,floor,-1):
... product = x*i
... candidate = str(product)
... if candidate == candidate[::-1]:
... palindromes.append((product,x,i))
... floor = i
... break
... x -= 1
...
>>> palindromes
[(580085, 995, 583), (906609, 993, 913), (886688, 968, 916), (888888, 962, 924)]
>>>
</code></pre>
<p>Essentially, this updates the floor with the lowest number that last gave us a palindrome. We know we don't have to look lower than that each time. </p>
| 0 | 2016-10-06T11:40:31Z | [
"python"
]
|
How to compute the median and 68% confidence interval around the median of non-Gaussian distribution in Python? | 39,894,213 | <p>I have a data set which is a numpy array say a=[a1,a2,.....] and also the weights of the data w=[w1,w2,w3...]. I have computed the histogram using numpy histogram package which gives me the hist array. Now I want to compute the median of this probability distribution function and also the 68% contour around the median. Remember my dataset is not Gaussian. </p>
<p>Can anyone help? I am using python.</p>
| 1 | 2016-10-06T11:00:02Z | 40,051,761 | <p>Here a solution using <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.html" rel="nofollow">scipy.stats.rv_discrete</a>:</p>
<pre><code>import numpy as np, scipy.stats as st
# example data set
a = np.arange(20)
w = a + 1
# create custom discrete random variable from data set
rv = st.rv_discrete(values=(a, w/float(w.sum())))
# scipy.stats.rv_discrete has methods for median, confidence interval, etc.
print("median:", rv.median())
print("68% CI:", rv.interval(0.68))
</code></pre>
<p>Output reflects the uneven weights in the example data set:</p>
<pre class="lang-none prettyprint-override"><code>median: 13.0
68% CI: (7.0, 18.0)
</code></pre>
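<p>As a hand-rolled sanity check (assuming <code>a</code> and <code>w</code> are numpy arrays), the same quantiles can be read off the cumulative weight distribution - each line picks the first value whose cumulative weight reaches the target quantile:</p>
<pre><code>order = np.argsort(a)
cdf = np.cumsum(w[order]) / float(np.sum(w))
median = a[order][np.searchsorted(cdf, 0.50)]
lo, hi = a[order][np.searchsorted(cdf, [0.16, 0.84])]
</code></pre>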
| 2 | 2016-10-14T20:48:23Z | [
"python",
"numpy",
"scipy",
"statistics"
]
|
Python checking equality of tuples | 39,894,363 | <p>I have a numpy array of source and destination ip's</p>
<pre><code>consarray
array([['10.125.255.133', '104.244.42.130'],
['104.244.42.130', '10.125.255.133']], dtype=object)
</code></pre>
<p>The actual array is much larger than this.</p>
<p>I want to create a set of unique connection pairs from the array:</p>
<p>In the given example, it is clear that both rows of the numpy array are part of the same connection (just the src and destination are interchanged, i.e. outgoing and incoming respectively).</p>
<p>I tried creating a set of unique tuples,
like this:</p>
<pre><code>conset = set(map(tuple,consarray))
conset
{('10.125.255.133', '104.244.42.130'), ('104.244.42.130', '10.125.255.133')}
</code></pre>
<p>What I actually want is for ('10.125.255.133', '104.244.42.130') and ('104.244.42.130', '10.125.255.133') to be considered the same, so that only one of them will be in the set.</p>
<p>Can anyone tell me how I go about doing this?</p>
<p><strong>EDIT:</strong></p>
<p>There have been some good answers, but actually I have another requirement:</p>
<p>I want the first occurrence to always be the one retained, irrespective of the IP address.</p>
<p>In the above example, ('10.125.255.133', '104.244.42.130') appears first, so it is the outgoing connection; I want to retain this.</p>
<p>If the above example changed to:</p>
<pre><code>consarray
array([['104.244.42.130', '10.125.255.133'],
       ['10.125.255.133', '104.244.42.130']], dtype=object)
</code></pre>
<p>I would want ('104.244.42.130', '10.125.255.133') to be retained.</p>
| 3 | 2016-10-06T11:08:18Z | 39,894,511 | <p>You can sort them first:</p>
<pre><code>conset = set(map(tuple, map(sorted, consarray)))
print (conset)
</code></pre>
<p>gives:</p>
<pre><code>{('10.125.255.133', '104.244.42.130')}
</code></pre>
| 1 | 2016-10-06T11:14:51Z | [
"python",
"tuples"
]
|
Python checking equality of tuples | 39,894,363 | <p>I have a numpy array of source and destination ip's</p>
<pre><code>consarray
array([['10.125.255.133', '104.244.42.130'],
['104.244.42.130', '10.125.255.133']], dtype=object)
</code></pre>
<p>The actual array is much larger than this.</p>
<p>I want to create a set of unique connection pairs from the array:</p>
<p>In the given example, it is clear that both rows of the numpy array are part of the same connection (just the src and destination are interchanged, i.e. outgoing and incoming respectively).</p>
<p>I tried creating a set of unique tuples,
like this:</p>
<pre><code>conset = set(map(tuple,consarray))
conset
{('10.125.255.133', '104.244.42.130'), ('104.244.42.130', '10.125.255.133')}
</code></pre>
<p>What I actually want is for ('10.125.255.133', '104.244.42.130') and ('104.244.42.130', '10.125.255.133') to be considered the same, so that only one of them will be in the set.</p>
<p>Can anyone tell me how I go about doing this?</p>
<p><strong>EDIT:</strong></p>
<p>There have been some good answers, but actually I have another requirement:</p>
<p>I want the first occurrence to always be the one retained, irrespective of the IP address.</p>
<p>In the above example, ('10.125.255.133', '104.244.42.130') appears first, so it is the outgoing connection; I want to retain this.</p>
<p>If the above example changed to:</p>
<pre><code>consarray
array([['104.244.42.130', '10.125.255.133'],
       ['10.125.255.133', '104.244.42.130']], dtype=object)
</code></pre>
<p>I would want ('104.244.42.130', '10.125.255.133') to be retained.</p>
| 3 | 2016-10-06T11:08:18Z | 39,894,522 | <p>You could either apply <em>sorting</em> before making the <em>tuples</em>:</p>
<pre><code>conset = set(map(lambda x: tuple(sorted(x)), consarray))
</code></pre>
<p>Or use <em>frozensets</em> instead of <em>tuples</em>:</p>
<pre><code>conset = set(map(frozenset, consarray))
</code></pre>
<p>To guarantee that the first item will be retained and the second not inserted, you could use a <em>regular</em> <code>for</code> loop:</p>
<pre><code>conset = set()
for x in consarray:
x = frozenset(x)
if x in conset:
continue
conset.add(x)
</code></pre>
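<p>To satisfy the EDIT - keeping the first-seen <em>ordered</em> pair rather than a frozenset - a sketch that tracks seen unordered pairs while collecting ordered tuples:</p>
<pre><code>seen = set()
first_pairs = []
for src, dst in consarray:
    key = frozenset((src, dst))   # unordered identity of the connection
    if key not in seen:
        seen.add(key)
        first_pairs.append((src, dst))   # keeps the original direction
</code></pre>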
| 3 | 2016-10-06T11:15:20Z | [
"python",
"tuples"
]
|
Python checking equality of tuples | 39,894,363 | <p>I have a numpy array of source and destination ip's</p>
<pre><code>consarray
array([['10.125.255.133', '104.244.42.130'],
['104.244.42.130', '10.125.255.133']], dtype=object)
</code></pre>
<p>The actual array is much larger than this.</p>
<p>I want to create a set of unique connection pairs from the array:</p>
<p>In the given example, it is clear that both rows of the numpy array are part of the same connection (just the src and destination are interchanged, i.e. outgoing and incoming respectively).</p>
<p>I tried creating a set of unique tuples,
like this:</p>
<pre><code>conset = set(map(tuple,consarray))
conset
{('10.125.255.133', '104.244.42.130'), ('104.244.42.130', '10.125.255.133')}
</code></pre>
<p>What I actually want is for ('10.125.255.133', '104.244.42.130') and ('104.244.42.130', '10.125.255.133') to be considered the same, so that only one of them will be in the set.</p>
<p>Can anyone tell me how I go about doing this?</p>
<p><strong>EDIT:</strong></p>
<p>There have been some good answers, but actually I have another requirement:</p>
<p>I want the first occurrence to always be the one retained, irrespective of the IP address.</p>
<p>In the above example, ('10.125.255.133', '104.244.42.130') appears first, so it is the outgoing connection; I want to retain this.</p>
<p>If the above example changed to:</p>
<pre><code>consarray
array([['104.244.42.130', '10.125.255.133'],
       ['10.125.255.133', '104.244.42.130']], dtype=object)
</code></pre>
<p>I would want ('104.244.42.130', '10.125.255.133') to be retained.</p>
| 3 | 2016-10-06T11:08:18Z | 39,894,721 | <p>Since you're using <code>numpy</code>, you can use <code>numpy.unique</code>, eg:</p>
<pre><code>a = np.array([('10.125.255.133', '104.244.42.130'), ('104.244.42.130', ' 10.125.255.133')])
</code></pre>
<p>Then <code>np.unique(a)</code> gives you:</p>
<pre><code>array(['10.125.255.133', '104.244.42.130'], dtype='<U14')
</code></pre>
| 1 | 2016-10-06T11:25:03Z | [
"python",
"tuples"
]
|
error installing PyObjC osX | 39,894,476 | <p>I am new to Python and I am desperately trying to install PyObjC via spyder.
The command</p>
<pre><code>pip install PyObjC
</code></pre>
<p>returns an error: </p>
<blockquote>
<p>Command "python setup.py egg_info" failed with error code 1 in
/private/var/folders/0v/cg_rdz_x4d32jm6hz7n5txgh3djxb1/T/pip-build-UBlDtP/pyobjc-core/</p>
</blockquote>
<p>None of the solutions found in other posts worked. I have also tried to install the whole thing manually & via conda.
It seems that Xcode is installed,
and this is the only module I have had issues with. </p>
<p>Any suggestions? :)</p>
<p>Note: I'm working with Python 2.7, on macOS Sierra 10.12</p>
 | 1 | 2016-10-06T11:13:10Z | 39,894,901 | <p>The actual error message that helps more is at the beginning of the traceback. In my case it is:</p>
<pre><code>xcode-select: error: tool 'xcodebuild' requires Xcode, but active developer directory '/Library/Developer/CommandLineTools' is a command line tools instance
Traceback (most recent call last):
File "<string>", line 1, in <module>
...
</code></pre>
<p>If you are getting the same, here are the answers on how to debug this:
<a href="http://stackoverflow.com/questions/17980759/xcode-select-active-developer-directory-error">xcode-select active developer directory error</a></p>
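<p>If that is indeed the error, the usual fix (detailed in the linked question) is to point <code>xcode-select</code> at the full Xcode installation:</p>
<pre><code>sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
</code></pre>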
| 0 | 2016-10-06T11:33:43Z | [
"python",
"osx",
"python-2.7",
"install",
"pyobjc"
]
|
merge data based on some specific columns, pandas | 39,894,527 | <p>Assume I have two data frames, <code>t1h</code> and <code>t2h</code>. I want to merge them in such a way that, for a specific list of columns, if two rows are the same I need to add up the contents of the rest of the columns.</p>
<p><strong>t1h</strong></p>
<pre><code> timestamp ip domain http_status \
0 1475740500.0 192.168.1.1 example.com 200
1 1475740500.0 192.168.1.1 example.com 200
2 1475740500.0 192.168.1.1 example.com 201
3 1475740500.0 192.168.1.1 example.com 201
4 1475740500.0 192.168.1.1 example.com 202
test b_count b_sum test_count test_sum data1 \
0 False 46 24742949931480 46 9.250 0
1 True 48 28151237474796 48 9.040 0
2 False 36 21702308613722 36 7.896 0
3 True 24 13112423049120 24 5.602 0
4 False 62 29948023487954 62 12.648 0
data2
0 0
1 0
2 0
3 0
4 0
</code></pre>
<p><strong>t2h</strong></p>
<pre><code> timestamp ip domain http_status \
0 1475740500.0 192.168.1.1 example.com 200
1 1475740500.0 192.168.1.1 example.com 200
2 1475740500.0 192.168.1.1 example.com 201
3 1475740500.0 192.168.1.1 example.com 201
4 1475740500.0 192.168.1.1 example.com 202
test b_count b_sum test_count test_sum data1 \
0 False 44 22349502626302 44 9.410 0
1 True 32 16859760597754 32 5.988 0
2 False 46 23478212117794 46 8.972 0
3 True 36 20956236750016 36 7.124 0
4 False 54 35255787384306 54 9.898 0
data2
0 0
1 0
2 0
3 0
4 0
</code></pre>
<p>Based on the below column list, I currently get the following output:</p>
<pre><code>groupby_fields = ['timestamp', 'ip', 'domain', 'http_status', 'test']
pd.merge(t1h, t2h, on=groupby_fields)
timestamp ip domain http_status \
0 1475740500.0 192.168.1.1 example.com 200
1 1475740500.0 192.168.1.1 example.com 200
2 1475740500.0 192.168.1.1 example.com 201
3 1475740500.0 192.168.1.1 example.com 201
4 1475740500.0 192.168.1.1 example.com 202
test b_count_x b_sum_x test_count_x test_sum_x \
0 False 46 24742949931480 46 9.250
1 True 48 28151237474796 48 9.040
2 False 36 21702308613722 36 7.896
3 True 24 13112423049120 24 5.602
4 False 62 29948023487954 62 12.648
data1_x data2_x b_count_y b_sum_y \
0 0 0 44 22349502626302
1 0 0 32 16859760597754
2 0 0 46 23478212117794
3 0 0 36 20956236750016
4 0 0 54 35255787384306
test_count_y test_sum_y data1_y data2_y
0 44 9.410 0 0
1 32 5.988 0 0
2 46 8.972 0 0
3 36 7.124 0 0
4 54 9.898 0 0
</code></pre>
<p>Instead, I want the output to look like this: </p>
<blockquote>
<p>Note: except the columns in <code>groupby_fields</code> every other column are of either type either <code>int</code> or <code>float</code></p>
</blockquote>
<pre><code> timestamp ip domain http_status \
0 1475740500.0 192.168.1.1 example.com 200
1 1475740500.0 192.168.1.1 example.com 200
2 1475740500.0 192.168.1.1 example.com 201
3 1475740500.0 192.168.1.1 example.com 201
4 1475740500.0 192.168.1.1 example.com 202
test b_count b_sum test_count test_sum \
0 False 90 47092452557782 90 18.660
1 True 80 45010998072550 80 15.028
2 False 82 45180520731516 82 16.868
3 True 60 34068659799136 60 12.726
4 False 116 65203810872260 116 22.546
data1 data2 \
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
</code></pre>
<p>Please let me know how can I achieve that in an optimized way.</p>
| 1 | 2016-10-06T11:15:29Z | 39,895,158 | <h2>Great use case for the <code>groupby.agg()</code> function</h2>
<p>Assuming that <code>t1h</code> and <code>t2h</code> already exist and have the same column names:</p>
<pre><code>groupby_fields = ['timestamp', 'ip', 'domain', 'http_status', 'test']
df = t1h.append(t2h, ignore_index = True)  # stack both frames before aggregating
agg_dict = {'b_count':'sum',
            'b_sum':'sum',
            'test_count':'sum',
            'test_sum':'sum',
            'data1':'sum',
            'data2':'sum'}
df.groupby(groupby_fields).agg(agg_dict).reset_index()
</code></pre>
| 1 | 2016-10-06T11:46:26Z | [
"python",
"pandas",
"dataframe",
"merge"
]
|
pytest collecting 0 items | 39,894,566 | <p>I have a socket program - 2 scripts, a server and a client. On the server side I have many functions that I want to test. I am new to Python and found something called pytest, so for every function on my server side I did something like this: </p>
<pre><code>def fun(a):
    # fn definition
    return b
def test_fun():
assert fun(test_case) == "expected value"
</code></pre>
<p>I named this server script test_server.py and imported pytest. I also imported pytest on the client side and renamed that script to test_client.py. Then I run:</p>
<blockquote>
<p>py.test test_server.py</p>
</blockquote>
<p>and then </p>
<blockquote>
<p>py.test test_client.py</p>
</blockquote>
<p>On the server side it says collecting 0 items, and that's it - it does not collect any tests. Any idea where I am going wrong? BTW, I tried pytest with simple Python code and there it works properly. Is it that pytest doesn't work with socket programming, or am I making a mistake? There is also no mistake in the code itself without pytest; it works perfectly fine when I do </p>
<blockquote>
<p>python test_server.py</p>
</blockquote>
<p>and then, </p>
<blockquote>
<p>python test_client.py</p>
</blockquote>
| 0 | 2016-10-06T11:17:39Z | 39,895,065 | <p>If you want to unit-test your client functions, you should mock the server responses. If you want to run integration tests for the client, start your server with:</p>
<pre><code>python test_server.py
</code></pre>
<p>and run your client tests as:</p>
<pre><code>py.test test_client.py
</code></pre>
<p>py.test only runs functions whose names start with <code>test_</code>, so my guess is that your server does not even start under pytest.</p>
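<p>A common layout (a minimal sketch - it assumes your server logic lives in an importable module, hypothetically named <code>server.py</code>) is to keep the blocking socket loop out of the test file and import only the functions under test:</p>
<pre><code># test_server_functions.py -- hypothetical file; adjust the module name to your own
from server import fun  # import the pure functions, do not start the socket loop

def test_fun():
    assert fun("test_case") == "expected value"
</code></pre>
<p>Guarding the socket startup in the server script with <code>if __name__ == '__main__':</code> keeps it from running while pytest collects tests.</p>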
| 0 | 2016-10-06T11:41:36Z | [
"python",
"sockets",
"py.test"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,894,737 | <p><a href="https://docs.python.org/2/library/os.path.html#os.path.split" rel="nofollow"><code>os.path.split</code></a> will do it for you. There are many other interesting functions in there.</p>
| 1 | 2016-10-06T11:25:41Z | [
"python"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,894,743 | <pre><code>'/'.join (originalPath.split ('/') [:-2])
</code></pre>
<p>It will take you 2 levels up. </p>
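<p>For example:</p>
<pre><code>>>> originalPath = '/path1/path2/path3/file'
>>> '/'.join(originalPath.split('/')[:-2])
'/path1/path2'
</code></pre>
<p>Note that this splits on a literal <code>'/'</code>, so unlike the <code>os.path</code> solutions it won't handle Windows-style separators.</p>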
| 0 | 2016-10-06T11:25:59Z | [
"python"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,894,751 | <p><a href="https://docs.python.org/2/library/os.path.html#os.path.dirname" rel="nofollow">os.path.dirname</a> gives you the parent directory.</p>
| 0 | 2016-10-06T11:26:35Z | [
"python"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,894,831 | <p><code>os.path.dirname()</code> (<a href="https://docs.python.org/2/library/os.path.html#os.path.dirname" rel="nofollow"><em>doc</em></a>) is the way to go. It returns the directory which contains the object pointed by the path:</p>
<pre><code>>>> import os.path
>>> os.path.dirname('/path1/path2/path3/file')
'/path1/path2/path3'
</code></pre>
<p>In this case, you want the "grandparent" directory, so just use the function twice:</p>
<pre><code>>>> parent = os.path.dirname('/path1/path2/path3/file')
>>> os.path.dirname(parent)
'/path1/path2'
</code></pre>
<p>If you want to do it an arbitrary number of times, a function can be helpful here:</p>
<pre><code>def go_up(path, n):
for i in range(n):
path = os.path.dirname(path)
return path
</code></pre>
<p>Here are some examples:</p>
<pre><code>>>> go_up('/path1/path2/path3/file', 1)
'/path1/path2/path3'
>>> go_up('/path1/path2/path3/file', 2)
'/path1/path2'
>>> go_up('/path1/path2/path3/file', 3)
'/path1'
</code></pre>
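<p>One edge case worth knowing (this is standard <code>os.path.dirname</code> behaviour, not specific to the function above): a trailing separator counts as an empty last component, so the first step only strips the slash:</p>
<pre><code>>>> go_up('/path1/path2/path3/', 1)
'/path1/path2/path3'
>>> go_up('/path1/path2/path3/', 2)
'/path1/path2'
</code></pre>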
| 4 | 2016-10-06T11:30:04Z | [
"python"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,895,111 | <p>additionally to all other answers, you can also use <a href="https://docs.python.org/2/library/os.path.html#os.path.join" rel="nofollow">os.path.join</a> function with the <code>os.path.pardir</code> variable which usually equals <code>..</code>.</p>
<pre><code>>>> path = "some/random/path/to/process"
>>> parent = os.path.pardir
>>> grand_parent_path = os.path.join(path, parent, parent)
>>> grand_parent_path
'some/random/path/to/process\\..\\..'
</code></pre>
<p>If you don't want to repeat <code>parent</code> several times, you can multiply a list:</p>
<pre><code>>>> os.path.join(path, *([parent] * 3))
'some/random/path/to/process\\..\\..\\..'
</code></pre>
<p>The result is quite ugly, so you can use the <a href="https://docs.python.org/2/library/os.path.html?highlight=os.path.realpath#os.path.normpath" rel="nofollow">os.path.normpath</a> function to make it prettier:</p>
<pre><code>>>> os.path.normpath(os.path.join(path, *([parent] * 3)))
'some\\random'
</code></pre>
<p><strong>Important note</strong>: Doing so, you are sure your code is portable</p>
| 2 | 2016-10-06T11:43:49Z | [
"python"
]
|
Path manipulation in python | 39,894,698 | <p>I have looked everywhere on Stack Overflow but I can't find a solution to this problem.</p>
<p>Given that I have a folder/file as string: <code>"/path1/path2/path3/file"</code> how can I get the parent folder and its parent folder. In other words if I want to traverse up one level <code>"/path1/path2/path3"</code> or two levels <code>"/path1/path2"</code> how can I get those string values from the original string path in python?</p>
<p>Please note that I <strong>don't simply want the pieces</strong> of the path (in other words not a list of <code>['path1', 'path2', 'path3']</code>) but instead <code>"/path1/path2/path3"</code>.</p>
<p>Thank you </p>
| 2 | 2016-10-06T11:23:50Z | 39,895,494 | <p>You can use the <a href="https://docs.python.org/3.6/library/pathlib.html#module-pathlib" rel="nofollow"><code>pathlib</code></a> module:</p>
<pre><code>>>> import os.path, pathlib
>>> path = pathlib.PurePath('/file1/file2/file3/file')
>>> path.parts
('/', 'file1', 'file2', 'file3', 'file')
>>> os.path.join(*path.parts[:-2])
'/file1/file2'
</code></pre>
<p>So just put <code>path.parts[:-n]</code> for <code>n</code> levels up.</p>
<p>Alternatively you can use the <a href="https://docs.python.org/3.6/library/pathlib.html#pathlib.PurePath.parents" rel="nofollow"><code>parents</code></a> attribute:</p>
<pre><code>>>> path = pathlib.PurePath('/file1/file2/file3/file4/file5/file6')
>>> path.parents[0]
PurePosixPath('/file1/file2/file3/file4/file5')
>>> path.parents[1]
PurePosixPath('/file1/file2/file3/file4')
>>> path.parents[4]
PurePosixPath('/file1')
</code></pre>
<p>So to go up <code>n</code> levels just use <code>parents[n-1]</code>.</p>
<p>To convert a <code>*Path</code> object to a string just call <code>str</code> on it:</p>
<pre><code>>>> str(path)
'/file1/file2/file3/file4/file5/file6'
</code></pre>
| 2 | 2016-10-06T12:03:45Z | [
"python"
]
|
Combine two files with Pandas | 39,894,821 | <p>I am having trouble using pandas to combine two input files as shown in the data sample below. They start out as CSV files exported from WordPress, which I load into data frames. My idea was to create an empty output data frame and fill it by looping through each <code>id</code> in the first input file, but that seems cumbersome and doesn't take advantage of Pandas' strengths. And because I'm new to Pandas I can't figure out how to convert the list-type second file into my desired output format.</p>
<p><strong>input_file_1:</strong></p>
<pre><code>id postDate
23 2016-10-03
24 2016-02-15
25 2016-07-22
</code></pre>
<p><strong>input_file_2:</strong></p>
<pre><code>id key value
23 name smith
23 age 24
23 city boston
24 name jones
24 age 35
24 city chicago
25 name williams
25 age 21
25 city dallas
</code></pre>
<p><strong>desired_output_file:</strong></p>
<pre><code>id postDate name age city
23 2016-10-03 smith 24 boston
24 2016-02-15 jones 35 chicago
25 2016-07-22 williams 21 dallas
</code></pre>
| 1 | 2016-10-06T11:29:40Z | 39,894,925 | <p>you can use <code>pivot</code> in conjunction with <code>join</code>:</p>
<pre><code>In [126]: df1.set_index('id').join(df2.pivot(index='id', columns='key', values='value'))
Out[126]:
postDate age city name
id
23 2016-10-03 24 boston smith
24 2016-02-15 35 chicago jones
25 2016-07-22 21 dallas williams
</code></pre>
<p>explanation:</p>
<pre><code>In [127]: df2.pivot(index='id', columns='key', values='value')
Out[127]:
key age city name
id
23 24 boston smith
24 35 chicago jones
25 21 dallas williams
</code></pre>
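<p>End to end - assuming the two WordPress exports live in CSV files (the file names below are placeholders) - the whole pipeline looks like this:</p>
<pre><code>import pandas as pd

# load the two exports; adjust the file names to your own
df1 = pd.read_csv('input_file_1.csv')
df2 = pd.read_csv('input_file_2.csv')

wide = df2.pivot(index='id', columns='key', values='value')  # one column per key
out = df1.set_index('id').join(wide)
out.reset_index().to_csv('desired_output_file.csv', index=False)
</code></pre>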
| 0 | 2016-10-06T11:34:40Z | [
"python",
"pandas"
]
|
Set points outside plot to upper limit | 39,894,896 | <p>Maybe this question exists already, but I could not find it. </p>
<p>I am making a scatter plot in Python. For illustrative purposes, I don't want to set my axes range such that all points are included - there may be some really high or really low values, and all I care about in those points is that they exist - that is, they need to be in the plot, but not on their actual value - rather, somewhere on the top of the canvas.</p>
<p>I know that in IDL there is a nice short syntax for this: in <code>plot(x,y<value)</code> any value in y greater than <code>value</code> will simply be put at <code>y=value</code>.</p>
<p>I am looking for something similar in Python. Can somebody help me out?</p>
| 3 | 2016-10-06T11:33:06Z | 39,896,204 | <p>There is no equivalent syntactic sugar in matplotlib. You will have to preprocess your data, e.g.: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
ymin, ymax = 0, 0.9
x, y = np.random.rand(2,1000)
y[y>ymax] = ymax
fig, ax = plt.subplots(1,1)
ax.plot(x, y, 'o', ms=10)
ax.set_ylim(ymin, ymax)
plt.show()
</code></pre>
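<p>If you also want to pin the "really low values" the question mentions, <code>np.clip</code> handles both ends in one call:</p>
<pre><code>y = np.clip(y, ymin, ymax)  # anything outside [ymin, ymax] is set to the nearest bound
</code></pre>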
| 2 | 2016-10-06T12:36:31Z | [
"python",
"matplotlib",
"plot",
"limits"
]
|
Set points outside plot to upper limit | 39,894,896 | <p>Maybe this question exists already, but I could not find it. </p>
<p>I am making a scatter plot in Python. For illustrative purposes, I don't want to set my axes range such that all points are included - there may be some really high or really low values, and all I care about in those points is that they exist - that is, they need to be in the plot, but not on their actual value - rather, somewhere on the top of the canvas.</p>
<p>I know that in IDL there is a nice short syntax for this: in <code>plot(x,y<value)</code> any value in y greater than <code>value</code> will simply be put at <code>y=value</code>.</p>
<p>I am looking for something similar in Python. Can somebody help me out?</p>
| 3 | 2016-10-06T11:33:06Z | 39,896,349 | <p>you can just use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.minimum.html" rel="nofollow"><code>np.minimum</code></a> on the <code>y</code> data to set anything above your upper limit to that limit. <code>np.minimum</code> calculates the minima element-wise, so only those values greater than <code>ymax</code> will be set to <code>ymax</code>.</p>
<p>For example: </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0., np.pi*2, 30)
y = 10. * np.sin(x)
ymax = 5
fig, ax = plt.subplots(1)
ax.scatter(x, np.minimum(y, ymax))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/cEMSo.png" rel="nofollow"><img src="http://i.stack.imgur.com/cEMSo.png" alt="enter image description here"></a></p>
| 1 | 2016-10-06T12:43:45Z | [
"python",
"matplotlib",
"plot",
"limits"
]
|