Dataset schema: title (string, 10–172 chars) · question_id (int64, 469–40.1M) · question_body (string, 22–48.2k chars) · question_score (int64, −44–5.52k) · question_date (string, 20 chars) · answer_id (int64, 497–40.1M) · answer_body (string, 18–33.9k chars) · answer_score (int64, −38–8.38k) · answer_date (string, 20 chars) · tags (list)
Change the values of string in parenthesis which starts with Varchar
39,489,971
<p>I have a python variable (type unicode), which holds a query statement:</p> <pre><code>CREATE TABLE afg_temp (tadm1_code INTEGER NOT NULL, tadm1_name VARCHAR(10) NOT NULL, test VARCHAR(4)); </code></pre> <p>I need to replace all the values in each VARCHAR occurence with 254. Initially I thought it would be enough to use the replace() function but this won't work as the numbers in the parenthesis are random.</p> <p>Is there a way to do this using regex?</p>
-2
2016-09-14T12:03:00Z
39,490,059
<p>With a regexp:</p> <pre><code>import re query = 'CREATE TABLE afg_temp (tadm1_code INTEGER NOT NULL, tadm1_name VARCHAR(10) NOT NULL, test VARCHAR(4));' print re.sub(r'VARCHAR\([0-9]*\)', 'VARCHAR(254)', query) </code></pre>
4
2016-09-14T12:07:50Z
[ "python", "regex" ]
Change the values of string in parenthesis which starts with Varchar
39,489,971
<p>I have a python variable (type unicode), which holds a query statement:</p> <pre><code>CREATE TABLE afg_temp (tadm1_code INTEGER NOT NULL, tadm1_name VARCHAR(10) NOT NULL, test VARCHAR(4)); </code></pre> <p>I need to replace all the values in each VARCHAR occurence with 254. Initially I thought it would be enough to use the replace() function but this won't work as the numbers in the parenthesis are random.</p> <p>Is there a way to do this using regex?</p>
-2
2016-09-14T12:03:00Z
39,490,083
<pre><code>import re s = '''CREATE TABLE afg_temp (tadm1_code INTEGER NOT NULL, tadm1_name VARCHAR(10) NOT NULL, test VARCHAR(4));''' re.sub(r'VARCHAR\([0-9]{1,3}\)', 'VARCHAR(254)', s) </code></pre> <p>There are plenty of other solutions, and you don't even need Python for this. The easiest would be to use sed.</p>
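A sketch of the sed route mentioned above (assuming a sed that supports extended regexes via `-E`, which GNU and BSD sed both do):

```shell
# replace every VARCHAR(n) width with 254
out=$(echo 'tadm1_name VARCHAR(10) NOT NULL, test VARCHAR(4)' \
  | sed -E 's/VARCHAR\([0-9]+\)/VARCHAR(254)/g')
echo "$out"
```

In a real migration you would run sed over the SQL file directly, e.g. `sed -E -i.bak 's/VARCHAR\([0-9]+\)/VARCHAR(254)/g' schema.sql`.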
1
2016-09-14T12:09:00Z
[ "python", "regex" ]
Kivy - My ScrollView doesn't scroll
39,489,980
<p>I'm having problems in my Python application with Kivy library. In particular I'm trying to create a scrollable list of elements in a TabbedPanelItem, but I don't know why my list doesn't scroll.</p> <p>Here is my kv file:</p> <pre><code>#:import sm kivy.uix.screenmanager ScreenManagement: transition: sm.FadeTransition() SecondScreen: &lt;SecondScreen&gt;: tabba: tabba name: 'second' FloatLayout: background_color: (255, 255, 255, 1.0) BoxLayout: orientation: 'vertical' size_hint: 1, 0.10 pos_hint: {'top': 1.0} canvas: Color: rgba: (0.98, 0.4, 0, 1.0) Rectangle: pos: self.pos size: self.size Label: text: 'MyApp' font_size: 30 size: self.texture_size BoxLayout: orientation: 'vertical' size_hint: 1, 0.90 Tabba: id: tabba BoxLayout: orientation: 'vertical' size_hint: 1, 0.10 pos_hint: {'bottom': 1.0} Button: background_color: (80, 1, 0, 1.0) text: 'Do nop' font_size: 25 &lt;Tabba&gt;: do_default_tab: False background_color: (255, 255, 255, 1.0) TabbedPanelItem: text: 'First_Tab' Tabs: TabbedPanelItem: text: 'Second_Tab' Tabs: TabbedPanelItem: text: 'Third_Tab' Tabs: &lt;Tabs&gt;: grid: grid ScrollView: scroll_timeout: 250 scroll_distance: 20 do_scroll_y: True do_scroll_x: False GridLayout: id: grid cols: 1 spacing: 10 padding: 10 Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: 
text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) Label: text:'scroll' color: (0, 0, 0, 1.0) </code></pre> <p>And here my .py code:</p> <pre><code>__author__ = 'drakenden' __version__ = '0.1' import kivy kivy.require('1.9.0') # replace with your current kivy version ! from kivy.app import App from kivy.lang import Builder from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition from kivy.properties import StringProperty, ObjectProperty,NumericProperty from kivy.uix.tabbedpanel import TabbedPanel from kivy.uix.boxlayout import BoxLayout from kivy.uix.button import Button from kivy.utils import platform from kivy.uix.gridlayout import GridLayout from kivy.uix.label import Label from kivy.uix.scrollview import ScrollView class Tabs(ScrollView): def __init__(self, **kwargs): super(Tabs, self).__init__(**kwargs) class Tabba(TabbedPanel): pass class SecondScreen(Screen): pass class ScreenManagement(ScreenManager): pass presentation = Builder.load_file("layout2.kv") class MyApp(App): def build(self): return presentation MyApp().run() </code></pre> <p>Where/What am I doing wrong? </p> <p>(Comments and suggests for UI improvements are also accepted)</p>
0
2016-09-14T12:03:22Z
39,490,992
<p>I haven't used Kivy for a while myself, but if I remember correctly: the layout inside a ScrollView has to be BIGGER than the ScrollView itself, e.g. if the ScrollView is 1000px wide and the GridLayout is 1100px, it becomes possible to scroll it by 100px.</p>
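A common way to express this in kv (a sketch, not the asker's exact layout): opt the GridLayout out of automatic vertical sizing so its height can grow past the ScrollView, using the `minimum_height` property that Kivy's GridLayout provides.

```kv
<Tabs>:
    grid: grid
    ScrollView:
        do_scroll_y: True
        do_scroll_x: False
        GridLayout:
            id: grid
            cols: 1
            size_hint_y: None              # opt out of size_hint so the grid can be taller than the ScrollView
            height: self.minimum_height    # grow to fit all children, which is what makes scrolling possible
```

Without `size_hint_y: None`, the grid is forced to the ScrollView's own height, so there is nothing to scroll.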
1
2016-09-14T12:54:43Z
[ "android", "python", "scroll", "kivy", "kivy-language" ]
How to make any method from view/model as celery task
39,490,052
<p>I have some of the analytics methods in <code>models.py</code> under <code>class Analytics</code> (e.g: <code>Analytics.record_read_analytics()</code>). And we are calling those methods for recording analytics, which doesn't need to be synchronous. Currently it's affecting rendering of each request so decided to add these methods in celery queue. We are already using celery for some of our tasks hence we have <code>tasks.py</code> and <code>celery.py</code> file.</p> <p><strong>Following is section of <code>models.py</code> file:</strong></p> <pre><code>class Analytics(): ... ... @staticmethod def method_a(): ... ... def method_b(): ... ... @staticmethod def record_read_analytics(): ... ... </code></pre> <p>I don't wanted to write again same model level class methods in <code>tasks.py</code> and wanted to make some of view method's and model level class methods as celery task.</p> <p><strong>Following is <code>celery.py</code> file:</strong></p> <pre><code>from __future__ import absolute_import from celery import Celery app = Celery('gnowsys_ndf', include=['gnowsys_ndf.tasks']) app.config_from_object('gnowsys_ndf.celeryconfig') if __name__ == '__main__': app.start() </code></pre> <p>I'm new to <code>celery</code> and looking for help. Thank you in advance.</p>
0
2016-09-14T12:07:36Z
39,490,279
<p>You may achieve it like:</p> <pre><code>Analytics.record_read_analytics.delay()  # no instance needed, since it is a staticmethod </code></pre> <p>Also, you need to add the <code>@task</code> decorator to the <code>record_read_analytics</code> function.</p>
0
2016-09-14T12:18:43Z
[ "python", "django", "celery", "django-celery" ]
How to make any method from view/model as celery task
39,490,052
<p>I have some of the analytics methods in <code>models.py</code> under <code>class Analytics</code> (e.g: <code>Analytics.record_read_analytics()</code>). And we are calling those methods for recording analytics, which doesn't need to be synchronous. Currently it's affecting rendering of each request so decided to add these methods in celery queue. We are already using celery for some of our tasks hence we have <code>tasks.py</code> and <code>celery.py</code> file.</p> <p><strong>Following is section of <code>models.py</code> file:</strong></p> <pre><code>class Analytics(): ... ... @staticmethod def method_a(): ... ... def method_b(): ... ... @staticmethod def record_read_analytics(): ... ... </code></pre> <p>I don't wanted to write again same model level class methods in <code>tasks.py</code> and wanted to make some of view method's and model level class methods as celery task.</p> <p><strong>Following is <code>celery.py</code> file:</strong></p> <pre><code>from __future__ import absolute_import from celery import Celery app = Celery('gnowsys_ndf', include=['gnowsys_ndf.tasks']) app.config_from_object('gnowsys_ndf.celeryconfig') if __name__ == '__main__': app.start() </code></pre> <p>I'm new to <code>celery</code> and looking for help. Thank you in advance.</p>
0
2016-09-14T12:07:36Z
39,490,334
<p>You can create tasks out of <a href="http://docs.celeryproject.org/en/latest/reference/celery.contrib.methods.html" rel="nofollow">methods</a>. The bad thing about this is that the object itself gets passed around (because the state of the object in the worker has to be the same as the state of the caller) in order for it to be called, so you lose some flexibility. Your object has to be pickled every time, which is why I am against this solution. <strong>Of course this concerns only instance methods; static methods have no such problem.</strong></p> <p>Another solution, which I like, is to create a separate tasks.py (or class-based tasks) and call the methods from within them. This way, you will have FULL control over the Analytics object within your worker.</p>
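To make the pickling cost concrete, here is a stdlib-only sketch with a hypothetical `Analytics` stand-in (not the asker's real class): a method-based task would have to serialize the whole instance and rebuild it in the worker before the call can run.

```python
import pickle

class Analytics:
    """Hypothetical stand-in for the model class in the question."""
    def __init__(self):
        self.state = {"reads": 0}

    def record_read(self):
        self.state["reads"] += 1

a = Analytics()
a.record_read()

# A method task must ship the instance itself to the worker...
payload = pickle.dumps(a)

# ...where a copy is rebuilt from the serialized state before the method runs.
restored = pickle.loads(payload)
print(restored.state["reads"])  # 1
```

A plain function task in tasks.py avoids this: only its (usually small) arguments are serialized, not a whole object graph.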
1
2016-09-14T12:20:45Z
[ "python", "django", "celery", "django-celery" ]
Manipulating copied numpy array without changing the original
39,490,068
<p>I am trying to manipulate a numpy array that contains data stored in an other array. So far, when I change a value in my array, both of the arrays get values changed:</p> <pre><code> import numpy as np from astropy.io import fits image = fits.getdata("randomImage.fits") fft = np.fft.fft2(image) fftMod = np.copy(fft) fftMod = fftMod*2 if fftMod.all()== fft.all(): print "shit same same same " -- &gt; shit same same same </code></pre> <p>Why is?</p>
0
2016-09-14T12:08:09Z
39,490,284
<p>You misunderstood the usage of the .all() method. It yields True if all elements of an array are non-zero. This seems to be the case in both of your arrays or in neither of them.</p> <p>Since one is the double of the other, they definitely give the same result to the .all() method (both True or both False).</p> <p><strong>Edit as requested in the comments:</strong> To compare the contents of the two arrays, use element-wise comparison first and check that all elements are True with .all:</p> <pre><code>(fftMod == fft).all() </code></pre> <p>Or maybe better for floats, including a certain tolerance:</p> <pre><code>np.allclose(fftMod, fft) </code></pre>
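A minimal sketch of the difference, with a small array standing in for the FFT data:

```python
import numpy as np

fft = np.array([1.0, 2.0, 3.0])
fftMod = fft * 2  # a genuine new array; multiplying does not alias the original

# .all() only checks truthiness: both arrays are non-zero everywhere
print(fft.all(), fftMod.all())   # True True

# element-wise comparison is what actually tests equality of contents
print((fftMod == fft).all())     # False
print(np.allclose(fftMod, fft))  # False
```

So the original `fftMod.all() == fft.all()` compared `True == True`, not the arrays themselves.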
3
2016-09-14T12:18:50Z
[ "python", "arrays", "numpy" ]
Pandas fuzzy detect duplicates
39,490,190
<p>How can I use fuzzy matching in pandas to detect duplicate rows (efficiently)?</p> <p><a href="http://i.stack.imgur.com/yZxK6.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/yZxK6.jpg" alt="enter image description here"></a></p> <p>How to find duplicates of one column vs. all the other ones without a gigantic for loop of converting row_i toString() and then comparing it to all the other ones?</p>
1
2016-09-14T12:13:46Z
39,553,585
<p>Not pandas specific, but within the python ecosystem the <a href="https://github.com/datamade/dedupe" rel="nofollow">dedupe python library</a> would seem to do what you want. In particular, it allows you to compare each column of a row separately and then combine the information into a single probability score of a match.</p>
1
2016-09-18T02:52:09Z
[ "python", "pandas", "fuzzy-search", "locality-sensitive-hash" ]
How can we use a loop to create checkbuttons from an array and print the values of the selected checkbutton(s)?
39,490,247
<p>I have an array of strings, and I want to be able to use a loop to quickly create a lot of checkbuttons for them, because the idea is that the user can later add/delete items in the array, so it should be adaptable. </p> <p>I'm not even sure if this is possible with the method I'm trying to use. The problem with the code below is that it only checks the very last checkbutton/the very last item in the array, so it always returns PY_VAR3 or 'd' etc.</p> <p>It would be amazing if someone could help me understand what to do, even if it's a complete rewrite of the code. I'm completely stumped.</p> <pre><code>from Tkinter import * Window = Tk() class Test: def __init__(self): array = ['a', 'b', 'c', 'd'] def doCheckbutton(): for i in array: self.var = StringVar() c = Checkbutton(Window, text='blah', variable=self.var, command=printSelection) c.pack() def printSelection(): print(self.var) doCheckbutton() Test() Window.mainloop() </code></pre> <p><strong>Solved</strong> </p> <pre><code>from Tkinter import * Window = Tk() class Test: def __init__(self): self.array = ['a', 'b', 'c', 'd'] self.vars = [] #Array for saved values self.doCheckbutton() def doCheckbutton(self): for i in range(len(self.array)): self.vars.append(StringVar()) #create new item in vars array c = Checkbutton(Window, text=self.array[i], variable=self.vars[-1], command=lambda i=i: self.printSelection(i), onvalue='on', offvalue='off') c.pack() def printSelection(self, i): print(self.array[i] + ': ' + self.vars[i].get()) Test() Window.mainloop() </code></pre> <p>When a checkbutton is ticked/unticked, it prints out statements such as: c: on c: off</p>
1
2016-09-14T12:17:00Z
39,492,563
<p>You can create a <code>StringVar</code> for each <code>Checkbutton</code> and save them in a list, then use the <code>get</code> method on each <code>StringVar</code> to read its value (<code>lambda</code> is used to pass the index into the callback): </p> <pre><code>from Tkinter import * Window = Tk() class Test: def __init__(self): self.array = ['a', 'b', 'c', 'd'] self.vars = [] self.doCheckbutton() def doCheckbutton(self): for i in range(len(self.array)): self.vars.append(StringVar()) self.vars[-1].set(0) c = Checkbutton(Window, text=self.array[i], variable=self.vars[-1], command=lambda i=i: self.printSelection(i), onvalue=1, offvalue=0) c.pack() def printSelection(self, i): print(self.vars[i].get()) Test() Window.mainloop() </code></pre> <p>I hope this will be helpful.</p>
0
2016-09-14T14:08:05Z
[ "python", "tkinter" ]
Sort multiple dictionaries identically, based on a specific order defined by a list
39,490,257
<p>I had a special case where multiple existing dictionaries had to be sorted based on the exact order of items in a list (not alphabetical). So for example the dictionaries were:</p> <pre><code>dict_one = {"LastName": "Bar", "FirstName": "Foo", "Address": "Example Street 101", "Phone": "012345678"} dict_two = {"Phone": "001122334455", "LastName": "Spammer", "FirstName": "Egg", "Address": "SSStreet 123"} dict_three = {"Address": "Run Down Street 66", "Phone": "0987654321", "LastName": "Biker", "FirstName": "Random"} </code></pre> <p>And the list was:</p> <pre><code>data_order = ["FirstName", "LastName", "Phone", "Address"] </code></pre> <p>With the expected result being the ability to create a file like this:</p> <pre><code>FirstName;LastName;Phone;Address Foo;Bar;012345678;Example Street 101 Egg;Spammer;001122334455;SSStreet 123 Random;Biker;0987654321;Run Down Street 66 </code></pre> <p><strong>Note</strong>: In my case, the real use was an Excel file using pyexcel-xls, but the CSV-like example above is probably closer to what is usually done, so the answers might be more universally applicable for CSV than Excel.</p>
1
2016-09-14T12:17:27Z
39,490,258
<p>I had a bit of hard time to find any good answers in Stack Overflow for this case, but eventually I got the sorting working, which I could use to create the file. The header row can simply be taken directly from the <code>data_order</code> list below. Here's how I did it - hope it helps someone:</p> <pre><code>from collections import OrderedDict import pprint dict_one = { "LastName": "Bar", "FirstName": "Foo", "Address": "Example Street 101", "Phone": "012345678"} dict_two = { "Phone": "001122334455", "LastName": "Spammer", "FirstName": "Egg", "Address": "SSStreet 123"} dict_three = { "Address": "Run Down Street 66", "Phone": "0987654321", "LastName": "Biker", "FirstName": "Random"} dict_list = [] dict_list.append(dict_one) dict_list.append(dict_two) dict_list.append(dict_three) data_order = ["FirstName", "LastName", "Phone", "Address"] result = [] for dictionary in dict_list: result_dict = OrderedDict() # Go through the data_order in order for key in data_order: # Populate result_dict in the list order result_dict[key] = dictionary[key] result.append(result_dict) pp = pprint.PrettyPrinter(indent=4) pp.pprint(result) """ [ { 'FirstName': 'Foo', 'LastName': 'Bar', 'Phone': '012345678', 'Address': 'Example Street 101'}, { 'FirstName': 'Egg', 'LastName': 'Spammer', 'Phone': '001122334455', 'Address': 'SSStreet 123'}, { 'FirstName': 'Random', 'LastName': 'Biker', 'Phone': '0987654321', 'Address': 'Run Down Street 66'}] """ </code></pre>
0
2016-09-14T12:17:27Z
[ "python", "python-3.x", "sorting" ]
Sort multiple dictionaries identically, based on a specific order defined by a list
39,490,257
<p>I had a special case where multiple existing dictionaries had to be sorted based on the exact order of items in a list (not alphabetical). So for example the dictionaries were:</p> <pre><code>dict_one = {"LastName": "Bar", "FirstName": "Foo", "Address": "Example Street 101", "Phone": "012345678"} dict_two = {"Phone": "001122334455", "LastName": "Spammer", "FirstName": "Egg", "Address": "SSStreet 123"} dict_three = {"Address": "Run Down Street 66", "Phone": "0987654321", "LastName": "Biker", "FirstName": "Random"} </code></pre> <p>And the list was:</p> <pre><code>data_order = ["FirstName", "LastName", "Phone", "Address"] </code></pre> <p>With the expected result being the ability to create a file like this:</p> <pre><code>FirstName;LastName;Phone;Address Foo;Bar;012345678;Example Street 101 Egg;Spammer;001122334455;SSStreet 123 Random;Biker;0987654321;Run Down Street 66 </code></pre> <p><strong>Note</strong>: In my case, the real use was an Excel file using pyexcel-xls, but the CSV-like example above is probably closer to what is usually done, so the answers might be more universally applicable for CSV than Excel.</p>
1
2016-09-14T12:17:27Z
39,492,882
<p>This can be achieved in a one-liner, although it is harder to read. In case it is useful for someone (requires <code>from collections import OrderedDict</code>):</p> <pre><code>print([OrderedDict([(key, d[key]) for key in data_order]) for d in [dict_one, dict_two, dict_three]]) </code></pre>
0
2016-09-14T14:22:52Z
[ "python", "python-3.x", "sorting" ]
Sort multiple dictionaries identically, based on a specific order defined by a list
39,490,257
<p>I had a special case where multiple existing dictionaries had to be sorted based on the exact order of items in a list (not alphabetical). So for example the dictionaries were:</p> <pre><code>dict_one = {"LastName": "Bar", "FirstName": "Foo", "Address": "Example Street 101", "Phone": "012345678"} dict_two = {"Phone": "001122334455", "LastName": "Spammer", "FirstName": "Egg", "Address": "SSStreet 123"} dict_three = {"Address": "Run Down Street 66", "Phone": "0987654321", "LastName": "Biker", "FirstName": "Random"} </code></pre> <p>And the list was:</p> <pre><code>data_order = ["FirstName", "LastName", "Phone", "Address"] </code></pre> <p>With the expected result being the ability to create a file like this:</p> <pre><code>FirstName;LastName;Phone;Address Foo;Bar;012345678;Example Street 101 Egg;Spammer;001122334455;SSStreet 123 Random;Biker;0987654321;Run Down Street 66 </code></pre> <p><strong>Note</strong>: In my case, the real use was an Excel file using pyexcel-xls, but the CSV-like example above is probably closer to what is usually done, so the answers might be more universally applicable for CSV than Excel.</p>
1
2016-09-14T12:17:27Z
39,492,922
<p>This is a classic use case for <a href="https://docs.python.org/3/library/csv.html#csv.DictWriter" rel="nofollow"><code>csv.DictWriter</code></a>, because your expected output is CSV-like (semi-colon delimiters instead of commas are supported), which handles all of this for you, avoiding the need for workarounds involving <code>OrderedDict</code>, and making it easy to read the data back in without worrying about corner cases (<code>csv</code> automatically quotes fields if necessary, and parses quoted fields on read-in as needed):</p> <pre><code>with open('outputfile.txt', 'w', newline='') as f: csvout = csv.DictWriter(f, data_order, delimiter=';') # Write the header csvout.writeheader() csvout.writerow(dict_one) csvout.writerow(dict_two) csvout.writerow(dict_three) </code></pre> <p>That's it: <code>csv</code> handles ordering (it knows the correct order from the <code>data_order</code> passed as <code>fieldnames</code> to the <code>DictWriter</code> constructor), formatting, etc.</p> <hr> <p>If you had some need to pull the values in a specific order from many <code>dict</code>s without writing them (since your use case doesn't even use the keys), <a href="https://docs.python.org/3/library/operator.html#operator.itemgetter" rel="nofollow"><code>operator.itemgetter</code></a> can be used to simplify this dramatically:</p> <pre><code>from operator import itemgetter getfields = itemgetter(*data_order) dict_one_fields = getfields(dict_one) </code></pre> <p>which leaves <code>dict_one_fields</code> as a <code>tuple</code> with the requested fields in the requested order, <code>('Foo', 'Bar', '012345678', 'Example Street 101')</code>, and runs significantly faster than repeatedly indexing at the Python layer (<code>itemgetter</code> creates a C level "functor" that can retrieve all the requested values in a single call, with no Python level byte code at all for built-in keys like <code>str</code>).</p>
0
2016-09-14T14:24:48Z
[ "python", "python-3.x", "sorting" ]
Python objects in other classes or separate?
39,490,657
<p>I have an application I'm working on in <strong>Python 2.7</strong> which has several classes that need to interact with each other before returning everything back to the main program for output. </p> <p>So a brief example of the code would be:</p> <pre><code>class foo_network(): """Handles all network functionality""" def net_connect(self, network): """Connects to the network destination""" pass class foo_fileparsing(): """Handles all sanitation, formatting, and parsing on received file""" def file_check(self, file): """checks file for potential problems""" pass </code></pre> <p>currently I have a main file/function which instantiates all the classes and then handles passing data back and forth, as necessary, between them and their methods. However this seems a bit clumsy.</p> <p>As such I'm wondering two things:</p> <ol> <li>What would be the most 'Pythonic' way to handle this?</li> <li>What is the best way to handle this for performance and memory usage?</li> </ol> <p>I'm wondering if I should just instantiate objects of one class inside another (from the example, say, creating a <strong>foo_fileparsing</strong> object within the <strong>foo_network</strong> class if that is the only class which will be calling it, rather than my current approach of returning everything to the main function and passing it between objects that way.</p> <p>Unfortunately I can't find a PEP or other resource that seems to address this type of situation.</p>
-1
2016-09-14T12:38:18Z
39,491,190
<p>You can use modules, putting each class in its own module, and then import only what you need. All you have to do is create a directory with the same name as your class and put an <code>__init__.py</code> file in that directory, which tells Python to treat the directory as a package. For example, a foo_network folder contains a file named foo_network.py and an <code>__init__.py</code>, and in foo_network.py the code is:</p> <pre><code>class foo_network(): """Handles all network functionality""" def net_connect(self, network): """Connects to the network destination""" pass </code></pre> <p>In any other file you can simply use</p> <pre><code>from foo_network import foo_network </code></pre> <p>which imports only that particular module. This way your code will not look messy and you will be importing only what is required. You can also do</p> <pre><code>from foo_network import * </code></pre> <p>to import everything at once.</p> <p>Hope that helps.</p>
0
2016-09-14T13:03:25Z
[ "python", "oop", "memory-management", "python-performance" ]
runserver.py giving Module Error while setup MDM server code
39,490,673
<p>I am trying to setup this git code on Ubuntu server <a href="https://github.com/jessepeterson/commandment" rel="nofollow">https://github.com/jessepeterson/commandment</a></p> <p>I have managed to install the required packages <a href="https://github.com/jessepeterson/commandment/blob/master/requirements.txt" rel="nofollow">https://github.com/jessepeterson/commandment/blob/master/requirements.txt</a></p> <ul> <li>M2Crypto&lt;0.25 </li> <li>pyOpenSSL </li> <li>Flask </li> <li>SQLAlchemy </li> <li>apns </li> <li>oauthlib </li> <li>passlib </li> <li><p>biplist</p> <p>Now when I run the runserver.py file it is giving me some Module Error </p></li> </ul> <p><a href="http://i.stack.imgur.com/o7wNa.png" rel="nofollow"><img src="http://i.stack.imgur.com/o7wNa.png" alt="Please see the attach image"></a></p> <p>Please let me know how can I setup this code </p> <p>Thanks,</p>
1
2016-09-14T12:39:05Z
39,490,814
<p>Please check if the apns module is installed with <code>pip list</code>.</p> <p>If it is not installed, install it with <code>pip install apns</code>.</p> <p>If it is already installed, check that it is part of your virtualenv's PYTHONPATH, if you are using a virtualenv.</p>
1
2016-09-14T12:45:55Z
[ "python", "ios", "mdm" ]
How to iterate each word through nltk synsets and store misspelled words in separate list?
39,490,777
<p>I am trying to take a text file with messages and iterate each word through NLTK wordnet synset function. I want to do this because I want to create a list of mispelled words. For example if I do: </p> <pre><code>wn.synsets('dog') </code></pre> <p>I get output:</p> <pre><code>[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01'), Synset('chase.v.01')] </code></pre> <p>now if the word is mispelled like so:</p> <pre><code>wn.synsets('doeg') </code></pre> <p>I get output:</p> <pre><code>[] </code></pre> <p>If I am returned an empty list I want to save the misspelled word in another list like so and while continuing to iterate through rest of the file:</p> <pre><code>mispelled_words = ['doeg'] </code></pre> <p>I am at a loss how to do this, here is my code below, I would need to do the iterating after variable "chat_message_tokenize". The name path is words I want to drop:</p> <pre><code>import nltk import csv from nltk.tag import pos_tag from nltk.corpus import wordnet as wn from nltk.stem.snowball import SnowballStemmer def text_function(): #nltk.download('punkt') #nltk.download('averaged_perceptron_tagger') # Read in chat messages and names files chat_path = 'filepath.csv' try: with open(chat_path) as infile: chat_messages = infile.read() except Exception as error: print(error) return name_path = 'filepath.txt' try: with open(names_path) as infile: names = infile.read() except Exception as error: print(error) return chat_messages = chat_messages.split('Chats:')[1].strip() names = names.split('Name:')[1].strip().lower() chat_messages_tokenized = nltk.word_tokenize(chat_messages) names_tokenized = nltk.word_tokenize(names) # adding part of speech(pos) tag and dropping proper nouns pos_drop = pos_tag(chat_messages_tokenized) chat_messages_tokenized = [SnowballStemmer('english').stem(word.lower()) for word, pos in pos_drop if pos != 'NNP' and word not in names_tokenized] 
for chat_messages_tokenized if not wn.synset(chat_messages_tokenized): print('empty list') if __name__ == '__main__': text_function() # for s in wn.synsets('dog'): # lemmas = s.lemmas() # for l in lemmas: # if l.name() == stemmer: # print (l.synset()) csv_path ='OutputFilePath.csv' try: with open(csv_path, 'w') as outfile: writer = csv.writer(outfile) for word in chat_messages_tokenized: writer.writerow([word]) except Exception as error: print(error) return if __name__ == '__main__': text_function() </code></pre> <p>Thank you in advance. </p>
1
2016-09-14T12:44:14Z
39,718,953
<p>You already have the pseudocode in your explanation; you can just code it as you have explained, as follows (note that it is <code>wn.synsets</code>, plural: the singular <code>wn.synset</code> expects a full name like <code>'dog.n.01'</code> and will raise an error for a bare word):</p> <pre><code>misspelled_words = [] # The list to store misspelled words for word in chat_messages_tokenized: # loop through each word if not wn.synsets(word): # empty list means no synsets, i.e. likely misspelled misspelled_words.append(word) # add it to misspelled word list print(misspelled_words) </code></pre>
1
2016-09-27T07:45:49Z
[ "python", "iteration", "nltk", "python-3.5", "wordnet" ]
Django submit form using Request Module, CSRF Verification failed
39,490,782
<p>I am trying to submit a form of django application using Python request module, however it gives me the following error</p> <blockquote> <p>Error Code: 403<br> Message: CSRF verification failed. Request aborted.</p> </blockquote> <p>I tried to convert the in JSON using <code>json.dumps()</code> and sent the request, however I am getting the same error.</p> <p>I am not sure, what is missing. When I submit the form using UI, it works well. I intercepted the request using the <code>Postman</code> plugin as well and the request I have form is same.</p> <pre><code>import requests session = requests.session() session.get("http://localhost:8000/autoflex/addresult/") csrf_token = session.cookies["csrftoken"] print csrf_token cookie = "csrftoken=%s" % csrf_token headers = {"Content-Type": "Application/x-www-form-urlencoded", "Cookie": cookie, "Accept-Encoding": "gzip, deflate", "Connection": "keep-alive", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36", "Referer": "http://localhost:8000/autoflex/addresult/"} data = {"xml_url": xml_url, "csrfmiddlewaretoken": csrf_token} result = session.post("http://localhost:8000/autoflex/addresult/", data=data, headers=headers) print result.request.body print result.request.headers print(result.status_code, result.reason, result.content) </code></pre>
0
2016-09-14T12:44:21Z
39,491,919
<p>I was providing the other arguments in the headers, and I think that was creating the issue. I removed all the other arguments, kept only the Referer header, and now it's working.</p> <pre><code>import requests session = requests.session() session.get("http://localhost:8000/autoflex/addresult/") csrf_token = session.cookies["csrftoken"] data = {"xml_url": xml_url, "csrfmiddlewaretoken": csrf_token} result = session.post("http://localhost:8000/autoflex/addresult/", data=data, headers={"Referer": "http://localhost:8000/autoflex/addresult/"}) print result.request.body print result.request.headers print(result.status_code, result.reason, result.content) </code></pre>
0
2016-09-14T13:35:20Z
[ "python", "django", "python-2.7", "django-forms", "request" ]
Searching JSON, Returning String within the JSON Structure
39,490,838
<p>I have a set of JSON data that looks simular to this:</p> <pre><code>{"executions": [ { "id": 17, "orderId": 16, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133038, "issueKey": "QTCMP-8", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 }, { "id": 14, "orderId": 14, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133042, "issueKey": "QTCMP-10", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 } ], "currentlySelectedExecutionId": "", "recordsCount": 4 } </code></pre> <p>I have taken this and parsed it with Python as below:</p> <pre><code>import json import pprint with open('file.json') as dataFile: data = json.load(dataFile) </code></pre> <p>With this I am able to find sets of data like executions by doing data["executions"] etc.. What I need to be able to do is search for the string "QTCMP-8" within the structure, and then when I find that specific string, find the "id" that is associated with that string. So in the case of QTCMP-8 it would be id 17; for QTCMP-10 it would be 14.</p> <p>Is this possible? Do I need to convert the data first? Any help is greatly appreciated!</p>
1
2016-09-14T12:46:52Z
39,490,944
<p>A simple iteration with a condition will do the job:</p> <pre><code>for execution in data['executions']: if "QTCMP" in execution['issueKey']: print(execution["id"]) # -&gt; 17 # -&gt; 14 </code></pre>
0
2016-09-14T12:52:13Z
[ "python", "json" ]
Searching JSON, Returning String within the JSON Structure
39,490,838
<p>I have a set of JSON data that looks similar to this:</p> <pre><code>{"executions": [ { "id": 17, "orderId": 16, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133038, "issueKey": "QTCMP-8", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 }, { "id": 14, "orderId": 14, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133042, "issueKey": "QTCMP-10", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 } ], "currentlySelectedExecutionId": "", "recordsCount": 4 } </code></pre> <p>I have taken this and parsed it with Python as below:</p> <pre><code>import json import pprint with open('file.json') as dataFile: data = json.load(dataFile) </code></pre> <p>With this I am able to find sets of data like executions by doing data["executions"] etc.. What I need to be able to do is search for the string "QTCMP-8" within the structure, and then when I find that specific string, find the "id" that is associated with that string. So in the case of QTCMP-8 it would be id 17; for QTCMP-10 it would be 14.</p> <p>Is this possible? Do I need to convert the data first? Any help is greatly appreciated!</p>
1
2016-09-14T12:46:52Z
39,490,956
<p>You can't do this in computational order of O(1), at least with what it is like now. The following is a solution with O(n) complexity for each search.</p> <pre><code>id = None for dic in executions: if dic['issueKey'] == query: id = dic['id'] break </code></pre> <p>Doing this in O(1) needs a pre-processing of O(n), in which you categorize executions by their <strong>issueKey</strong>, and save inside it whatever information you want.</p> <pre><code># Preprocessing of O(n) mapping = dict() for dic in executions: mapping[dic['issueKey']] = { 'id': dic['id'], 'whatever': 'whateverel' } # Now you can query in O(1) return mapping[query]['id'] </code></pre> <p>You might also want to consider working with <a href="http://mongodb.com/" rel="nofollow">MongoDB</a> or likes of it, if you're doing heavy json querying.</p>
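The preprocessing idea above can be made concrete with a small runnable sketch; the records below are trimmed copies of the two from the question, and the dict comprehension is one equivalent way to build the lookup:

```python
# One O(n) pass builds an issueKey -> info lookup table; every later
# query is then a single O(1) dict access.
executions = [
    {"id": 17, "issueKey": "QTCMP-8"},
    {"id": 14, "issueKey": "QTCMP-10"},
]

mapping = {d["issueKey"]: {"id": d["id"]} for d in executions}

print(mapping["QTCMP-8"]["id"])   # 17
print(mapping["QTCMP-10"]["id"])  # 14
```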
1
2016-09-14T12:53:11Z
[ "python", "json" ]
Searching JSON, Returning String within the JSON Structure
39,490,838
<p>I have a set of JSON data that looks similar to this:</p> <pre><code>{"executions": [ { "id": 17, "orderId": 16, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133038, "issueKey": "QTCMP-8", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 }, { "id": 14, "orderId": 14, "executionStatus": "1", "cycleId": 5, "projectId": 15006, "issueId": 133042, "issueKey": "QTCMP-10", "label": "", "component": "", "projectKey": "QTCMP", "executionDefectCount": 0, "stepDefectCount": 0, "totalDefectCount": 0 } ], "currentlySelectedExecutionId": "", "recordsCount": 4 } </code></pre> <p>I have taken this and parsed it with Python as below:</p> <pre><code>import json import pprint with open('file.json') as dataFile: data = json.load(dataFile) </code></pre> <p>With this I am able to find sets of data like executions by doing data["executions"] etc.. What I need to be able to do is search for the string "QTCMP-8" within the structure, and then when I find that specific string, find the "id" that is associated with that string. So in the case of QTCMP-8 it would be id 17; for QTCMP-10 it would be 14.</p> <p>Is this possible? Do I need to convert the data first? Any help is greatly appreciated!</p>
1
2016-09-14T12:46:52Z
39,491,098
<p>You can get the list of all ids as:</p> <pre><code>&gt;&gt;&gt; [item['id'] for item in my_json['executions'] if item['issueKey'].startswith('QTCMP')] [17, 14] </code></pre> <p>where <code>my_json</code> is the variable storing your JSON structure</p> <p><strong>Note:</strong> I am using <code>item['issueKey'].startswith('QTCMP')</code> instead of <code>'QTCMP' in item['issueKey']</code> as you need the ids of items starting with <code>QTCMP</code>. For example, if the value is <code>XXXQTCMP</code>, its id should not be present in the result (but it would evaluate as <code>True</code> with the <code>in</code> check)</p>
0
2016-09-14T12:59:16Z
[ "python", "json" ]
Django: app level variables
39,490,843
<p>I've created a Django-rest-framework app. It exposes some API which does some get/set operations in the MySQL DB. </p> <p>I have a requirement of making an HTTP request to another server and piggyback this response along with the usual response. I'm trying to use a self-made HTTP connection pool to make HTTP requests instead of making new connections on each request.</p> <p>What is the most appropriate place to keep this app level HTTP connection pool object?</p> <p>I've looked around for it &amp; there are multiple solutions each with some cons. Here are some:</p> <ol> <li><p>To make a singleton class of the pool in a diff file, but this is not a good pythonic way to do things. There are various discussions over why not to use singleton design pattern.</p> <p>Also, I don't know how intelligent it would be to pool a pooler? (:P)</p></li> <li>To keep it in <strong>__init__</strong>.py of the app dir. The issues with that are as follows: <ul> <li>It should only contain imports &amp; things related to that.</li> <li>It will be difficult to unit test the code because the import would happen before mocking and it would actually try to hit the API.</li> </ul></li> <li><p>To use sessions, but I guess that makes more sense if it was something user session specific, like a user specific number, etc</p> <p>Also, the object needs to be serializable. I don't know how HTTP Connection pool can be serialized.</p></li> <li>To keep it global in views.py but that also is discouraged.</li> </ol> <p>What is the best place to store such app/global level variables?</p>
1
2016-09-14T12:46:57Z
39,491,806
<p>A possible solution is to implement a custom Django middleware, as described in <a href="https://docs.djangoproject.com/ja/1.9/topics/http/middleware/" rel="nofollow">https://docs.djangoproject.com/ja/1.9/topics/http/middleware/</a>.</p> <p>You could initialize the HTTP connection pool in the middleware's <strong>__init__</strong> method, which is only called <em>once</em> at the first request. Then, start the HTTP request during <strong>process_request</strong> and on <strong>process_response</strong> check it has finished (or wait for it) and append that response to the internal one.</p>
0
2016-09-14T13:30:24Z
[ "python", "django", "django-rest-framework", "global-variables" ]
Drawing sine curve in Memory-Used view of Task Manager on Windows 10?
39,490,934
<p>I am trying to write a simple proof-of-concept script on Windows 10 that lets me draw the absolute of a sine curve in the task manager memory window.</p> <p>My code is as follows:</p> <pre><code>import time import math import gc import sys x = 1 string_drawer = [] while True: #Formula for the equation (sine curve) y = (abs(math.sin(math.radians(100*x))))*512000000 print (y, type(y)) #Making y type 'int' so that it can be used to append y = int(round(y)) print (y, type(y)) #Checking the size of string_drawer for debugging print(sys.getsizeof(string_drawer)) #Loop used for appending if sys.getsizeof(string_drawer) &lt; y: #If y is bigger, find the difference and append y = y - sys.getsizeof(string_drawer) string_drawer.append(' ' *y) elif sys.getsizeof(string_drawer) &gt; y: #If y is smaller, delete the variable and make a new one string_drawer = [] *y else: #If y is the same size as string_drawer, do nothing string_drawer = string_drawer #Call the Python garbage collector gc.collect() #Sleep to make sure Task Manager catches the change in RAM usage time.sleep(0.5) #Increment x x += 1 print(x, type(x)) </code></pre> <p>What I am getting is as follows: <img src="http://i.stack.imgur.com/ahEsx.png" alt="Image"></p> <p>What I want is this: <img src="http://i.stack.imgur.com/UXPYc.png" alt="Image 2"></p> <p>Do you have an idea of what I am doing wrong? My guess is that it is something within the if loop or something regarding the garbage collector.</p>
3
2016-09-14T12:51:59Z
39,520,004
<pre><code>sys.getsizeof(string_drawer) </code></pre> <blockquote> <p>Return the size of an object in bytes. The object can be any type of object. All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.</p> <p><strong>Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.</strong></p> </blockquote> <p>And you are appending to a list of strings, so getsizeof will return the memory attributed to the list, not the size of the strings of spaces it refers to. Change the list for a string:</p> <pre><code>string_drawer = '' string_drawer += ' ' * y </code></pre> <p>but then you can and should use <code>len</code> instead of <code>sys.getsizeof</code> because the latter adds the size of the garbage collector (although if the string is large enough it is negligible), also if you want to do nothing then do nothing:</p> <pre><code>else: #If y is the same size as string_drawer, do nothing pass </code></pre> <p>and to reset the string do <code>string_drawer = ''</code>, not as you do with the list <code>string_drawer = [] * y</code> </p>
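The point about `sys.getsizeof` on a list is easy to verify: the list only accounts for its element slots, not the characters inside the strings it references (a quick sketch):

```python
import sys

# A one-element list reports the same size no matter how big its string
# element is: getsizeof measures the list object, not what it refers to.
small = ['a']
big = ['a' * 100000]
print(sys.getsizeof(small) == sys.getsizeof(big))  # True

# len on a plain string, by contrast, grows with the appended spaces.
s = ''
s += ' ' * 100000
print(len(s))  # 100000
```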
1
2016-09-15T20:39:56Z
[ "python", "memory-management", "garbage-collection" ]
how to stop multiple python script running if any script has error
39,490,989
<p>I have a python script (installer.py) which runs multiple python and shell programs:</p> <pre><code>print "Going to run Script1" os.system("python script1.py") print "Going to run Script2" os.system("python script2.py") </code></pre> <p>But I found that even if script1.py fails with an error, it simply moves on to run script2.py.</p> <p>How do I stop the script (installer.py) as soon as script1.py fails?</p>
1
2016-09-14T12:54:40Z
39,491,071
<p><code>os.system</code> will return the exit status of the system call. Just check to see if your command executed correctly.</p> <pre><code>ret = os.system('python script1.py') if ret != 0: # call failed raise Exception("System call failed with error code %d" % ret) ret = os.system('python script2.py') if ret != 0: # call failed raise Exception("System call failed with error code %d" % ret) </code></pre>
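For newer code, subprocess is the usual replacement for os.system; with check=True a failing child raises instead of being silently ignored. A small self-contained sketch (spawning a child Python that deliberately exits non-zero):

```python
import subprocess
import sys

# Child process that exits with status 3; check=True turns the non-zero
# exit into a CalledProcessError, so the caller cannot miss the failure.
try:
    subprocess.run([sys.executable, "-c", "raise SystemExit(3)"], check=True)
    failed, code = False, 0
except subprocess.CalledProcessError as exc:
    failed, code = True, exc.returncode

print(failed, code)  # True 3
```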
0
2016-09-14T12:58:25Z
[ "python", "linux" ]
Naturaltime with minutes, hours, days and weeks
39,491,030
<p>I use naturaltime in my Django application.</p> <p>How can I display only time in minutes, then hours, then days, then weeks?</p> <p>Here is my code:</p> <pre><code>{{ obj.pub_date|naturaltime }} </code></pre>
0
2016-09-14T12:56:49Z
39,491,547
<p>I use two custom filters to solve a very similar problem using babel and pytz. I write "Today" or "Yesterday" plus the time. You're welcome to use my code any way you like. </p> <p>My template code is two usages, one for writing "Today", "Yesterday", or the date if it was even earlier. </p> <p><code>{{ scored_document.fields.10.value|format_date_human(locale='en') }}</code></p> <p>Then this tag writes the actual time of day </p> <p><code>{{ scored_document.fields.10.value|datetimeformat_list(hour=scored_document.fields.17.value|int ,minute =scored_document.fields.18.value|int, timezoneinfo=timezoneinfo, locale=locale) }}</code></p> <p>The two corresponding functions are</p> <pre><code>MONTHS = ('Jan.', 'Feb.', 'Mar.', 'April.', 'May.', 'June.', 'July.', 'Aug.', 'Sep.', 'Oct.', 'Nov.', 'Dec.') FORMAT = '%H:%M / %d-%m-%Y' def format_date_human(to_format, locale='en', timezoneinfo='Asia/Calcutta'): tzinfo = timezone(timezoneinfo) now = datetime.now() #logging.info('delta: %s', str((now - to_format).days)) #logging.info('delta2: %s', str((datetime.date(now)-datetime.date(to_format)).days)) if datetime.date(to_format) == datetime.date(now): date_str = _('Today') elif (now - to_format).days == 1: date_str = _('Yesterday') else: month = MONTHS[to_format.month - 1] date_str = '{0} {1}'.format(to_format.day, _(month)) time_str = format_time(to_format, 'H:mm', tzinfo=tzinfo, locale=locale) return "{0}".format(date_str, time_str) def datetimeformat_list(date, hour, minute, locale='en', timezoneinfo='Asia/Calcutta'): import datetime as DT import pytz utc = pytz.utc to_format = DT.datetime(int(date.year), int(date.month), int(date.day), int(hour), int(minute)) utc_date = utc.localize(to_format) tzone = pytz.timezone(timezoneinfo) tzone_date = utc_date.astimezone(tzone) time_str = format_time(tzone_date, 'H:mm') return "{0}".format(time_str) </code></pre>
0
2016-09-14T13:19:01Z
[ "python", "django" ]
Replacing characters of string in a list
39,491,132
<p>I am replacing the character and/or string in l3 by comparing it with l1 and l2. The output I am getting and the output I would like to get are shown below.</p> <p>my code </p> <pre><code>l1 = ["Jai","Sharath","Ravi","Aditya"] l2 = ["Singh","Kumar","Sharma","Rao"] l3 = ["J.Singh","Sharath_K","R-Sharma","Rao_Aditya"] for x,y,z in zip(l1,l2,l3): if x in z: z.replace(x,"Firstname") elif x[0] in z: z.replace(x[0],"First/Character/of/first/name") elif y in z: z.replace(y,"lastname") else: z.replace(y[0],"First/Character/of/last/name") </code></pre> <p>my output</p> <pre><code>'First/Character/of/first/name.Singh' 'Firstname_K' 'First/Character/of/first/name/Sharma' 'Rao_Firstname' </code></pre> <p>my expected output</p> <pre><code>'First/Character/of/first/name.lastname' 'Firstname_First/Character/of/last/name' 'First/Character/of/first/name/lastname' 'lastname_Firstname' </code></pre> <p>How do I get my desired output?</p>
0
2016-09-14T13:01:01Z
39,491,156
<p>Strings are immutable. <code>replace</code> does not work in-place, it returns a new string. You need to reallocate that new string to the original name.</p> <pre><code>if x in z: z = z.replace(x,"Firstname") </code></pre> <p>(Also, please use more than one space indentation.)</p>
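A quick demonstration of the difference, using one of the names from the question:

```python
# replace() leaves the original string untouched and returns a new one;
# without re-binding, the result is silently thrown away.
z = "J.Singh"
z.replace("J", "Firstname")       # return value discarded
print(z)                          # J.Singh  (unchanged)

z = z.replace("J", "Firstname")   # re-bind the name to the new string
print(z)                          # Firstname.Singh
```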
0
2016-09-14T13:02:14Z
[ "python", "python-3.x" ]
Replacing characters of string in a list
39,491,132
<p>I am replacing the character and/or string in l3 by comparing it with l1 and l2. The output I am getting and the output I would like to get are shown below.</p> <p>my code </p> <pre><code>l1 = ["Jai","Sharath","Ravi","Aditya"] l2 = ["Singh","Kumar","Sharma","Rao"] l3 = ["J.Singh","Sharath_K","R-Sharma","Rao_Aditya"] for x,y,z in zip(l1,l2,l3): if x in z: z.replace(x,"Firstname") elif x[0] in z: z.replace(x[0],"First/Character/of/first/name") elif y in z: z.replace(y,"lastname") else: z.replace(y[0],"First/Character/of/last/name") </code></pre> <p>my output</p> <pre><code>'First/Character/of/first/name.Singh' 'Firstname_K' 'First/Character/of/first/name/Sharma' 'Rao_Firstname' </code></pre> <p>my expected output</p> <pre><code>'First/Character/of/first/name.lastname' 'Firstname_First/Character/of/last/name' 'First/Character/of/first/name/lastname' 'lastname_Firstname' </code></pre> <p>How do I get my desired output?</p>
0
2016-09-14T13:01:01Z
39,491,558
<p>Consider your use of <code>elif</code>. If your first condition triggers, replacing the first name, will the last condition trigger to replace the last name? You may want to try two <code>if</code> <code>else</code> structures. </p> <p>Consider the following:</p> <pre><code>z = 'abc' if z[0] == 'a': z = z.replace('a', '1') elif z[1] == 'b': z = z.replace('b', '2') if z[2] == 'c': z = z.replace('c', '3') </code></pre> <p>What will z be at the end of this block? Would removing the <code>z =</code> change that? How does changing the conditionals (<code>if</code> <code>elif</code> <code>else</code>) change the output?</p>
0
2016-09-14T13:19:29Z
[ "python", "python-3.x" ]
Python Pandas get_dummies() limitation. Doesnt convert all columns
39,491,225
<p>I have 6 columns in my dataframe. 2 of them have about 3K unique values. When I use <code>get_dummies()</code> on the entire dataframe or just one those 2 columns what gets returned is the exact same column with 3k values. <code>get_dummies</code> fails to dummy-fy the bigger columns. Some columns do get one-hot encoded but the big ones dont. </p> <p>I wonder if get_dummies only works on sets with small cardinality. </p> <p>I believe this was also discusses here: <a href="http://stackoverflow.com/questions/25057694/need-help-with-pythonpandas-script#comment66274933_25057694">Need help with python(pandas) script</a></p>
1
2016-09-14T13:05:12Z
39,492,784
<p>It appears to work as intended for me.</p> <p>Consider the series <code>s</code> of random 3 character strings</p> <pre><code>import pandas as pd import numpy as np from string import lowercase np.random.seed([3,1415]) s = pd.DataFrame(np.random.choice(list(lowercase), (10000, 3))).sum(1) s.nunique() 7583 </code></pre> <p>Then assign the dataframe <code>df</code></p> <pre><code>df = s.str.get_dummies() </code></pre> <hr> <pre><code>df.shape (10000, 7583) </code></pre> <hr> <pre><code>df.sum(1).describe() count 10000.0 mean 1.0 std 0.0 min 1.0 25% 1.0 50% 1.0 75% 1.0 max 1.0 dtype: float64 </code></pre>
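A tiny self-contained check with plain pd.get_dummies (made-up values, not the asker's data) shows the expected shape: one indicator column per unique value and exactly one hot entry per row:

```python
import pandas as pd

# Four rows, three distinct values -> a (4, 3) indicator frame.
s = pd.Series(['x', 'y', 'x', 'z'])
dummies = pd.get_dummies(s)

print(sorted(dummies.columns))        # ['x', 'y', 'z']
print(dummies.shape)                  # (4, 3)
print(dummies.sum(axis=1).tolist())   # [1, 1, 1, 1]
```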
1
2016-09-14T14:18:06Z
[ "python", "pandas", "one-hot-encoding" ]
Pandas + Python: More efficient code
39,491,259
<p>This is my code:</p> <pre><code>import pandas as pd import os import glob as g archivos = g.glob('C:\Users\Desktop\*.csv') for archiv in archivos: nombre = os.path.splitext(archiv)[0] df = pd.read_csv(archiv, sep=",") d = pd.to_datetime(df['DATA_LEITURA'], format="%Y%m%d") df['FECHA_LECTURA'] = d.dt.date del df['DATA_LEITURA'] df['CONSUMO']="" df['DIAS']="" df["SUMDIAS"]="" df["SUMCONS"]="" df["CONSANUAL"] = "" ordenado = df.sort_values(['NR_CPE','FECHA_LECTURA', 'HORA_LEITURA'], ascending=True) ##Group by CPE agrupado = ordenado.groupby('NR_CPE') for name, group in agrupado: #Iterate over the group indice = group.index.values inicio = indice[0] fin = indice[-1] #Fill the first reading of each CPE with that reading (because there is no previous reading) ordenado.CONSUMO.loc[inicio] = 0 ordenado.DIAS.loc[inicio] = 0 cont=0 for i in indice: #Iterate over what is inside the groups, inside the CPEs (readings) if i &gt; inicio and i &lt;= fin : cont=cont+1 consumo = ordenado.VALOR_LEITURA[indice[cont]] - ordenado.VALOR_LEITURA[indice[cont-1]] dias = (ordenado.FECHA_LECTURA[indice[cont]] - ordenado.FECHA_LECTURA[indice[cont-1]]).days ordenado.CONSUMO.loc[i] = consumo ordenado.DIAS.loc[i] = dias # Do the sums, the result is a DataFrame object dias = agrupado['DIAS'].sum() consu = agrupado['CONSUMO'].sum() canu = (consu/dias) * 365 #Counter with the number of occurrences of the fields A, B and C conta=0 contb=0 contc=0 #Since it is a DF, I have to iterate over it to do the comparison print "Grupos:" for ind, sumdias in dias.iteritems(): if sumdias &lt;= 180: grupo = "A" conta=conta+1 elif sumdias &gt; 180 and sumdias &lt;= 365: grupo = "B" contb=contb+1 elif sumdias &gt; 365: grupo = "C" contc=contc+1 print "grupo A: " , conta print "grupo B: " , contb print "grupo C: " , contc #Format the fields so as not to show all the decimals Fdias = dias.map('{:.0f}'.format) Fcanu = canu.map('{:.2f}'.format) frames = [Fdias, consu, Fcanu] concat = pd.concat(frames,axis=1).replace(['inf','nan'],[0,0]) with open('C:\Users\Documents\RPE_PORTUGAL\Datos.csv','a') as f: concat.to_csv(f,header=False,columns=['CPE','DIAS','CONSUMO','CONSUMO_ANUAL']) try: ordenado.to_excel(nombre+'.xls', columns=["NOME_DISTRITO", "NR_CPE","MARCA_EQUIPAMENTO","NR_EQUIPAMENTO","VALOR_LEITURA","REGISTADOR","TIPO_REGISTADOR", "TIPO_DADOS_RECOLHIDOS","FACTOR_MULTIPLICATIVO_FINAL","NR_DIGITOS_INTEIRO","UNIDADE_MEDIDA", "TIPO_LEITURA","MOTIVO_LEITURA","ESTADO_LEITURA","HORA_LEITURA","FECHA_LECTURA","CONSUMO","DIAS"], index=False) print (archiv) print ("===============================================") print ("*****Se ha creado el archivo correctamente*****") print ("===============================================") except IOError: print ("===================================================") print ("¡¡¡¡¡Hubo un error en la escritura del archivo!!!!!") print ("===================================================") </code></pre> <p>This takes a file where I have readings of energy consumption from different dates for every light meter (<code>'NR_CPE'</code>) and does some calculations:</p> <ol> <li><p>Calculate the energy consumption for every <code>'NR_CPE'</code> by subtracting the previous reading from the next one and putting the result in a new column named <code>'CONSUMO'</code>.</p></li> <li><p>Calculate the number of days where I've got a reading and sum up the number of days</p></li> <li>Add the consumption for every <code>'NR_CPE'</code> and calculate the annual consumption.</li> <li>Finally I want to classify by the number of days that every light meter (<code>'NR_CPE'</code>) has a reading. A if it has less than 180 days, B between 180 days and 1 year and C more than a year. </li> </ol> <p>And finally write this result in two different files. Any idea of how I should re-code this to have the same output and be faster? Thank you all.</p> <p>BTW this is my dataset:</p> <pre><code>,NOME_DISTRITO,NR_CPE,MARCA_EQUIPAMENTO,NR_EQUIPAMENTO,VALOR_LEITURA,REGISTADOR,TIPO_REGISTADOR,TIPO_DADOS_RECOLHIDOS,FACTOR_MULTIPLICATIVO_FINAL,NR_DIGITOS_INTEIRO,UNIDADE_MEDIDA,TIPO_LEITURA,MOTIVO_LEITURA,ESTADO_LEITURA,DATA_LEITURA,HORA_LEITURA 0,GUARDA,A002000642VW,101,1865411,4834,001,S,1,1,4,kWh,1,1,A,20150629,205600 1,GUARDA,A002000642VW,101,1865411,4834,001,S,1,1,4,kWh,2,2,A,20160218,123300 2,GUARDA,A002000642VJ,122,204534,25083,001,S,1,1,5,kWh,1,1,A,20150629,205700 3,GUARDA,A002000642VJ,122,204534,27536,001,S,1,1,5,kWh,2,2,A,20160218,123200 4,GUARDA,A002000642HR,101,1383899,11734,001,S,1,1,5,kWh,1,1,A,20150629,205600 5,GUARDA,A002000642HR,101,1383899,11800,001,S,1,1,5,kWh,2,2,A,20160218,123000 6,GUARDA,A002000995VM,101,97706436,12158,001,S,1,1,5,kWh,1,3,A,20150713,155300 7,GUARDA,A002000995VM,101,97706436,12163,001,S,1,1,5,kWh,2,2,A,20160129,162300 8,GUARDA,A002000995VM,101,97706436,12163,001,S,1,1,5,kWh,2,2,A,20160202,195800 9,GUARDA,A2000995VM,101,97706436,12163,001,S,1,1,5,kWh,1,3,A,20160404,145200 10,GUARDA,A002000996LV,168,5011703276,3567,001,V,1,1,6,kWh,1,1,A,20150528,205900 11,GUARDA,A02000996LV,168,5011703276,3697,001,V,1,1,6,kWh,2,2,A,20150929,163500 12,GUARDA,A02000996LV,168,5011703276,1287,002,P,1,1,6,kWh,1,1,A,20150528,205900 </code></pre>
-2
2016-09-14T13:06:53Z
39,492,970
<p>Generally you want to <a href="http://stackoverflow.com/a/7837947/12663">avoid for loops in pandas</a>. For example, the first loop where you calculate total consumption and days could be rewritten as a groupby apply something like:</p> <pre><code>def last_minus_first(df): columns_of_interest = df[['VALOR_LEITURA', 'days']] diff = columns_of_interest.iloc[-1] - columns_of_interest.iloc[0] return diff df['date'] = pd.to_datetime(df['DATA_LEITURA'], format="%Y%m%d") df['days'] = (df['date'] - pd.datetime(1970,1,1)).dt.days # create days column df.groupby('NR_CPE').apply(last_minus_first) </code></pre> <p>(btw I don't understand why you are subtracting each entry from the previous, surely for meter readings this is the same as last-first?)</p> <p>Then given the result of the above as consumption, you can replace your second for loop (for ind, sumdias in dias.iteritems()) with something like:</p> <pre><code>pd.cut(consumption.days, [-1, 180, 365, np.inf], labels=['a', 'b', 'c']).value_counts() </code></pre>
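The pd.cut step suggested at the end can be illustrated on its own with made-up day totals (not the asker's data):

```python
import numpy as np
import pandas as pd

# Bin day counts into the three groups from the question:
# (-1, 180] -> 'a', (180, 365] -> 'b', (365, inf) -> 'c'.
days = pd.Series([90, 200, 400, 10, 365, 366])
groups = pd.cut(days, [-1, 180, 365, np.inf], labels=['a', 'b', 'c'])
counts = groups.value_counts()

print(counts['a'], counts['b'], counts['c'])  # 2 2 2
```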
1
2016-09-14T14:26:55Z
[ "python", "pandas" ]
What is the range of valid dates for the datetime module?
39,491,327
<p>I'm playing with the Python <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow"><code>datetime</code></a> module. I'm using it to determine the day of the week for a given date. Python conveniently raises a <code>ValueError</code> when the date is invalid; e.g., for February 29 on non-leap years. </p> <p>I have found that for years greater than 10,000 AD, <code>ValueError</code> exceptions are raised for many dates that are not February 29. This leads me to consider that the <code>datetime</code> module is not valid for dates that far in the future.</p> <p><strong>What is the range of valid dates for the <code>datetime</code> module?</strong></p>
-1
2016-09-14T13:09:58Z
39,491,365
<p>Check <a href="https://docs.python.org/2/library/datetime.html#datetime.date.min"><code>date.min</code></a> and <a href="https://docs.python.org/2/library/datetime.html#datetime.date.max"><code>date.max</code></a>:</p> <pre><code>&gt;&gt;&gt; from datetime import date &gt;&gt;&gt; date.min datetime.date(1, 1, 1) &gt;&gt;&gt; date.max datetime.date(9999, 12, 31) </code></pre>
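The same bounds are exposed as datetime.MINYEAR and datetime.MAXYEAR, and constructing a date outside them raises ValueError straight away, which matches the errors reported for years past 9999:

```python
import datetime

print(datetime.MINYEAR, datetime.MAXYEAR)  # 1 9999

# Years outside [MINYEAR, MAXYEAR] are rejected before month/day are
# even considered, hence ValueError for dates in year 10000 and beyond.
try:
    datetime.date(10000, 1, 1)
    raised = False
except ValueError:
    raised = True

print(raised)  # True
```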
5
2016-09-14T13:11:36Z
[ "python", "date", "datetime", "calendar" ]
map() is returning only part of arguments
39,491,355
<p>I have the following code:</p> <pre><code>a = '0' b = '256' mod_add = ['300', '129', '139'] list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>I'd like to check every element in mod_add, but </p> <pre><code>list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>returns only one <code>False</code>. With some values <code>(a = '100', b = '200')</code> it returns <code>'False', 'False', 'False'</code>.</p> <p>What am I doing wrong?</p>
1
2016-09-14T13:11:17Z
39,491,491
<p><code>a</code> and <code>b</code> are strings, they will be rightly treated as <em>iterables</em> by <code>map</code>, not constants as you intend. You should either use a list comprehension or not pass <code>a</code> and <code>b</code> as parameters to <code>map</code>:</p> <pre><code>&gt;&gt;&gt; [a &lt; x &lt; b for x in mod_add] [False, True, True] </code></pre> <p>Comparisons can be chained arbitrarily, so <code>(a &lt; x) and (x &lt; b)</code> can be replaced with <code>a &lt; x &lt; b</code></p> <hr> <p>Comparing integers instead of strings (which is probably what you want) is just another step away:</p> <pre><code>&gt;&gt;&gt; [int(a) &lt; int(i) &lt; int(b) for i in mod_add] [False, True, True] </code></pre>
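Combining both fixes: iterate only over mod_add, chain the comparison, and cast to int so the comparison is numeric rather than lexicographic:

```python
a, b = '0', '256'
mod_add = ['300', '129', '139']

# Numeric comparison via int(); comparing the strings directly would be
# character-by-character (lexicographic) and can mislead for numbers.
result = [int(a) < int(x) < int(b) for x in mod_add]
print(result)  # [False, True, True]

# The lexicographic pitfall in one line:
print('9' < '10', int('9') < int('10'))  # False True
```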
2
2016-09-14T13:16:47Z
[ "python", "python-3.x" ]
map() is returning only part of arguments
39,491,355
<p>I have the following code:</p> <pre><code>a = '0' b = '256' mod_add = ['300', '129', '139'] list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>I'd like to check every element in mod_add, but </p> <pre><code>list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>returns only one <code>False</code>. With some values <code>(a = '100', b = '200')</code> it returns <code>'False', 'False', 'False'</code>.</p> <p>What am I doing wrong?</p>
1
2016-09-14T13:11:17Z
39,491,528
<p>If you really have to use <code>map</code>:</p> <pre><code>list(map(lambda x: (a &lt; x) and (x &lt; b), mod_add)) </code></pre> <p>Edit:</p> <p>In response to the desire to <code>map</code> only one element from the list, it really doesn't make such sense to me to do that. But if that's what you wish to do, you can try:</p> <pre><code>list(map(lambda x: (a &lt; x) and (x &lt; b), [mod_add[0]])) </code></pre> <p>I hope this helps.</p>
2
2016-09-14T13:18:31Z
[ "python", "python-3.x" ]
map() is returning only part of arguments
39,491,355
<p>I have the following code:</p> <pre><code>a = '0' b = '256' mod_add = ['300', '129', '139'] list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>I'd like to check every element in mod_add, but </p> <pre><code>list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>returns only one <code>False</code>. With some values <code>(a = '100', b = '200')</code> it returns <code>'False', 'False', 'False'</code>.</p> <p>What am I doing wrong?</p>
1
2016-09-14T13:11:17Z
39,491,659
<p>This is really the kind of thing you should prefer list comprehensions for.</p> <pre><code>min_, max_ = '0', '256' # do you mean for these to be compared lexicographically?! # be aware that '0' &lt; '1234567890' &lt; '256' is True mod_add = ['300', '129', '139'] result = [min_ &lt; mod &lt; max_ for mod in mod_add] # [False, True, True] </code></pre> <p><a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow"><code>map</code></a> <a href="https://docs.python.org/3/library/functions.html#filter" rel="nofollow"><code>filter</code></a> and <a href="https://docs.python.org/2/library/functions.html#reduce" rel="nofollow"><code>reduce</code></a> (now <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow"><code>functools.reduce</code></a>) are mighty tools to be sure, but Python tends to shy away from them in favor of more verbose, easier to read expressions.</p>
0
2016-09-14T13:23:41Z
[ "python", "python-3.x" ]
map() is returning only part of arguments
39,491,355
<p>I have the following code:</p> <pre><code>a = '0' b = '256' mod_add = ['300', '129', '139'] list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>I'd like to check every element in mod_add, but </p> <pre><code>list(map(lambda a, b, x: (a &lt; x) and (x &lt; b), a, b, mod_add)) </code></pre> <p>returns only one <code>False</code>. With some values <code>(a = '100', b = '200')</code> it returns <code>'False', 'False', 'False'</code>.</p> <p>What am I doing wrong?</p>
1
2016-09-14T13:11:17Z
39,491,749
<pre><code>a = '0' b = '256' mod_add = ['300', '129', '139'] list(map(lambda x: int(a)&lt;int(x)&lt;int(b), mod_add)) </code></pre> <p>Output:</p> <pre><code>[False, True, True] </code></pre>
0
2016-09-14T13:27:40Z
[ "python", "python-3.x" ]
Python/Json:Expecting property name enclosed in double quotes
39,491,420
<p>I've been trying to figure out a good way to load JSON objects in Python. I send this json data:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>to the backend, where it is received as a string; I then use <code>json.loads(data)</code> to parse it.</p> <p>But each time I get the same exception:</p> <pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) </code></pre> <p>I googled it but nothing seems to work besides this solution <code>json.loads(json.dumps(data))</code>, which personally seems to me not that efficient, since it accepts any kind of data, even data that is not in json format.</p> <p>Any suggestions will be much appreciated.</p>
0
2016-09-14T13:13:52Z
39,491,599
<p>Quite simply, that string is not valid JSON. As the error says, JSON documents need to use double quotes.</p> <p>You need to fix the source of the data.</p>
1
2016-09-14T13:21:19Z
[ "python", "json", "parsing" ]
Python/Json:Expecting property name enclosed in double quotes
39,491,420
<p>I've been trying to figure out a good way to load JSON objects in Python. I send this json data:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>to the backend, where it is received as a string; I then use <code>json.loads(data)</code> to parse it.</p> <p>But each time I get the same exception:</p> <pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) </code></pre> <p>I googled it but nothing seems to work besides this solution <code>json.loads(json.dumps(data))</code>, which personally seems to me not that efficient, since it accepts any kind of data, even data that is not in json format.</p> <p>Any suggestions will be much appreciated.</p>
0
2016-09-14T13:13:52Z
39,491,607
<p>JSON strings must use double quotes. Python's <code>json</code> library enforces this, so you are unable to load your string. Your data needs to look like this:</p> <pre><code>{"http://example.org/about": {"http://purl.org/dc/terms/title": [{"type": "literal", "value": "Anna's Homepage"}]}} </code></pre> <p>If that's not something you can do, you could use <code>ast.literal_eval()</code> instead of <code>json.loads()</code>.</p>
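If the producer of the data cannot be fixed, a short sketch of the <code>ast.literal_eval()</code> route (variable names are mine):

```python
import ast
import json

# The single-quoted text is valid Python literal syntax, so the stdlib
# ast module can parse it safely (it evaluates literals only, not code).
text = "{'type': 'literal', 'value': \"Anna's Homepage\"}"
data = ast.literal_eval(text)
print(data['value'])     # Anna's Homepage

# If you need real JSON afterwards, re-serialize with double quotes.
print(json.dumps(data))
```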
0
2016-09-14T13:21:32Z
[ "python", "json", "parsing" ]
Python/Json:Expecting property name enclosed in double quotes
39,491,420
<p>I've been trying to figure out a good way to load JSON objects in Python. I send this JSON data:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>to the backend, where it is received as a string; then I use <code>json.loads(data)</code> to parse it.</p> <p>But each time I get the same exception:</p> <pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) </code></pre> <p>I googled it, but nothing seems to work besides this solution, <code>json.loads(json.dumps(data))</code>, which seems inefficient to me since it accepts any kind of data, even data that is not in JSON format.</p> <p>Any suggestions will be much appreciated.</p>
0
2016-09-14T13:13:52Z
39,491,613
<p>This:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>is not JSON.<br> This:</p> <pre><code>{"http://example.org/about": {"http://purl.org/dc/terms/title": [{"type": "literal", "value": "Anna's Homepage"}]}} </code></pre> <p>is JSON.</p>
2
2016-09-14T13:21:43Z
[ "python", "json", "parsing" ]
Python/Json:Expecting property name enclosed in double quotes
39,491,420
<p>I've been trying to figure out a good way to load JSON objects in Python. I send this JSON data:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>to the backend, where it is received as a string; then I use <code>json.loads(data)</code> to parse it.</p> <p>But each time I get the same exception:</p> <pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) </code></pre> <p>I googled it, but nothing seems to work besides this solution, <code>json.loads(json.dumps(data))</code>, which seems inefficient to me since it accepts any kind of data, even data that is not in JSON format.</p> <p>Any suggestions will be much appreciated.</p>
0
2016-09-14T13:13:52Z
39,491,618
<p>I've checked your JSON data</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>in <a href="http://jsonlint.com/" rel="nofollow">http://jsonlint.com/</a> and the results were:</p> <pre><code>Error: Parse error on line 1: { 'http://example.org/ --^ Expecting 'STRING', '}', got 'undefined' </code></pre> <p>Modifying it to the following string solves the JSON error:</p> <pre><code>{ "http://example.org/about": { "http://purl.org/dc/terms/title": [{ "type": "literal", "value": "Anna's Homepage" }] } } </code></pre>
1
2016-09-14T13:21:51Z
[ "python", "json", "parsing" ]
Python/Json:Expecting property name enclosed in double quotes
39,491,420
<p>I've been trying to figure out a good way to load JSON objects in Python. I send this JSON data:</p> <pre><code>{'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': "Anna's Homepage"}]}} </code></pre> <p>to the backend, where it is received as a string; then I use <code>json.loads(data)</code> to parse it.</p> <p>But each time I get the same exception:</p> <pre><code>ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) </code></pre> <p>I googled it, but nothing seems to work besides this solution, <code>json.loads(json.dumps(data))</code>, which seems inefficient to me since it accepts any kind of data, even data that is not in JSON format.</p> <p>Any suggestions will be much appreciated.</p>
0
2016-09-14T13:13:52Z
39,491,628
<p>As the error clearly says, property names should be enclosed in double quotes instead of single quotes. The string you pass is simply not valid JSON. It should look like</p> <pre><code>{"http://example.org/about": {"http://purl.org/dc/terms/title": [{"type": "literal", "value": "Anna's Homepage"}]}} </code></pre>
1
2016-09-14T13:22:23Z
[ "python", "json", "parsing" ]
Clustering points based on their function values and proximity
39,491,440
<p>I have many points <code>X</code> and their function values <code>f</code> stored in <code>numpy</code> arrays. I want to find <strong>all points</strong> in <code>X</code> that don't have a better point (smaller <code>f</code> value) within a distance <code>r</code>.</p> <p><code>X</code> is hundreds of thousands of points, so I can't precompute <code>sp.spatial.distance.pdist(X)</code> but resort to the following:</p> <pre><code>def cluster(X,f,r): pts,n = np.shape(X) centers = [] for i in range(0,pts): pdist = sp.spatial.distance.cdist(X,[X[i]]) if not np.any(np.logical_and(pdist &lt;= r, f &lt; f[i])): centers.append(i) return centers </code></pre> <p>This takes minutes. Is there a way to quickly cluster based on proximity and another metric?</p>
1
2016-09-14T13:14:30Z
39,491,598
<p>You could partition the space so that you could ignore partitions wholly outside the radius for the point being tested.</p> <p>You could also order the points by f, so you don't need to scan those with smaller values.</p>
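A sketch of the space-partitioning idea using scipy's <code>cKDTree</code> (Euclidean distance, matching the question's default <code>cdist</code> metric; the function name and toy data are mine):

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_kdtree(X, f, r):
    # The kd-tree returns only the neighbours within radius r, so we never
    # compute distances to points outside the ball around X[i].
    tree = cKDTree(X)
    centers = []
    for i in range(len(X)):
        neighbours = tree.query_ball_point(X[i], r)
        # A point is a center iff no neighbour has a strictly smaller f.
        if not any(f[j] < f[i] for j in neighbours):
            centers.append(i)
    return centers

X = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0]])
f = np.array([1.0, 2.0, 3.0])
print(cluster_kdtree(X, f, 1.0))  # [0, 2]
```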
2
2016-09-14T13:21:14Z
[ "python", "numpy", "scipy", "cluster-analysis" ]
Clustering points based on their function values and proximity
39,491,440
<p>I have many points <code>X</code> and their function values <code>f</code> stored in <code>numpy</code> arrays. I want to find <strong>all points</strong> in <code>X</code> that don't have a better point (smaller <code>f</code> value) within a distance <code>r</code>.</p> <p><code>X</code> is hundreds of thousands of points, so I can't precompute <code>sp.spatial.distance.pdist(X)</code> but resort to the following:</p> <pre><code>def cluster(X,f,r): pts,n = np.shape(X) centers = [] for i in range(0,pts): pdist = sp.spatial.distance.cdist(X,[X[i]]) if not np.any(np.logical_and(pdist &lt;= r, f &lt; f[i])): centers.append(i) return centers </code></pre> <p>This takes minutes. Is there a way to quickly cluster based on proximity and another metric?</p>
1
2016-09-14T13:14:30Z
39,493,588
<p>I think one can sum this up as:</p> <p>Build a kd-tree over the points. Query the tree for the points within the given radius of your query point, then check their function values.</p> <pre><code>import numpy as np import scipy.spatial x = np.random.rand(10000,2) # sample data f = np.exp(-x[:,0]**2) # sample function values K = scipy.spatial.KDTree(x) # generate kd-tree of the data set ix = K.query_ball_point(x[0],0.1) # query indices of points within 0.1 of x[0] in euclidean norm # check f[ix] for your function criterion </code></pre> <p>You can query for all points at once if you are interested in that:</p> <pre><code>ix = K.query_ball_point(x,0.04) </code></pre>
1
2016-09-14T14:53:22Z
[ "python", "numpy", "scipy", "cluster-analysis" ]
Clustering points based on their function values and proximity
39,491,440
<p>I have many points <code>X</code> and their function values <code>f</code> stored in <code>numpy</code> arrays. I want to find <strong>all points</strong> in <code>X</code> that don't have a better point (smaller <code>f</code> value) within a distance <code>r</code>.</p> <p><code>X</code> is hundreds of thousands of points, so I can't precompute <code>sp.spatial.distance.pdist(X)</code> but resort to the following:</p> <pre><code>def cluster(X,f,r): pts,n = np.shape(X) centers = [] for i in range(0,pts): pdist = sp.spatial.distance.cdist(X,[X[i]]) if not np.any(np.logical_and(pdist &lt;= r, f &lt; f[i])): centers.append(i) return centers </code></pre> <p>This takes minutes. Is there a way to quickly cluster based on proximity and another metric?</p>
1
2016-09-14T13:14:30Z
39,501,444
<p>You can significantly reduce the number of distance computations by keeping a record. For instance, if j is a neighbor of a center i and has a larger f value, then j can never be a center, since one of its neighbors is i, which has a smaller f value. Please check the following and let me know if you need clarification.</p> <pre><code>def cluster4(X,f,r): pts,n = np.shape(X) centers = np.ones((pts,1),dtype=int) for i in range(pts): if not centers[i]: continue pdist = sp.spatial.distance.cdist(X,[X[i]]) inrange = (pdist &lt;= r) inrange[i] = False lesser = (f &lt; f[i]) if np.any(inrange &amp; lesser): centers[i] = 0 centers[inrange &amp; np.invert(lesser)] = 0 return np.where(centers == 1)[0] </code></pre>
1
2016-09-15T00:15:19Z
[ "python", "numpy", "scipy", "cluster-analysis" ]
Identify string in dataframe and replace content using Python
39,491,482
<p>I have a CSV file that I loaded using Pandas. First, I decided to rename the columns. The dataframe is this:</p> <p><a href="http://i.stack.imgur.com/oFXmJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/oFXmJ.png" alt="enter image description here"></a></p> <p>My goal is to check whether any cell of each row contains the characters <code>\n</code>. If so, the cells containing that string must be modified so that the only content left is what comes after <code>\n</code>. The output of such an algorithm should look like this:</p> <p><a href="http://i.stack.imgur.com/mHPSN.png" rel="nofollow"><img src="http://i.stack.imgur.com/mHPSN.png" alt="enter image description here"></a></p> <p>The code so far is below, but I got stuck on finding and removing <code>\n</code> along with what precedes it.</p> <pre><code>import pandas as pd df = pd.read_csv('prova.csv', sep=',', skiprows=0, header=None,low_memory=False) df.columns = ['A','B','C','D','E','F'] for index, row in df.iterrows(): if '\n' in row[?]: # how do I remove the unwanted characters for each cell? </code></pre> <p>Notice: I want to investigate all the columns, not only those where <code>\n</code> appears.</p> <pre><code>A object B object C object D object E int64 F object dtype: object </code></pre>
1
2016-09-14T13:16:30Z
39,492,104
<p>IIUC, you can use <code>applymap</code> with <code>str.split</code> to split on <code>\n</code> char and take the last split:</p> <pre><code>df['E'] = df['E'].astype(str) df.applymap(lambda x: x.split('\n')[-1]) </code></pre> <p><a href="http://i.stack.imgur.com/26rXK.png" rel="nofollow"><img src="http://i.stack.imgur.com/26rXK.png" alt="Image"></a></p> <p>One liner:</p> <pre><code>df.applymap(lambda x: x.split('\n')[-1] if type(x)==str else x) </code></pre>
3
2016-09-14T13:44:33Z
[ "python", "pandas", "replace", "identity" ]
Identify string in dataframe and replace content using Python
39,491,482
<p>I have a CSV file that I loaded using Pandas. First, I decided to rename the columns. The dataframe is this:</p> <p><a href="http://i.stack.imgur.com/oFXmJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/oFXmJ.png" alt="enter image description here"></a></p> <p>My goal is to check whether any cell of each row contains the characters <code>\n</code>. If so, the cells containing that string must be modified so that the only content left is what comes after <code>\n</code>. The output of such an algorithm should look like this:</p> <p><a href="http://i.stack.imgur.com/mHPSN.png" rel="nofollow"><img src="http://i.stack.imgur.com/mHPSN.png" alt="enter image description here"></a></p> <p>The code so far is below, but I got stuck on finding and removing <code>\n</code> along with what precedes it.</p> <pre><code>import pandas as pd df = pd.read_csv('prova.csv', sep=',', skiprows=0, header=None,low_memory=False) df.columns = ['A','B','C','D','E','F'] for index, row in df.iterrows(): if '\n' in row[?]: # how do I remove the unwanted characters for each cell? </code></pre> <p>Notice: I want to investigate all the columns, not only those where <code>\n</code> appears.</p> <pre><code>A object B object C object D object E int64 F object dtype: object </code></pre>
1
2016-09-14T13:16:30Z
39,492,166
<p>You can use a regular expression with a lookbehind to keep only what comes after a <code>'\n'</code> (or any other character you specify) in a string (avoid naming the variable <code>str</code>, which shadows the built-in):</p> <pre><code>import re text = "onetwo\nthree" print(text) match = re.search('(?&lt;=\\n)\w+', text) print(match.group(0)) # prints: three </code></pre>
3
2016-09-14T13:47:01Z
[ "python", "pandas", "replace", "identity" ]
Identify string in dataframe and replace content using Python
39,491,482
<p>I have a CSV file that I loaded using Pandas. First, I decided to rename the columns. The dataframe is this:</p> <p><a href="http://i.stack.imgur.com/oFXmJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/oFXmJ.png" alt="enter image description here"></a></p> <p>My goal is to check whether any cell of each row contains the characters <code>\n</code>. If so, the cells containing that string must be modified so that the only content left is what comes after <code>\n</code>. The output of such an algorithm should look like this:</p> <p><a href="http://i.stack.imgur.com/mHPSN.png" rel="nofollow"><img src="http://i.stack.imgur.com/mHPSN.png" alt="enter image description here"></a></p> <p>The code so far is below, but I got stuck on finding and removing <code>\n</code> along with what precedes it.</p> <pre><code>import pandas as pd df = pd.read_csv('prova.csv', sep=',', skiprows=0, header=None,low_memory=False) df.columns = ['A','B','C','D','E','F'] for index, row in df.iterrows(): if '\n' in row[?]: # how do I remove the unwanted characters for each cell? </code></pre> <p>Notice: I want to investigate all the columns, not only those where <code>\n</code> appears.</p> <pre><code>A object B object C object D object E int64 F object dtype: object </code></pre>
1
2016-09-14T13:16:30Z
39,492,991
<p><strong><em>Solution</em></strong><br> Use <code>str</code> accessor with <code>split</code> after <code>stack</code> to get a series.</p> <pre><code>df.astype(str).stack().str.split('\n').str[-1].unstack() </code></pre> <p><a href="http://i.stack.imgur.com/EWhBg.png" rel="nofollow"><img src="http://i.stack.imgur.com/EWhBg.png" alt="enter image description here"></a></p> <hr> <h3>Setup Reference</h3> <pre><code>df = pd.DataFrame([ ['bello', 'bot', 'corpo', '105', 245, 'Yes'], ['bello', 'par\nsot', 'testo\ncorpo', '105', 660, 'Yes\nno'], ['bello', 'pic\nhot', 'fallo', '195\n250', 660, 'Yes'], ['bello', 'hot', 'fallo\nbacca', '105', 245, 'Yes'] ], columns=list('ABCDEF')) </code></pre>
3
2016-09-14T14:27:50Z
[ "python", "pandas", "replace", "identity" ]
Pointers vs garbage collection in Python
39,491,706
<p>I understand in Python that assigning a name to an object creates a reference to the object, and that when an objects' ref count goes to zero it is garbage collected. I want to understand whether I should be using pointers in my code or if allowing regular garbage collection is better.</p> <pre><code>import time while True: foo = time.clock() #every iteration get the clock time </code></pre> <p>In my code above, every iteration of the loop assigns the variable name "foo" to reference a new float object returned from time.clock(). Since I know that "foo" will always make reference to a float object returned from time.clock() is it more efficient to use a pointer? (e.g. from ctypes module)? If this were C I would use a pointer but in Python I'm not sure if it matters.</p>
0
2016-09-14T13:25:48Z
39,491,998
<p>There is no notion of pointer in pure Python. What you are referring to is the <code>ctypes</code> module, it is meant to interact with code actually written and compiled in a C compatible language (where pointers actually exist).</p> <p>So your question is invalid.</p>
0
2016-09-14T13:39:04Z
[ "python", "pointers", "garbage-collection", "ctypes" ]
get infinity input PIL (python)
39,491,724
<p>I'm writing a program that converts an image to PNG and resizes it to 512x512, and I want to keep asking for more input.</p> <pre><code>from PIL import Image import string import random put = input("enter your image path:") im = Image.open(put) size = (512,512) if im.size &gt; size: im.thumbnail(size,Image.ANTIALIAS) else: print ("it's ready") ranam = (''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(12))) name = "d:/arjang/images/" + ranam im.save(name + ".png") </code></pre> <p>Please help.</p>
0
2016-09-14T13:26:24Z
39,491,869
<p>This will loop, asking for another image path to resize, until you enter an empty string:</p> <pre><code>from PIL import Image import string import random while True: put = input("enter your image path:") if not put.strip(): break im = Image.open(put) size = (512,512) if im.size &gt; size: im.thumbnail(size,Image.ANTIALIAS) else: print ("it's ready") ranam = (''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(12))) name = "d:/arjang/images/" + ranam im.save(name + ".png") </code></pre>
0
2016-09-14T13:33:06Z
[ "python", "input", "python-imaging-library", "pillow" ]
get infinity input PIL (python)
39,491,724
<p>I'm writing a program that converts an image to PNG and resizes it to 512x512, and I want to keep asking for more input.</p> <pre><code>from PIL import Image import string import random put = input("enter your image path:") im = Image.open(put) size = (512,512) if im.size &gt; size: im.thumbnail(size,Image.ANTIALIAS) else: print ("it's ready") ranam = (''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(12))) name = "d:/arjang/images/" + ranam im.save(name + ".png") </code></pre> <p>Please help.</p>
0
2016-09-14T13:26:24Z
39,491,968
<p>You have been misled by the <code>input()</code> function. It does not do what you would hope a function called <code>input</code> would do: in Python 2, <code>input()</code> evaluates what you type as code.</p> <p>This is so misleading that Python 3 changed <code>input()</code> to do the obvious thing and removed <code>raw_input()</code>.</p> <p>To read a line from standard input you can use this in Python 2:</p> <pre><code>put = raw_input("enter your image path:") </code></pre> <p>or you could do this:</p> <pre><code>import sys sys.stdout.write("enter your image path:") sys.stdout.flush() put = sys.stdin.readline().strip() </code></pre>
0
2016-09-14T13:37:46Z
[ "python", "input", "python-imaging-library", "pillow" ]
SWIG Python C++ output array giving 'unknown type' error
39,491,861
<p>I am trying to use swig to wrap some c++ code to pass a numpy array back to python. I am following some examples I have seen online to use numpy.i. Here is what my code looks like.</p> <p>I am using this as the function definition in my class header file:</p> <pre><code>bool grabFrame(int buf_size, unsigned char *buf); </code></pre> <p>In my interface file I have:</p> <pre><code>/* File : OV4682Interface.i */ %module OV4682Interface %include "std_string.i" %{ #define SWIG_FILE_WITH_INIT #include "OV4682FrameGrabber.h" %} %include "numpy.i" %init %{ import_array(); %} %apply (int DIM1, unsigned char* ARGOUT_ARRAY1) {(int buf_size, unsigned char *buf)}; %include "../inc/OV4682FrameGrabber.h" </code></pre> <p>My Python code looks like this:</p> <pre><code>import numpy as np import OV4682Interface as ov width = 672 height = 380 buf_size = width*height*2 buf = np.zeros(buf_size, dtype=np.uint8) grab = ov.OV4682FrameGrabber() grab.grabFrame(buf) </code></pre> <p>When I run this I get the following error:</p> <p>Traceback (most recent call last): File "OV4682FrameGrabberTest.py", line 44, in grab.grabFrame(buf) File "/home/ubuntu/rgb_ir_frame_grabber/build/lib/OV4682Interface.py", line 117, in grabFrame def grabFrame(self, *args): return _OV4682Interface.OV4682FrameGrabber_grabFrame(self, *args) TypeError: Int dimension expected. 'unknown type' given.</p> <p>For some reason I am getting an error saying the type is unknown for the passed in array, but I have explicitly set the dtype to be np.uint8. I was wondering if anyone can point me to what I am doing incorrectly here as I am a bit stumped.</p>
0
2016-09-14T13:32:42Z
39,509,531
<p>Looks like I did not look closely enough at the answer for this post (<a href="https://stackoverflow.com/questions/36222455/swigcpython-passing-and-receiving-c-arrays">SWIG+c+Python: Passing and receiving c arrays</a>). I did not notice that swig adds the output array to the return list and just wants the size of the array passed in. </p> <p>I changed the method definition to this:</p> <pre><code>void grabFrame(int buf_size, unsigned char *buf); </code></pre> <p>and I changed the python call to be this:</p> <pre><code>buf = grab.grabFrame(np.shape(buf)[0]) </code></pre> <p>This now works and returns the array of data I wanted.</p>
0
2016-09-15T11:06:20Z
[ "python", "c++", "numpy", "swig" ]
How to clear Chrome HSTS-Cache?
39,491,918
<p>Background: I am developing a host management tool, and I want to implement the following function: automatically delete the HSTS cache of the Chrome browser by clicking a button. I run <a href="https://github.com/craSH/Chrome-STS" rel="nofollow">python scripts</a> to delete the HSTS cache file of Chrome. But after I finish deleting that file, I must restart Chrome before the clear operation takes effect. (Maybe Chrome reads the file once and keeps it in memory?)</p> <p>Question: Is there a way to clear HSTS automatically, without opening chrome://net-internals/#hsts, or using my method without needing to restart the Chrome browser?</p> <p>Thanks!</p>
0
2016-09-14T13:35:18Z
39,493,396
<p>The easiest option would be to configure a page on your webserver that sends a <code>Strict-Transport-Security</code> header with <code>max-age=0</code>. Then visit that page through Chrome to clear the HSTS cache entry for that site.</p> <p>Note that some sites have their HSTS entries preloaded into Chrome's source code, so those cannot be cleared. But for sites you own which are not preloaded, the above should work.</p> <p>This also depends on the "clear page" being served over HTTPS, so that the header is processed.</p>
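A minimal sketch of such a "clear page" using only the Python standard library (the handler name and reply text are mine; in real use the server must sit behind TLS, e.g. a reverse proxy, for the header to count):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ClearHSTSHandler(BaseHTTPRequestHandler):
    """Answer every GET with an HSTS header that expires immediately."""

    def do_GET(self):
        self.send_response(200)
        # max-age=0 tells the browser to drop its cached HSTS entry for
        # this host; the response must arrive over HTTPS to be honoured.
        self.send_header("Strict-Transport-Security", "max-age=0")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS entry cleared for this host\n")

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

# To run: HTTPServer(("", 8443), ClearHSTSHandler).serve_forever()
```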
0
2016-09-14T14:44:25Z
[ "python", "node.js", "google-chrome", "hsts" ]
assign series values based on index
39,491,956
<p>I have a simple series:</p> <pre><code>&gt;&gt;&gt; s = pd.Series(index=pd.date_range('2016-09-01','2016-09-05')) &gt;&gt;&gt; s 2016-09-01 NaN 2016-09-02 NaN 2016-09-03 NaN 2016-09-04 NaN 2016-09-05 NaN Freq: D, dtype: float64 </code></pre> <p>Am I able to set series values based on its index? Let's say I want to set the series values to the <code>dayofweek</code> of the corresponding index entry. Of course, I can accomplish this easily by constructing the series from scratch:</p> <pre><code>&gt;&gt;&gt; dr = pd.date_range('2016-09-01','2016-09-05') &gt;&gt;&gt; s = pd.Series(data=dr.dayofweek, index=dr) &gt;&gt;&gt; s 2016-09-01 3 2016-09-02 4 2016-09-03 5 2016-09-04 6 2016-09-05 0 Freq: D, dtype: int32 </code></pre> <p>I am also able to accomplish this using a dataframe: <code>df['old_column'] = df.index.dayofweek</code>. Is it possible to set a series in a similar manner (using the only "column" a series has)? The <code>.apply()</code> and <code>.map()</code> methods seem to be no help, since they do not work with index values...</p>
1
2016-09-14T13:36:54Z
39,492,011
<p>You can do it like this:</p> <pre><code>s[s.index] = s.index.dayofweek s Out: 2016-09-01 3 2016-09-02 4 2016-09-03 5 2016-09-04 6 2016-09-05 0 Freq: D, dtype: int32 </code></pre>
2
2016-09-14T13:39:38Z
[ "python", "pandas", "series" ]
assign series values based on index
39,491,956
<p>I have a simple series:</p> <pre><code>&gt;&gt;&gt; s = pd.Series(index=pd.date_range('2016-09-01','2016-09-05')) &gt;&gt;&gt; s 2016-09-01 NaN 2016-09-02 NaN 2016-09-03 NaN 2016-09-04 NaN 2016-09-05 NaN Freq: D, dtype: float64 </code></pre> <p>Am I able to set series values based on its index? Let's say I want to set the series values to the <code>dayofweek</code> of the corresponding index entry. Of course, I can accomplish this easily by constructing the series from scratch:</p> <pre><code>&gt;&gt;&gt; dr = pd.date_range('2016-09-01','2016-09-05') &gt;&gt;&gt; s = pd.Series(data=dr.dayofweek, index=dr) &gt;&gt;&gt; s 2016-09-01 3 2016-09-02 4 2016-09-03 5 2016-09-04 6 2016-09-05 0 Freq: D, dtype: int32 </code></pre> <p>I am also able to accomplish this using a dataframe: <code>df['old_column'] = df.index.dayofweek</code>. Is it possible to set a series in a similar manner (using the only "column" a series has)? The <code>.apply()</code> and <code>.map()</code> methods seem to be no help, since they do not work with index values...</p>
1
2016-09-14T13:36:54Z
39,492,223
<p>When using <code>apply</code> on a series, you cannot access the index values. However, you can when using <code>apply</code> on a dataframe. So, convert to a dataframe first.</p> <pre><code>s.to_frame().apply(lambda x: x.name.dayofweek, axis=1) 2016-09-01 3 2016-09-02 4 2016-09-03 5 2016-09-04 6 2016-09-05 0 Freq: D, dtype: int64 </code></pre> <p>This is a demonstration of how to access the index value via <code>apply</code>. If assigning a column to be the <code>dayofweek</code> values is the only objective, <code>s.index.dayofweek</code> is far more appropriate.</p>
0
2016-09-14T13:50:31Z
[ "python", "pandas", "series" ]
Python virtualenv ImportError: No module named _vendor
39,492,165
<p>I installed virtualenv on my system but when I want to actually create a virtual environment I get the following error:</p> <pre><code>... Successfully installed virtualenv-15.0.3 $ virtualenv venv New python executable in /Users/.../venv/bin/python Installing setuptools, pip, wheel... Complete output from command /Users/.../venv/bin/python - setuptools pip wheel: Traceback (most recent call last): File "&lt;stdin&gt;", line 9, in &lt;module&gt; File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 578, in get_data loader = get_loader(package) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 464, in get_loader return find_loader(fullname) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 474, in find_loader for importer in iter_importers(fullname): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 430, in iter_importers __import__(pkg) ImportError: No module named _vendor ---------------------------------------- ...Installing setuptools, pip, wheel...done. Traceback (most recent call last): File "/usr/local/bin/virtualenv", line 11, in &lt;module&gt; sys.exit(main()) File "/Library/Python/2.7/site-packages/virtualenv.py", line 711, in main symlink=options.symlink) File "/Library/Python/2.7/site-packages/virtualenv.py", line 944, in create_environment download=download, File "/Library/Python/2.7/site-packages/virtualenv.py", line 900, in install_wheel call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT) File "/Library/Python/2.7/site-packages/virtualenv.py", line 795, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /Users/.../venv/bin/python - setuptools pip wheel failed with error code 1 </code></pre> <p>Any suggestions why this fails?</p>
0
2016-09-14T13:46:58Z
39,499,763
<p>This seems to be a problem with <code>pip</code> in combination with the native OS X version of Python. I downloaded python from <a href="https://www.python.org/" rel="nofollow">https://www.python.org/</a> and was able to install <code>pip</code> and <code>virtualenv</code> properly.</p>
0
2016-09-14T21:17:14Z
[ "python", "pip", "virtualenv" ]
Scikit dendrogram: How to disable output?
39,492,171
<p>If I run <code>dendrogram</code> from the scipy library:</p> <pre><code>from scipy.cluster.hierarchy import linkage, dendrogram # ... X = np.asarray(X) Z = linkage(X, 'single', 'correlation') plt.figure(figsize=(16,8)) dendrogram(Z, color_threshold=0.7) </code></pre> <p>I get <em>a ton</em> of <code>print</code> output in my ipython notebook:</p> <pre><code>{'color_list': ['g', 'r', 'c', 'm', 'y', ... 0.70780175324891315, 0.70172263980890581], [0.0, 0.54342622932769225, 0.54342622932769225, 0.0], [0.0, 0.46484932243120658, 0.46484932243120658, 0.0], ... 177, 196, 82, 19, 108]} </code></pre> <p>How can I disable this? I'm only interested in the actual dendrogram.</p>
1
2016-09-14T13:47:24Z
39,492,212
<p>to redirect print to nothing:</p> <pre><code>import os import sys f = open(os.devnull, 'w') temp = sys.stdout sys.stdout = f # print is disabled here sys.stdout = temp # print works again! </code></pre>
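On Python 3.4+ the same idea is available as a context manager, which restores <code>sys.stdout</code> automatically even if the wrapped code raises (a sketch, independent of scikit itself):

```python
import contextlib
import io
import os

# Discard everything printed inside the block.
with open(os.devnull, "w") as devnull:
    with contextlib.redirect_stdout(devnull):
        print("this line goes nowhere")

# The same tool can capture the output instead of discarding it.
captured = io.StringIO()
with contextlib.redirect_stdout(captured):
    print("hello")
print(captured.getvalue())  # hello
```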
1
2016-09-14T13:50:08Z
[ "python", "scikit-learn" ]
how to use an open file to reuse it in several functions?
39,492,250
<p>I am a beginner in Python and not completely bilingual, so I hope you understand me. I'm trying to develop a program where anyone can open a file in order to display its contents in a matplotlib graph. To do this I use a function called <code>read_file()</code>, with which I get the data and insert it into a <code>Listbox</code> without any problems. My concern arises when I want to use the information contained in the file from another function called <code>show_graph()</code>; in this part I need the file loaded in the <code>read_file()</code> function, and the only way I have achieved this is by adding:</p> <pre><code>f = open('example1.las') log = LASReader(f, null_subs=np.nan) </code></pre> <p>with which I can plot, but it is not practical for me. In other words, how can I reuse an open file in several functions?</p> <p>Could someone give me their support to solve this, please?</p> <p>Here is the complete code:</p> <pre><code>from Tkinter import * from las import LASReader from pprint import pprint import tkFileDialog import matplotlib, sys matplotlib.use('TkAgg') import numpy as np from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg from matplotlib.figure import Figure import matplotlib.pyplot as plt root = Tk() root.geometry("900x700+10+10") def read_file(): filename = tkFileDialog.askopenfilename() f = open(filename) log = LASReader(f, null_subs=np.nan) for curve in log.curves.names: parent.insert(END,curve) def add_name(): it = parent.get(ACTIVE) child.insert(END, it) def show_graph(): child = Listbox(root, selectmode=MULTIPLE) try: s = child.selection_get() if s == "GR": print 'selected:', s f = open('example1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['GR'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.GR.descr + " (%s)" % log.curves.GR.units) plt.ylim(log.stop, log.start)
plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) elif s == "NPHI": print 'selected:', s f = open('Shamar-1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['NPHI'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.NPHI.descr + " (%s)" % log.curves.NPHI.units) plt.ylim(log.stop, log.start) plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) elif s == "DPHI": print 'selected:', s f = open('Shamar-1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['DPHI'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.DPHI.descr + " (%s)" % log.curves.DPHI.units) plt.ylim(log.stop, log.start) plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) except: print 'no selection' def remove_name(): child.delete(ACTIVE) def btnClick(): pass e = Entry(root) e.pack(padx=5) b = Button(root, text="OK", command=btnClick) b.pack(pady=5) # create the canvas, size in pixels canvas = Canvas(width = 490, height = 600, bg = 'grey') # pack the canvas into a frame/form canvas.place(x=340, y=50) etiqueta = Label(root, text='Nemonics:') etiqueta.place(x=10, y=30) parent = Listbox(root) root.title("Viewer") parent.place(x=5, y=50) selec_button = Button(root, text='Graph', command=show_graph) selec_button.place(x=340, y=20) remove_button =
Button(root, text='&lt;&lt;delete', command=remove_name) remove_button.place(x=138, y=150) add_button = Button(root, text='Add&gt;&gt;', command=add_name) add_button.place(x=138, y=75) child = Listbox(root) child.place(x=210, y=50) butt = Button(root, text="load file", command=read_file) butt.place(x=10, y=5) root.mainloop() </code></pre>
0
2016-09-14T13:52:10Z
39,492,414
<p>You can use a global variable to keep it: declare it with the <code>global</code> keyword inside each function that assigns to it (<code>f</code> in your case). However, I don't recommend it if you are modifying the file.</p>
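To illustrate, here is a minimal, self-contained sketch of the <code>global</code> approach. <code>LASReader</code> is replaced by a stand-in that just keeps the file's lines, and the function names mirror the ones in the question; only the <code>global log</code> mechanics are the point:

```python
import os
import tempfile

# Module-level slot shared by every function below.
# It stays None until read_file() loads something.
log = None

def read_file(filename):
    """Load the file once and keep the parsed result in the global 'log'."""
    global log  # without this, 'log = ...' would create a local variable
    with open(filename) as f:
        # Stand-in for LASReader(f, null_subs=np.nan): keep the raw lines.
        log = f.read().splitlines()

def show_graph():
    """Reuse the already-loaded data instead of reopening the file."""
    if log is None:
        return "no file loaded yet"
    return "plotting %d curves" % len(log)

# Demo with a throwaway file standing in for 'example1.las'.
fd, path = tempfile.mkstemp(suffix=".las")
with os.fdopen(fd, "w") as tmp:
    tmp.write("DEPT\nGR\nNPHI\nDPHI\n")

print(show_graph())   # -> no file loaded yet
read_file(path)
print(show_graph())   # -> plotting 4 curves
os.remove(path)
```

Since <code>read_file()</code> rebinds the module-level name, any later function sees the parsed data without reopening the file; the trade-off is the hidden shared state that the warning above is about.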
0
2016-09-14T14:00:11Z
[ "python", "tkinter" ]
how to use an open file for reuse it in severals functions?
39,492,250
<p>I am a beginner in Python and not completely bilingual, so I hope you understand me. I'm trying to develop a program where anyone can open a file in order to display its contents in a matplotlib graph. To do this I use a function called <code>read_file()</code>, with which I get the data and insert it into a <code>Listbox</code> without any problems. I accomplished the functionality, but my concern arises when I want to use the information contained in the file from another function called <code>show_graph()</code>. In this part I need to use the file loaded in the <code>read_file()</code> function; the only way I have achieved this is by adding:</p> <pre><code>f = open('example1.las') log = LASReader(f, null_subs=np.nan) </code></pre> <p>with which I can plot, but this is not practical for me. In other words, how can I reuse an open file in several functions?</p> <p>Could someone give me their support to solve this, please?</p> <p>Here is the complete code:</p> <pre><code>from Tkinter import * from las import LASReader from pprint import pprint import tkFileDialog import matplotlib, sys matplotlib.use('TkAgg') import numpy as np from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg from matplotlib.figure import Figure import matplotlib.pyplot as plt root = Tk() root.geometry("900x700+10+10") def read_file(): filename = tkFileDialog.askopenfilename() f = open(filename) log = LASReader(f, null_subs=np.nan) for curve in log.curves.names: parent.insert(END,curve) def add_name(): it = parent.get(ACTIVE) child.insert(END, it) def show_graph(): child = Listbox(root, selectmode=MULTIPLE) try: s = child.selection_get() if s == "GR": print 'selected:', s f = open('example1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['GR'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.GR.descr + " (%s)" % log.curves.GR.units) plt.ylim(log.stop, log.start) 
plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) elif s == "NPHI": print 'selected:', s f = open('Shamar-1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['NPHI'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.NPHI.descr + " (%s)" % log.curves.NPHI.units) plt.ylim(log.stop, log.start) plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) elif s == "DPHI": print 'selected:', s f = open('Shamar-1.las') log = LASReader(f, null_subs=np.nan) fig = plt.figure(figsize=(6, 7.5)) plt.plot(log.data['DPHI'], log.data['DEPT']) plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units) plt.xlabel(log.curves.DPHI.descr + " (%s)" % log.curves.DPHI.units) plt.ylim(log.stop, log.start) plt.title(log.well.WELL.data + ', ' + log.well.DATE.data) plt.grid() dataPlot = FigureCanvasTkAgg(fig, master=root) dataPlot.show() dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2, sticky=W+E+N+S, padx=380, pady=52) except: print 'no selection' def remove_name(): child.delete(ACTIVE) def btnClick(): pass e = Entry(root) e.pack(padx=5) b = Button(root, text="OK", command=btnClick) b.pack(pady=5) # create the canvas, size in pixels canvas = Canvas(width = 490, height = 600, bg = 'grey') # pack the canvas into a frame/form canvas.place(x=340, y=50) etiqueta = Label(root, text='Nemonics:') etiqueta.place(x=10, y=30) parent = Listbox(root) root.title("Viewer") parent.place(x=5, y=50) selec_button = Button(root, text='Graph', command=show_graph) selec_button.place(x=340, y=20) remove_button = 
Button(root, text='&lt;&lt;delete', command=remove_name) remove_button.place(x=138, y=150) add_button = Button(root, text='Add&gt;&gt;', command=add_name) add_button.place(x=138, y=75) child = Listbox(root) child.place(x=210, y=50) butt = Button(root, text="load file", command=read_file) butt.place(x=10, y=5) root.mainloop() </code></pre>
0
2016-09-14T13:52:10Z
39,492,744
<p>Is this the lasreader you use? <a href="https://scipy.github.io/old-wiki/pages/Cookbook/LASReader.html" rel="nofollow">https://scipy.github.io/old-wiki/pages/Cookbook/LASReader.html</a></p> <p>Then you should not need to give it a file handle. Just give it the filename and this should do the trick. The reader will close the file after parsing.</p> <pre><code>def read_file(): filename = tkFileDialog.askopenfilename() log = LASReader(filename, null_subs=np.nan) </code></pre> <p>If you have read the contents of a file, you can usually close it afterwards:</p> <pre><code>f = open(filename) log = LASReader(f, null_subs=np.nan) f.close() </code></pre> <p>or use <code>with</code>, which closes the file for you automatically:</p> <pre><code>with open(filename, 'r') as f: log = LASReader(f, null_subs=np.nan) </code></pre>
0
2016-09-14T14:16:21Z
[ "python", "tkinter" ]
python, pandas, dataframe, rows to columns
39,492,251
<p>I've got a dataframe I pulled from a poorly organized SQL table. That table has unique rows for every channel. I can extract that info to a python dataframe, and intend to do further processing, but for now I just want to get it to a more usable format.</p> <p>Sample input:</p> <pre><code>C = pd.DataFrame() A = np.array([datetime.datetime(2016,8,8,0,0,1,1000),45,'foo1',1]) B = pd.DataFrame(A.reshape(1,4),columns = ['date','chNum','chNam','value']) C = C.append(B) A = np.array([datetime.datetime(2016,8,8,0,0,1,1000),46,'foo2',12.3]) B = pd.DataFrame(A.reshape(1,4),columns = ['date','chNum','chNam','value']) C = C.append(B) A = np.array([datetime.datetime(2016,8,8,0,0,2,1000),45,'foo1',10]) B = pd.DataFrame(A.reshape(1,4),columns = ['date','chNum','chNam','value']) C = C.append(B) A = np.array([datetime.datetime(2016,8,8,0,0,2,1000),46,'foo2',11.3]) B = pd.DataFrame(A.reshape(1,4),columns = ['date','chNum','chNam','value']) C = C.append(B) </code></pre> <p>Produces:</p> <pre><code> date chNum chNam value 0 2016-08-08 00:00:01.001000 45 foo1 1 0 2016-08-08 00:00:01.001000 46 foo2 12.3 0 2016-08-08 00:00:02.001000 45 foo1 10 0 2016-08-08 00:00:02.001000 46 foo2 11.3 </code></pre> <p>I want:</p> <pre><code> date foo1 foo2 2016-08-08 00:00:01.001000 1 12.3 2016-08-08 00:00:02.001000 10 11.3 </code></pre> <p>I have a solution: make a list of unique dates, and for each date loop through the dataframe and pull off each channel, making a new row. This is kind of tedious (and error prone) to program, so I was wondering if there's a better way to utilize Pandas tools.</p>
1
2016-09-14T13:52:19Z
39,492,419
<p>Use <code>set_index</code> then <code>unstack</code> to pivot</p> <pre><code>C.set_index(['date', 'chNum', 'chNam'])['value'].unstack(['chNam', 'chNum']) </code></pre> <p><a href="http://i.stack.imgur.com/EWjS7.png" rel="nofollow"><img src="http://i.stack.imgur.com/EWjS7.png" alt="enter image description here"></a></p> <hr> <p>To get exactly what you asked for</p> <pre><code>C.set_index(['date', 'chNam'])['value'].unstack().rename_axis(None, 1) </code></pre> <p><a href="http://i.stack.imgur.com/h6G9z.png" rel="nofollow"><img src="http://i.stack.imgur.com/h6G9z.png" alt="enter image description here"></a></p>
2
2016-09-14T14:00:20Z
[ "python", "pandas", "dataframe" ]
Django Capturing multiple url parameters in request.GET
39,492,393
<p>I am trying to get all query parameters out of a request</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france </code></pre> <p>I tried</p> <pre><code>animals = request.GET.get('animal__in','') countries = request.GET.get('countries__in','') </code></pre> <p>but then animals and countries are not lists, they are just strings. Is there a more Django way to do this whole capturing?</p> <p>Edit: It's important that I am using it in django-admin for filtering, where these two are not the same:</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france url/?animal__in=dog&amp;animal__in=cat&amp;countries__in=france </code></pre>
0
2016-09-14T13:59:20Z
39,492,436
<p>Send the parameters in the proper HTTP format:</p> <pre><code>?animal__in=dog&amp;animal__in=cat&amp;countries__in=france </code></pre> <p>and do </p> <pre><code>request.GET.getlist('animal__in') </code></pre>
0
2016-09-14T14:01:12Z
[ "python", "django", "django-admin", "django-urls" ]
Django Capturing multiple url parameters in request.GET
39,492,393
<p>I am trying to get all query parameters out of a request</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france </code></pre> <p>I tried</p> <pre><code>animals = request.GET.get('animal__in','') countries = request.GET.get('countries__in','') </code></pre> <p>but then animals and countries are not lists, they are just strings. Is there a more Django way to do this whole capturing?</p> <p>Edit: It's important that I am using it in django-admin for filtering, where these two are not the same:</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france url/?animal__in=dog&amp;animal__in=cat&amp;countries__in=france </code></pre>
0
2016-09-14T13:59:20Z
39,492,439
<p><code>request.GET.getlist('some_list_field')</code></p>
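For context, a runnable sketch of the repeated-key behavior this relies on. Django isn't assumed to be installed here, so the standard library's <code>parse_qs</code> stands in for <code>request.GET</code>; it follows the same convention that <code>getlist()</code> exposes (all values of a repeated key) versus <code>get()</code> (a single value):

```python
try:
    from urllib.parse import parse_qs   # Python 3
except ImportError:
    from urlparse import parse_qs       # Python 2

# The query string with the key repeated, i.e. the proper HTTP format:
query = "animal__in=dog&animal__in=cat&countries__in=france"
params = parse_qs(query)

print(params["animal__in"])     # -> ['dog', 'cat']
print(params["countries__in"])  # -> ['france']
```

In a view, the equivalent call would be <code>request.GET.getlist('animal__in')</code>, which returns <code>['dog', 'cat']</code> for that query string.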
0
2016-09-14T14:01:24Z
[ "python", "django", "django-admin", "django-urls" ]
Django Capturing multiple url parameters in request.GET
39,492,393
<p>I am trying to get all query parameters out of a request</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france </code></pre> <p>I tried</p> <pre><code>animals = request.GET.get('animal__in','') countries = request.GET.get('countries__in','') </code></pre> <p>but then animals and countries are not lists, they are just strings. Is there a more Django way to do this whole capturing?</p> <p>Edit: It's important that I am using it in django-admin for filtering, where these two are not the same:</p> <pre><code>url/?animal__in=dog,cat&amp;countries__in=france url/?animal__in=dog&amp;animal__in=cat&amp;countries__in=france </code></pre>
0
2016-09-14T13:59:20Z
39,493,634
<p><code>split(',')</code> works fine, though it's not really a Django feature:</p> <pre><code>animals = request.GET.get('animal__in','').split(',') countries = request.GET.get('countries__in','').split(',') </code></pre>
0
2016-09-14T14:55:30Z
[ "python", "django", "django-admin", "django-urls" ]
Can you use self.assertRaises as an async context manager?
39,492,402
<p>I'd like to test that a python 3 coro fails with a particular exception, but this functionality doesn't seem to be implemented.</p> <pre><code>async with self.assertRaises(TestExceptionType): await my_func() </code></pre> <p>as the unit test fails like this:</p> <pre><code>... File "/Users/...../tests.py", line 144, in go async with self.assertRaises(TestExceptionType): AttributeError: __aexit__ </code></pre> <p>So my question is: should this work? And if not, what's the best way to assert a failing async function?</p>
1
2016-09-14T13:59:42Z
39,494,567
<p>Just use the classic good old <code>with</code> statement around the <code>await</code> call:</p> <pre><code>with self.assertRaises(TestExceptionType): await my_func() </code></pre> <p>It works.</p>
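A self-contained sketch of the pattern, with a hypothetical <code>my_func</code> and exception type; <code>asyncio.run</code> is used here for brevity, which requires Python 3.7+, whereas the original answer predates it:

```python
import asyncio
import unittest

class TestExceptionType(Exception):
    pass

async def my_func():
    raise TestExceptionType("boom")

class MyTest(unittest.TestCase):
    def test_raises(self):
        async def go():
            # A plain synchronous 'with' works: assertRaises catches the
            # exception when the block exits, so no __aexit__ is needed.
            with self.assertRaises(TestExceptionType):
                await my_func()
        asyncio.run(go())

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```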
1
2016-09-14T15:39:33Z
[ "python", "unit-testing", "python-3.x", "python-asyncio" ]
Splitting numpy array field values that are matrices into column vectors
39,492,405
<p>I have the following numpy structured array:</p> <pre><code>x = np.array([(22, 2, -1000000000.0, [1000,2000.0]), (22, 2, 400.0, [1000,2000.0])], dtype=[('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;f4'), ('f4', '&lt;f4',2)]) </code></pre> <p>As you can see, field 'f4' is a matrix:</p> <pre><code>In [63]: x['f4'] Out[63]: array([[ 1000., 2000.], [ 1000., 2000.]], dtype=float32) </code></pre> <p>My end goal is to have a numpy structured array that only has vectors. I was wondering how to split 'f4' into two fields ('f41' and 'f42') where each field represents a column of the matrix.</p> <pre><code>In [67]: x Out[67]: array([(22, 2, -1000000000.0, 1000.0, 2000.0), (22, 2, 400.0, 1000.0, 2000.0)], dtype=[('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;f4'), ('f41', '&lt;f4'), ('f42', '&lt;f4')]) </code></pre> <p>Also I was wondering if it was possible to achieve this while using operations that modify the array in place or with minimal copying of the original data.</p>
5
2016-09-14T13:59:50Z
39,493,124
<p>You can do this by creating a new view (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html" rel="nofollow">np.view</a>) of the array, which will not copy:</p> <pre><code>import numpy as np x = np.array([(22, 2, -1000000000.0, [1000,2000.0]), (22, 2, 400.0, [1000,2000.0])], dtype=[('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;f4'), ('f4', '&lt;f4', 2)]) xNewView = x.view(dtype=[('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;f4'), ('f41', '&lt;f4'), ('f42', '&lt;f4')]) print(np.may_share_memory(x, xNewView)) # True print(xNewView) # array([(22, 2, -1000000000.0, 1000.0, 2000.0), # (22, 2, 400.0, 1000.0, 2000.0)], # dtype=[('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;f4'), # ('f41', '&lt;f4'), ('f42', '&lt;f4')]) print(xNewView['f41']) # array([ 1000., 1000.], dtype=float32) print(xNewView['f42']) # array([ 2000., 2000.], dtype=float32) </code></pre> <p><code>xNewView</code> can then be used instead of <code>x</code>.</p>
3
2016-09-14T14:33:07Z
[ "python", "numpy", "structured-array" ]
How to extend the logger.Logging Class?
39,492,471
<p>I would like to start with a basic logging class that inherits from Python's <code>logging.Logger</code> class. However, I am not sure about how I should be constructing my class so that I can establish the basics needed for customising the inherited logger.</p> <p>This is what I have so far in my <code>logger.py</code> file:</p> <pre><code>import sys import logging from logging import DEBUG, INFO, ERROR class MyLogger(object): def __init__(self, name, format="%(asctime)s | %(levelname)s | %(message)s", level=INFO): # Initial construct. self.format = format self.level = level self.name = name # Logger configuration. self.console_formatter = logging.Formatter(self.format) self.console_logger = logging.StreamHandler(sys.stdout) self.console_logger.setFormatter(self.console_formatter) # Complete logging config. self.logger = logging.getLogger("myApp") self.logger.setLevel(self.level) self.logger.addHandler(self.console_logger) def info(self, msg, extra=None): self.logger.info(msg, extra=extra) def error(self, msg, extra=None): self.logger.error(msg, extra=extra) def debug(self, msg, extra=None): self.logger.debug(msg, extra=extra) def warn(self, msg, extra=None): self.logger.warn(msg, extra=extra) </code></pre> <p>This is the main <code>myApp.py</code>:</p> <pre><code>import entity from core import MyLogger my_logger = MyLogger("myApp") def cmd(): my_logger.info("Hello from %s!" % ("__CMD")) entity.third_party() entity.another_function() cmd() </code></pre> <p>And this is the <code>entity.py</code> module:</p> <pre><code># Local modules from core import MyLogger # Global modules import logging from logging import DEBUG, INFO, ERROR, CRITICAL my_logger = MyLogger("myApp.entity", level=DEBUG) def third_party(): my_logger.info("Initial message from: %s!" 
% ("__THIRD_PARTY")) def another_function(): my_logger.warn("Message from: %s" % ("__ANOTHER_FUNCTION")) </code></pre> <p>When I run the main app, I get this:</p> <pre><code>2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY! 2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY! 2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION 2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION 2016-09-14 12:40:50,445 | INFO | Hello from __CMD! 2016-09-14 12:40:50,445 | INFO | Hello from __CMD! </code></pre> <p>Everything is printed twice, as I have probably failed to set up the logger class properly.</p> <p>--- <strong>UPDATE (01): Clarifying My Goals</strong> ---</p> <p><strong>(1)</strong> I would like to encapsulate the main logging functionality in one single location so I can do this:</p> <pre><code> from mylogger import MyLogger my_logger = MyLogger("myApp") my_logger.info("Hello from %s!" % ("__CMD")) </code></pre> <p><strong>(2)</strong> I am planning to use <code>CustomFormatter</code> and <code>CustomAdapter</code> classes. This bit does not require a custom logging class, those can be plugged in straight away.</p> <p><strong>(3)</strong> I probably do not need to go very deep in terms of customisation of the underlying logger class (records etc.), intercepting <code>logger.info</code>, <code>logging.debug</code> etc. should be enough.</p> <p>So referring back to <a href="http://code.activestate.com/recipes/474089-extending-the-logging-module/" rel="nofollow">this Python recipe</a> that has been circulated many times on these forums:</p> <p>I am trying to find the sweet spot between having a <code>Logger Class</code>, yet still be able to use the built-in functions like assigning <code>Formatters</code> and <code>Adapters</code> etc. 
So everything stays compatible with the <code>logging</code> module.</p> <pre><code>class OurLogger(logging.getLoggerClass()): def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None): # Don't pass all makeRecord args to OurLogRecord bc it doesn't expect "extra" rv = OurLogRecord(name, level, fn, lno, msg, args, exc_info, func) # Handle the new extra parameter. # This if block was copied from Logger.makeRecord if extra: for key in extra: if (key in ["message", "asctime"]) or (key in rv.__dict__): raise KeyError("Attempt to overwrite %r in LogRecord" % key) rv.__dict__[key] = extra[key] return rv </code></pre> <p>--- <strong>UPDATE (02): A work-in-progress possible solution</strong> ---</p> <p>I have created a repo with a simple Python app demonstrating a possible solution. Please feel free to peek and help me improve this.</p> <p><a href="https://github.com/symbolix/xlog_example" rel="nofollow">xlog_example</a></p> <p>This example effectively demonstrates the technique of overriding the <code>logging.Logger</code> class and the <code>logging.LogRecord</code> class through inheritance.</p> <p>Two external items are mixed into the log stream, <code>funcname</code> and <code>username</code>, without using any <code>Formatters</code> or <code>Adapters</code>.</p>
0
2016-09-14T14:02:58Z
39,492,759
<p>This line</p> <pre><code>self.logger = logging.getLogger("myApp") </code></pre> <p>always retrieves a reference to the same logger, so you are adding an additional handler to it every time you instantiate <code>MyLogger</code>. The following would fix your current instance, since you call <code>MyLogger</code> with a different argument both times.</p> <pre><code>self.logger = logging.getLogger(name) </code></pre> <p>but note that you will still have the same problem if you pass the same <code>name</code> argument more than once.</p> <p>What your class needs to do is keep track of which loggers it has already configured.</p> <pre><code>class MyLogger(object): loggers = set() def __init__(self, name, format="%(asctime)s | %(levelname)s | %(message)s", level=INFO): # Initial construct. self.format = format self.level = level self.name = name # Logger configuration. self.console_formatter = logging.Formatter(self.format) self.console_logger = logging.StreamHandler(sys.stdout) self.console_logger.setFormatter(self.console_formatter) # Complete logging config. self.logger = logging.getLogger(name) if name not in self.loggers: self.loggers.add(name) self.logger.setLevel(self.level) self.logger.addHandler(self.console_logger) </code></pre> <p>This doesn't allow you to re-configure a logger at all, but I leave it as an exercise to figure out how to do that properly.</p> <p>The key thing to note, though, is that you can't have two separately configured loggers with the same name.</p> <hr> <p>Of course, the fact that <code>logging.getLogger</code> always returns a reference to the same object for a given name means that your class is working at odds with the <code>logging</code> module. Just configure your loggers once at program start-up, then get references as necessary with <code>getLogger</code>.</p>
1
2016-09-14T14:17:11Z
[ "python", "logging" ]
How to extend the logger.Logging Class?
39,492,471
<p>I would like to start with a basic logging class that inherits from Python's <code>logging.Logger</code> class. However, I am not sure about how I should be constructing my class so that I can establish the basics needed for customising the inherited logger.</p> <p>This is what I have so far in my <code>logger.py</code> file:</p> <pre><code>import sys import logging from logging import DEBUG, INFO, ERROR class MyLogger(object): def __init__(self, name, format="%(asctime)s | %(levelname)s | %(message)s", level=INFO): # Initial construct. self.format = format self.level = level self.name = name # Logger configuration. self.console_formatter = logging.Formatter(self.format) self.console_logger = logging.StreamHandler(sys.stdout) self.console_logger.setFormatter(self.console_formatter) # Complete logging config. self.logger = logging.getLogger("myApp") self.logger.setLevel(self.level) self.logger.addHandler(self.console_logger) def info(self, msg, extra=None): self.logger.info(msg, extra=extra) def error(self, msg, extra=None): self.logger.error(msg, extra=extra) def debug(self, msg, extra=None): self.logger.debug(msg, extra=extra) def warn(self, msg, extra=None): self.logger.warn(msg, extra=extra) </code></pre> <p>This is the main <code>myApp.py</code>:</p> <pre><code>import entity from core import MyLogger my_logger = MyLogger("myApp") def cmd(): my_logger.info("Hello from %s!" % ("__CMD")) entity.third_party() entity.another_function() cmd() </code></pre> <p>And this is the <code>entity.py</code> module:</p> <pre><code># Local modules from core import MyLogger # Global modules import logging from logging import DEBUG, INFO, ERROR, CRITICAL my_logger = MyLogger("myApp.entity", level=DEBUG) def third_party(): my_logger.info("Initial message from: %s!" 
% ("__THIRD_PARTY")) def another_function(): my_logger.warn("Message from: %s" % ("__ANOTHER_FUNCTION")) </code></pre> <p>When I run the main app, I get this:</p> <pre><code>2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY! 2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY! 2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION 2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION 2016-09-14 12:40:50,445 | INFO | Hello from __CMD! 2016-09-14 12:40:50,445 | INFO | Hello from __CMD! </code></pre> <p>Everything is printed twice, as I have probably failed to set up the logger class properly.</p> <p>--- <strong>UPDATE (01): Clarifying My Goals</strong> ---</p> <p><strong>(1)</strong> I would like to encapsulate the main logging functionality in one single location so I can do this:</p> <pre><code> from mylogger import MyLogger my_logger = MyLogger("myApp") my_logger.info("Hello from %s!" % ("__CMD")) </code></pre> <p><strong>(2)</strong> I am planning to use <code>CustomFormatter</code> and <code>CustomAdapter</code> classes. This bit does not require a custom logging class, those can be plugged in straight away.</p> <p><strong>(3)</strong> I probably do not need to go very deep in terms of customisation of the underlying logger class (records etc.), intercepting <code>logger.info</code>, <code>logging.debug</code> etc. should be enough.</p> <p>So referring back to <a href="http://code.activestate.com/recipes/474089-extending-the-logging-module/" rel="nofollow">this Python recipe</a> that has been circulated many times on these forums:</p> <p>I am trying to find the sweet spot between having a <code>Logger Class</code>, yet still be able to use the built-in functions like assigning <code>Formatters</code> and <code>Adapters</code> etc. 
So everything stays compatible with the <code>logging</code> module.</p> <pre><code>class OurLogger(logging.getLoggerClass()): def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None): # Don't pass all makeRecord args to OurLogRecord bc it doesn't expect "extra" rv = OurLogRecord(name, level, fn, lno, msg, args, exc_info, func) # Handle the new extra parameter. # This if block was copied from Logger.makeRecord if extra: for key in extra: if (key in ["message", "asctime"]) or (key in rv.__dict__): raise KeyError("Attempt to overwrite %r in LogRecord" % key) rv.__dict__[key] = extra[key] return rv </code></pre> <p>--- <strong>UPDATE (02): A work-in-progress possible solution</strong> ---</p> <p>I have created a repo with a simple Python app demonstrating a possible solution. Please feel free to peek and help me improve this.</p> <p><a href="https://github.com/symbolix/xlog_example" rel="nofollow">xlog_example</a></p> <p>This example effectively demonstrates the technique of overriding the <code>logging.Logger</code> class and the <code>logging.LogRecord</code> class through inheritance.</p> <p>Two external items are mixed into the log stream, <code>funcname</code> and <code>username</code>, without using any <code>Formatters</code> or <code>Adapters</code>.</p>
0
2016-09-14T14:02:58Z
39,509,253
<p>At this stage, I believe that the research I have made so far and the example provided with the intention to wrap up the solution is sufficient to serve as an answer to my question. In general, there are many approaches that may be utilised to wrap a logging solution. This particular question aimed to focus on a solution that utilises <code>logging.Logger</code> class inheritance so that the internal mechanics can be altered, yet the rest of the functionality kept as it is since it is going to be provided by the original <code>logging.Logger</code> class.</p> <p>Having said that, class inheritance techniques should be used with great care. Many of the facilities provided by the logging module are already sufficient to maintain and run a stable logging workflow. Inheriting from the <code>logging.Logger</code> class is probably good when the goal is some kind of a fundamental change to the way the log data is processed and exported.</p> <p>To summarise this, I see that there are two approaches for wrapping logging functionality:</p> <p><strong>1) The Traditional Logging:</strong></p> <p>This is simply working with the provided logging methods and functions, but wrap them in a module so that some of the generic repetitive tasks are organised in one place. In this way, things like log files, log levels, managing custom <code>Filters</code>, <code>Adapters</code> etc. will be easy.</p> <p><em>I am not sure if a <code>class</code> approach can be utilised in this scenario (and I am not talking about a super class approach which is the topic of the second item) as it seems that things are getting complicated when the logging calls are wrapped inside a class. 
I would like to hear about this issue and I will definitely prepare a question that explores this aspect.</em></p> <p><strong>2) The Logger Inheritance:</strong></p> <p>This approach is based on inheriting from the original <code>logging.Logger</code> class and adding to the existing methods or entirely hijacking them by modifying the internal behaviour. The mechanics are based on the following bit of code:</p> <pre><code># Register our logger. logging.setLoggerClass(OurLogger) my_logger = logging.getLogger("main") </code></pre> <p>From here on, we are relying on our own Logger, yet we are still able to benefit from all of the other logging facilities:</p> <pre><code># We still need a logging handler. ch = logging.StreamHandler() my_logger.addHandler(ch) # Configure a formatter. formatter = logging.Formatter('LOGGER:%(name)12s - %(levelname)7s - &lt;%(filename)s:%(username)s:%(funcname)s&gt; %(message)s') ch.setFormatter(formatter) # Example main message. my_logger.setLevel(DEBUG) my_logger.warn("Hi mom!") </code></pre> <p>This example is crucial as it demonstrates the injection of two data bits, <code>username</code> and <code>funcname</code>, without using custom <code>Adapters</code> or <code>Formatters</code>.</p> <p>Please see the <a href="https://github.com/symbolix/xlog_example" rel="nofollow" title="xlog.py example">xlog.py repo</a> for more information regarding this solution. This is an example that I have prepared based on <a href="http://stackoverflow.com/questions/10973362/python-logging-function-name-file-name-line-number-using-a-single-file">other questions</a> and bits of code from other <a href="http://code.activestate.com/recipes/474089-extending-the-logging-module/" rel="nofollow">sources</a>.</p>
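As a concrete illustration of approach (1), here is a minimal sketch of such a wrapper module (the module and function names are invented for the example): it keeps the one-time handler setup in a single place but hands back stock <code>logging.Logger</code> objects, so <code>Formatters</code>, <code>Adapters</code>, etc. keep working unchanged:

```python
# logutil.py (hypothetical) -- approach (1): no subclassing, just a module
# that performs the repetitive setup once and returns plain stdlib loggers.
import logging
import sys

_FORMAT = "%(asctime)s | %(levelname)s | %(message)s"
_configured = set()

def get_logger(name, level=logging.INFO):
    """Return a stdlib logger, attaching our handler only on first request."""
    logger = logging.getLogger(name)
    if name not in _configured:
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter(_FORMAT))
        logger.addHandler(handler)
        logger.setLevel(level)
        _configured.add(name)
    return logger

log_a = get_logger("myApp")
log_b = get_logger("myApp")   # same logger back, no duplicate handler added
print(log_a is log_b)         # -> True
print(len(log_a.handlers))    # -> 1
```

This also sidesteps the duplicated-output problem from the question, since a handler is attached to each named logger exactly once.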
0
2016-09-15T10:52:05Z
[ "python", "logging" ]
how can sync clients connect to twisted server
39,492,507
<p>I'm new to twisted. I was wondering if I can use multiple sync clients to connect to a twisted server? Or I have to make the client twisted as well? Thanks in advance.</p>
1
2016-09-14T14:05:16Z
39,492,575
<p>Clients do not have to be written w/ twisted (they don't even have to be written in Python); they just have to use a protocol that your server supports.</p>
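To illustrate, here is a self-contained sketch. The server side is a throwaway stdlib echo server standing in for a Twisted server (Twisted isn't assumed to be installed here), and the client is an ordinary blocking socket, which is exactly what a sync client talking to a Twisted server would look like, provided both sides speak the same protocol:

```python
import socket
import threading

def echo_once(listener):
    """Stand-in for the server: accept one client and echo what it sends."""
    conn, _ = listener.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=echo_once, args=(listener,))
t.start()

# Ordinary synchronous client: no Twisted (and no asyncio) anywhere.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello\n")
reply = client.recv(1024)
client.close()
t.join()
listener.close()

print(reply)  # -> b'hello\n'
```

Any number of such clients, in any language, can connect concurrently; on the Twisted side each one simply shows up as a new connection to the reactor.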
2
2016-09-14T14:08:42Z
[ "python", "twisted" ]
scipy.odr output intercept and slope
39,492,513
<p>I'm trying to plot an ODR regression. I used the code from this post as an example: <a href="http://stackoverflow.com/questions/27276951/linear-regression-using-scipy-odr-fails-not-full-rank-at-solution.">sample code</a>. This is my code:</p> <pre><code># ODR regression import scipy.odr as odr def funzione(B,x): return B[0]*x+B[1] linear= odr.Model(funzione) variabili=odr.Data(database.valore_rut,database.valore_cap) regressione_ortogonale=odr.ODR(variabili,linear,beta0=[1., 2.]) output=regressione_ortogonale.run() output.pprint() </code></pre> <p>This is the output:</p> <pre><code>Beta: [ 1.00088365 1.78267543] Beta Std Error: [ 0.04851125 0.41899546] Beta Covariance: [[ 0.00043625 -0.00154797] [-0.00154797 0.03254372]] Residual Variance: 5.39450361153 Inverse Condition #: 0.109803542662 Reason(s) for Halting: Sum of squares convergence </code></pre> <p>Where can I find the intercept and the slope to draw the line?</p> <p>Thanks</p>
0
2016-09-14T14:05:32Z
39,492,978
<p>The attribute <code>output.beta</code> holds the coefficients, which you called <code>B</code> in your code. So the slope is <code>output.beta[0]</code> and the intercept is <code>output.beta[1]</code>.</p> <p>To draw a line, you could do something like (assuming <code>import matplotlib.pyplot as plt</code>):</p> <pre><code># xx holds the x limits of the line to draw. The graph is a straight line, # so we only need the endpoints to draw it. xx = np.array([start, stop]) yy = funzione(output.beta, xx) plt.plot(xx, yy) </code></pre>
0
2016-09-14T14:27:14Z
[ "python", "scipy", "regression" ]
os.path.isfile() returns false for file on network drive
39,492,524
<p>I am trying to test if a file exists on the network drive using <code>os.path.isfile</code>; however, it returns false even when the file is there. Any ideas why this might be, or any other methods I could use to check for this file?</p> <p>I am using Python 2.7 and Windows 10.</p> <p>This returns true as it should:</p> <pre><code>import os if os.path.isfile("C:\test.txt"): print "Is File" else: print "Is Not File" </code></pre> <p>This returns false even though the file exists:</p> <pre><code>import os if os.path.isfile("Q:\test.txt"): print "Is File" else: print "Is Not File" </code></pre>
1
2016-09-14T14:05:58Z
39,492,755
<p>From python <a href="https://docs.python.org/2/library/os.path.html" rel="nofollow">https://docs.python.org/2/library/os.path.html</a>: </p> <blockquote> <p>The os.path module is always the path module suitable for the operating system Python is running on, and therefore usable for <strong>local paths</strong></p> </blockquote> <p>Trying using the full UNC path instead of the mapped drive.</p> <pre><code>import os if os.path.isfile(r"\\full\uncpath\test.txt"): print "Is File" else: print "Is Not File" </code></pre>
1
2016-09-14T14:16:57Z
[ "python", "python-2.7" ]
Read json file and get output values using python
39,492,541
<p>i want to fetch the output of below json file using python </p> <p>Json file </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>{ "Name": [ { "name": "John", "Avg": "55.7" }, { "name": "Rose", "Avg": "71.23" }, { "name": "Lola", "Avg": "78.93" }, { "name": "Harry", "Avg": "95.5" } ] }</code></pre> </div> </div> </p> <p>I want to get the average marks of the person, when i look for harry i.e i need output in below or similar format Harry = 95.5</p> <p>Here's my code</p> <pre><code>import json json_file = open('test.json') //the above contents are stored in the json data = json.load(json_file) do = data['Name'][0] op1 = do['Name'] if op1 is 'Harry': print do['Avg'] </code></pre> <p>But when i run i get error "IOError: [Errno 63] File name too long"</p>
-10
2016-09-14T14:07:06Z
39,493,905
<p>How to print the score of Harry with python 3</p> <pre><code>import json from pprint import pprint with open('test.json') as data_file: data = json.load(data_file) for l in data["Name"]: if (str(l['name']) == 'Harry'): pprint(l['Avg']) </code></pre>
0
2016-09-14T15:06:44Z
[ "python", "json" ]
Python - asyncio - Wait for a future inside a callback
39,492,549
<p>Here is my code, I want <code>my_reader</code> to wait maximum 5 seconds for a future then do something with the <code>my_future.result()</code>. Please note that <code>my_reader</code> is <strong>not</strong> a <strong>coroutine</strong>, it is a <strong>callback</strong>.</p> <pre><code>import asyncio, socket sock = ... async def my_coroutine(): ... def my_reader(): my_future = asyncio.Future() ... # I want to wait (with a timeout) for my_future loop = asyncio.get_event_loop() loop.add_reader(sock, my_reader) loop.run_forever() loop.close() </code></pre> <p>I don't want to use either:</p> <ul> <li><code>AbstractEventLoop.create_datagram_endpoint()</code></li> <li><code>AbstractEventLoop.create_connection()</code></li> <li>...</li> </ul> <p>The socket I have is returned from another module and I have to read packets with a given size. The transfer must happen under 5 seconds.</p> <p><strong>How to wait for a future inside a callback?</strong></p>
1
2016-09-14T14:07:28Z
39,494,263
<p>Sorry, waiting in <strong>callbacks</strong> is impossible. Callback should be executed instantly by definition -- otherwise event loop hangs on callback execution period.</p> <p>Your logic should be built on <strong>coroutines</strong> but low-level <em>on_read</em> callback may inform these coroutines by setting a value of future.</p> <p>See <a href="https://github.com/aio-libs/aiopg/blob/master/aiopg/connection.py" rel="nofollow">aiopg.connection</a> for inspiration. The callback is named <code>Connection._ready</code>.</p>
2
2016-09-14T15:25:55Z
[ "python", "python-3.5", "python-asyncio" ]
Python - asyncio - Wait for a future inside a callback
39,492,549
<p>Here is my code, I want <code>my_reader</code> to wait maximum 5 seconds for a future then do something with the <code>my_future.result()</code>. Please note that <code>my_reader</code> is <strong>not</strong> a <strong>coroutine</strong>, it is a <strong>callback</strong>.</p> <pre><code>import asyncio, socket sock = ... async def my_coroutine(): ... def my_reader(): my_future = asyncio.Future() ... # I want to wait (with a timeout) for my_future loop = asyncio.get_event_loop() loop.add_reader(sock, my_reader) loop.run_forever() loop.close() </code></pre> <p>I don't want to use either:</p> <ul> <li><code>AbstractEventLoop.create_datagram_endpoint()</code></li> <li><code>AbstractEventLoop.create_connection()</code></li> <li>...</li> </ul> <p>The socket I have is returned from another module and I have to read packets with a given size. The transfer must happen under 5 seconds.</p> <p><strong>How to wait for a future inside a callback?</strong></p>
1
2016-09-14T14:07:28Z
39,495,380
<p>You should run the coroutine separately and send data to it from the callback via Queue.</p> <p>Look on this answer:</p> <p><a href="http://stackoverflow.com/a/29475218/1112457">http://stackoverflow.com/a/29475218/1112457</a></p>
0
2016-09-14T16:24:49Z
[ "python", "python-3.5", "python-asyncio" ]
How to turn a string into an integer?
39,492,631
<p>I am doing a little code for my programming class and we need to make a program that calculates the cost of building a desk. I need help changing my DrawerAmount to an integer, not a string!</p> <pre><code>def Drawers(): print("How many drawers are there?") DrawerAmount = input(int) print("Okay, I accept that the total amount of drawers is " + DrawerAmount + ".") return DrawerAmount def Desk(): print("What type of wood is your desk?") DeskType = input() print("Alright, your desk is made of " + DeskType + ".") return DeskType def Calculation(DrawerAmount, DeskType): if "m" in DeskType: FinalPrice = DrawerAmount * 30 + 180 elif "o" in DeskType: FinalPrice = DrawerAmount * 30 + 140 elif "p" in DeskType: FinalPrice = DrawerAmount * 30 + 100 def Total(): print("The final price is " + FinalPrice ) DrawerAmount = Drawers() DeskType = Desk() Calculation(DrawerAmount, DeskType) FinalPrice = Total() </code></pre>
-6
2016-09-14T14:11:15Z
39,492,748
<blockquote> <p>I need help changing my DrawerAmount to an integer, not a string!</p> </blockquote> <p>Try this:</p> <pre><code>v = int(DrawerAmount) </code></pre>
0
2016-09-14T14:16:30Z
[ "python", "python-3.x" ]
How to turn a string into an integer?
39,492,631
<p>I am doing a little code for my programming class and we need to make a program that calculates the cost of building a desk. I need help changing my DrawerAmount to an integer, not a string!</p> <pre><code>def Drawers(): print("How many drawers are there?") DrawerAmount = input(int) print("Okay, I accept that the total amount of drawers is " + DrawerAmount + ".") return DrawerAmount def Desk(): print("What type of wood is your desk?") DeskType = input() print("Alright, your desk is made of " + DeskType + ".") return DeskType def Calculation(DrawerAmount, DeskType): if "m" in DeskType: FinalPrice = DrawerAmount * 30 + 180 elif "o" in DeskType: FinalPrice = DrawerAmount * 30 + 140 elif "p" in DeskType: FinalPrice = DrawerAmount * 30 + 100 def Total(): print("The final price is " + FinalPrice ) DrawerAmount = Drawers() DeskType = Desk() Calculation(DrawerAmount, DeskType) FinalPrice = Total() </code></pre>
-6
2016-09-14T14:11:15Z
39,493,375
<pre><code>def Drawers(): draweramount = input("How many drawers are there?") print("Okay, I accept that the total amount of drawers is " + draweramount + ".") return int(draweramount) def Desk(): desktype = input("What type of wood is your desk?") print("Alright, your desk is made of " + desktype + ".") return desktype def Calculation(draweramount, desktype): if "m" in desktype: finalprice = draweramount * 30 + 180 elif "o" in desktype: finalprice = draweramount * 30 + 140 elif "p" in desktype: finalprice = draweramount * 30 + 100 return finalprice draweramount = Drawers() desktype = Desk() finalprice=Calculation(draweramount, desktype) print("The final price is ",Calculation(draweramount,desktype) ) </code></pre> <blockquote> <p>You haven't specified the output in case the input is not "P" or "M" or "O". Have your functions in small letters and include <strong>_</strong> whenever its a combination of two or more words eg. function_name() </p> </blockquote>
0
2016-09-14T14:43:39Z
[ "python", "python-3.x" ]
pyodbc module does not work with Python3
39,492,762
<p>I have just transitioned to a Mac (from Win) and I cannot find the proper way to make a script work (it did on Win).</p> <p>I am using <code>import pyodbc</code> on the first line and I get the "No module .." error.</p> <p><em>Later Edit</em>: I changed the first line to <code>import pypyodbc</code></p> <p>If I enter the workspace in the preexistent python version (2.7.10) I can import the module but the script fails with:</p> <pre><code>pyodbc.Error: ('00000', '[00000] [iODBC][Driver Manager]dlopen({SQL Server}, 6): image not found (0) (SQLDriverConnect)') </code></pre> <p>I want to use python3 anyway.</p> <p>If I enter python3, when trying to import the module I get an error.</p> <p>My main problem is that I an not sure how to find where the problem is. Can anyone help with this?</p> <p><strong>Later Edit</strong></p> <p>It worked if I used <code>pypyodbc</code> instead of <code>pyodbc</code>. It imports the module and the only thing left to solve is the decoding part: <code>UnicodeDecodeError: 'utf-32-le' codec can't decode bytes in position 0-1: truncated data</code></p>
0
2016-09-14T14:17:26Z
39,492,910
<p>Module pyodbc is distributed for Python3. You can check that on its <a href="https://pypi.python.org/pypi/pyodbc" rel="nofollow">PyPi page</a>.</p> <p>Did you actually install it for Python3?</p> <p>Python packages are installed separately for each Python instance on your machine. (In fact, you may even use virtual environments to create multiple environments for one Python installation, which can be useful if you need to run several applications with dependencies on different versions of the same library.)</p> <p>To install a package using pip for Python3 on a Mac with also Python2 installed, see <a href="http://stackoverflow.com/questions/20082935/how-to-install-pip-for-python3-on-mac-os-x">this question</a>.</p> <h3>Edit: installation issue</h3> <p>Your problem seems to be related to <a href="https://github.com/mkleehammer/pyodbc/issues/101" rel="nofollow">this issue</a> in pyodbc.</p> <p>The fix is in master but not released on Pypi. You may try to install the latest version from master branch.</p> <p>Should be:</p> <pre><code>pip install git+https://github.com/mkleehammer/pyodbc@master
</code></pre>
0
2016-09-14T14:24:18Z
[ "python", "pyodbc" ]
Odoo delegation inheritance delete corresponding records of inherited class
39,492,821
<p>I've extended a default class using _inherits. I am using Odoo v9.</p> <pre><code>class new_product_uom(models.Model): _inherits = {'product.uom':'uomid', } _name = "newproduct.uom" uomid = fields.Many2one('product.uom',ondelete='cascade', required=True). #declare variables and functions specific to new_product_uom sellable = fields.Boolean('Sell products using this UoM?', default=True) [...] </code></pre> <p>If I delete the corresponding record in product.uom, the new_product_uom is deleted. </p> <p>If I were to delete a new_product_uom record, nothing happens to the corresponding product_uom record. </p> <p>I'd like for BOTH records to be automatically deleted when either is deleted. Is there a way I can do this? Thanks in advance for the help.</p> <p>Clarification:</p> <p>product.uom is a default odoo class. It holds UoM records (inches, centimeters, etc). I use delegation inheritance to extend this class. See: <a href="https://www.odoo.com/documentation/9.0/howtos/backend.html#model-inheritance" rel="nofollow">https://www.odoo.com/documentation/9.0/howtos/backend.html#model-inheritance</a> </p> <p>So, when I add a record for newproduct.uom, a record is automatically created under the model product.uom. I can assign the values of the corresponding record in product.uom by addressing them in newproduct.uom. </p> <p>For my uses, it will be intended as a Parent->child relation, with newproduct.uom being the parent, and the default product.uom being the child. I chose this method of inheritance to allow quicker creation and modification of related values, as well as a separation of functions (rather than overriding the default methods for default operations). </p>
0
2016-09-14T14:19:47Z
39,496,144
<p>In your parent class override <code>unlink</code>. Not sure if I have the correct class name. Delete the child record and then delete the current record.</p> <pre><code>@api.multi
def unlink(self):
    self.uomid.unlink()
    return super(new_product_uom, self).unlink()
</code></pre>
1
2016-09-14T17:11:30Z
[ "python", "inheritance", "openerp", "odoo-9" ]
simplest way to override Django admin inline to request formfield_for_dbfield for each instance
39,492,839
<p>I would like to provide different widgets to input form fields for the same type of model field in a Django admin inline.</p> <p>I have implemented a version of the Entity-Attribute-Value paradigm in my shop application (I tried eav-django and it wasn't flexible enough). In my model it is Product-Parameter-Value (see Edit below). Everything works as I want except that when including an admin inline for the Parameter-Value pair, the same input formfield is used for every value. I understand that this is the default Django admin behaviour because it uses the same formset for each Inline row.</p> <p>I have a callback on my Parameter that I would like to use (get_value_formfield). I currently have:</p> <pre><code>class SpecificationValueAdminInline(admin.TabularInline):
    model = SpecificationValue
    fields = ('parameter', 'value')
    readonly_fields = ('parameter',)
    max_num = 0

    def get_formset(self, request, instance, **kwargs):
        """Take a copy of the instance"""
        self.parent_instance = instance
        return super().get_formset(request, instance, **kwargs)

    def formfield_for_dbfield(self, db_field, **kwargs):
        """Override admin function for requesting the formfield"""
        if self.parent_instance and db_field.name == 'value':
            # Notice first() on the end --&gt;
            sv_instance = SpecificationValue.objects.filter(
                product=self.parent_instance).first()
            formfield = sv_instance.parameter.get_value_formfield()
        else:
            formfield = super().formfield_for_dbfield(db_field, **kwargs)
        return formfield
</code></pre> <p>formfield_for_dbfield is only called once for each admin page.</p> <p>How would I override the default behaviour so that formfield_for_dbfield is called once for each SpecificationValue instance, preferably passing the instance in each time?</p> <p>Edit:</p> <p>Here is the model layout:</p> <pre><code>class Product(Model):
    specification = ManyToManyField('SpecificationParameter',
                                    through='SpecificationValue')

class SpecificationParameter(Model):
    """Other normal model fields here"""
    type = models.PositiveSmallIntegerField(choices=TUPLE)

    def get_value_formfield(self):
        """
        Return the type of form field for parameter instance
        with the correct widget for the value
        """

class SpecificationValue(Model):
    product = ForeignKey(Product)
    parameter = ForeignKey(SpecificationParameter)

    # To store and retrieve all types of value, overrides CharField
    value = CustomValueField()
</code></pre>
0
2016-09-14T14:20:42Z
39,747,701
<p>The way I eventually solved this is using the <code>form =</code> attribute of the Admin Inline. This skips the form generation code of the ModelAdmin:</p> <pre><code>class SpecificationValueForm(ModelForm): class Meta: model = SpecificationValue def __init__(self, instance=None, **kwargs): super().__init__(instance=instance, **kwargs) if instance: self.fields['value'] = instance.parameter.get_value_formfield() else: self.fields['value'].disabled = True class SpecificationValueAdminInline(admin.TabularInline): form = SpecificationValueForm </code></pre> <p>Using standard forms like this, widgets with choices (e.g. <code>RadioSelect</code> and <code>CheckboxSelectMultiple</code>) have list bullets next to them in the admin interface because the <code>&lt;ul&gt;</code> doesn't have the <code>radiolist</code> class. You can almost fix the <code>RadioSelect</code> by using <code>AdminRadioSelect(attrs={'class': 'radiolist'})</code> but there isn't an admin version of the <code>CheckboxSelectMultiple</code> so I preferred consistency. Also there is an <code>aligned</code> class missing from the <code>&lt;fieldset&gt;</code> wrapper element.</p> <p>Looks like I'll have to live with that!</p>
0
2016-09-28T12:34:38Z
[ "python", "django", "django-admin" ]
Python formatting all values in list of lists
39,492,886
<p>I am so sorry that I asking so silly question but could you help me please format list of lists ("%.2f") for example:</p> <pre><code>a=[[1.343465432, 7.423334343], [6.967997797, 4.5522577]] </code></pre> <p>I used:</p> <pre><code> for x in a: a = ["%.2f" % i for i in x] print (a) </code></pre> <p>OUT:</p> <pre><code>['1.34', '7.42'] ['6.97', '4.55'] </code></pre> <p>But I would like to get my list of lists OUT:</p> <pre><code>[['1.34', '7.42'] ,['6.97', '4.55']] </code></pre>
0
2016-09-14T14:23:04Z
39,492,924
<p>In a simpler way, you may achieve the same using list comprehension as:</p> <pre><code>&gt;&gt;&gt; a=[[1.343465432, 7.423334343], [6.967997797, 4.5522577]] &gt;&gt;&gt; [["%.2f" % i for i in l] for l in a] [['1.34', '7.42'], ['6.97', '4.55']] </code></pre> <p>Infact even your code is fine. Instead of print, you need to just <code>append</code> the values you are printing to a list</p>
2
2016-09-14T14:24:49Z
[ "python", "list", "format" ]
Python formatting all values in list of lists
39,492,886
<p>I am so sorry that I asking so silly question but could you help me please format list of lists ("%.2f") for example:</p> <pre><code>a=[[1.343465432, 7.423334343], [6.967997797, 4.5522577]] </code></pre> <p>I used:</p> <pre><code> for x in a: a = ["%.2f" % i for i in x] print (a) </code></pre> <p>OUT:</p> <pre><code>['1.34', '7.42'] ['6.97', '4.55'] </code></pre> <p>But I would like to get my list of lists OUT:</p> <pre><code>[['1.34', '7.42'] ,['6.97', '4.55']] </code></pre>
0
2016-09-14T14:23:04Z
39,494,470
<p>A more general solution requires just a few lines. It recursively formats numbers, lists, lists of lists of any nesting depth.</p> <pre><code>def fmt(data): if isinstance(data, list): return [fmt(x) for x in data] try: return "%.2f" % data except TypeError: return data a=[[1.343465432, 7.423334343], [6.967997797, 4.5522577], [10, [20, 30]], 40] print(fmt(a)) # [['1.34', '7.42'], ['6.97', '4.55'], ['10.00', ['20.00', '30.00']], '40.00'] </code></pre>
1
2016-09-14T15:35:01Z
[ "python", "list", "format" ]
create a main function in python and pass arguments
39,492,907
<p>Iam new with python and I did my first program in Python with jupyter notebook. Here my tutor, said to me that I have to converti it into a script .py with passing arguments. I try to do this byte per byte .</p> <p>Can you just help me how to begin the script and passing New1 and New as arguments.</p> <pre><code>df_equipment = pd.read_csv('C:/Users/Demonstrator/Downloads/New1.csv',delimiter=';', parse_dates=[0], infer_datetime_format = True) df_energy2=pd.read_csv('C:/Users/Demonstrator/Downloads/New2.csv', delimiter=';', parse_dates=[0], infer_datetime_format = True) </code></pre> <p>Thank you</p>
0
2016-09-14T14:24:11Z
39,492,983
<p>Here is an example from <a href="http://www.tutorialspoint.com/python/python_functions.htm" rel="nofollow">Pythion Tutorials</a></p> <pre><code>def printme( str ): "This prints a passed string into this function" print str return; # Now you can call printme function printme("I'm first call to user defined function!") printme("Again second call to the same function") </code></pre>
0
2016-09-14T14:27:34Z
[ "python" ]
create a main function in python and pass arguments
39,492,907
<p>Iam new with python and I did my first program in Python with jupyter notebook. Here my tutor, said to me that I have to converti it into a script .py with passing arguments. I try to do this byte per byte .</p> <p>Can you just help me how to begin the script and passing New1 and New as arguments.</p> <pre><code>df_equipment = pd.read_csv('C:/Users/Demonstrator/Downloads/New1.csv',delimiter=';', parse_dates=[0], infer_datetime_format = True) df_energy2=pd.read_csv('C:/Users/Demonstrator/Downloads/New2.csv', delimiter=';', parse_dates=[0], infer_datetime_format = True) </code></pre> <p>Thank you</p>
0
2016-09-14T14:24:11Z
39,493,078
<p>Try this <a href="http://www.pythonforbeginners.com/system/python-sys-argv" rel="nofollow">tutorial for using sys.argv</a>. When you are ready to be more robust about argument parsing, look into this <a href="https://docs.python.org/3/howto/argparse.html" rel="nofollow">argparse tutorial</a>.</p>
1
2016-09-14T14:31:15Z
[ "python" ]
How to show the trajectory of a projectile using python?
39,492,972
<p>I'm fairly new to stack overflow and programming on the whole, so I apologize in advance if this question has already been asked or is a stupid question on the whole. How could I visually show the trajectory of a projectile after doing the calculations in python? Like a module? PyGame? Are there other languages that would be better for this? Thanks,</p> <p>Nimrodian.</p>
1
2016-09-14T14:26:59Z
39,493,181
<p>You can use whatever graphic module you'd like.</p> <p>Pygame is one, right, but I believe matplotlib is probably simpler.</p> <p>Check this :</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.path import Path import matplotlib.patches as patches verts = [ (0., 0.), # P0 (0.2, 1.), # P1 (1., 0.8), # P2 (0.8, 0.), # P3 ] codes = [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4, ] path = Path(verts, codes) fig = plt.figure() ax = fig.add_subplot(111) patch = patches.PathPatch(path, facecolor='none', lw=2) ax.add_patch(patch) xs, ys = zip(*verts) ax.plot(xs, ys, 'x--', lw=2, color='black', ms=10) ax.text(-0.05, -0.05, 'P0') ax.text(0.15, 1.05, 'P1') ax.text(1.05, 0.85, 'P2') ax.text(0.85, -0.05, 'P3') ax.set_xlim(-0.1, 1.1) ax.set_ylim(-0.1, 1.1) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/1RNVU.png" rel="nofollow"><img src="http://i.stack.imgur.com/1RNVU.png" alt="http://matplotlib.org/_images/path_tutorial-2.png"></a></p> <p>Taken from : <a href="http://matplotlib.org/users/path_tutorial.html" rel="nofollow">http://matplotlib.org/users/path_tutorial.html</a></p>
2
2016-09-14T14:35:30Z
[ "python", "pygame" ]
How to show the trajectory of a projectile using python?
39,492,972
<p>I'm fairly new to stack overflow and programming on the whole, so I apologize in advance if this question has already been asked or is a stupid question on the whole. How could I visually show the trajectory of a projectile after doing the calculations in python? Like a module? PyGame? Are there other languages that would be better for this? Thanks,</p> <p>Nimrodian.</p>
1
2016-09-14T14:26:59Z
39,493,205
<p>I suggest matplotlib. Here's <a href="http://matplotlib.org/users/pyplot_tutorial.html" rel="nofollow">a tutorial</a>.</p>
0
2016-09-14T14:36:16Z
[ "python", "pygame" ]
Spark: fetch data from complex dataframe schema with map
39,493,076
<p>I've got a following structure</p> <pre><code>json.select($"comments").printSchema root |-- comments: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- comment: struct (nullable = true) | | | |-- date: string (nullable = true) | | | |-- score: string (nullable = true) | | | |-- shouts: array (nullable = true) | | | | |-- element: string (containsNull = true) | | | |-- tags: array (nullable = true) | | | | |-- element: string (containsNull = true) | | | |-- text: string (nullable = true) | | | |-- username: string (nullable = true) | | |-- subcomments: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- date: string (nullable = true) | | | | |-- score: string (nullable = true) | | | | |-- shouts: array (nullable = true) | | | | | |-- element: string (containsNull = true) | | | | |-- tags: array (nullable = true) | | | | | |-- element: string (containsNull = true) | | | | |-- text: string (nullable = true) | | | | |-- username: string (nullable = true) </code></pre> <p>I would like to get an array/list [username, score, text] of comment. Normally, in pyspark I would do something like this</p> <pre><code>comments = json .select("comments") .flatMap(lambda element: map(lambda comment: Row(username = comment.username, score = comment.score, text = comment.text), element[0]) .toDF() </code></pre> <p>But, when I try the same approach in scala</p> <pre><code>json.select($"comments").rdd.map{row: Row =&gt; row(0)}.take(3) </code></pre> <p>I have some weird output</p> <pre><code>Array[Any] = Array( WrappedArray([[stirng,string,WrappedArray(),WrappedArray(),,string] ...], ...) </code></pre> <p>Is there any way to perform that task in scala as easy as it's done with python?</p> <p>Also, how to iterate WrappedArray like an Array/List, I'm having an error like this</p> <pre><code>rror: scala.collection.mutable.WrappedArray.type does not take parameters </code></pre>
0
2016-09-14T14:31:13Z
39,494,209
<p>How about using statically typed <code>Dataset</code> instead?</p> <pre><code>case class Comment(
  date: String, score: String, shouts: Seq[String],
  tags: Seq[String], text: String, username: String
)

df
  .select(explode($"comments.comment").alias("comment"))
  .select("comment.*")
  .as[Comment]
  .map(c =&gt; (c.username, c.score, c.text))
</code></pre> <p>which can be further simplified if you don't depend on REPL:</p> <pre><code>df
  .select("comments.comment")
  .as[Seq[Comment]]
  .flatMap(_.map(c =&gt; (c.username, c.score, c.text)))
</code></pre> <p>If you really want to deal with <code>Rows</code> use typed getters:</p> <pre><code>df.rdd.flatMap(
  _.getAs[Seq[Row]]("comments")
    .map(_.getAs[Row]("comment"))
    .map {
      // You could also _.getAs[String]("score") or getString(0)
      case Row(_, score: String, _, _, text: String, username: String) =&gt;
        (username, score, text)
    }
)
</code></pre>
2
2016-09-14T15:23:20Z
[ "python", "scala", "apache-spark", "pyspark" ]
"lambdas can't have assignment statements" - so why is "foo = lambda x: x * 2" legal?
39,493,159
<p>I am reading Functional Python Programming by Steven Lott, a book about using Python 'functionally' instead of in a more object oriented fashion and which focuses on exploratory data analysis for most of its examples.</p> <p>Lott says that Lamda's can't have assignment statements. But on the same page he assigned a lambda function to a variable:</p> <pre><code>&gt;&gt;mersenne = lambda x: 2**x-1 &gt;&gt;mersenne(17) 131071 </code></pre> <p>How is that not an assignment statement? Is there some other sense of 'assignment' that I am missing?</p>
0
2016-09-14T14:34:40Z
39,493,221
<p>You can't have assignments inside the "lambda" function, but the lambda itself can be used in assignments.</p> <p>So you can't say something like <code>lambda x: y = x*2; return y</code>, but you can say <code>foo = lambda x: x*2</code></p>
4
2016-09-14T14:36:49Z
[ "python", "lambda", "variable-assignment" ]
"lambdas can't have assignment statements" - so why is "foo = lambda x: x * 2" legal?
39,493,159
<p>I am reading Functional Python Programming by Steven Lott, a book about using Python 'functionally' instead of in a more object oriented fashion and which focuses on exploratory data analysis for most of its examples.</p> <p>Lott says that Lamda's can't have assignment statements. But on the same page he assigned a lambda function to a variable:</p> <pre><code>&gt;&gt;mersenne = lambda x: 2**x-1 &gt;&gt;mersenne(17) 131071 </code></pre> <p>How is that not an assignment statement? Is there some other sense of 'assignment' that I am missing?</p>
0
2016-09-14T14:34:40Z
39,493,355
<p>It's not not an assignment.</p> <p>A lambda in Python cannot <em>contain</em> an assignment. But this is pretty much the only aspect of Python which enforces a functional paradigm. The rest of the language has some unescapable procedural features; it is hard to imagine a Python program which didn't contain <em>any</em> assignments.</p>
0
2016-09-14T14:42:52Z
[ "python", "lambda", "variable-assignment" ]
How to delete multiple tables in SQLAlchemy
39,493,174
<p>Inspired by this question: <a href="http://stackoverflow.com/questions/35918605/how-to-delete-a-table-in-sqlalchemy">How to delete a table in SQLAlchemy?</a>, I ended up with the question: How to delete multiple tables.</p> <p>Say I have 3 tables as seen below and I want to delete 2 tables (imagine a lot more tables, so no manually table deletion).</p> <h2>Tables</h2> <pre><code>import sqlalchemy as sqla
import sqlalchemy.ext.declarative as sqld
import sqlalchemy.orm as sqlo

sqla_base = sqld.declarative_base()

class name(sqla_base):
    __tablename__ = 'name'
    id = sqla.Column(sqla.Integer, primary_key=True)
    name = sqla.Column(sqla.String)

class job(sqla_base):
    __tablename__ = 'job'
    id = sqla.Column(sqla.Integer, primary_key=True)
    group = sqla.Column(sqla.String)

class company(sqla_base):
    __tablename__ = 'company'
    id = sqla.Column(sqla.Integer, primary_key=True)
    company = sqla.Column(sqla.String)

engine = sqla.create_engine("sqlite:///test.db", echo=True)
sqla_base.metadata.bind = engine

# Tables I want to delete
to_delete = ['job', 'function']

# Get all tables in the database
for table in engine.table_names():
    # Delete only the tables in the delete list
    if table in to_delete:
        sql = sqla.text("DROP TABLE IF EXISTS {}".format(table))
        engine.execute(sql)

# Making new tables now the old ones are deleted
sqla_base.metadata.create_all(engine)
</code></pre> <h2>How in SQLAlchemy?</h2> <h2>EDIT</h2> <p>This works, however I was wondering if I can do the same in SQLAlchemy style instead of executing raw SQL code with <code>sqla.text("DROP TABLE IF EXISTS {}".format(table))</code> (not using <code>sqla_base.metadata.drop_all()</code>, because that drops all tables).</p> <p>I know the function <code>tablename.__table__.drop()</code> or <code>tablename.__table__.drop(engine)</code> exists, but I don't want to type it manually for every table.</p> <p>From the answer given by @daveoncode, the following code does what I want (EDIT 2: added <code>checkfirst=True</code>, in case it didn't exist in db yet and str()):</p> <pre><code>for table in sqla_base.metadata.sorted_tables:
    if str(table) in self.to_delete:
        table.drop(checkfirst=True)
</code></pre> <h2>Question</h2> <p>How do I drop multiple tables in SQLAlchemy style, achieving the same as the raw SQL code above?</p>
0
2016-09-14T14:35:15Z
39,493,548
<p>The error you get is perfectly clear:</p> <pre><code>AttributeError: 'str' object has no attribute '__table__'
</code></pre> <p>You are not iterating on Table objects, but on table names (aka <strong>strings</strong>!), so of course a string does not have an attribute <code>__table__</code>, so your statement:</p> <pre><code>tablename.__table__.drop() or tablename.__table__.drop(engine)
</code></pre> <p>is wrong! It should be:</p> <pre><code>table_instance.__table__.drop() or table_instance.__table__.drop(engine)
</code></pre> <p>You can access table instances from the metadata, take a look here:</p> <p><a href="http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.sorted_tables" rel="nofollow">http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.sorted_tables</a></p> <p><strong>UPDATE:</strong></p> <p>Anyway, <code>drop_all()</code> is the method to use to drop all the tables in a single command: <a href="http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.drop_all" rel="nofollow">http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.drop_all</a></p>
1
2016-09-14T14:51:27Z
[ "python", "sqlalchemy" ]
Regular expressions not splitting in Python
39,493,176
<p>Using python I'm trying to divide a text file in blocks using regular expression. The text file looks like this:</p> <pre><code>Block1 u 0.00 2.00 0.11 2.11 Block2 v 0.00 2.01 0.01 2.11 Block3 a 1.01 2.02 0.01 2.11 </code></pre> <p>my regular expression</p> <pre><code>re.split("(\bBlock1\b\n\s\s[u].*\n.*)", open('Blockfile.txt', "r").read()) </code></pre> <p>However when I run the code it doesn't split. see my regex code here: <a href="https://regex101.com/r/jW7oP4/2" rel="nofollow">https://regex101.com/r/jW7oP4/2</a></p> <p>Thanks!!</p>
1
2016-09-14T14:35:20Z
39,493,416
<p>You don't necessarily need regular expressions and can approach it line by line checking if a line starts with <code>Block</code> collecting the results into a dictionary:</p> <pre><code>from collections import defaultdict data = defaultdict(list) with open("input.txt") as f: for line in f: if line.startswith("Block"): key = line.strip() else: data[key].append(line.strip()) print(dict(data)) </code></pre> <p>Prints:</p> <pre><code>{ 'Block3': ['a 1.01 2.02', '0.01 2.11'], 'Block2': ['v 0.00 2.01', '0.01 2.11'], 'Block1': ['u 0.00 2.00', '0.11 2.11'] } </code></pre>
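The same loop works on an in-memory string (using <code>splitlines()</code> instead of a file handle), which makes it easy to check against the sample data from the question:

```python
from collections import defaultdict

text = """Block1
  u 0.00 2.00
  0.11 2.11
Block2
  v 0.00 2.01
  0.01 2.11"""

data = defaultdict(list)
key = None
for line in text.splitlines():
    if line.startswith("Block"):
        key = line.strip()       # a header line starts a new block
    else:
        data[key].append(line.strip())
```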
0
2016-09-14T14:44:59Z
[ "python", "regex" ]
Regular expressions not splitten Python
39,493,176
<p>Using python I'm trying to divide a text file in blocks using regular expression. The text file looks like this:</p> <pre><code>Block1 u 0.00 2.00 0.11 2.11 Block2 v 0.00 2.01 0.01 2.11 Block3 a 1.01 2.02 0.01 2.11 </code></pre> <p>my regular expression</p> <pre><code>re.split("(\bBlock1\b\n\s\s[u].*\n.*)", open('Blockfile.txt', "r").read()) </code></pre> <p>However when I run the code it doesn't split. see my regex code here: <a href="https://regex101.com/r/jW7oP4/2" rel="nofollow">https://regex101.com/r/jW7oP4/2</a></p> <p>Thanks!!</p>
1
2016-09-14T14:35:20Z
39,493,437
<p>Always, <em>ALWAYS</em> use raw strings when working with regular expressions in Python. In a regular string <code>\b</code> means a backspace character: it gets evaluated during string parsing and your regex gets damaged. Just add an 'r' in front of the string. This will do the trick:</p> <pre><code>re.split(r"(\bBlock1\b\n\s\s[u].*\n.*)", open('Blockfile.txt', "r").read()) </code></pre>
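The difference is easy to see directly: in a regular string <code>\b</code> is a single backspace character, while in a raw string it stays as the two characters (backslash plus 'b') that the regex engine needs:

```python
plain = "\bBlock1\b"   # \b evaluated as a backspace (one character each)
raw = r"\bBlock1\b"    # backslash + 'b' kept literally for the regex engine

# the raw string is two characters longer, one per \b
length_difference = len(raw) - len(plain)
```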
0
2016-09-14T14:45:44Z
[ "python", "regex" ]
Regular expressions not splitten Python
39,493,176
<p>Using python I'm trying to divide a text file in blocks using regular expression. The text file looks like this:</p> <pre><code>Block1 u 0.00 2.00 0.11 2.11 Block2 v 0.00 2.01 0.01 2.11 Block3 a 1.01 2.02 0.01 2.11 </code></pre> <p>my regular expression</p> <pre><code>re.split("(\bBlock1\b\n\s\s[u].*\n.*)", open('Blockfile.txt', "r").read()) </code></pre> <p>However when I run the code it doesn't split. see my regex code here: <a href="https://regex101.com/r/jW7oP4/2" rel="nofollow">https://regex101.com/r/jW7oP4/2</a></p> <p>Thanks!!</p>
1
2016-09-14T14:35:20Z
39,493,442
<p><code>split</code> only splits on the exact argument it is given, for example:</p> <p>Splitting <em>"this string"</em> with <code>.split(" ")</code> results in:</p> <pre><code>["this","string"] </code></pre> <p>But splitting it with <code>.split("s ")</code> results in:</p> <pre><code>["thi", "string"] </code></pre> <p>Rather than:</p> <pre><code>["thi", "tring"] </code></pre> <p>Which is your problem: <code>re.split</code> will only split where the whole pattern <code>(\bBlock1\b\n\s\s[u].*\n.*)</code> matches in one go!</p> <p>I suggest splitting in several simpler stages, or pre-processing the text first (e.g. with <code>translate</code>). </p>
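The delimiter behaviour described above can be checked directly; <code>str.split</code> consumes exactly the substring it is given, including any trailing space in the delimiter:

```python
words = "this string".split(" ")
partial = "this string".split("s ")   # the whole "s " delimiter is removed
```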
0
2016-09-14T14:45:56Z
[ "python", "regex" ]
Two lists of JSON values -- do operation on some key
39,493,182
<p>I have 2 lists of dictionaries which look like this:</p> <pre><code>x = [{'id':1,'num':5,'den':8}, {'id':2,'num':3,'den':5}, {'id':4,'num':11,'den':18}, {'id':3,'num':2,'den':81}, {'id':7,'num':10,'den':33}] y = [{'id':1,'num':4,'den':9}, {'id':6,'num':5,'den':11}, {'id':3,'num':13,'den':83}, {'id':2,'num':15,'den':28}, {'id':4,'num':1,'den':2}] </code></pre> <p>Now, as it is clear, the keys in each (item) dict of both lists are same. For those elements, which have same <code>id</code>, I want a new list with corresponding <code>num = num(x) + num(y)</code> and <code>den = den(x) + den(y)</code>. So, in this case, output will be:</p> <pre><code>z = [{'id':1,'num':9,'den':17}, {'id':2,'num':18,'den':33}, {'id':4,'num':12,'den':20}, {'id':3,'num':15,'den':164}] </code></pre> <p>How can this be achieved in the most "pythonic" way. Should I just brute force? </p>
-1
2016-09-14T14:35:30Z
39,493,327
<p>You may achieve it using a <code>list comprehension</code>:</p> <pre><code>&gt;&gt;&gt; [{'id': i['id'], 'num': i['num'] + j['num'], 'den': i['den'] + j['den']} for i in x for j in y if i['id'] == j['id']] [{'num': 9, 'id': 1, 'den': 17}, {'num': 18, 'id': 2, 'den': 33}, {'num': 15, 'id': 3, 'den': 164}, {'num': 12, 'id': 4, 'den': 20}] </code></pre>
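The nested comprehension above rescans <code>y</code> for every element of <code>x</code> (O(n·m)); building a lookup dict first gives the same result in linear time. A sketch using the question's sample data:

```python
x = [{'id': 1, 'num': 5, 'den': 8}, {'id': 2, 'num': 3, 'den': 5},
     {'id': 4, 'num': 11, 'den': 18}, {'id': 3, 'num': 2, 'den': 81},
     {'id': 7, 'num': 10, 'den': 33}]
y = [{'id': 1, 'num': 4, 'den': 9}, {'id': 6, 'num': 5, 'den': 11},
     {'id': 3, 'num': 13, 'den': 83}, {'id': 2, 'num': 15, 'den': 28},
     {'id': 4, 'num': 1, 'den': 2}]

y_by_id = {d['id']: d for d in y}          # one pass over y
z = [{'id': d['id'],
      'num': d['num'] + y_by_id[d['id']]['num'],
      'den': d['den'] + y_by_id[d['id']]['den']}
     for d in x if d['id'] in y_by_id]     # skip ids missing from y
```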
1
2016-09-14T14:41:28Z
[ "python", "json", "list", "dictionary" ]
Two lists of JSON values -- do operation on some key
39,493,182
<p>I have 2 lists of dictionaries which look like this:</p> <pre><code>x = [{'id':1,'num':5,'den':8}, {'id':2,'num':3,'den':5}, {'id':4,'num':11,'den':18}, {'id':3,'num':2,'den':81}, {'id':7,'num':10,'den':33}] y = [{'id':1,'num':4,'den':9}, {'id':6,'num':5,'den':11}, {'id':3,'num':13,'den':83}, {'id':2,'num':15,'den':28}, {'id':4,'num':1,'den':2}] </code></pre> <p>Now, as it is clear, the keys in each (item) dict of both lists are same. For those elements, which have same <code>id</code>, I want a new list with corresponding <code>num = num(x) + num(y)</code> and <code>den = den(x) + den(y)</code>. So, in this case, output will be:</p> <pre><code>z = [{'id':1,'num':9,'den':17}, {'id':2,'num':18,'den':33}, {'id':4,'num':12,'den':20}, {'id':3,'num':15,'den':164}] </code></pre> <p>How can this be achieved in the most "pythonic" way. Should I just brute force? </p>
-1
2016-09-14T14:35:30Z
39,493,668
<p>A somewhat more readable solution. Note that iterating over <code>x + y</code> also picks up ids that appear in only one of the lists (6 and 7 here), so they have to be filtered out at the end to match the desired output:</p> <pre><code>import collections result = collections.defaultdict(lambda: {'num': 0, 'den': 0}) for data in x + y: match = result[data['id']] match['num'] += data['num'] match['den'] += data['den'] match['id'] = data['id'] # keep only the ids present in both lists common = {d['id'] for d in x} &amp; {d['id'] for d in y} z = [v for k, v in result.items() if k in common] </code></pre>
0
2016-09-14T14:56:50Z
[ "python", "json", "list", "dictionary" ]
How to use tf.nn.max_pool_with_argmax correctly
39,493,229
<p>currently I play a little bit around with tensorflow to create a better understanding of machine learning an tensorflow itself. Therefore I want to visualize the methods (as much as possible) of tensorflow. To visualize max_pool I loaded an image and perform the method. After that I displayed both: input and output image.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import cv2 import numpy as np import matplotlib.pyplot as plt image = cv2.imread('lena.png') image_tensor = tf.expand_dims(tf.Variable(image, dtype=tf.float32), 0) #output, argmax = tf.nn.max_pool_with_argmax(image_tensor, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') output = tf.nn.max_pool(image_tensor, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') init = tf.initialize_all_variables() session = tf.Session() session.run(init) output = session.run(output) session.close() image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) plt.figure() plt.imshow(image) plt.show() output = cv2.cvtColor(output[0], cv2.COLOR_RGB2BGR) plt.figure() plt.imshow(255-output) plt.show() </code></pre> <p>Everything works fine and I get this output (as expected)</p> <p><a href="http://i.stack.imgur.com/FFoKmm.png" rel="nofollow"><img src="http://i.stack.imgur.com/FFoKmm.png" alt="image (input)"></a> <a href="http://i.stack.imgur.com/YZhogm.png" rel="nofollow"><img src="http://i.stack.imgur.com/YZhogm.png" alt="enter image description here"></a></p> <p>Now I wanted to test the method <code>tf.nn.max_pool_with_argmax</code> to get the argmax of the pooling operations. 
But if I uncomment the line </p> <p><code>output, argmax = tf.nn.max_pool_with_argmax(image_tensor, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') </code></p> <p>Python crashes with </p> <blockquote> <p>tensorflow.python.framework.errors.InvalidArgumentError: No OpKernel was registered to support Op 'MaxPoolWithArgmax' with these attrs [[Node: pool1 = MaxPoolWithArgmaxT=DT_FLOAT, Targmax=DT_INT64, ksize=[1, 2, 2, 1], padding="SAME", strides=[1, 2, 2, 1]]]</p> </blockquote> <p>I don't have an idea which argument is wrong because every argument should be correct (<a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#max_pool_with_argmax" rel="nofollow">tensorflow docs</a>) ...</p> <p>Does anyone know what went wrong?</p>
0
2016-09-14T14:37:22Z
39,495,311
<p>From a look at <a href="https://github.com/tensorflow/tensorflow/blob/bc64f05d4090262025a95438b42a54bfdc5bcc80/tensorflow/core/kernels/maxpooling_op.cc#L672" rel="nofollow">the implementation</a>, it appears that the <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#max_pool_with_argmax" rel="nofollow"><code>tf.nn.max_pool_with_argmax()</code></a> is only implemented for GPU. If you are running the CPU-only build of TensorFlow, then you would get an error of the form <code>"No OpKernel was registered to support Op 'MaxPoolWithArgmax' with these attrs ..."</code>.</p> <p>(This seems like a place where the documentation and the error message could be improved.)</p>
1
2016-09-14T16:20:08Z
[ "python", "tensorflow" ]
Extrapolate 2d numpy array in one dimension
39,493,231
<p>I have numpy.array data set from a simulation, but I'm missing the point at the edge (x=0.1), how can I interpolate/extrapolate the data in z to the edge? I have:</p> <pre><code>x = [ 0. 0.00667 0.02692 0.05385 0.08077] y = [ 0. 10. 20. 30. 40. 50.] # 0. 0.00667 0.02692 0.05385 0.08077 z = [[ 25. 25. 25. 25. 25. ] # 0. [ 25.301 25.368 25.617 26.089 26.787] # 10. [ 25.955 26.094 26.601 27.531 28.861] # 20. [ 26.915 27.126 27.887 29.241 31.113] # 30. [ 28.106 28.386 29.378 31.097 33.402] # 40. [ 29.443 29.784 30.973 32.982 35.603]] # 50. </code></pre> <p>I want to add a new column in z corresponding to x = 0.1 so that my new x will be</p> <pre><code>x_new = [ 0. 0.00667 0.02692 0.05385 0.08077 0.1] # 0. 0.00667 0.02692 0.05385 0.08077 0.01 z = [[ 25. 25. 25. 25. 25. ? ] # 0. [ 25.301 25.368 25.617 26.089 26.787 ? ] # 10. [ 25.955 26.094 26.601 27.531 28.861 ? ] # 20. [ 26.915 27.126 27.887 29.241 31.113 ? ] # 30. [ 28.106 28.386 29.378 31.097 33.402 ? ] # 40. [ 29.443 29.784 30.973 32.982 35.603 ? ]] # 50. </code></pre> <p>Where all '?' replaced with interpolated/extrapolated data. Thanks for any help!</p>
0
2016-09-14T14:37:29Z
39,493,832
<p>Have you had a look at scipy.interpolate.interp2d (which uses splines)?</p> <pre><code>import numpy as np from scipy.interpolate import interp2d fspline = interp2d(x,y,z) # maybe need to switch x and y around znew = fspline([0.1], y) z = np.c_[z, znew] # to join arrays </code></pre> <p><strong>EDIT</strong>:</p> <p>The method that @dnalow and I are imagining is along the following lines:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # make some test data def func(x, y): return np.sin(np.pi*x) + np.sin(np.pi*y) xx, yy = np.mgrid[0:2:20j, 0:2:20j] zz = func(xx[:], yy[:]).reshape(xx.shape) fig, (ax1, ax2, ax3, ax4) = plt.subplots(1,4, figsize=(13, 3)) ax1.imshow(zz, interpolation='nearest') ax1.set_title('Original') # remove last column zz[:,-1] = np.nan ax2.imshow(zz, interpolation='nearest') ax2.set_title('Missing data') # compute missing column using simplest imaginable model: first order Taylor gxx, gyy = np.gradient(zz[:, :-1]) zz[:, -1] = zz[:, -2] + gxx[:, -1] + gyy[:,-1] ax3.imshow(zz, interpolation='nearest') ax3.set_title('1st order Taylor approx') # add curvature to estimate ggxx, _ = np.gradient(gxx) _, ggyy = np.gradient(gyy) zz[:, -1] = zz[:, -2] + gxx[:, -1] + gyy[:,-1] + ggxx[:,-1] + ggyy[:, -1] ax4.imshow(zz, interpolation='nearest') ax4.set_title('2nd order Taylor approx') fig.tight_layout() fig.savefig('extrapolate_2d.png') plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/9GyJu.png" rel="nofollow"><img src="http://i.stack.imgur.com/9GyJu.png" alt="enter image description here"></a></p> <p>You could improve the estimate by<br> (a) adding higher order derivatives (aka Taylor expansion), or<br> (b) computing the gradients in more directions than just x and y (and then weighting the gradients accordingly).</p> <p>Also, you will get better gradients if you pre-smooth the image (and now we have a complete Sobel filter...).</p>
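If only the single missing column at x = 0.1 is needed, plain linear extrapolation from the last two columns is enough; this sketch works on (a subset of) the question's data without SciPy:

```python
x = [0.0, 0.00667, 0.02692, 0.05385, 0.08077]
z = [[25.0, 25.0, 25.0, 25.0, 25.0],
     [25.301, 25.368, 25.617, 26.089, 26.787],
     [25.955, 26.094, 26.601, 27.531, 28.861]]

def extrapolate_column(x, z, x_new):
    """Linearly extend each row of z to x_new using the last two samples."""
    factor = (x_new - x[-1]) / (x[-1] - x[-2])
    return [row[-1] + factor * (row[-1] - row[-2]) for row in z]

new_col = extrapolate_column(x, z, 0.1)
z_ext = [row + [v] for row, v in zip(z, new_col)]
```

A constant row (the first one here) stays constant, and rows that were increasing keep increasing, which is a quick sanity check on the result.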
1
2016-09-14T15:03:32Z
[ "python", "arrays", "numpy", "interpolation", "extrapolation" ]
Checking "encryption" and "decryption" of Caesar Cipher using strings in python
39,493,347
<p>I was supposed to create strings in my python function to show that the encryption and decryption of my code was working properly. Below is my code. </p> <pre><code>def encrypt1(): plain1 = input('Enter plain text message: ') cipher = '' for each in plain1: c = (ord(each)+3) % 126 if c &lt; 32: c+=31 cipher += chr(c) print ("Your encrypted message is:" + cipher) encrypt1() </code></pre> <p>1.) I get the "Inconsistent use of tabs and spaces" error</p> <p>2.) How should I input set, constant strings to get the wanted input check if my code works (for example, type in meet and get the correct input, etc)</p>
-2
2016-09-14T14:42:27Z
39,493,715
<p>Your code is mostly fine but there are a few issues that I've pointed out:</p> <ol> <li>Your indenting is off. The line <code>cipher += chr(c)</code> should be indented to match the things in the for loop</li> <li>The function encrypt1() shouldn't take a parameter; you are setting plain1 in the method itself so you can declare it there</li> <li>If you want to deal with just lowercase letters, you should be doing % 123 (one more than the value of 'z', which is 122) and the if clause should check if <code>c &lt; 97</code>. If you want to wrap around printable ascii what you're doing is fine.</li> </ol> <p>This gives:</p> <pre><code> def encrypt1(): plain1 = input('Enter plain text message: ') cipher = '' for each in plain1: c = (ord(each)+3) % 126 if c &lt; 32: c+=31 cipher += chr(c) print ("Your encrypted message is:" + cipher) </code></pre> <p>Because you want to be able to test this on multiple strings, you would want to pass plain1 as a parameter to the function. But you also want the user to be able to input plain1 if a parameter isn't passed in to the function. For that I would suggest a default parameter. Look at the commented code below:</p> <pre><code>def encrypt1(plain1=""): # if no argument is passed in, plain1 will be "" if not plain1: # check if plain1 == "" and if so, read input from user plain1 = input('Enter plain text message: ') cipher = '' for each in plain1: c = (ord(each)+3) % 126 if c &lt; 32: c+=31 cipher += chr(c) return cipher # rather than printing the string, return it instead </code></pre> <p>So to test this you could do:</p> <pre><code>test_strings = ['hello world', 'harambe', 'Name', 'InputString'] for phrase in test_strings: print ("Your encrypted message is:" + encrypt1(phrase)) </code></pre> <p>Which gives the output:</p> <pre><code>Your encrypted message is:khoor#zruog Your encrypted message is:kdudpeh Your encrypted message is:Qdph Your encrypted message is:LqsxwVwulqj </code></pre>
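A matching decrypt routine inverts the same shift. This is a sketch, restated as parameter-taking functions; the wrap-around constants in <code>decrypt1</code> are derived from the <code>c &lt; 32: c += 31</code> branch of the encoder (only inputs 123–125 ever take that branch, and they come out as characters 31–33):

```python
def encrypt1(plain1):
    cipher = ''
    for each in plain1:
        c = (ord(each) + 3) % 126
        if c < 32:
            c += 31          # wrapped characters land in 31..33
        cipher += chr(c)
    return cipher

def decrypt1(cipher):
    plain = ''
    for each in cipher:
        c = ord(each)
        if c < 34:           # 31..33 can only come from the wrap branch
            c = c - 31 + 126
        plain += chr(c - 3)
    return plain
```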
1
2016-09-14T14:58:50Z
[ "python", "encryption", "caesar-cipher" ]
Convert a str typed series to numeric where I can
39,493,348
<p>Consider the str type series <code>s</code></p> <pre><code>s = pd.Series(['a', '1']) </code></pre> <hr> <pre><code>pd.to_numeric(s, 'ignore') 0 a 1 1 dtype: object </code></pre> <hr> <pre><code>pd.to_numeric(s, 'ignore').apply(type) 0 &lt;type 'str'&gt; 1 &lt;type 'str'&gt; dtype: object </code></pre> <p>Clearly, both types are still string. It seems that the <code>'ignore'</code> option ignores the entires series conversion. How do I get it to do what it can and ignore the rest.</p> <p>I want the series types to be</p> <pre><code>pd.to_numeric(s, 'ignore').apply(type) 0 &lt;type 'str'&gt; 1 &lt;type 'int'&gt; dtype: object </code></pre> <hr> <p>EDIT: I came up with this after I posted the question and after @ayhan provided an answer.</p> <p><strong><em>My solution</em></strong><br> Not vectorized, but gives me exactly what I want</p> <pre><code>s.apply(pd.to_numeric, errors='ignore') </code></pre>
1
2016-09-14T14:42:29Z
39,493,426
<p>This is what I am using:</p> <pre><code>pd.to_numeric(s, errors='coerce').fillna(s) Out: 0 a 1 1 dtype: object </code></pre> <hr> <pre><code>pd.to_numeric(s, errors='coerce').fillna(s).apply(type) Out: 0 &lt;class 'str'&gt; 1 &lt;class 'float'&gt; dtype: object </code></pre>
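The element-wise idea behind per-value conversion can be sketched without pandas at all: attempt a numeric conversion for each element and keep the original string when it fails. This is not the pandas implementation, just the same fallback logic in plain Python:

```python
def to_numeric_or_keep(value):
    """Convert to int or float when possible, otherwise return the input."""
    for converter in (int, float):
        try:
            return converter(value)
        except ValueError:
            pass
    return value

converted = [to_numeric_or_keep(v) for v in ['a', '1', '2.5']]
```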
3
2016-09-14T14:45:24Z
[ "python", "pandas" ]
Python RSA Decryption is throwing TypeErrors
39,493,368
<p>My RSA decrypt function:</p> <pre><code>def decrypt(ctext,private_key): key,n = private_key text = [chr(pow(char,key)%n) for char in ctext] return "".join(text) </code></pre> <p>is sometimes throwing a <code>TypeError</code> which tells that <code>pow(char,key)%n</code> provides a <code>float</code>. Why is that? I cant explain it myself and it would be interesting to know why.</p> <p>For example it happens when:</p> <blockquote> <p>crypt-Text = [513300, 369218, 473524, 473524, 500307, 509880, 264366, 500307, 337068, 473524, 264834]<br> key = -159317<br> n = 540767</p> </blockquote>
0
2016-09-14T14:43:21Z
39,521,781
<p>It's hard to figure out much from a tiny fragment of code. You are getting float results because your variable <code>key</code> is a negative number. It is clear from the description of <a href="https://docs.python.org/3/library/functions.html#pow" rel="nofollow">pow</a> that you will get float results, which is not what you want. You really should be using the 3-argument form of pow, and your exponent should always be positive. By using standard <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)#Key_generation" rel="nofollow">RSA math</a> you can always make your exponent positive by adding an appropriate multiple of &Phi;(n). In your particular case, &Phi;(n) = (631 - 1) * (857 - 1) = 539280, so key = 379963 mod &Phi;(n). </p> <p>You should correct the code that gives you a negative exponent.</p>
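Both points are easy to check: a negative exponent makes the two-argument <code>pow</code> return a float, while the three-argument form with a positive decryption exponent stays entirely in integers. The sketch below uses the small textbook RSA parameters n = 3233, e = 17, d = 2753 (not the question's key pair) purely for illustration:

```python
# two-argument pow with a negative exponent produces a float
float_result = pow(2, -1)

# three-argument pow does modular exponentiation entirely in integers
n, e, d = 3233, 17, 2753     # toy RSA key: n = 61 * 53, d = e^-1 mod phi(n)
message = 65
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
```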
0
2016-09-15T23:28:33Z
[ "python", "python-3.x", "rsa", "typeerror", "precision" ]
Regular Expression to get only words from file starting with letter and removing words with only numbers and punctuation in python
39,493,399
<p>I have a text file which i am reading in python through nltk functions. I need to get only only words from file starting with letter and removing words with only numbers and punctuation. For ex :-</p> <pre><code>['Osteama pranay@123 123 !'] </code></pre> <p>so the desired output is</p> <pre><code>Osteama pranay@123 </code></pre> <p>Please suggest a regular expression for this</p>
-2
2016-09-14T14:44:31Z
39,494,105
<pre><code>import re ' '.join(re.findall(r'\b[a-z][^\s]*\b', 'Osteama pranay@123 123 !', re.I)) </code></pre> <p>The same regexp used with <code>nltk.tokenize.RegexpTokenizer</code>:</p> <pre><code>from nltk.tokenize import RegexpTokenizer tokenizer = RegexpTokenizer(r'[a-zA-Z][^\s]*\b') tokenizer.tokenize('Osteama pranay@123 123 !') </code></pre>
-1
2016-09-14T15:17:58Z
[ "python", "regex", "nltk", "regex-negation" ]
Regular Expression to get only words from file starting with letter and removing words with only numbers and punctuation in python
39,493,399
<p>I have a text file which i am reading in python through nltk functions. I need to get only only words from file starting with letter and removing words with only numbers and punctuation. For ex :-</p> <pre><code>['Osteama pranay@123 123 !'] </code></pre> <p>so the desired output is</p> <pre><code>Osteama pranay@123 </code></pre> <p>Please suggest a regular expression for this</p>
-2
2016-09-14T14:44:31Z
39,495,701
<p>To use this you need to <code>import nltk</code> first. A cleaned-up version of the snippet (the <code>from __future__</code> import must come first, and <code>openbook</code> is a plain function, so it takes no <code>self</code> and is called with a filename):</p> <pre><code>from __future__ import division import sys import nltk def openbook(book): with open(book) as f: raw = f.read() tokens = nltk.wordpunct_tokenize(raw) text = nltk.Text(tokens) words = [w.lower() for w in text] vocab = sorted(set(words)) return vocab if __name__ == "__main__": print(openbook(sys.argv[1])) </code></pre> <p>It might help you.</p>
0
2016-09-14T16:42:45Z
[ "python", "regex", "nltk", "regex-negation" ]
length python multidimensional list
39,493,455
<p>I have a strange error in the following code:</p> <pre><code>s = MyClass() f = open(filename, 'r') nbline = f.readline() for line in iter(f): linesplit = line.split() s.add(linesplit) f.close() print(len(s.l)) print(nbline) </code></pre> <p>the two print don't give me the same result. Why?</p> <p>the class definition is:</p> <pre><code>class MyClass: l = [] def add(self, v): self.l.append(v) </code></pre> <p>and the file format is:</p> <pre><code>161 3277 4704 52456568 0 1340 380 425 3277 4704 52456578 1 1330 380 422 3118 4719 52456588 1 1340 390 415 3109 4732 52456598 1 1340 400 420 3182 4743 52456608 1 1350 410 427 3309 4789 52456618 1 1360 420 446 ... </code></pre> <p>for this file the print are: 51020 161</p> <p>and the file contain 162 line (the number of line + the line)</p> <p>If I call the function one it's ok, the error appear when I call the function twice or more (it's look like previous file are read!!! :/)</p>
0
2016-09-14T14:46:48Z
39,493,508
<p>Try something along the lines of:</p> <p><code>len(lst[first_dimension]) + len(lst[second_dimension])</code> etc...</p> <p>A bit clunky, but I think it will do what you want.</p>
0
2016-09-14T14:49:30Z
[ "python", "list", "variable-length" ]
length python multidimensional list
39,493,455
<p>I have a strange error in the following code:</p> <pre><code>s = MyClass() f = open(filename, 'r') nbline = f.readline() for line in iter(f): linesplit = line.split() s.add(linesplit) f.close() print(len(s.l)) print(nbline) </code></pre> <p>the two print don't give me the same result. Why?</p> <p>the class definition is:</p> <pre><code>class MyClass: l = [] def add(self, v): self.l.append(v) </code></pre> <p>and the file format is:</p> <pre><code>161 3277 4704 52456568 0 1340 380 425 3277 4704 52456578 1 1330 380 422 3118 4719 52456588 1 1340 390 415 3109 4732 52456598 1 1340 400 420 3182 4743 52456608 1 1350 410 427 3309 4789 52456618 1 1360 420 446 ... </code></pre> <p>for this file the print are: 51020 161</p> <p>and the file contain 162 line (the number of line + the line)</p> <p>If I call the function one it's ok, the error appear when I call the function twice or more (it's look like previous file are read!!! :/)</p>
0
2016-09-14T14:46:48Z
39,494,228
<p>The file probably has a trailing newline or two. Check whether <code>len(s.l) == int(nbline) + 1</code> (note that <code>nbline</code> is read as a string), or just print <code>s.l[-3:]</code> to check.</p>
0
2016-09-14T15:24:03Z
[ "python", "list", "variable-length" ]
length python multidimensional list
39,493,455
<p>I have a strange error in the following code:</p> <pre><code>s = MyClass() f = open(filename, 'r') nbline = f.readline() for line in iter(f): linesplit = line.split() s.add(linesplit) f.close() print(len(s.l)) print(nbline) </code></pre> <p>the two print don't give me the same result. Why?</p> <p>the class definition is:</p> <pre><code>class MyClass: l = [] def add(self, v): self.l.append(v) </code></pre> <p>and the file format is:</p> <pre><code>161 3277 4704 52456568 0 1340 380 425 3277 4704 52456578 1 1330 380 422 3118 4719 52456588 1 1340 390 415 3109 4732 52456598 1 1340 400 420 3182 4743 52456608 1 1350 410 427 3309 4789 52456618 1 1360 420 446 ... </code></pre> <p>for this file the print are: 51020 161</p> <p>and the file contain 162 line (the number of line + the line)</p> <p>If I call the function one it's ok, the error appear when I call the function twice or more (it's look like previous file are read!!! :/)</p>
0
2016-09-14T14:46:48Z
39,494,488
<p>First, thanks for the edit.</p> <p>Here is better-looking, more Pythonic code:</p> <pre><code>s = MyClass() with open(filename, 'r') as f: nbline = f.readline() for line in f: linesplit = line.split() s.add(linesplit) </code></pre> <p>Then make sure you are setting <code>self.l = []</code> inside an <code>__init__</code> method of <code>MyClass</code>, rather than as a class-level attribute.</p>
0
2016-09-14T15:36:00Z
[ "python", "list", "variable-length" ]
length python multidimensional list
39,493,455
<p>I have a strange error in the following code:</p> <pre><code>s = MyClass() f = open(filename, 'r') nbline = f.readline() for line in iter(f): linesplit = line.split() s.add(linesplit) f.close() print(len(s.l)) print(nbline) </code></pre> <p>the two print don't give me the same result. Why?</p> <p>the class definition is:</p> <pre><code>class MyClass: l = [] def add(self, v): self.l.append(v) </code></pre> <p>and the file format is:</p> <pre><code>161 3277 4704 52456568 0 1340 380 425 3277 4704 52456578 1 1330 380 422 3118 4719 52456588 1 1340 390 415 3109 4732 52456598 1 1340 400 420 3182 4743 52456608 1 1350 410 427 3309 4789 52456618 1 1360 420 446 ... </code></pre> <p>for this file the print are: 51020 161</p> <p>and the file contain 162 line (the number of line + the line)</p> <p>If I call the function one it's ok, the error appear when I call the function twice or more (it's look like previous file are read!!! :/)</p>
0
2016-09-14T14:46:48Z
39,494,910
<p>OK, the problem is that <code>s.l</code> is a class variable shared by all instances!</p>
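The sharing is easy to demonstrate: a list defined in the class body is one object attached to the class itself, while a list created in <code>__init__</code> is fresh per instance:

```python
class Shared:
    l = []                    # one list, attached to the class itself
    def add(self, v):
        self.l.append(v)

class PerInstance:
    def __init__(self):
        self.l = []           # a fresh list for every instance
    def add(self, v):
        self.l.append(v)

a, b = Shared(), Shared()
a.add(1)                      # b will see this append too

c, d = PerInstance(), PerInstance()
c.add(1)                      # d is unaffected
```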
0
2016-09-14T15:58:24Z
[ "python", "list", "variable-length" ]
Python - psql prompt for password even if it got it
39,493,486
<p>I'm working on a small code, which have to: </p> <ul> <li>Connect with SFTP</li> <li>execute command on PostgreSQL (password protected)</li> </ul> <p>I am willing to use password as a plain text.</p> <p>For the moment my code is:</p> <pre><code>import pysftp command = "... some SQL command" sftp= pysftp.Connection('server_name', username='username', password='password') sftp.execute("export PGPASSWORD='password_to_psql'") sftp.execute("psql -h 127.0.0.1 -d {} -U {} -W -c "{}"").format(database_name, user_name, command) sftp.close() </code></pre> <p>I thought it was good idea, but when i type proper command in terminal, shell prompts for password (but is not required, when i 'enter', everything is executed.</p> <p>Does anyone knows how can i 'disable' prompts, which are not required?</p>
0
2016-09-14T14:48:25Z
39,493,615
<p><code>psql</code> has a <code>--no-password</code> option, which can also be specified as <code>-w</code>.</p> <p>It looks like you might have misspelled <code>-w</code> as <code>-W</code>, which has the opposite effect.</p>
1
2016-09-14T14:54:36Z
[ "python", "postgresql", "pysftp" ]
Program design of a timetable creator
39,493,520
<p>For fun, I want to write a timetable creator in python for schools. I.e. a program where schools can input their rooms, teachers, classes and subjects and some preferences and which will output a timetable for each class/teacher/room. I don't have a problem with the logic behind this (because that is the part I am most interested in), but I do have a problem with the design (due to inexperience in writing something big from scratch). </p> <p>So assume I have a list of rooms (101, 102, ...), a list of teachers (Mr A, Mrs B, ...), a list of subjects (math, english, ...) and a list of classes (5, 6, ...).</p> <p>Now some rooms are better suited for different subjects (like 101 is good for math &amp; english, but geography must be in 102, if possible). Of course, every teacher has a certain set of subjects he teaches.</p> <p>Also, the classes are parted in different groups. I.e. class year 5 is parted in 5-a and 5-b for all subjects except sports (where it may be 5-groupX and 5-groupY) and another subject, where it may be 5-group1, 5-group2 and 5-group3. </p> <p>It would be nice if someone could give me some advice on how to efficiently save this data / design my classes, so I can write some nice code. </p> <p>My first guess would be something like (pseude code):</p> <pre><code>class Room: string name # e.g. r102.name = "102" int id # that should be unique? class Subject: string name int id map RoomPref # like geography.Roompref[r102.id] = 1.0 # or math.Roompref[r101.id] = 0.75 class Teacher: list Subjects # like MrsB.Subjects = {geography.id, math.id} </code></pre> <p>etc, etc, but I am not sure if this approach is good and leads to nice code. Especially all the different interconnections pose a problem to me. (Is assigning different IDs a good solution?)</p> <p>Advice and/or reading material is welcome.</p>
0
2016-09-14T14:50:05Z
39,494,086
<p>One difficult part of this problem is that there are numerous relations that make up a schedule. I would think very carefully about how a teacher T teaches a class and she teaches it in room X at 1:30 pm.</p> <p>For example, in your program, you may want to know when room X is available. To do this, you will want to check your data structures and find room X, then search the list of times that room X is either free or occupied. HOWEVER, in another case, you may want to know whether or not Teacher J teaches math. So you would do a similar lookup and search all the classes that Teacher J teaches.</p> <p>So my advice is this: you will want to avoid what may initially seem like the easiest option to create a data structure where you create an organization of time slots and then assign teachers and classes to those times. You would likely make quick progress at first, but eventually run into trouble when you realize how difficult it is to use a structure of Class objects to represent a set of relationships.</p> <p>Instead, try starting with a relational database like mysql. Use references to other tables in order to link data from one table to another instead of writing all the code yourself. There's no need to reinvent the wheel here and try to write a complex class structure to represent what are essentially simple (though possibly highly interconnected) relationships.</p>
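The relational approach can be sketched with the stdlib <code>sqlite3</code> module; the table and column names below are illustrative, not a prescribed schema. A join table links teachers to subjects, so both directions of lookup ("does teacher X teach math?", "who teaches math?") become plain queries instead of hand-written class traversals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teacher (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE subject (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE teacher_subject (
    teacher_id INTEGER REFERENCES teacher(id),
    subject_id INTEGER REFERENCES subject(id)
);
""")
conn.execute("INSERT INTO teacher VALUES (1, 'Mrs B'), (2, 'Mr A')")
conn.execute("INSERT INTO subject VALUES (1, 'math'), (2, 'geography')")
conn.execute("INSERT INTO teacher_subject VALUES (1, 1), (1, 2), (2, 1)")

# who teaches math?
math_teachers = [row[0] for row in conn.execute("""
    SELECT t.name FROM teacher t
    JOIN teacher_subject ts ON ts.teacher_id = t.id
    JOIN subject s ON s.id = ts.subject_id
    WHERE s.name = 'math' ORDER BY t.name
""")]
```

Rooms, classes, groups, and time slots would get the same treatment: one table each, plus join tables for the many-to-many relations (room preferences, group membership, and so on).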
1
2016-09-14T15:16:40Z
[ "python", "design-patterns", "design" ]