Python, search list of list and give next value
39,748,249
<p>I'm creating a program that given a percentage for commission and given an invoice(s), it will look for given invoices, look for the value of that invoice and calculate the commission based on the percentage input. When we input <code>"calculate"</code> it will sum all the commission and give me the total of commission. </p> <p>I'm having trouble trying to identify the input within the list. Once I do that, what must I do to be given the value of that invoice?</p> <p><code>data base = [Invoice, Invoice amount, invoice margin]</code></p> <p>This is what I have so far. </p> <pre><code>data_base = [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]] comission_percentage = int(input("Comision Percentage: ")) invoice = input("Enter an invoice: ") total_comision = 0 while invoice != "calculate" : while invoice in data_base: invoice_ = data_base.index(invoice)+1 #Gives me the position of invoice invoice_amount = data_base[p_factura] #Give me the value os the invoice comission = ((margen_comisionar/100)* monto_factura) #calculatecomission total_comision = comission + total_comission invoice = input("Enter an invoice: ") print("The amount to comission is: " + str(total_comission)) </code></pre>
1
2016-09-28T12:57:17Z
39,748,794
<p>You would do better to split up the task of finding the right invoice from the calculations.</p> <p>At present your code will not run at all, as you are using variables whose values you haven't defined. Since you don't need to alter the invoices at all it will be fine just to return the values you want. Here's an example that returns an "empty" invoice in case it doesn't find the one you asked for:</p> <pre><code>def find_invoice(s): for i_num, amount, rate in data_base: if i_num == s: return i_num, amount, rate return (None, 0.0, 0.0) </code></pre> <p>Then the main part of your logic can read:</p> <pre><code>total_commission = 0.0 invoice = input("Enter an invoice: ") while invoice != "calculate": i, amt, rate = find_invoice(invoice) if i: total_commission += amt*comission_percentage/100.0 invoice = input("Enter an invoice: ") </code></pre> <p>Also, be careful with variable names. Your spelling must be EXACT! Not a bad effort for an apparent beginner, though.</p>
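For what it's worth, here is a self-contained sketch putting the two pieces above together; the sample data comes from the question, while the `total_commission` wrapper and the example invoice names are assumptions added just for the demo.

```python
data_base = [["f1", 2000, .24], ["f2", 150000, .32], ["f3", 345000, .32]]

def find_invoice(s):
    # Linear scan; returns an "empty" invoice when s is not found.
    for i_num, amount, rate in data_base:
        if i_num == s:
            return i_num, amount, rate
    return None, 0.0, 0.0

def total_commission(names, pct):
    # Sum commission over the requested invoice names, skipping unknowns.
    total = 0.0
    for name in names:
        i, amt, rate = find_invoice(name)
        if i:
            total += amt * pct / 100.0
    return total

print(total_commission(["f1", "f2", "bogus"], 10))  # 15200.0
```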
0
2016-09-28T13:21:15Z
[ "python", "list", "element" ]
39,748,932
<p>Your 'database' seems to be a list of lists. I'm assuming the input from the user is supposed to match the first part of the list.</p> <p>So given:</p> <pre><code>data_base = [["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]] </code></pre> <p>If the user inputs <code>f1</code>, you want it to match <code>["f1",2000,.24]</code>.</p> <p>Your inner <code>while</code> loop will not work in this case since <code>"f1"</code> is not in <code>[["f1",2000,.24],["f2",150000,.32],["f3",345000,.32]]</code>.</p> <p>You need to check if the first element of every element of your list matches your input.</p> <pre><code>for invoice_info in data_base: if invoice == invoice_info[0]: # found it! &lt;do your invoice commission calculations here&gt; </code></pre> <p>A better way to do it would be to convert your database to dictionaries:</p> <pre><code>data_base = {'f1': {'invoice_amount': 2000, 'invoice_margin': .24}, 'f2': {'invoice_amount': 150000, 'invoice_margin': .32}, 'f3': {'invoice_amount': 345000, 'invoice_margin': .32} } </code></pre> <p>Then you can do</p> <pre><code>if invoice in data_base: found_invoice = data_base[invoice] total_commission += found_invoice['invoice_amount'] * commission_percentage / 100 </code></pre>
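A minimal runnable sketch of the dictionary approach; the `commission_for` helper name is hypothetical, added only to make the lookup testable in isolation.

```python
data_base = {'f1': {'invoice_amount': 2000, 'invoice_margin': .24},
             'f2': {'invoice_amount': 150000, 'invoice_margin': .32},
             'f3': {'invoice_amount': 345000, 'invoice_margin': .32}}

def commission_for(invoice, pct, db):
    # Dict lookup is O(1); unknown invoices contribute nothing.
    info = db.get(invoice)
    return info['invoice_amount'] * pct / 100.0 if info else 0.0

print(commission_for('f1', 10, data_base))  # 200.0
```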
0
2016-09-28T13:27:13Z
39,748,954
<p>You should try this code; check the names and indentation:</p> <pre><code>total = 0 invoice = input("Enter invoice: ") while invoice != 'calculate': for item in data_base: if item[0] == invoice: total = total + item[1]*comission_percentage/100 invoice = input("Enter invoice: ") </code></pre>
0
2016-09-28T13:28:12Z
PYTHONPATH order on Ubuntu 14.04
39,748,267
<p>I have two computers running Ubuntu 14.04 server (let's call them A and B). B was initially a 10.04 but it has received two upgrades to 12.04 and 14.04. I do not understand why the python path is different on the two computers.</p> <p>As you can see on the two paths below, the pip installation path <code>/usr/local/lib/python2.7/dist-packages</code> comes <strong>before</strong> the apt python packages path <code>/usr/lib/python2.7/dist-packages</code> on Ubuntu A, but it comes <strong>after</strong> on Ubuntu B.</p> <p>This leads to several problems if a python package is installed both via apt and pip. As you can see below, if both <code>python-six</code> apt package and <code>six</code> pip package are installed, they may be two different library versions.</p> <p>The installation of packages system is not always my choice, but might be some dependencies of other packages that are installed.</p> <p>This problem could probably be solved with a virtualenv, but for reasons I will not detail, <strong>I cannot use virtualenv</strong> here, and must install pip packages system-wide.</p> <h2>Ubuntu A</h2> <pre><code>&gt;&gt;&gt; import sys, six &gt;&gt;&gt; sys.path ['', '/usr/local/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/local/lib/python2.7/dist-packages/IPython/extensions'] &gt;&gt;&gt; six &lt;module 'six' from '/usr/local/lib/python2.7/dist-packages/six.pyc'&gt; </code></pre> <h2>Ubuntu B</h2> <pre><code>&gt;&gt;&gt; import sys, six &gt;&gt;&gt; sys.path ['', '/usr/local/bin', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', 
'/usr/lib/python2.7/dist-packages/PILcompat', '/usr/local/lib/python2.7/dist-packages/IPython/extensions'] &gt;&gt;&gt; six &lt;module 'six' from '/usr/lib/python2.7/dist-packages/six.pyc'&gt; </code></pre> <p>For both machines <code>$PATH</code> is the same, and <code>$PYTHONPATH</code> is empty.</p> <ul> <li><p>Why are those PYTHONPATHS different?</p></li> <li><p>How can I fix the pythonpath order in "Ubuntu B" so it will load pip packages before the system ones, in a system-wide way? Is there an apt package I should reinstall or reconfigure so the PYTHONPATH would prioritize pip packages?</p></li> </ul>
13
2016-09-28T12:58:06Z
39,748,722
<p><code>PYTHONPATH</code> is an environment variable which you can set however you want to add additional directories. You should not install Python packages manually; use <code>pip</code>. On the older Ubuntu, you probably installed modules manually before the upgrade.</p>
-2
2016-09-28T13:17:45Z
[ "python", "ubuntu", "ubuntu-14.04", "pythonpath", "ubuntu-server" ]
39,750,657
<p>You can set <code>PYTHONPATH</code> to the path order of <code>Ubuntu A</code>:</p> <pre><code>env PYTHONPATH="/usr/local/bin:/usr/lib/python2.7:/usr/lib/python2.7/plat-x86_64-linux-gnu:..." python </code></pre>
-1
2016-09-28T14:38:47Z
39,807,752
<p>The simplest way is to use <a href="https://docs.python.org/2/library/sys.html#sys.path" rel="nofollow">sys.path</a> to make sure that you have the right order of paths. <code>sys.path</code> gives the list of directories on the module search path, in the order they are searched. If you want any path to have higher priority than the others, just add it to the beginning of the list.</p> <p>And you will also find this in the official docs:</p> <blockquote> <p>A program is free to modify this list for its own purposes.</p> </blockquote> <p>WARNING: Even though this gives better control over the priority, just make sure that whatever library you add does not mess with the system libraries. Otherwise, your library will be searched first, as it is at the beginning of the list, and it may replace the system libraries. Just as an example, if you have written a library by the name <code>os</code>, after adding its directory to <code>sys.path</code>, that library will be imported instead of Python's built-in one. So take caution (and a large grain of salt) before jumping to this.</p>
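A small self-contained demonstration of this priority rule, using two throwaway directories that both provide a module named <code>shadow_demo</code> (all names here are invented for the demo): whichever directory sits earlier in <code>sys.path</code> supplies the module.

```python
import importlib
import os
import sys
import tempfile

def winner(first_dir, second_dir, modname="shadow_demo"):
    # Whichever directory appears earlier in sys.path shadows the other.
    sys.path.insert(0, second_dir)
    sys.path.insert(0, first_dir)
    try:
        return importlib.import_module(modname).ORIGIN
    finally:
        sys.path.remove(first_dir)
        sys.path.remove(second_dir)
        sys.modules.pop(modname, None)  # force a fresh import next time

dir_a = tempfile.mkdtemp()
dir_b = tempfile.mkdtemp()
with open(os.path.join(dir_a, "shadow_demo.py"), "w") as f:
    f.write("ORIGIN = 'A'\n")
with open(os.path.join(dir_b, "shadow_demo.py"), "w") as f:
    f.write("ORIGIN = 'B'\n")

print(winner(dir_a, dir_b))  # A -- the earlier entry wins
print(winner(dir_b, dir_a))  # B
```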
-1
2016-10-01T14:26:14Z
39,822,069
<p>Peek into Python's <code>site.py</code>, which you can do by opening <code>/usr/lib/python2.7/site.py</code> in a text editor.</p> <p>The sys.path is augmented with directories for packages distributed within the distribution. Local addons go into <code>/usr/local/lib/python/dist-packages</code>, the global addons install into <code>/usr/{lib,share}/python/dist-packages</code>.</p> <p>You can change the order by overriding this:</p> <pre><code>def getsitepackages(): """Returns a list containing all global site-packages directories (and possibly site-python). For each directory present in the global ``PREFIXES``, this function will find its `site-packages` subdirectory depending on the system environment, and will return a list of full paths. """ sitepackages = [] seen = set() for prefix in PREFIXES: if not prefix or prefix in seen: continue seen.add(prefix) if sys.platform in ('os2emx', 'riscos'): sitepackages.append(os.path.join(prefix, "Lib", "site-packages")) elif os.sep == '/': sitepackages.append(os.path.join(prefix, "local/lib", "python" + sys.version[:3], "dist-packages")) sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], "dist-packages")) else: sitepackages.append(prefix) sitepackages.append(os.path.join(prefix, "lib", "site-packages")) if sys.platform == "darwin": # for framework builds *only* we add the standard Apple # locations. from sysconfig import get_config_var framework = get_config_var("PYTHONFRAMEWORK") if framework: sitepackages.append( os.path.join("/Library", framework, sys.version[:3], "site-packages")) return sitepackages </code></pre>
1
2016-10-02T21:51:37Z
39,844,293
<p>As we cannot explore your system, I will try to analyze your first question by illustrating how <code>sys.path</code> is initialized. Available references are <a href="http://mikeboers.com/blog/2014/05/23/where-does-the-sys-path-start" rel="nofollow">where-does-sys-path-starts</a> and <a href="http://pyvideo.org/pycon-us-2011/pycon-2011--reverse-engineering-ian-bicking--39-s.html" rel="nofollow">pyco-reverse-engineering</a> (Python 2.6).</p> <p>The <code>sys.path</code> comes from the following variables (in order):</p> <ol> <li><code>$PYTHONPATH</code> (highest priority)</li> <li><code>sys.prefix</code>-ed stdlib</li> <li><code>sys.exec_prefix</code>-ed stdlib</li> <li><code>site-packages</code></li> <li><code>*.pth</code> in site-packages (lowest priority)</li> </ol> <p>Now let's describe each of these variables:</p> <ol> <li><code>$PYTHONPATH</code>, this is just a system environment variable.</li> <li>&amp; 3. <code>sys.prefix</code> and <code>sys.exec_prefix</code> are determined before any python script is executed. It is actually coded in the source <a href="https://hg.python.org/cpython/file/e5d963cb6afc/Modules/getpath.c#l314" rel="nofollow">Module/getpath.c</a>. </li> </ol> <p>The logic is like this: </p> <pre><code>IF $PYTHONHOME IS set: RETURN sys.prefix AND sys.exec_prefix as $PYTHONHOME ELSE: current_dir = directory of python executable; DO: current_dir = parent(current_dir) IF FILE 'lib/pythonX.Y/os.py' EXISTS: sys.prefix = current_dir IF FILE 'lib/pythonX.Y/lib-dynload' EXISTS: sys.exec_prefix = current_dir IF current_dir IS '/': BREAK WHILE(TRUE) IF sys.prefix IS NOT SET: sys.prefix = BUILD_PREFIX IF sys.exec_prefix IS NOT SET: sys.exec_prefix = BUILD_PREFIX </code></pre> <ol start="4"> <li><p>&amp; 5. <code>site-packages</code> and <code>*.pth</code> are added by import of <code>site.py</code>. In this module you will find the docs:</p> <p>This will append site-specific paths to the module search path. On Unix (including Mac OSX), it starts with sys.prefix and sys.exec_prefix (if different) and appends lib/python/site-packages as well as lib/site-python. ... ...</p> <p>For Debian and derivatives, this sys.path is augmented with directories for packages distributed within the distribution. Local addons go into /usr/local/lib/python/dist-packages, Debian addons install into /usr/{lib,share}/python/dist-packages. /usr/lib/python/site-packages is not used.</p> <p>A path configuration file is a file whose name has the form .pth; its contents are additional directories (one per line) to be added to sys.path. ... ...</p></li> </ol> <p>And a code snippet for the important function <code>getsitepackages</code>:</p> <pre><code>sitepackages.append(os.path.join(prefix, "local/lib", "python" + sys.version[:3], "dist-packages")) sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], "dist-packages")) </code></pre> <p>Now let me try to figure out where this odd problem may come from:</p> <ol> <li><code>$PYTHONPATH</code>, impossible, because it is empty on both A and B</li> <li><code>sys.prefix</code> and <code>sys.exec_prefix</code>, maybe, please check them as well as <code>$PYTHONHOME</code></li> <li><code>site.py</code>, maybe, check the file.</li> </ol> <p>The <code>sys.path</code> output of B is quite odd: <code>dist-packages</code> (<code>site-packages</code>) goes before <code>sys.exec_prefix</code> (<code>lib-dynload</code>). Please try to investigate each step of the <code>sys.path</code> initialization on machine B; you may find something.</p> <p>Very sorry that I cannot replicate your problem. By the way, about your question title, I think <code>SYS.PATH</code> is better than <code>PYTHONPATH</code>, which made me misread it as <code>$PYTHONPATH</code> at first glance.</p>
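As a quick diagnostic to run on both machines, one can label each <code>sys.path</code> entry as pip-local or apt-managed. This is a rough heuristic sketched from the hardcoded Ubuntu 14.04 prefixes quoted in the question, not a general rule:

```python
def classify(entry):
    # Heuristic only: label a sys.path entry by who manages it on Ubuntu.
    if entry.startswith("/usr/local/lib/python"):
        return "pip (local)"
    if entry.startswith("/usr/lib/python"):
        return "apt (system)"
    return "other"

# First few entries of machine B's sys.path from the question.
ubuntu_b_path = ['', '/usr/local/bin',
                 '/usr/lib/python2.7/dist-packages',   # system dir comes first: B's oddity
                 '/usr/lib/python2.7',
                 '/usr/local/lib/python2.7/dist-packages']

for entry in ubuntu_b_path:
    print(entry or '<cwd>', '->', classify(entry))
```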
4
2016-10-04T04:41:43Z
Re-compose a Tensor after tensor factorization
39,748,285
<p>I am trying to decompose a 3D matrix using the python library <a href="https://github.com/mnick/scikit-tensor">scikit-tensor</a>. I managed to decompose my Tensor (with dimensions 100x50x5) into three matrices. My question is how can I compose the initial matrix again using the decomposed matrices produced by the Tensor factorization? I want to check if the decomposition has any meaning. My code is the following:</p> <pre><code>import logging from scipy.io.matlab import loadmat from sktensor import dtensor, cp_als import numpy as np # Set logging to DEBUG to see CP-ALS information logging.basicConfig(level=logging.DEBUG) T = np.ones((400, 50)) T = dtensor(T) P, fit, itr, exectimes = cp_als(T, 10, init='random') # how can I re-compose the matrix T? TA = np.dot(P.U[0], P.U[1].T) </code></pre> <p>I am using the canonical decomposition as provided by the scikit-tensor library function cp_als. Also what is the expected dimensionality of the decomposed matrices?</p>
12
2016-09-28T12:58:35Z
39,794,464
<p>The CP product of, for example, 4 matrices</p> <p><img src="https://chart.googleapis.com/chart?cht=tx&amp;chl=X_%7Babcd%7D+%3D+%5Cdisplaystyle%5Csum_%7Bz%3D0%7D%5E%7BZ%7D%7BA_%7Baz%7D+B_%7Bbz%7D+C_%7Bcz%7D+D_%7Bdz%7D%7D+%2B+%5Cepsilon_%7Babcd%7D" alt="X_{abcd} = \displaystyle\sum_{z=0}^{Z}{A_{az} B_{bz} C_{cz} D_{dz} + \epsilon_{abcd}}"></p> <p>can be expressed using <a href="https://en.wikipedia.org/wiki/Einstein_notation" rel="nofollow">Einstein notation</a> as</p> <p><img src="https://chart.googleapis.com/chart?cht=tx&amp;chl=X_%7Babcd%7D+%3D+A_%7Baz%7D+B_%7Bbz%7D+C_%7Bcz%7D+D_%7Bdz%7D+%2B+%5Cepsilon_%7Babcd%7D" alt="X_{abcd} = A_{az} B_{bz} C_{cz} D_{dz} + \epsilon_{abcd}"></p> <p>or in numpy as</p> <pre><code>numpy.einsum('az,bz,cz,dz -&gt; abcd', A, B, C, D) </code></pre> <p>so in your case you would use</p> <pre><code>numpy.einsum('az,bz-&gt;ab', P.U[0], P.U[1]) </code></pre> <p>or, in your 3-matrix case</p> <pre><code>numpy.einsum('az,bz,cz-&gt;abc', P.U[0], P.U[1], P.U[2]) </code></pre> <p><code>sktensor.ktensor.ktensor</code> also has a method <code>totensor()</code> that does exactly this:</p> <pre><code>np.allclose(np.einsum('az,bz-&gt;ab', P.U[0], P.U[1]), P.totensor()) &gt;&gt;&gt; True </code></pre>
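A quick numerical check that the einsum expression really is the sum of rank-one terms; the random factor matrices and the rank Z=3 are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = 3  # CP rank, chosen arbitrarily
A = rng.standard_normal((4, Z))
B = rng.standard_normal((5, Z))
C = rng.standard_normal((6, Z))

# einsum form of the CP reconstruction
X = np.einsum('az,bz,cz->abc', A, B, C)

# the same tensor written as an explicit sum of rank-one terms
X_manual = sum(np.multiply.outer(np.outer(A[:, z], B[:, z]), C[:, z])
               for z in range(Z))

print(np.allclose(X, X_manual))  # True
```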
5
2016-09-30T14:57:48Z
[ "python", "math", "data-science", "scikits" ]
Fill missing values of 1 data frame from another data frame using pandas
39,748,413
<p>Need help in filling 0's.</p> <p>In the dataframe below I have a column "Item_Visibility" which has zeros. I need to fill those with values from a second dataframe (image 2). The common column between the 2 dataframes is "Item_Identifier".</p> <p>Thanks in advance</p> <p><a href="http://i.stack.imgur.com/XnvfR.png" rel="nofollow"><img src="http://i.stack.imgur.com/XnvfR.png" alt="enter image description here"></a> <a href="http://i.stack.imgur.com/muRHU.png" rel="nofollow"><img src="http://i.stack.imgur.com/muRHU.png" alt="enter image description here"></a></p>
0
2016-09-28T13:04:17Z
39,748,655
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow"><code>mask</code></a> with mapping by a <code>Series</code> via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a>:</p> <pre><code>print (df1) a b c d 0 FDA15 9.30 Low Fat 0.016 1 FDX07 19.20 Regular 0.000 2 NCD19 8.93 Low Fat 0.000 3 FDP10 NaN Low Fat 0.127 print (df2) e d 0 FDW59 0.0202 1 FDX07 0.0178 df1.d = df1.d.mask(df1.d == 0, df1.a.map(df2.set_index('e')['d'])) print (df1) a b c d 0 FDA15 9.30 Low Fat 0.0160 1 FDX07 19.20 Regular 0.0178 2 NCD19 8.93 Low Fat NaN 3 FDP10 NaN Low Fat 0.1270 </code></pre>
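A self-contained version of the same idea, with tiny frames built from the column names mentioned in the question (`Item_Identifier`, `Item_Visibility`); the sample values are invented for the demo.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Item_Identifier': ['FDA15', 'FDX07', 'NCD19'],
                    'Item_Visibility': [0.016, 0.0, 0.0]})
df2 = pd.DataFrame({'Item_Identifier': ['FDW59', 'FDX07'],
                    'Item_Visibility': [0.0202, 0.0178]})

# Build a lookup Series keyed by the shared identifier column.
lookup = df2.set_index('Item_Identifier')['Item_Visibility']

# Replace zeros with the mapped value; ids absent from df2 become NaN.
df1['Item_Visibility'] = df1['Item_Visibility'].mask(
    df1['Item_Visibility'] == 0,
    df1['Item_Identifier'].map(lookup))

print(df1)
```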
1
2016-09-28T13:15:34Z
[ "python", "pandas" ]
39,748,781
<p>try this:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({"A":["a", "b", "c", "d", "e"], "B":[1, 2, 0, 0, 0]}) s = pd.Series([10, 20, 30, 40], index=["a", "b", "c", "d"]) mask = df["B"] == 0 df.loc[mask, "B"] = s[df.loc[mask, "A"]].values </code></pre> <p>df:</p> <pre><code> A B 0 a 1 1 b 2 2 c 0 3 d 0 4 e 0 </code></pre> <p>s:</p> <pre><code>a 10 b 20 c 30 d 40 dtype: int64 </code></pre> <p>output:</p> <pre><code> A B 0 a 1.0 1 b 2.0 2 c 30.0 3 d 40.0 4 e NaN </code></pre>
1
2016-09-28T13:20:40Z
[ "python", "pandas" ]
Kernel Density Estimation Heatmap in python
39,748,477
<p>I have a list of latitude and longitude coordinates and the respective received signal strength values at each coordinate. How would I plot a kernel density estimation (KDE) 2D heatmap for these signal strengths at each lat-lon in Python (matplotlib)?</p>
-1
2016-09-28T13:07:38Z
39,748,978
<p>You can use the Python library <strong>seaborn</strong>. It has a handy function that plots a kernel density estimate as a heatmap.</p> <p>Check this out:</p> <pre><code>import seaborn as sns lat = [list_of_values] long = [list_of_values] ax = sns.kdeplot(lat, long, cmap="Blues", shade=True, shade_lowest=False) </code></pre>
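<p>If installing seaborn is not an option, the estimate itself can be computed by hand with NumPy and then drawn with, say, <code>plt.pcolormesh(glon, glat, density)</code>. A rough sketch with made-up coordinates; the bandwidth value is an arbitrary choice, and note this estimates point density only — weighting by signal strength would mean folding the weights into the kernel sum.</p>

```python
import numpy as np

# Hypothetical coordinates; real data would come from the GPS readings
lat = np.array([51.50, 51.51, 51.52, 51.50, 51.51])
lon = np.array([-0.12, -0.11, -0.13, -0.11, -0.12])

pts = np.vstack([lat, lon])          # shape (2, n)
n = pts.shape[1]
bw = 0.01                            # bandwidth, chosen by hand here

# Grid on which to evaluate the kernel density estimate
grid_lat = np.linspace(lat.min(), lat.max(), 20)
grid_lon = np.linspace(lon.min(), lon.max(), 20)
glat, glon = np.meshgrid(grid_lat, grid_lon)
grid = np.vstack([glat.ravel(), glon.ravel()])   # shape (2, m)

# Sum of 2-D Gaussian kernels centred on each observation
diff = grid[:, :, None] - pts[:, None, :]        # (2, m, n)
sq = (diff ** 2).sum(axis=0) / bw ** 2           # (m, n)
density = np.exp(-0.5 * sq).sum(axis=1) / (n * 2 * np.pi * bw ** 2)
density = density.reshape(glat.shape)
```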
0
2016-09-28T13:29:05Z
[ "python", "matplotlib", "heatmap", "kernel-density" ]
How to avoid invalid token when converting binary to decimal
39,748,689
<p>Okay, first of all I know there is a built-in function for converting binary to decimal, but I thought I'd challenge myself and make my own.</p> <p>Here is the code:</p> <pre><code>def binaryToDecimal(binary): binaryList = list(str(binary)) exponent = len(binaryList) - 1 decimal = 0 for char in binaryList: bit = int(char) decimal += bit * (2 ** exponent) exponent -= 1 print(decimal) </code></pre> <p>The problem is, I know if I want to begin my binary with 0, I have to use the 0b prefix to avoid an invalid token, but it leads me to a problem. It doesn't convert the last 1 on the left</p> <pre><code>binaryToDecimal(0b010001110) </code></pre> <p>So it outputs 14 instead of 142.</p> <p>How do I fix that? Also, is there a way I can just say</p> <pre><code>binaryToDecimal(010001110) </code></pre> <p>without getting an invalid token and without using the 0b prefix?</p> <p>I'm using Python 3.5.</p>
-1
2016-09-28T13:16:49Z
39,749,042
<p>Your problem is mainly <code>binaryList = list(str(binary))</code>. <code>str(0b010001110)</code> returns <code>'142'</code>, not <code>'0b010001110'</code> as you might expect.</p> <p>Consequently, <code>binaryList</code> is <code>['1', '4', '2']</code>.</p> <p>By coincidence, <code>1 * 2**2 + 4 * 2**1 + 2</code> equals 14, which gives the impression that your function doesn't treat the last digit.</p> <p>But if you run your function with <code>0b101010</code> as input, you get <code>10</code>.</p> <p>A solution to get the binary representation as a string is to use <code>format</code>:</p> <pre><code>list("{0:b}".format(0b010001110)) </code></pre> <p>which returns</p> <pre><code>['1', '0', '0', '0', '1', '1', '1', '0'] </code></pre>
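<p>Putting this together, one fix is to pass the bits in as a string, which sidesteps the literal problem (and its dropped leading zeros) entirely. A sketch, with the function renamed to snake_case:</p>

```python
def binary_to_decimal(bits):
    # bits is a string such as "010001110"; leading zeros are fine
    exponent = len(bits) - 1
    decimal = 0
    for char in bits:
        decimal += int(char) * 2 ** exponent
        exponent -= 1
    return decimal

print(binary_to_decimal("010001110"))   # 142
```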
0
2016-09-28T13:31:23Z
[ "python", "python-3.x" ]
How to avoid invalid token when converting binary to decimal
39,748,689
<p>Okay, first of all I know there is a built-in function for converting binary to decimal, but I thought I'd challenge myself and make my own.</p> <p>Here is the code:</p> <pre><code>def binaryToDecimal(binary): binaryList = list(str(binary)) exponent = len(binaryList) - 1 decimal = 0 for char in binaryList: bit = int(char) decimal += bit * (2 ** exponent) exponent -= 1 print(decimal) </code></pre> <p>The problem is, I know if I want to begin my binary with 0, I have to use the 0b prefix to avoid an invalid token, but it leads me to a problem. It doesn't convert the last 1 on the left</p> <pre><code>binaryToDecimal(0b010001110) </code></pre> <p>So it outputs 14 instead of 142.</p> <p>How do I fix that? Also, is there a way I can just say</p> <pre><code>binaryToDecimal(010001110) </code></pre> <p>without getting an invalid token and without using the 0b prefix?</p> <p>I'm using Python 3.5.</p>
-1
2016-09-28T13:16:49Z
39,749,298
<p><code>0b010001110</code> is <em>already</em> an integer with the decimal value of 142. If you really want to do your own binary conversion function you'll need to pass the binary arg in as a string, eg '010001110', or as a list of bits, which can be strings, integers, or even the boolean values <code>True</code> and <code>False</code>. </p> <p>Once you have the <code>str</code> vs <code>int</code> input issue resolved, there's a simpler way to do the conversion. There's no need to mess around with exponents: in the loop just bit-shift the current result one place to the left and insert the next bit. Like this: </p> <pre><code>def bin_to_int(bits): result = 0 for b in bits: result = (result &lt;&lt; 1) | int(b) return result # Test data = [ '0', '1', '110', '001101', '10001110', '000010001110', '11000000111001', ] for bits in data: print(bits, int(bits, 2), bin_to_int(bits)) </code></pre> <p><strong>output</strong></p> <pre><code>0 0 0 1 1 1 110 6 6 001101 13 13 10001110 142 142 000010001110 142 142 11000000111001 12345 12345 </code></pre> <p>My test code uses the built-in <code>int</code> constructor to do the conversion as well, to verify that my <code>bin_to_int</code> function is performing correctly.</p> <p>This line, which does most of the work, uses bitwise operators, </p> <pre><code>result = (result &lt;&lt; 1) | int(b) </code></pre> <p>but you can implement it with "normal" arithmetic operators, if you like</p> <pre><code>result = result * 2 + int(b) </code></pre>
1
2016-09-28T13:41:50Z
[ "python", "python-3.x" ]
Python 3.5 - method overloading with @overload
39,748,842
<p>There is an <a href="https://pypi.python.org/pypi/overloading" rel="nofollow">overloading</a> package for Python 3.5+. With this package, it's possible to redefine methods with distinct type hints, and its decorator will find out which overloaded method should be called.</p> <p><strong>Common coding pattern:</strong></p> <pre><code>class foo: def func(param): if isinstance(param, int): pass elif isinstance(param, str): pass elif isinstance(param, list): pass else: raise ValueError() </code></pre> <p><strong>With @overload:</strong></p> <pre><code>class foo: @overload def func(param: int): pass @overload def func(param: str): pass @overload def func(param: list): pass </code></pre> <p>Here is the <a href="https://overloading.readthedocs.io/en/latest/" rel="nofollow">documentation</a>.</p> <hr> <p>My questions are:</p> <ul> <li>How big is the performance impact compared to old-style parameter type switching?</li> <li>And how does this package access the type hints?</li> </ul>
0
2016-09-28T13:23:15Z
39,749,114
<p>You'd have to measure it on your own with real code.</p> <p>I took a very quick look at the code of this library and the conclusion is simple. It uses a lot of reflection (the <code>inspect</code> package) and type comparison. The <code>inspect</code> package on its own is mostly used by debugging tools - they always slow your code down.</p> <p>Just look at these lines:</p> <pre><code>complexity = complexity_mapping[id] if complexity &amp; 8 and isinstance(arg, tuple): element_type = tuple(type(el) for el in arg) elif complexity &amp; 4 and hasattr(arg, 'keys'): element_type = (type(element), type(arg[element])) else: element_type = type(element) </code></pre> <pre><code>type_hints = typing.get_type_hints(func) if typing else func.__annotations__ types = tuple(normalize_type(type_hints.get(param, AnyType)) for param in parameters) </code></pre> <p>Note that this package is over 7 months old and has only 70 stars. Python is not Java... You'd really hurt Python itself with this package :D You'd better implement some core API method that delegates calls to other methods/objects based on type parameters - just like it should be done in Python.</p>
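<p>To get a feeling for the baseline costs without the package, you could compare a plain isinstance chain against dict-based dispatch on <code>type()</code> — a rough, hand-rolled sketch (note the dict version, unlike isinstance, does not match subclasses such as bool vs int):</p>

```python
import timeit

# Old-style dispatch: a chain of isinstance checks
def func_chain(param):
    if isinstance(param, int):
        return "int"
    elif isinstance(param, str):
        return "str"
    elif isinstance(param, list):
        return "list"
    raise ValueError(param)

# Dict-based dispatch on the exact type (does not match subclasses)
_dispatch = {int: lambda p: "int", str: lambda p: "str", list: lambda p: "list"}

def func_dict(param):
    try:
        return _dispatch[type(param)](param)
    except KeyError:
        raise ValueError(param)

args = [1, "a", [1, 2]]
t_chain = timeit.timeit(lambda: [func_chain(a) for a in args], number=10000)
t_dict = timeit.timeit(lambda: [func_dict(a) for a in args], number=10000)
# Absolute numbers vary by machine; the point is that both avoid the
# per-call inspect/typing work the package performs.
```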
1
2016-09-28T13:34:14Z
[ "python", "overloading", "python-3.5", "method-overloading" ]
Why does Python3.5 have generator based co-routines?
39,748,849
<p>If there are no functional differences (besides syntax) between <code>native</code> and <code>generator</code>-based co-routines, why does <code>Python3</code> have both? I understand what a generator-based co-routine is.</p> <p>Was there a particular design decision or thought that made it sensible to have both?</p> <p>Also, I'm a bit confused here; the difference between a generator and a co-routine is that I can't write into a <code>generator</code>; this <a href="https://www.python.org/dev/peps/pep-0342/" rel="nofollow">PEP</a> added the ability to do that. I guess there's no difference between the two in Python? (I'm also assuming that writing into a co-routine and sending a value to a co-routine are the same thing.)</p> <p>Why does <code>asyncio</code> support the use of both?</p>
1
2016-09-28T13:23:35Z
39,749,725
<p>Before 3.5, there was no real difference between a generator and a co-routine. A generator <em>is</em> a co-routine purely by using <code>yield</code> as an expression and expecting data via <code>send()</code>.</p> <p>This sort-of changed with the new <a href="https://www.python.org/dev/peps/pep-0492/" rel="nofollow"><code>async</code> syntax</a> added to Python 3.5. The <code>async</code> syntax is both syntactic sugar (turning generators into async coroutines visually in your code) and an extension to the language to make it possible to use coroutines as context managers and iterables (<code>async with</code> and <code>async for</code>).</p> <p>Quoting the PEP:</p> <blockquote> <p>It is proposed to make <em>coroutines</em> a proper standalone concept in Python, and introduce new supporting syntax. The ultimate goal is to help establish a common, easily approachable, mental model of asynchronous programming in Python and make it as close to synchronous programming as possible.</p> </blockquote> <p>Under the hood, co-routines are basically still generators and generators support most of the same functionality. Co-routines are now a distinct type however, to make it possible to explicitly test for awaitable objects; see <a href="http://bugs.python.org/issue24400" rel="nofollow">issue 24400</a> for the motivation (initially this was a per-instance flag, not a distinct type).</p> <p>To be clear, you can still use <code>.send()</code> on a generator. But a generator is not <em>awaitable</em>; it is not <em>expected</em> to cooperate.</p> <p><code>asyncio</code> must support both because the library aims to be compatible with Python versions &lt; 3.5 and thus can't rely on the new language constructs being available.</p>
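<p>A small sketch of the distinction on Python 3.5+: a plain generator still accepts <code>send()</code>, but only the <code>async def</code> coroutine object is awaitable:</p>

```python
from collections.abc import Awaitable

# A generator used as a coroutine: it cooperates via send()
def running_sum():
    total = 0
    while True:
        value = yield total
        total += value

g = running_sum()
next(g)                       # prime it; runs up to the first yield
g.send(10)
running_total = g.send(5)     # total is now 15

# A native (async def) coroutine is a distinct, awaitable type
async def answer():
    return 42

c = answer()
gen_is_awaitable = isinstance(g, Awaitable)    # False
coro_is_awaitable = isinstance(c, Awaitable)   # True
c.close()                     # silence the "never awaited" warning
```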
3
2016-09-28T13:58:32Z
[ "python", "generator", "python-asyncio", "coroutine" ]
Matplotlib 3D scatter autoscale issue
39,748,867
<p>Python 2.7, matplotlib 1.5.1, Win 7, x64</p> <p>I am trying to plot the shortest distances between a node &amp; its geometrical nearest neighbour (i.e. not its nearest connected neighbour) in a graph using Dijkstra's algorithm.</p> <p>The algorithm is working fine but when it comes to plotting, matplotlib's scaling freaks out when I plot certain nodes.</p> <p>Code snippet:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D # Find the paths between a given node &amp; its nearest neighbours def plotPath(pathStart, pathEnd, pointCol='b'): shortPath = graph.dijkstra(pathStart, pathEnd) # this calculates the shortest path pathNodesIdx = [i-1 for i in shortPath] # Algorithm returns 1 indexed whilst Python uses 0 indexed pathCoords = L3.nodes[pathNodesIdx, 1:4] # retrieves the coordinate for the nodes on the path ax.scatter(pathCoords[1:-1,0], pathCoords[1:-1,1], pathCoords[1:-1,2], s=240, c=pointCol, marker='o') startNode = pathCoords[0] endNode = pathCoords[-1] ax.scatter(startNode[0], startNode[1], startNode[2], s=240, c='g', marker='o') ax.scatter(endNode[0], endNode[1], endNode[2], s=240, c='r', marker='o') for node in pathCoords[1:]: ax.plot([startNode[0], node[0]], [startNode[1], node[1]], [startNode[2], node[2]], color=pointCol, linewidth=2.0) startNode = node return pathCoords pointCol = 'b' fig = plt.figure() ax = fig.add_subplot(111, projection='3d') pathStart = 1 # given node graph=Graph(L3.trabGraph) # L3.trabGraph is list containing the edge/node/cost information for the graph # Return indices for nearest neighbours nearest = [i+1 for i in L3.nodeNeighbours(pathStart, numNeighs=6, method='brute')[1:]] </code></pre> <p>For example, if I just plot the path to the 2nd nearest neighbour using <code>plotPath(1, nearest[2])</code>, I get:</p> <p><a href="http://i.stack.imgur.com/IMZWW.png" rel="nofollow"><img src="http://i.stack.imgur.com/IMZWW.png" alt="enter image description here"></a></p> <p>But if I add the other nearest neighbours using,</p> <pre><code>p0 = plotPath(1, nearest[0]) p1 = plotPath(1, nearest[1]) p2 = plotPath(1, nearest[2]) p3 = plotPath(1, nearest[3]) p4 = plotPath(1, nearest[4]) </code></pre> <p>I get:</p> <p><a href="http://i.stack.imgur.com/qSad5.png" rel="nofollow"><img src="http://i.stack.imgur.com/qSad5.png" alt="enter image description here"></a></p> <p>For reference, the coordinates of the nodes for each case:</p> <pre><code>p0 = array([[ 1.094, 1.76 , 1.125], [ 1.188, 1.75 , 1.104]]) p1 = array([[ 1.094, 1.76 , 1.125], [ 1.104, 1.875, 1.094]]) p2 = array([[ 1.094, 1.76 , 1.125], [ 1.188, 1.75 , 1.104], [ 1.188, 1.688, 1.094]]) p3 = array([[ 1.094, 1.76 , 1.125], [ 1.198, 1.76 , 1.198]]) p4 = array([[ 1.094, 1.76 , 1.125], [ 1.198, 1.76 , 1.198], [ 1.188, 1.708, 1.198]]) </code></pre> <p>For the life of me I don't see why matplotlib does this. Anybody know?</p> <p>I have left out the implementations of the Dijkstra algorithm (from Rosetta code FYI) &amp; the creation of the graph for the sake of brevity &amp; the fact that I'm not at liberty to share the graph information.</p>
0
2016-09-28T13:24:28Z
39,781,033
<p>If the question here really is "How do I set the limits of a 3d plot in matplotlib?" then the answer would be:</p> <p>Just as you do in the 2d case: </p> <pre><code>ax.set_xlim([xmin, xmax]) ax.set_ylim([ymin, ymax]) ax.set_zlim([zmin, zmax]) </code></pre> <p>Finding the values of min and max for respective cases might of course be automated.</p>
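<p>For this particular plot, the limits could be computed from the path coordinates quoted in the question — a sketch of the automation (the padding value is an arbitrary choice):</p>

```python
import numpy as np

# The path coordinate arrays quoted in the question (p0..p4)
paths = [
    np.array([[1.094, 1.760, 1.125], [1.188, 1.750, 1.104]]),
    np.array([[1.094, 1.760, 1.125], [1.104, 1.875, 1.094]]),
    np.array([[1.094, 1.760, 1.125], [1.188, 1.750, 1.104],
              [1.188, 1.688, 1.094]]),
    np.array([[1.094, 1.760, 1.125], [1.198, 1.760, 1.198]]),
    np.array([[1.094, 1.760, 1.125], [1.198, 1.760, 1.198],
              [1.188, 1.708, 1.198]]),
]

pts = np.vstack(paths)           # all points, shape (n, 3)
pad = 0.02                       # a little breathing room around the data
mins = pts.min(axis=0) - pad
maxs = pts.max(axis=0) + pad
# then: ax.set_xlim([mins[0], maxs[0]]), ax.set_ylim([mins[1], maxs[1]]),
#       ax.set_zlim([mins[2], maxs[2]])
```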
1
2016-09-29T22:34:03Z
[ "python", "matplotlib", "plot" ]
Find maximum value and index in a python list?
39,748,916
<p>I have a Python list that is like this:</p> <pre><code>[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] </code></pre> <p>This list can be up to a thousand elements in length. How can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value, which is the first element in the sub-array, in Python?</p>
3
2016-09-28T13:26:26Z
39,748,989
<pre><code>from operator import itemgetter a = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] max(a, key=itemgetter(1))[0] # =&gt; 12588042 </code></pre>
0
2016-09-28T13:29:35Z
[ "python", "list", "max" ]
Find maximum value and index in a python list?
39,748,916
<p>I have a Python list that is like this:</p> <pre><code>[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] </code></pre> <p>This list can be up to a thousand elements in length. How can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value, which is the first element in the sub-array, in Python?</p>
3
2016-09-28T13:26:26Z
39,749,005
<p>Use the <a href="https://docs.python.org/3/library/functions.html#max" rel="nofollow"><code>max</code></a> function and its <code>key</code> parameter, to use only the second element to compare elements of the list.</p> <p>For example,</p> <pre><code>&gt;&gt;&gt; data = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], ..., [12588042, 0.9473684210526315]] &gt;&gt;&gt; max(data, key=lambda item: item[1]) [12588042, 0.9473684210526315] </code></pre> <p>Now, if you want just the first element, then you can simply get the first element alone, or just unpack the result, like this</p> <pre><code>&gt;&gt;&gt; index, value = max(data, key=lambda item: item[1]) &gt;&gt;&gt; index 12588042 &gt;&gt;&gt; value 0.9473684210526315 </code></pre> <hr> <p>Edit: If you want to find the maximum index (first value) out of all elements with the maximum value (second value), then you can do it like this</p> <pre><code>&gt;&gt;&gt; _, max_value = max(data, key=lambda item: item[1]) &gt;&gt;&gt; max(index for index, value in data if value == max_value) </code></pre> <p>You can do the same in a single iteration, like this</p> <pre><code>max_index = float("-inf") max_value = float("-inf") for index, value in data: if value &gt; max_value: max_value = value max_index = index elif value == max_value: max_index = max(max_index, index) </code></pre>
7
2016-09-28T13:30:00Z
[ "python", "list", "max" ]
Find maximum value and index in a python list?
39,748,916
<p>I have a Python list that is like this:</p> <pre><code>[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] </code></pre> <p>This list can be up to a thousand elements in length. How can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value, which is the first element in the sub-array, in Python?</p>
3
2016-09-28T13:26:26Z
39,749,019
<p>Use <code>max</code> with a key.</p> <pre><code>l = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] max_sub = max(l, key=lambda x: x[1]) max_val = max_sub[1] max_index = max_sub[0] </code></pre>
3
2016-09-28T13:30:28Z
[ "python", "list", "max" ]
Find maximum value and index in a python list?
39,748,916
<p>I have a Python list that is like this:</p> <pre><code>[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] </code></pre> <p>This list can be up to a thousand elements in length. How can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value, which is the first element in the sub-array, in Python?</p>
3
2016-09-28T13:26:26Z
39,749,148
<p>Simple:</p> <pre><code>data = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] # avoid shadowing the built-in name 'list' values = [x[1] for x in data] i = values.index(max(values)) print "index-&gt;" + str(data[i][0]) print "max value-&gt;" + str(data[i][1]) </code></pre>
-1
2016-09-28T13:35:30Z
[ "python", "list", "max" ]
Find maximum value and index in a python list?
39,748,916
<p>I have a Python list that is like this:</p> <pre><code>[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] </code></pre> <p>This list can be up to a thousand elements in length. How can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value, which is the first element in the sub-array, in Python?</p>
3
2016-09-28T13:26:26Z
39,749,733
<pre><code>allData = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]] listOfSecondData = [i[1] for i in allData] result = allData[listOfSecondData.index(max(listOfSecondData))][0] print(result) #Output: 12588042 </code></pre>
0
2016-09-28T13:58:52Z
[ "python", "list", "max" ]
Pandas Dataframe row number become the same after selected by condition
39,748,963
<p>So I have 100,000 rows of data that look like this</p> <p>df</p> <pre><code> date ticker holding 0 2016-09-22 1 788315240 1 2016-09-22 2 429232858 2 2016-09-22 3 1677428346 3 2016-09-22 4 321595332 </code></pre> <p>but when I do:</p> <pre><code>df.loc[df.ticker == 3] </code></pre> <p>it returns</p> <pre><code> date ticker stockholding 2 2016-09-22 3 1677428346 2 2016-09-21 3 1679285716 2 2016-09-20 3 1680425466 2 2016-09-19 3 1678823216 2 2016-09-18 3 1678743180 2 2016-09-14 3 1682832643 </code></pre> <p>You may notice that all the row numbers become 2; even if I do</p> <pre><code>for i, row in df[df.ticker == 3].iterrows(): print i </code></pre> <p>it prints all '2's.</p> <p>What am I doing wrong here?</p>
1
2016-09-28T13:28:28Z
39,748,999
<p>You have duplicate values in the <code>index</code> only.</p> <p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>df.reset_index(drop=True, inplace=True) </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'A':[1,2,3,4,5,8], 'B':[4,5,6,0,1,7], 'ticker':[7,8,9,3,3,3]}, index=[0,1,2,2,2,2]) print (df) A B ticker 0 1 4 7 1 2 5 8 2 3 6 9 2 4 0 3 2 5 1 3 2 8 7 3 print (df.loc[df.ticker == 3]) A B ticker 2 4 0 3 2 5 1 3 2 8 7 3 df.reset_index(drop=True, inplace=True) print (df.loc[df.ticker == 3]) A B ticker 3 4 0 3 4 5 1 3 5 8 7 3 </code></pre>
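<p>A minimal sketch reproducing the question's symptom (repeated labels from <code>iterrows()</code>) and the fix:</p>

```python
import pandas as pd

# An index with duplicate labels reproduces the symptom
df = pd.DataFrame({"ticker": [7, 3, 3], "holding": [10, 20, 30]},
                  index=[0, 2, 2])
assert not df.index.is_unique          # quick way to detect the problem

# iterrows() reports the duplicated label for every matching row
labels = [i for i, row in df[df.ticker == 3].iterrows()]       # [2, 2]

df = df.reset_index(drop=True)
labels_fixed = [i for i, row in df[df.ticker == 3].iterrows()] # [1, 2]
```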
0
2016-09-28T13:29:48Z
[ "python", "pandas", "indexing", "dataframe", "condition" ]
On the default/fill value for outer joins
39,748,976
<p>Below are teeny/toy versions of much larger/complex dataframes I'm working with:</p> <pre><code>&gt;&gt;&gt; A key u v w x 0 a 0.757954 0.258917 0.404934 0.303313 1 b 0.583382 0.504687 NaN 0.618369 2 c NaN 0.982785 0.902166 NaN 3 d 0.898838 0.472143 NaN 0.610887 4 e 0.966606 0.865310 NaN 0.548699 5 f NaN 0.398824 0.668153 NaN &gt;&gt;&gt; B key y z 0 a 0.867603 NaN 1 b NaN 0.191067 2 c 0.238616 0.803179 3 p 0.080446 NaN 4 q 0.932834 NaN 5 r 0.706561 0.814467 </code></pre> <p>(FWIW, at the end of this post, I provide code to generate these dataframes.)</p> <p>I want to produce an outer join of these dataframes on the <code>key</code> column<sup>1</sup>, in such a way that the new positions induced by the outer join get default value 0.0. IOW, the desired result looks like this</p> <pre><code> key u v w x y z 0 a 0.757954 0.258917 0.404934 0.303313 0.867603 NaN 1 b 0.583382 0.504687 NaN 0.618369 NaN 0.191067 2 c NaN 0.982785 0.902166 NaN 0.238616 0.803179 3 d 0.898838 0.472143 NaN 0.610887 0.000000 0.000000 4 e 0.966606 0.86531 NaN 0.548699 0.000000 0.000000 5 f NaN 0.398824 0.668153 NaN 0.000000 0.000000 6 p 0.000000 0.000000 0.000000 0.000000 0.080446 NaN 7 q 0.000000 0.000000 0.000000 0.000000 0.932834 NaN 8 r 0.000000 0.000000 0.000000 0.000000 0.706561 0.814467 </code></pre> <p>(Note that this desired output contains some NaNs, namely those that were already present in <code>A</code> or <code>B</code>.)</p> <p>The <code>merge</code> method gets me part-way there, but the filled-in default values are NaN's, not 0.0's:</p> <pre><code>&gt;&gt;&gt; C = pandas.DataFrame.merge(A, B, how='outer', on='key') &gt;&gt;&gt; C key u v w x y z 0 a 0.757954 0.258917 0.404934 0.303313 0.867603 NaN 1 b 0.583382 0.504687 NaN 0.618369 NaN 0.191067 2 c NaN 0.982785 0.902166 NaN 0.238616 0.803179 3 d 0.898838 0.472143 NaN 0.610887 NaN NaN 4 e 0.966606 0.865310 NaN 0.548699 NaN NaN 5 f NaN 0.398824 0.668153 NaN NaN NaN 6 p NaN NaN NaN NaN 0.080446 NaN 7 q NaN NaN NaN NaN 0.932834 
NaN 8 r NaN NaN NaN NaN 0.706561 0.814467 </code></pre> <p>The <code>fillna</code> method fails to produce the desired output, because it modifies some positions that should be left unchanged:</p> <pre><code>&gt;&gt;&gt; C.fillna(0.0) key u v w x y z 0 a 0.757954 0.258917 0.404934 0.303313 0.867603 0.000000 1 b 0.583382 0.504687 0.000000 0.618369 0.000000 0.191067 2 c 0.000000 0.982785 0.902166 0.000000 0.238616 0.803179 3 d 0.898838 0.472143 0.000000 0.610887 0.000000 0.000000 4 e 0.966606 0.865310 0.000000 0.548699 0.000000 0.000000 5 f 0.000000 0.398824 0.668153 0.000000 0.000000 0.000000 6 p 0.000000 0.000000 0.000000 0.000000 0.080446 0.000000 7 q 0.000000 0.000000 0.000000 0.000000 0.932834 0.000000 8 r 0.000000 0.000000 0.000000 0.000000 0.706561 0.814467 </code></pre> <p>How can I achieve the desired output efficiently? (Performance matters here, because I intend to perform this operation on much larger dataframes than those shown here.)</p> <hr> <p>FWIW, below is the code to generate the example dataframes <code>A</code> and <code>B</code>.</p> <pre><code>from pandas import DataFrame from collections import OrderedDict from random import random, seed def make_dataframe(rows, colnames): return DataFrame(OrderedDict([(n, [row[i] for row in rows]) for i, n in enumerate(colnames)])) maybe_nan = lambda: float('nan') if random() &lt; 0.4 else random() seed(0) A = make_dataframe([['a', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['b', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['c', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['d', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['e', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['f', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()]], ('key', 'u', 'v', 'w', 'x')) B = make_dataframe([['a', maybe_nan(), maybe_nan()], ['b', maybe_nan(), maybe_nan()], ['c', maybe_nan(), maybe_nan()], ['p', maybe_nan(), maybe_nan()], ['q', maybe_nan(), maybe_nan()], ['r', maybe_nan(), 
maybe_nan()]], ('key', 'y', 'z')) </code></pre> <hr> <p><sup><sup>1</sup>For for case of <em>multi-key</em> outer joins, see <a href="http://stackoverflow.com/q/39751636/559827">here</a>.</sup></p>
2
2016-09-28T13:29:04Z
39,749,744
<p>You can fill zeros after the <code>merge</code>:</p> <pre><code>res = pd.merge(A, B, how="outer") res.loc[~res.key.isin(A.key), A.columns] = 0 </code></pre> <p><strong>EDIT</strong></p> <p>to skip <code>key</code> column:</p> <pre><code>res.loc[~res.key.isin(A.key), A.columns.drop("key")] = 0 </code></pre>
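<p>A self-contained sketch of the recipe on toy frames, extended to also zero the columns for keys missing from <code>B</code> (the snippet above only handles keys missing from <code>A</code>); NaNs already present in either frame survive untouched:</p>

```python
import numpy as np
import pandas as pd

# Toy frames shaped like the question's A and B
A = pd.DataFrame({"key": ["a", "b", "d"], "u": [0.1, np.nan, 0.3]})
B = pd.DataFrame({"key": ["a", "p"], "y": [np.nan, 0.9]})

res = pd.merge(A, B, how="outer", on="key")
# Zero only the positions the outer join created, on both sides
res.loc[~res.key.isin(A.key), A.columns.drop("key")] = 0
res.loc[~res.key.isin(B.key), B.columns.drop("key")] = 0
```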
1
2016-09-28T13:59:34Z
[ "python", "pandas" ]
Bank ATM Program login
39,749,051
<p>I want to make a program that acts as a bank. How do I make sure the correct ID number is entered with the correct PIN, and then, depending on the ID entered, print "Hello" plus that person's name and show how much money they have in the bank?</p> <pre><code>attempts = 0 store_id = [1057, 2736, 4659, 5691, 1234, 4321] store_name = ["Jeremy Clarkson", "Suzanne Perry", "Vicki Butler-Henderson", "Jason Plato"] store_balance = [172.16, 15.62, 23.91, 62.17, 131.90, 231.58] store_pin = [1057, 2736, 4659, 5691] start = int(input("Are you a member of the Northern Frock Bank?\n1. Yes\n2. No\n")) if start == 1: idguess = "" pinguess = "" while (idguess not in store_id) or (pinguess not in store_pin): idguess = int(input("ID Number: ")) pinguess = int(input("PIN Number: ")) if (idguess not in store_id) or (pinguess not in store_pin): print("Invalid Login") attempts = attempts + 1 if attempts == 3: print("This ATM has been blocked for too many failed attempts.") break elif start == 2: name = str(input("What is your full name?: ")) pin = str(input("Please choose a 4 digit pin number for your bank account: ")) digits = len(pin) balance = 100 while digits != 4: print("That Pin is Invalid") pin = str(input("Please choose a 4 digit pin number for your bank account: ")) digits = len(pin) store_name.append(name) store_pin.append(pin) </code></pre>
-4
2016-09-28T13:31:46Z
39,750,876
<p>I'm very impressed by how much you've elaborated on your program. Here's how I would view your solution.</p> <hr> <p>So to create a login simulation, I would instead use a <em>dictionary</em>. That way you can assign an ID to a PIN. For example:</p> <pre><code>credentials = { "403703": "121", "3900": "333", "39022": "900" } </code></pre> <p>Here your ID is on the left side of the colon and the PIN is on the right. You would also have to assign the ID to a name that belongs to that ID using, you guessed it, a dictionary!</p> <pre><code>bankIDs = { "403703": "Anna", "3900": "Jacob", "39022": "Kendrick" } </code></pre> <p>Now that you've done that, you can create your virtual login system using <em>if/else</em> control flow. I've made my code like this:</p> <pre><code>attempts = 0 try: while attempts &lt; 3: id_num = raw_input("Enter your ID: ") PIN = raw_input("Password: ") if (id_num in credentials) and (PIN == credentials[id_num]): print "login success." login(id_num) else: print "Login fail. try again." attempts += 1 if attempts == 3: print "You have reached the maximum amount of tries." except KeyboardInterrupt: print "Now closing. Goodbye!" </code></pre> <p>Note that the try/except block is really optional. You could use the <code>break</code> operator like you did in your code if you wanted to, instead. I just like to put a little customization in there (remember, to break out of your program use CTRL-C). Finally, Python has a way of making life easier for people by using <em>functions</em>. Notice I used one where I put <code>login(id_num)</code>. Above this while loop you'll want to define your login so that you can display a greeting message for that particular person. Here's what I did:</p> <pre><code>def login(loginid): print "Hello, %s!" % bankIDs[loginid] </code></pre> <p>Simple use of string formatting. And there you have it. The same can be done with displaying that person's balance. Just make the dictionary for it, then print the code in your login definition. The rest of the code is good as it is. Just make sure you've properly indented your while-loop inside the <em>elif</em> on the bottom of your code, and your last 2 lines as well. Hope I helped. Cheers!</p>
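<p>The snippets above are Python 2 (<code>raw_input</code> and print statements), while the question's code uses Python 3's <code>input()</code>. Here is roughly the same login logic as a Python 3 sketch, with the prompt function injectable so the loop can be exercised without a terminal; the ID/PIN pairings are invented, only the names come from the question's <code>store_name</code> list:</p>

```python
# ID -> PIN and ID -> name tables; the pairings are made up
credentials = {"1057": "4659", "2736": "1234"}
names = {"1057": "Jeremy Clarkson", "2736": "Suzanne Perry"}

def check_login(id_num, pin):
    """Return a greeting on success, None on a failed attempt."""
    if credentials.get(id_num) == pin:
        return "Hello, %s!" % names[id_num]
    return None

def atm(read=input, attempts=3):
    # 'read' is injectable so the loop can be driven in tests
    for _ in range(attempts):
        greeting = check_login(read("ID Number: "), read("PIN Number: "))
        if greeting is not None:
            return greeting
    return "This ATM has been blocked for too many failed attempts."
```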
0
2016-09-28T14:47:03Z
[ "python" ]
How to improve a mixin structure in Python?
39,749,146
<p>I have a simple mixin structure in Python. The code should be pretty self-explanatory:</p> <pre><code>class Base: def __init__(self): pass class MixinA: def __init__(self): self.x = 0 self.y = 1 def a(self): print('A: x = ' + str(self.x) + ', y = ' + str(self.y)) class MixinB: def __init__(self): self.x = 2 self.z = 3 def b(self): print('B: x = ' + str(self.x) + ', z = ' + str(self.z)) class MyFirstMix(MixinA, MixinB, Base): def __init__(self): Base.__init__(self) MixinB.__init__(self) MixinA.__init__(self) class MySecondMix(MixinA, Base): def __init__(self): Base.__init__(self) MixinA.__init__(self) </code></pre> <p>I'd like to improve this a bit, so this leads to 3 questions/problems:</p> <ol> <li><code>MixinA</code> and <code>MixinB</code> both have a member <code>x</code>. Is there a way to make sure each of the classes sees only its own <code>x</code>? As far as I know: no, there isn't.</li> <li>It's slightly cumbersome to call the constructor for each mixin in the mixed class. Is there a way to automatically call all constructors or to do something with the same effect?</li> <li>Is there a way to dynamically mix in something in-line, without explicitly creating a class? I'm looking for a syntax like: <code>mix = Base() with MixinA</code></li> </ol> <p>If my proposed structure is completely wrong, I'm also open to other recommendations on how to handle mixins.</p>
1
2016-09-28T13:35:28Z
39,749,418
<p>When your inheritance schemes start to suffer from these sorts of issues it's time to consider using a technique called <em>composition</em> instead. A good readable introduction to the topic <a href="https://www.thoughtworks.com/insights/blog/composition-vs-inheritance-how-choose" rel="nofollow">here</a>. The <a href="https://en.wikipedia.org/wiki/Composition_over_inheritance" rel="nofollow">Wikipedia example</a> is a bit less accessible, but also useful if you can handle the other programming languages. <a href="http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-composition-over-inheritance">This StackExchange question</a> also offers useful discussion.</p> <p>At its simplest, rather than a class inheriting from <code>SomeParent</code> and mixing in the <code>Mixin</code> class, you instead have the <code>SomeParent</code> instances each create an instance of <code>Mixin</code> and use that to access the mixin class's functionality.</p>
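A minimal sketch of the composition approach described above (all names here are illustrative, not from the original code):

```python
class Helper:
    """Plays the role the mixin used to play."""
    def __init__(self):
        self.x = 0

    def describe(self):
        return "x = %d" % self.x

class Widget:
    def __init__(self):
        # compose: hold a Helper instance instead of inheriting from it
        self._helper = Helper()

    def describe(self):
        # delegate to the composed object
        return self._helper.describe()
```

Because each composed object keeps its own attributes, two helpers with a member of the same name can no longer clash the way two mixins' `x` attributes do.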
0
2016-09-28T13:47:16Z
[ "python", "mixins" ]
How to improve a mixin structure in Python?
39,749,146
<p>I have a simple mixin structure in Python. The code should be pretty self-explaining:</p> <pre><code>class Base: def __init__(self): pass class MixinA: def __init__(self): self.x = 0 self.y = 1 def a(self): print('A: x = ' + str(self.x) + ', y = ' + str(self.y)) class MixinB: def __init__(self): self.x = 2 self.z = 3 def b(self): print('B: x = ' + str(self.x) + ', z = ' + str(self.z)) class MyFirstMix(MixinA, MixinB, Base): def __init__(self): Base.__init__(self) MixinB.__init__(self) MixinA.__init__(self) class MySecondMix(MixinA, Base): def __init__(self): Base.__init__(self) MixinA.__init__(self) </code></pre> <p>I'd like to improve this a bit, so this leads to 3 questions/problems:</p> <ol> <li><code>MixinA</code> and <code>MixinB</code> both have a member <code>x</code>. Is there a way to make sure, each of the class sees only its own x<code>?</code> As far as I know: No, there isn't.</li> <li>It's slightly cumbersome to call the constructor for each mixin in the mixed class. Is there a way to automatically call all constructors or to do something with the same effect?</li> <li>Is there a way to dynamically mixin something in-line, without explicitly creating a class? I'm looking for a syntax like: <code>mix = Base() with MixinA</code></li> </ol> <p>If my proposed structure is completely wrong, I'm also open for other recommendations on how to handle mixins.</p>
1
2016-09-28T13:35:28Z
39,752,890
<p>For Python class inheritance, I believe there are some tricks you need to know:</p> <ol> <li><p>Classes in Python 2 and Python 3 are quite different. Python 2 supports old-style classes, but Python 3 supports new-style classes only. Simply speaking: in Python 3, classes always inherit from a base class <code>object</code>, even if you do not explicitly say so. Check <a href="http://stackoverflow.com/questions/54867/what-is-the-difference-between-old-style-and-new-style-classes-in-python">Difference-between-Old-New-Class</a>.</p></li> <li><p>Method Resolution Order (MRO). This determines how a derived class searches for inherited members and functions. See <a href="http://python-history.blogspot.com/2010/06/method-resolution-order.html" rel="nofollow">MRO</a></p></li> <li><p>The <code>super</code> function. Combined with the MRO, you can easily call a parent member or function without explicitly knowing the name of the parent class. See <a href="http://stackoverflow.com/questions/607186/how-does-pythons-super-do-the-right-thing">Super</a></p></li> </ol> <p>Now to your questions:</p> <ol> <li>MixinA and MixinB both have a member x. Is there a way to make sure each class sees only its own x?</li> </ol> <p>I don't quite understand your meaning. When you refer to a class member, you must refer to it through an instance or the class itself. So <code>instance_of_MixinA.x</code> and <code>instance_of_MixinB.x</code> are separate. If you are talking about class <code>MyFirstMix(MixinA, MixinB, Base)</code>, it depends on how the <code>__init__</code> functions are called. In your code, you first populate <code>x</code> via <code>MixinB</code> and then reset its value via <code>MixinA</code>.</p> <ol start="2"> <li>It's slightly cumbersome to call the constructor for each mixin in the mixed class. Is there a way to automatically call all constructors or to do something with the same effect?</li> </ol> <p>Your design makes it impossible. You have to call all the constructors.</p> <ol start="3"> <li>Is there a way to dynamically mixin something in-line, without explicitly creating a class?</li> </ol> <p>I am not sure. But I can give you a suggestion: try defining the members outside <code>__init__</code> when you define the class (Python 3; if you use Python 2, take care with <code>super</code>):</p> <pre><code>class Base:
    def __init__(self):
        pass

class MixinA:
    x = 0
    y = 1

class MixinB:
    x = 2
    z = 3

    def b(self):
        print('B: x = ' + str(self.x) + ', z = ' + str(self.z))

class MyFirstMix(MixinA, MixinB, Base):
    def __init__(self):
        super().__init__()

class MySecondMix(MixinA, Base):
    def __init__(self):
        super().__init__()
</code></pre> <p>Variables outside <code>__init__</code> behave quite differently from variables inside it: outside variables belong to the class and are shared by all instances of this class, while inside variables belong only to the instance (referred to by <code>self</code> when you define the class), and are populated only when <code>__init__</code> is called. That's why you cannot use <code>super</code> to call all the constructors---<code>super</code> only calls the highest-priority parent's <code>__init__</code>. See <a href="http://stackoverflow.com/questions/1537202/variables-inside-and-outside-of-a-class-init-function">variables-outsite-inside-init</a></p> <p>This is a good solution for mixin classes. In the code above, <code>MyFirstMix</code> inherits from both <code>MixinA</code> and <code>MixinB</code>, whose members are all class members (outside <code>__init__</code>). So instances of <code>MyFirstMix</code> will inherit all class members of <code>MixinA</code> and <code>MixinB</code> without calling <code>__init__</code>. Here <code>MixinA</code> and <code>MixinB</code> define the same class member <code>x</code>, but the <code>MRO</code> determines that when instances of <code>MyFirstMix</code> refer to <code>x</code>, the <code>x</code> from <code>MixinA</code> is returned.</p> <p>Hope this will be helpful. Thanks!</p>
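The MRO claim in the last paragraph is easy to verify; this small sketch (Python 3) uses class-level attributes as suggested and checks which `x` wins:

```python
class MixinA:
    x = 0
    y = 1

class MixinB:
    x = 2
    z = 3

class MyFirstMix(MixinA, MixinB):
    pass

# MixinA comes before MixinB in the MRO, so MixinA.x shadows MixinB.x
mro_names = [cls.__name__ for cls in MyFirstMix.__mro__]
```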
0
2016-09-28T16:24:08Z
[ "python", "mixins" ]
concatenate excel datas with python or Excel
39,749,152
<p>Here's my problem, I have an Excel sheet with 2 columns (see below)<a href="http://i.stack.imgur.com/jMLBZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/jMLBZ.png" alt="enter image description here"></a></p> <p>I'd like to print (on python console or in a excel cell) all the data under this form : </p> <pre><code> "1" : ["1123","1165", "1143", "1091", "n"], *** n ∈ [A2; A205]*** </code></pre> <p>We don't really care about the Column B. But I need to add every postal code under this specific form. </p> <p>is there a way to do it with Excel or in Python with Panda ? (If you have any other ideas I would love to hear them)</p> <p>Cheers</p>
2
2016-09-28T13:35:41Z
39,749,442
<p>I think you can use <code>parse_cols</code> to parse only the first column and then filter out all rows from 205 to 1000 with <code>skiprows</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html" rel="nofollow"><code>read_excel</code></a>:</p> <pre><code>df = pd.read_excel('test.xls',
                   sheet_name='Sheet1',
                   parse_cols=0,
                   skiprows=list(range(205,1000)))
print (df)
</code></pre> <p>Last, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow"><code>tolist</code></a> to convert the first column to a <code>list</code>:</p> <pre><code>print({"1": df.iloc[:,0].tolist()})
</code></pre> <hr> <p>The simplest solution is to parse only the first column and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a>:</p> <pre><code>df = pd.read_excel('test.xls', parse_cols=0)
print({"1": df.iloc[:206,0].astype(str).tolist()})
</code></pre>
2
2016-09-28T13:47:59Z
[ "python", "excel", "pandas", "xlsxwriter" ]
concatenate excel datas with python or Excel
39,749,152
<p>Here's my problem, I have an Excel sheet with 2 columns (see below)<a href="http://i.stack.imgur.com/jMLBZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/jMLBZ.png" alt="enter image description here"></a></p> <p>I'd like to print (on python console or in a excel cell) all the data under this form : </p> <pre><code> "1" : ["1123","1165", "1143", "1091", "n"], *** n ∈ [A2; A205]*** </code></pre> <p>We don't really care about the Column B. But I need to add every postal code under this specific form. </p> <p>is there a way to do it with Excel or in Python with Panda ? (If you have any other ideas I would love to hear them)</p> <p>Cheers</p>
2
2016-09-28T13:35:41Z
39,749,520
<p>I am not familiar with Excel, but pandas can easily handle this problem.</p> <p>First, read the Excel file into a DataFrame:</p> <pre><code>import pandas as pd
df = pd.read_excel(filename)
</code></pre> <p>Then, print as you like:</p> <pre><code>print({"1": list(df.iloc[0:N]['A'])})
</code></pre> <p>where <code>N</code> is the number of rows you would like to print. That is it. If the list is not a string list, you need to cast the ints to strings.</p> <p>Also, there are a lot of parameters that control how <code>read_excel</code> loads the file; you can go through the documentation to set suitable parameters.</p> <p>Hope this would be helpful to you.</p>
1
2016-09-28T13:51:02Z
[ "python", "excel", "pandas", "xlsxwriter" ]
Save FITS table: The keyword description with its value is too long
39,749,173
<p>I get an error when trying to save astropy Tables retrieved using astroquery to FITS files. In some case it complains that the description of some keywords is too long. The <code>writeto()</code> function seems to have a <code>output_verify</code> argument to avoid this kind of problem, but I cannot find how to pass it to the <code>write()</code> function? Does it exist? Here is my code:</p> <pre><code>import astropy.units as u from astroquery.vizier import Vizier import astropy.coordinates as coord from astropy.table import Table akari_query=Vizier(columns=["S09","S18","e_S09","e_S18","q_S09","q_S18"],catalog=["II/297/irc"]) result=akari_query.query_region(coord.SkyCoord(ra=200.0, dec=10.0,unit=(u.deg, u.deg),frame='icrs'), width=[2.0*u.deg,2.0*u.deg],return_type='votable') table=Table(result[0], masked=True) table.write('test.fits') </code></pre> <p>It returns a long error message ending with: </p> <pre><code>ValueError: The keyword description with its value is too long </code></pre>
1
2016-09-28T13:36:47Z
39,759,965
<p>The problem is that <code>table.meta['description']</code> is longer than allowed for the header of the fits file you're trying to save. You can simply shorten it to anything below 80 characters and try to write <code>test.fits</code> again:</p> <pre><code>table.meta['description'] = u'AKARI/IRC All-Sky Survey Point Source Catalogue v. 1.0'
table.write('test.fits')
</code></pre>
2
2016-09-29T01:26:46Z
[ "python", "astropy" ]
Python : why doesn't a.pop() modify the list (custom linked-list class)
39,749,242
<p>I'm trying to define a class Hlist of linked lists as below:</p> <pre><code>class Hlist: def __init__(self, value, hlnext): self.value = value self.hlnext = hlnext def pop(self): res = self.value if not(self.hlnext == None): self = self.hlnext return res def __repr__(self): return (str(self.value) + ' - ' + str(self.hlnext)) </code></pre> <p>When I test the pop() method on </p> <pre><code>a = Hlist(1, Hlist(2, None)) </code></pre> <p>Python returns 1 - 2 - None, ok. Then</p> <pre><code>a.pop() </code></pre> <p>returns 1, fine. However : </p> <pre><code>print(a) </code></pre> <p>returns 1 - 2 - None. The list hasn't been modified despite</p> <pre><code>self = self.hlnext </code></pre> <p>Is self the pointer a or is it another pointer pointing to the same address as a? And why does the following code return the expected answer for pop():</p> <pre><code>class Hlist: def __init__(self, value, hlnext): self.value = value self.hlnext = hlnext def pop(self): res = self.value if not(self.hlnext == None): self.value = self.hlnext.value self.next = self.hlnext.hlnext return res def __repr__(self): return (str(self.value) + ' - ' + str(self.hlnext)) </code></pre> <p>is it due to the setattr function used by python?</p> <p>Actually i was trying to get the equivalent in Python of the following class in Java : </p> <pre><code>class Hlist{ int value; Hlist hlnext; Hlist(int value,Hlist hlnext){ value = value; hlnext = hlnext; } } </code></pre> <p>and add a pop() method to it. In a pop() method, will Java's <code>this</code> work the same way Python's <code>self</code> does (local variable) or will it be binded to the pointer a I called pop()? In that case, will <code>this = this.hlnext</code> change the <code>a</code> pointer or not?</p>
2
2016-09-28T13:39:24Z
39,749,397
<p>Because <code>self</code> isn't working the way you think it is. <code>self</code> is just another local variable: assigning to it inside <code>pop()</code> won't change the object into another thing. See <a href="http://stackoverflow.com/questions/28531939/python-assignment-to-self-in-constructor-does-not-make-object-the-same">this question</a> for more details.</p>
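A small self-contained demonstration of that point — rebinding a local name (`self` included) never affects the caller's variable (the class here is a stripped-down stand-in for the question's `Hlist`):

```python
class Node:
    def __init__(self, value, hlnext=None):
        self.value = value
        self.hlnext = hlnext

    def pop(self):
        res = self.value
        # rebinding the *local* name 'self' — the caller's reference
        # still points at the original node afterwards
        if self.hlnext is not None:
            self = self.hlnext
        return res

a = Node(1, Node(2))
popped = a.pop()
```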
1
2016-09-28T13:46:25Z
[ "python", "pointers", "linked-list" ]
Python : why doesn't a.pop() modify the list (custom linked-list class)
39,749,242
<p>I'm trying to define a class Hlist of linked lists as below:</p> <pre><code>class Hlist: def __init__(self, value, hlnext): self.value = value self.hlnext = hlnext def pop(self): res = self.value if not(self.hlnext == None): self = self.hlnext return res def __repr__(self): return (str(self.value) + ' - ' + str(self.hlnext)) </code></pre> <p>When I test the pop() method on </p> <pre><code>a = Hlist(1, Hlist(2, None)) </code></pre> <p>Python returns 1 - 2 - None, ok. Then</p> <pre><code>a.pop() </code></pre> <p>returns 1, fine. However : </p> <pre><code>print(a) </code></pre> <p>returns 1 - 2 - None. The list hasn't been modified despite</p> <pre><code>self = self.hlnext </code></pre> <p>Is self the pointer a or is it another pointer pointing to the same address as a? And why does the following code return the expected answer for pop():</p> <pre><code>class Hlist: def __init__(self, value, hlnext): self.value = value self.hlnext = hlnext def pop(self): res = self.value if not(self.hlnext == None): self.value = self.hlnext.value self.next = self.hlnext.hlnext return res def __repr__(self): return (str(self.value) + ' - ' + str(self.hlnext)) </code></pre> <p>is it due to the setattr function used by python?</p> <p>Actually i was trying to get the equivalent in Python of the following class in Java : </p> <pre><code>class Hlist{ int value; Hlist hlnext; Hlist(int value,Hlist hlnext){ value = value; hlnext = hlnext; } } </code></pre> <p>and add a pop() method to it. In a pop() method, will Java's <code>this</code> work the same way Python's <code>self</code> does (local variable) or will it be binded to the pointer a I called pop()? In that case, will <code>this = this.hlnext</code> change the <code>a</code> pointer or not?</p>
2
2016-09-28T13:39:24Z
39,749,420
<p>It's mostly because you can't change self directly.<br> If you think about pointers, you can't change the pointer's address, except if you use a pointer to this pointer. Here, if you consider self as a pointer, when you assign another value to self, you don't really change the self pointer.</p> <p><a href="http://stackoverflow.com/a/1216361/5813357">See this answer</a></p> <p>The second code "works" (not in all cases) because you aren't changing self itself, but the references it points to. Your instance is then updated to remove its old value and take on the next value.</p>
0
2016-09-28T13:47:19Z
[ "python", "pointers", "linked-list" ]
Python QT. Form from another Form
39,749,268
<p>First i am beginning in programming :)</p> <p>I create in QT Designer Form ( MainForm) and add function in button to open a new form. I do this step from <a href="http://stackoverflow.com/questions/27567208/how-do-i-open-sub-window-after-i-click-on-button-on-main-screen-in-pyqt4">How do I open sub window after I click on button on main screen in PyQt4</a> ( First anwser) but when i compile I got:</p> <blockquote> <p>'Ui_V1' object has no attribute 'show'</p> </blockquote> <p>Where is the problem. thanks :)</p> <p>this is a part of code in main form.py:</p> <pre><code>from V1 import Ui_V1 #V1 class and form self.pushButton_5.clicked.connect(lambda: self.openV1()) def openV1(self): window=Ui_V1() window.show() </code></pre> <p>OK i solved this by watching video on Yt :D</p> <pre><code>def openV1(self): self.V1Window=QtGui.QMainWindow() self.ui= Ui_V1() self.ui.setupUi(self.V1Window) self.V1Window.show() </code></pre> <p>and it works :)</p>
0
2016-09-28T13:40:47Z
39,749,394
<p>Have you compiled your code to Python? By default it will be a .ui file. You can use the <code>pyuic4.exe</code> tool:</p> <pre><code>c:\Python27\Lib\site-packages\PyQt4\something&gt; pyuic4.exe full/path/to/input.ui -o full/path/to/output.py
</code></pre>
0
2016-09-28T13:46:18Z
[ "python", "qt", "pyqt" ]
Python QT. Form from another Form
39,749,268
<p>First i am beginning in programming :)</p> <p>I create in QT Designer Form ( MainForm) and add function in button to open a new form. I do this step from <a href="http://stackoverflow.com/questions/27567208/how-do-i-open-sub-window-after-i-click-on-button-on-main-screen-in-pyqt4">How do I open sub window after I click on button on main screen in PyQt4</a> ( First anwser) but when i compile I got:</p> <blockquote> <p>'Ui_V1' object has no attribute 'show'</p> </blockquote> <p>Where is the problem. thanks :)</p> <p>this is a part of code in main form.py:</p> <pre><code>from V1 import Ui_V1 #V1 class and form self.pushButton_5.clicked.connect(lambda: self.openV1()) def openV1(self): window=Ui_V1() window.show() </code></pre> <p>OK i solved this by watching video on Yt :D</p> <pre><code>def openV1(self): self.V1Window=QtGui.QMainWindow() self.ui= Ui_V1() self.ui.setupUi(self.V1Window) self.V1Window.show() </code></pre> <p>and it works :)</p>
0
2016-09-28T13:40:47Z
39,767,246
<p>Look into the generated file. Usually <code>pyuic4</code> generates a class that is not a QtWidget; it is just a factory with a <code>setupUI</code> method.</p> <p>I usually do this:</p> <pre><code>class MyForm(QtGui.QWidget, Ui_V1):
    def __init__(self, *args):
        QtGui.QWidget.__init__(self, *args)
        self.setupUi(self)
</code></pre> <p>Then you can use your <code>MyForm</code> as a regular widget.</p>
0
2016-09-29T10:02:11Z
[ "python", "qt", "pyqt" ]
Simpel Python Calculator - Syntax Error - Indentation errors
39,749,277
<p>I recently started learning Python. I have never coded before, but it seemed like a challenge. The first thing I have made is this calculator. However, I can't seem to get it to work. </p> <pre><code>while True: print("Typ 'plus' to add two numbers") print("Typ 'min' to subtract two numbers") print("Typ 'multiplication' multiply two numbers") print("Typ 'division' to divide two numbers") print("Typ 'end' to abort the program") user_input = input(": ") if user_input == "end" break elif user_input == "plus": num1 = float(input("Give a number: ")) num2 = float(input("Give another number: ")) result = str(num1 + num2) print("The anwser is: " + str(result)) elif user_input == "min": num1 = float(input("Give a number: ")) num2 = float(input("Give another number: ")) result = str(num1 - num2) print("The anwser is: " + str(result)) elif user_input == "maal": num1 = float(input("Give a number:")) num2 = float(input("Give another number: ")) result = str(num1 * num2) print("The anwser is: " + str(result)) elif user_input == "deel": num1 = float(input("Give a number: ")) num2 = float(input("Give another number: ")) result = str(num1 + num2) print("The anwser is: " + str(result)) else: print ("I don't understand!") </code></pre> <p>I know it will probably something stupid, I am still very much learning. Just trying to pickup a new skill instead of bothering my friends who do know how to code. </p>
-4
2016-09-28T13:41:05Z
39,750,237
<p>You missed a colon after this if statement:</p> <pre><code>if user_input == "end"
    break
</code></pre> <p>It should be:</p> <pre><code>if user_input == "end":
    break
</code></pre>
0
2016-09-28T14:20:18Z
[ "python" ]
Auto convert strings and float columns using genfromtxt from numpy/python
39,749,447
<p>I have several different data files that I need to import using genfromtxt. Each data file has different content. For example, file 1 may have all floats, file 2 may have all strings, and file 3 may have a combination of floats and strings etc. Also the number of columns vary from file to file, and since there are hundreds of files, I don't know which columns are floats and strings in each file. However, all the entries in each column are the same data type. </p> <p>Is there a way to set up a converter for genfromtxt that will detect the type of data in each column and convert it to the right data type?</p> <p>Thanks!</p>
0
2016-09-28T13:48:06Z
39,750,409
<p>If you're able to use the Pandas library, <code>pandas.read_csv</code> is <em>much</em> more generally useful than <code>np.genfromtxt</code>, and will automatically handle the kind of type inference mentioned in your question. The result will be a dataframe, but you can get out a numpy array in one of several ways. e.g.</p> <pre><code>import pandas as pd

data = pd.read_csv(filename)

# get a numpy array; this will be an object array if data has mixed/incompatible types
arr = data.values

# get a record array; this is how numpy handles mixed types in a single array
arr = data.to_records()
</code></pre> <p><code>pd.read_csv</code> has dozens of options for various forms of text inputs; see more in the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">pandas.read_csv documentation</a>.</p>
1
2016-09-28T14:26:54Z
[ "python", "numpy", "data-type-conversion", "genfromtxt" ]
Action bar in anchorlayout - KIVY. python
39,749,526
<p>My goal is to make add these 3 buttons to an actionbar on the top of the screen as seen in the screenshot, please help.</p> <p>.kv file</p> <pre><code>AnchorLayout: anchor_x: 'center' anchor_y: 'top' BoxLayout: padding: 30 size_hint: 1, .1 orientation: 'horizontal' Button: text: 'Back' on_press: root.manager.current = 'mainpage' TextInput: text: 'Search' Button: text: 'Fav' </code></pre> <p><strong>Fav and back button will later be changed to icons, and search will be a dropdown</strong></p> <p>.py file</p> <pre><code>class MainPage(Screen): pass </code></pre> <p><a href="http://i.stack.imgur.com/gEVmD.png" rel="nofollow">enter image description here</a></p>
0
2016-09-28T13:51:15Z
39,758,458
<p>I suggest you use a relative layout inside your box layout:</p> <pre><code>BoxLayout:
    padding: 30
    size_hint: 1, .1
    orientation: 'horizontal'
    RelativeLayout:
        Button:
            text: 'Back'
            pos_hint: {'center_x': 0.1, 'center_y': 0.5}
            on_press: root.manager.current = 'mainpage'
        TextInput:
            pos_hint: {'center_x': 0.5, 'center_y': 0.5}
            text: 'Search'
        Button:
            pos_hint: {'center_x': 0.9, 'center_y': 0.5}
            text: 'Fav'
</code></pre>
0
2016-09-28T22:08:59Z
[ "python", "android-actionbar", "kivy" ]
python: generate numeric column to column with string
39,749,557
<p>I have a dataframe and it's a part of a column</p> <pre><code>category Search Search Онлайн-магазин Онлайн-магазин Форумы и отзывы Онлайн-магазин Форумы и отзывы Агрегатор Информационный ресурс Онлайн-магазин Телеком Онлайн-магазин </code></pre> <p>I need to create column with category, converted to numeric. I mean</p> <pre><code>category numeric_category Search 1 Search 1 Онлайн-магазин 2 Онлайн-магазин 2 Форумы и отзывы 3 Онлайн-магазин 2 Форумы и отзывы 3 Агрегатор 4 Информационный ресурс 5 Онлайн-магазин 2 Телеком 6 Онлайн-магазин 2 </code></pre> <p>How can I do that? using <code>numpy</code>?</p>
1
2016-09-28T13:52:28Z
39,749,617
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow"><code>factorize</code></a>:</p> <pre><code>df['numeric_category'] = pd.factorize(df.category)[0] + 1
</code></pre> <p>Then you can also convert it to <code>category</code> to save memory:</p> <pre><code>df['numeric_category'] = pd.Categorical(pd.factorize(df.category)[0] + 1)
</code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'category':['a','s','a']})
print (df)
  category
0        a
1        s
2        a

df['numeric_category'] = pd.Categorical(pd.factorize(df.category)[0] + 1)
print (df)
  category numeric_category
0        a                1
1        s                2
2        a                1
</code></pre>
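For intuition, the core of `pd.factorize` — codes assigned in order of first appearance — can be sketched in plain Python (this is an illustration, not pandas' actual implementation):

```python
def factorize(values):
    """Return (codes, uniques): each value's first-appearance index."""
    seen = {}
    uniques = []
    codes = []
    for v in values:
        if v not in seen:
            seen[v] = len(uniques)
            uniques.append(v)
        codes.append(seen[v])
    return codes, uniques
```

Adding 1 to each code reproduces the 1-based numbering asked for in the question.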
1
2016-09-28T13:54:40Z
[ "python", "pandas" ]
python: generate numeric column to column with string
39,749,557
<p>I have a dataframe and it's a part of a column</p> <pre><code>category Search Search Онлайн-магазин Онлайн-магазин Форумы и отзывы Онлайн-магазин Форумы и отзывы Агрегатор Информационный ресурс Онлайн-магазин Телеком Онлайн-магазин </code></pre> <p>I need to create column with category, converted to numeric. I mean</p> <pre><code>category numeric_category Search 1 Search 1 Онлайн-магазин 2 Онлайн-магазин 2 Форумы и отзывы 3 Онлайн-магазин 2 Форумы и отзывы 3 Агрегатор 4 Информационный ресурс 5 Онлайн-магазин 2 Телеком 6 Онлайн-магазин 2 </code></pre> <p>How can I do that? using <code>numpy</code>?</p>
1
2016-09-28T13:52:28Z
39,749,906
<pre><code>category_map = {}  # first occurrence of each category gets the next number
for item in df.category:
    if item not in category_map:
        category_map[item] = len(category_map) + 1

print "category\t" + "numeric_category"
for item in df.category:
    print "%s\t%s" % (item, category_map[item])
</code></pre>
1
2016-09-28T14:06:25Z
[ "python", "pandas" ]
How to get a list of useless features using sklearn?
39,749,592
<p>I have a dataset to build a classificator:</p> <pre><code>dataset = pd.read_csv(sys.argv[1], decimal=",",delimiter=";", encoding='cp1251') X=dataset.ix[:, dataset.columns != 'class'] Y=dataset['class'] </code></pre> <p>I want to select important features only, so I do:</p> <pre><code>clf=svm.SVC(probability=True, gamma=0.017, C=5, coef0=0.00001, kernel='linear', class_weight='balanced') model = SelectFromModel(clf, prefit=True) X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.5, random_state=5) y_pred=clf.fit(X_train, Y_train).predict(X_test) X_new = model.transform(X) </code></pre> <p>So X_new has a shape 3000x72 while X had 3000x130. I would like to get a list of the features which are and are not in X_new. How can I do it?</p> <p>X was a dataframe with a header, but X_new is a list of lists with feature values without any name, so I can't merge it as I would do in pandas. Thank you for any help!</p>
0
2016-09-28T13:53:44Z
39,750,096
<p><code>clf.coef_</code> gives you the feature weights (available after <code>fit()</code>, and only for a linear kernel). Sort the features by the absolute value of their weights and you see which are not very useful.</p>
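A hedged sketch of that ranking step, using a hand-made weight array in place of `clf.coef_[0]` so it runs standalone (with a real model the weights come from the fitted classifier, and the feature names from the DataFrame columns):

```python
# stand-in for clf.coef_[0] after fit(); feature names are illustrative
coef = [0.05, -1.2, 0.0, 0.7]
feature_names = ["f0", "f1", "f2", "f3"]

# rank features from least to most influential by absolute weight
ranked = sorted(zip(feature_names, coef), key=lambda pair: abs(pair[1]))
least_useful = [name for name, weight in ranked[:2]]
```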
0
2016-09-28T14:15:01Z
[ "python", "pandas", "scikit-learn", "feature-selection", "sklearn-pandas" ]
How to get a list of useless features using sklearn?
39,749,592
<p>I have a dataset to build a classificator:</p> <pre><code>dataset = pd.read_csv(sys.argv[1], decimal=",",delimiter=";", encoding='cp1251') X=dataset.ix[:, dataset.columns != 'class'] Y=dataset['class'] </code></pre> <p>I want to select important features only, so I do:</p> <pre><code>clf=svm.SVC(probability=True, gamma=0.017, C=5, coef0=0.00001, kernel='linear', class_weight='balanced') model = SelectFromModel(clf, prefit=True) X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.5, random_state=5) y_pred=clf.fit(X_train, Y_train).predict(X_test) X_new = model.transform(X) </code></pre> <p>So X_new has a shape 3000x72 while X had 3000x130. I would like to get a list of the features which are and are not in X_new. How can I do it?</p> <p>X was a dataframe with a header, but X_new is a list of lists with feature values without any name, so I can't merge it as I would do in pandas. Thank you for any help!</p>
0
2016-09-28T13:53:44Z
39,759,099
<p>You might also want to take a look at <a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow">Feature Selection</a>. It describes some techniques and tools to do this more systematically.</p>
1
2016-09-28T23:23:06Z
[ "python", "pandas", "scikit-learn", "feature-selection", "sklearn-pandas" ]
python table view output
39,749,594
<p>I have some thing like this in output.txt file</p> <pre><code>Service1:Aborted Service2:failed Service3:failed Service4:Aborted Service5:failed </code></pre> <p>output in 2nd file(output2.txt) :</p> <pre><code> Service1 Service2 Servive3 Service4 Service5 Aborted failed failed Aborted failed </code></pre> <p>Would like to get output in table format as above</p> <p>Code I am trying:</p> <pre><code> file=open('output.txt','r') target=open('output2.txt','w') states=[] for line in file.readlines(): parts=line.strip().split(':') states.append(parts[1].strip()) target.write(','.join(states)) target.close() </code></pre>
0
2016-09-28T13:53:50Z
39,749,880
<p>Very quickly:</p> <pre><code>headers = []
statuses = []
for line in file:
    line = line.strip()
    parts = line.split(":")
    headers.append(parts[0])
    statuses.append(parts[1])

format_str = '{:&gt;15}' * (len(headers))
print(format_str.format(*headers))
print(format_str.format(*statuses))
</code></pre>
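Putting the answer's pieces together into a self-contained sketch (the input lines mirror the question's output.txt, so no file handling is needed here):

```python
lines = ["Service1:Aborted", "Service2:failed", "Service3:failed"]

headers = []
statuses = []
for line in lines:
    name, state = line.strip().split(":")
    headers.append(name)
    statuses.append(state)

# one right-aligned 15-character column per service
format_str = '{:>15}' * len(headers)
row1 = format_str.format(*headers)
row2 = format_str.format(*statuses)
```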
2
2016-09-28T14:05:18Z
[ "python", "python-2.7", "python-3.x" ]
python table view output
39,749,594
<p>I have some thing like this in output.txt file</p> <pre><code>Service1:Aborted Service2:failed Service3:failed Service4:Aborted Service5:failed </code></pre> <p>output in 2nd file(output2.txt) :</p> <pre><code> Service1 Service2 Servive3 Service4 Service5 Aborted failed failed Aborted failed </code></pre> <p>Would like to get output in table format as above</p> <p>Code I am trying:</p> <pre><code> file=open('output.txt','r') target=open('output2.txt','w') states=[] for line in file.readlines(): parts=line.strip().split(':') states.append(parts[1].strip()) target.write(','.join(states)) target.close() </code></pre>
0
2016-09-28T13:53:50Z
39,749,946
<p>You're on your own for figuring out how to get your input data into a numpy array, and what exactly the reshape function does, but this should get you about 75% of the way to the finish line:</p> <pre><code>import numpy as np
import pandas as pd

A_in = [[1,2,3],[4,5,6]]
B = pd.DataFrame(np.array(A_in).reshape(3,2))
B.to_csv('output2.txt', sep='\t')
</code></pre>
0
2016-09-28T14:07:41Z
[ "python", "python-2.7", "python-3.x" ]
How to groupby and assign an array to a column in python-pandas?
39,749,653
<p>Given a data frame <code>df</code> like that:</p> <pre><code>a b 2 nan 3 nan 3 nan 4 nan 4 nan 4 nan 5 nan 5 nan 5 nan 5 nan ... </code></pre> <p>A critical rule is that each number <code>n</code> in <code>a</code> repeat <code>n-1</code> rows. And my expected output is:</p> <pre><code>a b 2 1 3 1 3 2 4 1 4 2 4 3 5 1 5 2 5 3 5 4 ... </code></pre> <p>Thus the number <code>m</code> in <code>b</code> is a list from <code>1</code> to <code>n-1</code>. I tried it in this way:</p> <pre><code>df.groupby('a').apply(lambda x: np.asarray(range(x['a'].unique()[0]))) </code></pre> <p>But the result is a list in one row, which is not what I want.</p> <p>Could you please tell me how to implement it? Thanks in advance!</p>
1
2016-09-28T13:55:48Z
39,749,852
<pre><code># make a column that is 0 on the first occurrence of a number in a and 1 after
df['is_duplicated'] = df.duplicated(['a']).astype(int)

# group by values of a and get the cumulative sum of duplicates
# add one since the first duplicate has a value of 0
df['b'] = df[['a', 'is_duplicated']].groupby(['a']).cumsum() + 1
</code></pre>
1
2016-09-28T14:04:13Z
[ "python", "pandas", "numpy", "dataframe" ]
How to groupby and assign an array to a column in python-pandas?
39,749,653
<p>Given a data frame <code>df</code> like this:</p> <pre><code>a b 2 nan 3 nan 3 nan 4 nan 4 nan 4 nan 5 nan 5 nan 5 nan 5 nan ... </code></pre> <p>A critical rule is that each number <code>n</code> in <code>a</code> repeats for <code>n-1</code> rows. My expected output is:</p> <pre><code>a b 2 1 3 1 3 2 4 1 4 2 4 3 5 1 5 2 5 3 5 4 ... </code></pre> <p>Thus the numbers <code>m</code> in <code>b</code> form a list from <code>1</code> to <code>n-1</code>. I tried it this way:</p> <pre><code>df.groupby('a').apply(lambda x: np.asarray(range(x['a'].unique()[0]))) </code></pre> <p>But the result is a list in one row, which is not what I want.</p> <p>Could you please tell me how to implement it? Thanks in advance!</p>
1
2016-09-28T13:55:48Z
39,749,910
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow"><code>cumcount</code></a>:</p> <pre><code>df['b'] = df.groupby('a').cumcount() + 1 print (df) a b 0 2 1 1 3 1 2 3 2 3 4 1 4 4 2 5 4 3 6 5 1 7 5 2 8 5 3 9 5 4 </code></pre>
3
2016-09-28T14:06:34Z
[ "python", "pandas", "numpy", "dataframe" ]
Connect MSSQL 2014 using python 3.4
39,749,682
<p>I'm trying to connect MSSQL 2014 database using python (3.4).</p> <p>I installed the pypyodbc package.</p> <pre><code> import pypyodbc connection = pypyodbc.connect('DRIVER ={SQL Server};' 'SERVER = myserver;' 'UID=user;' 'PWD=password;' 'DATABASE = dbo.db') </code></pre> <p>When I tried this, I'm getting an error saying Data source name not found and no default driver specified.</p>
0
2016-09-28T13:56:57Z
39,751,443
<p>Check which drivers are installed (in Powershell)</p> <pre><code>Get-ItemProperty 'hklm:\SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers' </code></pre> <p>Also, Remove the spaces from 'SERVER = myServer' to make 'SERVER=myServer'. For me this works</p> <pre><code>conn = pypyodbc.connect('Driver={SQL Server Native Client 11.0};Server=myhost;') </code></pre>
1
2016-09-28T15:13:44Z
[ "python", "sql-server", "python-3.x" ]
Passing a list of column names to a query in psycopg2
39,749,684
<p>I have the following query:</p> <pre><code>select distinct * from my_table where %s is NULL; </code></pre> <p>I would like to be able to pass in more than one column name to this query, but I do not know how many columns I will want to be checking for nulls every time.</p> <p>How can I use query parameter techniques like the above to make both the below queries from the same statement?:</p> <pre><code>select distinct * from my_table where "col1" is NULL or "col2" is NULL; select distinct * from my_table where "col1" is NULL or "col2" is NULL or "col3" is NULL; </code></pre> <p>I would appreciate an answer that includes being able to get the below as well, but it is not necessary in this case:</p> <pre><code>select distinct * from my_table where "col1" is NULL; </code></pre> <p>(I am using Amazon Redshift, in case that removes an postgresql-related possibilities)</p>
0
2016-09-28T13:56:59Z
39,794,963
<pre><code>query = ''' select distinct * from my_table where %s or "col1" is NULL and %s or "col2" is NULL and %s or "col3" is NULL and %s ''' </code></pre> <p>Pass <code>True</code> to the conditions which you want to be evaluated. Say you want rows where any of <code>col1</code> and <code>col2</code> are <code>null</code>:</p> <pre><code>cursor.execute(query, (False, True, True, False)) </code></pre> <p>If you want all rows regardless:</p> <pre><code>cursor.execute(query, (True, False, False, False)) </code></pre> <p>BTW double quoting identifiers is a bad idea.</p>
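<p>Following that advice, if the set of columns genuinely varies from call to call, another sketch is to build the identifier list in Python before handing the statement to <code>cursor.execute</code>; column names are identifiers, not values, so they cannot be bound through <code>%s</code> placeholders and must come from a trusted whitelist:</p>

```python
def null_filter_query(columns):
    """Build the statement for a runtime-chosen list of column names.

    Column names are identifiers, not values, so they cannot be bound
    with %s placeholders; only pass names from a trusted whitelist.
    """
    conditions = ' or '.join('%s is NULL' % col for col in columns)
    return 'select distinct * from my_table where %s' % conditions
```

<p>For identifiers that come from untrusted input, psycopg2 2.7+ also provides the <code>psycopg2.sql</code> module (<code>sql.SQL</code>, <code>sql.Identifier</code>) for composing them safely.</p>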
1
2016-09-30T15:24:04Z
[ "python", "postgresql", "amazon-redshift", "psycopg2" ]
Convert string within matrix row to matrix with rows and columns, and numbers in string to integers
39,749,745
<p>I saved a sheet from excel into csv format. And after importing the data in python with the code:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: print(row) </code></pre> <p>Output:</p> <pre><code>['1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16'] ['0.001;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.002;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.003;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283'] </code></pre> <p>Basically it has 16 columns and 1399 rows. I realized each row consists of one long string, I then replaced all the ';' with ',' which will hopefully help to convert the column of strings to a matrix which I can manipulate the data with. Now I end up with a matrix or rather a list of one row with all the strings. 
This is what I have so far in terms of code and output respectively:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: #print(row) for i in range(len(row)): new_row = (row[i].replace(';', ',')) basis.append(new_row) print(basis) &gt;&gt; ['1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16', '0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.004,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.005,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.006,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.007,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.008,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', 
'0.009,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.01,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', ... , '1.396,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.397,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.398,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0'] </code></pre> <p>But this is the form I want it in a matrix equal to:</p> <pre><code>[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],[0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363],[0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363], [0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283]] </code></pre> <p>In order to do manipulation on the data</p> <p>I would greatly appreciate any help. Thank you in advance.</p>
1
2016-09-28T13:59:35Z
39,749,813
<p>You can append a list without having to replace <code>;</code> with <code>,</code> </p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f) basis = [] for row in reader: basis.append(list(map(float,row[0].split(';')))) </code></pre> <p>Each row comes through as a one-element list holding the whole line as a single string. You need to split that string on the semicolon, then <code>map</code> to <code>float</code>. Use <code>list(map())</code> for Python 3.</p>
-1
2016-09-28T14:02:25Z
[ "python", "python-2.7", "python-3.x", "csv", "matrix" ]
Convert string within matrix row to matrix with rows and columns, and numbers in string to integers
39,749,745
<p>I saved a sheet from excel into csv format. And after importing the data in python with the code:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: print(row) </code></pre> <p>Output:</p> <pre><code>['1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16'] ['0.001;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.002;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.003;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283'] </code></pre> <p>Basically it has 16 columns and 1399 rows. I realized each row consists of one long string, I then replaced all the ';' with ',' which will hopefully help to convert the column of strings to a matrix which I can manipulate the data with. Now I end up with a matrix or rather a list of one row with all the strings. 
This is what I have so far in terms of code and output respectively:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: #print(row) for i in range(len(row)): new_row = (row[i].replace(';', ',')) basis.append(new_row) print(basis) &gt;&gt; ['1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16', '0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.004,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.005,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.006,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.007,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.008,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', 
'0.009,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.01,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', ... , '1.396,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.397,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.398,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0'] </code></pre> <p>But this is the form I want it in a matrix equal to:</p> <pre><code>[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],[0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363],[0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363], [0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283]] </code></pre> <p>In order to do manipulation on the data</p> <p>I would greatly appreciate any help. Thank you in advance.</p>
1
2016-09-28T13:59:35Z
39,749,882
<p>What you need to do is</p> <pre><code>for row in reader: basis.append(row[0].split(';')) </code></pre> <p>What you are doing wrong is that replacing ';' with a comma ',' does not make a list from the string; it just replaces a symbol inside it. You should split the string into its elements instead. Note that <code>csv.reader</code> yields each row as a one-element list here, hence <code>row[0]</code>.</p>
-2
2016-09-28T14:05:21Z
[ "python", "python-2.7", "python-3.x", "csv", "matrix" ]
Convert string within matrix row to matrix with rows and columns, and numbers in string to integers
39,749,745
<p>I saved a sheet from excel into csv format. And after importing the data in python with the code:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: print(row) </code></pre> <p>Output:</p> <pre><code>['1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16'] ['0.001;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.002;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363;11.00127363'] ['0.003;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283;10.94525283'] </code></pre> <p>Basically it has 16 columns and 1399 rows. I realized each row consists of one long string, I then replaced all the ';' with ',' which will hopefully help to convert the column of strings to a matrix which I can manipulate the data with. Now I end up with a matrix or rather a list of one row with all the strings. 
This is what I have so far in terms of code and output respectively:</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel') basis = [] for row in reader: #print(row) for i in range(len(row)): new_row = (row[i].replace(';', ',')) basis.append(new_row) print(basis) &gt;&gt; ['1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16', '0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363', '0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.004,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.005,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.006,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.007,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.008,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', 
'0.009,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', '0.01,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283', ... , '1.396,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.397,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0', '1.398,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0'] </code></pre> <p>But this is the form I want it in a matrix equal to:</p> <pre><code>[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],[0.001,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363],[0.002,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363,11.00127363], [0.003,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283,10.94525283]] </code></pre> <p>In order to do manipulation on the data</p> <p>I would greatly appreciate any help. Thank you in advance.</p>
1
2016-09-28T13:59:35Z
39,750,009
<p>change separator to a semicolon (default is comma, which does not work here since your input data has semicolons in it) (I think you could omit the <code>dialect='excel'</code> part)</p> <pre><code>import csv with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel',delimiter=";") basis = list(reader) </code></pre> <p>now <code>basis</code> is a list of rows containing the data as text.</p> <p>But you want them as integers / float. So you have to do some more postprocessing: list comprehension converting to integer if it is an integer (negative integers work too), else converting to float (of course another test needs to be added if there are alphanumerical rows, but not the case here)</p> <pre><code>import csv,re intre = re.compile(r"-?\d+$") with open('45deg_marbles.csv', 'r') as f: reader = csv.reader(f,dialect='excel',delimiter=";") basis = [] for row in reader: basis.append([int(x) if intre.match(x) else float(x) for x in row]) print(basis) </code></pre> <p>result</p> <pre><code>[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], [0.001, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363], [0.002, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363, 11.00127363], [0.003, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283, 10.94525283]] </code></pre> <p>Note that there's a variant if integers are guaranteed to be positive. Saves a regex evaluation:</p> <pre><code>basis.append([int(x) if x.isdigit() else float(x) for x in row]) </code></pre>
2
2016-09-28T14:10:50Z
[ "python", "python-2.7", "python-3.x", "csv", "matrix" ]
Optimize Python: Large arrays, memory problems
39,749,807
<p>I'm having a speed problem running a python / numpy code. I don't know how to make it faster, maybe someone else does?</p> <p>Assume there is a surface with two triangulations, one fine (..._fine) with M points, one coarse with N points. Also, there's data on the coarse mesh at every point (N floats). I'm trying to do the following:</p> <p>For every point on the fine mesh, find the k closest points on the coarse mesh and get the mean value. Short: interpolate data from coarse to fine. </p> <p>My code right now goes like that. With large data (in my case M = 2e6, N = 1e4) the code runs about 25 minutes, I guess due to the explicit for loop not going into numpy. Any ideas how to solve that one with smart indexing? M x N arrays blow the RAM..</p> <pre><code>import numpy as np p_fine.shape =&gt; m x 3 p.shape =&gt; n x 3 data_fine = np.empty((m,)) for i, ps in enumerate(p_fine): data_fine[i] = np.mean(data_coarse[np.argsort(np.linalg.norm(ps-p,axis=1))[:k]]) </code></pre> <p>Cheers!</p>
3
2016-09-28T14:02:02Z
39,751,039
<p><strong>Approach #1</strong></p> <p>We are working with large sized datasets and memory is an issue, so I will try to optimize the computations within the loop. Now, we can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> to replace <code>np.linalg.norm</code> part and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argpartition.html" rel="nofollow"><code>np.argpartition</code></a> in place of actual sorting with <code>np.argsort</code>, like so -</p> <pre><code>out = np.empty((m,)) for i, ps in enumerate(p_fine): subs = ps-p sq_dists = np.einsum('ij,ij-&gt;i',subs,subs) out[i] = data_coarse[np.argpartition(sq_dists,k)[:k]].sum() out = out/k </code></pre> <p><strong>Approach #2</strong></p> <p>Now, as another approach we can also use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow"><code>Scipy's cdist</code></a> for a fully vectorized solution, like so -</p> <pre><code>from scipy.spatial.distance import cdist out = data_coarse[np.argpartition(cdist(p_fine,p),k,axis=1)[:,:k]].mean(1) </code></pre> <p>But, since we are memory bound here, we can perform these operations in chunks. Basically, we would get chunks of rows from that tall array <code>p_fine</code> that has millions of rows and use <code>cdist</code> and thus at each iteration get chunks of output elements instead of just one scalar. 
With this, we would cut the loop count by the length of that chunk.</p> <p>So, finally we would have an implementation like so -</p> <pre><code>out = np.empty((m,)) L = 10 # Length of chunk (to be used as a param) num_iter = m//L for j in range(num_iter): p_fine_slice = p_fine[L*j:L*j+L] out[L*j:L*j+L] = data_coarse[np.argpartition(cdist\ (p_fine_slice,p),k,axis=1)[:,:k]].mean(1) </code></pre> <p><strong>Runtime test</strong></p> <p>Setup -</p> <pre><code># Setup inputs m,n = 20000,100 p_fine = np.random.rand(m,3) p = np.random.rand(n,3) data_coarse = np.random.rand(n) k = 5 def original_approach(p,p_fine,m,n,k): data_fine = np.empty((m,)) for i, ps in enumerate(p_fine): data_fine[i] = np.mean(data_coarse[np.argsort(np.linalg.norm\ (ps-p,axis=1))[:k]]) return data_fine def proposed_approach(p,p_fine,m,n,k): out = np.empty((m,)) for i, ps in enumerate(p_fine): subs = ps-p sq_dists = np.einsum('ij,ij-&gt;i',subs,subs) out[i] = data_coarse[np.argpartition(sq_dists,k)[:k]].sum() return out/k def proposed_approach_v2(p,p_fine,m,n,k,len_per_iter): L = len_per_iter out = np.empty((m,)) num_iter = m//L for j in range(num_iter): p_fine_slice = p_fine[L*j:L*j+L] out[L*j:L*j+L] = data_coarse[np.argpartition(cdist\ (p_fine_slice,p),k,axis=1)[:,:k]].sum(1) return out/k </code></pre> <p>Timings -</p> <pre><code>In [134]: %timeit original_approach(p,p_fine,m,n,k) 1 loops, best of 3: 1.1 s per loop In [135]: %timeit proposed_approach(p,p_fine,m,n,k) 1 loops, best of 3: 539 ms per loop In [136]: %timeit proposed_approach_v2(p,p_fine,m,n,k,len_per_iter=100) 10 loops, best of 3: 63.2 ms per loop In [137]: %timeit proposed_approach_v2(p,p_fine,m,n,k,len_per_iter=1000) 10 loops, best of 3: 53.1 ms per loop In [138]: %timeit proposed_approach_v2(p,p_fine,m,n,k,len_per_iter=2000) 10 loops, best of 3: 63.8 ms per loop </code></pre> <p>So, there's about <strong><code>2x</code></strong> improvement with the first proposed approach and <strong><code>20x</code></strong> over the original 
approach with the second one at the sweet spot with the <code>len_per_iter</code> param set at <code>1000</code>. Hopefully this will bring down your 25 minutes runtime to little over a minute. Not bad I guess!</p>
1
2016-09-28T14:54:57Z
[ "python", "arrays", "performance", "numpy", "large-data" ]
Optimize Python: Large arrays, memory problems
39,749,807
<p>I'm having a speed problem running a python / numpy code. I don't know how to make it faster, maybe someone else does?</p> <p>Assume there is a surface with two triangulations, one fine (..._fine) with M points, one coarse with N points. Also, there's data on the coarse mesh at every point (N floats). I'm trying to do the following:</p> <p>For every point on the fine mesh, find the k closest points on the coarse mesh and get the mean value. Short: interpolate data from coarse to fine. </p> <p>My code right now goes like that. With large data (in my case M = 2e6, N = 1e4) the code runs about 25 minutes, I guess due to the explicit for loop not going into numpy. Any ideas how to solve that one with smart indexing? M x N arrays blow the RAM..</p> <pre><code>import numpy as np p_fine.shape =&gt; m x 3 p.shape =&gt; n x 3 data_fine = np.empty((m,)) for i, ps in enumerate(p_fine): data_fine[i] = np.mean(data_coarse[np.argsort(np.linalg.norm(ps-p,axis=1))[:k]]) </code></pre> <p>Cheers!</p>
3
2016-09-28T14:02:02Z
39,767,338
<p>First of all thanks for the detailed help. </p> <p>First, Divakar, your solutions gave substantial speed-up. With my data, the code ran for just below 2 minutes, depending a bit on the chunk size. </p> <p>I also tried my way around sklearn and ended up with </p> <pre><code>def sklearnSearch_v3(p, p_fine, k): neigh = NearestNeighbors(k) neigh.fit(p) return data_coarse[neigh.kneighbors(p_fine)[1]].mean(axis=1) </code></pre> <p>which ended up being quite fast. For my data sizes, the following setup</p> <pre><code>import numpy as np from sklearn.neighbors import NearestNeighbors m,n = 2000000,20000 p_fine = np.random.rand(m,3) p = np.random.rand(n,3) data_coarse = np.random.rand(n) k = 3 </code></pre> <p>yields</p> <pre><code>%timeit sklearnSearch_v3(p, p_fine, k) 1 loop, best of 3: 7.46 s per loop </code></pre>
2
2016-09-29T10:06:19Z
[ "python", "arrays", "performance", "numpy", "large-data" ]
os.system does not look the programs that are in my path
39,749,896
<p>I have a problem when calling programs inside a python script. The programs that are giving me problems are those that I installed manually on my computer and then added to the path in my .bashrc file. The programs that were installed using 'sudo apt-get install some_program' don't give me any problems.</p> <p>The programs were added to my .bashrc file in the following way:</p> <pre><code>#path to fastqc export PATH=$PATH:/home/bioinfor3/bin/FastQC/ #path to fastx-toolkits export PATH=$PATH:/home/bioinfor3/bin/fastx/ </code></pre> <p>Inside PyCharm, I am using the os module to call those programs in the below manner:</p> <pre><code>os.system('fastqc seq.fastq') </code></pre> <p>And I get this error</p> <pre><code>sh: 1: fastqc: not found </code></pre> <p>I guess it has something to do with the sh path or something, but I am not able to make it work.</p> <p>EDIT:</p> <p>If PyCharm is launched from the terminal, it inherits the bashrc file with my personal paths and it works</p>
0
2016-09-28T14:05:51Z
39,750,138
<p>Presumably this is happening because you have modified your login environment to adjust your PATH, but this updated path isn't seen by the shell that's running PyCharm, or PyCharm appears to be nullifying it somehow.</p> <p>You should first of all verify that</p> <pre><code>os.system('/home/bioinfor3/bin/FastQC/fastqc seq.fastq') </code></pre> <p>operates as you would expect (no reason why it shouldn't, but worth checking).</p> <p>It seems from <a href="http://stackoverflow.com/questions/21581197/pycharm-set-the-correct-environment-variable-path">this answer</a> that by default PyCharm doesn't use <code>bash</code> for its shell but <code>tcsh</code>. Therefore it isn't seeing the setting you are enforcing on <code>bash</code>.</p>
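<p>A workaround that avoids launching PyCharm from a terminal, sketched here with the directory names taken from the question's <code>.bashrc</code>, is to extend <code>PATH</code> from inside the script itself before calling <code>os.system</code>, since child shells inherit the parent process's environment:</p>

```python
import os

# Prepend the directories from the question's .bashrc to this process's
# PATH; shells started by os.system inherit os.environ, so a bare
# program name like 'fastqc' can then be resolved by the child shell.
extra_dirs = ['/home/bioinfor3/bin/FastQC', '/home/bioinfor3/bin/fastx']
os.environ['PATH'] = os.pathsep.join(extra_dirs + [os.environ.get('PATH', '')])

# os.system('fastqc seq.fastq')  # would now find fastqc on the new PATH
```
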
1
2016-09-28T14:16:21Z
[ "python", "bash", "operating-system", "pycharm" ]
Error in query many-to-many relationship in sqlalchemy using flask
39,749,903
<p>I have two models, CoreDrive and GamificationTechnique, and a generated table for a many-to-many relationship. I'm trying to do a simple query on the table cores_techiques, but I always get the error:</p> <p>AttributeError: 'Table' object has no attribute 'query'. </p> <p>I'm using python3, flask and sqlalchemy. I'm a beginner; I'm trying to learn flask with this application.</p> <p>Models</p> <pre><code>#many_CoreDrive_has_many_GamificationTechnique cores_techiques = db.Table('cores_techiques', db.Column('core_drive_id', db.Integer, db.ForeignKey('core_drive.id')), db.Column('gamification_technique_id', db.Integer, db.ForeignKey('gamification_technique.id'))) class CoreDrive(db.Model): id = db.Column(db.Integer(), primary_key=True) name_core_drive = db.Column(db.String(80), unique=True) description_core_drive = db.Column(db.String(255)) techniques = db.relationship('GamificationTechnique', secondary=cores_techiques, backref=db.backref('core_drives', lazy='dynamic')) class GamificationTechnique(db.Model): id = db.Column(db.Integer(), primary_key=True) name_gamification_technique = db.Column(db.String(80), unique=True) description_gamification_technique = db.Column(db.String(255)) number_gamification_technique = db.Column(db.Integer()) attributtes = db.relationship('Atributte', secondary=techniques_atributtes, backref=db.backref('gamification_techniques', lazy='dynamic')) </code></pre> <p>Routes</p> <pre><code>@app.route('/profile') def profile(): my_core_techique = cores_techiques.query.all() my_user = User.query.first() my_technique = GamificationTechnique.query.all() return render_template('profile.html',my_user=my_user,my_technique=my_technique, my_core_techique=my_core_techique) </code></pre>
0
2016-09-28T14:06:16Z
39,906,323
<p>You can list all the <code>CoreDrive</code> objects, and each object has a list of its <code>GamificationTechnique</code> objects through the relationship:</p> <pre><code>cores = CoreDrive.query.all() for core in cores: print(core.techniques) </code></pre> <p>With this in mind you can pass only the <code>cores</code> list to the page.</p>
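<p>As for the original error itself: Flask-SQLAlchemy attaches <code>.query</code> to <code>Model</code> classes only, so a plain association <code>Table</code> such as <code>cores_techiques</code> has to be queried through the session instead, e.g. <code>db.session.query(cores_techiques).all()</code>. A self-contained sketch of the same behaviour with plain SQLAlchemy and an in-memory SQLite database (table and column names taken from the question):</p>

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine
from sqlalchemy.orm import Session

metadata = MetaData()
# Association table with the same shape as the question's cores_techiques
cores_techiques = Table(
    'cores_techiques', metadata,
    Column('core_drive_id', Integer),
    Column('gamification_technique_id', Integer),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

session = Session(engine)
session.execute(cores_techiques.insert().values(
    core_drive_id=1, gamification_technique_id=2))

# cores_techiques.query.all() would raise AttributeError; this works:
rows = session.query(cores_techiques).all()
```
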
0
2016-10-06T22:04:29Z
[ "python", "flask", "sqlalchemy", "flask-sqlalchemy" ]
Python Gtk3 Window Icon from SVG / scalable icon from stock theme
39,750,041
<p>How to set a high-quality icon on a Gtk.Window? My theme has SVG icons, but I always get a pixel size of 24 px. So what is wrong with my code? I would be very happy for some help. Thanks</p> <p><a href="http://i.stack.imgur.com/7L9p4.png" rel="nofollow"><img src="http://i.stack.imgur.com/7L9p4.png" alt="enter image description here"></a></p> <p><strong>Max size is always 24</strong>:</p> <pre><code>#!/usr/bin/python3 import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk window = Gtk.Window() window.connect("delete-event", Gtk.main_quit) icon_name = "applications-mail" icon_theme = Gtk.IconTheme.get_default() found_icons = set() for res in range(0, 512, 2): icon = icon_theme.lookup_icon(icon_name, res, 0) found_icons.add(icon.get_filename()) print("\n".join(found_icons)) sizes = Gtk.IconTheme.get_default().get_icon_sizes(icon_name) max_size = max(sizes) print("max size = {} ({})".format(max_size, sizes)) pixbuf = icon_theme.load_icon(icon_name, max_size, 0) window.set_default_icon_list([pixbuf]) window.show_all() Gtk.main() </code></pre> <p><strong>Response</strong>:</p> <pre><code>/usr/share/icons/Mint-X/categories/22/applications-mail.png /usr/share/icons/Mint-X/categories/48/applications-mail.png /usr/share/icons/Mint-X/categories/96/applications-mail.svg /usr/share/icons/Mint-X/categories/32/applications-mail.png /usr/share/icons/Mint-X/categories/16/applications-mail.png /usr/share/icons/Mint-X/categories/24/applications-mail.png max size = 24 ([22, 16, 24]) </code></pre>
2
2016-09-28T14:12:38Z
39,819,514
<p>I could be mistaken, here are some ideas:</p> <ul> <li>I don't have <em>any</em> icon on my machine called "applications-mail". I did find many "internet-mail" icons though. </li> </ul> <blockquote> <p>/usr/share/icons/Mint-X/categories/96/applications-mail.svg</p> </blockquote> <ul> <li>Also, I believe svg icons should be in a <code>scalable</code> directory. Possibly the svg you did find wasn't recognized as such. Eg. I have:</li> </ul> <p><code>/usr/share/icons/Tango/scalable/apps/internet-mail.svg</code></p> <ul> <li>I modified your program slightly:</li> </ul> <p>Listing:</p> <pre><code>#!/usr/bin/env python3 from gi.repository import Gtk class MainWindow(Gtk.Window): def __init__(self): super(MainWindow, self).__init__() self.connect("delete-event", Gtk.main_quit) #icon_name = "applications-mail" icon_name = "internet-mail" icon_theme = Gtk.IconTheme.get_default() found_icons = set() for res in range(0, 512, 2): icon = icon_theme.lookup_icon(icon_name, res, 0) #print(icon) if icon != None: found_icons.add(icon.get_filename()) if len(found_icons) &gt; 0: print("\n".join(found_icons)) sizes = Gtk.IconTheme.get_default().get_icon_sizes(icon_name) max_size = max(sizes) print("max size = {} ({})".format(max_size, sizes)) pixbuf = icon_theme.load_icon(icon_name, max_size, 0) self.set_default_icon_list([pixbuf]) self.show_all() def run(self): Gtk.main() def main(args): mainwdw = MainWindow() mainwdw.run() return 0 if __name__ == '__main__': import sys sys.exit(main(sys.argv)) </code></pre> <p>and I get:</p> <pre><code>/usr/share/icons/Tango/24x24/apps/internet-mail.png /usr/share/icons/Tango/scalable/apps/internet-mail.svg /usr/share/icons/Tango/16x16/apps/internet-mail.png /usr/share/icons/Tango/32x32/apps/internet-mail.png /usr/share/icons/Tango/22x22/apps/internet-mail.png max size = 32 ([22, 16, 24, 32, -1, 0]) </code></pre> <p>where -1 indicates a scalable icon. (so, don't use <code>max()</code> - look for -1. 
This is from the <a href="https://developer.gnome.org/gtk3/stable/GtkIconTheme.html#gtk-icon-theme-get-icon-sizes" rel="nofollow">developers' site</a>: </p> <blockquote> <p>[gtk_icon_theme_get_icon_sizes] Returns an array of integers describing the sizes at which the icon is available without scaling. A size of -1 means that the icon is available in a scalable format.</p> </blockquote> <p><strong>edit</strong>: More ideas:</p> <ul> <li><p>Gtk uses gdk-pixbuf.loaders modules to render images such as icons. You might not have the svg driver working correctly. I also seem to recall that the librsvg library is necessary.</p></li> <li><p>Even if another icon is actually working, you might be seeing a copy from the icon cache, and your icon renderer might still be failing.</p></li> <li><p>There might even be a problem with the icon cache itself. Try <a href="https://developer.gnome.org/gtk3/stable/gtk-update-icon-cache.html" rel="nofollow">rebuilding the cache</a>.</p></li> </ul>
1
2016-10-02T17:10:38Z
[ "python", "icons", "gtk3" ]
DRY selection of a subset of a dictionary
39,750,165
<p>Suppose I have a dictionary <code>config</code> which, among others, has the keys <code>username</code> and <code>password</code>. I'd like to create a new dictionary consisting only of the <code>username</code> and <code>password</code> key-value pairs from <code>config</code>. One way to do this is:</p> <pre><code>new_dictionary = {'username': config['username'], 'password': config['password']} </code></pre> <p>This seems a bit verbose to me, as it contains repetitions of the words <code>username</code> and <code>password</code>. Is there a more succinct way to do this?</p>
1
2016-09-28T14:17:12Z
39,750,258
<p>You can just create a <code>tuple</code> or <code>list</code> that stores the names of the fields you want copied, and iterate through it:</p> <pre><code>new_dictionary = {k: config[k] for k in ('username', 'password')} </code></pre>
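As a hedged aside (not part of the original answer): if some of the requested keys might be missing from <code>config</code>, a conditional comprehension avoids a <code>KeyError</code>. The <code>config</code> contents below are made up for illustration.

```python
# Build a sub-dictionary from selected keys, skipping any that are absent.
config = {'username': 'alice', 'password': 's3cret', 'host': 'db.example.com'}
wanted = ('username', 'password', 'token')  # 'token' is not in config

new_dictionary = {k: config[k] for k in wanted if k in config}
print(new_dictionary)  # {'username': 'alice', 'password': 's3cret'}
```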
3
2016-09-28T14:21:19Z
[ "python", "dictionary" ]
DRY selection of a subset of a dictionary
39,750,165
<p>Suppose I have a dictionary <code>config</code> which, among others, has the keys <code>username</code> and <code>password</code>. I'd like to create a new dictionary consisting only of the <code>username</code> and <code>password</code> key-value pairs from <code>config</code>. One way to do this is:</p> <pre><code>new_dictionary = {'username': config['username'], 'password': config['password']} </code></pre> <p>This seems a bit verbose to me, as it contains repetitions of the words <code>username</code> and <code>password</code>. Is there a more succinct way to do this?</p>
1
2016-09-28T14:17:12Z
39,750,880
<p>You could store the username/password pair in a sub-dictionary (called credential):</p> <pre><code>config = { 'param1': 'value1', 'credential': { 'username': 'myname', 'password': 'pa22w0rd' } } new_dictionary = config['credential'] </code></pre> <p>You could also create a credential class and pass that object around instead of a dictionary.</p>
0
2016-09-28T14:47:11Z
[ "python", "dictionary" ]
Python open url with urllib than get back the changed url of opened webpage
39,750,189
<p>I want to send some requests to Google maps. I open the url that is changed based on the request. And I want to get back the changed url. An example:</p> <pre><code>import urllib, urllib2 my_address = '1600 Amphitheatre Parkway Mountain View, CA 94043' data = urllib.urlencode({'output':'csv', 'q':my_address}) req = urllib2.Request('https://www.google.co.uk/maps/place?' + data) res_0 = urllib2.urlopen(req) print res_0.geturl() </code></pre> <p>url to open (<code>res_0.geturl()</code>):</p> <p><code>'https://www.google.co.uk/maps/search/1600+Amphitheatre+Parkway+Mountain+View,+CA+94043/data=!4m2!2m1!4b1?dg=dbrw&amp;newdg=1'</code></p> <p>And I want to get back the changed url, that is:</p> <p><code>'https://www.google.co.uk/maps/place/1600+Amphitheatre+Pkwy,+Mountain+View,+CA+94043,+USA/@37.4223371,-122.0866079,17z/data=!3m1!4b1!4m5!3m4!1s0x808fba027820e5d9:0x60a90600ff6e7e6e!8m2!3d37.4223329!4d-122.0844192'</code></p> <p>I opened the <code>res_0</code> url in the browser manually and I get the above changed url.</p> <p>How can I do that?</p> <p>Thank you!</p>
0
2016-09-28T14:17:58Z
39,750,425
<p>you can use the .geturl() method inside urllib2</p> <p>Example:</p> <pre><code>print res_0.geturl() </code></pre>
0
2016-09-28T14:28:08Z
[ "python", "url", "urllib2" ]
Intelligently Determine Image Size in Python
39,750,205
<p>I would like to use Python to intelligently choose dimensions (in inches) for pictures. Were I to do this manually, I would open the picture in some photo editing software and resize the image until it seemed 'good'. </p> <p>Other answers I've seen on SO have pre-specified dimensions, but in this case that's what I want the program to determine. I'm terribly new to image processing, so I'm hesitant to be more specific. However, I <em>think</em> I want to choose a size such that DPI >= 300. All of these images will end up printed, which is why I'm focused on DPI as a metric. A horrible brute force way might be:</p> <pre><code>from PIL import Image import numpy as np import bisect min_size = 1 max_size = 10 sizes = np.linspace(min_size, max_size) for size in sizes: im = Image.open(im_name) im.thumbnail((size, size), Image.ANTIALIAS) #assumes square images dpi_vals.append(im.info['dpi'])) best_size = sizes[bisect.bisect(dpi_vals, 300)] </code></pre> <p>This strikes me as very inefficient and I'm hoping someone with more experience has a smarter way.</p>
0
2016-09-28T14:18:42Z
39,751,695
<p>Forget dpi. It is a very confusing term. Unless you scan/print (i.e. use a physical medium), it doesn't mean much.</p> <p>A good <strong>digital</strong> dimension has the following :</p> <ul> <li>The biggest factor of 2 possible. This is why most standard digital resolutions (especially in video) have sizes that are multiples of 2^3 or 2^4. Digital filters and transformations take advantage of this for optimization.</li> <li>A simple factor between the source and target image (again, mostly 2). Cropping an image by a factor of 8 will almost always produce better results than cropping by 7. Most of the time it will also be faster.</li> <li>A kinda standard aspect ratio: square, 4:3, 16:9.</li> <li>Depending on the situation, preservation of the aspect ratio of the source image.</li> </ul> <p>This is why a lot of sites provide you a resizable rectangle/square box for some photos (like the facebook profile picture), which will ensure that the aspect ratio is the same as that of the processed image, that the minimum/maximum dimensions are met, etc...</p>
1
2016-09-28T15:25:10Z
[ "python", "image-processing", "python-imaging-library" ]
Intelligently Determine Image Size in Python
39,750,205
<p>I would like to use Python to intelligently choose dimensions (in inches) for pictures. Were I to do this manually, I would open the picture in some photo editing software and resize the image until it seemed 'good'. </p> <p>Other answers I've seen on SO have pre-specified dimensions, but in this case that's what I want the program to determine. I'm terribly new to image processing, so I'm hesitant to be more specific. However, I <em>think</em> I want to choose a size such that DPI >= 300. All of these images will end up printed, which is why I'm focused on DPI as a metric. A horrible brute force way might be:</p> <pre><code>from PIL import Image import numpy as np import bisect min_size = 1 max_size = 10 sizes = np.linspace(min_size, max_size) for size in sizes: im = Image.open(im_name) im.thumbnail((size, size), Image.ANTIALIAS) #assumes square images dpi_vals.append(im.info['dpi'])) best_size = sizes[bisect.bisect(dpi_vals, 300)] </code></pre> <p>This strikes me as very inefficient and I'm hoping someone with more experience has a smarter way.</p>
0
2016-09-28T14:18:42Z
39,756,780
<p>I guess I asked a bit prematurely. This hinges more on understanding resolution vs. DPI than it does on programming. Once I learned more, the answer became quite easy. Digital images have resolutions, while printed images have DPI. Choosing the size of a digital image such that, if printed, that image will have 300 DPI is as simple as: pixels / 300. In the specific case of <code>PIL</code>, the following code works fine:</p> <pre><code>from PIL import Image im = Image.open(im_name) print_ready_width = im.size[0] / 300 </code></pre> <p>Then the image will have 300 DPI if printed at <code>print_ready_width</code> inches wide.</p>
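To make the pixels / 300 rule concrete without needing PIL installed, here is a minimal sketch (the pixel dimensions below are made up; <code>float()</code> is used so the division also works under Python 2):

```python
# Given a pixel size, the largest print size (in inches) that still
# yields >= 300 DPI is simply pixels / dpi along each axis.
def print_size_at_dpi(width_px, height_px, dpi=300):
    return (width_px / float(dpi), height_px / float(dpi))

# A hypothetical 3000x2400 pixel image prints at up to 10x8 inches at 300 DPI.
print(print_size_at_dpi(3000, 2400))  # (10.0, 8.0)
```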
0
2016-09-28T20:09:25Z
[ "python", "image-processing", "python-imaging-library" ]
Responsive IPython notebook with running progress bar of dask/distributed
39,750,223
<p>I am running a cluster with <code>dask.distributed</code>. Currently I submit tasks to the cluster with Jupyter notebook that I use as a GUI.</p> <p>The respective notebook cell contains the following code.</p> <pre><code>%pylab inline %load_ext autoreload %autoreload 2 from distributed import progress sys.path.append('/path/to/my/python/modules/on/NAS') import jobs jobid = jobs.add_new_job(...) r = jobs.start_job(jobid) progress(r) </code></pre> <p><code>jobs</code> is the name of my python module. <code>jobs.add_new_job</code> returns a string with job identifier. <code>jobs.start_job</code> returns a list of <code>distributed.client.Future</code>s. Final result of the job is a report with some numbers and some plots in PDF.</p> <p>Now I'd like to implement a job queue with some indication on what is being processed now and what is waiting.</p> <p>My goal is to implement the following scenario. </p> <p>Member of my team prepares some data for a new job, then opens Jupyter notebook in his browser, enters job parameters in the cell in the call to <code>add_new_job</code>, then executes this cell, then closes that page and waits till computations are done. He could also leave the page open and observe the progress.</p> <p>Up to now I've found that if I submit a single job to a cluster by running the cell once and waiting till everything is done, everything works like a charm.</p> <p>If I try to submit one more job by simply editing the cell code and running it again, then the cluster stops calculating first submitted job. 
My explanation of this is that <code>r</code> is deleted and its destructor sends cancellation requests to the cluster.</p> <p>If I try to submit a new job by making a copy of the notebook, a new empty page opens in the browser, and then it takes a very long time until the notebook loads and allows a user to do anything.</p> <p>Also, the progress bar displayed by <code>progress</code> very often disappears by itself.</p> <p>I've already read about JupyterHub, but currently it seems to me that using it is like shooting sparrows with heavy artillery, and there should be a simpler way.</p>
0
2016-09-28T14:19:42Z
39,807,890
<blockquote> <p>My explanation of this is that r is deleted and its destructor sends cancellation requests to the cluster</p> </blockquote> <p>This is correct. A simple way to avoid this would be to add <code>r</code> to some result set that is not deleted every time you run your cell</p> <pre><code>-- cell 1 -- results = [] -- cell 2 -- import jobs jobid = jobs.add_new_job(...) r = jobs.start_job(jobid) results.append(r) progress(r) </code></pre>
1
2016-10-01T14:42:07Z
[ "python", "distributed", "jupyter-notebook", "dask" ]
Finding probability of word in string
39,750,449
<p>If I have a longer string, how do I calculate the probability of finding a word of a given length within that string?</p> <p>So far I have this:</p> <pre class="lang-python prettyprint-override"><code>import math from scipy import stats alphabet = list("ATCG") # This is the alphabet I am working with string = "AATCAGTAGATCG" # Here are two example strings string2 = "TGTAAACCTTGGTTTATCG" word = "ATCG" # This is my word n_substrings = len(string) - len(word) # The number of possible substrings n_substrings2 = len(string2) - len(word) prob_match = math.pow(len(alphabet), - len(word)) # The probability of randomly choosing the word from the alphabet # Get the probability from a binomial test? print stats.binom_test(1, n_substrings, p=prob_match) # (Number of successes, number of trials, prob of success) print stats.binom_test(1, n_substrings2, p=prob_match) &gt;&gt;&gt;0.0346119111615 0.0570183821615 </code></pre> <p>Is this a suitable way to do this or am I missing something? </p>
0
2016-09-28T14:29:29Z
39,750,806
<p>I think you should do:</p> <pre><code>n_substrings = len(string) - len(word) + 1 </code></pre> <p>In a 5-letter string, a 4-letter substring has 2 possible positions: ATCGA can hold ATCG and TCGA.</p>
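A quick sanity check of the off-by-one fix above (the example strings come from the question; the helper name is my own):

```python
# Count the sliding-window positions of length len(word) inside string.
def n_positions(string, word):
    return len(string) - len(word) + 1

s = "ATCGA"  # 5 letters
w = "ATCG"   # a 4-letter window fits at positions 0 and 1
assert n_positions(s, w) == 2
windows = [s[i:i + len(w)] for i in range(n_positions(s, w))]
print(windows)  # ['ATCG', 'TCGA']
```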
1
2016-09-28T14:44:26Z
[ "python", "statistics" ]
Remove an even/odd number from an odd/even Python list
39,750,474
<p>I am trying to better understand list comprehension in Python. I completed an online challenge on codewars with a rather inelegant solution, given below.</p> <p>The challenge was:</p> <ol> <li>Given a list of even numbers and one odd, return the odd</li> <li>Given a list of odd numbers and one even, return the even</li> </ol> <p>My (inelegant) solution to this was:</p> <pre><code>def find_outlier(integers): o = [] e = [] for i in integers: if i % 2 == 0: e.append(i) else: o.append(i) # use sums to return int type if len(o) == 1: return sum(o) else: return sum(e) </code></pre> <p>Which works fine, but seems to be pretty brute force. Am I wrong in thinking that starting (most) functions with placeholder lists like <code>o</code> and <code>e</code> is pretty "noob-like"?</p> <p>I would love to better understand why this solution works for the odd list, but fails on the even list, in an effort to better understand list comprehension:</p> <pre><code>def find_outlier(integers): if [x for x in integers if x % 2 == 0]: return [x for x in integers if x % 2 == 0] elif [x for x in integers if x % 2 != 0]: return [x for x in integers if x % 2 != 0] else: print "wtf!" o = [1,3,4,5] e = [2,4,6,7] In[1]: find_outlier(o) Out[1]: [4] In[2]: find_outlier(e) Out[2]: [2, 4, 6] </code></pre> <p>Where <code>Out[2]</code> should be returning <code>7</code>.</p> <p>Thanks in advance for any insights.</p>
1
2016-09-28T14:30:56Z
39,750,601
<p>Your attempt fails because the first <code>if</code> is <em>always going to be true</em>. You'll always have a list with at least 1 element; either the odd one out is odd and you tested a list with all even numbers, otherwise you have a list with the <em>one</em> even number in it. Only an <em>empty</em> list would be false.</p> <p>List comprehensions are not the best solution here, no. Try to solve it instead with the minimum number of elements checked (the first 2 elements, if they differ in type get a 3rd to break the tie, otherwise iterate until you find the one that doesn't fit in the tail):</p> <pre><code>def find_outlier(iterable): it = iter(iterable) first = next(it) second = next(it) parity = first % 2 if second % 2 != parity: # odd one out is first or second, 3rd will tell which return first if next(it) % 2 != parity else second else: # the odd one out is later on; iterate until we find the exception return next(i for i in it if i % 2 != parity) </code></pre> <p>The above will throw a <code>StopIteration</code> exception if there are either fewer than 3 elements in the input iterable, or there is no exception to be found. It also won't handle the case where there is more than one exception (e.g. 2 even followed by 2 odd; the first odd value would be returned in that case).</p>
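To make the early-exit approach concrete, here it is restated as a self-contained, runnable sketch with example inputs (the sample lists are made up; like the original, it assumes at least 3 elements and exactly one outlier):

```python
def find_outlier(iterable):
    it = iter(iterable)
    first = next(it)
    second = next(it)
    parity = first % 2
    if second % 2 != parity:
        # The outlier is first or second; a 3rd element breaks the tie.
        return first if next(it) % 2 != parity else second
    # The outlier is later on; scan until the parity differs.
    return next(i for i in it if i % 2 != parity)

print(find_outlier([1, 3, 4, 5]))  # 4
print(find_outlier([2, 4, 6, 7]))  # 7
```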
6
2016-09-28T14:36:38Z
[ "python", "list", "list-comprehension" ]
Remove an even/odd number from an odd/even Python list
39,750,474
<p>I am trying to better understand list comprehension in Python. I completed an online challenge on codewars with a rather inelegant solution, given below.</p> <p>The challenge was:</p> <ol> <li>Given a list of even numbers and one odd, return the odd</li> <li>Given a list of odd numbers and one even, return the even</li> </ol> <p>My (inelegant) solution to this was:</p> <pre><code>def find_outlier(integers): o = [] e = [] for i in integers: if i % 2 == 0: e.append(i) else: o.append(i) # use sums to return int type if len(o) == 1: return sum(o) else: return sum(e) </code></pre> <p>Which works fine, but seems to be pretty brute force. Am I wrong in thinking that starting (most) functions with placeholder lists like <code>o</code> and <code>e</code> is pretty "noob-like"?</p> <p>I would love to better understand why this solution works for the odd list, but fails on the even list, in an effort to better understand list comprehension:</p> <pre><code>def find_outlier(integers): if [x for x in integers if x % 2 == 0]: return [x for x in integers if x % 2 == 0] elif [x for x in integers if x % 2 != 0]: return [x for x in integers if x % 2 != 0] else: print "wtf!" o = [1,3,4,5] e = [2,4,6,7] In[1]: find_outlier(o) Out[1]: [4] In[2]: find_outlier(e) Out[2]: [2, 4, 6] </code></pre> <p>Where <code>Out[2]</code> should be returning <code>7</code>.</p> <p>Thanks in advance for any insights.</p>
1
2016-09-28T14:30:56Z
39,750,803
<p>The most efficient answer is going to get a little ugly.</p> <pre><code>def f(in_list): g = (i for i in in_list) first = next(g) second = next(g) #The problem as described doesn't make sense for fewer than 3 elements. Let them handle the exceptions. if first%2 == second%2: a = first%2 for el in g: if el%2 != a: return el else: third = next(g) if third%2 == first%2: return second else: return first raise ValueError('Got a bad list, all evens or all odds') </code></pre>
-1
2016-09-28T14:44:24Z
[ "python", "list", "list-comprehension" ]
Remove an even/odd number from an odd/even Python list
39,750,474
<p>I am trying to better understand list comprehension in Python. I completed an online challenge on codewars with a rather inelegant solution, given below.</p> <p>The challenge was:</p> <ol> <li>Given a list of even numbers and one odd, return the odd</li> <li>Given a list of odd numbers and one even, return the even</li> </ol> <p>My (inelegant) solution to this was:</p> <pre><code>def find_outlier(integers): o = [] e = [] for i in integers: if i % 2 == 0: e.append(i) else: o.append(i) # use sums to return int type if len(o) == 1: return sum(o) else: return sum(e) </code></pre> <p>Which works fine, but seems to be pretty brute force. Am I wrong in thinking that starting (most) functions with placeholder lists like <code>o</code> and <code>e</code> is pretty "noob-like"?</p> <p>I would love to better understand why this solution works for the odd list, but fails on the even list, in an effort to better understand list comprehension:</p> <pre><code>def find_outlier(integers): if [x for x in integers if x % 2 == 0]: return [x for x in integers if x % 2 == 0] elif [x for x in integers if x % 2 != 0]: return [x for x in integers if x % 2 != 0] else: print "wtf!" o = [1,3,4,5] e = [2,4,6,7] In[1]: find_outlier(o) Out[1]: [4] In[2]: find_outlier(e) Out[2]: [2, 4, 6] </code></pre> <p>Where <code>Out[2]</code> should be returning <code>7</code>.</p> <p>Thanks in advance for any insights.</p>
1
2016-09-28T14:30:56Z
39,750,869
<p>What are the shortcomings of this response (which is at the top of the solution stack on <a href="https://www.codewars.com/kata/find-the-parity-outlier/solutions/python" rel="nofollow">this particular challenge</a>)?</p> <pre><code>def find_outlier(int): odds = [x for x in int if x%2!=0] evens= [x for x in int if x%2==0] return odds[0] if len(odds)&lt;len(evens) else evens[0] </code></pre>
0
2016-09-28T14:46:54Z
[ "python", "list", "list-comprehension" ]
com0com and pyserial virtual serial ports. Can this be used to simulate unplugging a serial usb device?
39,750,547
<p>I am using com0com and pyserial. I open one port, write to it using pyserial and read from it in the YAT emulator. This works great. Can this setup be used to simulate unplugging of a usb device that is emulating a serial port? I want to recreate a UnauthorizedAccessException that is rarely thrown by real devices in our application software upon unplug. After writing to CNCA0 using pyserial and reading from CNCB0 using YAT successfuly, I tried to close CNCB0 from pyserial and of course it wouldnt let me because port is already acquired by YAT (Access is denied). Any ideas about how to simulate an unplug action of a real device? </p>
0
2016-09-28T14:34:07Z
39,760,718
<p>Unauthorized access is simple to replicate. Open up the port with another application, perhaps in another YAT tab. When you try to connect with a different application, you should get an unauthorized access error. However, I'm not sure if that is really the question you are asking.</p> <p>If you really want to emulate a port disconnect you should realize that you are also at the mercy of the serial driver. Different drivers will behave differently on sudden device removal. What I'm saying is that you can trick yourself into thinking you have a bullet-proof exception handling process when in reality, all bets are off. </p> <p>I did some experimenting with some of the com0com change commands and nothing I did could cause the port to "virtually disconnect". See the setupc.exe command >help for more info on what is available.</p> <p>You might also be interested in playing with the emulated noise feature. Open the command shell for com0com and execute</p> <pre><code>change &lt;YOUR_PORT_CNC&gt; EmuNoise=0.0001 </code></pre> <p>The value is a percentage chance of corruption of the data stream. Fun stuff.</p>
0
2016-09-29T03:03:02Z
[ "python", "windows", "serial-port", "pyserial", "com0com" ]
Python 3 Base64 decode messing up newline characters
39,750,575
<p>I'm trying to decode a base64 multi-line file through the standard python library, however only the first line gets decoded, and the rest gets dumped for no reason.</p> <p>Why is this?</p> <p>The file before it gets encoded (what I'm trying to achieve after decoding):</p> <blockquote> <p>dataFile.dat</p> <p>VERSION: BenWin+ Version: 3.0.12.1[CR]</p> <p>[CR][LF]</p> <p>CREATED: 01 September 2016 12:56:27 PM[CR]</p> <p>[CR][LF]</p> <p>TIME CODE: 0x907e0, 0x10004, 0x38000c, 0x242001b[CR]</p> <p>[CR][LF]</p> <p>...</p> </blockquote> <p>[CR] and [LF] are the character codes for Carriage Return (\r) and Line Feed (\n) respectively</p> <p>I base64 encode the file using base64.b64encode and want to decode it later. Here is my code snippet.</p> <pre><code>encodedData = b'VkVSU0lPTjogQmVuV2luKyBWZXJzaW9uOiAzLjAuMTIuMQo=Cg==Q1JFQVRFRDogMDEgU2VwdGVtYmVyIDIwMTYgMTI6NTY6MjcgUE0KCg==VElNRSBDT0RFOiAweDkwN2UwLCAweDEwMDA0LCAweDM4MDAwYywgMHgyNDIwMDFiCg==Cg==' data = base64.b64decode(encodedData) print(data) </code></pre> <p>Which returns</p> <blockquote> <p>b'VERSION: BenWin+ Version: 3.0.12.1\n'</p> </blockquote> <p>Thanks in advance. Using Python 3.5</p>
1
2016-09-28T14:35:18Z
39,751,232
<p>The problem appears to be that you are encoding each line separately and then joining those encoded strings together. A Base-64 encoded string may end in padding characters, and when the decoder sees those padding characters it assumes that's the end of the valid data, so any following data is ignored.</p> <p>Here's how to Base64 encode multi-line text in Python 3. First, we need to convert the Unicode text to bytes. Then we Base64 encode all those bytes in one go. To decode, we reverse the process: first Base64 decode, then decode the resulting bytes to a Unicode string. Notice that the <code>\r</code> and <code>\n</code> have been preserved properly.</p> <pre><code>import base64 s = 'VERSION: BenWin+ Version: 3.0.12.1\r\r\nCREATED: 01 September 2016 12:56:27 PM\r\r\nTIME CODE: 0x907e0, 0x10004, 0x38000c, 0x242001b\r\r\n' print(s) b = base64.b64encode(s.encode('utf8')) print(b) z = base64.b64decode(b).decode('utf8') print(repr(z)) </code></pre> <p><strong>output</strong></p> <pre><code>VERSION: BenWin+ Version: 3.0.12.1 CREATED: 01 September 2016 12:56:27 PM TIME CODE: 0x907e0, 0x10004, 0x38000c, 0x242001b b'VkVSU0lPTjogQmVuV2luKyBWZXJzaW9uOiAzLjAuMTIuMQ0NCkNSRUFURUQ6IDAxIFNlcHRlbWJlciAyMDE2IDEyOjU2OjI3IFBNDQ0KVElNRSBDT0RFOiAweDkwN2UwLCAweDEwMDA0LCAweDM4MDAwYywgMHgyNDIwMDFiDQ0K' 'VERSION: BenWin+ Version: 3.0.12.1\r\r\nCREATED: 01 September 2016 12:56:27 PM\r\r\nTIME CODE: 0x907e0, 0x10004, 0x38000c, 0x242001b\r\r\n' </code></pre>
1
2016-09-28T15:03:50Z
[ "python", "base64", "decoding" ]
Pandas groupby sum
39,750,590
<p>I have a dataframe as follows:</p> <pre><code>ref, type, amount 001, foo, 10 001, foo, 5 001, bar, 50 001, bar, 5 001, test, 100 001, test, 90 002, foo, 20 002, foo, 35 002, bar, 75 002, bar, 80 002, test, 150 002, test, 110 </code></pre> <p>This is what I'm trying to get:</p> <pre><code>ref, type, amount, foo, bar, test 001, foo, 10, 15, 55, 190 001, foo, 5, 15, 55, 190 001, bar, 50, 15, 55, 190 001, bar, 5, 15, 55, 190 001, test, 100, 15, 55, 190 001, test, 90, 15, 55, 190 002, foo, 20, 55, 155, 260 002, foo, 35, 55, 155, 260 002, bar, 75, 55, 155, 260 002, bar, 80, 55, 155, 260 002, test, 150, 55, 155, 260 002, test, 110, 55, 155, 260 </code></pre> <p>So I have this:</p> <pre><code>df.groupby('ref')['amount'].transform(sum) </code></pre> <p>But how can I filter it such that the above only applies to rows where <code>type=foo</code> or <code>bar</code> or <code>test</code>?</p>
0
2016-09-28T14:36:07Z
39,750,769
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> to original <code>DataFrame</code>:</p> <pre><code>df1 = df.groupby(['ref','type'])['amount'].sum().unstack().reset_index() print (df1) type ref bar foo test 0 001 55 15 190 1 002 155 55 260 df = pd.merge(df, df1, on='ref') print (df) ref type amount sums bar foo test 0 001 foo 10 15 55 15 190 1 001 foo 5 15 55 15 190 2 001 bar 50 55 55 15 190 3 001 bar 5 55 55 15 190 4 001 test 100 190 55 15 190 5 001 test 90 190 55 15 190 6 002 foo 20 55 155 55 260 7 002 foo 35 55 155 55 260 8 002 bar 75 155 155 55 260 9 002 bar 80 155 155 55 260 10 002 test 150 260 155 55 260 11 002 test 110 260 155 55 260 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [506]: %timeit (pd.merge(df, df.groupby(['ref','type'])['amount'].sum().unstack().reset_index(), on='ref')) 100 loops, best of 3: 3.4 ms per loop In [507]: %timeit (pd.merge(df, pd.pivot_table(df, values='amount', index=['ref'], columns=['type'], aggfunc=np.sum), left_on='ref', right_index=True)) 100 loops, best of 3: 4.99 ms per loop </code></pre>
3
2016-09-28T14:43:30Z
[ "python", "pandas", "merge", "group-by", "sum" ]
Pandas groupby sum
39,750,590
<p>I have a dataframe as follows:</p> <pre><code>ref, type, amount 001, foo, 10 001, foo, 5 001, bar, 50 001, bar, 5 001, test, 100 001, test, 90 002, foo, 20 002, foo, 35 002, bar, 75 002, bar, 80 002, test, 150 002, test, 110 </code></pre> <p>This is what I'm trying to get:</p> <pre><code>ref, type, amount, foo, bar, test 001, foo, 10, 15, 55, 190 001, foo, 5, 15, 55, 190 001, bar, 50, 15, 55, 190 001, bar, 5, 15, 55, 190 001, test, 100, 15, 55, 190 001, test, 90, 15, 55, 190 002, foo, 20, 55, 155, 260 002, foo, 35, 55, 155, 260 002, bar, 75, 55, 155, 260 002, bar, 80, 55, 155, 260 002, test, 150, 55, 155, 260 002, test, 110, 55, 155, 260 </code></pre> <p>So I have this:</p> <pre><code>df.groupby('ref')['amount'].transform(sum) </code></pre> <p>But how can I filter it such that the above only applies to rows where <code>type=foo</code> or <code>bar</code> or <code>test</code>?</p>
0
2016-09-28T14:36:07Z
39,750,886
<p>A solution using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html">pivot table</a> :</p> <pre><code>&gt;&gt;&gt; b = pd.pivot_table(df, values='amount', index=['ref'], columns=['type'], aggfunc=np.sum) &gt;&gt;&gt; b type bar foo test ref 1 55 15 190 2 155 55 260 &gt;&gt;&gt; pd.merge(df, b, left_on='ref', right_index=True) ref type amount bar foo test 0 1 foo 10 55 15 190 1 1 foo 5 55 15 190 2 1 bar 50 55 15 190 3 1 bar 5 55 15 190 4 1 test 100 55 15 190 5 1 test 90 55 15 190 6 2 foo 20 155 55 260 7 2 foo 35 155 55 260 8 2 bar 75 155 55 260 9 2 bar 80 155 55 260 10 2 test 150 155 55 260 11 2 test 110 155 55 260 </code></pre>
6
2016-09-28T14:47:18Z
[ "python", "pandas", "merge", "group-by", "sum" ]
Python Multi-threading in a recordset
39,750,873
<p>I have a database record set (approx. 1000 rows) and I am currently iterating through them, to integrate more data using extra db query for each record.</p> <p>Doing that, raises the overall process time to maybe 100 seconds.</p> <p>What I want to do is share the functionality to 2-4 processes.</p> <p>I am using Python 2.7 to have AWS Lambda compatibility.</p> <pre><code>def handler(event, context): try: records = connection.get_users() mandrill_client = open_mandrill_connection() mandrill_messages = get_mandrill_messages() mandrill_template = 'POINTS weekly-report-to-user' start_time = time.time() messages = build_messages(mandrill_messages, records) print("OVERALL: %s seconds ---" % (time.time() - start_time)) send_mandrill_message(mandrill_client, mandrill_template, messages) connection.close_database_connection() return "Process Completed" except Exception as e: print(e) </code></pre> <p>Following is the function which I want to put into threads:</p> <pre><code>def build_messages(messages, records): for record in records: record = dict(record) stream = get_user_stream(record) data = compile_loyalty_stream(stream) messages['to'].append({ 'email': record['email'], 'type': 'to' }) messages['merge_vars'].append({ 'rcpt': record['email'], 'vars': [ { 'name': 'total_points', 'content': record['total_points'] }, { 'name': 'total_week', 'content': record['week_points'] }, { 'name': 'stream_greek', 'content': data['el'] }, { 'name': 'stream_english', 'content': data['en'] } ] }) return messages </code></pre> <p>What I have tried is importing the multiprocessing library:</p> <pre><code>from multiprocessing.pool import ThreadPool </code></pre> <p>Created a pool inside the <strong>try</strong> block and mapped the function inside this pool:</p> <pre><code>pool = ThreadPool(4) messages = pool.map(build_messages_in, itertools.izip(itertools.repeat(mandrill_messages), records)) def build_messages_in(a_b): build_msg(*a_b) def build_msg(a, b): return build_messages(a, b) def 
get_user_stream(record): response = [] i = 0 for mod, mod_id, act, p, act_created in izip(record['models'], record['model_ids'], record['actions'], record['points'], record['action_creation']): information = get_reference(mod, mod_id) if information: response.append({ 'action': act, 'points': p, 'created': act_created, 'info': information }) if (act == 'invite_friend') \ or (act == 'donate') \ or (act == 'bonus_500_general') \ or (act == 'bonus_1000_general') \ or (act == 'bonus_500_cancel') \ or (act == 'bonus_1000_cancel'): response[i]['info']['date_ref'] = act_created response[i]['info']['slug'] = 'attiki' if (act == 'bonus_500_general') \ or (act == 'bonus_1000_general') \ or (act == 'bonus_500_cancel') \ or (act == 'bonus_1000_cancel'): response[i]['info']['title'] = '' i += 1 return response </code></pre> <p>Finally I removed the <strong>for</strong> loop from the build_message function.</p> <p>What I get as a results is a 'NoneType' object is not iterable.</p> <p>Is this the correct way of doing this?</p>
1
2016-09-28T14:46:58Z
39,753,853
<p>Your code seems pretty in-depth and so you cannot be sure that <code>multithreading</code> will lead to any performance gains when applied on a high level. Therefore, it's worth digging down to the point that gives you the largest latency and considering how to approach the specific bottleneck. See <a href="http://stackoverflow.com/questions/18114285/python-what-are-the-differences-between-the-threading-and-multiprocessing-modul">here</a> for greater discussion on threading limitations.</p> <p>If, for example as we discussed in comments, you can pinpoint a single task that is taking a long time, then you could try to parallelize it using <code>multiprocessing</code> instead - to leverage more of your CPU power. Here is a generic example that hopefully is simple enough to understand to mirror your Postgres queries without going into your own code base; I think that's an unfeasible amount of effort tbh.</p> <pre><code>import multiprocessing as mp import time import random import datetime as dt MAILCHIMP_RESPONSE = [x for x in range(1000)] def chunks(l, n): n = max(1, n) return [l[i:i + n] for i in range(0, len(l), n)] def db_query(): ''' Delayed response from database ''' time.sleep(0.01) return random.random() def do_queries(query_list): ''' The function that takes all your query ids and executes them sequentially for each id ''' results = [] for item in query_list: query = db_query() # Your super-quick processing of the Postgres response processing_result = query * 2 results.append([item, processing_result]) return results def single_processing(): ''' As you do now - equivalent to get_reference ''' result_of_process = do_queries(MAILCHIMP_RESPONSE) return result_of_process def multi_process(chunked_data, queue): ''' Same as single_processing, except we put our results in queue rather than returning them ''' result_of_process = do_queries(chunked_data) queue.put(result_of_process) def multiprocess_handler(): ''' Divide and conquor on our db requests. 
We split the mailchimp response into a series of chunks and fire our queries simultaneously. Thus, each concurrent process has a smaller number of queries to make ''' num_processes = 4 # depending on cores/resources size_chunk = len(MAILCHIMP_RESPONSE) / num_processes chunked_queries = chunks(MAILCHIMP_RESPONSE, size_chunk) queue = mp.Queue() # This is going to combine all the results processes = [mp.Process(target=multi_process, args=(chunked_queries[x], queue)) for x in range(num_processes)] for p in processes: p.start() divide_and_conquor_result = [] for p in processes: divide_and_conquor_result.extend(queue.get()) return divide_and_conquor_result if __name__ == '__main__': start_single = dt.datetime.now() single_process = single_processing() print "Single process took {}".format(dt.datetime.now() - start_single) print "Number of records processed = {}".format(len(single_process)) start_multi = dt.datetime.now() multi = multiprocess_handler() print "Multi process took {}".format(dt.datetime.now() - start_multi) print "Number of records processed = {}".format(len(multi)) </code></pre>
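<p>As an aside, the <code>'NoneType' object is not iterable</code> error in the question's <code>ThreadPool</code> attempt has a mundane cause: <code>build_messages_in</code> calls <code>build_msg(*a_b)</code> but never returns its result, so <code>pool.map</code> collects a list of <code>None</code>. A toy sketch of the fix (the worker body here is a stand-in for the real <code>build_messages</code>, not the original code):</p>

```python
from multiprocessing.pool import ThreadPool
import itertools

def build_messages(shared, record):
    # toy stand-in for the question's build_messages(messages, record)
    return (shared, record * 2)

def build_messages_in(a_b):
    # the missing `return` here was what produced the list of None results
    return build_messages(*a_b)

pool = ThreadPool(4)
records = [1, 2, 3]
results = pool.map(build_messages_in,
                   zip(itertools.repeat("shared"), records))
pool.close()
pool.join()
```

<p><code>pool.map</code> preserves input order, so <code>results</code> lines up with <code>records</code>.</p>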
2
2016-09-28T17:16:48Z
[ "python", "multithreading", "python-2.7", "lambda" ]
How to split text into chunks minimizing the solution?
39,750,879
<p><strong>OVERVIEW</strong> </p> <p>I got a set of possible valid chunks I can use to split a text (if possible).</p> <p>How can i split a given text using these chunks such as the result will be optimized (minimized) in terms of the number of resulting chunks?</p> <p><strong>TEST SUITE</strong></p> <pre><code>if __name__ == "__main__": import random import sys random.seed(1) # 1) Testing robustness examples = [] sys.stdout.write("Testing correctness...") N = 50 large_number = "3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481" for i in range(100): for j in range(i): choices = random.sample(range(i), j) examples.append((choices, large_number)) for (choices, large_number) in examples: get_it_done(choices, large_number) sys.stdout.write("OK") # 2) Testing correctness examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), # Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), # Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), # Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), # Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = 
get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print( "Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) # 3) TODO: Testing optimization, it should provide (if possible) # minimal cases </code></pre> <p><strong>QUESTION</strong></p> <p>How could I solve this problem on python without using a brute-force approach?</p>
8
2016-09-28T14:47:06Z
39,752,628
<p>Using dynamic programming, you can construct a list <code>(l0, l1, l2, ... ln-1)</code>, where <code>n</code> is the number of characters in your input string and <code>li</code> is the minimum number of chunks you need to arrive at character <code>i</code> of the input string. The overall structure would look as follows:</p> <pre><code>minValues := list with n infinity entries
for i from 0 to n-1
    for every choice c that is a suffix of input[0..i]
        if i - len(c) &lt; 0
            newVal = 1
        else
            newVal = minValues[i - len(c)] + 1
        end if
        if(newVal &lt; minValues[i])
            minValues[i] = newVal
            //optionally record the used chunk
        end if
    next
next
</code></pre> <p>The minimum number of chunks for your entire string is then <code>ln-1</code>. You can get the actual chunks by tracking back through the list (which requires recording the used chunks).</p> <p>Retrieving the choices that are suffixes can be sped up using a trie (of the reversed choice strings). The worst case complexity will still be <code>O(n * c * lc)</code>, where <code>n</code> is the length of the input string, <code>c</code> is the number of choices, and <code>lc</code> is the maximum length of the choices. However, this complexity will only occur for choices that are nested suffixes (e.g. <code>0</code>, <code>10</code>, <code>010</code>, <code>0010</code>...). In this case, the trie will degenerate to a list. On average, the run time should be much less. Under the assumption that the number of retrieved choices from the trie is always a small constant, it is <code>O(n * lc)</code> (actually, the <code>lc</code> factor is probably also smaller).</p> <p>Here is an example:</p> <pre><code>choices = ["0","1","10","100"]
text = "10010"

algorithm step     content of minValues
                   0       1       2        3      4
---------------------------------------------------------
initialize        (∞,      ∞,      ∞,       ∞,     ∞)
i = 0, c = "1"    (1 "1",  ∞,      ∞,       ∞,     ∞)
i = 1, c = "0"    (1 "1",  2 "0",  ∞,       ∞,     ∞)
i = 1, c = "10"   (1 "1",  1 "10", ∞,       ∞,     ∞)
i = 2, c = "0"    (1 "1",  1 "10", 2 "0",   ∞,     ∞)
i = 2, c = "100"  (1 "1",  1 "10", 1 "100", ∞,     ∞)
i = 3, c = "1"    (1 "1",  1 "10", 1 "100", 2 "1", ∞)
i = 4, c = "0"    (1 "1",  1 "10", 1 "100", 2 "1", 3 "0")
i = 4, c = "10"   (1 "1",  1 "10", 1 "100", 2 "1", 2 "10")
</code></pre> <p>Meaning: We can compose the string with 2 chunks. Tracing back gives the chunks in reverse order: "10", "100".</p>
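<p>The pseudocode above translates fairly directly into Python. This is a sketch of my own (a plain set lookup over all suffix lengths instead of the suggested trie, and the names <code>min_chunks</code>/<code>best</code>/<code>back</code> are mine); it returns <code>None</code> when the text cannot be composed from the choices:</p>

```python
def min_chunks(choices, text):
    """Dynamic program: best[i] = minimum number of chunks covering text[:i]."""
    n = len(text)
    INF = float("inf")
    choice_set = set(choices)
    max_len = max(len(c) for c in choices) if choices else 0
    best = [INF] * (n + 1)
    best[0] = 0
    back = [None] * (n + 1)  # chunk used to reach position i on a best path
    for i in range(1, n + 1):
        for length in range(1, min(i, max_len) + 1):
            chunk = text[i - length:i]
            if chunk in choice_set and best[i - length] + 1 < best[i]:
                best[i] = best[i - length] + 1
                back[i] = chunk
    if best[n] == INF:
        return None  # the text cannot be composed from the choices
    # walk backwards through `back` to recover the chunks in order
    chunks, i = [], n
    while i > 0:
        chunks.append(back[i])
        i -= len(back[i])
    return chunks[::-1]
```

<p>On the question's Example 6 this yields <code>['100', '10']</code>, i.e. the minimal two chunks.</p>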
7
2016-09-28T16:09:29Z
[ "python", "string", "algorithm", "split", "computer-science" ]
How to split text into chunks minimizing the solution?
39,750,879
<p><strong>OVERVIEW</strong> </p> <p>I got a set of possible valid chunks I can use to split a text (if possible).</p> <p>How can i split a given text using these chunks such as the result will be optimized (minimized) in terms of the number of resulting chunks?</p> <p><strong>TEST SUITE</strong></p> <pre><code>if __name__ == "__main__": import random import sys random.seed(1) # 1) Testing robustness examples = [] sys.stdout.write("Testing correctness...") N = 50 large_number = "3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481" for i in range(100): for j in range(i): choices = random.sample(range(i), j) examples.append((choices, large_number)) for (choices, large_number) in examples: get_it_done(choices, large_number) sys.stdout.write("OK") # 2) Testing correctness examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), # Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), # Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), # Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), # Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = 
get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print( "Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) # 3) TODO: Testing optimization, it should provide (if possible) # minimal cases </code></pre> <p><strong>QUESTION</strong></p> <p>How could I solve this problem on python without using a brute-force approach?</p>
8
2016-09-28T14:47:06Z
39,833,238
<pre><code>def find_shortest_path(graph, start, end, path=[]): path = path + [start] if start == end: return path if start not in graph: return None shortest = None for node in graph[start]: if node not in path: newpath = find_shortest_path(graph, node, end, path) if newpath: if not shortest or len(newpath) &lt; len(shortest): shortest = newpath return shortest def get_it_done(choices, number): mapping = {} graph = {} for choice in choices: if choice in number: _from = number.index(choice) _to = _from + len(choice) mapping.setdefault((_from, _to), choice) items = sorted(mapping.items(), key=lambda x: x[0]) for _range, value in items: _from, _to = _range graph.setdefault(_from, []).append(_to) start = 0 end = _range[1] #this is hack, works only in python 2.7 path = find_shortest_path(graph, start, end) ranges = [tuple(path[i:i+2]) for i in range(len(path) - 1)] if len(ranges) == 1: items = sorted(choices, key=len, reverse=True) number_length = len(number) result = '' for item in items: result += item if len(result) == number_length: return result return [mapping[_range] for _range in ranges] if __name__ == "__main__": examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), ## Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), ## Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), ### Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", 
"78", "90", "890", ], "1234567890" ), ## Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print( "Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) </code></pre> <p><code>get_it_done</code> function creates at first <code>mapping</code>, where keys are occurency ranges of each <code>choice</code> in a <code>number</code>. Then sorts it by first item in each key of <code>mapping</code> dict. Next step is creating <code>graph</code>. Then using <code>find_shortest_path</code> function, we can find the shortest path to build result in the most optimal way. So at the end we can use <code>mapping</code> again, to return <code>choices</code> according to their ranges. If there is one range, we have situation when all numbers consists the same two values, so rules are different. We can collect numbers direct from <code>choices</code> (sorted descending) until length of the result will be the same as length of a <code>number</code>.</p>
2
2016-10-03T13:47:49Z
[ "python", "string", "algorithm", "split", "computer-science" ]
How to split text into chunks minimizing the solution?
39,750,879
<p><strong>OVERVIEW</strong> </p> <p>I got a set of possible valid chunks I can use to split a text (if possible).</p> <p>How can i split a given text using these chunks such as the result will be optimized (minimized) in terms of the number of resulting chunks?</p> <p><strong>TEST SUITE</strong></p> <pre><code>if __name__ == "__main__": import random import sys random.seed(1) # 1) Testing robustness examples = [] sys.stdout.write("Testing correctness...") N = 50 large_number = "3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481" for i in range(100): for j in range(i): choices = random.sample(range(i), j) examples.append((choices, large_number)) for (choices, large_number) in examples: get_it_done(choices, large_number) sys.stdout.write("OK") # 2) Testing correctness examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), # Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), # Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), # Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), # Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = 
get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print( "Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) # 3) TODO: Testing optimization, it should provide (if possible) # minimal cases </code></pre> <p><strong>QUESTION</strong></p> <p>How could I solve this problem on python without using a brute-force approach?</p>
8
2016-09-28T14:47:06Z
39,892,785
<pre><code>def find_shortest_path(graph, start, end, path=[]): path = path + [start] if start == end: return path if start not in graph: return None shortest = None for node in graph[start]: if node not in path: newpath = find_shortest_path(graph, node, end, path) if newpath: if not shortest or len(newpath) &lt; len(shortest): shortest = newpath return shortest def get_it_done(choices, number): mapping = {} graph = {} for choice in choices: if choice in number: _from = number.index(choice) _to = _from + len(choice) mapping.setdefault((_from, _to), choice) items = sorted(mapping.items(), key=lambda x: x[0]) for _range, value in items: _from, _to = _range graph.setdefault(_from, []).append(_to) start = 0 end = _range[1] #this is hack, works only in python 2.7 path = find_shortest_path(graph, start, end) ranges = [tuple(path[i:i+2]) for i in range(len(path) - 1)] if len(ranges) == 1: return [mapping[(start, graph[start][-1])]] return [mapping[_range] for _range in ranges] if __name__ == "__main__": examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), ## Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), ## Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), ### Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), ## Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ) 
] for (choices, large_number) in examples: res = get_it_done(choices, large_number) print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, "".join(res) == large_number)) print('-' * 80) </code></pre>
-3
2016-10-06T09:49:49Z
[ "python", "string", "algorithm", "split", "computer-science" ]
How to split text into chunks minimizing the solution?
39,750,879
<p><strong>OVERVIEW</strong> </p> <p>I got a set of possible valid chunks I can use to split a text (if possible).</p> <p>How can i split a given text using these chunks such as the result will be optimized (minimized) in terms of the number of resulting chunks?</p> <p><strong>TEST SUITE</strong></p> <pre><code>if __name__ == "__main__": import random import sys random.seed(1) # 1) Testing robustness examples = [] sys.stdout.write("Testing correctness...") N = 50 large_number = "3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481" for i in range(100): for j in range(i): choices = random.sample(range(i), j) examples.append((choices, large_number)) for (choices, large_number) in examples: get_it_done(choices, large_number) sys.stdout.write("OK") # 2) Testing correctness examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), # Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), # Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), # Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), # Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = 
get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print( "Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) # 3) TODO: Testing optimization, it should provide (if possible) # minimal cases </code></pre> <p><strong>QUESTION</strong></p> <p>How could I solve this problem on python without using a brute-force approach?</p>
8
2016-09-28T14:47:06Z
39,904,054
<p>Sorry, the implementation is a bit hacky. But I think it always returns the optimal answer. (Did not proove, though.) It is a fast and complete implementation in python and returns the correct answers for all proposed use cases.</p> <p>The algorithm is recursive and works as follows:</p> <ol> <li>start at the beginning of the text.</li> <li>find matching chunks that can be used as the first chunk.</li> <li>for each matching chunk, recursively start at step 1. with the rest of the text (i.e. the chunk removed from the start) and collect the solutions</li> <li>return the shortest of the collected solutions</li> </ol> <p>When the algorithm is done, all possible paths (and the not possible ones, i.e. no match at the end) should have been traversed exactly once.</p> <p>To perform step 2 efficiently, I build a patricia tree for the choices so the possible chunks matching the beginning of the text can be looked up quickly.</p> <pre><code>def get_seq_in_tree(tree, choice): if type(tree)!=dict: if choice == tree: return [choice] return [] for i in range(1, len(choice)+1): if choice[:i] in tree: return [choice[:i]] + get_seq_in_tree(tree[choice[:i]], choice[i:]) return [] def seq_can_end_here(tree, seq): res = [] last = tree for e, c in enumerate(seq): if '' in last[c]: res.append(e+1) last = last[c] return res def build_tree(choices): tree = {} choices = sorted(choices) for choice in choices: last = tree for c in choice: if c not in last: last[c] = {} last = last[c] last['']=None return tree solution_cache = {} ncalls = 0 def solve(tree, number): global solution_cache global ncalls ncalls +=1 # take every path only once if number in solution_cache: return solution_cache[number] solutions = [] seq = get_seq_in_tree(tree, number) endings = seq_can_end_here(tree, seq) for i in reversed(endings): current_solution = [] current_solution.append(number[:i]) if i == len(number): solutions.append(current_solution) else: next_solution = solve(tree, number[i:]) if next_solution: 
solutions.append(current_solution + next_solution) if not solutions: return None shortest_solution = sorted([(len(solution), solution) for solution in solutions])[0][1] solution_cache[number] = shortest_solution return shortest_solution def get_it_done(choices, number): tree = build_tree(choices) solution = solve(tree, number) return solution if __name__ == "__main__": examples = [ # Example1 -&gt; # Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900'] ( [ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "20", "30", "40", "50", "60", "70", "80", "90", "100", "200", "300", "400", "500", "600", "700", "800", "900", "012345678910203040506070" ], "0123456789102030405060708090100200300400500600700800900" ), ## Example2 ## Solution ['100'] ( ["0", "1", "10", "100"], "100" ), ## Example3 ## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900'] ( [ "10", "20", "30", "40", "50", "60", "70", "80", "90", "012345678910203040506070", "101234567891020304050", "6070809010020030040050", "0600700800900" ], "10123456789102030405060708090100200300400500600700800900" ), ### Example4 ### Solution ['12', '34', '56', '78', '90'] ( [ "12", "34", "56", "78", "90", "890", ], "1234567890" ), ## Example5 ## Solution ['12', '34'] ( [ "1", "2", "3", "12", "23", "34" ], "1234" ), # Example6 ## Solution ['100', '10'] ( ["0", "1", "10", "100"], "10010" ) ] score = 0 for (choices, large_number) in examples: res = get_it_done(choices, large_number) flag = "".join(res) == large_number print("{0}\n{1}\n{2} --&gt; {3}".format( large_number, "".join(res), res, flag)) print('-' * 80) score += flag print("Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100)) </code></pre> <p>I guess the complexity is something like O(L * N * log(C)) where L is the length of the text, N is the size of the vocabulary and C is the number of choices.</p> <p><strong>EDIT:</strong> Included the missing test 
case.</p>
4
2016-10-06T19:20:05Z
[ "python", "string", "algorithm", "split", "computer-science" ]
Python rewriting instead of appending
39,751,095
<p>I have two csv files, result.csv and sample.csv.</p> <p>result.csv</p> <pre><code>M11251TH1230
M11543TH4292
M11435TDS144
</code></pre> <p>sample.csv</p> <pre><code>M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
</code></pre> <p>I have a python script which compares both files: if a line in result.csv matches the first word of a line in sample.csv, it appends 1, else it appends 0, to every line in sample.csv.</p> <p>It should look like M11435TDS144,STB#1,Router#1,1 and M11543TH4258,STB#4,Router#1,0, since M11543TH4258 is not found in result.csv.</p> <p>script.py</p> <pre><code>import csv

with open('result.csv', 'rb') as f:
    reader = csv.reader(f)
    result_list = []
    for row in reader:
        result_list.extend(row)

with open('sample.csv', 'rb') as f:
    reader = csv.reader(f)
    sample_list = []
    for row in reader:
        if row[0] in result_list:
            sample_list.append(row + [1])
        else:
            sample_list.append(row + [0])

with open('sample.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(sample_list)
</code></pre> <p>Sample output (sample.csv) if I run the script two times:</p> <pre><code>M11435TDS144,STB#1,Router#1,1,1
M11543TH4292,STB#2,Router#1,1,1
M11509TD9937,STB#3,Router#1,0,0
M11543TH4258,STB#4,Router#1,0,0
</code></pre> <p>Every time I run the script, 1's and 0's are appended in a new column of sample.csv. Is there any way that, every time I run the script, I can replace the appended column instead of adding more columns?</p>
0
2016-09-28T14:57:07Z
39,751,361
<p>You write to <code>sample.csv</code> and then, on the next run, use that same file as the input, now with the extra column already present. That's why the 1's and 0's keep accumulating in this file.</p>
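<p>One way around it (a sketch of mine, not part of the original answer, and it assumes the first three columns are the fixed part of each row): cut every row back to those three columns before appending the flag, so a re-run overwrites the old flag instead of adding another column. The in-memory strings below stand in for the actual files:</p>

```python
import csv
import io

# IDs found in result.csv (in the real script, read them from the file)
result_list = ["M11435TDS144", "M11543TH4292"]

# sample.csv after a previous run -- a flag column was already appended
sample_csv = """M11435TDS144,STB#1,Router#1,1
M11543TH4292,STB#2,Router#1,1
M11509TD9937,STB#3,Router#1,0
M11543TH4258,STB#4,Router#1,0
"""

fixed_rows = []
for row in csv.reader(io.StringIO(sample_csv)):
    base = row[:3]  # drop any flag column(s) appended by earlier runs
    flag = 1 if base[0] in result_list else 0
    fixed_rows.append(base + [flag])
```

<p>Writing <code>fixed_rows</code> back with <code>csv.writer</code> then always produces exactly four columns, no matter how often the script runs.</p>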
0
2016-09-28T15:09:39Z
[ "python", "csv", "fileappender" ]
Early stopping with tflearn
39,751,113
<p>I'm having a hard time figuring out how to implement early stopping with tflearn. Supposedly it works by using callbacks in the <code>model.fit()</code> function, but I don't quite get how it's done... This is the example on the website, but it still needs a Monitor class that I can't get to work:</p> <pre><code>class MonitorCallback(tflearn.callbacks.Callback):
    def __init__(self, api):
        self.my_monitor_api = api

    def on_epoch_end(self, training_state):
        self.my_monitor_api.send({
            accuracy: training_state.global_acc,
            loss: training_state.global_loss,
        })

monitorCallback = new MonitorCallback(api)
model = ...
model.fit(..., callbacks=monitorCallback)
</code></pre> <p>Does anyone have an example or an idea of how to do this? Cheers</p>
0
2016-09-28T14:58:08Z
39,927,599
<p>Which version of tflearn are you using? Most probably you will need to install from the repo to get the feature right now. <a href="https://github.com/tflearn/tflearn/pull/288" rel="nofollow">Early commits for the feature are dated Aug 17 2016</a>, but the latest release (<a href="https://github.com/tflearn/tflearn/commit/8b4ac4d14cecfc6695e3d10927f4394a4696ab08" rel="nofollow">v2.2.0</a> at the time of writing) is dated Aug 10 2016 and does not include it. Perhaps that explains the issue.</p>
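<p>Once the callback support is available, a common pattern for early stopping is to raise <code>StopIteration</code> from <code>on_epoch_end</code>; the trainer is expected to catch it and end training cleanly. Since tflearn may not be installed here, the sketch below only simulates the training loop: in real code the class would subclass <code>tflearn.callbacks.Callback</code> and be passed via <code>callbacks=</code>, <code>training_state</code> would be tflearn's state object, and the threshold value is an arbitrary illustration:</p>

```python
class EarlyStoppingCallback(object):
    """In real code: subclass tflearn.callbacks.Callback instead of object."""
    def __init__(self, acc_threshold):
        self.acc_threshold = acc_threshold

    def on_epoch_end(self, training_state):
        # the trainer catches StopIteration and stops training cleanly
        if training_state.val_acc >= self.acc_threshold:
            raise StopIteration


class FakeState(object):
    """Stand-in for tflearn's training state (only the field we need)."""
    def __init__(self, val_acc):
        self.val_acc = val_acc


def fake_fit(accuracies, callback):
    """Stand-in for model.fit(..., callbacks=callback)."""
    epochs_run = 0
    try:
        for acc in accuracies:
            epochs_run += 1
            callback.on_epoch_end(FakeState(acc))
    except StopIteration:
        pass
    return epochs_run


epochs = fake_fit([0.70, 0.85, 0.93, 0.97, 0.99],
                  EarlyStoppingCallback(acc_threshold=0.95))
```

<p>Training stops on the fourth epoch, the first whose validation accuracy reaches the threshold.</p>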
0
2016-10-08T00:54:02Z
[ "python", "machine-learning", "tensorflow" ]
How to select a span element using selenium in python?
39,751,153
<p>I am trying to select a month, which is a table element, using the code below, where I locate the element using <code>xpath</code>.</p> <p>This code works:</p> <pre><code>month = driver.find_element_by_xpath('/html/body/div[11]/div[3]/table/tbody/tr/td/span[text()="Aug"]')
month.click()
</code></pre> <p>Here, instead of <code>span[8]</code>, I am trying to store the text in a variable and run the code below:</p> <pre><code>year = str(raw_input("Enter year"))
month = driver.find_element_by_xpath('/html/body/div[11]/div[3]/table/tbody/tr/td/span[text()= year]')
month.click()
</code></pre> <p>But this code is not working.</p>
0
2016-09-28T14:59:55Z
39,754,392
<p>Your question isn't entirely clear, but I think you want to append the <code>year</code> variable's value into your <code>xpath</code> to locate the element whose text matches the input. To achieve this, try the following:</p> <pre><code>year = str(raw_input("Enter year"))
month = driver.find_element_by_xpath("/html/body/div[11]/div[3]/table/tbody/tr/td/span[text() = '" + year + "']")
month.click()
</code></pre> <p>You can also shorten the above <code>xpath</code> to match on the text alone, which would work just as well:</p> <pre><code>month = driver.find_element_by_xpath(".//span[text() = '" + year + "']")
month.click()
</code></pre>
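<p>String concatenation like this is easy to get wrong with nested quotes; building the same XPath with <code>str.format()</code> is a small variation of mine on the answer's approach. Only the string construction is shown, since the <code>driver</code> calls need a live browser:</p>

```python
year = "Aug"  # would come from raw_input()/input() in the real script

# same shortened locator as above, built without manual quote-juggling
xpath = ".//span[text() = '{0}']".format(year)

# month = driver.find_element_by_xpath(xpath)  # requires a live driver
# month.click()
```
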
0
2016-09-28T17:47:23Z
[ "python", "html", "python-2.7", "selenium-webdriver", "html-table" ]
Python: Trying to restart script not working
39,751,206
<p>I tried to restart my Python script from within itself. Python 2.7.11.</p> <pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import sys

os.execv(__file__, sys.argv)
sys.exit()
</code></pre> <p>Result:</p> <pre><code>Traceback (most recent call last):
  File "...\foo.py", line 3, in &lt;module&gt;
    os.execv(__file__, sys.argv)
OSError: [Errno 8] Exec format error
</code></pre> <p>Another attempt:</p> <pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import sys

os.execv(sys.executable, [sys.executable] + sys.argv)
sys.exit()
</code></pre> <p>Result:</p> <pre><code>C:\...\python.exe: can't open file 'C:\...\Math': [Errno 2] No such file or directory
</code></pre> <p>The file's name is foo.py - it's in a folder named 'Math Project'.</p> <p>Codepage: 852, if necessary.</p>
0
2016-09-28T15:02:42Z
39,752,046
<p>Your error message <code>C:\...\python.exe</code> suggests that you're running a Windows system. </p> <p>Your first script fails because under Windows, <code>os.execv()</code> doesn't know how to handle Python scripts because the first line (<code>#!/usr/bin/python</code>) is not evaluated nor does it point to a valid Python interpreter on most Windows systems. In effect, <code>os.execv()</code> tries to execute a plain text file which happens to contain Python code, but the system doesn't know that.</p> <p>Your second script fails to correctly retrieve the file name of your Python script <code>foo.py</code>. It's not clear to me why that happens, but the error message suggests that there might be a problem with the space in your directory name <code>Math Project</code>. </p> <p>As a possible workaround, try replacing the line </p> <pre class="lang-py prettyprint-override"><code>os.execv(sys.executable, [sys.executable] + sys.argv) </code></pre> <p>by the following:</p> <pre class="lang-py prettyprint-override"><code>os.execv(sys.executable, [sys.executable, os.path.join(sys.path[0], __file__)] + sys.argv[1:]) </code></pre> <p>This line attempts to reconstruct the correct path to your Python script, and pass it as an argument to the Python interpreter.</p> <p>As a side note: Keep in mind what your script is doing: it's unconditionally starting another instance of itself. This will result in an infinite loop, which will eventually bring down your system. Make sure that your real script contains an abort condition.</p> <p><strong>EDIT:</strong> </p> <p>The problem lies, indeed, with the space in the path, and the workaround that I mentioned won't help. However, the <code>subprocess</code> module should take care of that. Use it like so:</p> <pre class="lang-py prettyprint-override"><code>import os import sys import subprocess subprocess.call(["python", os.path.join(sys.path[0], __file__)] + sys.argv[1:]) </code></pre>
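A small illustration of why passing the script path as a single list element survives the space in <code>Math Project</code> — this uses <code>subprocess.list2cmdline</code>, an internal helper, purely to visualize the Windows-style quoting; the path shown is a made-up example:

```python
import subprocess

# Each list element becomes exactly one argument, so the space inside the
# path does not split it into "Math" and "Project\foo.py".
cmd = ["python", r"C:\Users\me\Math Project\foo.py", "--flag"]
print(subprocess.list2cmdline(cmd))
# python "C:\Users\me\Math Project\foo.py" --flag
```

That quoting is what was missing when the path got split at the space.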
0
2016-09-28T15:40:58Z
[ "python", "exec", "restart" ]
Regex to extract titles from the text
39,751,242
<p>Can anyone help with the regex to extract the text phrases after 'Title:' from the following text: (have just bolded the text to clearly depict the portion to be extracted)</p> <pre>Title: <b>Anorectal Fistula (Fistula-in-Ano)</b> Procedure Code(s): Effective date: 7/1/07 Title: <b>2003247</b> or previous effective dates) Title: <b>ST2 Assay for Chronic Heart Failure</b> Description/Background Heart Failure HF is one among many cardiovascular diseases that comprises a major cause of morbidity and mortality worldwide. The term “heart failure” (HF) refers to a complex clinical syndrome .</pre> <p>I am using the regex: <code>(?:Title: \n+(.*))|(?:Title:\n+(.*))|(?&lt;=Title: )(.*)(?=Procedure)</code></p> <p>However, it doesn't seem to capture the terms correctly! I am using Python 2.7.12</p>
2
2016-09-28T15:04:08Z
39,809,089
<p>I suggest using</p> <pre><code> Title:\s*(.*?)\s*Procedure|Title:\s*(.*) </code></pre> <p>See the <a href="https://regex101.com/r/a8DXbK/1" rel="nofollow">regex demo</a>.</p> <p><em>Details</em>:</p> <ul> <li><code>Title:</code> - literal text <code>Title:</code></li> <li><code>\s*</code> - 0+ whitespaces</li> <li><code>(.*?)</code> - Group 1: any 0+ chars other than linebreak symbols as few as possible up to the first</li> <li><code>\s*Procedure</code> - 0+ whitespaces + the string <code>Procedure</code></li> <li><code>|</code> - or</li> <li><code>Title:\s*</code> - <code>Title:</code> string + 0+ whitespaces</li> <li><code>(.*)</code> - Group 2: zero or more any chars other than linebreak symbols as many as possible (the rest of the line).</li> </ul> <p><a href="http://ideone.com/QJiZjR" rel="nofollow">Python code</a>:</p> <pre><code>import re regex = r"Title:\s*(.*?)\s*Procedure|Title:\s*(.*)" test_str = ("Title: Anorectal Fistula (Fistula-in-Ano) Procedure Code(s):\n\n" "Effective date: 7/1/07\n\n" "Title:\n\n" "2003247\n\n" "or previous effective dates)\n\n" "Title:\n\n" "ST2 Assay for Chronic Heart Failure\n\n" "Description/Background\n\n" "Heart Failure\n\n" "HF is one among many cardiovascular diseases that comprises a major cause of morbidity and mortality worldwide. The term “heart failure” (HF) refers to a complex clinical syndrome .") res = [] for m in re.finditer(regex, test_str): if m.group(1): res.append(m.group(1)) else: res.append(m.group(2)) print(res) # =&gt; ['Anorectal Fistula (Fistula-in-Ano)', '2003247', 'ST2 Assay for Chronic Heart Failure'] </code></pre>
0
2016-10-01T16:43:24Z
[ "python", "regex", "python-2.7", "text-extraction" ]
REGEX formulating conditions
39,751,262
<p>Just started learning python and regex. </p> <pre><code>My regex: \b\d+\s+([A-Za-z]* |[A-Za-z]*\s+[A-Za-z]*)\s+\D+.. </code></pre> <p>using <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a> </p> <p><strong>string 1:</strong> <a href="https://i.imgur.com/XNuXftW.jpg" rel="nofollow">https://i.imgur.com/XNuXftW.jpg</a> (why does Beer has whitespaces while carrot/chocolate dont have?)</p> <p><strong>string 2</strong><a href="https://i.imgur.com/nrl2FPB.jpg" rel="nofollow">https://i.imgur.com/nrl2FPB.jpg</a> (adding further more of \s+[A-Za-z] in the capture group doesnt seem to be working anymore, WHY?)</p> <p><strong>string 3</strong>: <a href="https://i.imgur.com/qH0Z7Hi.jpg" rel="nofollow">https://i.imgur.com/qH0Z7Hi.jpg</a> (same as string 2 problem)</p> <p>my question is how do i continue to formulate such that it will encompass the above conditions? thank you</p> <p>in the case that you need to test it yourself, i have provided the strings as below.</p>
0
2016-09-28T15:05:20Z
39,751,529
<p>Try this:</p> <pre><code>\d+\s+([A-Za-z ]*)\b *\D+ </code></pre> <p>See on <a href="https://regex101.com/r/rlFheb/2" rel="nofollow">regex101</a>.</p>
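The question's actual test strings were only screenshots, so here is a sketch with invented receipt-style lines (quantity, item, price) in that spirit — note the item name is captured without trailing spaces:

```python
import re

# Invented sample lines (the question's real strings were images).
lines = ["10  Beer  2.50", "3  Chocolate cake  12.00"]
pattern = r"\d+\s+([A-Za-z ]*)\b *\D+"
for line in lines:
    m = re.search(pattern, line)
    print(m.group(1))
# Beer
# Chocolate cake
```

The `\b` forces the group to end on a letter, which is what strips the trailing spaces.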
1
2016-09-28T15:17:56Z
[ "python", "regex" ]
REGEX formulating conditions
39,751,262
<p>Just started learning python and regex. </p> <pre><code>My regex: \b\d+\s+([A-Za-z]* |[A-Za-z]*\s+[A-Za-z]*)\s+\D+.. </code></pre> <p>using <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a> </p> <p><strong>string 1:</strong> <a href="https://i.imgur.com/XNuXftW.jpg" rel="nofollow">https://i.imgur.com/XNuXftW.jpg</a> (why does Beer has whitespaces while carrot/chocolate dont have?)</p> <p><strong>string 2</strong><a href="https://i.imgur.com/nrl2FPB.jpg" rel="nofollow">https://i.imgur.com/nrl2FPB.jpg</a> (adding further more of \s+[A-Za-z] in the capture group doesnt seem to be working anymore, WHY?)</p> <p><strong>string 3</strong>: <a href="https://i.imgur.com/qH0Z7Hi.jpg" rel="nofollow">https://i.imgur.com/qH0Z7Hi.jpg</a> (same as string 2 problem)</p> <p>my question is how do i continue to formulate such that it will encompass the above conditions? thank you</p> <p>in the case that you need to test it yourself, i have provided the strings as below.</p> <p>=</p>
0
2016-09-28T15:05:20Z
39,751,542
<p>You could use this regex, which takes advantage of look-behind (<code>?&lt;=</code>) and look-ahead <code>(?=</code>) so it only captures the product names:</p> <pre><code>(?&lt;=\s\s)\w+(?:\s\w+)*(?=\s\s) </code></pre> <p>See demo on <a href="https://regex101.com/r/rtIXLK/2" rel="nofollow">regex101.com</a>.</p> <p>Use it with the <code>g</code> modifier.</p>
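Same caveat — the question's strings are only screenshots — but a sketch with invented lines shows the lookarounds capturing just the product names, assuming items are separated from quantity and price by at least two whitespace characters:

```python
import re

# Invented receipt-style sample (the real strings were images).
text = "10  Beer  2.50\n3  Chocolate cake  12.00"
print(re.findall(r"(?<=\s\s)\w+(?:\s\w+)*(?=\s\s)", text))
# ['Beer', 'Chocolate cake']
```

Because the lookbehind and lookahead consume nothing, only the names themselves are returned.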
1
2016-09-28T15:18:52Z
[ "python", "regex" ]
REGEX formulating conditions
39,751,262
<p>Just started learning python and regex. </p> <pre><code>My regex: \b\d+\s+([A-Za-z]* |[A-Za-z]*\s+[A-Za-z]*)\s+\D+.. </code></pre> <p>using <a href="https://regex101.com/" rel="nofollow">https://regex101.com/</a> </p> <p><strong>string 1:</strong> <a href="https://i.imgur.com/XNuXftW.jpg" rel="nofollow">https://i.imgur.com/XNuXftW.jpg</a> (why does Beer has whitespaces while carrot/chocolate dont have?)</p> <p><strong>string 2</strong><a href="https://i.imgur.com/nrl2FPB.jpg" rel="nofollow">https://i.imgur.com/nrl2FPB.jpg</a> (adding further more of \s+[A-Za-z] in the capture group doesnt seem to be working anymore, WHY?)</p> <p><strong>string 3</strong>: <a href="https://i.imgur.com/qH0Z7Hi.jpg" rel="nofollow">https://i.imgur.com/qH0Z7Hi.jpg</a> (same as string 2 problem)</p> <p>my question is how do i continue to formulate such that it will encompass the above conditions? thank you</p> <p>in the case that you need to test it yourself, i have provided the strings as below.</p> <p>=</p>
0
2016-09-28T15:05:20Z
39,751,854
<p>I guess the space before "|" is what causes it to capture "beer " in the <strong>string 1</strong> case; "Chocolate cake" does not get the trailing space because it is matched by the second alternative, which is</p> <pre><code>[A-Za-z]*\s+[A-Za-z]* </code></pre> <p>For <strong>string 2</strong>, the <code>[A-Za-z]*\s+[A-Za-z]*</code> part matches exactly two words.</p> <p>How about trying the regex below, modified from trincot's:</p> <pre><code>(?&lt;=\s\s)(\w+\s)+(\w+)(?=\s\s) </code></pre>
1
2016-09-28T15:31:51Z
[ "python", "regex" ]
How to use a refresh_token for youtube python api?
39,751,307
<p>So i got a refresh token in this way and can I keep it?</p> <p>And if so, how do I use it next time, so that there is no need for me to open browser?</p> <p>Right now I'm thinking about creating OAuth2Credentials object directly, is this the right way?</p> <pre><code>from urllib.parse import urlparse, parse_qs from oauth2client.client import flow_from_clientsecrets, OAuth2Credentials from oauth2client.file import Storage from oauth2client.tools import argparser, run_flow from apiclient.discovery import build from apiclient.errors import HttpError from oauth2client.contrib import gce import httplib2 import webbrowser CLIENT_SECRETS_FILE = "bot_credentials.json" flow = client.flow_from_clientsecrets( CLIENT_SECRETS_FILE, scope=scope, redirect_uri='http://127.0.0.1:65010') flow.params['include_granted_scopes'] = 'true' flow.params['access_type'] = 'offline' auth_uri = flow.step1_get_authorize_url() webbrowser.open(auth_uri) url = input('Please enter the redirected url with code') code = get_url_param(url, 'code') if code is None: print('there is an error in your redirect link with code parameter, check if it exists') exit() print(code) credentials = flow.step2_exchange(code[0]) print(credentials.to_json())#refresh_token here!!! </code></pre>
0
2016-09-28T15:07:01Z
39,774,370
<p>If the user consents to authorize your application to access those resources, Google will return a token to your application. Depending on your application's type, it will either validate the token or exchange it for a different type of token. Check this <a href="https://developers.google.com/youtube/2.0/developers_guide_protocol_oauth2" rel="nofollow">documentation</a>.</p> <blockquote> <p>For example, a server-side web application would exchange the returned token for an access token and a <strong>refresh token</strong>. The access token would let the application authorize requests on the user's behalf, and <strong>the refresh token would let the application retrieve a new access token when the original access token expires.</strong></p> </blockquote> <p>Basically, if your application obtains a refresh token during the authorization process, then you will need to periodically use that token to obtain a new, valid access token. Server-side web applications, installed applications, and devices all obtain refresh tokens. </p> <p>It is stated <a href="https://developers.google.com/youtube/2.0/developers_guide_protocol_oauth2#OAuth2_Refreshing_a_Token" rel="nofollow">here</a> that at any time, your application can send a POST request to Google's authorization server that specifies your client ID, your client secret, and the <strong>refresh token</strong> for the user.
The request should also set the <code>grant_type</code> parameter value to <code>refresh_token</code>.</p> <blockquote> <p>The following example demonstrates this request:</p> <pre><code>POST /o/oauth2/token HTTP/1.1 Host: accounts.google.com Content-Type: application/x-www-form-urlencoded client_id=21302922996.apps.googleusercontent.com&amp; client_secret=XTHhXh1SlUNgvyWGwDk1EjXB&amp; refresh_token=1/6BMfW9j53gdGImsixUH6kU5RsR4zwI9lUVX-tqf8JXQ&amp; grant_type=refresh_token </code></pre> <p>The authorization server will return a JSON object that contains a new access token:</p> <pre><code>{ "access_token":"1/fFAGRNJru1FTz70BzhT3Zg", "expires_in":3920, "token_type":"Bearer" } </code></pre> </blockquote> <p>You can check this <a href="https://gist.github.com/ikai/5905078" rel="nofollow">sample on GitHub</a> to generate a refresh token for the YouTube API. Note that this will will also create a file called <code>generate_token.py-oauth</code> that contains this information.</p>
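A small standard-library sketch of assembling the refresh request body shown above — the credential values are placeholders, and the resulting body would be POSTed to <code>https://accounts.google.com/o/oauth2/token</code> with any HTTP client:

```python
from urllib.parse import urlencode

def build_refresh_body(client_id, client_secret, refresh_token):
    # Same fields as the example POST body above; grant_type must be
    # the literal string "refresh_token".
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })

body = build_refresh_body("my-client-id", "my-secret", "my-refresh-token")
print(body)
```

Posting that body with a `Content-Type: application/x-www-form-urlencoded` header matches the raw request in the quoted example.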
0
2016-09-29T15:26:59Z
[ "python", "youtube-api", "google-api-client" ]
Update MongoDB collection with python script
39,751,341
<p>I want to be able to create a new empty collection that will update any time a python script is called. I know that to create the collection i can simply use pymongo as follows:</p> <pre><code>from pymongo import MongoClient db = MongoClient('my.ip.add.ress', 27017)['xxxx'] #connect to client db.create_collection("colName") #create empty collection </code></pre> <p>I want to be able to update it using scripts that I call (specifically from Team City) like:</p> <pre><code>python update.py --build-type xyz --status xyz </code></pre> <p>How would I go about doing this so that the script would update that specific collection that I want?</p>
0
2016-09-28T15:08:39Z
39,793,714
<p>I suppose you know which collection you'd like to modify. If you do, you can just add the collection as another argument to your command.</p> <p>After that you can fetch the command line arguments by using sys.argv or a library specifically written to parse command line arguments. The Python 3 standard library includes argparse (<a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">https://docs.python.org/3/library/argparse.html</a>). However I'd suggest using click (<a href="http://click.pocoo.org/5/" rel="nofollow">http://click.pocoo.org/5/</a>).</p> <p>Save the following as cli.py</p> <pre><code>import click from pymongo import MongoClient MONGOHOST = 'localhost' MONGOPORT = 27017 @click.command() @click.option('--db', help='Database', required=True) @click.option('--col', help='Collection', required=True) @click.option('--build_type', help='Build Type', required=True) @click.option('--status', help='Status', required=True) def update(db, col, build_type, status): mongocol = MongoClient(MONGOHOST, MONGOPORT)[db][col] mongocol.insert_one({'build_type': build_type, 'status': status}) # You could also do: mongocol.find_and_modify() or whatever... if __name__ == '__main__': update() </code></pre> <p>Then run a command like:</p> <pre><code>python cli.py --db=test --col=test --build_type=staging --status=finished </code></pre> <p>make sure you have pymongo and click installed:</p> <pre><code>pip install pymongo click </code></pre>
0
2016-09-30T14:19:53Z
[ "python", "mongodb", "teamcity", "pymongo" ]
Python reading a file line by line and sending it to another Python script
39,751,523
<p>Good day.</p> <p>Today I was trying to practice Python and I'm trying to make a script that reads the lines from a file containing only numbers and uses said numbers as a parameter in another Python script.</p> <p>Here at work I sometimes need to execute a Python script called Suspend.py; every time I execute this script I must type the following information:</p> <pre><code>Suspend.py suspend telefoneNumber </code></pre> <p>I have to do this procedure many times during the day and I have to do this for every number on the list, which is usually a very long list. So I thought about trying to make things a little bit faster and creating a Python script myself.</p> <p>Thing is, I just started learning Python on my own and I kinda suck at it, so I have no idea how to do this.</p> <p>In one file I have the following numbers:</p> <pre><code>87475899 87727856 87781681 87794922 87824499 88063188 88179211 88196532 88244043 88280924 88319531 88421427 88491113 </code></pre> <p>I want Python to be able to read the file line by line and send each number, together with the word "suspend", to the previously mentioned Python script.</p>
1
2016-09-28T15:17:44Z
39,751,620
<p>If I understand you correctly:</p> <pre><code>import subprocess with open("file_with_numbers.txt") as f: for line in f: subprocess.call(["python", "Suspend.py", "suspend", line.strip()]) </code></pre>
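One detail worth noting: each line read from the file keeps its trailing newline, which is why the <code>.strip()</code> matters — otherwise the newline would be passed to <code>Suspend.py</code> as part of the number. A quick sketch:

```python
# What iterating over the file yields, and the command list built from it:
lines = ["87475899\n", "87727856\n"]
commands = [["python", "Suspend.py", "suspend", line.strip()] for line in lines]
print(commands[0])  # ['python', 'Suspend.py', 'suspend', '87475899']
```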
2
2016-09-28T15:22:19Z
[ "python" ]
select dict object based on value
39,751,545
<p>This is how my JSON object looks like:</p> <pre><code>[{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }] </code></pre> <p>How do i grab the dict object where OutputKey is 4 without looping thru?</p> <p><code>any(key['OutputKey'] == 'OutputKey 4' for key in myJSON)</code> returns true but i need it to grab the whole JSON object that belongs to that key.</p>
1
2016-09-28T15:18:58Z
39,751,615
<p>You can use <code>next</code>:</p> <pre><code>next(key for key in myJSON if key['OutputKey'] == 'OutputKey 4') </code></pre> <p>However, if the "key" isn't present, then you'll get a <code>StopIteration</code> exception. You can handle that as you normally would handle any other exception, or you can supply a "default" to <code>next</code>:</p> <pre><code>default = None # or whatever... next((key for key in myJSON if key['OutputKey'] == 'OutputKey 4'), default) </code></pre> <p>Note that this is still "looping through" all the items (except that it short circuits when it finds the right one). If you really want to avoid a loop, you'll need a different data-structure:</p> <pre><code>dict_by_output_key = {key['OutputKey']: key for key in myJSON} dict_by_output_key['OutputKey 4'] </code></pre> <p>You still have to do 1 loop -- But once you've done it, you can do as many O(1) lookups as you want. The cost is a little additional intermediate storage (memory).</p>
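A runnable sketch with a trimmed-down version of the question's data, showing both the found and the not-found case:

```python
myJSON = [
    {'Description': 'Description 3', 'OutputKey': 'OutputKey 3'},
    {'Description': 'Description 4', 'OutputKey': 'OutputKey 4'},
    {'Description': 'Description 5', 'OutputKey': 'OutputKey 5'},
]

# Found: next() returns the whole dict, not just the key.
match = next((d for d in myJSON if d['OutputKey'] == 'OutputKey 4'), None)
print(match['Description'])   # Description 4

# Not found: falls back to the supplied default instead of raising.
missing = next((d for d in myJSON if d['OutputKey'] == 'OutputKey 9'), None)
print(missing)                # None
```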
5
2016-09-28T15:22:04Z
[ "python" ]
select dict object based on value
39,751,545
<p>This is how my JSON object looks like:</p> <pre><code>[{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }] </code></pre> <p>How do i grab the dict object where OutputKey is 4 without looping thru?</p> <p><code>any(key['OutputKey'] == 'OutputKey 4' for key in myJSON)</code> returns true but i need it to grab the whole JSON object that belongs to that key.</p>
1
2016-09-28T15:18:58Z
39,751,623
<pre><code>try: next(key for key in myJSON if key['OutputKey'] == 'OutputKey 4') except StopIteration: #code for the case where there is no such dictionary </code></pre>
1
2016-09-28T15:22:22Z
[ "python" ]
select dict object based on value
39,751,545
<p>This is how my JSON object looks like:</p> <pre><code>[{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }] </code></pre> <p>How do i grab the dict object where OutputKey is 4 without looping thru?</p> <p><code>any(key['OutputKey'] == 'OutputKey 4' for key in myJSON)</code> returns true but i need it to grab the whole JSON object that belongs to that key.</p>
1
2016-09-28T15:18:58Z
39,751,655
<p>A JSON array of objects maps to a Python list of dictionaries. Therefore you need to use index notation to access one of the dictionaries, in this case myJSON[3]. Then inside the dictionary, look up that dictionary's key so that it will return its value. So:</p> <p><code>myJSON[3]['OutputKey']</code></p> <p>will return <code>'OutputKey 4'</code> (and <code>myJSON[3]</code> alone is the whole dictionary).</p>
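To make the distinction concrete — indexing the list gives the whole dictionary, and indexing that dictionary by key gives the value:

```python
# Rebuild the question's structure programmatically.
myJSON = [
    {'Description': 'Description %d' % i, 'OutputKey': 'OutputKey %d' % i}
    for i in range(1, 7)
]
print(myJSON[3])               # the entire fourth dict
print(myJSON[3]['OutputKey'])  # OutputKey 4
```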
0
2016-09-28T15:23:39Z
[ "python" ]
select dict object based on value
39,751,545
<p>This is how my JSON object looks like:</p> <pre><code>[{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }] </code></pre> <p>How do i grab the dict object where OutputKey is 4 without looping thru?</p> <p><code>any(key['OutputKey'] == 'OutputKey 4' for key in myJSON)</code> returns true but i need it to grab the whole JSON object that belongs to that key.</p>
1
2016-09-28T15:18:58Z
39,751,677
<p>You have a list of dicts. If you know the index of the required dict, then you can index that list:</p> <pre><code>output_4 = myJSON[3] </code></pre> <p>Otherwise, you'll be looping through, even if the loop is not obvious (as with your <code>any</code> approach):</p> <pre><code>for d in myJSON: if d['OutputKey'] == 'OutputKey 4': result = d break </code></pre>
0
2016-09-28T15:24:23Z
[ "python" ]
select dict object based on value
39,751,545
<p>This is how my JSON object looks like:</p> <pre><code>[{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }] </code></pre> <p>How do i grab the dict object where OutputKey is 4 without looping thru?</p> <p><code>any(key['OutputKey'] == 'OutputKey 4' for key in myJSON)</code> returns true but i need it to grab the whole JSON object that belongs to that key.</p>
1
2016-09-28T15:18:58Z
39,751,741
<p>Perhaps you should consider using some kind of in-memory database to store the info in order to perform that kind of queries on your data.</p> <p>For instance, I have used <a href="https://pypi.python.org/pypi/tinydb" rel="nofollow">TinyDB</a> in the past, which save the data in a file in JSON format. This is the code for the requested query using this module:</p> <pre><code>from tinydb import TinyDB, Query from tinydb.storages import MemoryStorage db = TinyDB(storage=MemoryStorage) db.insert_multiple([{ 'Description': 'Description 1', 'OutputKey': 'OutputKey 1', 'OutputValue': 'OutputValue 1' }, { 'Description': 'Description 2', 'OutputKey': 'OutputKey 2', 'OutputValue': 'OutputValue 2' }, { 'Description': 'Description 3', 'OutputKey': 'OutputKey 3', 'OutputValue': 'OutputValue 3' }, { 'Description': 'Description 4', 'OutputKey': 'OutputKey 4', 'OutputValue': 'OutputValue 4' }, { 'Description': 'Description 5', 'OutputKey': 'OutputKey 5', 'OutputValue': 'OutputValue 5' }, { 'Description': 'Description 6', 'OutputKey': 'OutputKey 6', 'OutputValue': 'OutputValue 6' }]) q = Query() print db.search(q.OutputKey == 'OutputKey 4') </code></pre>
0
2016-09-28T15:26:53Z
[ "python" ]
python ipaddress, get first usable host only?
39,751,563
<p>I am using Python's ipaddress module and I'm trying to get the first usable host only, not all usable hosts.</p> <p>The below gives me all hosts, and when I try to index it I get the below error.</p> <p>Is it possible any other way to just get the first usable host?</p> <p>Thanks</p> <pre><code>n = ipaddress.ip_network(u'10.10.20.0/24') for ip in n.hosts(): ... print ip 10.10.20.1 10.10.20.2 10.10.20.3 10.10.20.4 ... &gt;&gt;&gt; for ip in n.hosts(): ... print ip[1] ... Traceback (most recent call last): File "&lt;console&gt;", line 2, in &lt;module&gt; TypeError: 'IPv4Address' object does not support indexing &gt;&gt;&gt; </code></pre> <p>The below is also failing:</p> <pre><code>&gt;&gt;&gt; print n.hosts()[0] Traceback (most recent call last): File "&lt;console&gt;", line 1, in &lt;module&gt; TypeError: 'generator' object has no attribute '__getitem__' </code></pre>
0
2016-09-28T15:19:59Z
39,751,684
<p><code>hosts()</code> returns a generator object, which does not support indexing. You must iterate through it.</p> <p>If you only want the first element, just use <code>next()</code>:</p> <pre><code>n = ipaddress.ip_network(u'10.10.20.0/24') first_host = next(n.hosts()) </code></pre> <p>If you want to convert the generator object into a list which supports indexing, you have to call the <code>list()</code> function:</p> <pre><code>all_hosts = list(n.hosts()) first_host = all_hosts[0] </code></pre> <p>You can also loop through a generator object like you would a list, as you did in your first code snippet:</p> <pre><code>for ip in n.hosts(): # do something with ip: this is the IP address, so don't try to index into it! pass </code></pre>
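A compact check of both approaches:

```python
import ipaddress

n = ipaddress.ip_network(u'10.10.20.0/24')

# Generator approach: cheap, yields only what you consume.
first = next(n.hosts())
print(first)                          # 10.10.20.1

# List approach: materializes all 254 usable hosts, then indexes.
all_hosts = list(n.hosts())
print(all_hosts[0], len(all_hosts))   # 10.10.20.1 254
```

For a /24, `hosts()` excludes the network and broadcast addresses, hence 254 entries.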
4
2016-09-28T15:24:36Z
[ "python" ]
PyCharm - Python Strange behavior
39,751,570
<p>I don't know if I can post this here or not. But a strange thing happened recently.</p> <p>I use PyCharm to run my Python code and, surprisingly, when I opened a piece of my code it got deleted. The file is 0KB now, for some reason. I had been using this file for over a month, and this happened when I opened it: the file automatically got deleted from PyCharm and became 0KB.</p> <p>When I tried to delete this file, I got the following:</p> <p>Error 0x800710FE: This file is currently not available for use on this computer</p>
0
2016-09-28T15:20:21Z
39,751,646
<p>PyCharm has Local History feature, it might recover your deleted file...</p> <ul> <li>Right click on your file</li> <li>Click <code>Local History</code> </li> <li><p>Click <code>Show History</code></p> <p><a href="http://i.stack.imgur.com/82Fd6.png" rel="nofollow"><img src="http://i.stack.imgur.com/82Fd6.png" alt="enter image description here"></a></p></li> <li><p>Right Click on the time-frame you'd like to go back to and click <code>Revert</code></p> <p><a href="http://i.stack.imgur.com/jOajY.png" rel="nofollow"><img src="http://i.stack.imgur.com/jOajY.png" alt="enter image description here"></a></p></li> </ul>
0
2016-09-28T15:23:20Z
[ "python", "pycharm" ]
PyCharm - Python Strange behavior
39,751,570
<p>I dont know if I can post this here or not. But a strange thing happened recently. </p> <p>I use pycharm to run my python codes and surprisingly when I opened a piece of my code - it got deleted. The file is 0KB now - for some reason. I am using this file for over a month now and this happened when I opened it and the file automatically got deleted from pycharm and next it became 0KB.</p> <p>When I tried to delete this file: </p> <p>I get the following </p> <p>Error 0xx800710FE: This file is currently not available for use on this computer</p>
0
2016-09-28T15:20:21Z
39,753,232
<p>Seems like a problem with one of my network servers. It is fixed after talking with an IT professional</p>
0
2016-09-28T16:42:39Z
[ "python", "pycharm" ]
On the default/fill value for *multi-key* outer joins
39,751,636
<p>NB: The post below is the "multi-key" counterpart of an <a href="http://stackoverflow.com/q/39748976/559827">earlier question</a> of mine. The solutions to that earlier question work only for the case where the join is on a single key, and it is not clear to me how to generalize those solutions to the multi-key case presented below. Since, IME, modifying an already-answered question in a way that disqualifies the answers it has received is frowned upon in SO, I'm posting this variant separately. I have also posted a <a href="http://meta.stackoverflow.com/q/335424/559827">question</a> to Meta SO on whether I should delete this post and instead modify the original question, at the expense of invalidating its current answers.</p> <hr> <p>Below are teeny/toy versions of much larger/complex dataframes I'm working with:</p> <pre><code>&gt;&gt;&gt; A key1 key2 u v w x 0 a G 0.757954 0.258917 0.404934 0.303313 1 b H 0.583382 0.504687 NaN 0.618369 2 c I NaN 0.982785 0.902166 NaN 3 d J 0.898838 0.472143 NaN 0.610887 4 e K 0.966606 0.865310 NaN 0.548699 5 f L NaN 0.398824 0.668153 NaN key1 key2 y z 0 a G 0.867603 NaN 1 b H NaN 0.191067 2 c I 0.238616 0.803179 3 d G 0.080446 NaN 4 e H 0.932834 NaN 5 f I 0.706561 0.814467 </code></pre> <p>(FWIW, at the end of this post, I provide code to generate these dataframes.)</p> <p>I want to produce an outer join of these dataframes on the <code>key1</code> and <code>key2</code> columns, in such a way that the new positions induced by the outer join get default value 0.0.
IOW, the desired result looks like this</p> <pre><code> key1 key2 u v w x y z 0 a G 0.757954 0.258917 0.404934 0.303313 0.867603 NaN 1 b H 0.583382 0.504687 NaN 0.618369 NaN 0.191067 2 c I NaN 0.982785 0.902166 NaN 0.238616 0.803179 3 d J 0.898838 0.472143 NaN 0.610887 0.000000 0.000000 4 e K 0.966606 0.86531 NaN 0.548699 0.000000 0.000000 5 f L NaN 0.398824 0.668153 NaN 0.000000 0.000000 6 d G 0.000000 0.000000 0.000000 0.000000 0.080446 NaN 7 e H 0.000000 0.000000 0.000000 0.000000 0.932834 NaN 8 f I 0.000000 0.000000 0.000000 0.000000 0.706561 0.814467 </code></pre> <p>(Note that this desired output contains some NaNs, namely those that were already present in <code>A</code> or <code>B</code>.)</p> <p>The <code>merge</code> method gets me part-way there, but the filled-in default values are NaN's, not 0.0's:</p> <pre><code>&gt;&gt;&gt; C = pandas.DataFrame.merge(A, B, how='outer', on=('key1', 'key2')) &gt;&gt;&gt; C key1 key2 u v w x y z 0 a G 0.757954 0.258917 0.404934 0.303313 0.867603 NaN 1 b H 0.583382 0.504687 NaN 0.618369 NaN 0.191067 2 c I NaN 0.982785 0.902166 NaN 0.238616 0.803179 3 d J 0.898838 0.472143 NaN 0.610887 NaN NaN 4 e K 0.966606 0.865310 NaN 0.548699 NaN NaN 5 f L NaN 0.398824 0.668153 NaN NaN NaN 6 d G NaN NaN NaN NaN 0.080446 NaN 7 e H NaN NaN NaN NaN 0.932834 NaN 8 f I NaN NaN NaN NaN 0.706561 0.814467 </code></pre> <p>The <code>fillna</code> method fails to produce the desired output, because it modifies some positions that should be left unchanged:</p> <pre><code>&gt;&gt;&gt; C.fillna(0.0) key1 key2 u v w x y z 0 a G 0.757954 0.258917 0.404934 0.303313 0.867603 0.000000 1 b H 0.583382 0.504687 0.000000 0.618369 0.000000 0.191067 2 c I 0.000000 0.982785 0.902166 0.000000 0.238616 0.803179 3 d J 0.898838 0.472143 0.000000 0.610887 0.000000 0.000000 4 e K 0.966606 0.865310 0.000000 0.548699 0.000000 0.000000 5 f L 0.000000 0.398824 0.668153 0.000000 0.000000 0.000000 6 d G 0.000000 0.000000 0.000000 0.000000 0.080446 0.000000 7 e H 0.000000
0.000000 0.000000 0.000000 0.932834 0.000000 8 f I 0.000000 0.000000 0.000000 0.000000 0.706561 0.814467 </code></pre> <p>How can I achieve the desired output efficiently? (Performance matters here, because I intend to perform this operation on much larger dataframes than those shown here.)</p> <hr> <p><strong>IMPORTANT:</strong> In order to keep the example minimal, I made the multikey consist of only two columns; in practice the number of keys in a multi-key may be significantly greater. Proposed answers should be suitable for multi-keys consisting of at least half-dozen columns.</p> <hr> <p>FWIW, below is the code to generate the example dataframes <code>A</code> and <code>B</code>.</p> <pre><code>from pandas import DataFrame from collections import OrderedDict from random import random, seed def make_dataframe(rows, colnames): return DataFrame(OrderedDict([(n, [row[i] for row in rows]) for i, n in enumerate(colnames)])) maybe_nan = lambda: float('nan') if random() &lt; 0.4 else random() seed(0) A = make_dataframe([['A', 'g', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['B', 'h', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['C', 'i', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['D', 'j', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['E', 'k', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()], ['F', 'l', maybe_nan(), maybe_nan(), maybe_nan(), maybe_nan()]], ('key1', 'key2', 'u', 'v', 'w', 'x')) B = make_dataframe([['A', 'g', maybe_nan(), maybe_nan()], ['B', 'h', maybe_nan(), maybe_nan()], ['C', 'i', maybe_nan(), maybe_nan()], ['D', 'g', maybe_nan(), maybe_nan()], ['E', 'h', maybe_nan(), maybe_nan()], ['F', 'i', maybe_nan(), maybe_nan()]], ('key1', 'key2', 'y', 'z')) </code></pre>
3
2016-09-28T15:22:55Z
39,754,322
<p>Set the <code>keys</code> to be the index of the two <code>DF's</code>:</p> <pre><code>def index_set(frame, keys=['key1', 'key2']):
    frame.set_index(keys, inplace=True)
    return frame
</code></pre> <p>Subset the <code>DF's</code> containing <code>NaN</code> values:</p> <pre><code>def nulls(frame):
    nulls_in_frame = frame[frame.isnull().any(axis=1)].reset_index()
    return nulls_in_frame
</code></pre> <p>Join the two <code>DF's</code>. Concatenate the joined <code>DF</code> with each subset of the <code>NaN</code>-containing <code>DF's</code> and drop the duplicated values, filling the remaining <code>NaN</code>s with 0's.</p> <p>Then use <code>combine_first</code> in a chained operation to patch the values from the joined <code>DF</code>.</p> <pre><code>def perform_join(fr_1, fr_2, keys=['key1', 'key2']):
    fr_1 = index_set(fr_1); fr_2 = index_set(fr_2)
    frame = fr_1.join(fr_2, how='outer').reset_index()
    cat_fr_1 = pd.concat([frame, nulls(fr_1)]).drop_duplicates(keys, keep=False).fillna(0)
    cat_fr_2 = pd.concat([frame, nulls(fr_2)]).drop_duplicates(keys, keep=False).fillna(0)
    fr_1_join = frame.combine_first(frame.fillna(cat_fr_1[fr_1.columns]))
    joined_frame = fr_1_join.combine_first(frame.fillna(cat_fr_2[fr_2.columns]))
    return joined_frame
</code></pre> <p>Finally,</p> <pre><code>perform_join(A, B)
</code></pre> <p><a href="http://i.stack.imgur.com/6WWjf.png" rel="nofollow"><img src="http://i.stack.imgur.com/6WWjf.png" alt="Image"></a></p>
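<p>For comparison, here is a shorter hedged sketch of the same task built on <code>merge</code>'s <code>indicator=True</code> flag (the tiny frames below are illustrative stand-ins, not the asker's data): rows that exist on only one side get 0.0 in the other side's columns, while pre-existing NaNs are left alone.</p>

```python
import pandas as pd

A = pd.DataFrame({'key1': ['a', 'b'], 'key2': ['G', 'H'],
                  'u': [1.0, float('nan')]})
B = pd.DataFrame({'key1': ['a', 'c'], 'key2': ['G', 'I'],
                  'y': [float('nan'), 2.0]})

keys = ['key1', 'key2']
a_cols = [c for c in A.columns if c not in keys]   # columns unique to A
b_cols = [c for c in B.columns if c not in keys]   # columns unique to B

# _merge records whether each row came from A only, B only, or both.
C = A.merge(B, how='outer', on=keys, indicator=True)

# Zero-fill only the NaNs introduced by the merge itself; NaNs that were
# already present in A or B stay untouched.
C.loc[C['_merge'] == 'left_only', b_cols] = 0.0
C.loc[C['_merge'] == 'right_only', a_cols] = 0.0
C = C.drop(columns='_merge')
```

<p>This keeps the whole operation vectorized and works for any number of key columns, since <code>on=keys</code> is just a list.</p>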
2
2016-09-28T17:43:33Z
[ "python", "pandas" ]
python replace unicode characters
39,751,705
<p>I wrote a program to read in the Windows DNS debugging log, but it always contains some funny characters in the domain field. </p> <p>Below is one example:</p> <pre>(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)</pre> <p>I want to replace every <code>\x..</code> with a <code>?</code></p> <p>Explicitly typing \xc2 as follows works:</p> <pre><code>line = '(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)'
re.sub('\\\xc2', '?', line)

result: '(13)?\xb5?\xb1?\xbe\xc3\xa2p\xc3\xb4?\x8d(5)example(3)com(0)'
</code></pre> <p>But it's not working if I write the following:</p> <p><code>re.sub('\\\x..', '?', line)</code></p> <p>How can I write a regular expression to replace them all?</p>
4
2016-09-28T15:25:24Z
39,751,863
<p>There are better tools for this job than regex, you could try for example:</p> <pre><code>&gt;&gt;&gt; line
'(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)'
&gt;&gt;&gt; line.decode('ascii', 'ignore')
u'(13)p(5)example(3)com(0)'
</code></pre> <p>That skips non-ascii characters. Or with replace, you can swap them for a '?' placeholder:</p> <pre><code>&gt;&gt;&gt; print line.decode('ascii', 'replace')
(13)��������p����(5)example(3)com(0)
</code></pre> <p>But the best solution is to find out what erroneous encoding/decoding caused the <a href="https://en.wikipedia.org/wiki/Mojibake" rel="nofollow">mojibake</a> to happen in the first place, so you can recover data by using the correct code pages. </p> <p>There is an excellent answer about unbaking emojibake <a href="http://stackoverflow.com/a/24141326/674039">here</a>. Note that it's an inexact science, and a lot of the crucial information is actually in the comment thread under that answer. </p>
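<p>If a regex is still wanted, as in the original question, matching the characters themselves rather than their <code>\x..</code> escape codes sidesteps the escaping problems — a hedged sketch (written for Python 3, where the literal holds code points rather than raw bytes):</p>

```python
import re

line = '(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)'

# Anything outside the 7-bit ASCII range becomes '?'. The escape '\xc2'
# in the source is a single character, not the four characters \, x, c, 2.
cleaned = re.sub(r'[^\x00-\x7f]', '?', line)

# Incidentally, these code points are also valid UTF-8 byte values
# ('\xc2\xb5' is the UTF-8 encoding of 'µ'), so the underlying text can
# be recovered instead of discarded:
recovered = line.encode('latin-1').decode('utf-8')
```

<p>The recovery step supports the mojibake diagnosis above: the "funny characters" are UTF-8 bytes that were read as single-byte characters.</p>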
2
2016-09-28T15:32:04Z
[ "python", "mojibake" ]
python replace unicode characters
39,751,705
<p>I wrote a program to read in the Windows DNS debugging log, but it always contains some funny characters in the domain field. </p> <p>Below is one example:</p> <pre>(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)</pre> <p>I want to replace every <code>\x..</code> with a <code>?</code></p> <p>Explicitly typing \xc2 as follows works:</p> <pre><code>line = '(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)'
re.sub('\\\xc2', '?', line)

result: '(13)?\xb5?\xb1?\xbe\xc3\xa2p\xc3\xb4?\x8d(5)example(3)com(0)'
</code></pre> <p>But it's not working if I write the following:</p> <p><code>re.sub('\\\x..', '?', line)</code></p> <p>How can I write a regular expression to replace them all?</p>
4
2016-09-28T15:25:24Z
39,752,163
<p>what about this?</p> <pre><code>line = '(13)\xc2\xb5\xc2\xb1\xc2\xbe\xc3\xa2p\xc3\xb4\xc2\x8d(5)example(3)com(0)'
pattern = r'\\x.+'
re.sub(pattern, r'?', line)
</code></pre>
-2
2016-09-28T15:46:44Z
[ "python", "mojibake" ]
Mapping each element in list to different column in pandas dataframe
39,751,747
<p><strong>Background</strong>: I have a dataframe with individuals' names and addresses. I'm trying to catalog people associated with each person in my dataframe, so I'm running each row/record in the dataframe through an external API that returns a list of people associated with the individual. The idea is to write a series of functions that call the API, return the list of relatives, and append each name in the list to a distinct column in the original dataframe. The code will eventually be parallelized.</p> <p>The dataframe (with column names matching the fields the functions below read):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'First_Name': ['Kyle', 'Ted', 'Mary', 'Ron'],
    'Last_Name': ['Smith', 'Jones', 'Johnson', 'Reagan'],
    'Street_Address': ['123 Main Street', '456 Maple Street', '987 Tudor Place', '1600 Pennsylvania Avenue']},
    columns = ['First_Name', 'Last_Name', 'Street_Address'])
</code></pre> <p>The first function, which calls the API and returns a list of names (note the query string joins its parameters with <code>&amp;</code> and <code>=</code>):</p> <pre><code>import requests
import json
import numpy as np
from multiprocessing import Pool

def API_call(row):
    api_key = '123samplekey'
    first_name = str(row['First_Name'])
    last_name = str(row['Last_Name'])
    address = str(row['Street_Address'])
    url = ('https://apiaddress.com/' + '?first_name=' + first_name +
           '&amp;last_name=' + last_name + '&amp;address=' + address +
           '&amp;api_key=' + api_key)
    response = requests.get(url)
    JSON = response.json()
    name_list = []
    for person in JSON['people']:
        name = person.get('name')
        name_list.append(name)
    return name_list
</code></pre> <p>This function works well. For each person in the dataframe, a list of family/friends is returned. 
So, for Kyle Smith, the function returns <code>[Heather Smith, Dan Smith]</code>, for Ted Jones the function returns <code>[Al Jones, Karen Jones, Tiffany Jones, Natalie Jones]</code>, and so on for each row/record in the dataframe.</p> <p><strong>Problem:</strong> I'm struggling to write a subsequent function that will iterate through the returned list and append each value to a unique column that corresponds to the searched name in the dataframe. I want the function to return a dataframe that looks like this:</p> <pre><code>First_Name | Last_Name | Street_Address           | relative1_name | relative2_name | relative3_name | relative4_name
---------------------------------------------------------------------------------------------------------------------
Kyle       | Smith     | 123 Main Street          | Heather Smith  | Dan Smith      |                |
Ted        | Jones     | 456 Maple Street         | Al Jones       | Karen Jones    | Tiffany Jones  | Natalie Jones
Mary       | Johnson   | 987 Tudor Place          | Kevin Johnson  |                |                |
Ron        | Reagan    | 1600 Pennsylvania Avenue | Nancy Reagan   | Patti Davis    | Michael Reagan | Christine Reagan
</code></pre> <p>NOTE: The goal is to vectorize everything, so that I can use the <code>apply</code> method and eventually run the whole thing in parallel. 
Something along the lines of the following code has worked for me in the past, when the "API_call" function was returning a single object instead of a list that needed to be iterated/mapped:</p> <pre><code>def API_call(row):
    # all API parameters
    url = 'https://api.com/parameters'
    response = requests.get(url)
    JSON = response.json()
    single_object = JSON['key1']['key2'].get('key3')
    return single_object

def second_function(data):
    data['single_object'] = data.apply(API_call, axis=1)
    return data

def parallelize(dataframe, function):
    df_splits = np.array_split(dataframe, 10)
    pool = Pool(4)
    df_whole = pd.concat(pool.map(function, df_splits))
    pool.close()
    pool.join()
    return df_whole

parallelize(df, second_function)
</code></pre> <p>The problem is I just can't write a vectorizable function (second_function) that maps names from the list returned by the API to unique columns in the original dataframe. Thanks in advance for any help! </p>
0
2016-09-28T15:27:10Z
39,752,637
<pre><code>import pandas as pd

def make_relatives_frame(relatives):
    return pd.DataFrame(data=[relatives],
                        columns=["relative%i_name" % x
                                 for x in range(1, len(relatives) + 1)])

# example output from an API call
df_names = pd.DataFrame(data=[["Kyle", "Smith"]],
                        columns=["First_Name", "Last_Name"])
relatives = ["Heather Smith", "Dan Smith"]

df_relatives = make_relatives_frame(relatives)
df_names[df_relatives.columns] = df_relatives

# example output from another API Call with more relatives
df_names2 = pd.DataFrame(data=[["John", "Smith"]],
                         columns=["First_Name", "Last_Name"])
relatives2 = ["Heath Smith", "Daryl Smith", "Scott Smith"]

df_relatives2 = make_relatives_frame(relatives2)
df_names2[df_relatives2.columns] = df_relatives2

# example of stacking the outputs
total_df = df_names.append(df_names2)

print total_df
</code></pre> <p>The above code should get you started. Obviously it is just a representative example, but you should be able to refactor it to fit your specific use case. </p>
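<p>To connect this back to the <code>apply</code>-based workflow in the question, a hedged sketch (the <code>API_call</code> below is a stub returning canned lists — the real network call is assumed): collect the per-row lists with <code>apply</code>, expand the ragged lists into padded columns, and concatenate.</p>

```python
import pandas as pd

df = pd.DataFrame({'First_Name': ['Kyle', 'Ted'],
                   'Last_Name': ['Smith', 'Jones']})

def API_call(row):
    # Stub standing in for the real API request.
    fake = {'Smith': ['Heather Smith', 'Dan Smith'],
            'Jones': ['Al Jones', 'Karen Jones', 'Tiffany Jones']}
    return fake[row['Last_Name']]

# One list per row; result_type='reduce' keeps the lists intact instead
# of expanding them into columns prematurely.
name_lists = df.apply(API_call, axis=1, result_type='reduce')

# The DataFrame constructor pads the shorter lists with NaN automatically.
relatives = pd.DataFrame(name_lists.tolist(), index=df.index)
relatives.columns = ['relative%d_name' % (i + 1)
                     for i in range(relatives.shape[1])]

result = pd.concat([df, relatives], axis=1)
```

<p>Because the expansion happens once on the collected lists, this stays compatible with splitting the frame and running <code>apply</code> in a process pool as in the question.</p>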
0
2016-09-28T16:10:02Z
[ "python", "list", "function", "pandas", "vectorization" ]
How to handle meta data associated with a pandas dataframe?
39,751,807
<p>What is the best practice for saving meta information to a dataframe? I know of the following coding practice</p> <pre><code>import pandas as pd

df = pd.DataFrame([])
df.currency = 'USD'
df.measure = 'Price'
df.frequency = 'daily'
</code></pre> <p>But as stated in this post <a href="http://stackoverflow.com/questions/14688306/adding-meta-information-metadata-to-pandas-dataframe">Adding meta-information/metadata to pandas DataFrame</a> this is associated with the risk of losing the information by applying functions such as "groupby, pivot, join or loc" as they may return "a new DataFrame without the metadata attached". </p> <p>Is this still valid or has there been an update to meta information processing in the meantime? What would be an alternative coding practice?</p> <p>I do not think building a separate object is very suitable. Also working with a MultiIndex does not convince me. Let's say I want to divide a dataframe with prices by a dataframe with earnings. Working with MultiIndices would be very involved.</p> <pre><code># define price DataFrame
p_index = pd.MultiIndex.from_tuples([['Apple', 'price', 'daily'],
                                     ['MSFT', 'price', 'daily']])
price = pd.DataFrame([[90, 20], [85, 30], [70, 25]], columns=p_index)

# define earnings dataframe
e_index = pd.MultiIndex.from_tuples([['Apple', 'earnings', 'daily'],
                                     ['MSFT', 'earnings', 'daily']])
earnings = pd.DataFrame([[5000, 2000], [5800, 2200], [5100, 3000]], columns=e_index)

price.divide(earnings.values, level=1, axis=0)
</code></pre> <p>In the example above I do not even ensure that the company indices really match. I would probably need to invoke a <code>pd.DataFrame.reindex()</code> or similar. This cannot be a good coding practice in my point of view.</p> <p>Is there a straightforward solution to the problem of handling meta information in that context that I don't see?</p> <p>Thank you in advance</p>
1
2016-09-28T15:29:50Z
39,755,663
<p>I think that MultiIndexes are the way to go, but this way:</p> <pre><code>daily_price_data = pd.DataFrame({'Apple': [90, 85, 30], 'MSFT': [20, 30, 25]})
daily_earnings_data = pd.DataFrame({'Apple': [5000, 58000, 5100], 'MSFT': [2000, 2200, 3000]})
data = pd.concat({'price': daily_price_data, 'earnings': daily_earnings_data}, axis=1)
data
  earnings       price
     Apple  MSFT Apple MSFT
0     5000  2000    90   20
1    58000  2200    85   30
2     5100  3000    30   25
</code></pre> <p>Then, to divide:</p> <pre><code>data['price'] / data['earnings']
</code></pre> <p>If you find that your workflow makes more sense with companies listed on the first level of the index, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow">pandas.DataFrame.xs</a> will be very helpful:</p> <pre><code>data2 = data.reorder_levels([1,0], axis=1).sort_index(axis=1)
data2.xs('price', axis=1, level=-1) / data2.xs('earnings', axis=1, level=-1)
</code></pre>
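<p>As a side note on the "has there been an update" part of the question: since pandas 1.0 there is an experimental <code>DataFrame.attrs</code> dictionary intended for exactly this kind of scalar metadata. Propagation is still not guaranteed across every operation, so treat the sketch below as a hedged illustration rather than a settled best practice:</p>

```python
import pandas as pd

df = pd.DataFrame({'Apple': [90, 85, 70], 'MSFT': [20, 30, 25]})

# attrs is a plain dict attached to the frame (experimental, pandas >= 1.0).
df.attrs['currency'] = 'USD'
df.attrs['measure'] = 'Price'
df.attrs['frequency'] = 'daily'

# Some operations propagate attrs -- a copy does, for instance -- but
# propagation is not guaranteed across the whole API, so re-check after
# any operation you rely on.
df2 = df.copy()
```

<p>Unlike the attribute-assignment idiom in the question (<code>df.currency = 'USD'</code>), this at least keeps the metadata in one documented place instead of shadowing potential column names.</p>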
0
2016-09-28T19:01:37Z
[ "python", "pandas", "metadata", "finance", "divide" ]
Configure Visual Studio Code to run Python in bash on Windows
39,751,858
<p>I want to run a python <code>.py</code> file in Visual Studio Code using the Windows bash console.</p> <p>What I tried to do:</p> <p>Change the default shell in <code>settings.json</code>:</p> <pre><code>{
    "terminal.integrated.shell.windows": "C:\\Windows\\sysnative\\bash.exe"
}
</code></pre> <p>Add a task in <code>tasks.json</code> to run the <code>python</code> command with the file name as an argument:</p> <pre><code>{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "python",
    "isShellCommand": true,
    "showOutput": "always",
    "tasks": [
        {
            "taskName": "Run python in bash",
            "suppressTaskName": true,
            "args": ["${file}"]
        }
    ]
}
</code></pre> <p>There are a few problems to solve here:</p> <ol> <li>Tasks are not being run in bash as I wanted</li> <li>To access the <code>C</code> drive <a href="http://www.howtogeek.com/261383/how-to-access-your-ubuntu-bash-files-in-windows-and-your-windows-system-drive-in-bash/" rel="nofollow">I need to replace</a> <code>C:\</code> with <code>/mnt/c</code> in the file path</li> </ol> <p>Can you share with me solutions to those problems?</p>
0
2016-09-28T15:31:54Z
39,755,569
<p>I don't have Windows 10 with <code>bash</code> but I'd imagine the problem is that you're not actually trying to run Python. You're trying to run <code>bash</code> (and then run <code>python</code>). Try setting the command to <code>bash</code> with params <code>["python", "$file"]</code>.</p>
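<p>That addresses problem 1. For problem 2 (translating <code>C:\</code> into <code>/mnt/c</code>), the drive-letter mapping is WSL's documented convention; here is a small hedged helper sketch (the function name and wiring are this example's own, not part of VS Code):</p>

```python
import re

def to_wsl_path(win_path):
    # 'C:\Users\me\run.py' -> '/mnt/c/Users/me/run.py':
    # lower-case the drive letter, prefix /mnt, flip the slashes.
    path = re.sub(r'^([A-Za-z]):',
                  lambda m: '/mnt/' + m.group(1).lower(),
                  win_path)
    return path.replace('\\', '/')
```

<p>A task could then invoke <code>bash -c</code> with the converted path as the script argument.</p>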
0
2016-09-28T18:55:48Z
[ "python", "vscode", "wsl" ]
expanding rows in pandas dataframe
39,751,866
<p>I have the following data:</p> <pre><code>product  Sales_band  Hour_id  sales
prod_1   HIGH        1        200
prod_1   HIGH        3        100
prod_1   HIGH        4        300
prod_1   VERY HIGH   2        100
prod_1   VERY HIGH   5        253
prod_1   VERY HIGH   6        234
</code></pre> <p>I want to add rows based on the <strong>Hour_id</strong> value. The <strong>Hour_id</strong> variable can take values from <strong>1 to 10</strong>, so the data above should be expanded wherever the hour ids are missing. The dummy output is (<strong>sales = 0 when the hour id is missing</strong>):</p> <pre><code>product  Sales_band  Hour_id  sales
prod_1   HIGH        1        200
prod_1   HIGH        2        0
prod_1   HIGH        3        100
prod_1   HIGH        4        300
prod_1   HIGH        5        0
prod_1   HIGH        6        0
prod_1   HIGH        7        0
prod_1   HIGH        8        0
prod_1   HIGH        9        0
prod_1   HIGH        10       0
prod_1   VERY HIGH   1        0
prod_1   VERY HIGH   2        100
prod_1   VERY HIGH   3        0
prod_1   VERY HIGH   4        0
prod_1   VERY HIGH   5        253
prod_1   VERY HIGH   6        234
prod_1   VERY HIGH   7        0
prod_1   VERY HIGH   8        0
prod_1   VERY HIGH   9        0
prod_1   VERY HIGH   10       0
</code></pre> <p>How can I achieve this using a pandas dataframe?</p>
2
2016-09-28T15:32:06Z
39,752,186
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a>:</p> <pre><code>print (df.groupby(['product','Sales_band'])['Hour_id','sales']
         .apply(lambda x: x.set_index('Hour_id').reindex(range(1, 11), fill_value=0))
         .reset_index())

   product Sales_band  Hour_id  sales
0   prod_1       HIGH        1    200
1   prod_1       HIGH        2      0
2   prod_1       HIGH        3    100
3   prod_1       HIGH        4    300
4   prod_1       HIGH        5      0
5   prod_1       HIGH        6      0
6   prod_1       HIGH        7      0
7   prod_1       HIGH        8      0
8   prod_1       HIGH        9      0
9   prod_1       HIGH       10      0
10  prod_1  VERY HIGH        1      0
11  prod_1  VERY HIGH        2    100
12  prod_1  VERY HIGH        3      0
13  prod_1  VERY HIGH        4      0
14  prod_1  VERY HIGH        5    253
15  prod_1  VERY HIGH        6    234
16  prod_1  VERY HIGH        7      0
17  prod_1  VERY HIGH        8      0
18  prod_1  VERY HIGH        9      0
19  prod_1  VERY HIGH       10      0
</code></pre>
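<p>An alternative hedged sketch of the same idea that avoids the per-group <code>apply</code>: build the complete <code>(product, Sales_band, Hour_id)</code> index up front from the observed product/band pairs crossed with hours 1 to 10, then <code>reindex</code> once.</p>

```python
import pandas as pd

df = pd.DataFrame({'product': ['prod_1'] * 4,
                   'Sales_band': ['HIGH', 'HIGH', 'VERY HIGH', 'VERY HIGH'],
                   'Hour_id': [1, 3, 2, 5],
                   'sales': [200, 100, 100, 253]})

# Every observed (product, Sales_band) pair crossed with hours 1..10.
pairs = df[['product', 'Sales_band']].drop_duplicates()
full = pd.MultiIndex.from_tuples(
    [(p, b, h) for p, b in pairs.itertuples(index=False) for h in range(1, 11)],
    names=['product', 'Sales_band', 'Hour_id'])

# Missing hours appear with sales filled in as 0.
out = (df.set_index(['product', 'Sales_band', 'Hour_id'])
         .reindex(full, fill_value=0)
         .reset_index())
```

<p>Using only the observed pairs keeps spurious product/band combinations out of the result, which a plain cartesian product of the unique values would introduce.</p>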
2
2016-09-28T15:48:06Z
[ "python", "python-3.x", "pandas", "expand", "reindex" ]