title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
Get all active campaigns from facebook ads api - how to set the filter | 39,771,522 | <p>I would like to get insights for all my active campaigns running on Facebook Ads. I manage to get all campaigns on my account with FacebookAdsApi, but I am not able to use a filter so that I only get campaigns with the "ACTIVE" status.</p>
<p>Here is my code so far:</p>
<pre><code>from facebookads.api import FacebookAdsApi
from facebookads.objects import AdAccount, Ad, Insights, AdUser
import datetime
my_app_id = 'xxx'
my_app_secret = 'xxx'
my_access_token = 'xxx'
FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token)
me = AdUser(fbid='me')
my_accounts = list(me.get_ad_accounts())
my_account = my_accounts[0]
fields = [
Insights.Field.campaign_name,
Insights.Field.spend
]
params = {
'time_range': {'since': str(datetime.date(2015, 1, 1)), 'until': str(datetime.date.today())},
'level': 'campaign',
'limit': 1000
}
insights = my_account.get_insights(fields=fields, params=params)
print len(insights)
>>> 115
</code></pre>
<p>I tried to add the following line to <code>params</code>:</p>
<pre><code>'filtering': [{'field': 'campaign.effective_status','operator':'IN','value':['ACTIVE']}]
</code></pre>
<p>which results in this error-msg:</p>
<pre><code>"error_user_msg": "The reporting data you are trying to fetch has too many rows. Please use asynchronous query or restrict amount of ad IDs or time ranges to fetch the data."
</code></pre>
<p>I can get all campaigns from my account (115) without any issue, and there are only 10 active campaigns at the moment, so I guess my filter is wrong?</p>
 | 0 | 2016-09-29T13:21:44Z | 39,855,389 | <p>This is a common issue with insights queries. When working with a lot of data (many campaigns, many days, or both) you can easily run into the described error.</p>
<p><a href="https://developers.facebook.com/docs/marketing-api/insights/overview/v2.7" rel="nofollow">FB docs</a> say:</p>
<blockquote>
<p>There is no explicit limit for when a query will fail. When it times out, try to break down the query into smaller queries by putting in filters like date range.</p>
</blockquote>
<p>In your query the issue is most likely caused by fetching data from the beginning of 2015. For starters I suggest using, for example, <code>date_preset=last_30_days</code> (preset date intervals should be returned faster) and proceeding from there, maybe by splitting your insights-loading logic into more intervals.</p>
<p>Another option is reducing the page size (<code>limit</code>), since a large page size can also cause this issue.</p>
<p>Or the ultimate solution: use <a href="https://developers.facebook.com/docs/marketing-api/insights/async/v2.7" rel="nofollow">async jobs</a> for loading insights. This prevents FB from timing out, because the query runs asynchronously; you check the job status and load the data only when it's done.</p>
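<p>For illustration, a rough sketch of that asynchronous pattern using the same account/fields/params objects as in the question. The exact method and field names (<code>async=True</code>, <code>remote_read</code>, <code>async_status</code>, <code>get_result</code>) are assumptions that may differ between <code>facebookads</code> SDK versions, so check them against your installed release:</p>
<pre><code>import time

# start the report asynchronously instead of waiting for a synchronous response
async_job = my_account.get_insights(fields=fields, params=params, async=True)

# poll until Facebook reports the job as finished
while True:
    job = async_job.remote_read()
    if job['async_status'] == 'Job Completed':
        break
    time.sleep(5)

insights = async_job.get_result()
print len(insights)
</code></pre>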
| 1 | 2016-10-04T14:49:22Z | [
"python",
"facebook-graph-api",
"facebook-ads-api"
]
|
Visual C++ for python failed with exit status 2 when installing divisi2 | 39,771,592 | <p>I am using Python 2.7 on Windows 8.1 64 bit.</p>
<p>I want to install divisi2 <a href="https://pypi.python.org/pypi/Divisi2/2.2.5" rel="nofollow">https://pypi.python.org/pypi/Divisi2/2.2.5</a></p>
<p>I have already installed NumPy and SciPy, which are the prerequisites for divisi2. I have installed Visual C++ for Python 9.0.</p>
<p>Whenever I issue the <code>pip install divisi2</code> command I get the following error in the console.</p>
<pre><code> svdlib/svdwrapper.c(89) : error C2059: syntax error : '{'
svdlib/svdwrapper.c(90) : error C2275: 'PyObject' : illegal use of this type
as an expression
c:\python27\include\object.h(108) : see declaration of 'PyObject'
svdlib/svdwrapper.c(90) : error C2065: 'arr' : undeclared identifier
svdlib/svdwrapper.c(90) : error C2065: 'type' : undeclared identifier
svdlib/svdwrapper.c(90) : warning C4047: 'function' : 'PyArray_Descr *' diff
ers in levels of indirection from 'int'
svdlib/svdwrapper.c(90) : warning C4024: 'function through pointer' : differ
ent types for formal and actual parameter 2
svdlib/svdwrapper.c(91) : error C2065: 'dim' : undeclared identifier
svdlib/svdwrapper.c(91) : warning C4047: 'function' : 'npy_intp *' differs i
n levels of indirection from 'int'
svdlib/svdwrapper.c(91) : warning C4024: 'function through pointer' : differ
ent types for formal and actual parameter 4
svdlib/svdwrapper.c(91) : error C2065: 'strides' : undeclared identifier
svdlib/svdwrapper.c(91) : warning C4047: 'function' : 'npy_intp *' differs i
n levels of indirection from 'int'
svdlib/svdwrapper.c(91) : warning C4024: 'function through pointer' : differ
ent types for formal and actual parameter 5
svdlib/svdwrapper.c(95) : error C2065: 'arr' : undeclared identifier
svdlib/svdwrapper.c(96) : error C2065: 'arr' : undeclared identifier
svdlib/svdwrapper.c(96) : warning C4047: 'return' : 'int *' differs in level
s of indirection from 'int'
svdlib/svdwrapper.c(100) : error C2143: syntax error : missing '{' before '*
'
svdlib/svdwrapper.c(102) : warning C4133: 'initializing' : incompatible type
s - from 'int *' to 'PyObject *'
svdlib/svdwrapper.c(114) : warning C4133: 'return' : incompatible types - fr
om 'PyObject *' to 'int *'
error: command 'C:\\Users\\i054564\\AppData\\Local\\Programs\\Common\\Micros
oft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\
\users\\i054564\\appdata\\local\\temp\\pip-build-0aufqt\\divisi2\\setup.py';exec
(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record c:\users\i054564\appdata\local\temp\pip-5d
xl7g-record\install-record.txt --single-version-externally-managed --compile" fa
iled with error code 1 in c:\users\i054564\appdata\local\temp\pip-build-0aufqt\d
ivisi2\
</code></pre>
<p>cheers,</p>
<p>Saurav</p>
 | 0 | 2016-09-29T13:24:36Z | 39,786,213 | <p>I finally solved this problem after getting a hint from a colleague, i.e. to use</p>
<p><a href="https://github.com/develersrl/gccwinbinaries" rel="nofollow">https://github.com/develersrl/gccwinbinaries</a>.</p>
<p>This will install the prerequisites and then you can install divisi2.</p>
| 0 | 2016-09-30T07:42:07Z | [
"python",
"pip"
]
|
Django template showing \u200e code | 39,771,702 | <p>Hey guys, I am beyond frustrated/exhausted trying to fix this unicode character \u200e showing up in my web page. I have tried everything I can think of. Here is what my page looks like; it's data scraped from news.google.com articles and shown on my page with the submission time (the submission time is where the \u200e pops up everywhere):
<a href="http://i.imgur.com/lrqmvWG.jpg" rel="nofollow">http://i.imgur.com/lrqmvWG.jpg</a></p>
<p>I am going to provide my <strong>views.py</strong>, my <strong>articles.html</strong> (the page in the picture that is set up to display everything), and <strong>header.html</strong> (for whatever reason; it is the parent template of articles.html for the CSS inheritance). Also, I researched and know that \u200e is a left-to-right mark, and when I inspect the source on news.google.com, it pops up in the <strong>time submission element</strong> as</p>
<pre><code>&lrm;
</code></pre>
<p>like so:</p>
<pre><code>&lt;span class="al-attribution-timestamp"&gt;&amp;lrm;51 minutes ago&amp;lrm;&lt;/span&gt;
</code></pre>
<p>I tried editing views.py to encode it using .encode('ascii', 'ignore'), or utf-8, or iso-8859-8, and a couple of other lines of code I found researching deep on Google, but it still displays \u200e everywhere. I put it in so many different parts of my views.py too, even right after the for loop (and right before and after it gets stored as data in the variable "b"), and it's just not going away. What do I need to do?</p>
<p><strong>Views.py</strong></p>
<pre><code>def articles(request):
""" Grabs the most recent articles from the main news page """
import bs4, requests
list = []
list2 = []
url = 'https://news.google.com/'
r = requests.get(url)
sta = "&lrm;"
try:
r.raise_for_status() == True
except ValueError:
print('Something went wrong.')
soup = bs4.BeautifulSoup(r.text, 'html.parser')
for listarticles in soup.find_all('h2', 'esc-lead-article-title'):
a = listarticles.text
list.append(a)
for articles_times in soup.find_all('span','al-attribution-timestamp'):
b = articles_times.text
list2.append(b)
list = zip(list,list2)
context = {'list':list, 'list2':list2}
return render(request, 'newz/articles.html', context)
</code></pre>
<p><strong>articles.html</strong></p>
<pre><code> {% extends "newz/header.html" %}
{% block content %}
<script>
.firstfont (
font-family: serif;
}
</script>
<div class ="row">
<h3 class="btn-primary">These articles are scraped from <strong>news.google.com</strong></h3><br>
<ul class="list-group">
{% for thefinallist in list %}
<div class="col-md-15">
<li class="list-group-item">{{ thefinallist }}
</li>
</div>
{% endfor %}
</div></ul>
{{ list }}
{% endblock %}
</code></pre>
<p><strong>header.html</strong></p>
<pre><code> <!DOCTYPE html>
<html lang="en">
<head>
<title>Sacred Page</title>
<meta charset="utf-8" />
{% load staticfiles %}
<meta name="viewport" content = "width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="{% static 'newz/css/bootstrap.min.css' %}" type = "text/css"/>
<style type="text/css">
html,
body {
height:100%
}
</style>
</head>
<body class="body" style="background-color:#EEEDFA">
<div class="container-fluid" style="min-height:95%; ">
<div class="row">
<div class="col-sm-2">
<br>
<center>
<img src="{% static 'newz/img/profile.jpg' %}" class="responsive-img" style='max-height:100px;' alt="face">
</center>
</div>
<div class="col-sm-10">
<br>
<center>
<h3><font color="007385">The sacred database</font></h3>
</center>
</div>
</div><hr>
<div class="row">
<div class="col-sm-2">
<br>
<br>
<!-- Great, til you resize. -->
<!--<div class="well bs-sidebar affix" id="sidebar" style="background-color:#E77200">-->
<div class="well bs-sidebar" id="sidebar" style="background-color:#E1DCF5">
<ul class="nav nav-pills nav-stacked">
<li><a href='/'>Home</a></li>
<li><a href='/newz/'>News database</a></li>
<li><a href='/blog/'>Blog</a></li>
<li><a href='/contact/'>Contact</a></li>
</ul>
</div> <!--well bs-sidebar affix-->
</div> <!--col-sm-2-->
<div class="col-sm-10">
<div class='container-fluid'>
<br><br>
<font color="#2E2C2B">
{% block content %}
{% endblock %}
{% block fool %}
{% endblock fool %}
</font>
</div>
</div>
</div>
</div>
<footer>
<div class="container-fluid" style='margin-left:15px'>
<p><a href="#" target="blank">Contact</a> | <a href="#" target="blank">LinkedIn</a> | <a href="#" target="blank">Twitter</a> | <a href="#" target="blank">Google+</a></p>
</div>
</footer>
</body>
</html>
</code></pre>
| 1 | 2016-09-29T13:29:18Z | 39,772,393 | <p>If you want, you can use <code>replace()</code> to strip the character from your string.</p>
<pre><code>b = articles_times.text.replace('\u200E', '')
</code></pre>
<p>The reason that you see <code>\u200E</code> in the rendered html instead of <code>&lrm;</code> is that you are including the tuple <code>{{ thefinallist }}</code> in your template. That means Python calls <code>repr()</code> on the tuple, and you see <code>\u200E</code>. It also means you see the parentheses, for example <code>('headline', '\u200e1 hour ago')</code>.</p>
<p>If you display the elements of the tuple separately, then you will get <code>&lrm;</code> in the template instead. For example, you could do:</p>
<pre><code>{% for headline, timeago in list %}
<div class="col-md-15">
<li class="list-group-item">{{ headline }} {{ timeago }}
</li>
</div>
{% endfor %}
</code></pre>
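<p>For completeness, a minimal sketch of the view-side cleanup, reusing the loop and variable names from the question (only the <code>replace()</code> call is new):</p>
<pre><code>for articles_times in soup.find_all('span', 'al-attribution-timestamp'):
    # strip the left-to-right mark before storing the timestamp
    b = articles_times.text.replace(u'\u200e', '')
    list2.append(b)
</code></pre>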
| 1 | 2016-09-29T13:58:06Z | [
"javascript",
"python",
"django"
]
|
How to set up different desktop launchers for anaconda spyder using conda environments? | 39,771,849 | <p>I have created a virtual machine for use in an upcoming data science lecture. I installed CentOS minimal into VirtualBox and included an XFCE desktop. I have also installed two analytics-stack Python versions (2.7 and 3.5) using Miniconda and the conda environment manager.</p>
<p>I set up another environment in addition to the default using the following command:</p>
<pre><code>conda create --name py3datascience numpy pandas scikit-learn matplotlib beautifulsoup4 cairo hdf5 jupyter nltk patsy pytables pystan pymc requests sas7bdat seaborn sqlite statsmodels spyder
</code></pre>
<p>As expected, I now have an additional environment called py3datascience. I can launch Spyder (connected to this environment) from the terminal using the following:</p>
<pre><code>source activate py3datascience
spyder
</code></pre>
<p>And everything works as expected. I would like to create a desktop shortcut to launch Spyder in this specific environment (and another desktop shortcut for the Python 2.7 I will install), but I have not been able to do it. </p>
<p>I created a shell script with following commands:</p>
<pre><code>source activate py3datascience
spyder
</code></pre>
<p>and placed it in /home/user/scripts. When I run this script from the terminal, it works as expected (Spyder launches in the correct environment). I tried creating a *.desktop file that would run this script, but it does not work. It fails to launch Spyder, and it also fails to give me an error message. Here are the contents of my failed desktop file:</p>
<pre><code>[Desktop Entry]
Version=1.0
Type=Application
Name=SpyderPy3
Comment=
Exec=/home/user1/scripts/SpyderPy3.sh
Icon=
Path=
Terminal=false
StartupNotify=true
</code></pre>
<p>I found a .desktop file in the appropriate environment folder that was created by the conda commands, it is here: </p>
<p>/home/user1/anaconda/envs/py3datascience/share/applications/spyder3.desktop</p>
<pre><code>[Desktop Entry]
Version=1.0
Type=Application
Name=SpyderPy3
Comment=
Exec=/home/user1/scripts/SpyderPy3.sh
Icon=
Path=
Terminal=false
StartupNotify=true
</code></pre>
<p>My lack of Linux skills is likely showing, so I am seeking help on how to proceed. The basic question is: after using conda to set up different environments, how can I create desktop or panel shortcuts (in Linux, specifically CentOS with XFCE) to the appropriate Spyder installation? The following commands in the terminal accomplish this, but I need a panel or desktop shortcut:</p>
<pre><code>source activate py3datascience
spyder
</code></pre>
| 0 | 2016-09-29T13:35:43Z | 39,799,876 | <p>After a bit of research, I figured out my issue. </p>
<p>I needed to create a *.desktop file with the following contents:</p>
<pre><code>[Desktop Entry]
Version=1.0
Type=Application
Name=Spyder py3
Comment=
Exec=xfce4-terminal -e "bash -c 'cd /home/user1/anaconda/bin;source activate py3ds;spyder'"
Icon=
Path=
Terminal=true
StartupNotify=false
</code></pre>
<p>A bit of explanation...if I open a terminal shell and type in the following commands, the environment is activated and then Spyder is launched: </p>
<pre><code>source activate py3ds
spyder
</code></pre>
<p>I did not need to be in any specific directory for this to work. However, when creating the .desktop file and entering the shell commands, I needed to first cd to the appropriate directory and then run source activate. Perhaps I also could have specified the full path in the source activate command instead. </p>
| 0 | 2016-09-30T20:49:55Z | [
"python",
"linux",
"anaconda",
"spyder",
"conda"
]
|
Scraping: add data stored as a picture to CSV file in python 3.5 | 39,771,874 | <p>For this project, I am scraping data from a database and attempting to export this data to a spreadsheet for further analysis. (Previously posted <a href="http://stackoverflow.com/questions/39762565/export-beautifulsoup-scraping-results-to-csv-scrape-include-image-values-in-c">here</a>--thanks for the help over there reworking my code!)</p>
<p>I previously thought that finding the winning candidate in the table could be simplified by just always selecting the first name that appears in the table, as I thought the "winners" always appeared first. However, this is not the case. </p>
<p>Whether or not a candidate was elected is stored in the form of a picture in the first column. How would I scrape this and store it in a spreadsheet?</p>
<p>It's located under < td headers > as:</p>
<pre><code><img src="/WPAPPS/WPR/Content/Images/selected_box.gif" alt="contestant won this nomination contest">
</code></pre>
<p>My question is: how would I use BeautifulSoup to parse the HTML table and extract a value from the first column, which is stored in the table as an image rather than text?</p>
<p>I had an idea for attempting some sort of Boolean sorting measure, but I am unsure of how to implement it.</p>
<p>My code is below:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import re
import csv
url = "http://www.elections.ca/WPAPPS/WPR/EN/NC?province=-1&distyear=2013&district=-1&party=-1&pageno={}&totalpages=55&totalcount=1368&secondaryaction=prev25"
rows = []
for i in range(1, 56):
print(i)
r = requests.get(url.format(i))
data = r.text
cat = BeautifulSoup(data, "html.parser")
links = []
for link in cat.find_all('a', href=re.compile('selectedid=')):
links.append("http://www.elections.ca" + link.get('href'))
for link in links:
r = requests.get(link)
data = r.text
cat = BeautifulSoup(data, "html.parser")
lspans = cat.find_all('span')
cs = cat.find_all("table")[0].find_all("td", headers="name/1")
elected = []
for c in cs:
elected.append(c.contents[0].strip())
rows.append([
lspans[2].contents[0],
lspans[3].contents[0],
lspans[5].contents[0],
re.sub("[\n\r/]", "", cat.find("legend").contents[2]).strip(),
re.sub("[\n\r/]", "", cat.find_all('div', class_="group")[2].contents[2]).strip().encode('latin-1'),
len(elected),
cs[0].contents[0].strip().encode('latin-1')
])
with open('filename.csv', 'w', newline='') as f_output:
csv_output = csv.writer(f_output)
csv_output.writerows(rows)
</code></pre>
<p>Really--any tips would be GREATLY appreciated. Thanks a lot.</p>
| 2 | 2016-09-29T13:37:07Z | 39,778,495 | <p>This snippet will print the name of the elected person:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
req = requests.get("http://www.elections.ca/WPAPPS/WPR/EN/NC/Details?province=-1&distyear=2013&district=-1&party=-1&selectedid=8548")
page_source = BeautifulSoup(req.text, "html.parser")
table = page_source.find("table",{"id":"gvContestants/1"})
for row in table.find_all("tr"):
if not row.find("img"):
continue
if "selected_box.gif" in row.find("img").get("src"):
print(''.join(row.find("td",{"headers":"name/1"}).text.split()))
</code></pre>
<p>As a side note, please refrain from using meaningless variable names. It hurts the eyes of anyone trying to help you, and it will hurt you in the future when looking at the code again.</p>
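<p>If you also want that flag in the CSV from the question, here is a small sketch (variable names are illustrative) that records a boolean per contestant row, reusing the image check above:</p>
<pre><code>elected_flags = []
for row in table.find_all("tr"):
    img = row.find("img")
    # True when the row carries the "contestant won" image, False otherwise
    elected_flags.append(bool(img and "selected_box.gif" in img.get("src", "")))
</code></pre>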
| 2 | 2016-09-29T19:28:01Z | [
"python",
"csv",
"web-scraping",
"beautifulsoup",
"python-3.5"
]
|
How to extract arrays from an arranged numpy array? | 39,771,934 | <p>This is a question related to the post <a href="http://stackoverflow.com/questions/39673377/how-to-extract-rows-from-an-numpy-array-based-on-the-content/39674145?noredirect=1#comment66837793_39674145">How to extract rows from an numpy array based on the content?</a>, and I used the following code to split rows based on the content in a column:</p>
<pre><code>np.split(sorted_a,np.unique(sorted_a[:,1],return_index=True)[1][1:])
</code></pre>
<p>The code worked fine, but when I later tried it on other cases (as below), I found that there could be wrong results (as shown in CASE#1).</p>
<pre><code>CASE#1
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
OUTPUT#1
[]
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
the result I want
[[2748309, 246211, 1],
[2748309, 246211, 2]]
[[2747481, 246201, 54]]
</code></pre>
<p>I think the code may only split rows successfully when the numbers are small (with fewer digits), and I don't know how to solve the problem shown in CASE#1 above. So in this post, I have two small related questions:</p>
<p><strong>1. How can I split rows containing larger numbers (as shown in CASE#1)?</strong></p>
<p><strong>2. How can I handle (split) data covering both cases: #1 rows with the same element in the second column but different in the first, and #2 rows with the same element in the first column but different in the second? (That is, can Python distinguish rows considering the contents of both the first and second columns simultaneously?)</strong></p>
<p>Feel free to give me suggestions, thank you.</p>
<p><strong>Update#1</strong></p>
<p>The <code>ravel_multi_index</code> function could handle this kind of task with integer arrays, but how do I deal with arrays containing floats?</p>
 | 1 | 2016-09-29T13:39:35Z | 39,772,874 | <p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author) contains functionality to efficiently perform these types of operations:</p>
<pre><code>import numpy_indexed as npi
npi.group_by(a[:, :2]).split(a)
</code></pre>
<p>It has decent test coverage, so I'd be surprised if it tripped on your seemingly straightforward test case.</p>
| 0 | 2016-09-29T14:19:13Z | [
"python",
"arrays",
"numpy"
]
|
How to extract arrays from an arranged numpy array? | 39,771,934 | <p>This is a question related to the post <a href="http://stackoverflow.com/questions/39673377/how-to-extract-rows-from-an-numpy-array-based-on-the-content/39674145?noredirect=1#comment66837793_39674145">How to extract rows from an numpy array based on the content?</a>, and I used the following code to split rows based on the content in a column:</p>
<pre><code>np.split(sorted_a,np.unique(sorted_a[:,1],return_index=True)[1][1:])
</code></pre>
<p>The code worked fine, but when I later tried it on other cases (as below), I found that there could be wrong results (as shown in CASE#1).</p>
<pre><code>CASE#1
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
OUTPUT#1
[]
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
the result I want
[[2748309, 246211, 1],
[2748309, 246211, 2]]
[[2747481, 246201, 54]]
</code></pre>
<p>I think the code may only split rows successfully when the numbers are small (with fewer digits), and I don't know how to solve the problem shown in CASE#1 above. So in this post, I have two small related questions:</p>
<p><strong>1. How can I split rows containing larger numbers (as shown in CASE#1)?</strong></p>
<p><strong>2. How can I handle (split) data covering both cases: #1 rows with the same element in the second column but different in the first, and #2 rows with the same element in the first column but different in the second? (That is, can Python distinguish rows considering the contents of both the first and second columns simultaneously?)</strong></p>
<p>Feel free to give me suggestions, thank you.</p>
<p><strong>Update#1</strong></p>
<p>The <code>ravel_multi_index</code> function could handle this kind of task with integer arrays, but how do I deal with arrays containing floats?</p>
 | 1 | 2016-09-29T13:39:35Z | 39,774,794 | <p>If I apply that split line directly to your array, I get your result: an empty array plus the original.</p>
<pre><code>In [136]: np.split(a,np.unique(a[:,1],return_index=True)[1][1:])
Out[136]:
[array([], shape=(0, 3), dtype=int32),
array([[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]])]
</code></pre>
<p>But if I first sort the array on the 2nd column, as specified in the linked answer, I get the desired answer, with the 2 arrays switched:</p>
<pre><code>In [141]: sorted_a=a[np.argsort(a[:,1])]
In [142]: sorted_a
Out[142]:
array([[2747481, 246201, 54],
[2748309, 246211, 1],
[2748309, 246211, 2]])
In [143]: np.split(sorted_a,np.unique(sorted_a[:,1],return_index=True)[1][1:])
Out[143]:
[array([[2747481, 246201, 54]]),
array([[2748309, 246211, 1],
[2748309, 246211, 2]])]
</code></pre>
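<p>To address question #2 (splitting on both the first and second columns at once), a sketch, not tested against every edge case, is to lexsort on both columns and split wherever either column changes:</p>
<pre><code>order = np.lexsort((a[:, 1], a[:, 0]))   # primary key: column 0, secondary key: column 1
sorted_a = a[order]
# a new group starts wherever either of the first two columns changes
change = np.any(np.diff(sorted_a[:, :2], axis=0) != 0, axis=1)
out = np.split(sorted_a, np.flatnonzero(change) + 1)
</code></pre>
<p>Because this only compares values for equality, it works for float columns as well.</p>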
| 0 | 2016-09-29T15:49:07Z | [
"python",
"arrays",
"numpy"
]
|
How to extract arrays from an arranged numpy array? | 39,771,934 | <p>This is a question related to the post <a href="http://stackoverflow.com/questions/39673377/how-to-extract-rows-from-an-numpy-array-based-on-the-content/39674145?noredirect=1#comment66837793_39674145">How to extract rows from an numpy array based on the content?</a>, and I used the following code to split rows based on the content in a column:</p>
<pre><code>np.split(sorted_a,np.unique(sorted_a[:,1],return_index=True)[1][1:])
</code></pre>
<p>The code worked fine, but when I later tried it on other cases (as below), I found that there could be wrong results (as shown in CASE#1).</p>
<pre><code>CASE#1
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
OUTPUT#1
[]
[[2748309, 246211, 1],
[2748309, 246211, 2],
[2747481, 246201, 54]]
the result I want
[[2748309, 246211, 1],
[2748309, 246211, 2]]
[[2747481, 246201, 54]]
</code></pre>
<p>I think the code may only split rows successfully when the numbers are small (with fewer digits), and I don't know how to solve the problem shown in CASE#1 above. So in this post, I have two small related questions:</p>
<p><strong>1. How can I split rows containing larger numbers (as shown in CASE#1)?</strong></p>
<p><strong>2. How can I handle (split) data covering both cases: #1 rows with the same element in the second column but different in the first, and #2 rows with the same element in the first column but different in the second? (That is, can Python distinguish rows considering the contents of both the first and second columns simultaneously?)</strong></p>
<p>Feel free to give me suggestions, thank you.</p>
<p><strong>Update#1</strong></p>
<p>The <code>ravel_multi_index</code> function could handle this kind of task with integer arrays, but how do I deal with arrays containing floats?</p>
 | 1 | 2016-09-29T13:39:35Z | 39,776,641 | <p>Here's an approach that treats the pair of elements from each row as an indexing tuple:</p>
<pre><code># Convert to linear index equivalents
lidx = np.ravel_multi_index(arr[:,:2].T,arr[:,:2].max(0)+1)
# Get sorted indices of lidx. Using those get shifting indices.
# Split along sorted input array along axis=0 using those.
sidx = lidx.argsort()
out = np.split(arr[sidx],np.unique(lidx[sidx],return_index=1)[1][1:])
</code></pre>
<p>Sample run -</p>
<pre><code>In [34]: arr
Out[34]:
array([[2, 7, 5],
[3, 4, 6],
[2, 3, 5],
[2, 7, 7],
[4, 4, 7],
[3, 4, 6],
[2, 8, 5]])
In [35]: out
Out[35]:
[array([[2, 3, 5]]), array([[2, 7, 5],
[2, 7, 7]]), array([[2, 8, 5]]), array([[3, 4, 6],
[3, 4, 6]]), array([[4, 4, 7]])]
</code></pre>
<p>For detailed info on converting a group of elements into an indexing tuple, please refer to <a href="http://stackoverflow.com/a/38674038/3293881"><code>this post</code></a>.</p>
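<p>Regarding the float question in Update#1: <code>ravel_multi_index</code> needs integer indices, but the same grouping can be obtained for float columns by factorizing each column first. A sketch, assuming the same <code>arr</code> layout as above:</p>
<pre><code>_, c0 = np.unique(arr[:, 0], return_inverse=True)
_, c1 = np.unique(arr[:, 1], return_inverse=True)
lidx = c0 * (c1.max() + 1) + c1   # one integer id per (column 0, column 1) pair
sidx = lidx.argsort()
out = np.split(arr[sidx], np.unique(lidx[sidx], return_index=1)[1][1:])
</code></pre>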
| 1 | 2016-09-29T17:34:32Z | [
"python",
"arrays",
"numpy"
]
|
File handling in Python for beginners | 39,771,972 | <p>Hello I am new to python and I am encountering this error :</p>
<blockquote>
<p>C:\Users\Dylan Galea\Desktop\Modelling and CS>python file_handling.py</p>
<p>File "file_handling.py", line 4</p>
<p>np.savetxt(\Users\Dylan Galea\Desktop\Modelling and </p>
<p>CS\test.txt,twoDarray,delimeter='\t')
^
SyntaxError: unexpected character after line continuation character</p>
</blockquote>
<p>my code is this :</p>
<pre><code>import numpy as np
twoDarray =np.array([[1,2,3],[4,5,6]])
np.savetxt(\Users\Dylan Galea\Desktop\Modelling and CS\test.txt,twoDarray,delimeter='\t')
</code></pre>
<p>can anyone help please ?</p>
 | 0 | 2016-09-29T13:41:05Z | 39,772,071 | <p>Please use the code syntax of Stack Overflow so we can read your code more easily.</p>
<p>It seems like you spelled <code>delimiter</code> wrong.</p>
| 0 | 2016-09-29T13:44:51Z | [
"python",
"file",
"numpy"
]
|
File handling in Python for beginners | 39,771,972 | <p>Hello I am new to python and I am encountering this error :</p>
<blockquote>
<p>C:\Users\Dylan Galea\Desktop\Modelling and CS>python file_handling.py</p>
<p>File "file_handling.py", line 4</p>
<p>np.savetxt(\Users\Dylan Galea\Desktop\Modelling and </p>
<p>CS\test.txt,twoDarray,delimeter='\t')
^
SyntaxError: unexpected character after line continuation character</p>
</blockquote>
<p>my code is this :</p>
<pre><code>import numpy as np
twoDarray =np.array([[1,2,3],[4,5,6]])
np.savetxt(\Users\Dylan Galea\Desktop\Modelling and CS\test.txt,twoDarray,delimeter='\t')
</code></pre>
<p>can anyone help please ?</p>
 | 0 | 2016-09-29T13:41:05Z | 39,772,092 | <p>Hi and welcome to StackOverflow. Please use the tools StackOverflow provides to properly structure your post (e.g. mark code etc.) and make sure the indentation and newlines of the Python code are correct, since they are part of the syntax.</p>
<p>Regarding the question, it's probably an issue with your path, which is not marked as a string (it must be enclosed in quotation marks) and contains backslashes, which are special escape characters in Python. Depending on your operating system (Mac OS, Windows, Linux, etc.) you might need to use forward slashes or double(!) backslashes.</p>
<p>Try this:</p>
<pre><code>twoDarray = np.array([[1,2,3],[4,5,6]])
np.savetxt("/Users/Dylan Galea/Desktop/Modelling and CS/test.txt", twoDarray,delimeter='\t')
</code></pre>
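<p>On Windows (the traceback in the question shows the file under <code>C:\Users\...</code>), a sketch of the same call keeping backslashes, either as a raw string or with doubled backslashes:</p>
<pre><code>np.savetxt(r'C:\Users\Dylan Galea\Desktop\Modelling and CS\test.txt', twoDarray, delimiter='\t')
# or, equivalently, with escaped backslashes:
np.savetxt('C:\\Users\\Dylan Galea\\Desktop\\Modelling and CS\\test.txt', twoDarray, delimiter='\t')
</code></pre>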
| 0 | 2016-09-29T13:45:42Z | [
"python",
"file",
"numpy"
]
|
File handling in Python for beginners | 39,771,972 | <p>Hello I am new to python and I am encountering this error :</p>
<blockquote>
<p>C:\Users\Dylan Galea\Desktop\Modelling and CS>python file_handling.py</p>
<p>File "file_handling.py", line 4</p>
<p>np.savetxt(\Users\Dylan Galea\Desktop\Modelling and </p>
<p>CS\test.txt,twoDarray,delimeter='\t')
^
SyntaxError: unexpected character after line continuation character</p>
</blockquote>
<p>my code is this :</p>
<pre><code>import numpy as np
twoDarray =np.array([[1,2,3],[4,5,6]])
np.savetxt(\Users\Dylan Galea\Desktop\Modelling and CS\test.txt,twoDarray,delimeter='\t')
</code></pre>
<p>can anyone help please ?</p>
| 0 | 2016-09-29T13:41:05Z | 39,772,121 | <p>Your file name should be a string.</p>
<pre><code>np.savetxt(r'\Users\Dylan Galea\Desktop\Modelling and CS\test.txt', twoDarray, delimiter='\t')
</code></pre>
| 0 | 2016-09-29T13:46:36Z | [
"python",
"file",
"numpy"
]
|
Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm? | 39,771,998 | <p>I haven't been able to find anything and I am not sure if this is the place I should be asking... </p>
<p>But I want to include the path to my interpreter in every new project I create. The reason is that I develop locally and sync my files to a Linux server. It is annoying having to manually type <code>#! /users/w/x/y/z/bin/python</code> every time I create a new project. It would also be nice to include certain imports I use 90% of the time.</p>
<p>I got to thinking: in the program I produce music with, you can set a default project file. Meaning, when you click "new project" it is set up how you have configured it (including certain virtual instruments, effects, etc.).</p>
<p>Is it possible to do this or something similar with an <strong>IDE</strong>, and more specifically, <strong>PyCharm</strong>?</p>
 | 0 | 2016-09-29T13:42:07Z | 39,772,172 | <p>Click on the top-right tab with your project name, then go to <code>Edit Configurations</code>, and there you can change the interpreter.</p>
| 0 | 2016-09-29T13:49:11Z | [
"python",
"linux",
"pycharm"
]
|
Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm? | 39,771,998 | <p>I haven't been able to find anything and I am not sure if this is the place I should be asking... </p>
<p>But I want to include the path to my interpreter in every new project I create. The reason is that I develop locally and sync my files to a Linux server. It is annoying having to manually type <code>#! /users/w/x/y/z/bin/python</code> every time I create a new project. It would also be nice to include certain imports I use 90% of the time.</p>
<p>I got to thinking: in the program I produce music with, you can set a default project file. Meaning, when you click "new project" it is set up how you have configured it (including certain virtual instruments, effects, etc.).</p>
<p>Is it possible to do this or something similar with an <strong>IDE</strong>, and more specifically, <strong>PyCharm</strong>?</p>
 | 0 | 2016-09-29T13:42:07Z | 39,772,630 | <p>You should open <strong>File</strong> in the main menu and click <strong>Default Settings</strong>, expand <strong>Editor</strong>, then click <strong>File and Code Templates</strong>. In the <strong>Files</strong> tab, click the <strong>+</strong> sign to create a new <strong>Template</strong>, give the new template a name and extension, and put your template content (in your case <code>#! /users/w/x/y/z/bin/python</code>) in the editor box, then <strong>Apply</strong> and <strong>OK</strong>. After that, every time you create a new file, select that template to include the default lines you want. You can make any number of templates.</p>
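<p>For example, the template body could hold the shebang from the question plus whatever imports you use most of the time (the path and the modules below are just placeholders):</p>
<pre><code>#! /users/w/x/y/z/bin/python
# -*- coding: utf-8 -*-
import os
import sys
</code></pre>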
| 1 | 2016-09-29T14:07:47Z | [
"python",
"linux",
"pycharm"
]
|
tensorflow sequence_loss_by_example weight | 39,772,033 | <p>I want to make weight tensors for tf.nn.seq2seq.sequence_loss_by_example.
I'm using an RNN-LSTM with a maximum of 100 steps, and zero-padded each batch item to the maximum number of steps (100).</p>
<p>Shapes of my logits and labels are like this.</p>
<pre><code>Tensor("dropout/mul_1:0", shape=(50000, 168), dtype=float32) # logits
Tensor("ArgMax:0", shape=(500, 100), dtype=int64) # labels
</code></pre>
<p>50000 is 500 (batch_size) * 100 (num_steps), 168 is the number of classes, and I'm passing them to sequence_loss_by_example like in the ptb_word_lm.py code provided by TensorFlow: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py</a></p>
<pre><code>loss = tf.nn.seq2seq.sequence_loss_by_example(
[logits],
[tf.reshape(labels, [-1])],
[tf.ones([cf.batch_size * cf.max_time_steps], dtype=tf.float32)])
</code></pre>
<p>However, because my logits and labels are zero-padded, the losses are incorrect. Following this answer, <a href="http://stackoverflow.com/a/38502547/3974129">http://stackoverflow.com/a/38502547/3974129</a>, I tried to change the tf.ones([..]) part to weight tensors, but that answer's setup is too different from mine.</p>
<p>I have the step length information like below, and I feed them when training.</p>
<pre><code>self._x_len = tf.placeholder(tf.int64, shape=[self._batch_size])
</code></pre>
<p>For example, I feed the length information [3, 10, 2, 3, 1] for a batch of size 5. These lengths are also used for sequence_length in tf.nn.rnn().</p>
<p>One way I can think of is to iterate over x_len and use each item as the index of the last 1 in each weight row.</p>
<blockquote>
<p>[0 0 0 0 0 .... 0 0 0] => [1 1 1 ... 1 0 0 0 0]</p>
<p>weight tensor with size of 100 (maximum time step)</p>
</blockquote>
<p>But as you know, I cannot use the values inside the tensor as indices, because they have not been fed yet.</p>
<p>How can I make weight tensors like this?</p>
 | 0 | 2016-09-29T13:43:22Z | 39,825,560 | <p>Using <a href="http://stackoverflow.com/questions/34128104/tensorflow-creating-mask-of-varied-lengths">tensorflow creating mask of varied lengths</a> and <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/dynamic_rnn.py" rel="nofollow">https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/dynamic_rnn.py</a>, the problem is solved. I can build indices up to the maximum number of steps and mask them.</p>
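<p>For reference, a minimal sketch of the masking trick from the linked answer, adapted to the variable names in the question (treat it as an illustration; it has not been checked against a specific TensorFlow version):</p>
<pre><code># self._x_len: [batch_size] lengths, cf.max_time_steps: a Python int
steps = tf.cast(tf.range(0, cf.max_time_steps, 1), tf.int64)      # [max_steps]
mask = tf.less(tf.expand_dims(steps, 0),                          # [1, max_steps]
               tf.expand_dims(self._x_len, 1))                    # [batch, 1], broadcasts
weights = tf.reshape(tf.cast(mask, tf.float32), [-1])             # [batch * max_steps]

loss = tf.nn.seq2seq.sequence_loss_by_example(
    [logits],
    [tf.reshape(labels, [-1])],
    [weights])
</code></pre>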
| 0 | 2016-10-03T06:22:31Z | [
"python",
"numpy",
"tensorflow"
]
|
Join "colums" of list of lists into "rows" of list of lists | 39,772,082 | <p>Suppose I have several lists of length n:</p>
<pre><code>a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
d = [-1, -2, -3]
</code></pre>
<p>I put those lists into a list:</p>
<pre><code>x = [a, b, c, d]
</code></pre>
<p>Now I'd like to end up with the following list:</p>
<pre><code>y = [[1, 4, 7, -1], [2, 5, 8, -2], [3, 6, 9, -3]]
</code></pre>
<p>What is a quick and pythonic way to do so? (To get from <code>x</code> to <code>y</code>.)</p>
<p>The entries could be anything; I use numbers just because they are easy to visualize.</p>
 | 1 | 2016-09-29T13:45:13Z | 39,772,156 | <p>There is a very simple and pythonic way to "reverse-zip" a list in this way. Simply use the <code>zip</code> function and the unpacking operator <code>*</code>:</p>
<pre><code>>>> y = zip(*x)
>>> list(y)
[(1, 4, 7, -1), (2, 5, 8, -2), (3, 6, 9, -3)]
</code></pre>
<p>For more information about what is going on, see <a href="http://stackoverflow.com/questions/13635032/what-is-the-inverse-function-of-zip-in-python">What is the inverse function of zip in python?</a></p>
<p>If it is important that it be a list of lists, instead of a list of tuples, you can do the following:</p>
<pre><code>>>> [list(i) for i in zip(*x)]
[[1, 4, 7, -1], [2, 5, 8, -2], [3, 6, 9, -3]]
</code></pre>
| 3 | 2016-09-29T13:48:28Z | [
"python"
]
|
Join "colums" of list of lists into "rows" of list of lists | 39,772,082 | <p>Suppose I have several lists of length n:</p>
<pre><code>a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
d = [-1, -2, -3]
</code></pre>
<p>I put those lists into a list:</p>
<pre><code>x = [a, b, c, d]
</code></pre>
<p>Now I'd like to end up with the following list:</p>
<pre><code>y = [[1, 4, 7, -1], [2, 5, 8, -2], [3, 6, 9, -3]]
</code></pre>
<p>What is a quick and pythonic way to do so? (To get from <code>x</code> to <code>y</code>.)</p>
<p>The entries could be anything; I use numbers just because they are easy to visualize.</p>
| 1 | 2016-09-29T13:45:13Z | 39,772,187 | <p>You can do it in one line of code:</p>
<pre><code>[[z[i] for z in x] for i in range(len(x[0]))]
</code></pre>
<p>What it does is iterate over all indexes (<code>i</code>), then iterate over all the lists in <code>x</code> (<code>z</code>), and build a new list for each index.</p>
| 3 | 2016-09-29T13:49:57Z | [
"python"
]
|
Join "colums" of list of lists into "rows" of list of lists | 39,772,082 | <p>Suppose I have several lists of length n:</p>
<pre><code>a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
d = [-1, -2, -3]
</code></pre>
<p>I put those lists into a list:</p>
<pre><code>x = [a, b, c, d]
</code></pre>
<p>Now I'd like to end up with the following list:</p>
<pre><code>y = [[1, 4, 7, -1], [2, 5, 8, -2], [3, 6, 9, -3]]
</code></pre>
<p>What is a quick and pythonic way to do so? (To get from <code>x</code> to <code>y</code>.)</p>
<p>The entries could be anything; I use numbers just because they are easy to visualize.</p>
 | 1 | 2016-09-29T13:45:13Z | 39,772,217 | <p>You can try <code>zip</code>:</p>
<pre><code>>>> result = zip(a,b,c,d) # this gives the result as a list of tuples [(1, 4, 7, -1), (2, 5, 8, -2), (3, 6, 9, -3)]
>>> result_list = [list(elem) for elem in result] # convert the list of tuples to a list of lists
>>> result_list
[[1, 4, 7, -1], [2, 5, 8, -2], [3, 6, 9, -3]]
</code></pre>
| 0 | 2016-09-29T13:51:14Z | [
"python"
]
|
Get Exceptions from popen in python | 39,772,136 | <p>I executed this code in python: (test.py)</p>
<pre><code>from subprocess import Popen
p = Popen("file.bat").wait()
</code></pre>
<p>Here is file.bat:</p>
<pre><code>@echo off
start c:\python27\python.exe C:\Test\p1.py %*
start c:\python27\python.exe C:\Test\p2.py %*
pause
</code></pre>
<p>Here is p1.py:</p>
<pre><code>This line is error
print "Hello word"
</code></pre>
<p>p2.py is not interesting</p>
<p>I want to see the exception (not only compile errors) from p1.py by running test.py.
How can I do this?</p>
<p>Thanks!</p>
| 0 | 2016-09-29T13:47:22Z | 39,772,262 | <p>Here's how I got it working:
<code>test.py</code></p>
<pre><code>from subprocess import Popen
p = Popen(["./file.sh"]).wait()
</code></pre>
<p>Make sure to add the <code>[]</code> around file, as well as the <code>./</code>. You can also add arguments, like so:
<code>["./file.sh", "someArg"]</code>
Note that I am not on Windows, but this fixed it on Ubuntu. Please comment if you are still having issues</p>
<p>EDIT:
I think the real solution is: <a href="http://stackoverflow.com/questions/912830/using-subprocess-to-run-python-script-on-windows">Using subprocess to run Python script on Windows</a></p>
<p>This way you can run a Python script from Python, while still using Popen.</p>
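<p>If the goal is to capture the traceback (for example the SyntaxError from p1.py) inside test.py, one sketch is to skip the .bat and run the script directly with Popen, reading its stderr (paths as in the question):</p>
<pre><code>from subprocess import Popen, PIPE

p = Popen([r"c:\python27\python.exe", r"C:\Test\p1.py"], stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
if p.returncode != 0:
    print "p1.py failed with:"
    print err   # the child's traceback / SyntaxError text
</code></pre>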
| 0 | 2016-09-29T13:53:04Z | [
"python",
"subprocess",
"popen"
]
|
How to use SearchVector with multiple fields from a list of strings | 39,772,355 | <p>I don't have a lot of Python experience and can't figure out how to use a list of strings with the SearchVector functionality described <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/postgres/search/#searchvector" rel="nofollow">here</a>. I'm using Django 1.10 with PostgreSQL 9.6 and have verified that this works if I write the fields in manually. </p>
<p>This function is from a class-based view and its purpose is to receive a string from a form, then do a full text search against all CharFields in all Models within MyApp, then return the results (planning on using <a href="http://stackoverflow.com/questions/431628/how-to-combine-2-or-more-querysets-in-a-django-view">itertools</a> to combine multiple QuerySets once I get this working). I don't want to hardcode the fields because we will be adding new objects and new fields regularly - it's a sort of corporate middleware app where I am not able to master the data.</p>
<pre><code>def get_queryset(self):
searchTargets = defaultdict(list)
myModels = (obj for obj in apps.get_models()
if obj._meta.label.startswith('myapp')
and not obj._meta.label == 'myapp.ConfigItem')
for thing in myModels:
fieldNames = (x.name for x in thing._meta.fields)
for thisField in fieldNames:
if thing._meta.get_field(thisField).get_internal_type() == 'CharField':
searchTargets[thing._meta.object_name].append(thisField)
user_query=self.request.GET['user_query']
for key in searchTargets.iterkeys():
targetClass=class_for_name('myapp.models',key)
results = targetClass.objects.annotate(
search=SearchVector(searchTargets[key]),
).filter(search=user_query)
#only a single QuerySet will be returned.
return results
</code></pre>
<p>I've tried ','.join(MyList) but of course that just makes one big string. </p>
| 0 | 2016-09-29T13:56:40Z | 39,773,528 | <p>Expand the list <a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">using *</a> and SearchVector will happily search multiple fields. </p>
<pre><code>results = targetClass.objects.annotate(
search=SearchVector(*searchTargets[key]),
).filter(search=user_query)
</code></pre>
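<p>The <code>*</code> just unpacks the list into positional arguments, so for example (field names here are only illustrative):</p>
<pre><code>fields = ['name', 'description']
SearchVector(*fields)   # equivalent to SearchVector('name', 'description')
</code></pre>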
| 0 | 2016-09-29T14:49:16Z | [
"python",
"django",
"postgresql"
]
|
How to extract certain data while filtering out the ones including colspan? | 39,772,609 | <p>For example:</p>
<pre><code><tbody>
<tr><td colspan="2"><p>Unwanted Text 1</p>
</td>
</tr>
<tr><td><a href="http://www.example.com">Text 1</a></td>
<td>Nonesense 1</td>
</tr>
<tr><td><a href="http://www.example2.com">Text 2</a></td>
<td>Nonesense2</td>
</tr>
<tr><td colspan="2"><p class="second-title">Unwanted Text 2</p>
</td>
</tr>
</tbody>
</code></pre>
<p>I tried: </p>
<pre><code>g = soup.select('tr')
for x in g:
    print(x.contents[0].text)
</code></pre>
<p>Output:</p>
<pre><code>Unwanted Text 1
Text 1
Text 2
Unwanted Text 2
</code></pre>
<p>How can I only get "Text 1" and "Text 2" while omitting the other ones?</p>
| 2 | 2016-09-29T14:06:52Z | 39,772,694 | <p>You can match the <code>a</code> elements directly:</p>
<pre><code>for item in soup.select('tr > td > a'):
print(item.get_text())
</code></pre>
<p>Or, if you specifically want to skip rows with <code>td</code> elements having <code>colspan</code> attributes:</p>
<pre><code>for item in soup.select('tr'):
if item.find("td", colspan=True):
continue
print(item.td.get_text()) # get text of the first cell in the row
</code></pre>
| 2 | 2016-09-29T14:10:46Z | [
"python",
"web-scraping",
"beautifulsoup"
]
|
can python classes share a variable with a parent class | 39,772,625 | <p>I have a class named <code>SocialPlatform</code>:</p>
<pre><code>class SocialPlatform:
nb_post = 0
def __init__(self, data):
self.data = data
self._init_type()
self.id_post = self._init_id_post()
@classmethod
def _init_id_post(cls):
cls.nb_post += 1
return cls.nb_post
</code></pre>
<p>and three other classes that inherit from <code>SocialPlatform</code>:</p>
<pre><code>class Facebook(SocialPlatform):
_NAME = 'facebook'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Twitter(SocialPlatform):
_NAME = 'twitter'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Instagram(SocialPlatform):
_NAME = 'instagram'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
</code></pre>
<p>My idea was to increment <code>nb_post</code> each time an instance of <code>SocialPlatform</code> was created. I thought this variable was shared between all the classes that inherit from <code>SocialPlatform</code>.</p>
<p>So I tested that in my main function :</p>
<pre><code>def main():
post = Post() # an other class with stuff in it, it doesn't matter here
social_platform = {
'facebook': Facebook,
'twitter': Twitter,
'instagram': Instagram
}
while True:
try:
platform = social_platform[post.actual_post['header']['platform']](post.actual_post['data'])
except KeyError:
print 'Platform (%s) not implemented yet' % post.actual_post['header']['platform']
sys.exit(84)
print 'platform name : ' + platform.get_class_name()
print 'post id : ' + str(platform.id_post)
# platform.aff_content()
post.pop()
if not len(post.post):
break
print 'enter return to display next post'
while raw_input() != "": pass
</code></pre>
<p>but when I use this code I get this output :</p>
<pre><code>platform name : twitter
post id : 1
enter return to display next post
platform name : facebook
post id : 1
enter return to display next post
platform name : twitter
post id : 2
</code></pre>
<p>With this method <code>nb_post</code> is only shared between instances of the same class (Twitter, Facebook, or Instagram), not between all of them.</p>
<p>So my question is : is there any way to do this in python ?</p>
| 1 | 2016-09-29T14:07:43Z | 39,773,363 | <p>You have to explicitly reference the base class in the increment expression:</p>
<pre><code>def _init_id_post(cls):
cls.nb_post += 1
return cls.nb_post
</code></pre>
<p>Should be:</p>
<pre><code>def _init_id_post(cls):
SocialPlatform.nb_post += 1
return SocialPlatform.nb_post
</code></pre>
<p>As per:</p>
<p><a href="http://stackoverflow.com/questions/25590649/how-to-count-the-number-of-instance-of-a-custom-class">How to count the number of instance of a custom class?</a></p>
| 1 | 2016-09-29T14:41:14Z | [
"python",
"inheritance"
]
|
can python classes share a variable with a parent class | 39,772,625 | <p>I have a class named <code>SocialPlatform</code>:</p>
<pre><code>class SocialPlatform:
nb_post = 0
def __init__(self, data):
self.data = data
self._init_type()
self.id_post = self._init_id_post()
@classmethod
def _init_id_post(cls):
cls.nb_post += 1
return cls.nb_post
</code></pre>
<p>and three other classes that inherit from <code>SocialPlatform</code>:</p>
<pre><code>class Facebook(SocialPlatform):
_NAME = 'facebook'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Twitter(SocialPlatform):
_NAME = 'twitter'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Instagram(SocialPlatform):
_NAME = 'instagram'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
</code></pre>
<p>My idea was to increment <code>nb_post</code> each time an instance of <code>SocialPlatform</code> was created. I thought this variable was shared between all the classes that inherit from <code>SocialPlatform</code>.</p>
<p>So I tested that in my main function :</p>
<pre><code>def main():
post = Post() # an other class with stuff in it, it doesn't matter here
social_platform = {
'facebook': Facebook,
'twitter': Twitter,
'instagram': Instagram
}
while True:
try:
platform = social_platform[post.actual_post['header']['platform']](post.actual_post['data'])
except KeyError:
print 'Platform (%s) not implemented yet' % post.actual_post['header']['platform']
sys.exit(84)
print 'platform name : ' + platform.get_class_name()
print 'post id : ' + str(platform.id_post)
# platform.aff_content()
post.pop()
if not len(post.post):
break
print 'enter return to display next post'
while raw_input() != "": pass
</code></pre>
<p>but when I use this code I get this output :</p>
<pre><code>platform name : twitter
post id : 1
enter return to display next post
platform name : facebook
post id : 1
enter return to display next post
platform name : twitter
post id : 2
</code></pre>
<p>With this method <code>nb_post</code> is only shared between instances of the same class (Twitter, Facebook, or Instagram), not between all of them.</p>
<p>So my question is : is there any way to do this in python ?</p>
| 1 | 2016-09-29T14:07:43Z | 39,773,385 | <p>When an attribute is not found, it will be looked up on a higher level. When assigning, the most local level is used though.</p>
<p>For example:</p>
<pre><code>class Foo:
v = 1
a = Foo()
b = Foo()
print(a.v) # 1: v is not found in "a" instance, but found in "Foo" class
Foo.v = 2 # modifies Foo class's "v"
print(a.v) # 2: not found in "a" instance but found in class
a.v = 3 # creates an attribute on "a" instance, does not modify class "v"
print(Foo.v) # 2
print(b.v) # 2: not found in "b" instance but found in "Foo" class
</code></pre>
<p>Here <code>_init_id_post</code> was declared a <code>classmethod</code>, and you're doing <code>cls.nb_post = cls.nb_post + 1</code>.
In this expression, the second <code>cls.nb_post</code> occurrence will refer to <code>SocialPlatform</code> the first time; then you assign to the <code>cls</code> object, which refers to the <code>Twitter</code> or <code>Instagram</code> class, not <code>SocialPlatform</code>.
When you call it again on the same class, the second <code>cls.nb_post</code> occurrence will not refer to <code>SocialPlatform</code>, since you created the attribute at the level of the <code>Twitter</code> class (for example).
<p>The solution is not to use <code>cls</code> but to use <code>SocialPlatform.nb_post += 1</code> (and make it a <code>@staticmethod</code>).</p>
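<p>Concretely, a sketch of that change applied to the class from the question:</p>
<pre><code>class SocialPlatform:
    nb_post = 0

    @staticmethod
    def _init_id_post():
        SocialPlatform.nb_post += 1
        return SocialPlatform.nb_post
</code></pre>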
| 1 | 2016-09-29T14:42:35Z | [
"python",
"inheritance"
]
|
can python classes share a variable with a parent class | 39,772,625 | <p>I have a class named <code>SocialPlatform</code>:</p>
<pre><code>class SocialPlatform:
nb_post = 0
def __init__(self, data):
self.data = data
self._init_type()
self.id_post = self._init_id_post()
@classmethod
def _init_id_post(cls):
cls.nb_post += 1
return cls.nb_post
</code></pre>
<p>and three other classes that inherit from <code>SocialPlatform</code>:</p>
<pre><code>class Facebook(SocialPlatform):
_NAME = 'facebook'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Twitter(SocialPlatform):
_NAME = 'twitter'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Instagram(SocialPlatform):
_NAME = 'instagram'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
</code></pre>
<p>My idea was to increment <code>nb_post</code> each time an instance of <code>SocialPlatform</code> was created. I thought this variable was shared between all the classes that inherit from <code>SocialPlatform</code>.</p>
<p>So I tested that in my main function :</p>
<pre><code>def main():
post = Post() # an other class with stuff in it, it doesn't matter here
social_platform = {
'facebook': Facebook,
'twitter': Twitter,
'instagram': Instagram
}
while True:
try:
platform = social_platform[post.actual_post['header']['platform']](post.actual_post['data'])
except KeyError:
print 'Platform (%s) not implemented yet' % post.actual_post['header']['platform']
sys.exit(84)
print 'platform name : ' + platform.get_class_name()
print 'post id : ' + str(platform.id_post)
# platform.aff_content()
post.pop()
if not len(post.post):
break
print 'enter return to display next post'
while raw_input() != "": pass
</code></pre>
<p>but when I use this code I get this output :</p>
<pre><code>platform name : twitter
post id : 1
enter return to display next post
platform name : facebook
post id : 1
enter return to display next post
platform name : twitter
post id : 2
</code></pre>
<p>With this method <code>nb_post</code> is only shared between instances of the same class (Twitter, Facebook, or Instagram), not between all of them.</p>
<p>So my question is : is there any way to do this in python ?</p>
| 1 | 2016-09-29T14:07:43Z | 39,773,498 | <pre><code> class A():
n = 0
def __init__(self):
A.n += 1
class B(A):
def __init__(self):
super(B, self).__init__()
class C(A):
def __init__(self):
super(C, self).__init__()
a = A()
print(a.n) #prints 1
b = B()
print(a.n) #prints 2
c = C()
print(a.n) #prints 3
</code></pre>
<p>I think you can figure out the rest by yourself. Good luck!</p>
| 1 | 2016-09-29T14:48:10Z | [
"python",
"inheritance"
]
|
can python classes share a variable with a parent class | 39,772,625 | <p>I have a class named <code>SocialPlatform</code>:</p>
<pre><code>class SocialPlatform:
nb_post = 0
def __init__(self, data):
self.data = data
self._init_type()
self.id_post = self._init_id_post()
@classmethod
def _init_id_post(cls):
cls.nb_post += 1
return cls.nb_post
</code></pre>
<p>and three other classes that inherit from <code>SocialPlatform</code>:</p>
<pre><code>class Facebook(SocialPlatform):
_NAME = 'facebook'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Twitter(SocialPlatform):
_NAME = 'twitter'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
class Instagram(SocialPlatform):
_NAME = 'instagram'
@classmethod
def get_class_name(cls):
return cls._NAME
# code here
</code></pre>
<p>My idea was to increment <code>nb_post</code> each time an instance of <code>SocialPlatform</code> was created. I thought this variable was shared between all the classes that inherit from <code>SocialPlatform</code>.</p>
<p>So I tested that in my main function :</p>
<pre><code>def main():
post = Post() # an other class with stuff in it, it doesn't matter here
social_platform = {
'facebook': Facebook,
'twitter': Twitter,
'instagram': Instagram
}
while True:
try:
platform = social_platform[post.actual_post['header']['platform']](post.actual_post['data'])
except KeyError:
print 'Platform (%s) not implemented yet' % post.actual_post['header']['platform']
sys.exit(84)
print 'platform name : ' + platform.get_class_name()
print 'post id : ' + str(platform.id_post)
# platform.aff_content()
post.pop()
if not len(post.post):
break
print 'enter return to display next post'
while raw_input() != "": pass
</code></pre>
<p>but when I use this code I get this output :</p>
<pre><code>platform name : twitter
post id : 1
enter return to display next post
platform name : facebook
post id : 1
enter return to display next post
platform name : twitter
post id : 2
</code></pre>
<p>With this method <code>nb_post</code> is only shared between instances of the same class (Twitter, Facebook, or Instagram), not between all of them.</p>
<p>So my question is : is there any way to do this in python ?</p>
| 1 | 2016-09-29T14:07:43Z | 39,773,608 | <p>This works for me:</p>
<pre><code>class SocialPlatform(object):
nb_post = 0
def __init__(self):
self.id_post = SocialPlatform.nb_post
SocialPlatform.increment()
@classmethod
def increment(cls):
cls.nb_post += 1
class Facebook(SocialPlatform):
pass
class Twitter(SocialPlatform):
pass
</code></pre>
<p>And then:</p>
<pre><code>>>> a = Facebook()
>>> b = Twitter()
>>> c = Twitter()
>>>
>>> a.id_post
0
>>> b.id_post
1
>>> c.id_post
2
</code></pre>
| 0 | 2016-09-29T14:53:41Z | [
"python",
"inheritance"
]
|
Flask return multiple variables? | 39,772,670 | <p>I am learning Flask with Python. My Python skills are okay, but I have no experience with web apps. I have a form that takes some information and I want to display it back after it is submitted. I can do that part; however, I can only return one variable from that form even though there are 3 variables in the form. I can return each one individually but not all together. If I try all 3, I get a 500 error. Here is the code I am working with:</p>
<pre><code>from flask import Blueprint
from flask import render_template
from flask import request
simple_page = Blueprint('simple_page', __name__)
@simple_page.route('/testing', methods=['GET', 'POST'])
def my_form():
if request.method=='GET':
return render_template("my-form.html")
elif request.method=='POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
cellphone = request.form['cellphone']
return firstname, lastname, cellphone
</code></pre>
<p>If I change the last return line to:</p>
<pre><code>return firstname
</code></pre>
<p>it works, or:</p>
<pre><code>return lastname
</code></pre>
<p>or:</p>
<pre><code>return cellphone
</code></pre>
<p>If I try two variables it will only return the first, once I add the 3rd I get the 500 error. I am sure I am doing something silly, but even with tons of googling I could not get it figured out. Any help would be great. Thank you.</p>
 | 1 | 2016-09-29T14:09:45Z | 39,772,992 | <p>Flask requires either a <code>str</code> or a <code>Response</code> to be returned; in your case you are attempting to return a <code>tuple</code> (Flask interprets a returned tuple as <code>(body, status, headers)</code>, which is why adding the third value triggers the 500 error).</p>
<p>You can either return your <code>tuple</code> as a formatted <code>str</code></p>
<pre><code>return '{} {} {}'.format(firstname, lastname, cellphone)
</code></pre>
<p>Or you can pass the values into another <code>template</code></p>
<pre><code>return render_template('my_other_template.html',
firstname=firstname,
lastname=lastname,
cellphone=cellphone)
</code></pre>
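<p>For completeness, here is the first suggestion folded back into the view from the question — a sketch that keeps the original route and field names:</p>
<pre><code>@simple_page.route('/testing', methods=['GET', 'POST'])
def my_form():
    if request.method == 'GET':
        return render_template("my-form.html")
    elif request.method == 'POST':
        firstname = request.form['firstname']
        lastname = request.form['lastname']
        cellphone = request.form['cellphone']
        # join the three values into one string so the view returns a single response body
        return '{} {} {}'.format(firstname, lastname, cellphone)
</code></pre>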
| 1 | 2016-09-29T14:24:14Z | [
"python",
"flask"
]
|
PyQt5: cannot write cookie to file using QFile | 39,772,703 | <p>I have a file named <em>cookies.txt</em>.</p>
<pre><code>fd = QFile(":/cookies.txt")
available_cookies = QtNetwork.QNetworkCookieJar().allCookies()
for cookie in available_cookies:
print(cookie.toRawForm(1))
QTextStream(cookie.toRawForm(1), fd.open(QIODevice.WriteOnly))
fd.close()
</code></pre>
<p>Here is my full traceback:</p>
<pre><code>QTextStream(cookie.toRawForm(1), fd.open(QIODevice.WriteOnly))
TypeError: arguments did not match any overloaded call:
QTextStream(): too many arguments
QTextStream(QIODevice): argument 1 has unexpected type 'QByteArray'
QTextStream(QByteArray, mode: Union[QIODevice.OpenMode, QIODevice.OpenModeFlag] = QIODevice.ReadWrite): argument 2 has unexpected type 'bool'
</code></pre>
<p>I am following the C++ documentation, and I am having trouble writing the corresponding python syntax.</p>
| 1 | 2016-09-29T14:11:29Z | 39,773,468 | <p>In <code>QTextStream(cookie.toRawForm(1), fd.open(QIODevice.WriteOnly))</code>, you pass 2 arguments, a <code>QByteArray</code>, and a <code>bool</code> (<code>QIODevice::open</code> returns a boolean), but <code>QTextStream</code> cannot take a <code>QByteArray</code> with a <code>bool</code>.</p>
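<p>In other words, the stream should be constructed from the opened device, and the data pushed through it afterwards. A minimal sketch of that shape, reusing the <code>available_cookies</code> list from the question (and assuming an ordinary writable file path rather than a read-only <code>:/</code> resource):</p>
<pre><code>fd = QFile("cookies.txt")              # a plain file path, not a ":/" resource
if fd.open(QIODevice.WriteOnly):
    stream = QTextStream(fd)           # build the stream from the opened QIODevice
    for cookie in available_cookies:
        stream << cookie.toRawForm(1)  # then write each cookie's raw form through it
    fd.close()
</code></pre>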
| 1 | 2016-09-29T14:46:12Z | [
"python",
"qt5",
"pyqt5",
"qfile",
"qtextstream"
]
|
PyQt5: cannot write cookie to file using QFile | 39,772,703 | <p>I have a file named <em>cookies.txt</em>.</p>
<pre><code>fd = QFile(":/cookies.txt")
available_cookies = QtNetwork.QNetworkCookieJar().allCookies()
for cookie in available_cookies:
print(cookie.toRawForm(1))
QTextStream(cookie.toRawForm(1), fd.open(QIODevice.WriteOnly))
fd.close()
</code></pre>
<p>Here is my full traceback:</p>
<pre><code>QTextStream(cookie.toRawForm(1), fd.open(QIODevice.WriteOnly))
TypeError: arguments did not match any overloaded call:
QTextStream(): too many arguments
QTextStream(QIODevice): argument 1 has unexpected type 'QByteArray'
QTextStream(QByteArray, mode: Union[QIODevice.OpenMode, QIODevice.OpenModeFlag] = QIODevice.ReadWrite): argument 2 has unexpected type 'bool'
</code></pre>
<p>I am following the C++ documentation, and I am having trouble writing the corresponding python syntax.</p>
| 1 | 2016-09-29T14:11:29Z | 39,775,150 | <p>Are you really trying to write to a resource path? Resources are read-only, so that is not going to work.</p>
<p>To write to a non-resource path:</p>
<pre><code>fd = QFile('/tmp/cookies.txt')
if fd.open(QIODevice.WriteOnly):
available_cookies = QtNetwork.QNetworkCookieJar().allCookies()
stream = QTextStream(fd)
for cookie in available_cookies:
data = cookie.toRawForm(QtNetwork.QNetworkCookie.Full)
stream << data
fd.close()
</code></pre>
| 0 | 2016-09-29T16:07:11Z | [
"python",
"qt5",
"pyqt5",
"qfile",
"qtextstream"
]
|
How to center ticks and labels in a matplotlib heatmap | 39,772,708 | <p>I am plotting a heatmap using matplotlib like the figure below:</p>
<p><a href="http://i.stack.imgur.com/J8gmT.png" rel="nofollow"><img src="http://i.stack.imgur.com/J8gmT.png" alt="enter image description here"></a></p>
<p>The code snip below is used to achive the plot:</p>
<pre><code>C_range = 10. ** np.arange(-2, 8)
gamma_range = 10. ** np.arange(-5, 4)
confMat=np.random.rand(10, 9)
heatmap = plt.pcolor(confMat)
for y in range(confMat.shape[0]):
for x in range(confMat.shape[1]):
plt.text(x + 0.5, y + 0.5, '%.2f' % confMat[y, x],
horizontalalignment='center',
verticalalignment='center',)
plt.grid()
plt.colorbar(heatmap)
plt.subplots_adjust(left=0.15, right=0.99, bottom=0.15, top=0.99)
plt.ylabel('Cost')
plt.xlabel('Gamma')
plt.xticks(np.arange(len(gamma_range)), gamma_range, rotation=45,)
plt.yticks(np.arange(len(C_range)), C_range, rotation=45)
plt.show()
</code></pre>
<p>I need to center the ticks and labels on both axes. Any idea?</p>
| 0 | 2016-09-29T14:11:37Z | 39,776,012 | <p>For your specific code the simplest solution is to shift your tick positions by half a unit separation:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
C_range = 10. ** np.arange(-2, 8)
gamma_range = 10. ** np.arange(-5, 4)
confMat=np.random.rand(10, 9)
heatmap = plt.pcolor(confMat)
for y in range(confMat.shape[0]):
for x in range(confMat.shape[1]):
plt.text(x + 0.5, y + 0.5, '%.2f' % confMat[y, x],
horizontalalignment='center',
verticalalignment='center',)
#plt.grid() #this will look bad now
plt.colorbar(heatmap)
plt.subplots_adjust(left=0.15, right=0.99, bottom=0.15, top=0.99)
plt.ylabel('Cost')
plt.xlabel('Gamma')
plt.xticks(np.arange(len(gamma_range))+0.5, gamma_range, rotation=45,)
plt.yticks(np.arange(len(C_range))+0.5, C_range, rotation=45)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/XkJ9c.png" rel="nofollow"><img src="http://i.stack.imgur.com/XkJ9c.png" alt="result"></a></p>
<p>As you can see, in this case you need to turn off the <code>grid</code>, otherwise it will overlap with your squares and clutter up your plot.</p>
| 1 | 2016-09-29T16:56:32Z | [
"python",
"matplotlib",
"label",
"heatmap"
]
|
How to show ManyToMany field values in a single text box in a Django template | 39,772,762 | <p>I want to show the values of a many-to-many field in a single input textbox in a Django template.
Right now my output is one value per line:
ABC
Abc
BVC</p>
<p>I want it to look like: ABC, Abc, BVC</p>
<p>My code sample is this:</p>
<pre><code> <div class='col-sm-8'>
{% for car in cars %}<br/>
<input type='text' class='form-control' name='cars' placeholder='Select cars' value= {{car}}>
{% endfor %}
</div>
</code></pre>
<p>I want to show the output in the textbox field.</p>
| 0 | 2016-09-29T14:14:34Z | 39,774,830 | <p>You can do it:</p>
<p>from python: (views.py)</p>
<pre><code>...
data_input = ', '.join([car for car in cars])
...
</code></pre>
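<p>Filled out a little — the view name, model access and template name below are only placeholders for illustration, and <code>str(car)</code> is used in case <code>cars</code> holds model instances rather than plain strings:</p>
<pre><code>from django.shortcuts import render

def car_form(request):                     # hypothetical view name
    cars = Car.objects.all()               # however you currently build `cars`
    data_input = ', '.join(str(car) for car in cars)
    return render(request, 'car_form.html', {'data_input': data_input, 'cars': cars})
</code></pre>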
<p>and in your template:</p>
<pre><code> <div class='col-sm-8'>
<input type='text' class='form-control' name='cars' placeholder='Select cars' value= "{{ data_input }}">
</div>
</code></pre>
| 0 | 2016-09-29T15:50:40Z | [
"python",
"html",
"django"
]
|
Generate pdf files with Weasyprint, save in zip file, send that zip file to client and present it for download | 39,772,798 | <p>Let me break down my requirement. Here's what I'm doing right now.</p>
<p><strong>1. Generate PDF files from HTML</strong></p>
<p>for this I'm using Weasyprint as following:</p>
<pre><code>lstFileNames = []
for i, content in enumerate(lstHtmlContent):
repName = 'report'+ str(uuid.uuid4()) + '.pdf'
lstFileNames.append("D:/Python/Workspace/" + repName)
HTML(string=content).write_pdf(target=repName,
stylesheets=[CSS(filename='/css/bootstrap.css')])
</code></pre>
<p>all files names, with paths, are saved in <code>lstFileNames</code>.</p>
<p><strong>2. Create a zip file with pdf files generated by weasyprint</strong></p>
<p>for this I'm using zipfile</p>
<pre><code>zipPath = 'reportDir' + str(uuid.uuid4()) + '.zip'
myzip = zipfile.ZipFile(zipPath, 'w')
with myzip:
for f in lstFileNames:
myzip.write(f)
</code></pre>
<p><strong>3. Send zip file to client for download</strong></p>
<pre><code>resp = HttpResponse(myzip, content_type = "application/x-zip-compressed")
resp['Content-Disposition'] = 'attachment; filename=%s' % 'myzip.zip'
</code></pre>
<p><strong>4. Open file for downloading via Javascript</strong></p>
<pre><code>var file = new Blob([response], {type: 'application/x-zip-compressed'});
var fileURL = URL.createObjectURL(file);
window.open(fileURL);
</code></pre>
<p><strong><em>Problems</em></strong></p>
<p><strong>1.</strong> While the zip file is successfully received at front end, after I try to open it, it gives the following error:</p>
<blockquote>
<p>The archive is in either unknown format or damaged</p>
</blockquote>
<p>Am I sending the file wrong or is my Javascript code the problem?</p>
<p><strong>2.</strong> Is there a way to store all the PDF files in a list of byte arrays, generate the zip file from those byte arrays, and send it to the client? I tried that with WeasyPrint but the result was the same <code>damaged file</code>.</p>
<p><strong>3.</strong> Not exactly a problem but I haven't been able to find it in weasyprint docs. Can I enforce the path to where the file should be saved?</p>
<p>Problem # 1 is of extreme priority, rest are secondary. I would like to know if I'm doing it right i.e. generating pdf files and sending their zip file to client. </p>
<p>Thanks in advance. </p>
 | 0 | 2016-09-29T14:16:14Z | 39,773,939 | <p>Once you have exited the <code>with</code> block the filehandle is closed. You should reopen the file (this time with the built-in <code>open()</code>, in binary mode) and use <code>read()</code> to pass the contents to <code>HttpResponse</code> instead of passing the filehandle itself.</p>
<pre><code>with zipfile.ZipFile(zipPath, 'w') as myzip:
for f in lstFileNames:
myzip.write(f)
with open(zipPath, 'rb') as myzip:  # binary mode, so the zip bytes are not mangled
return HttpResponse(myzip.read(), content_type = "application/x-zip-compressed")
</code></pre>
<p>If that works, then you can use a <code>StringIO</code> instance instead of a filehandle to store the zip file. I'm not familiar with Weasyprint so I don't know whether you can use <code>StringIO</code> for that.</p>
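<p>For reference, the fully in-memory variant from problem 2 could be sketched roughly like this — it assumes WeasyPrint's <code>write_pdf()</code> returns the PDF as a byte string when no target is passed, which is worth verifying against the installed version:</p>
<pre><code>import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as archive:
    for i, content in enumerate(lstHtmlContent):
        pdf_bytes = HTML(string=content).write_pdf()    # no target -> bytes (assumption)
        archive.writestr('report%d.pdf' % i, pdf_bytes)

resp = HttpResponse(buf.getvalue(), content_type="application/x-zip-compressed")
resp['Content-Disposition'] = 'attachment; filename=%s' % 'myzip.zip'
</code></pre>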
| 0 | 2016-09-29T15:08:29Z | [
"javascript",
"python",
"django",
"zipfile",
"weasyprint"
]
|
Generate pdf files with Weasyprint, save in zip file, send that zip file to client and present it for download | 39,772,798 | <p>Let me break down my requirement. Here's what I'm doing right now.</p>
<p><strong>1. Generate PDF files from HTML</strong></p>
<p>for this I'm using Weasyprint as following:</p>
<pre><code>lstFileNames = []
for i, content in enumerate(lstHtmlContent):
repName = 'report'+ str(uuid.uuid4()) + '.pdf'
lstFileNames.append("D:/Python/Workspace/" + repName)
HTML(string=content).write_pdf(target=repName,
stylesheets=[CSS(filename='/css/bootstrap.css')])
</code></pre>
<p>all files names, with paths, are saved in <code>lstFileNames</code>.</p>
<p><strong>2. Create a zip file with pdf files generated by weasyprint</strong></p>
<p>for this I'm using zipfile</p>
<pre><code>zipPath = 'reportDir' + str(uuid.uuid4()) + '.zip'
myzip = zipfile.ZipFile(zipPath, 'w')
with myzip:
for f in lstFileNames:
myzip.write(f)
</code></pre>
<p><strong>3. Send zip file to client for download</strong></p>
<pre><code>resp = HttpResponse(myzip, content_type = "application/x-zip-compressed")
resp['Content-Disposition'] = 'attachment; filename=%s' % 'myzip.zip'
</code></pre>
<p><strong>4. Open file for downloading via Javascript</strong></p>
<pre><code>var file = new Blob([response], {type: 'application/x-zip-compressed'});
var fileURL = URL.createObjectURL(file);
window.open(fileURL);
</code></pre>
<p><strong><em>Problems</em></strong></p>
<p><strong>1.</strong> While the zip file is successfully received at front end, after I try to open it, it gives the following error:</p>
<blockquote>
<p>The archive is in either unknown format or damaged</p>
</blockquote>
<p>Am I sending the file wrong or is my Javascript code the problem?</p>
<p><strong>2.</strong> Is there a way to store all the PDF files in a list of byte arrays, generate the zip file from those byte arrays, and send it to the client? I tried that with WeasyPrint but the result was the same <code>damaged file</code>.</p>
<p><strong>3.</strong> Not exactly a problem but I haven't been able to find it in weasyprint docs. Can I enforce the path to where the file should be saved?</p>
<p>Problem # 1 is of extreme priority, rest are secondary. I would like to know if I'm doing it right i.e. generating pdf files and sending their zip file to client. </p>
<p>Thanks in advance. </p>
| 0 | 2016-09-29T14:16:14Z | 39,786,958 | <p>A slightly different approach would be to move the zip file to a public directory and then send that location to the client (e.g. json formatted), i.e.:</p>
<pre><code>publicPath = os.path.join('public/', os.path.basename(zipPath))
os.rename(zipPath, os.path.join('/var/www/', publicPath))
jsonResp = '{ "zip-location": "' + publicPath + '" }'
resp = HttpResponse(jsonResp, content_type = 'application/json');
</code></pre>
<p>Then in your client's javascript:</p>
<pre><code>var res = JSON.parse(response);
var zipFileUrl = '/' + res['zip-location'];
window.open(zipFileUrl, '_blank');
</code></pre>
<p><code>'/' + res['zip-location']</code> assumes that your page lives in the same folder as the <code>public</code> directory (so <code>http://example.com/public/pdf-files-123.zip</code> points to <code>/var/www/public/pdf-files-123.zip</code> on your file system).</p>
<p>You can clean up the <code>public</code> directory with a cron job that deletes all the <code>.zip</code> files in there that are older than an hour or so.</p>
| 1 | 2016-09-30T08:27:26Z | [
"javascript",
"python",
"django",
"zipfile",
"weasyprint"
]
|
Add prefix to specific columns of Dataframe | 39,772,896 | <p>I've a DataFrame like that :</p>
<pre><code>col1 col2 col3 col4 col5 col6 col7 col8
0 5345 rrf rrf rrf rrf rrf rrf
1 2527 erfr erfr erfr erfr erfr erfr
2 2727 f f f f f f
</code></pre>
<p>I would like to rename all columns but not <strong>col1</strong> and <strong>col2</strong>.</p>
<p>So I tried to make a loop</p>
<pre><code>print(df.columns)
for col in df.columns:
if col != 'col1' and col != 'col2':
col.rename = str(col) + '_x'
</code></pre>
<p>But it's not very efficient...it doesn't work !</p>
| 1 | 2016-09-29T14:20:07Z | 39,772,981 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow"><code>str.contains</code></a> with a regex pattern to filter the cols of interest, then using <code>zip</code> construct a dict and pass this as the arg to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow"><code>rename</code></a>:</p>
<pre><code>In [94]:
cols = df.columns[~df.columns.str.contains('col1|col2')]
df.rename(columns = dict(zip(cols, cols + '_x')), inplace=True)
df
Out[94]:
col1 col2 col3_x col4_x col5_x col6_x col7_x col8_x
0 0 5345 rrf rrf rrf rrf rrf rrf
1 1 2527 erfr erfr erfr erfr erfr erfr
2 2 2727 f f f f f f
</code></pre>
<p>So here, using <code>str.contains</code> (negated with <code>~</code>) keeps only the columns that don't match <code>col1</code> or <code>col2</code>, so the column order is irrelevant.</p>
| 2 | 2016-09-29T14:23:47Z | [
"python",
"pandas"
]
|
Add prefix to specific columns of Dataframe | 39,772,896 | <p>I've a DataFrame like that :</p>
<pre><code>col1 col2 col3 col4 col5 col6 col7 col8
0 5345 rrf rrf rrf rrf rrf rrf
1 2527 erfr erfr erfr erfr erfr erfr
2 2727 f f f f f f
</code></pre>
<p>I would like to rename all columns but not <strong>col1</strong> and <strong>col2</strong>.</p>
<p>So I tried to make a loop</p>
<pre><code>print(df.columns)
for col in df.columns:
if col != 'col1' and col != 'col2':
col.rename = str(col) + '_x'
</code></pre>
<p>But it's not very efficient...it doesn't work !</p>
| 1 | 2016-09-29T14:20:07Z | 39,773,021 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow">DataFrame.rename()</a> method</p>
<pre><code>new_names = [(i,i+'_x') for i in df.iloc[:, 2:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
</code></pre>
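<p>Note that this selects the columns to rename by position (<code>iloc[:, 2:]</code>), so it relies on <code>col1</code> and <code>col2</code> being the first two columns. A quick check of the result on the frame from the question:</p>
<pre><code>print(df.columns.tolist())
# ['col1', 'col2', 'col3_x', 'col4_x', 'col5_x', 'col6_x', 'col7_x', 'col8_x']
</code></pre>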
| 4 | 2016-09-29T14:25:27Z | [
"python",
"pandas"
]
|
Add prefix to specific columns of Dataframe | 39,772,896 | <p>I've a DataFrame like that :</p>
<pre><code>col1 col2 col3 col4 col5 col6 col7 col8
0 5345 rrf rrf rrf rrf rrf rrf
1 2527 erfr erfr erfr erfr erfr erfr
2 2727 f f f f f f
</code></pre>
<p>I would like to rename all columns but not <strong>col1</strong> and <strong>col2</strong>.</p>
<p>So I tried to make a loop</p>
<pre><code>print(df.columns)
for col in df.columns:
if col != 'col1' and col != 'col2':
col.rename = str(col) + '_x'
</code></pre>
<p>But it's not very efficient...it doesn't work !</p>
 | 1 | 2016-09-29T14:20:07Z | 39,773,041 | <p>Simplest solution, if <code>col1</code> and <code>col2</code> are the first and second column names:</p>
<pre><code>df.columns = df.columns[:2].union(df.columns[2:] + '_x')
print (df)
col1 col2 col3_x col4_x col5_x col6_x col7_x col8_x
0 0 5345 rrf rrf rrf rrf rrf rrf
1 1 2527 erfr erfr erfr erfr erfr erfr
2 2 2727 f f f f f f
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> or list comprehension:</p>
<pre><code>cols = df.columns[~df.columns.isin(['col1','col2'])]
print (cols)
['col3', 'col4', 'col5', 'col6', 'col7', 'col8']
df.rename(columns = dict(zip(cols, cols + '_x')), inplace=True)
print (df)
col1 col2 col3_x col4_x col5_x col6_x col7_x col8_x
0 0 5345 rrf rrf rrf rrf rrf rrf
1 1 2527 erfr erfr erfr erfr erfr erfr
2 2 2727 f f f f f f
</code></pre>
<hr>
<pre><code>cols = [col for col in df.columns if col not in ['col1', 'col2']]
print (cols)
['col3', 'col4', 'col5', 'col6', 'col7', 'col8']
df.rename(columns = dict(zip(cols, cols + '_x')), inplace=True)
print (df)
col1 col2 col3_x col4_x col5_x col6_x col7_x col8_x
0 0 5345 rrf rrf rrf rrf rrf rrf
1 1 2527 erfr erfr erfr erfr erfr erfr
2 2 2727 f f f f f f
</code></pre>
<p>The fastest is list comprehension:</p>
<pre><code>df.columns = [col+'_x' if col != 'col1' and col != 'col2' else col for col in df.columns]
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [350]: %timeit (akot(df))
1000 loops, best of 3: 387 µs per loop
In [351]: %timeit (jez(df1))
The slowest run took 4.12 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 207 µs per loop
In [363]: %timeit (jez3(df2))
The slowest run took 6.41 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 75.7 µs per loop
</code></pre>
<hr>
<pre><code>df1 = df.copy()
df2 = df.copy()
def jez(df):
df.columns = df.columns[:2].union(df.columns[2:] + '_x')
return df
def akot(df):
new_names = [(i,i+'_x') for i in df.iloc[:, 2:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
return df
def jez3(df):
df.columns = [col + '_x' if col != 'col1' and col != 'col2' else col for col in df.columns]
return df
print (akot(df))
print (jez(df1))
print (jez3(df2))
</code></pre>
| 1 | 2016-09-29T14:26:30Z | [
"python",
"pandas"
]
|
Longest path in a undirected and unweighted tree - in python | 39,773,052 | <p>I am solving a problem in which I need to calculate the diameter of the tree.I know how to calculate that using 2 bfs first to find the farthest node then do second bfs using the node we found from the first one.</p>
<p>But I am having difficulty to implement a very simple step - making a adjacency list (dict in case of python) from the input I have written a code to this but its not tidy and not the best can someone tell how to do this efficently </p>
<blockquote>
<p><strong>Input</strong></p>
<p>The first line of the input file contains one integer N --- number of
nodes in the tree (0 < N <= 10000). Next N-1 lines contain N-1 edges
of that tree --- Each line contains a pair (u, v) means there is an
edge between node u and node v (1 <= u, v <= N).</p>
<p><strong>Example</strong>:</p>
<p>8</p>
<p>1 2</p>
<p>1 3</p>
<p>2 4 </p>
<p>2 5 </p>
<p>3 7 </p>
<p>4 6 </p>
<p>7 8</p>
</blockquote>
<p>My code is :</p>
<pre><code>def makedic(m):
d = {}
for i in range(m):
o, p = map(int, raw_input().split())
if o not in d and p not in d:
d[p] = []
d[o] = [p]
elif o not in d and p in d:
d[o] = []
d[p].append(o)
elif p not in d and o in d:
d[p] = []
d[o].append(p)
elif o in d:
d[o].append(p)
# d[p].append(o)
elif p in d:
d[p].append(o)
return d
</code></pre>
<p>Here is how I implemented bfs:</p>
<pre><code>def bfs(g,s):
parent={s:None}
level={s:0}
frontier=[s]
ctr=1
while frontier:
next=[]
for i in frontier:
for j in g[i]:
if j not in parent:
parent[j]=i
level[j]=ctr
next.append(j)
frontier=next
ctr+=1
return level,parent
</code></pre>
| 0 | 2016-09-29T14:27:07Z | 39,773,226 | <p>You are using a <strong>0</strong> instead of <strong>o</strong> in makedic in first <em>elif</em> condition. Also made the correction for undirected graph.</p>
<pre><code>def makedic(m):
d = {}
for i in range(m):
o, p = map(int, raw_input().split())
if o not in d and p not in d:
d[p] = [o]
d[o] = [p]
elif o not in d and p in d:
d[o] = [p]
d[p].append(o)
elif p not in d and o in d:
d[p] = [o]
d[o].append(p)
elif o in d:
d[o].append(p)
d[p].append(o)
return d
</code></pre>
| -1 | 2016-09-29T14:34:55Z | [
"python",
"algorithm"
]
|
Longest path in a undirected and unweighted tree - in python | 39,773,052 | <p>I am solving a problem in which I need to calculate the diameter of the tree.I know how to calculate that using 2 bfs first to find the farthest node then do second bfs using the node we found from the first one.</p>
<p>But I am having difficulty to implement a very simple step - making a adjacency list (dict in case of python) from the input I have written a code to this but its not tidy and not the best can someone tell how to do this efficently </p>
<blockquote>
<p><strong>Input</strong></p>
<p>The first line of the input file contains one integer N --- number of
nodes in the tree (0 < N <= 10000). Next N-1 lines contain N-1 edges
of that tree --- Each line contains a pair (u, v) means there is an
edge between node u and node v (1 <= u, v <= N).</p>
<p><strong>Example</strong>:</p>
<p>8</p>
<p>1 2</p>
<p>1 3</p>
<p>2 4 </p>
<p>2 5 </p>
<p>3 7 </p>
<p>4 6 </p>
<p>7 8</p>
</blockquote>
<p>My code is :</p>
<pre><code>def makedic(m):
d = {}
for i in range(m):
o, p = map(int, raw_input().split())
if o not in d and p not in d:
d[p] = []
d[o] = [p]
elif o not in d and p in d:
d[o] = []
d[p].append(o)
elif p not in d and o in d:
d[p] = []
d[o].append(p)
elif o in d:
d[o].append(p)
# d[p].append(o)
elif p in d:
d[p].append(o)
return d
</code></pre>
<p>Here is how I implemented bfs:</p>
<pre><code>def bfs(g,s):
parent={s:None}
level={s:0}
frontier=[s]
ctr=1
while frontier:
next=[]
for i in frontier:
for j in g[i]:
if j not in parent:
parent[j]=i
level[j]=ctr
next.append(j)
frontier=next
ctr+=1
return level,parent
</code></pre>
| 0 | 2016-09-29T14:27:07Z | 39,773,240 | <p>There are unnecessary checks in your code. For each <em>A - B</em> edge you just have to put <em>B</em> in <em>A</em>'s adjacency list and <em>A</em> in <em>B</em>'s adjacency list:</p>
<pre><code>d = {}
for i in range(m):
u,v = map(int, raw_input().split())
if u in d:
d[u].append(v)
else:
d[u] = [v]
if v in d:
d[v].append(u)
else:
d[v] = [u]
</code></pre>
<p>According to the question every node has an index between <em>1</em> and <em>N</em> so you can use this fact and pre-populate the <code>dict</code> with empty lists. This way you don't have to check whether a key is in the <code>dict</code> or not. Also make the code a little bit shorter:</p>
<pre><code>N = input()
d = { i:[] for i in range(1, N+1) }
for i in range(N):
u,v = map(int, raw_input().split())
d[u].append(v)
d[v].append(u)
</code></pre>
| -1 | 2016-09-29T14:35:33Z | [
"python",
"algorithm"
]
|
Python unittesting request.get with mocking, not raising specific exception | 39,773,061 | <p>I have a simple function that sends a get request to an API, and I want to be able to write a Unit test to assert that the scriprt prints an error and then exits if the request returns an exception.</p>
<pre><code>def fetch_members(self):
try:
r = requests.get(api_url, auth=('user', api_key))
except requests.exceptions.HTTPError as error:
print(error)
sys.exit(1)
</code></pre>
<p>I've tried the following test but the HTTPError is never raised</p>
<pre><code>import requests
from unittest.mock import patch
@patch('sys.exit')
@patch('sys.stdout', new_callable=StringIO)
@patch('requests.get')
def test_fetch_members_with_bad_request(self, mock_get, mock_stdout,
mock_exit):
response = requests.Response()
response.status_code = 400 # Bad reqeust
mock_get.return_value = response
mock_get.side_effect = requests.exceptions.HTTPError
with self.assertRaises(requests.exceptions.HTTPError):
fetch_members()
mock_exit.assert_called_once_with(1)
self.assertIsNotNone(mock_stdout.getvalue())
</code></pre>
<p>How would I go about correctly patching requests.get to always raise some sort of error?</p>
| -1 | 2016-09-29T14:27:32Z | 39,773,180 | <p>Your function catches and handles the exception:</p>
<pre><code>except requests.exceptions.HTTPError as error:
</code></pre>
<p>This means it'll <em>never</em> propagate further, so your <code>assertRaises()</code> fails. Just assert that <code>sys.exit()</code> has been called, that's enough.</p>
<p>There is also no point in setting a return value; the call raises an exception instead. The following should suffice:</p>
<pre><code>@patch('sys.exit')
@patch('sys.stdout', new_callable=StringIO)
@patch('requests.get')
def test_fetch_members_with_bad_request(self, mock_get, mock_stdout,
mock_exit):
mock_get.side_effect = requests.exceptions.HTTPError
fetch_members()
mock_exit.assert_called_once_with(1)
self.assertIsNotNone(mock_stdout.getvalue())
</code></pre>
| 1 | 2016-09-29T14:32:54Z | [
"python",
"unit-testing"
]
|
HTML snapshots using the selenium webdriver? | 39,773,186 | <p>I am trying to capture all the visible content of a page as text. Let's say <a href="https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html" rel="nofollow">that</a> one for example.</p>
<p>If I store the page source then I won't be capturing the comments section because it's loaded using javascript. </p>
<p>Is there a way to take HTML snapshots with selenium webdriver?
(Preferably expressed using the python wrapper)</p>
| 0 | 2016-09-29T14:33:09Z | 39,773,319 | <p>This code will take a screenshot of the entire page:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html')
driver.save_screenshot('screenshot.png')
driver.quit()
</code></pre>
<p>however, if you just want a screenshot of a specific element, you could use this:</p>
<pre><code>def get_element_screenshot(element: WebElement) -> bytes:
driver = element._parent
ActionChains(driver).move_to_element(element).perform() # focus
src_base64 = driver.get_screenshot_as_base64()
scr_png = b64decode(src_base64)
scr_img = Image(blob=scr_png)
x = element.location["x"]
y = element.location["y"]
w = element.size["width"]
h = element.size["height"]
scr_img.crop(
left=math.floor(x),
top=math.floor(y),
width=math.ceil(w),
height=math.ceil(h))
return scr_img.make_blob()
</code></pre>
<p>Where the WebElement is the Element you're chasing. of course, this method requires you to import <code>from base64 import b64decode</code> and <code>from wand.image import Image</code> to handle the cropping.</p>
| 0 | 2016-09-29T14:38:51Z | [
"python",
"selenium-webdriver",
"web-crawler"
]
|
HTML snapshots using the selenium webdriver? | 39,773,186 | <p>I am trying to capture all the visible content of a page as text. Let's say <a href="https://dukescript.com/best/practices/2015/11/23/dynamic-templates.html" rel="nofollow">that</a> one for example.</p>
<p>If I store the page source then I won't be capturing the comments section because it's loaded using javascript. </p>
<p>Is there a way to take HTML snapshots with selenium webdriver?
(Preferably expressed using the python wrapper)</p>
| 0 | 2016-09-29T14:33:09Z | 39,773,670 | <p>Regardless of whether or not the HTML of the page is generated using JavaScript, you will still be able to capture it using <code>driver.page_source</code>.</p>
<p>I imagine the reason you haven't been able to capture the source of the comments section in your example is because it's contained in an iframe - In order to capture the html source for content within a frame/iframe you'll need to first switch focus to that particular frame followed by calling <code>driver.page_source</code>.</p>
| 2 | 2016-09-29T14:56:15Z | [
"python",
"selenium-webdriver",
"web-crawler"
]
|
Neural network library for true-false based image recognition | 39,773,193 | <p>I'm in need of an artificial neural network library (preferably in python) for one (simple) task. I want to train it so that it can tell wether <strong><em>a thing</em></strong> is in an image. I would train it by feeding it lots of pictures and telling it wether it contains the thing I'm looking for or not:</p>
<p>These images contain <strong><em>this thing</em></strong>, return True <em>(or probability of it containing the thing)</em></p>
<p>These images do not contain <strong><em>this thing</em></strong>, return False <em>(or probability of it containing the thing)</em></p>
<p>Does such a library already exist? I'm fairly new to ANNs and image recognition; although I understand how they both work in principle I find it quite hard to find an adequate library for this task, and even research in this field has proven to be kind of a frustration - any advice towards the right direction is greatly appreciated.</p>
| 1 | 2016-09-29T14:33:23Z | 39,775,116 | <p>There are several good Neural Network approaches in Python, including TensorFlow, Caffe, Lasagne, and <a href="https://scikit-neuralnetwork.readthedocs.io/en/latest/module_mlp.html#classifier" rel="nofollow">sknn</a> (Sci-kit Neural Network). sknn provides an easy, out of the box solution, although in my opinion it is more difficult to customize and can be slow on large datasets. </p>
<p>One thing to consider is whether you want to use a CNN (Convolutional Neural Network) or a standard ANN. With an ANN you will mostly likely have to "unroll" your images into a vector whereas with a CNN, it expects the image to be a cube (if in color, a square otherwise). </p>
<p>Here is a <a href="http://blog.christianperone.com/2015/08/convolutional-neural-networks-and-feature-extraction-with-python/" rel="nofollow">good resource</a> on CNNs in Python. </p>
<p>However, since you aren't really doing a multiclass image classification (for which CNNs are the current gold standard) and doing more of a single object recognition, you may consider a transformed image approach, such as one using the <a href="http://scikit-image.org/docs/dev/auto_examples/plot_hog.html" rel="nofollow">Histogram of Oriented Gradients (HOG)</a>. </p>
<p>In any case, the accuracy of a Neural Network approach, especially when using CNNs, is highly dependent on successful hyperparamter tuning. Unfortunately, there isn't yet any kind of general theory on what hyperparameter values (number and size of layers, learning rate, update rule, dropout percentage, batch size, etc.) are optimal in a given situation. So be prepared to have a nice Training, Validation, and Test set setup in order to fit a robust model. </p>
| 2 | 2016-09-29T16:05:32Z | [
"python",
"python-3.x",
"boolean",
"neural-network",
"image-recognition"
]
|
Neural network library for true-false based image recognition | 39,773,193 | <p>I'm in need of an artificial neural network library (preferably in python) for one (simple) task. I want to train it so that it can tell wether <strong><em>a thing</em></strong> is in an image. I would train it by feeding it lots of pictures and telling it wether it contains the thing I'm looking for or not:</p>
<p>These images contain <strong><em>this thing</em></strong>, return True <em>(or probability of it containing the thing)</em></p>
<p>These images do not contain <strong><em>this thing</em></strong>, return False <em>(or probability of it containing the thing)</em></p>
<p>Does such a library already exist? I'm fairly new to ANNs and image recognition; although I understand how they both work in principle I find it quite hard to find an adequate library for this task, and even research in this field has proven to be kind of a frustration - any advice towards the right direction is greatly appreciated.</p>
| 1 | 2016-09-29T14:33:23Z | 39,777,049 | <p>I am unaware of any library which can do this for you. I use a lot of <a href="http://caffe.berkeleyvision.org/" rel="nofollow">Caffe</a> and can give you a solution till you find a single library which can do it for you.</p>
<p>I hope you know about <a href="http://www.image-net.org/" rel="nofollow">ImageNet</a> and that Caffe has a <a href="http://caffe.berkeleyvision.org/gathered/examples/imagenet.html" rel="nofollow">trained model</a> based on ImageNet.</p>
<p>Here is the idea:</p>
<ul>
<li>Define what <code>the object</code> is. Say <code>object = "laptop"</code>.</li>
<li>Use Caffe's ImageNet trained model, change the code to display the required output you want (you mentioned <code>TRUE</code> or <code>FALSE</code>) when the <code>object</code> is in the <code>output labels</code>.</li>
</ul>
<p>Here is a link to the <a href="https://github.com/arundasan91/Deep-Learning-in-Caffe/blob/master/Deep-Neural-Network-with-Caffe/Deep%20Neural%20Network%20with%20Caffe.md" rel="nofollow">ImageNet tutorial</a> which I wrote.</p>
<p>Here is what you might try:</p>
<ul>
<li>Take a look <a href="https://github.com/arundasan91/classifyME/blob/master/src/caffeClassification.py" rel="nofollow">here</a>. It is a stripped down version of the ImageNet program which I used in a prediction engine.</li>
<li>In line 80 you'll get the top-1 predicted output label. In line 86 you'll get the top-5 predicted labels. Write a line of code to check whether <code>object</code> is in the <code>output_label</code> and return <code>TRUE</code> or <code>FALSE</code> according to it.</li>
</ul>
<p>I understand that you are looking for a specific library, I will look for it, but this is something I would try out in the beginning.</p>
| -1 | 2016-09-29T17:59:30Z | [
"python",
"python-3.x",
"boolean",
"neural-network",
"image-recognition"
]
|
Using rpy2 for inline rmagic with jupyter notebook | 39,773,209 | <p>I am trying to use inline rmagic with jupyter notebook, but have had an extremely difficult time trying to get it to work. </p>
<p>Whenever I try to load <code>%load_ext rpy2.ipython</code>, I get the following error:</p>
<pre><code>ImportError: dlopen(/Users/MyName/anaconda/lib/python2.7/site
packages/rpy2/rinterface/_rinterface.so, 2): Library not loaded: liblzma.5.dylib
Referenced from: /Users/MyName/anaconda/lib/python2.7/site-
packages/rpy2/rinterface/_rinterface.so
Reason: image not found
</code></pre>
<p>I have tried installing it with <code>pip</code>, tried installing it with <code>conda install -c r rpy2</code>.</p>
<p>Beside this rpy2 issue, I <i>was</i> able to set up R and Jupyter notebook so that I can create a new notebook with R, so it doesn't seem to be an R/Jupyter communication issue. </p>
<p>I am running:</p>
<pre><code>OS X (El Capitan)
Python 2.7.12 :: Anaconda 4.1.1 (x86_64)
R version 3.3.1 (2016-06-21) (located as in /Users/myName/anaconda/bin)
rpy2 2.8.3 (located in /Users/myName/anaconda/lib/python2.7/site-packages/)
</code></pre>
<p>Is there any way to get <code>rpy2</code> to work with Jupyter notebook these days?</p>
| 0 | 2016-09-29T14:34:01Z | 39,802,075 | <p>This is looking like a conda issue to me (lzma present at build time, but missing at run time).</p>
<blockquote>
<p>Is there any way to get rpy2 to work with Jupyter notebook these days?</p>
</blockquote>
<p>Probably more than one way to achieve it, but the docker container mentioned on the front page (<a href="http://rpy2.bitbucket.org/" rel="nofollow">http://rpy2.bitbucket.org/</a>) is getting all pieces together in one step.</p>
| 0 | 2016-10-01T01:16:11Z | [
"python",
"anaconda",
"jupyter-notebook",
"rpy2"
]
|
Python Script in GIS Project | 39,773,241 | <p>I'm a student in a GIS Programming course. My professor did not inform me before registering that Python experience would be useful. I have no experience with Python and need help on this project. Any examples or clear explanations would be great. Here is the project:</p>
<p>Make a copy of the FirDepartment.gdb file geodatabase at \cartggp\Geo573&673_Lab\Lab5_Part2
to your computer. Write a python stand-alone script
to get a count of all the single-family homes and multifamily structures in each of the 44
fire response zones: (100 points)</p>
<p>1) Each fire response zone has a corresponding feature class in the database with
name structure âFireBoxMap_idâ. There is also a building footprint feature class,
BldgFoorprints, with a use code field named UseCode (1=single-family,
2=multi- family)</p>
<p>2) First of all, you need to use ListFeatureClasses to create a list object that contains
all the âFireBoxMap_idâ feature classes. Then use a for loop structure to go over
each zone.</p>
<p>3) In the for loop structure for each zone feature class, you need to write codes to
add two new fields to have the counts of single-family and multi-family
buildings. (check the use of AddField geoprocessing tool)</p>
<p>4) Then select all the single-family buildings that are within the zone, get the count,
assign the count to one the two new field for single-family counts; and then
select all the multi-family buildings that are within the zone, get the count, assign
the count to the other new field for multi-family counts. (check the use of
CalculateField geoprocessing tool)</p>
<p>5) Tools you may need:
AddFieldDelimiters for the where_clause
MakeFeatureLayer (set the where_clause to only retrieve only single-family or
multi-family buildings.
SelectLayerByLocation (use âHAVE_THEIR_CENTER_INâ option)
GetCount</p>
<p>Here's what I have on step 2 so far:</p>
<pre><code>import arcpy
import os
arcpy.env.workspace = "D:\Fehr10\Fehr_Python\Datapdf\FireDepartment.gdb"
featureclasses = arcpy.ListFeatureClasses()
</code></pre>
<p>It's not much. I based this off of an example in GIS's help database. </p>
| -1 | 2016-09-29T14:35:36Z | 40,080,198 | <p>Firstly to install an ArcGIS desktop(trial license is required);
The link to get trial license is as below:
<a href="http://www.esri.com/software/arcgis/arcgis-for-desktop/free-trial" rel="nofollow">arcgis trial license</a></p>
<p>Secondly, click help of ArcGIS desktop to go through the help document of arcPy.
As the screenshot below:
<a href="https://i.stack.imgur.com/MBaeM.png" rel="nofollow"><img src="https://i.stack.imgur.com/MBaeM.png" alt="arcpy help in ArcGIS desktop"></a>
Other help documents for python is necessary either.</p>
| 0 | 2016-10-17T06:57:25Z | [
"python",
"gis",
"arcgis"
]
|
pymssql get SQL command - last_executed | 39,773,253 | <p>I have a python script what isnert data to a MS SQL databse. Part of my code:</p>
<pre><code>cursor.execute("INSERT INTO TABLE (COL1, COL2, COL3) VALUES (%s,%s,%s)", (a,b,c))
</code></pre>
<p>The a, b and c are variables.
I want to print executed SQL command. Is this possible?</p>
<p>My goal is log/print every executed SQL command e.g.:</p>
<pre><code>INSERT INTO TABLE (COL1, COL2, COL3) VALUES ('Lemon','Apple','Orange')
</code></pre>
<p>Like a _last_executed attribute on pymysql cursor.</p>
<p>Thanks!</p>
| 0 | 2016-09-29T14:36:03Z | 39,773,304 | <p>You can try</p>
<pre><code>print ("INSERT INTO TABLE (COL1, COL2, COL3) VALUES (%s,%s,%s)", (a,b,c))
</code></pre>
| 1 | 2016-09-29T14:38:11Z | [
"python",
"insert",
"pymssql"
]
|
Google App Engine & Python | 39,773,325 | <p>I got a relatively tough one for you today.</p>
<p>I'm currently taking a Web Development course online and as a first assignment, we're asked to install and use Google App Engine, which runs on Python 2.7 -- I have Python 3.5 installed on my computer. </p>
<p>I tried completing the assignment even though I have the wrong Python version installed. Was able to create a project locally and see it in my browser using <code>localhost:8081</code> -- so far so good.</p>
<p>But for the assignment we need to provide a link, a link that can seemingly only be created once you <em>deploy</em> the project. When I deploy however, this is what I get:</p>
<p>Error in question:</p>
<pre><code>*** Running appcfg.py with the following flags:
--oauth2_credential_file=~/.appcfg_oauth2_tokens update
05:47 PM Application: hello-udacity; version: 1
05:47 PM Host: appengine.google.com
05:47 PM Starting update of app: hello-udacity, version: 1
05:47 PM Getting current resource limits.
2016-09-29 17:47:57,389 ERROR appcfg.py:2411 An error occurred processing file '': HTTP Error 403: Forbidden Unexpected HTTP status 403. Aborting.
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u's~hello-udacity').
--- end server output ---
If deploy fails you might need to 'rollback' manually.
The "Make Symlinks..." menu option can help with command-line work.
*** appcfg.py has finished with exit code 1 ***
</code></pre>
<p>So my questions are: </p>
<ol>
<li>What is the root cause of the problem? </li>
<li>What is the solution? </li>
<li>If that solution means uninstalling Python3.5 and reinstalling Python 2.7, what is the easiest way for a beginner like me to do it? </li>
</ol>
<p>If there's anything you think I should know please let me know.</p>
<p>Thanks :)</p>
| 0 | 2016-09-29T14:39:04Z | 39,774,073 | <p><strong>1&2</strong>: You need to change the application name in the <code>application: hello-udacity</code> line in your <code>app.yaml</code> to match <strong>your real</strong> app name that you obtain when you create your GAE project in the developer console (likely someone else already created an app with that name before which you don't have permissions to modify, which is why your deployment fails).</p>
<p><strong>3</strong>: It's likely to run into problems with python 3.5 on GAE, you <strong>should</strong> install 2.7. But you can do that without uninstalling 3.5, both versions should be able to co-exist on the same machine.</p>
| 1 | 2016-09-29T15:14:08Z | [
"python",
"python-2.7",
"python-3.x",
"google-app-engine"
]
|
Merge SPSS variables if they don't exist in the original file, using Python | 39,773,328 | <p>I have an SPSS file that I am removing unwanted variables from, but want to bring in variables from elsewhere if they don't exist. So, I am looking for some Python code to go into my syntax to say - keep all the variables from a list and if any of these don't exist in the first file, then merge them in from the second file. (Python rookie here..) </p>
<p>Thanks!</p>
| 2 | 2016-09-29T14:39:11Z | 39,774,890 | <p>Here's an apporach to get you started:</p>
<pre><code>DATA LIST FREE / ID A B C D E.
BEGIN DATA
1 11 12 13 14 15
END DATA.
DATASET NAME DS1.
DATA LIST FREE / ID D E F G H.
BEGIN DATA
1 24 25 26 27 28
END DATA.
DATASET NAME DS2.
BEGIN PROGRAM PYTHON.
import spssaux, spss
spss.Submit("dataset activate ds1.")
ds1vars=[v.VariableName for v in spssaux.VariableDict()]
spss.Submit("dataset activate ds2.")
ds2vars=[v.VariableName for v in spssaux.VariableDict()]
extravars = [v for v in ds2vars if v not in ds1vars]
spss.Submit("""
DATASET ACTIVATE DS2.
ADD FILES FILE=* /KEEP=ID %s.
MATCH FILES FILE=DS1 /TABLE DS2 /BY ID.
DATASET NAME DS3.
DATASET ACTIVATE DS3.
""" % (" ".join(extravars) ) )
END PROGRAM PYTHON.
</code></pre>
| 2 | 2016-09-29T15:53:55Z | [
"python",
"merge",
"spss"
]
|
Merge SPSS variables if they don't exist in the original file, using Python | 39,773,328 | <p>I have an SPSS file that I am removing unwanted variables from, but want to bring in variables from elsewhere if they don't exist. So, I am looking for some Python code to go into my syntax to say - keep all the variables from a list and if any of these don't exist in the first file, then merge them in from the second file. (Python rookie here..) </p>
<p>Thanks!</p>
| 2 | 2016-09-29T14:39:11Z | 39,777,762 | <p>If you just <code>match files</code> regardless of what variables are missing, only the variables that exist in the <code>table</code> and do not exist in the <code>file</code> will be added to the <code>file</code>.<br>
Note though you'll have trouble if you have text vars in both files with identical names but different widths.</p>
| 0 | 2016-09-29T18:42:32Z | [
"python",
"merge",
"spss"
]
|
Python multiprocessing - check status of each processes | 39,773,377 | <p>I wonder if it is possible to check how long of each processes take.<br>
for example, there are four workers and the job should take no more than 10 seconds, but one of worker take more than 10 seconds.Is there way to raise a alert after 10 seconds and before process finish the job. <br>
My initial thought is using manager, but it seems I have wait till process finished.<br>
Many thanks. </p>
| 1 | 2016-09-29T14:41:52Z | 39,773,925 | <p>I have found this solution time ago (somewhere here in StackOverflow) and I am very happy with it.</p>
<p>Basically, it uses <a href="https://docs.python.org/3/library/signal.html" rel="nofollow">signal</a> to raise an exception if a process takes more than expected.</p>
<p>All you need to do is to add this class to your code:</p>
<pre><code>import signal
class timeout:
def __init__(self, seconds=1, error_message='TimeoutError'):
self.seconds = seconds
self.error_message = error_message
def handle_timeout(self, signum, frame):
raise TimeoutError(self.error_message)
def __enter__(self):
signal.signal(signal.SIGALRM, self.handle_timeout)
signal.alarm(self.seconds)
def __exit__(self, type, value, traceback):
signal.alarm(0)
</code></pre>
<p>Here is a general example of how it works:</p>
<pre><code>import time
with timeout(seconds=3, error_message='JobX took too much time'):
try:
time.sleep(10) #your job
except TimeoutError as e:
print(e)
</code></pre>
<p>In your case, I would add the with statement to the job that your worker need to perform. Then you catch the Exception and you do what you think is best.</p>
<p>Alternatively, you can periodically check if a process is alive:</p>
<pre><code>timeout = 3 #seconds
start = time.time()
while time.time() - start < timeout:
if any(proces.is_alive() for proces in processes):
time.sleep(1)
else:
print('All processes done')
else:
print("Timeout!")
# do something
</code></pre>
| 2 | 2016-09-29T15:07:38Z | [
"python"
]
|
Python multiprocessing - check status of each processes | 39,773,377 | <p>I wonder if it is possible to check how long of each processes take.<br>
for example, there are four workers and the job should take no more than 10 seconds, but one of worker take more than 10 seconds.Is there way to raise a alert after 10 seconds and before process finish the job. <br>
My initial thought is using manager, but it seems I have wait till process finished.<br>
Many thanks. </p>
| 1 | 2016-09-29T14:41:52Z | 39,773,926 | <p>You can check whether process is alive after you tried to join it. Don't forget to set timeout otherwise it'll wait until job is finished. </p>
<p>Here is simple example for you </p>
<pre><code>from multiprocessing import Process
import time
def task():
import time
time.sleep(5)
procs = []
for x in range(2):
proc = Process(target=task)
procs.append(proc)
proc.start()
time.sleep(2)
for proc in procs:
proc.join(timeout=0)
if proc.is_alive():
print "Job is not finished!"
</code></pre>
| 0 | 2016-09-29T15:07:40Z | [
"python"
]
|
python - Pandas - FillNa with another non null row having similar column | 39,773,425 | <p>I would like to fill missing value in one column with the value of another column.</p>
<p>I read that looping through each row would be very bad practice and that it would be better to do everything in one go but I could not find out how to do it with the fillna method.</p>
<p>Data Before</p>
<pre><code>Day Cat1 Cat2
1 cat ant
2 dog elephant
3 cat giraf
4 NaN ant
</code></pre>
<p>Data After</p>
<pre><code>Day Cat1 Cat2
1 cat ant
2 dog elephant
3 cat giraf
4 cat ant
</code></pre>
| 2 | 2016-09-29T14:44:19Z | 39,773,579 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> and pass the df without <code>NaN</code> rows, setting the index to <code>Cat2</code> and then calling <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html#pandas.Series.map" rel="nofollow"><code>map</code></a> which will perform a lookup:</p>
<pre><code>In [108]:
df['Cat1'] = df['Cat1'].fillna(df['Cat2'].map(df.dropna().set_index('Cat2')['Cat1']))
df
Out[108]:
Day Cat1 Cat2
0 1 cat ant
1 2 dog elephant
2 3 cat giraf
3 4 cat ant
</code></pre>
<p>So here I drop the <code>NaN</code> rows, and set the index to <code>Cat2</code>, by calling <code>map</code> on this it will lookup Cat1<code>values where</code>Cat2` matches</p>
<p>Here is the result of the <code>map</code>:</p>
<pre><code>In [112]:
df['Cat2'].map(df.dropna().set_index('Cat2')['Cat1'])
Out[112]:
0 cat
1 dog
2 cat
3 cat
Name: Cat2, dtype: object
</code></pre>
| 1 | 2016-09-29T14:51:54Z | [
"python",
"pandas",
"numpy",
"jupyter-notebook"
]
|
python pandas/numpy quick way of replacing all values according to a mapping scheme | 39,773,480 | <p>let's say I have a huge panda data frame/numpy array where each element is a list of ordered values:</p>
<pre><code>sequences = np.array([12431253, 123412531, 12341234,12431253, 145345],
[5463456, 1244562, 23452],
[243524, 141234,12431253, 456367],
[456345, 253451],
[75635, 14145, 12346,12431253])
</code></pre>
<p>or,</p>
<pre><code>sequences = pd.DataFrame({'sequence': [[12431253, 123412531, 12341234,12431253, 145345],
[5463456, 1244562, 23452],
[243524, 141234, 456367,12431253],
[456345, 253451],
[75635, 14145, 12346,12431253]]})
</code></pre>
<p>and I want to replace them with another set of identifiers that start from 0, so I design a mapping like this:</p>
<pre><code>from compiler.ast import flatten
from sets import Set
mapping = pd.DataFrame({'v0': list(Set(flatten(sequences['sequence']))), 'v1': range(len(Set(flatten(sequences['sequence'])))})
</code></pre>
<p>......</p>
<p>so the result I was looking for:</p>
<pre><code>sequences = np.array([1, 2, 3,1, 4], [5, 6, 7], [8, 9, 10,1], [11, 12], [13, 14, 15,1])
</code></pre>
<p>how can I scale this up to a huge data frame/numpy of sequences ?</p>
<p>Thanks so much for any guidance! Greatly appreciated!</p>
| 4 | 2016-09-29T14:47:21Z | 39,773,913 | <p>Here's an approach that flattens into a <code>1D</code> array, uses <code>np.unique</code> to assign unique IDs to each element and then splits back into list of arrays -</p>
<pre><code>lens = np.array(map(len,sequences))
seq_arr = np.concatenate(sequences)
ids = np.unique(seq_arr,return_inverse=1)[1]
out = np.split(ids,lens[:-1].cumsum())
</code></pre>
<p>Sample run -</p>
<pre><code>In [391]: sequences = np.array([[12431253, 123412531, 12341234,12431253, 145345],
...: [5463456, 1244562, 23452],
...: [243524, 141234,12431253, 456367],
...: [456345, 12431253],
...: [75635, 14145, 12346,12431253]])
In [392]: out
Out[392]:
[array([12, 13, 11, 12, 5]),
array([10, 9, 2]),
array([ 6, 4, 12, 8]),
array([ 7, 12]),
array([ 3, 1, 0, 12])]
In [393]: np.array(map(list,out)) # If you need NumPy array as final o/p
Out[393]:
array([[12, 13, 11, 12, 5], [10, 9, 2], [6, 4, 12, 8], [7, 12],
[3, 1, 0, 12]], dtype=object)
</code></pre>
| 3 | 2016-09-29T15:07:09Z | [
"python",
"pandas",
"numpy",
"merge"
]
|
python pandas/numpy quick way of replacing all values according to a mapping scheme | 39,773,480 | <p>let's say I have a huge panda data frame/numpy array where each element is a list of ordered values:</p>
<pre><code>sequences = np.array([12431253, 123412531, 12341234,12431253, 145345],
[5463456, 1244562, 23452],
[243524, 141234,12431253, 456367],
[456345, 253451],
[75635, 14145, 12346,12431253])
</code></pre>
<p>or,</p>
<pre><code>sequences = pd.DataFrame({'sequence': [[12431253, 123412531, 12341234,12431253, 145345],
[5463456, 1244562, 23452],
[243524, 141234, 456367,12431253],
[456345, 253451],
[75635, 14145, 12346,12431253]]})
</code></pre>
<p>and I want to replace them with another set of identifiers that start from 0, so I design a mapping like this:</p>
<pre><code>from compiler.ast import flatten
from sets import Set
mapping = pd.DataFrame({'v0': list(Set(flatten(sequences['sequence']))), 'v1': range(len(Set(flatten(sequences['sequence'])))})
</code></pre>
<p>......</p>
<p>so the result I was looking for:</p>
<pre><code>sequences = np.array([1, 2, 3,1, 4], [5, 6, 7], [8, 9, 10,1], [11, 12], [13, 14, 15,1])
</code></pre>
<p>how can I scale this up to a huge data frame/numpy of sequences ?</p>
<p>Thanks so much for any guidance! Greatly appreciated!</p>
| 4 | 2016-09-29T14:47:21Z | 39,774,016 | <p><strong><em>Option 1</em></strong><br>
Using series definition</p>
<pre><code>stop = sequences.sequence.apply(np.size).cumsum()
start = end.shift().fillna(0).astype(int)
params = pd.concat([start, stop], axis=1, keys=['start', 'stop'])
params.apply(lambda x: list(np.arange(**x)), axis=1)
0 [0, 1, 2, 3]
1 [4, 5, 6]
2 [7, 8, 9]
3 [10, 11]
4 [12, 13, 14]
dtype: object
</code></pre>
| 2 | 2016-09-29T15:11:26Z | [
"python",
"pandas",
"numpy",
"merge"
]
|
Adding a text box to an excel chart using openpyxl | 39,773,544 | <p>I'm trying to add a text box to a chart I've generated with openpyxl, but can't find documentation or examples showing how to do so. Does openpyxl support it?</p>
| 0 | 2016-09-29T14:49:55Z | 39,774,351 | <p>I'm not sure what you mean by "text box". In theory you can add pretty much anything covered by the DrawingML specification to a chart but the practice may be slightly different.</p>
<p>However, there is definitely no built-in API for this so you'd have to start by creating a sample file and working backwards from it.</p>
| 0 | 2016-09-29T15:26:08Z | [
"python",
"openpyxl"
]
|
SQLAlchemy: How do you delete multiple rows without querying | 39,773,560 | <p>I have a table that has millions of rows. I want to delete multiple rows via an in clause. However, using the code:</p>
<pre><code>session.query(Users).filter(Users.id.in_(subquery....)).delete()
</code></pre>
<p>The above code will query the results, and then execute the delete. I don't want to do that. I want speed.</p>
<p>I want to be able to execute (yes I know about the session.execute):<code>Delete from users where id in ()</code></p>
<p><strong>So the Question:</strong> How can I get the best of two worlds, using the ORM? Can I do the delete without hard coding the query?</p>
| 4 | 2016-09-29T14:50:42Z | 39,774,354 | <p>Yep! You can call <code>delete()</code> on the table object with an associated whereclause. </p>
<p>Something like this:</p>
<p><code>stmt = Users.__table__.delete().where(Users.id.in_(subquery...))</code></p>
<p>(and then don't forget to execute the statement: <code>engine.execute(stmt)</code>)</p>
<p><a href="http://docs.sqlalchemy.org/en/latest/core/selectable.html#sqlalchemy.sql.expression.TableClause.delete" rel="nofollow">source</a></p>
| 4 | 2016-09-29T15:26:14Z | [
"python",
"orm",
"sqlalchemy"
]
|
how to implement smart indent in python as done in Matlab | 39,773,564 | <p>Is there anyone who know some way to automate the correct indentation in coding in Python?
For example, when I write codes as follows:</p>
<pre><code>if condition1:
print a
else:
print b
</code></pre>
<p>Then the codes should be formatted as follows:</p>
<pre><code>if condition1:
print a
else:
print b*
</code></pre>
<p>I use Spyder for python programming. Thanks!</p>
| -3 | 2016-09-29T14:50:48Z | 39,773,805 | <p>I would recommend getting <a href="https://atom.io/" rel="nofollow">ATOM</a> it is a great text editor for python. By the way have you tried saving your source file as something.py before you gone ahead and started type in your code? it might be the reason.</p>
| 1 | 2016-09-29T15:02:01Z | [
"python",
"indentation"
]
|
how to implement smart indent in python as done in Matlab | 39,773,564 | <p>Is there anyone who know some way to automate the correct indentation in coding in Python?
For example, when I write codes as follows:</p>
<pre><code>if condition1:
print a
else:
print b
</code></pre>
<p>Then the codes should be formatted as follows:</p>
<pre><code>if condition1:
print a
else:
print b*
</code></pre>
<p>I use Spyder for python programming. Thanks!</p>
| -3 | 2016-09-29T14:50:48Z | 39,774,466 | <p>I am new to python and doing scientific computing. I really like the <a href="https://www.jetbrains.com/pycharm/" rel="nofollow"> PyCharm IDE</a>. I am using the free community version, and it works great for me.</p>
<p>In any case, I believe this is a function of the IDE.</p>
| 1 | 2016-09-29T15:31:58Z | [
"python",
"indentation"
]
|
Modifying lines | 39,773,601 | <p>I'm looking to load a file and modify several predefined lines to certain values.
The following is what I tried:</p>
<pre><code>with open("test.txt",'w') as f:
for i,line in f:
if (i == 2):
f.writelines(serial1)
if (i == 3):
f.writelines(serial2)
if (i == 4):
f.writelines(serial3)
else:
f.writelines(line)
</code></pre>
<p>However, when running the code I got the following error:</p>
<pre><code>for i,line in f:
io.UnsupportedOperation: not readable
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-09-29T14:53:19Z | 39,773,654 | <p>You open the file for write-only (<code>w</code> mode) and try to read it - which is done by the <code>for</code> statement.</p>
<p>That is: iterating over a file with the <code>for</code> statement is done when you are reading the file.</p>
<p>If you plan to write to it, <code>for</code> on the file itself can't help you - just use a normal counter with "range" and write to it.</p>
<pre><code>with open("test.txt",'w') as f:
for i in range(desired_lines):
if (i == 2):
f.write(serial1 + "\n")
if (i == 3):
f.write(serial2 + "\n")
if (i == 4):
f.write(serial3 + "\n")
else:
f.write(line + "\n")
</code></pre>
<p>Also, <code>writelines</code> should be used when you have a list of strings you want to write,
each in a separate line - you don't show the content of your variables, but given that you want the exact line numbers, it looks like <code>writelines</code> is not what you want.</p>
<p>(On a side note - beware of indentation - you should indent a fixed amount for each block you enter in Python - it will work with an arbitrary indentation like you did, but it is not usual in Python code)</p>
<p><strong>update</strong>
It looks like you need to change just some lines of an already existing text file. The best approach for this is by far to recreate another file, replacing the lines you want, and rename everything afterwards.
(Writing over a text file is barely feasible, since all text lines would have to be the same size as the previously existing lines - and still would gain nothing in low-level disk access).</p>
<pre><code>import os
...
lines_to_change = {
    2: serial1,
    3: serial2,
    4: serial3,
}

with open("test.txt", 'rt') as input_file, open("newfile.txt", "wt") as output_file:
    for i, line in enumerate(input_file):   # note: enumerate counts lines from 0
        if i in lines_to_change:
            output_file.write(lines_to_change[i] + '\n')
        else:
            output_file.write(line)
os.rename("test.txt", "test_old.txt")
os.rename("newfile.txt", "test.txt")
os.unlink("test_old.txt")
</code></pre>
| 1 | 2016-09-29T14:55:45Z | [
"python",
"file",
"file-handling"
]
|
Modifying lines | 39,773,601 | <p>I'm looking to load a file and modify several predefined lines to certain values.
The following is what I tried:</p>
<pre><code>with open("test.txt",'w') as f:
for i,line in f:
if (i == 2):
f.writelines(serial1)
if (i == 3):
f.writelines(serial2)
if (i == 4):
f.writelines(serial3)
else:
f.writelines(line)
</code></pre>
<p>However, when running the code I got the following error:</p>
<pre><code>for i,line in f:
io.UnsupportedOperation: not readable
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-09-29T14:53:19Z | 39,773,846 | <p>As pointed out, your issue is that you try to read and write in the same file. The easiest way is to open a new file, and replace the first one when you have the desired output.</p>
<p>To get the line numbers while copying into the new file, you can use <a href="https://docs.python.org/2.3/whatsnew/section-enumerate.html" rel="nofollow">enumerate</a>:</p>
<pre><code>with open("test.txt",'r') as f:
for i,line in enumerate(f):
if (i == 2):
f.writelines(serial1)
if (i == 3):
f.writelines(serial2)
if (i == 4):
f.writelines(serial3)
else:
f.writelines(line)
</code></pre>
| 0 | 2016-09-29T15:03:51Z | [
"python",
"file",
"file-handling"
]
|
Modifying lines | 39,773,601 | <p>I'm looking to load a file and modify several predefined lines to certain values.
The following is what I tried:</p>
<pre><code>with open("test.txt",'w') as f:
for i,line in f:
if (i == 2):
f.writelines(serial1)
if (i == 3):
f.writelines(serial2)
if (i == 4):
f.writelines(serial3)
else:
f.writelines(line)
</code></pre>
<p>However, when running the code I got the following error:</p>
<pre><code>for i,line in f:
io.UnsupportedOperation: not readable
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-09-29T14:53:19Z | 39,773,883 | <p>What you are doing is called editing in place. For that, Python standard library <code>fileinput</code> can help:</p>
<pre><code>import fileinput
for line in fileinput.input('test.txt', inplace=True):
if fileinput.lineno() == 2:
print serial1
elif fileinput.lineno() == 3:
print serial2
elif fileinput.lineno() == 4:
print serial3
else:
        print line,    # trailing comma: 'line' already ends with a newline
</code></pre>
<h1>Update</h1>
<p>As for "What I'm doing wrong?" there are a couple:</p>
<ul>
<li>You opened the file for writing, how can you read from it? You might attempt to change the mode from "w" to "r+", which is read and write. That brings up another dilemma: After reading line 2, the file pointer is positioned at the beginning of line 3, anything you write will overwrite line 3, not to mention the length differences between the old and new lines. It is better to write to a new temp file, then copy it back to the original when done, or you can use <code>fileinput</code> as shown above.</li>
<li><code>for i, line in f</code> does not work, I think you meant <code>for i, line in enumerate(f, 1)</code></li>
<li>What's with the extremely deep indentation?</li>
<li>The <code>if</code> statement should be <code>if ... elif ... else</code>. Currently your else clause is attached only to the <code>if i == 4</code> statement, not the other two.</li>
</ul>
| 2 | 2016-09-29T15:05:29Z | [
"python",
"file",
"file-handling"
]
|
How to create a string interpreter in python that takes propositions and return an answer based on a given dictionary keys? | 39,773,699 | <p>So after 2 days of struggling with this problem I gave up. You are given two inputs: the first one is a list that contains propositions and the second one is a dictionary. </p>
<p>example: </p>
<pre><code>arg= [<prop1>, "OPERATOR", <prop2>]
dicti= {<prop1>: key1, <prop2394>: key2394,<prop2>:key2}
</code></pre>
<p>the following is a possible input:</p>
<pre><code> arg= [<prop1>, "OPERATOR (AND OR )",
[ "NOT" ,["NOT",<prop2>,"OPERATOR"[<prop2>, "OPERATOR", <prop3>]]]
</code></pre>
<p>I am betting that the problem won't be solved without using double recursion. This is my attempt to solve the problem: I started with the base case that the input is a "<em>flat</em>" list, which means a 1D list that has no lists as elements. The program should not return boolean values but return <code>true</code> or <code>false</code>, which are given in the dictionary.</p>
<pre><code>def interpret(arg, keys ):
if not arg :
return "false"
elif not arg and not keys:
return "false"
elif not keys and isinstance(arg,list):
return "true"
elif isinstance(arg[0],list):
return interperter(arg[0:],keys)
else:
trueCnr=0
for i in range(len(arg)):
if arg[i] in keys and keys[arg[i]]=="true":
if isinstance (arg[i], list):
if("NOT" in arg):
indx= arg.index("NOT")
keys[arg[indx+1]]= "true" if keys[arg[indx+1]]=="true" else "false"
trueCnr+=1
print(trueCnr)
if trueCnr==len(arg)-1 and "AND" in arg: return "true"
elif trueCnr!= 0 and "OR" in arg: return "true"
else: return "false"
print (interpret(["door_open", "AND", ["NOT","cat_gone"]], {"door_open" : "false", "cat_gone" : "true", "cat_asleep" : "true"} ))
</code></pre>
<p>My question is how do i proceed from here.</p>
| 1 | 2016-09-29T14:57:30Z | 39,774,730 | <p>You need to use recursion. Define a function <code>eval(prop, keys)</code> to evaluate the proposition. Then match the proposition with a case. For example,</p>
<pre><code>['prop1', 'AND', 'prop2']
</code></pre>
<p>If you see the above, you would evaluate it as </p>
<pre><code>return eval(prop[0], keys) and eval(prop[2], keys)
</code></pre>
<p>A more complex example:</p>
<pre><code>[ "NOT" ,["NOT",'prop2',"OPERATOR"['prop2', "OPERATOR", 'prop3']]]
</code></pre>
<p>would look like</p>
<pre><code>return not eval(prop[1], keys)
</code></pre>
<p>The thing to realize is that at each call to <code>eval</code>, exactly one operator has precedence. Your job is to recognize which operator that is, then pass the rest of the list off to other <code>eval</code> calls to handle.</p>
<p>In pseudocode:</p>
<pre><code>def eval(prop, keys):
if not prop:
return False
if prop is a key:
return keys[prop]
if prop has a binary operator:
return eval(stuff before operator) operator eval(stuff after operator)
if prop has unary operator:
return operator eval(stuff after operator)
</code></pre>
<p>Also, notice that you must return booleans <code>True</code> and <code>False</code> not strings <code>'True'</code> and '<code>False</code>'</p>
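<p>A concrete version of that pseudocode for the nested-list format in the question might look like this (a sketch; it treats the <code>"true"</code>/<code>"false"</code> strings from the dictionary as the truth values, and the final result can be converted back to a string if that is what the assignment requires):</p>
<pre><code>def evaluate(prop, keys):
    # prop is either a name (string) or a list shaped like
    # [left, "AND"/"OR", right] or ["NOT", sub_expression]
    if isinstance(prop, str):
        return keys.get(prop) == "true"
    if not prop:
        return False
    if prop[0] == "NOT":
        return not evaluate(prop[1], keys)
    left, op, right = prop
    if op == "AND":
        return evaluate(left, keys) and evaluate(right, keys)
    if op == "OR":
        return evaluate(left, keys) or evaluate(right, keys)
    raise ValueError("unknown operator: %r" % op)

result = evaluate(["door_open", "AND", ["NOT", "cat_gone"]],
                  {"door_open": "false", "cat_gone": "true", "cat_asleep": "true"})
print("true" if result else "false")   # -> false
</code></pre>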
| 0 | 2016-09-29T15:45:49Z | [
"python",
"recursion",
"functional-programming",
"string-interpolation"
]
|
Parse an XML file to get a full tag by using Python's lxml package | 39,773,780 | <p>I've got the following XML file:</p>
<pre><code><root>
<scene name="scene1">
<view ath="0" atv="10"/>
<image url="img1.jgp"/>
<hotspot name="hot1"/>
</scene>
<scene name="scene2">
<view ath="20" atv="10"/>
<image url="img2.jgp"/>
<hotspot name="hot2"/>
</scene>
</root>
</code></pre>
<p>I'm writing a Python script using lxml package, to get the entire <code>view</code> tag within <code>scene1</code>. That is:</p>
<pre><code><view ath="0" atv="10" />
</code></pre>
<p>I've read the lxml documentation but all I can find is how to get the tag, its attributes or its content, but not the entire tag.</p>
<p>Can anybody at least point me in the right direction? Does lxml has a function or a method to achieve this?</p>
<p>Thanks,</p>
<p>Rafael </p>
| 0 | 2016-09-29T15:01:06Z | 39,774,292 | <p>Your given XML source contains some errors; I fixed those, see my source below:</p>
<pre><code>from lxml import etree
source = """
<root>
<scene name="scene1">
<view ath="0" atv="10" />
<image url="img1.jgp" />
<hotspot name="hot1" />
</scene>
<scene name="scene2">
<view ath="20" atv="10" />
<image url="img2.jgp" />
<hotspot name="hot2" />
</scene>
</root>
"""
</code></pre>
<p>To parse this source you will create an etree:</p>
<pre><code>tree = etree.fromstring(source)
</code></pre>
<p>(For source coming from a file, use <code>etree.parse()</code> instead.)</p>
<p>Now you can browse through the parsed XML by accessing <code>tree</code> properly. My favorite way of doing so is by navigating with XPaths (mastering these is out of scope of your question):</p>
<pre><code>allViews = tree.xpath('//root/scene/view')
for view in allViews:
print view.attrib
</code></pre>
<p>This will print all XML attributes for each view tag found by the XPath:</p>
<pre><code>{'atv': '10', 'ath': '0'}
{'atv': '10', 'ath': '20'}
</code></pre>
<p>Of course you can also access the other attributes of the view elements like their embedded text (which is empty here of course) or their subelements (children) (of course, in your example they also do not have children).</p>
<p>The wording of your question suggests that you might not have built up an understanding of the fact that this <code>view</code> object is indeed "the entire view tag". You can ask the <code>view</code> object for the tag it is made up of (<code>view</code>), for its attributes (see above), its contents (<code>view.text</code>) and even its subelements (<code>view.getchildren()</code>, but there are none).</p>
<p>You can convert the parsed XML structure back to an ASCII representation by calling <code>etree.tostring(view)</code>; this will return a string like <code>'<view ath="20" atv="10"/>\n '</code>. In most cases you will not do this.</p>
<p>You can also access the elements via the children of the elements:</p>
<pre><code>print tree.getchildren()[1].getchildren()[0].attrib
</code></pre>
<p>This will print the XML attributes of the 0th child (a <code>view</code>) of the first child (a <code>scene</code>) of the <code>tree</code> element:</p>
<pre><code>{'atv': '10', 'ath': '20'}
</code></pre>
| 0 | 2016-09-29T15:23:29Z | [
"python",
"xml",
"lxml"
]
|
Parse an XML file to get a full tag by using Python's lxml package | 39,773,780 | <p>I've got the following XML file:</p>
<pre><code><root>
<scene name="scene1">
<view ath="0" atv="10"/>
<image url="img1.jgp"/>
<hotspot name="hot1"/>
</scene>
<scene name="scene2">
<view ath="20" atv="10"/>
<image url="img2.jgp"/>
<hotspot name="hot2"/>
</scene>
</root>
</code></pre>
<p>I'm writing a Python script using lxml package, to get the entire <code>view</code> tag within <code>scene1</code>. That is:</p>
<pre><code><view ath="0" atv="10" />
</code></pre>
<p>I've read the lxml documentation but all I can find is how to get the tag, its attributes or its content, but not the entire tag.</p>
<p>Can anybody at least point me in the right direction? Does lxml has a function or a method to achieve this?</p>
<p>Thanks,</p>
<p>Rafael </p>
| 0 | 2016-09-29T15:01:06Z | 39,774,347 | <p>The XML content is a string like this:</p>
<pre><code>content = u"""\
<root>
<scene name="scene1">
<view ath="0" atv="10"/>
<image url="img1.jgp"/>
<hotspot name="hot1"/>
</scene>
<scene name="scene2">
<view ath="20" atv="10"/>
<image url="img2.jgp"/>
<hotspot name="hot2"/>
</scene>
</root>
"""
</code></pre>
<p>You can parse a file, but here I parse a StringIO:</p>
<pre><code>import io
from lxml import etree

tree = etree.parse(io.StringIO(content))
</code></pre>
<p>Everything is loaded in an <code>ElementTree</code>.</p>
<p>To find the views, I use a XPath expression:</p>
<pre><code>views = tree.xpath("//scene/view")
</code></pre>
<p>The result is always a list:</p>
<pre><code>for view in views:
print(etree.tostring(view, with_tail=False))
</code></pre>
<p>You'll get:</p>
<pre><code><view ath="0" atv="10"/>
<view ath="20" atv="10"/>
</code></pre>
| 0 | 2016-09-29T15:26:03Z | [
"python",
"xml",
"lxml"
]
|
Using Pandas to "applymap" with access to index/column? | 39,773,833 | <p>I'm currently playing around with a few things involving pandas and I'm wondering what the most effective way to solve the following problem is. Here's a simplified example.</p>
<p>Say I have some data in a data frame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,10,size=(10, 4)), columns=['a','b','c','d'],
index=np.random.randint(0,10,size=10))
</code></pre>
<p>This data looks something like this:</p>
<pre><code> a b c d
1 0 0 9 9
0 2 2 1 7
3 9 3 4 0
2 5 0 9 4
1 7 7 7 2
6 4 4 6 4
1 1 6 0 0
7 8 0 9 3
5 0 0 8 3
4 5 0 2 4
</code></pre>
<p>Now I want to apply some function to each value in the data frame (the one below, for example) and get a data frame back as a resulting output. The tricky part is the function I'm applying depends on the value of the index I am currently at.</p>
<pre><code>def f(cell_val,row_val):
try:
return cell_val/row_val
except ZeroDivisionError:
return -1
</code></pre>
<p>Normally, if I wanted to apply a function to each individual cell in the data frame, I would just call applymap on "f". Even if I had to pass in a second argument (row_val, in this case), if the argument was a fixed number I could just write a lambda expression such as "lambda x: f(x,i)" where i is the fixed number I wanted. However, my second argument varies depending on the row in the data frame I am currently calling the function from, which means that I can't just use applymap.</p>
<p>How would I go about solving a problem like this efficiently? I can think of a few ways to do this, but none of them feel "right". I could loop through each individual value and replace them one by one, but that seems really awkward and slow. I could also do something like creating a completely separate data frame containing (cell value, row value) tuples and use the built in pandas applymap on my tuple data frame. But that seems pretty hacky and I'm also creating a completely separate data frame as an extra step.</p>
<p>There must be a better solution to this (a fast solution would be appreciated, because there is a possibility that my data frame ends up being very large).</p>
| 2 | 2016-09-29T15:03:16Z | 39,773,894 | <p>IIUC you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="nofollow"><code>div</code></a> with <code>axis=0</code> plus you need to convert the <code>Index</code> object to a <code>Series</code> object using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a>:</p>
<pre><code>In [121]:
df.div(df.index.to_series(), axis=0).replace(np.inf, -1)
Out[121]:
a b c d
1 0.000000 0.000000 9.000000 9.000000
0 -1.000000 -1.000000 -1.000000 -1.000000
3 3.000000 1.000000 1.333333 0.000000
2 2.500000 0.000000 4.500000 2.000000
1 7.000000 7.000000 7.000000 2.000000
6 0.666667 0.666667 1.000000 0.666667
1 1.000000 6.000000 0.000000 0.000000
7 1.142857 0.000000 1.285714 0.428571
5 0.000000 0.000000 1.600000 0.600000
4 1.250000 0.000000 0.500000 1.000000
</code></pre>
<p>Additionally, as division by zero results in <code>inf</code>, you need to call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html#pandas.DataFrame.replace" rel="nofollow"><code>replace</code></a> to replace those rows with <code>-1</code>.</p>
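<p>One small caveat (an addition, not part of the original answer): a cell that is itself <code>0</code> on a <code>0</code> index gives <code>NaN</code> rather than <code>inf</code>, so to mirror the question's <code>-1</code> fallback completely you may also want to fill those:</p>
<pre><code>df.div(df.index.to_series(), axis=0).replace(np.inf, -1).fillna(-1)
</code></pre>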
| 2 | 2016-09-29T15:05:51Z | [
"python",
"python-3.x",
"pandas"
]
|
Using Pandas to "applymap" with access to index/column? | 39,773,833 | <p>I'm currently playing around with a few things involving pandas and I'm wondering what the most effective way to solve the following problem is. Here's a simplified example.</p>
<p>Say I have some data in a data frame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,10,size=(10, 4)), columns=['a','b','c','d'],
index=np.random.randint(0,10,size=10))
</code></pre>
<p>This data looks something like this:</p>
<pre><code> a b c d
1 0 0 9 9
0 2 2 1 7
3 9 3 4 0
2 5 0 9 4
1 7 7 7 2
6 4 4 6 4
1 1 6 0 0
7 8 0 9 3
5 0 0 8 3
4 5 0 2 4
</code></pre>
<p>Now I want to apply some function to each value in the data frame (the one below, for example) and get a data frame back as a resulting output. The tricky part is the function I'm applying depends on the value of the index I am currently at.</p>
<pre><code>def f(cell_val,row_val):
try:
return cell_val/row_val
except ZeroDivisionError:
return -1
</code></pre>
<p>Normally, if I wanted to apply a function to each individual cell in the data frame, I would just call applymap on "f". Even if I had to pass in a second argument (row_val, in this case), if the argument was a fixed number I could just write a lambda expression such as "lambda x: f(x,i)" where i is the fixed number I wanted. However, my second argument varies depending on the row in the data frame I am currently calling the function from, which means that I can't just use applymap.</p>
<p>How would I go about solving a problem like this efficiently? I can think of a few ways to do this, but none of them feel "right". I could loop through each individual value and replace them one by one, but that seems really awkward and slow. I could also do something like creating a completely separate data frame containing (cell value, row value) tuples and use the built in pandas applymap on my tuple data frame. But that seems pretty hacky and I'm also creating a completely separate data frame as an extra step.</p>
<p>There must be a better solution to this (a fast solution would be appreciated, because there is a possibility that my data frame ends up being very large).</p>
| 2 | 2016-09-29T15:03:16Z | 39,774,519 | <p>Here's how you can add the index to the dataframe</p>
<pre><code>pd.DataFrame(df.values + df.index.values[:, None], df.index, df.columns)
</code></pre>
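<p>The same vectorised idea can be adapted to the division-with-fallback from the question (a sketch, not from the original answer; it assumes the numeric <code>df</code> and the <code>np</code>/<code>pd</code> imports from the question):</p>
<pre><code>idx = df.index.values[:, None].astype(float)           # the index as a column vector
with np.errstate(divide='ignore', invalid='ignore'):   # silence warnings for index 0
    result = pd.DataFrame(np.where(idx == 0, -1, df.values / idx),
                          index=df.index, columns=df.columns)
</code></pre>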
| 1 | 2016-09-29T15:34:46Z | [
"python",
"python-3.x",
"pandas"
]
|
GCP python sdk running dev server with https protocol | 39,773,865 | <pre><code>/google-cloud-sdk/bin/dev_appserver.py --host example.com out/app_engine/
</code></pre>
<p>This command runs the development server with a custom domain and a custom port according to the -h parameter, but there is no way to specify an SSL domain: it gives an error when I specify something like <a href="https://example.com" rel="nofollow">https://example.com</a> or <a href="http://example.com" rel="nofollow">http://example.com</a>. Is there any way to specify the protocol separately, so that I can run on a secure domain?
Thanks</p>
| 0 | 2016-09-29T15:04:41Z | 39,881,763 | <p><strong>This isn't possible at the moment</strong>: see <a href="https://code.google.com/p/googleappengine/issues/detail?id=960" rel="nofollow">here</a> for details.</p>
| 0 | 2016-10-05T19:02:03Z | [
"python",
"ssl",
"google-cloud-platform",
"google-app-engine-python"
]
|
Unexpected behaviour with numpy advanced slicing in named arrays | 39,773,922 | <p>When using numpy named arrays I observe a different behaviour in the following two cases:</p>
<ol>
<li>case: first using an index array for advanced slicing and then selecting a subarray by name</li>
<li>case: first selecting a subarray by name and then using an index array for advanced slicing</li>
</ol>
<p>The follwing code presents an example</p>
<pre><code>import numpy as np
a = np.ones(5)
data = np.array(zip(a, a, a), dtype=[("x", float), ("y", float), ("z", float)])
# case 1
# does not set elements 1, 3 and 4 of data to 22
data[[1, 3, 4]]["y"] = 22
print data["y"] # -> [ 1. 1. 1. 1. 1.]
# case 2
# set elements 1, 3 and 4 of data to 22
data["y"][[1, 3, 4]] = 22
print data["y"] # -> [ 1. 22. 1. 22. 22.]
</code></pre>
<p>The output of the two print commands is
[ 1. 1. 1. 1. 1.] and [ 1. 22. 1. 22. 22.]. Why does changing the order of the selections lead to different results when setting the elements?</p>
| 3 | 2016-09-29T15:07:34Z | 39,774,790 | <p>Indexing with a list or array <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays" rel="nofollow">always returns a copy rather than a view</a>:</p>
<pre><code>In [1]: np.may_share_memory(data, data[[1, 3, 4]])
Out[1]: False
</code></pre>
<p>Therefore the assignment <code>data[[1, 3, 4]]["y"] = 22</code> is modifying a <em>copy</em> of <code>data[[1, 3, 4]]</code>, and the original values in <code>data</code> will be unaffected.</p>
<p>On the other hand, referencing a field of a structured array <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html#introduction" rel="nofollow">returns a view</a>:</p>
<pre><code>In [2]: np.may_share_memory(data, data["y"])
Out[2]: True
</code></pre>
<p>so assigning to <code>data["y"][[1, 3, 4]]</code> <em>will</em> affect the corresponding elements in <code>data</code>.</p>
| 3 | 2016-09-29T15:48:50Z | [
"python",
"numpy",
"structured-array"
]
|
How to sum pivot data with a DIFF in time by one of the indexes | 39,774,097 | <p>Sorry the Title of the post isn't clear and that's probably why I'm struggling to google this in Python/Pandas. I'm wondering if I need to groupby and then diff data....</p>
<p>I'm trying to find the total time a person is using a web site by finding the differences between page access time in an activity log. I'm using SHIFT in python to get the previous (or next) record as a new column in a data frame. I'm then comparing the difference in time this way. This all works and I have asked about this before.</p>
<p>The problem I can't work out is
How to "DIFF" times in the activity log <em>only</em> where the times are for the same user or session. I'm currently DIFFing all records in a data frame but this is clearly wrong when the user or the session ID changes. I'm getting very high values in some places.</p>
<p>Here's my example in python code. </p>
<p>How do I only DIFF for the same "B" records and start again for a new "B" value.</p>
<p>Thanks
Jason</p>
<pre><code># python diff summing
import urllib
import numpy as np
import pandas as pd
import time
import datetime
import math
# Using lamba as in reality I'm diffing 2 dates and returning
# difference in seconds. and checking for zero or NAN
def my_diff(a, b):
c = abs(b-a)
if c <1:
c=1
if math.isnan(c):
c=1
return c
# Required -sum the total time in seconds
# where a user (A) spends time in one session (B)
# I want the total time for all sessions per user but
# having trouble DIFFing between values *only*
# in the same session
# In real data the data is also sorted by
# user, session, datetime so no need to sort.
# A is userid
# B is session ID so will be an integer and unique
# C is access timestamp but just a random number here.
# D is irrelevant here
df = pd.DataFrame({'A' : ['user1', 'user1', 'user1', 'user1',
'user2', 'user2', 'user2', 'user2'],
'B' : ['abc123', 'random', 'jeff', 'gjgjg',
'four', 'five', 'six', 'seven'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
# This I think is wrong as I'm DIFFing next/previous record even
# though B the session ID might change
# I only want to DIFF where the session IDs are the same
# i.e. same user and same session.
df['PREVIOUS_C'] = df.C.shift(-1)
df['DIFF'] = df.apply(lambda row: my_diff(row.C, row.PREVIOUS_C), axis=1)
print df
# This is almost what I'm after but I want DIFF to only be difference
# where the session IDs (and user) are the same.
# Diffing over a change of A or B does not make sense here.
pivot = df.pivot_table(index=['A','B'], values=["DIFF"],aggfunc={np.sum},fill_value=0)
print pivot # Desired output to be saved to CSV
# pivot again to get total for each user
</code></pre>
| 0 | 2016-09-29T15:15:01Z | 39,774,545 | <p>In the example you provided there are no common session IDs, so your difference column will just have null values, but what you want is this:</p>
<pre><code>df['diff'] = df.groupby(['A','B'])['C'].transform(lambda x:x - x.shift(-1))
</code></pre>
<p>If you want to see the difference of C by just username, and not username + session ID, you would do something similar:</p>
<pre><code>df['diff'] = df.groupby(['A'])['C'].transform(lambda x:x - x.shift(-1))
</code></pre>
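<p>From there, the per-user / per-session totals the question asks for are just a groupby sum on the new column, e.g. (a sketch building on the lines above, using the absolute gap as in the question's <code>my_diff</code>):</p>
<pre><code>per_session = df.groupby(['A', 'B'])['diff'].apply(lambda s: s.abs().sum())
per_user = per_session.groupby(level='A').sum()
</code></pre>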
| 1 | 2016-09-29T15:35:50Z | [
"python",
"pandas"
]
|
How to make a ball go around the edge of the screen continuously? (Python 2.7) | 39,774,109 | <p>I need help with my animation code. So far i have the ball go around 3 edges of the screen. But i don't know how to make it go around the last screen. </p>
<pre><code>#-------------------------------------------------------------------------------
# Name: U1A4.py
# Purpose: To animate the ball going around the edge of the screen
#-------------------------------------------------------------------------------
import pygame
import sys
pygame.init()
# Screen
screenSize = (800,600)
displayScreen = pygame.display.set_mode(screenSize,0)
pygame.display.set_caption("Animation Assignment 1")
# Colours
WHITE = (255,255,255)
GREEN = (0,255,0)
RED = (255,0,0)
BLUE = (0,0,255)
displayScreen.fill(WHITE)
pygame.display.update()
# ----------------- Leave animation code here ---------------------------------#
# THU/09/29
# Need to complete the last turn with the ball
x = 50
y = 50
dx = 0
dy = 2
stop = False
while not stop:
for event in pygame.event.get():
if event.type ==pygame.QUIT:
stop = True
displayScreen.fill(WHITE)
x = x + dx
y = y + dy
if (x>=750):
dx = 0
dy = -2
if (y>=550)and dy>0:
dy = 0
dx = 2
if (x>=750)and dy>0:
dy = 0
dx = 2
if (y>=550)and dy>0:
dx = 0
dy = -2
pygame.draw.circle(displayScreen, GREEN, (x,y),50, 0)
pygame.display.update()
pygame.quit()
sys.exit()
</code></pre>
<p>The ball needs to go around the border of the screen continuously, any hints or direct answers are welcome. Thanks.</p>
| -1 | 2016-09-29T15:15:30Z | 39,774,514 | <p>The ball only changes directions in the corners, so you just need to cover four cases (two <code>if</code>'s nested in two <code>if</code>'s):</p>
<pre><code>x += dx
y += dy
if x >= 750:
if y >= 550:
dx = 0
dy = -2
elif y <= 50:
dx = -2
dy = 0
elif x <= 50:
if y >= 550:
dx = 2
dy = 0
elif y <= 50:
dx = 0
dy = 2
</code></pre>
| 0 | 2016-09-29T15:34:36Z | [
"python",
"python-2.7",
"pygame"
]
|
How to make a ball go around the edge of the screen continuously? (Python 2.7) | 39,774,109 | <p>I need help with my animation code. So far i have the ball go around 3 edges of the screen. But i don't know how to make it go around the last screen. </p>
<pre><code>#-------------------------------------------------------------------------------
# Name: U1A4.py
# Purpose: To animate the ball going around the edge of the screen
#-------------------------------------------------------------------------------
import pygame
import sys
pygame.init()
# Screen
screenSize = (800,600)
displayScreen = pygame.display.set_mode(screenSize,0)
pygame.display.set_caption("Animation Assignment 1")
# Colours
WHITE = (255,255,255)
GREEN = (0,255,0)
RED = (255,0,0)
BLUE = (0,0,255)
displayScreen.fill(WHITE)
pygame.display.update()
# ----------------- Leave animation code here ---------------------------------#
# THU/09/29
# Need to complete the last turn with the ball
x = 50
y = 50
dx = 0
dy = 2
stop = False
while not stop:
for event in pygame.event.get():
if event.type ==pygame.QUIT:
stop = True
displayScreen.fill(WHITE)
x = x + dx
y = y + dy
if (x>=750):
dx = 0
dy = -2
if (y>=550)and dy>0:
dy = 0
dx = 2
if (x>=750)and dy>0:
dy = 0
dx = 2
if (y>=550)and dy>0:
dx = 0
dy = -2
pygame.draw.circle(displayScreen, GREEN, (x,y),50, 0)
pygame.display.update()
pygame.quit()
sys.exit()
</code></pre>
<p>The ball needs to go around the border of the screen continuously, any hints or direct answers are welcome. Thanks.</p>
| -1 | 2016-09-29T15:15:30Z | 39,774,752 | <p>Here's my stab at your problem:</p>
<pre><code>import sys, pygame
pygame.init()
size = width, height = 800, 800
speed = [1, 0]
black = 0, 0, 0
screen = pygame.display.set_mode(size)
ball = pygame.image.load("ball.bmp")
ballrect = ball.get_rect()
while 1:
for event in pygame.event.get():
if event.type == pygame.QUIT: sys.exit()
ballrect = ballrect.move(speed)
if ballrect.right > width:
speed = [0, 1]
if ballrect.left < 0:
speed = [0, -1]
if (ballrect.bottom > height) and not (ballrect.left < 0):
speed = [-1,0]
if (ballrect.top < 0) and not (ballrect.right > width):
speed = [1, 0]
screen.fill(black)
screen.blit(ball, ballrect)
pygame.display.flip()
</code></pre>
<p>Makes me feel a bit sick.</p>
<p>Edit - used this for ball.bmp: </p>
<p><a href="http://everitas.rmcclub.ca/wp-content/uploads/2007/11/soccer_ball_1.bmp" rel="nofollow">http://everitas.rmcclub.ca/wp-content/uploads/2007/11/soccer_ball_1.bmp</a></p>
| 1 | 2016-09-29T15:47:07Z | [
"python",
"python-2.7",
"pygame"
]
|
Method that plots i all the x values to a given math function | 39,774,168 | <p>Hey, I need help with programming a method that plots all the x values for a given math function</p>
<pre><code>def compute(expression): #The expression is where the math function is gonna be placed
print("Evaluerer", expression)
for i in range(0,1001):
x=i/100.0 #This is all the x-values, which needs to be put into the expression
</code></pre>
<p>Thanks for replying </p>
| 0 | 2016-09-29T15:18:06Z | 39,774,583 | <p>Define your function :</p>
<pre><code>def f(x):
return x #put here your expression of f(x)
</code></pre>
<p>Then plot your function as follows:</p>
<pre><code>import matplotlib.pyplot as plt

plt.plot([i/100.0 for i in range(0, 1001)], [f(i/100.0) for i in range(0, 1001)])
plt.show()
</code></pre>
| 0 | 2016-09-29T15:37:57Z | [
"python"
]
|
Design pattern in Python with a lot of customer specific exceptions | 39,774,231 | <p>At this moment we are working on a large project. </p>
<p>This project is supposed to create EDIFACT messages. This is not so hard at first but the catch is there are a lot of customers that have their own implementation of the standard. </p>
<p>On top of that we are working with several EDIFACT standards (D96A and D01B in our case.)</p>
<p>Some customer exceptions might be as small as having a divergent field length, but some have made their own implementation completely different.</p>
<p>At this moment we have listed the customer exceptions in a list (Just to keep them consistent) and in the code we use something like:</p>
<pre><code>if NAME_LENGTH_IS_100 in customer_exceptions:
this.max_length = 100
else:
this.max_length = 70
</code></pre>
<p>For a couple of simple exceptions this works just fine, but at this moment the code starts to get really cluttered and we are thinking about refactoring the code. </p>
<p>I am thinking about some kind of factory pattern, but I am not sure about the implementation.
Another option would be to create a base package and make a separate implementation for every customer that is diverging from the standard.</p>
<p>I hope someone can help me out with some advice. </p>
<p>Thanks in advance. </p>
| 0 | 2016-09-29T15:20:54Z | 39,774,658 | <p>Why not put all of this in a resource file, with the standard as the default and each exception handled as an overriding value? Then you'll just need to read the right key for the right client and your code stays clean.</p>
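<p>A rough sketch of that idea (the keys and values below are assumptions based on the question's field-length example):</p>
<pre><code>DEFAULTS = {"name_max_length": 70}

CUSTOMER_OVERRIDES = {
    "customer_a": {"name_max_length": 100},
    "customer_b": {},  # follows the standard everywhere
}

def setting(customer, key):
    return CUSTOMER_OVERRIDES.get(customer, {}).get(key, DEFAULTS[key])

print(setting("customer_a", "name_max_length"))  # 100
print(setting("customer_b", "name_max_length"))  # 70
</code></pre>
<p>In practice the two dictionaries would live in a resource file (JSON, YAML, INI, ...) rather than in the code itself.</p>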
| 0 | 2016-09-29T15:42:27Z | [
"python"
]
|
Design pattern in Python with a lot of customer specific exceptions | 39,774,231 | <p>At this moment we are working on a large project. </p>
<p>This project is supposed to create EDIFACT messages. This is not so hard at first but the catch is there are a lot of customers that have their own implementation of the standard. </p>
<p>On top of that we are working with several EDIFACT standards (D96A and D01B in our case.)</p>
<p>Some customer exceptions might be as small as having a divergent field length, but some have made their own implementation completely different.</p>
<p>At this moment we have listed the customer exceptions in a list (Just to keep them consistent) and in the code we use something like:</p>
<pre><code>if NAME_LENGTH_IS_100 in customer_exceptions:
this.max_length = 100
else:
this.max_length = 70
</code></pre>
<p>For a couple of simple exceptions this works just fine, but at this moment the code starts to get really cluttered and we are thinking about refactoring the code. </p>
<p>I am thinking about some kind of factory pattern, but I am not sure about the implementation.
Another option would be to create a base package and make a separate implementation for every customer that is diverging from the standard.</p>
<p>I hope someone can help me out with some advice. </p>
<p>Thanks in advance. </p>
| 0 | 2016-09-29T15:20:54Z | 39,774,702 | <p>I think your question is too broad to be answered properly (I was up to click the close button because of this but decided otherwise). The reason for this is the following:</p>
<p>There is nothing wrong with the code snippet you provided. It should be part of some kind of initialization routine, and then it is just fine the way it is. It also doesn't hurt to have things like this in a large amount.</p>
<p>But how to handle more complex cases depends greatly on the cases themselves.</p>
<ul>
<li>For lots of situations it might be sufficient to have such variables which represent the customer's special choices.</li>
<li>For other aspects I'd propose to have a <code>Customer</code> base class with subclasses thereof, for each customer one (or maybe the customers can even be hierarchically grouped, then a nice inheritance tree could reflect this) - see the sketch after this list.</li>
<li>For other cases again I'd propose aspect-oriented programming by use of Python decorators to tweak the behavior of methods, functions, and classes.</li>
</ul>
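<p>As a sketch of the <code>Customer</code> base class idea (the field name is an assumption based on the question's length example):</p>
<pre><code>class Customer(object):
    # EDIFACT standard default
    name_max_length = 70

    def format_name(self, name):
        return name[:self.name_max_length]


class CustomerWithLongNames(Customer):
    # customer-specific deviation from the standard
    name_max_length = 100
</code></pre>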
<p>Since this depends greatly on your concrete usecases, I think this question cannot be answered more concretely than this.</p>
| 1 | 2016-09-29T15:44:35Z | [
"python"
]
|
Pandas pivot_table to calculate share of margin | 39,774,314 | <p>I have a <code>Pandas</code> <code>DataFrame</code> named <code>df</code> that contains <code>n</code> <code>columns</code>. One of the <code>columns</code> is named <code>COUNT</code>, which shows how many times values in <code>A</code> occurs. <code>A</code> contains unique identifiers so every row has the value <code>1</code> in the <code>column</code> <code>COUNT</code>. It looks like this:</p>
<pre><code> A B C D E COUNT
id1 cat1 1 a 15 1
id2 cat2 2 b 14 1
id3 cat2 2 c 14 1
id4 cat1 1 d 15 1
id5 cat3 2 e 14 1
.....
</code></pre>
<p>Now I want to transform my <code>df</code> to look like this:</p>
<pre><code> 14 15
cat1_tot NaN 2
cat1_share NaN 1
cat2_tot 2 NaN
cat2_share 0.6666 NaN
cat3_tot 1 NaN
cat3_share 0.3333 NaN
All 3 2
</code></pre>
<p>I can get <code>catx_tot</code> by using <code>pd.pivot_table</code> </p>
<pre><code>pd.pivot_table(
df,
values='COUNT',
index=['B'],
columns=['E'],
margins=True,
aggfunc=np.sum
)
</code></pre>
<p>But how do I add share to this?</p>
| 2 | 2016-09-29T15:24:58Z | 39,774,942 | <p>combine <code>groupby.size</code> with <code>groupby.transform</code></p>
<pre><code>size = df.groupby(['B', 'E']).size()
sums = size.groupby(level='E').transform(np.sum)
aggd = pd.concat([size, size / sums], axis=1, keys=['total', 'share'])
aggd.unstack().stack(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/iKjc9.png" rel="nofollow"><img src="http://i.stack.imgur.com/iKjc9.png" alt="enter image description here"></a></p>
<hr>
<p>to get the <code>All</code> row</p>
<pre><code>all_ = aggd.groupby(level='E').sum().total.rename(('All', 'total'))
aggd.unstack().stack(0).append(all_)
</code></pre>
<p><a href="http://i.stack.imgur.com/JTfST.png" rel="nofollow"><img src="http://i.stack.imgur.com/JTfST.png" alt="enter image description here"></a></p>
| 2 | 2016-09-29T15:56:04Z | [
"python",
"pandas",
"pivot-table"
]
|
lasagne.layers.DenseLayer: "__init__() takes at least 3 arguments" | 39,774,325 | <p>I'm using Lasagne+Theano to create a ResNet and am struggling with the use of DenseLayer. If i use the example on <a href="http://lasagne.readthedocs.io/en/latest/modules/layers/dense.html" rel="nofollow">http://lasagne.readthedocs.io/en/latest/modules/layers/dense.html</a> it works.</p>
<pre><code>l_in = InputLayer((100, 20))
l1 = DenseLayer(l_in, num_units=50)
</code></pre>
<p>But if I want to use it in my project:</p>
<pre><code>#other layers
resnet['res5c_branch2c'] = ConvLayer(resnet['res5c_branch2b'], num_filters=2048, filter_size=1, pad=0, flip_filters=False)
resnet['pool5'] = PoolLayer(resnet['res5c'], pool_size=7, stride=1, mode='average_exc_pad', ignore_border=False)
resnet['fc1000'] = DenseLayer(resnet['pool5'], num_filter=1000)
Traceback (most recent call last):
  File "convert_resnet_101_caffe.py", line 167, in <module>
    resnet['fc1000'] = DenseLayer(resnet['pool5'], num_filter=1000)
TypeError: __init__() takes at least 3 arguments (2 given)
</code></pre>
| 1 | 2016-09-29T15:25:24Z | 39,774,510 | <p><code>DenseLayer</code> takes two positional arguments: <code>incoming, num_units</code>. You are instantiating it like this:</p>
<pre><code>DenseLayer(resnet['pool5'], num_filter=1000)
</code></pre>
<p>Note that this is different than the example code:</p>
<pre><code>DenseLayer(l_in, num_units=50)
</code></pre>
<p>Since you are passing a keyword argument that is <em>not</em> <code>num_units</code> as the second argument, I think <code>num_filter</code> is being interpreted as one of the <code>**kwargs</code>, and <code>DenseLayer</code> is still wanting that <code>num_units</code> argument, and raising an error since you don't provide it.</p>
<p>You can either provide a <code>num_units</code> argument before <code>num_filter</code>, or if that was just a typo, change <code>num_filter</code> to <code>num_units</code>. (The second option seems more likely to me since, although I am not familiar with the library that you are using, I do not see any reference to <code>num_filter</code> in the documentation you linked, although some classes seem to take a <code>num_filters</code> - note the trailing <code>s</code> - argument.)</p>
| 1 | 2016-09-29T15:34:23Z | [
"python",
"deep-learning",
"theano",
"lasagne"
]
|
How to take a partial web snapshot in phantomjs using python? | 39,774,342 | <p>I'm using phantomjs to take a snapshot of a webpage (for example: <a href="http://www.baixaki.com.br/" rel="nofollow">http://www.baixaki.com.br/</a> ) using python.</p>
<p>here is the code:</p>
<pre><code>from selenium import webdriver
driver = webdriver.PhantomJS() # or add to your PATH
driver.get('http://www.baixaki.com.br/')
driver.save_screenshot('screen6.png') # save a screenshot to disk
</code></pre>
<p>The input is a url, the output is an image.
The problem is that the snap shot generated is narrow and long:
<a href="http://i.stack.imgur.com/hMrZp.png" rel="nofollow"><img src="http://i.stack.imgur.com/hMrZp.png" alt="narrow and long snapshot"></a></p>
<p>I want to capture only what fits in the page without scrolling and full width.
For example, something like this:
<a href="http://i.stack.imgur.com/gNuWV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/gNuWV.jpg" alt="enter image description here"></a></p>
<ul>
<li>I'm looking for a generic solution not a specific one.</li>
</ul>
<p>Would appreciate your help here. </p>
| 1 | 2016-09-29T15:25:55Z | 39,776,824 | <p>You could try cropping the image (I'm using Python 3.5 so you may have to adjust to use StringIO if you're in Python 2.X):</p>
<pre><code>from io import BytesIO
from selenium import webdriver
from PIL import Image
if __name__ == '__main__':
driver = webdriver.PhantomJS('C:<Path to Phantomjs>')
driver.set_window_size(1400, 1000)
driver.get('http://www.baixaki.com.br/')
driver.save_screenshot('screen6.png')
screen = driver.get_screenshot_as_png()
# Crop image
box = (0, 0, 1366, 728)
im = Image.open(BytesIO(screen))
region = im.crop(box)
region.save('screen7.png', 'PNG', optimize=True, quality=95)
</code></pre>
<p>credit where credit is due: <a href="https://gist.github.com/jsok/9502024" rel="nofollow">https://gist.github.com/jsok/9502024</a></p>
| 3 | 2016-09-29T17:45:21Z | [
"python",
"selenium",
"phantomjs"
]
|
How to import Python dependancies in Serverless v1.0 | 39,774,436 | <p>Language: Python
Framework: <a href="https://serverless.com" rel="nofollow">Serverless</a> v1.0</p>
<p>Typically I would run <code>pip freeze > requirements.txt</code> in the project root</p>
<p>How can I get these dependancies packaged into every deploy?</p>
| 0 | 2016-09-29T15:30:41Z | 39,791,686 | <ol>
<li><p>create <code>requirements.txt</code></p>
<p>pip freeze > requirements.txt</p></li>
<li><p>create a folder with all the dependencies: </p>
<p>pip install -t vendored -r requirements.txt</p></li>
</ol>
<p>Note that in order to use these dependencies in the code you'll need to add the following:</p>
<pre><code>import os
import sys
here = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(here, "./vendored"))
</code></pre>
<p>See <a href="http://stackoverflow.com/a/36944792/1111215">http://stackoverflow.com/a/36944792/1111215</a> for another example.</p>
| 0 | 2016-09-30T12:35:44Z | [
"python",
"serverless-framework"
]
|
Python- Boolean Cross-checking | 39,774,444 | <p>I have an assignment on using ONLY boolean statements for this question: "Can I register today? A student can register on Monday if Senior, on Tuesday if Junior, on Wednesday if Sophomore, and on Thursday if Freshman."</p>
<p>Is there a simple way to cross check each and every day and status to see if they are either True or False without using if/else statements? If so, how would the code look?</p>
| 0 | 2016-09-29T15:30:58Z | 39,775,586 | <p>Here is a solution using a Python dictionary and Boolean comparison. </p>
<p>We use a dictionary to create <em>student status</em> to <em>day of week</em> associations.
Then get user input and compare using a Boolean expression.</p>
<pre><code>from datetime import date
import calendar
my_dict = {'Senior' : 'Monday',
'Junior' : 'Tuesday',
'Sophomore' : 'Wednesday',
'Freshman' : 'Thursday'
}
my_date = date.today()
my_status = (raw_input("Enter your enrollment status: ")) #Student current status
today = calendar.day_name[my_date.weekday()] #Day or week
reg_status = today == my_dict[my_status]
#who can register today
who_can_register = my_dict.keys()[my_dict.values().index(today)]
print "Can you register=", reg_status
print "Today is ", today , " Only ", who_can_register, "can register"
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
Enter your enrollment status: Senior
Can you register= False
Today is Thursday Only Freshman can register
>>> ================================ RESTART ================================
>>>
Enter your enrollment status: Junior
Can you register= False
Today is Thursday Only Freshman can register
>>> ================================ RESTART ================================
>>>
Enter your enrollment status: Sophomore
Can you register= False
Today is Thursday Only Freshman can register
>>> ================================ RESTART ================================
>>>
Enter your enrollment status: Freshman
Can you register= True
Today is Thursday Only Freshman can register
>>>
</code></pre>
| 1 | 2016-09-29T16:31:31Z | [
"python",
"boolean"
]
|
Python- Boolean Cross-checking | 39,774,444 | <p>I have an assignment on using ONLY boolean statements for this question: "Can I register today? A student can register on Monday if Senior, on Tuesday if Junior, on Wednesday if Sophomore, and on Thursday if Freshman."</p>
<p>Is there a simple way to cross check each and every day and status to see if they are either True or False without using if/else statements? If so, how would the code look?</p>
| 0 | 2016-09-29T15:30:58Z | 39,776,795 | <p>Perhaps something like this??</p>
<pre><code>from datetime import datetime
def options():
print """
Are you a? [Enter 0-4]
0) Senior
1) Junior
2) Sophomore
3) Freshman
4) quit
"""
print "Can I register today?"
while True:
options()
selection = raw_input("Enter status: ")
try:
selection = int(selection)
except ValueError:
print "Invalid entry please try again"
continue
# if you can't use any if statements
# replace the if statement with a print statement saying to exit using control c
# or above the while loop, create the variable selection = None
# change the while loop from a while True to while selection != 4:
# I leave it up to you :-)
if selection == 4:
break
# 0 is monday
weekday = datetime.today().weekday()
print
print "reply: {0}".format(selection == weekday)
</code></pre>
| 0 | 2016-09-29T17:43:23Z | [
"python",
"boolean"
]
|
Django __str__ function on many to one field is not returning the correct output | 39,774,471 | <p>I'm trying to define the <code>__str__</code> function for models A and D for the purpose of having human-readable text in the admin site. The problem is in model D's <code>__str__</code> function, where <code>__str__</code> should return a many_to_one related field; it does not return the format as intended and instead just shows the default 'D object'.</p>
<pre><code>class A(models.Model):
b = models.CharField(max_length = 32)
c = models.CharField(max_length = 32)
def __str__(self):
return '%s %s' % (self.b, self.c)
class D(models.Model):
e = models.ForeignKey(A)
def __str_(self):
return '%s %s' % (self.e.b, self.e.c)
</code></pre>
| -1 | 2016-09-29T15:32:11Z | 39,774,819 | <p>You've got a typo in your method name. You wrote <code>__str_()</code> when it should be <code>__str__()</code>. So when you're calling <code>str()</code> on your model, it's calling the inherited <code>__str__()</code> method from the parent class.</p>
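<p>For reference, the corrected model from the question only needs the method name changed:</p>
<pre><code>class D(models.Model):
    e = models.ForeignKey(A)

    def __str__(self):  # two trailing underscores
        return '%s %s' % (self.e.b, self.e.c)
</code></pre>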
| 1 | 2016-09-29T15:50:19Z | [
"python",
"django",
"django-admin"
]
|
How to make two django projects share the same database | 39,774,580 | <p>I need to make two separate Django projects share the same database. In <code>project_1</code> I have models creating objects that I need to use in <code>project_2</code> (mostly images).</p>
<p>The tree structure of <code>project_1_2</code> is:</p>
<pre><code>project_1/
manage.py
settings.py
project_1_app1/
...
...
project_2/
manage.py
settings.py
project_2_app1/
...
...
</code></pre>
<p>Which is the best approach?</p>
<p><strong>EDIT</strong>: I'm using sqlite3 in my development environment.</p>
<p>I'd like to keep my two django projects as stand-alone projects (so that both can be upgraded safely from their respective repositories). </p>
<pre><code># in project_1/settings.py
import os
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
..
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
...
# in project_2/settings.py
import os
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
..
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
...
</code></pre>
<p>In this way, each project has its own <code>development.db</code> (the one that I need to be shared):</p>
<pre><code>project_1/development.db
project_2/development.db
</code></pre>
<p>but I guess I need to do something more to make it shared (and unique).
The best for me would be to keep the <code>development.db</code> at <em>project_1/ path</em> and thus set the <em>project_2/settings.py</em> <code>DATABASES</code> to point to <em>project_1/development.db</em>.</p>
| 0 | 2016-09-29T15:37:51Z | 39,781,404 | <p>You can simply define the same database in <code>DATABASES</code> in your settings.py. So, if your database is PostgreSQL, you could do something like this:</p>
<pre><code># in project_1/settings.py
DATABASES = {
'default': {
'NAME': 'common_db',
'ENGINE': 'django.db.backends.postgresql',
'USER': 'project_1_user',
'PASSWORD': 'strong_password_1'
},
}
# in project_2/settings.py
DATABASES = {
'default': {
'NAME': 'common_db',
'ENGINE': 'django.db.backends.postgresql',
'USER': 'project_2_user',
'PASSWORD': 'strong_password_2'
},
}
</code></pre>
<p>Note that both database users (<code>project_1_user</code> and <code>project_2_user</code>) should have the appropriate privileges on the database you wish to use. Or you could instead use the same user for both projects.</p>
<p>If you want to have more than just one database per project, you should take a look at the <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/" rel="nofollow">docs for multiple databases</a>.</p>
<p>On another matter, since you share data, I guess you share functionalities as well. So for example, if <code>project_1_app1</code> and <code>project_2_app1</code> do same (or similar) things, it seems they could instead be a single <a href="https://docs.djangoproject.com/en/1.10/intro/reusable-apps/" rel="nofollow">reusable app</a>.</p>
<p><strong>Edit</strong></p>
<p>Since you use sqlite3, you should ensure the path you use is the same. So, assuming that <code>project_1</code> and <code>project_2</code> are siblings, like so:</p>
<pre><code>projects
project_1
settings.py
...
project_2
settings.py
...
</code></pre>
<p>you should try this:</p>
<pre><code># project_1/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_ROOT, 'development.db'),
},
}
# project_2/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(
os.path.dirname(os.path.dirname(PROJECT_ROOT)),
'project_1',
'development.db'
),
},
}
</code></pre>
<p>This would give the structure you ask for. Note however that the projects are not both "standalone". <code>project_2</code> is clearly dependent on <code>project_1</code>'s database.</p>
<p>In any case, perhaps, you should also take a look at the <a href="https://docs.python.org/3/library/os.path.html" rel="nofollow">os.path</a> module for more info.</p>
| 2 | 2016-09-29T23:12:15Z | [
"python",
"django"
]
|
How to change priority in math order(asterisk) | 39,774,694 | <p>I want users to input a math formula in my system. How can I convert the case1 formula to the case2 formula using Python? In other words, I would like to change the math order specifically for double asterisks.</p>
<pre><code>#case1
3*2**3**2*5
>>>7680
#case2
3*(2**3)**2*5
>>>960
</code></pre>
| 0 | 2016-09-29T15:44:09Z | 39,775,022 | <p>Below is the example to achieve this using <code>re</code> as:</p>
<pre><code>import re

expression = '3*2**3**2*5'
asterisk_exprs = re.findall(r"\d+\*\*\d+", expression) # List of all expressions matching the criterion
for expr in asterisk_exprs:
expression = expression.replace(expr, "({})".format(expr)) # Replace the expression with required expression
# Value of variable 'expression': '3*(2**3)**2*5'
</code></pre>
<p>In order to evaluate the mathematical value of <code>str</code> expression, use <code>eval</code> as:</p>
<pre><code>>>> eval(expression)
960
</code></pre>
| 0 | 2016-09-29T15:59:59Z | [
"python",
"python-2.7",
"numpy",
"math"
]
|
How to change priority in math order(asterisk) | 39,774,694 | <p>I want users to input a math formula in my system. How can I convert the case1 formula to the case2 formula using Python? In other words, I would like to change the math order specifically for double asterisks.</p>
<pre><code>#case1
3*2**3**2*5
>>>7680
#case2
3*(2**3)**2*5
>>>960
</code></pre>
| 0 | 2016-09-29T15:44:09Z | 39,775,034 | <p>Not only is this not something that Python supports, but really, why would you want to? Modifying BIDMAS or PEMDAS (depending on your location), would not only give you incorrect answers, but also confuse the hell out of any devs looking at the code.</p>
<p>Just use brackets like in Case 2, it's what they're for.</p>
| 1 | 2016-09-29T16:00:42Z | [
"python",
"python-2.7",
"numpy",
"math"
]
|
How to change priority in math order(asterisk) | 39,774,694 | <p>I want users to input a math formula in my system. How can I convert the case1 formula to the case2 formula using Python? In other words, I would like to change the math order specifically for double asterisks.</p>
<pre><code>#case1
3*2**3**2*5
>>>7680
#case2
3*(2**3)**2*5
>>>960
</code></pre>
| 0 | 2016-09-29T15:44:09Z | 39,775,261 | <p>If users are supposed to enter formulas into your program, I would suggest keeping it as is. The reason is that exponentiation in mathematics is <a href="https://en.wikipedia.org/wiki/Operator_associativity" rel="nofollow">right-associative</a>, meaning the execution goes from the top level down. For example: <code>a**b**c = a**(b**c)</code>, by convention.</p>
<p>There are some programs that use bottom-up resolution of the stacked exponentiation -- MS Excel and LibreOffice are some of them, however, it is against the regular convention, and always confused the hell out of me.</p>
<p>If you would like to override this behavior, and still be mathematically correct, you have to use brackets.</p>
<p>You can always declare your own power method that would resolve the way you want it -- something like <code>numpy.power()</code>. You could overload the built-in, but that's too much hassle.</p>
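<p>For example, a tiny left-to-right power helper might look like this (a sketch, not from the original answer):</p>
<pre><code>from functools import reduce

def chain_pow(*xs):
    # evaluates bottom-up: chain_pow(2, 3, 2) == (2**3)**2
    return reduce(lambda a, b: a ** b, xs)

print(3 * chain_pow(2, 3, 2) * 5)   # 960
</code></pre>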
<p><a href="https://en.wikipedia.org/wiki/Order_of_operations#Special_cases" rel="nofollow">Read this</a></p>
| 1 | 2016-09-29T16:13:35Z | [
"python",
"python-2.7",
"numpy",
"math"
]
|
write csv in python based on sub-string condition | 39,774,726 | <p>I have the following <code>result</code> variable in a data frame (<code>df</code>) and I am trying to output a csv file for those that start with "test"</p>
<pre><code>abandoned
static
test_a_1
test_b_2
abandoned
test_b_3
</code></pre>
<p>The following code is not working. Thanks in advance for your insights</p>
<pre><code>substr="test"
if substr in df['result']:
df.to_csv("C:/Projects/result.csv",sep=',',index=False)
</code></pre>
| 0 | 2016-09-29T15:45:38Z | 39,774,847 | <p>Just because "test_a_1" is in the list does not mean that "test" is,
in Python logic.</p>
<p>Example of how Python evaluates "if [string] in [list]" statements:</p>
<pre><code>>>> test = 'test1'
>>> testlist = ['test1', 'test2']
>>> if 'test' in test:
... print('hi')
...
hi
>>> if 'test' in testlist:
... print('hi')
...
>>>
</code></pre>
<p>This would work:</p>
<pre><code>substr="test"
for val in df['result']:
if substr in val:
# Do stuff
# And optionally (if you only need one CSV per dataframe rather than one CSV per result):
break
</code></pre>
| 2 | 2016-09-29T15:51:49Z | [
"python",
"string",
"csv",
"substring"
]
|
write csv in python based on sub-string condition | 39,774,726 | <p>I have the following <code>result</code> variable in a data frame (<code>df</code>) and I am trying to output a csv file for those that starts with "test"</p>
<pre><code>abandoned
static
test_a_1
test_b_2
abandoned
test_b_3
</code></pre>
<p>The following code is not working. Thanks in advance for your insights</p>
<pre><code>substr="test"
if substr in df['result']:
df.to_csv("C:/Projects/result.csv",sep=',',index=False)
</code></pre>
| 0 | 2016-09-29T15:45:38Z | 39,774,901 | <p>If what you mean is that you want to make a csv that only contains the rows for which result starta with 'test', use the following:</p>
<pre><code>df[df.result.str.contains('^test.*')].to_csv("C:/Projects/result.csv",sep=',',index=False)
</code></pre>
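<p>Since the requirement is specifically "starts with test", <code>str.startswith</code> expresses the same filter without a regex:</p>
<pre><code>df[df['result'].str.startswith('test')].to_csv("C:/Projects/result.csv", sep=',', index=False)
</code></pre>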
| 1 | 2016-09-29T15:54:13Z | [
"python",
"string",
"csv",
"substring"
]
|
write csv in python based on sub-string condition | 39,774,726 | <p>I have the following <code>result</code> variable in a data frame (<code>df</code>) and I am trying to output a csv file for those that starts with "test"</p>
<pre><code>abandoned
static
test_a_1
test_b_2
abandoned
test_b_3
</code></pre>
<p>The following code is not working. Thanks in advance for your insights</p>
<pre><code>substr="test"
if substr in df['result']:
df.to_csv("C:/Projects/result.csv",sep=',',index=False)
</code></pre>
| 0 | 2016-09-29T15:45:38Z | 39,774,954 | <p>This would work :</p>
<pre><code>df['temp'] = [1 if 'test' in df['result'][k] else 0 for k in df.index]
df['result'][df['temp']==1].to_csv("/your/path", sep=',', index=False)
</code></pre>
| 0 | 2016-09-29T15:56:29Z | [
"python",
"string",
"csv",
"substring"
]
|
Print Highest, Average, Lowest score of a file | 39,774,785 | <p>I'm trying to print the highest, average, and lowest score of a file in python. But I keep getting error </p>
<pre><code>ValueError: invalid literal for int() with base 10.
</code></pre>
<p>My <em>results.txt</em> file looks like this:</p>
<pre><code>Johnny-8.65
Juan-9.12
Joseph-8.45
Stacey-7.81
Aideen-8.05
Zack-7.21
Aaron-8.31
</code></pre>
<p>And my code looks like this</p>
<pre><code>func1={}
with open('results.txt','r') as f:
for line in f:
name,value=line.split('-')
value=float(value)
if name in func1.keys():
func1[name].append(value)
else:
func1[name]=[value]
#compute average:
for name in func1:
average=sum(func1[name])/len(func1[name])
print("{} : {}".format(name,average))
</code></pre>
| -1 | 2016-09-29T15:48:28Z | 39,774,855 | <p>The values you provided in the file are not of <code>int</code> type.
For your approach you can use <code>float()</code>:</p>
<pre><code>value = float(value)
</code></pre>
<p>If you need to extract integer:</p>
<pre><code>s = "123"
num = int(s)
</code></pre>
<blockquote>
<p>num = 123</p>
</blockquote>
<p>If you need to extract float number:</p>
<pre><code>s = "123.12"
num = float(s)
</code></pre>
<blockquote>
<p>num = 123.12</p>
</blockquote>
<p>You can refer to documentation: <a href="https://docs.python.org/2/library/datatypes.html" rel="nofollow">Datatypes #1</a><a class='doc-link' href="http://stackoverflow.com/documentation/python/193/introduction-to-python#t=201609291554315469789">Datatypes #2</a></p>
<p>There is also a nice post about converting number datatypes in Python: <a href="http://stackoverflow.com/questions/379906/parse-string-to-float-or-int">Parse String to Float or Int</a></p>
<p>To get the maximum key / value from a dictionary, refer to: <a href="http://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary">get max value</a> and <a href="http://stackoverflow.com/questions/3108042/get-max-key-in-dictionary">get max key</a></p>
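<p>Putting those pieces together for the <code>func1</code> dict built in your code, the highest / lowest / average part could look roughly like this:</p>
<pre><code>averages = {name: sum(vals) / len(vals) for name, vals in func1.items()}

best = max(averages, key=averages.get)     # name with the highest average
worst = min(averages, key=averages.get)    # name with the lowest average
overall = sum(averages.values()) / len(averages)  # mean of the per-name averages

print("Highest: {} ({})".format(best, averages[best]))
print("Lowest: {} ({})".format(worst, averages[worst]))
print("Average: {}".format(overall))
</code></pre>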
<p>Also, read this article. It will help you to ask clearer questions.
<a href="http://meta.stackoverflow.com/questions/334822/how-do-i-ask-and-answer-homework-questions">How do I ask and answer homework questions?</a></p>
| 0 | 2016-09-29T15:52:05Z | [
"python",
"max",
"average",
"data-type-conversion",
"min"
]
|
Print Highest, Average, Lowest score of a file | 39,774,785 | <p>I'm trying to print the highest, average, and lowest score of a file in python. But I keep getting error </p>
<pre><code>ValueError: invalid literal for int() with base 10.
</code></pre>
<p>My <em>results.txt</em> file looks like this:</p>
<pre><code>Johnny-8.65
Juan-9.12
Joseph-8.45
Stacey-7.81
Aideen-8.05
Zack-7.21
Aaron-8.31
</code></pre>
<p>And my code looks like this</p>
<pre><code>func1={}
with open('results.txt','r') as f:
for line in f:
name,value=line.split('-')
value=float(value)
if name in func1.keys():
func1[name].append(value)
else:
func1[name]=[value]
#compute average:
for name in func1:
average=sum(func1[name])/len(func1[name])
print("{} : {}".format(name,average))
</code></pre>
| -1 | 2016-09-29T15:48:28Z | 39,775,054 | <p>As everyone on this thread suggested, you should change <strong>int</strong> to <strong>float</strong>. Regarding your comment above, you wrote</p>
<pre><code>value=int(float)
</code></pre>
<p>Here <code>float</code> should be the function being called, with the value inside the parentheses, like this:</p>
<pre><code>value=float(value)
</code></pre>
<p>I ran the code and it worked for me.</p>
| 0 | 2016-09-29T16:01:49Z | [
"python",
"max",
"average",
"data-type-conversion",
"min"
]
|
Print Highest, Average, Lowest score of a file | 39,774,785 | <p>I'm trying to print the highest, average, and lowest score of a file in python. But I keep getting error </p>
<pre><code>ValueError: invalid literal for int() with base 10.
</code></pre>
<p>My <em>results.txt</em> file looks like this:</p>
<pre><code>Johnny-8.65
Juan-9.12
Joseph-8.45
Stacey-7.81
Aideen-8.05
Zack-7.21
Aaron-8.31
</code></pre>
<p>And my code looks like this</p>
<pre><code>func1={}
with open('results.txt','r') as f:
for line in f:
name,value=line.split('-')
value=float(value)
if name in func1.keys():
func1[name].append(value)
else:
func1[name]=[value]
#compute average:
for name in func1:
average=sum(func1[name])/len(func1[name])
print("{} : {}".format(name,average))
</code></pre>
| -1 | 2016-09-29T15:48:28Z | 39,775,360 | <p>Try this:</p>
<pre><code>scores={}
highest_score=0.0
highest=''
lowest_score=100.0
lowest=''
average=0.0
sums=0.0
files=open("results.txt","r").readlines()
for lines in files[0:]:
line=lines.split("-")
scores[line[0]]=line[1].strip()
for key,value in scores.items():
if float(value)<lowest_score:
lowest_score=float(value)
lowest=key
if float(value)>highest_score:
highest_score=float(value)
highest=key
sums=sums+float(value)
print "highest score:",highest_score," of ",highest
print "lowest score:",lowest_score," of ",lowest
print "average: ",sums/len(scores)
</code></pre>
| 0 | 2016-09-29T16:18:40Z | [
"python",
"max",
"average",
"data-type-conversion",
"min"
]
|
Pandas: how to draw a bar plot with two categories and four series each? | 39,774,826 | <p>I have the following dataframe, where <code>pd.concat</code> has been used to group the columns:</p>
<pre><code> a b
C1 C2 C3 C4 C5 C6 C7 C8
0 15 37 17 10 8 11 19 86
1 39 84 11 5 5 13 9 11
2 10 20 30 51 74 62 56 58
3 88 2 1 3 9 6 0 17
4 17 17 32 24 91 45 63 48
</code></pre>
<p>Now I want to draw a bar plot where I only have two categories (<code>a</code> and <code>b</code>), and each category has four bars representing the average of each column. Columns C1 and C5 should have the same color, as should columns C2 and C6, and so forth. </p>
<p><strong>How can I do it with <em>df.plot.bar()?</em></strong></p>
<p>The plot should resemble the following image. Sorry for it being hand-drawn but it was very hard for me to find a relevant example:
<a href="http://i.stack.imgur.com/8I1Z6.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/8I1Z6.jpg" alt="enter image description here"></a></p>
<p><strong>EDIT</strong></p>
<p>This is the header of my actual DataFrame:</p>
<pre><code> C1 C2 C3 C4 C5 C6 C7 C8
0 34 34 34 34 6 40 13 26
1 19 19 19 19 5 27 12 15
2 100 100 100 100 0 0 0 0
3 0 0 0 0 0 0 0 0
4 100 100 100 100 0 0 0 0
</code></pre>
| 3 | 2016-09-29T15:50:38Z | 39,775,311 | <p>Try <code>seaborn</code></p>
<pre><code>import seaborn as sns
import pandas as pd
def r(df):
return df.loc[df.name].reset_index(drop=True)
data = df.mean().groupby(level=0).apply(r) \
.rename_axis(['grp', 'cat']).reset_index(name='mu')
ax = sns.barplot(x='grp', y='mu', hue='cat', data=data)
ax.legend_.remove()
for i, p in enumerate(ax.patches):
height = p.get_height()
ax.text(p.get_x() + .05, height + 1, df.columns.levels[1][i])
</code></pre>
<p><a href="http://i.stack.imgur.com/QPVeu.png" rel="nofollow"><img src="http://i.stack.imgur.com/QPVeu.png" alt="enter image description here"></a></p>
| 2 | 2016-09-29T16:16:04Z | [
"python",
"pandas",
"matplotlib",
"bar-chart",
"categories"
]
|
Pandas: how to draw a bar plot with two categories and four series each? | 39,774,826 | <p>I have the following dataframe, where <code>pd.concat</code> has been used to group the columns:</p>
<pre><code> a b
C1 C2 C3 C4 C5 C6 C7 C8
0 15 37 17 10 8 11 19 86
1 39 84 11 5 5 13 9 11
2 10 20 30 51 74 62 56 58
3 88 2 1 3 9 6 0 17
4 17 17 32 24 91 45 63 48
</code></pre>
<p>Now I want to draw a bar plot where I only have two categories (<code>a</code> and <code>b</code>), and each category has four bars representing the average of each column. Columns C1 and C5 should have the same color, as should columns C2 and C6, and so forth. </p>
<p><strong>How can I do it with <em>df.plot.bar()?</em></strong></p>
<p>The plot should resemble the following image. Sorry for it being hand-drawn but it was very hard for me to find a relevant example:
<a href="http://i.stack.imgur.com/8I1Z6.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/8I1Z6.jpg" alt="enter image description here"></a></p>
<p><strong>EDIT</strong></p>
<p>This is the header of my actual DataFrame:</p>
<pre><code> C1 C2 C3 C4 C5 C6 C7 C8
0 34 34 34 34 6 40 13 26
1 19 19 19 19 5 27 12 15
2 100 100 100 100 0 0 0 0
3 0 0 0 0 0 0 0 0
4 100 100 100 100 0 0 0 0
</code></pre>
| 3 | 2016-09-29T15:50:38Z | 39,776,042 | <p>You could simply perform <code>unstack</code> after calculating the <code>mean</code> of the <code>DF</code> to render the bar plot.</p>
<pre><code>import seaborn as sns
sns.set_style('white')
#color=0.75(grey)
df.mean().unstack().plot.bar(color=list('rbg')+['0.75'], rot=0, figsize=(8,8))
</code></pre>
<p><a href="http://i.stack.imgur.com/crBu3.png" rel="nofollow"><img src="http://i.stack.imgur.com/crBu3.png" alt="Image"></a></p>
<hr>
<p><strong>Data:</strong> (As per the edited post)</p>
<pre><code>df
</code></pre>
<p><a href="http://i.stack.imgur.com/DTy03.png" rel="nofollow"><img src="http://i.stack.imgur.com/DTy03.png" alt="Image"></a></p>
<p>Prepare the multi-index <code>DF</code> by creating an extra column that repeats the group labels, one per selected column (here, 4 columns per group).</p>
<pre><code>df_multi_col = df.T.reset_index()
df_multi_col['labels'] = np.concatenate((np.repeat('A', 4), np.repeat('B', 4)))
df_multi_col.set_index(['labels', 'index'], inplace=True)
df_multi_col
</code></pre>
<p><a href="http://i.stack.imgur.com/fMbiK.png" rel="nofollow"><img src="http://i.stack.imgur.com/fMbiK.png" alt="Image"></a></p>
<pre><code>df_multi_col.mean(1).unstack().plot.bar(color=list('rbg')+['0.75'], rot=0, figsize=(6,6), width=2)
</code></pre>
<p><a href="http://i.stack.imgur.com/uGYVO.png" rel="nofollow"><img src="http://i.stack.imgur.com/uGYVO.png" alt="Image"></a></p>
| 2 | 2016-09-29T16:58:18Z | [
"python",
"pandas",
"matplotlib",
"bar-chart",
"categories"
]
|
Pythonic way to split string then split again on result? | 39,774,926 | <p>Is there a more pythonic way to do this</p>
<pre><code>def parse_address(hostname, addresses):
netmask=''
for address in addresses:
if hostname in address:
_hostname, _netmask = address.strip().split('/')
hostname = _hostname.split()[-1]
netmask = '/' + _netmask.split()[0]
break
return netmask
</code></pre>
<h3>Test case</h3>
<p>If you do TDD</p>
<pre><code>def test_parse_netmask(self):
hostname = '127.0.0.1'
stdout = [
"1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever",
"3: wlp4s0 inet 192.168.2.133/24 brd 192.168.2.255 scope global dynamic wlp4s0\ valid_lft 58984sec preferred_lft 58984sec",
"4: docker0 inet 172.17.0.1/16 scope global docker0\ valid_lft forever preferred_lft forever",
"5: br-a49026d1a341 inet 172.18.0.1/16 scope global br-a49026d1a341\ valid_lft forever preferred_lft forever",
"6: br-d26f2005f732 inet 172.19.0.1/16 scope global br-d26f2005f732\ valid_lft forever preferred_lft forever",
]
netmask = scanner.parse_address(hostname, stdout)
self.assertEqual(netmask, '/8')
</code></pre>
| 0 | 2016-09-29T15:55:10Z | 39,775,340 | <pre><code>import re
hostname = '127.0.0.1'
stdout = [
"1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever",
"3: wlp4s0 inet 192.168.2.133/24 brd 192.168.2.255 scope global dynamic wlp4s0\ valid_lft 58984sec preferred_lft 58984sec",
"4: docker0 inet 172.17.0.1/16 scope global docker0\ valid_lft forever preferred_lft forever",
"5: br-a49026d1a341 inet 172.18.0.1/16 scope global br-a49026d1a341\ valid_lft forever preferred_lft forever",
"6: br-d26f2005f732 inet 172.19.0.1/16 scope global br-d26f2005f732\ valid_lft forever preferred_lft forever",
]
print [item.split('/')[-1] for item in re.findall(r'(?:\d+\.){3}\d+\/\d+',''.join(stdout)) if hostname in item]
['8']
</code></pre>
| 0 | 2016-09-29T16:17:27Z | [
"python",
"split",
"idiomatic"
]
|
Pythonic way to split string then split again on result? | 39,774,926 | <p>Is there a more pythonic way to do this</p>
<pre><code>def parse_address(hostname, addresses):
netmask=''
for address in addresses:
if hostname in address:
_hostname, _netmask = address.strip().split('/')
hostname = _hostname.split()[-1]
netmask = '/' + _netmask.split()[0]
break
return netmask
</code></pre>
<h3>Test case</h3>
<p>If you do TDD</p>
<pre><code>def test_parse_netmask(self):
hostname = '127.0.0.1'
stdout = [
"1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever",
"3: wlp4s0 inet 192.168.2.133/24 brd 192.168.2.255 scope global dynamic wlp4s0\ valid_lft 58984sec preferred_lft 58984sec",
"4: docker0 inet 172.17.0.1/16 scope global docker0\ valid_lft forever preferred_lft forever",
"5: br-a49026d1a341 inet 172.18.0.1/16 scope global br-a49026d1a341\ valid_lft forever preferred_lft forever",
"6: br-d26f2005f732 inet 172.19.0.1/16 scope global br-d26f2005f732\ valid_lft forever preferred_lft forever",
]
netmask = scanner.parse_address(hostname, stdout)
self.assertEqual(netmask, '/8')
</code></pre>
| 0 | 2016-09-29T15:55:10Z | 39,775,365 | <pre><code>def x(hostname,addresses):
import re
for address in addresses:
        # re.escape keeps the dots literal; \d+ handles multi-digit masks like /24
        result = re.search(re.escape(hostname) + r"/\d+", address)
if result:
return result.group(0).split(hostname)[1]
</code></pre>
<p>Don't know if it's more 'Pythonic' but I would do it this way. It's readable to others, yet it's short enough to not drag on the function.</p>
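<p>For reference, using the <code>stdout</code> list from the test case it behaves like this:</p>
<pre><code>>>> x('127.0.0.1', stdout)
'/8'
>>> x('192.168.2.133', stdout)
'/24'
</code></pre>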
| 1 | 2016-09-29T16:18:51Z | [
"python",
"split",
"idiomatic"
]
|
Pythonic way to split string then split again on result? | 39,774,926 | <p>Is there a more pythonic way to do this</p>
<pre><code>def parse_address(hostname, addresses):
netmask=''
for address in addresses:
if hostname in address:
_hostname, _netmask = address.strip().split('/')
hostname = _hostname.split()[-1]
netmask = '/' + _netmask.split()[0]
break
return netmask
</code></pre>
<h3>Test case</h3>
<p>If you do TDD</p>
<pre><code>def test_parse_netmask(self):
hostname = '127.0.0.1'
stdout = [
"1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever",
"3: wlp4s0 inet 192.168.2.133/24 brd 192.168.2.255 scope global dynamic wlp4s0\ valid_lft 58984sec preferred_lft 58984sec",
"4: docker0 inet 172.17.0.1/16 scope global docker0\ valid_lft forever preferred_lft forever",
"5: br-a49026d1a341 inet 172.18.0.1/16 scope global br-a49026d1a341\ valid_lft forever preferred_lft forever",
"6: br-d26f2005f732 inet 172.19.0.1/16 scope global br-d26f2005f732\ valid_lft forever preferred_lft forever",
]
netmask = scanner.parse_address(hostname, stdout)
self.assertEqual(netmask, '/8')
</code></pre>
| 0 | 2016-09-29T15:55:10Z | 39,777,252 | <p>Would you be able to base it on code like this?</p>
<pre><code>from urllib.parse import urlparse
parseResult = urlparse('http://www.fake.ca/185')
print ( parseResult )
</code></pre>
<p>parseResult is a structure with elements that are revealed in the output of the print statement.</p>
<p>ParseResult(scheme='http', netloc='www.fake.ca', path='/185', params='', query='', fragment='')</p>
| -1 | 2016-09-29T18:12:57Z | [
"python",
"split",
"idiomatic"
]
|
In class object, how to auto update attributes when you change only one entry of a list? | 39,774,953 | <p>I have seen this answer for a very similar question:</p>
<p><a href="http://stackoverflow.com/a/14916491/5726899"><em>In class object, how to auto update attributes?</em></a></p>
<p>I will paste the code here:</p>
<pre><code>class SomeClass(object):
def __init__(self, n):
self.list = range(0, n)
@property
def list(self):
return self._list
@list.setter
def list(self, val):
self._list = val
self._listsquare = [x**2 for x in self._list ]
@property
def listsquare(self):
return self._listsquare
@listsquare.setter
def listsquare(self, val):
self.list = [int(pow(x, 0.5)) for x in val]
>>> c = SomeClass(5)
>>> c.listsquare
[0, 1, 4, 9, 16]
>>> c.list
[0, 1, 2, 3, 4]
>>> c.list = range(0,6)
>>> c.list
[0, 1, 2, 3, 4, 5]
>>> c.listsquare
[0, 1, 4, 9, 16, 25]
>>> c.listsquare = [x**2 for x in range(0,10)]
>>> c.list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>In this code, when I update the list using:</p>
<pre><code>>>> c.list = [1, 2, 3, 4]
</code></pre>
<p>c.listsquare will be updated accordingly:</p>
<pre><code>>>> c.listsquare
[1, 4, 9, 16]
</code></pre>
<p>But when I try:</p>
<pre><code>>>> c.list[0] = 5
>>> c.list
[5, 2, 3, 4]
</code></pre>
<p>Listsquares is not updated:</p>
<pre><code>>>> c.listsquare
[1, 4, 9, 16]
</code></pre>
<p>How can I make listsquare auto update when I change only one item inside the list?</p>
| 0 | 2016-09-29T15:56:25Z | 39,776,015 | <p>One way you could do it is by having a private helper <code>_List</code> class which is almost exactly like the built-in <code>list</code> class, but also has a <code>owner</code> attribute. Every time one of a <code>_List</code> instance's elements is assigned a value, it can then modify the <code>listsquare</code> property of its <code>owner</code> to keep it up-to-date. Since it's only used by <code>SomeClass</code> it can be nested inside of it to provide even more encapsulation.</p>
<pre><code>class SomeClass(object):
class _List(list):
def __init__(self, owner, *args, **kwargs):
super(SomeClass._List, self).__init__(*args, **kwargs)
self.owner = owner
def __setitem__(self, index, value):
super(SomeClass._List, self).__setitem__(index, value)
self.owner.listsquare[index] = int(value**2)
def __init__(self, n):
self.list = SomeClass._List(self, range(0, n))
@property
def list(self):
return self._list
@list.setter
def list(self, val):
self._list = val
self._listsquare = [x**2 for x in self._list ]
@property
def listsquare(self):
return self._listsquare
@listsquare.setter
def listsquare(self, val):
self.list = [int(pow(x, 0.5)) for x in val]
c = SomeClass(5)
print(c.list) # --> [0, 1, 2, 3, 4]
print(c.listsquare) # --> [0, 1, 4, 9, 16]
c.list[0] = 5
print(c.list) # --> [5, 1, 2, 3, 4]
print(c.listsquare) # --> [25, 1, 4, 9, 16]
</code></pre>
| 2 | 2016-09-29T16:56:48Z | [
"python",
"class",
"attributes"
]
|
In class object, how to auto update attributes when you change only one entry of a list? | 39,774,953 | <p>I have seen this answer for a very similar question:</p>
<p><a href="http://stackoverflow.com/a/14916491/5726899"><em>In class object, how to auto update attributes?</em></a></p>
<p>I will paste the code here:</p>
<pre><code>class SomeClass(object):
def __init__(self, n):
self.list = range(0, n)
@property
def list(self):
return self._list
@list.setter
def list(self, val):
self._list = val
self._listsquare = [x**2 for x in self._list ]
@property
def listsquare(self):
return self._listsquare
@listsquare.setter
def listsquare(self, val):
self.list = [int(pow(x, 0.5)) for x in val]
>>> c = SomeClass(5)
>>> c.listsquare
[0, 1, 4, 9, 16]
>>> c.list
[0, 1, 2, 3, 4]
>>> c.list = range(0,6)
>>> c.list
[0, 1, 2, 3, 4, 5]
>>> c.listsquare
[0, 1, 4, 9, 16, 25]
>>> c.listsquare = [x**2 for x in range(0,10)]
>>> c.list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>In this code, when I update the list using:</p>
<pre><code>>>> c.list = [1, 2, 3, 4]
</code></pre>
<p>c.listsquare will be updated accordingly:</p>
<pre><code>>>> c.listsquare
[1, 4, 9, 16]
</code></pre>
<p>But when I try:</p>
<pre><code>>>> c.list[0] = 5
>>> c.list
[5, 2, 3, 4]
</code></pre>
<p>Listsquares is not updated:</p>
<pre><code>>>> c.listsquare
[1, 4, 9, 16]
</code></pre>
<p>How can I make listsquare auto update when I change only one item inside the list?</p>
| 0 | 2016-09-29T15:56:25Z | 39,776,083 | <p>First of all, I would recommend you not to change two different attributes through a single <code>list</code> setter. The reason why</p>
<pre><code>>>> c.list[0] = 5
>>> c.list
[5, 2, 3, 4]
>>> c.listsquare
[1, 4, 9, 16]
</code></pre>
<p>does not work is that you are accessing the <code>c.list.__setitem__</code> method.</p>
<pre><code>c.list[0] = 5
# is equal to
c.list.__setitem__(0, 5)
# or
list.__setitem__(c.list, 0, 5)
</code></pre>
<p>As such, the <code>list.__setitem__</code> method isn't the one you've implemented in your class.</p>
<p>However, if you really want this behaviour, you should reconsider storing the square list at all and instead compute it from <code>self._list</code> on every access:</p>
<pre><code>class SomeClass(object):
def __init__(self, n):
self.list = range(0, n)
@property
def list(self):
return self._list
@list.setter
def list(self, val):
self._list = val
@property
def listsquare(self):
return [n ** 2 for n in self.list]
@listsquare.setter
def listsquare(self, val):
self.list = [int(pow(x, 0.5)) for x in val]
</code></pre>
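<p>With this version a single-item change is picked up automatically, because <code>listsquare</code> is recomputed from <code>self.list</code> on every access:</p>
<pre><code>>>> c = SomeClass(5)
>>> c.list[0] = 5
>>> c.list
[5, 1, 2, 3, 4]
>>> c.listsquare
[25, 1, 4, 9, 16]
</code></pre>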
| 1 | 2016-09-29T17:01:38Z | [
"python",
"class",
"attributes"
]
|
FFT Speed on Noncubic meshes | 39,774,971 | <p>I need to repeatedly take the Fourier Transform/Inverse Fourier Transform of a 3d function in order to solve a differential equation. Something like:</p>
<pre><code>import pyfftw.interfaces.numpy_fft as fftw
for i in range(largeNumber):
fFS = fftw.rfftn(f)
# Do stuff
f = fftw.irfftn(fFS)
</code></pre>
<p>The shape of f is highly noncubic. Is there any performance difference based on the order of dimensions, for example (512, 32, 128) vs (512, 128, 32), etc.? </p>
<p>I am looking for any speed ups available. I have already tried playing around with wisdom. I thought it might be fastest if the largest dimension went last (e.g. 32, 128, 512) so that fFS.shape = (32, 128, 257), but this doesn't appear to be the case.</p>
| 1 | 2016-09-29T15:57:10Z | 39,789,674 | <p>If you really want to squeeze all the performance out you can, use the FFTW object directly (most easily accessed through <code>pyfftw.builders</code>). This way you get careful control over exactly what copies occur and whether the normalization is performed on inverse.</p>
<p>Your code as-is will likely benefit from using the cache (enabled by calling <code>pyfftw.interfaces.cache.enable()</code>), which minimises the set up time for the general and safe case, though doesn't eliminate it.</p>
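<p>As a rough sketch of the cache-plus-threads route through the interfaces (the builders route is similar but hands you the FFTW objects to reuse directly); the shape, loop length and thread count below are just placeholders:</p>
<pre><code>import numpy as np
import pyfftw
import pyfftw.interfaces.numpy_fft as fftw

pyfftw.interfaces.cache.enable()   # keep the planned transforms alive between calls

# Byte-aligned input array; shape here is only an example
f = pyfftw.empty_aligned((512, 32, 128), dtype='float64')
f[:] = np.random.rand(*f.shape)

for i in range(100):
    # threads and planner_effort are extra kwargs the pyfftw interfaces accept
    fFS = fftw.rfftn(f, threads=4, planner_effort='FFTW_MEASURE')
    # ... do stuff with fFS ...
    f = fftw.irfftn(fFS, threads=4, planner_effort='FFTW_MEASURE')
</code></pre>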
<p>Regarding the best arrangement of dimensions, you'll have to suck it and see. Try all the various options and see what is fastest (with <code>timeit</code>). Make sure when you do the tests you're actually using the data arranged in memory as expected and not just taking a view of the same array in memory (which <code>pyfftw</code> may well handle fine without a copy - though there are tweak parameters for this sort of thing).</p>
<p><code>FFTW</code> tries lots of different options (different algorithms over different FFT representations) and picks the fastest, so you end up with non-obvious implementations that may well change for different datasets that are superficially very similar.</p>
<p>General tips:</p>
<ul>
<li>Turn on the multi-threading for maximum performance (set <code>threads=N</code> where appropriate).</li>
<li>Make sure your arrays are suitably byte aligned - this has less impact than it used to with modern hardware, but will probably make a difference (particularly if all your higher dimension sizes have the byte alignment as a factor).</li>
<li>Read the <a href="http://hgomersall.github.io/pyFFTW/sphinx/tutorial.html" rel="nofollow">tutorial</a> and the <a href="http://hgomersall.github.io/pyFFTW/sphinx/api.html" rel="nofollow">api docs</a>.</li>
</ul>
| 0 | 2016-09-30T10:45:09Z | [
"python",
"performance",
"fft",
"fftw",
"pyfftw"
]
|
AWS Lambda: failed execution with no errors | 39,774,996 | <p>I've written a Python script (<code>upload.py</code>) that is working independently of AWS Lambda. It uploads data as POST to an API. It has the following method to handle a Lambda execution:</p>
<pre><code>def handler(event, context):
print("Hello!")
start()
</code></pre>
<p>When I call <code>start()</code> on my local machine, the script runs successfully. </p>
<p>When I upload the code to Lambda and run a test or initiate a trigger, nothing happens. The following is printed out:</p>
<pre><code>START RequestId: 6abc0995-865c-11e6-b015-57198f9121b5 Version: $LATEST
END RequestId: 6abc0995-865c-11e6-b015-57198f9121b5
REPORT RequestId: 6abc0995-865c-11e6-b015-57198f9121b5 Duration: 2056.85 ms Billed Duration: 2100 ms
</code></pre>
<p>However, when I introduce an error to the code (for example adding a string and integer), the error is printed out.</p>
<p>Everything in settings is correctly defined (e.g. upload.handler) and no VPC is assigned to eliminate network issues. The execution role has Administrator privileges to eliminate that as a possibility as well.</p>
| 0 | 2016-09-29T15:58:45Z | 39,775,092 | <p>So it turns out that the "sys" library isn't permitted in a Lambda, which makes sense in hindsight. To deal with potential encoding issues I had the following code:</p>
<pre><code>import sys
reload(sys)
sys.setdefaultencoding('utf-8')
</code></pre>
<p>As suggested in another Stack Overflow thread. This apparently was blocking execution. Removal allowed the script to execute properly. </p>
| 1 | 2016-09-29T16:04:03Z | [
"python",
"amazon-web-services",
"aws-lambda",
"amazon-lambda"
]
|
Installing opencv-3.1.0 on MacOS Sierra to use with python | 39,775,041 | <p>I recently upgraded to MacOS Sierra and I have been dealing with many issues (I am mentioning it cause it may be relevant). I am trying to install opencv-3.1.0 to use with python 2.7. and it's been impossible. I downloaded opencv-3.1.0 from <a href="https://github.com/Itseez/opencv/archive/3.1.0.zip" rel="nofollow">here</a> unzipped it and ran:</p>
<pre><code>python platforms/osx/build_framework.py osx
</code></pre>
<p>from the opencv-3.1.0 directory. Don't want to print all of the output, so here is just the Error message. </p>
<pre><code>** BUILD FAILED **
The following build commands failed:
CompileC osx/build/x86_64-MacOSX/modules/world/OpenCV.build/Release/opencv_world.build/Objects-normal/x86_64/cap_qtkit.o modules/videoio/src/cap_qtkit.mm normal x86_64 objective-c++ com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
============================================================
ERROR: Command '['xcodebuild', 'ARCHS=x86_64', '-sdk', 'macosx', '-configuration', 'Release', '-parallelizeTargets', '-jobs', '4', '-target', 'ALL_BUILD', 'build']' returned non-zero exit status 65
============================================================
Traceback (most recent call last):
File "/Users/christoshadjinikolis/Downloads/opencv-3.1.0/platforms/ios/build_framework.py", line 87, in build
self._build(outdir)
File "/Users/christoshadjinikolis/Downloads/opencv-3.1.0/platforms/ios/build_framework.py", line 81, in _build
self.buildOne(t[0], t[1], mainBD, cmake_flags)
File "/Users/christoshadjinikolis/Downloads/opencv-3.1.0/platforms/ios/build_framework.py", line 139, in buildOne
execute(buildcmd + ["-target", "ALL_BUILD", "build"], cwd = builddir)
File "/Users/christoshadjinikolis/Downloads/opencv-3.1.0/platforms/ios/build_framework.py", line 34, in execute
retcode = check_call(cmd, cwd = cwd)
File "/Users/christoshadjinikolis/anaconda/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '['xcodebuild', 'ARCHS=x86_64', '-sdk', 'macosx', '-configuration', 'Release', '-parallelizeTargets', '-jobs', '4', '-target', 'ALL_BUILD', 'build']' returned non-zero exit status 65
</code></pre>
<p>Would appreciate your help. Thanks.</p>
| 0 | 2016-09-29T16:01:02Z | 39,775,148 | <p>After following the post <a href="https://github.com/Homebrew/homebrew-science/issues/4104#issuecomment-249362870" rel="nofollow">here</a> I was able to install it just fine by running:<br>
<code>brew install opencv3 --HEAD --with-contrib</code></p>
<p>The issue appears to be related to the QuickTime codecs. You need to specify that the library should be built against ffmpeg instead, and it should work:</p>
<pre><code>brew install opencv3 --with-ffmpeg --with-tbb --with-contrib
</code></pre>
| 0 | 2016-09-29T16:07:04Z | [
"python",
"osx",
"opencv"
]
|
Separation of a String into Individual Words (Python) | 39,775,070 | <p>So I have this code here:</p>
<pre><code>#assign a string to variable
x = "example text"
#create set to store separated words
xset = []
#create base word
xword = ""
for letter in x:
if letter == " ":
#add word
xset.append(xword)
#add space
xset.append(letter)
#reset base word
else:
#add letter
xword = xword + letter
#back to line 9
#print set with separated words
print xset
</code></pre>
<p>So, pretty self-explanatory: the for loop looks at each letter in <code>x</code>, and if it isn't a space it adds it to <code>xword</code> (imagine it like <code>xword = "examp" + "l"</code>, which would make <code>xword = "exampl"</code>). If it is a space, it adds <code>xword</code> and a space to the set, and resets <code>xword</code>.</p>
<p>My issue is that the word <code>"text"</code> is not being included in the final set. When this code is run and the set is printed, it gives this: <code>['example', ' ']</code></p>
<p>So why isn't the <code>"text"</code> appearing in the set?</p>
| 0 | 2016-09-29T16:02:17Z | 39,775,154 | <p>You need to have one more append after the loop to add the last word</p>
<pre><code>#assign a string to variable
x = "example text"
#create set to store separated words
xset = []
#create base word
xword = ""
for letter in x:
if letter == " ":
#add word
xset.append(xword)
#add space
xset.append(letter)
#reset base word
else:
#add letter
xword = xword + letter
#back to line 9
xset.append(xword)
#print set with separated words
print xset
</code></pre>
| 0 | 2016-09-29T16:07:28Z | [
"python",
"string",
"python-2.7"
]
|