title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How to auto wrap function call in try / catch? | 39,905,390 | <p>I have a lot of getter functions like this:</p>
<pre><code>def get_property_a(self, default=None):
    try:
        return self.data.get("field_1")[0].get("a")
    except Exception as e:
        return default

def get_property_b(self, default=None):
    try:
        return self.data.get("field_2")[0].get("b")
    except Exception as e:
        return default
...
</code></pre>
<p>Is there a way to avoid wrapping all the getters in try/except? It would be nice if there were some kind of annotation, like this:</p>
<pre><code>@silent_exec(default=None)
def get_property_b():
self.data.get("field_2")[0].get("b")
</code></pre>
<p>Thanks</p>
| 2 | 2016-10-06T20:50:21Z | 39,905,474 | <p>You <em>can</em> do this by writing your own decorator:</p>
<pre><code>import functools
def silent_exec(default=None):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
try:
            return func(*args, **kwargs)
except Exception:
return default
return wrapper
return decorator
</code></pre>
<p>With that said, I'd be <em>very</em> wary of using this. You should <em>very rarely</em> be catching <em>all</em> exceptions (as we've done here). Normally it's better to specify a tuple of exceptions that you actually expect and know how to handle...</p>
<pre><code>import functools
def silent_exec(exceptions, default=None):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
try:
            return func(*args, **kwargs)
except exceptions:
return default
return wrapper
return decorator
@silent_exec((IndexError, KeyError), default=None)
def get_property_b():
self.data.get("field_2")[0].get("b")
</code></pre>
<p>This way, you don't end up catching/silencing <em>programming</em> errors -- those will still get raised, you can look at them in the logs or wherever they get reported and you can go back in and fix them.</p>
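<p>As a quick check of the behavior described above (the function and data names here are illustrative, not from the original post), the decorator can be exercised on a plain function: listed exceptions are swallowed and replaced by the default, while anything else still propagates.</p>

```python
import functools

def silent_exec(exceptions, default=None):
    """Return `default` when the wrapped function raises one of `exceptions`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions:
                return default
        return wrapper
    return decorator

@silent_exec((IndexError, KeyError), default="fallback")
def lookup(data):
    # may raise KeyError (missing key) or IndexError (empty list)
    return data["items"][0]

print(lookup({"items": [42]}))  # 42
print(lookup({"items": []}))    # fallback  (IndexError silenced)
print(lookup({}))               # fallback  (KeyError silenced)
# lookup(None) would still raise TypeError -- a programming error is not hidden
```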
| 5 | 2016-10-06T20:55:32Z | [
"python"
]
|
Applying lambda function to datetime | 39,905,432 | <p>I am using the following code to find clusters with difference <=1 in a list </p>
<pre><code>from itertools import groupby
from operator import itemgetter
data = [ 1, 4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
for k, g in groupby(enumerate(data), lambda (i, x): (i-x)):
print map(itemgetter(1), g)
</code></pre>
<p>If however I change the <code>data</code> to be an array of datetimes, to find clusters of datetimes which are only 1 hour apart, it fails.</p>
<p>I am trying the following:</p>
<pre><code>>>> data
array([datetime.datetime(2016, 10, 1, 8, 0),
datetime.datetime(2016, 10, 1, 9, 0),
datetime.datetime(2016, 10, 1, 10, 0), ...,
datetime.datetime(2019, 1, 3, 9, 0),
datetime.datetime(2019, 1, 3, 10, 0),
datetime.datetime(2019, 1, 3, 11, 0)], dtype=object)
from itertools import groupby
from operator import itemgetter
for k, g in groupby(enumerate(data), lambda (i, x): (i-x).total_seconds()/3600):
print map(itemgetter(1), g)
</code></pre>
<p>The error is:</p>
<pre><code> for k, g in groupby(enumerate(data), lambda (i, x): int((i-x).total_seconds()/3600)):
TypeError: unsupported operand type(s) for -: 'int' and 'datetime.datetime'
</code></pre>
<p>There are lot of solutions on the web but I want to apply this particular one for learning.</p>
| 2 | 2016-10-06T20:53:15Z | 39,906,009 | <p>If you want to get all subsequences of items such that each item is an hour later than the previous one (not clusters of items that each are within an hour of each other), you need to iterate over pairs <code>(data[i-1], data[i])</code>. Currently, you are just iterating over <code>(i, data[i])</code> which raises <code>TypeError</code> when you try to subtract <code>data[i]</code> from <code>i</code>. A working example could look like this:</p>
<pre><code>from itertools import izip
def find_subsequences(data):
if len(data) <= 1:
return []
current_group = [data[0]]
delta = 3600
results = []
for current, next in izip(data, data[1:]):
if abs((next - current).total_seconds()) > delta:
# Here, `current` is the last item of the previous subsequence
# and `next` is the first item of the next subsequence.
if len(current_group) >= 2:
results.append(current_group)
current_group = [next]
continue
current_group.append(next)
return results
</code></pre>
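<p>For completeness, the original index-minus-value <code>groupby</code> trick can also be adapted to datetimes directly. The sketch below (Python 3 syntax, since tuple-unpacking lambdas were removed in Python 3, and with made-up sample timestamps) converts each datetime to an hour count so that consecutive hourly values share a constant key:</p>

```python
from datetime import datetime
from itertools import groupby

data = [datetime(2016, 10, 1, 8), datetime(2016, 10, 1, 9),
        datetime(2016, 10, 1, 12), datetime(2016, 10, 1, 13)]

epoch = data[0]

def key(pair):
    i, dt = pair
    # hours since the first timestamp; i minus that is constant
    # within a run of consecutive hourly values
    return i - (dt - epoch).total_seconds() / 3600

clusters = [[dt for _, dt in group] for _, group in groupby(enumerate(data), key)]
print(clusters)  # two clusters: [8:00, 9:00] and [12:00, 13:00]
```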
| 1 | 2016-10-06T21:37:50Z | [
"python",
"lambda"
]
|
No Pages getting crawled - scrapy | 39,905,469 | <p>The following spider code in Scrapy was developed to crawl pages from the americanas website:</p>
<pre><code> # -*- coding: utf-8 -*-
import scrapy
import urllib
import re
import webscrap.items
import time
from urlparse import urljoin
from HTMLParser import HTMLParser
class AmericanasSpider(scrapy.Spider):
name = "americanas"
start_urls = ('http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.mc_id=home-menuLista-alimentos/',)
source = webscrap.items.ImportSource ("Americanas")
def parse (self, response):
ind = 0
self.source.submit()
b = []
for c in response.xpath ('//div[@class="item-menu"]/ul'):
c1 = re.sub('[\t\n]','', c.xpath('//span [@class="menu-heading"]/text()').extract()[ind])
if (c1):
x = webscrap.items.Category(c1)
x.submit()
for b in c.xpath ('li'):
b1 = webscrap.items.Category( b.xpath('a/text()').extract()[0])
if (b1):
b1.setParent(x.getID())
b1.submit()
link = b.xpath ('@href').extract()
urla = urljoin (response.url, link)
request = scrapy.Request (urla, callback = self.parse_category)
request.meta['idCategory'] = b1.getID ()
yield request
for a in b.xpath ('ul/li/a/text()'):
a1 = webscrap.items.Category( a.extract())
a1.setParent(b1.getID())
a1.submit()
link = a.xpath ('@href').extract()
urla = urljoin (response.url, link)
request = scrapy.Request (urla, callback = self.parse_category)
request.meta['idCategory'] = a1.getID ()
yield request
ind = ind + 1
def parse_category(self, response):
# produtos na pagina
items = response.xpath('//div[@class="paginado"]//article[@class="single-product vitrine230 "]')
for item in items:
url = item.xpath('.//div[@itemprop="item"]/form/div[@class="productInfo"]/div]/a[@class="prodTitle"]/@href').extract()
urla = urljoin(response.url, link)
request = scrapy.Request (urla, callback = self.parse_product)
request.meta['idCategory'] = response.meta['idCategory']
yield request
# proxima pagina (caso exista)
nextpage = response.xpath('//div[@class="pagination"]/ul/li/a[@class="pure-button next"]/@href').extract()
if (nextpage):
link = nextpage[0]
urlb = urljoin(response.url, link)
self.log('Next Page: {0}'.format(nextpage))
request = scrapy.Request (urlb, callback = self.parse_category)
request.meta['idCategory'] = response.meta['idCategory']
yield request
def parse_product (self, response):
print response.url
title = response.xpath('//title/text()').extract()
        self.log(u'Título: {0}'.format(title))
</code></pre>
<p>but I get the following output:</p>
<pre><code> PS C:\Users\Natalia Oliveira\Desktop\Be Happy\behappy\import\webscrap> scrapy crawl americanas
2016-10-06 17:28:04 [scrapy] INFO: Scrapy 1.1.2 started (bot: webscrap)
2016-10-06 17:28:04 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'webscrap.spiders', 'REDIRECT_ENABLED': Fal
se, 'SPIDER_MODULES': ['webscrap.spiders'], 'BOT_NAME': 'webscrap'}
2016-10-06 17:28:04 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-10-06 17:28:05 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-06 17:28:05 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-06 17:28:05 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-06 17:28:05 [scrapy] INFO: Spider opened
2016-10-06 17:28:05 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-06 17:28:05 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-06 17:28:05 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.m
c_id=home-menuLista-alimentos/> (referer: None)
2016-10-06 17:28:07 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.americanas.com.br/loja/226795/alimentos-
e-bebidas?WT.mc_id=home-menuLista-alimentos/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all dupli
cates)
2016-10-06 17:28:07 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.m
c_id=home-menuLista-alimentos/> (referer: http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.mc_id=home-men
uLista-alimentos/)
2016-10-06 17:28:22 [scrapy] INFO: Closing spider (finished)
2016-10-06 17:28:22 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 931,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 80585,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'dupefilter/filtered': 60,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 6, 20, 28, 22, 257000),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 10, 6, 20, 28, 5, 346000)}
2016-10-06 17:28:22 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>I really don't know what is wrong here, because I'm a beginner in scrapy. Where is the error?
The <code>parse</code> method runs as expected, so I think the error must be in the <code>parse_category</code> or <code>parse_product</code> methods.</p>
| 0 | 2016-10-06T20:55:21Z | 39,906,320 | <p>Your <em>xpath</em> is not correct and there is only one <code>item-menu</code> per page, i have removed the items logic as I don't know what they are. This will get you all the links from the <em>item-menu</em> ul, you can add back in whatever logic you like:</p>
<pre><code> def parse(self, response):
for url in response.xpath('//div[@class="item-menu"]/ul/li[@class="item-linha"]/a/@href').extract():
if not url.startswith("http"):
url = response.urljoin(url)
request = scrapy.Request(url, callback=self.parse_category)
request.meta['idCategory'] = url # add whatever here
yield request
</code></pre>
<p>Your next method is also overcomplicated; you don't need to worry about anything but the <em>anchor</em> tags with the <code>prodTitle</code> class:</p>
<pre><code>def parse_category(self, response):
# produtos na pagina
urls = response.css('a.prodTitle::attr(href)').extract()
for url in urls:
request = scrapy.Request(url, callback=self.parse_product)
request.meta['idCategory'] = response.meta['idCategory']
yield request
# you want to check for the anchor with "Próxima" text
nextpage = response.xpath(u'//ul[@class="pure-paginator acrN"]/li/a[contains(.,"Próxima")]/@href').extract_first()
if nextpage:
self.log(u'Next Page: {0}'.format(nextpage))
request = scrapy.Request(nextpage, callback=self.parse_category)
request.meta['idCategory'] = response.meta['idCategory']
yield request
def parse_product(self, response):
print response.url
title = response.xpath('//title/text()').extract_first()
    self.log(u'Título: {0}'.format(title))
</code></pre>
<p>If you run it now you will see lots of output like:</p>
<pre><code>2016-10-06 23:25:15 [americanas] DEBUG: Next Page: http://www.americanas.com.br/linha/314061/alimentos-e-bebidas/biscoitos?ofertas.offset=30
2016-10-06 23:25:15 [americanas] DEBUG: Next Page: http://www.americanas.com.br/linha/342151/alimentos-e-bebidas/azeite-e-vinagre?ofertas.offset=30
2016-10-06 23:25:15 [americanas] DEBUG: Next Page: http://www.americanas.com.br/linha/342129/alimentos-e-bebidas/barra-de-cereais?ofertas.offset=30
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/produto/15815078/nan-comfor-1-formula-infantil-nestle-lata-800g> (referer: http://www.americanas.com.br/linha/314080/alimentos-e-bebidas/alimentacao-infantil)
http://www.americanas.com.br/produto/15815078/nan-comfor-1-formula-infantil-nestle-lata-800g
2016-10-06 23:25:16 [americanas] DEBUG: Título: Nan Comfor 1 Fórmula Infantil Nestlé Lata 800g - Americanas.com
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/linha/316829/eletrodomesticos/adega-de-vinho> (referer: http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.mc_id=home-menuLista-alimentos/)
2016-10-06 23:25:16 [americanas] DEBUG: Next Page: http://www.americanas.com.br/linha/316829/eletrodomesticos/adega-de-vinho?ofertas.offset=30
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/produto/7170286/goiabada-135g-diet-house> (referer: http://www.americanas.com.br/linha/314082/alimentos-e-bebidas/mercearia-doce)
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/produto/9955598/adocante-em-sache-fit-caixa-com-30-unidades-de-2-5g-uniao> (referer: http://www.americanas.com.br/linha/314082/alimentos-e-bebidas/mercearia-doce)
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/linha/285368/utilidades-domesticas/vinho> (referer: http://www.americanas.com.br/loja/226795/alimentos-e-bebidas?WT.mc_id=home-menuLista-alimentos/)
http://www.americanas.com.br/produto/7170286/goiabada-135g-diet-house
2016-10-06 23:25:16 [americanas] DEBUG: Título: Goiabada 135g - Diet House - Americanas.com
http://www.americanas.com.br/produto/9955598/adocante-em-sache-fit-caixa-com-30-unidades-de-2-5g-uniao
2016-10-06 23:25:16 [americanas] DEBUG: Título: Adoçante Em Sache Fit Caixa Com 30 Unidades De 2,5g União - Americanas.com
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/produto/121047374/barra-de-chocolate-ao-leite-lacta-150g-1-unidade> (referer: http://www.americanas.com.br/linha/314045/alimentos-e-bebidas/bomboniere)
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/linha/314080/alimentos-e-bebidas/alimentacao-infantil?ofertas.offset=30> (referer: http://www.americanas.com.br/linha/314080/alimentos-e-bebidas/alimentacao-infantil)
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/linha/314082/alimentos-e-bebidas/mercearia-doce?ofertas.offset=30> (referer: http://www.americanas.com.br/linha/314082/alimentos-e-bebidas/mercearia-doce)
2016-10-06 23:25:16 [scrapy] DEBUG: Crawled (200) <GET http://www.americanas.com.br/produto/9800047/acucar-refinado-caixa-com-400-envelopes-x-5g-uniao-premium> (referer: http://www.americanas.com.br/linha/314082/alimentos-e-bebidas/mercearia-doce)
http://www.americanas.com.br/produto/121047374/barra-de-chocolate-ao-leite-lacta-150g-1-unidade
2016-10-06 23:25:16 [americanas] DEBUG: Título: Barra de Chocolate Ao leite Lacta 150g - 1 unidade - Americanas.com
</code></pre>
| 0 | 2016-10-06T22:04:12Z | [
"python",
"web-scraping",
"scrapy",
"web-crawler"
]
|
Python: How to piggyback on existing tests when developing third party packages | 39,905,532 | <p>My <a href="https://github.com/tomchristie/django-rest-framework/pull/4540" rel="nofollow">PR</a> on django-rest-framework to add a "hybrid pagination" was rejected, the reason being that it would be better as a 3rd party package. </p>
<p>So I went ahead and created the package structure but got stuck in creating the test, if you have a look at the <a href="https://github.com/tomchristie/django-rest-framework/pull/4540/files" rel="nofollow">PR files changed</a>, my new tests are merely extending the existing tests and changed to use my new pagination class.</p>
<pre><code> +class TestCombinedPaginationPageNumber(TestPageNumberPagination):
+ def setup(self):
+ class ExamplePagination(pagination.HybridPagination):
+ page_size = 5
+
+ self.pagination = ExamplePagination()
+ self.queryset = range(1, 101)
+
+
+class TestCombinedPaginationLimitOffset(TestLimitOffset):
+ def setup(self):
+ class ExamplePagination(pagination.HybridPagination):
+ default_limit = 10
+ max_limit = 15
+
+ self.pagination = ExamplePagination()
+ self.queryset = range(1, 101)
</code></pre>
<p>I am having trouble working out a way to piggyback on these tests in my own 3rd party tests. I can't extend them remotely, since installing the package doesn't include the test files. I tried copying the particular <code>test_pagination.py</code> file but I'm getting a lot of errors.</p>
| 0 | 2016-10-06T20:59:36Z | 39,913,118 | <p>Correct: you won't be able to include the tests from the <code>pip install</code>ed package. You'll need to clone whichever bits of the test cases you want to replicate locally.</p>
<blockquote>
<p>I tried copying the particular test_pagination.py file but getting a lot of errors.</p>
</blockquote>
<p>I'd suggest starting off small. Take a single test case that you want to replicate. Copy <em>just that one</em>, and also any imports it relies on.</p>
<p>More generally you should probably just try to exclusively test the bits that your package adds, rather than re-testing REST framework's behavior. Eg for your "switch between pagination styles", don't aim to test the pagination itself, but rather the switching behavior.</p>
<p>Hope that helps. If you've any issues with resolving specific errors while writing the test case please do shout out on the REST framework mailing list.</p>
| 0 | 2016-10-07T08:45:44Z | [
"python",
"django"
]
|
How do I handle form data in aiohttp responses | 39,905,588 | <p>I'm looking to get the multipart form data and turn it into a dictionary. Easy enough for json, but this seems to be a bit different. </p>
<p>Current code:</p>
<pre><code>app = web.Application()
async def deploy(request):
# retrieve multipart form data or
# x-www-form-urlencoded data
# convert to a dictionary if not already
text = "Hello"
return web.Response(text=text)
app.router.add_post('/', deploy)
web.run_app(app)
</code></pre>
| 1 | 2016-10-06T21:03:38Z | 39,906,151 | <p>You can use the <code>request.post()</code> method.</p>
<pre><code>app = web.Application()
async def deploy(request):
# retrieve multipart form data or
# x-www-form-urlencoded data
data = await request.post()
print(data)
text = "Hello"
return web.Response(text=text)
app.router.add_post('/', deploy)
web.run_app(app)
</code></pre>
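<p><code>request.post()</code> returns a MultiDict-like mapping rather than a plain <code>dict</code>, because form bodies can repeat a key. As a stdlib-only illustration of that decoding (the body string here is made up, and this is not aiohttp's internal code), an x-www-form-urlencoded body can be turned into a dictionary like so:</p>

```python
from urllib.parse import parse_qs

body = "name=deploy&env=prod&env=staging"

# parse_qs maps every key to a list of values; unwrap singleton lists
raw = parse_qs(body)
parsed = {k: v[0] if len(v) == 1 else v for k, v in raw.items()}
print(parsed)  # {'name': 'deploy', 'env': ['prod', 'staging']}
```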
| 2 | 2016-10-06T21:50:21Z | [
"python",
"multipartform-data",
"python-3.5",
"aiohttp"
]
|
I can't get QueryAABB to work in PyBox2D. What am I doing wrong? | 39,905,657 | <p>I'm trying to detect if the mouse pointer is over a body so I can drag it, but I'm getting the error below. I don't know if it's me or a bug in pybox2d, but I've been at it for hours and the docs are ancient. </p>
<pre><code>>>> from Box2D.b2 import *
>>> w = world()
>>> my_body = w.CreateDynamicBody(position=(1,1))
>>> aabb = AABB()
>>> aabb.lowerBound = (1-.001,1-.001)
>>> aabb.upperBound = (1+.001,1+.001)
>>> def callback(fixture):
... shape = fixture.shape
... p = (1,1)
... if fixture.body.type != 0: # type 0 is static
... if shape.TestPoint(fixture.body.transform,p):
... return False
... return True
...
>>> w.QueryAABB(callback,aabb)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: in method 'b2World_QueryAABB', argument 2 of type 'b2QueryCallback *'
</code></pre>
<p>Obviously, I expect the query to return True (no shape detected, keep looking), because I haven't created a shape for the body but that doesn't explain the type error. Please help, thanks in advance!</p>
| 0 | 2016-10-06T21:08:22Z | 39,906,397 | <p>Silly me, I figured it out. Right there in the 'ancient' docs.</p>
<p><a href="https://github.com/pybox2d/pybox2d/wiki/manual#aabb-queries" rel="nofollow">https://github.com/pybox2d/pybox2d/wiki/manual#aabb-queries</a></p>
| 0 | 2016-10-06T22:11:12Z | [
"python",
"python-3.x",
"box2d"
]
|
Appending output of a for loop for Python to a csv file | 39,905,678 | <p>I have a folder with .txt files in it. My code finds the line count and character count of each of these files and saves the output for each file in a single csv file, <em>Linecount.csv</em>, in a different directory. For some reason the csv output repeats the character and line counts of the last file for every row. The output of the print statement is correct.</p>
<p>For the csv file it is not.</p>
<pre><code>import glob
import os
import csv
os.chdir('c:/Users/dasa17/Desktop/sample/Upload')
for file in glob.glob("*.txt"):
chars = lines = 0
with open(file,'r')as f:
for line in f:
lines+=1
chars += len(line)
a=file
b=lines
c=chars
print(a,b,c)
d=open('c:/Users/dasa17/Desktop/sample/Output/LineCount.csv', 'w')
writer = csv.writer(d,lineterminator='\n')
for a in os.listdir('c:/Users/dasa17/Desktop/sample/Upload'):
    writer.writerow((a,b,c))
d.close()
</code></pre>
| 0 | 2016-10-06T21:10:01Z | 39,906,978 | <p>Please check your indentation.</p>
<p>You are looping through each file using <code>for file in glob.glob("*.txt"):</code> </p>
<p>This stores the last result in <code>a</code>,<code>b</code>, and <code>c</code>. It doesn't appear to write it anywhere.</p>
<p>You then loop through each item using <code>for a in os.listdir('c:/Users/dasa17/Desktop/sample/Upload'):</code>, and store <code>a</code> from this loop (the filename), and the last value of <code>b</code> and <code>c</code> from the initial loop.</p>
<p>I've not run but reordering as follows may solve the issue:</p>
<pre><code>import glob
import os
import csv
os.chdir('c:/Users/dasa17/Desktop/sample/Upload')
d=open('c:/Users/dasa17/Desktop/sample/Output/LineCount.csv', 'w')
writer = csv.writer(d,lineterminator='\n')
for file in glob.glob("*.txt"):
chars = lines = 0
with open(file,'r') as f:
for line in f:
lines+=1
chars += len(line)
a=file
b=lines
c=chars
print(a,b,c)
writer.writerow((a,b,c))
d.close()
</code></pre>
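<p>The reordered version above can also be condensed with context managers. A self-contained sketch of the same fix (the file names and contents here are made up, and a temporary directory is used so it runs anywhere): open the csv writer once, then write one row per file inside the loop.</p>

```python
import csv
import glob
import os
import tempfile

def count_file(path):
    """Return (basename, line count, character count) for a text file."""
    lines = chars = 0
    with open(path) as f:
        for line in f:
            lines += 1
            chars += len(line)
    return os.path.basename(path), lines, chars

with tempfile.TemporaryDirectory() as d:
    # create two sample .txt files
    with open(os.path.join(d, "a.txt"), "w") as f:
        f.write("hello\nworld\n")
    with open(os.path.join(d, "b.txt"), "w") as f:
        f.write("one line\n")

    out = os.path.join(d, "LineCount.csv")
    # open the writer once, write one row per file -- the key fix
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        for path in sorted(glob.glob(os.path.join(d, "*.txt"))):
            writer.writerow(count_file(path))

    with open(out) as f:
        rows = list(csv.reader(f))

print(rows)  # [['a.txt', '2', '12'], ['b.txt', '1', '9']]
```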
| 0 | 2016-10-06T23:08:51Z | [
"python",
"csv"
]
|
Closed lines in matplotlib contour plots | 39,905,702 | <p>When looking closely at contour plots made with matplotlib, I noticed that smaller contours have inaccurate endpoints which do not close perfectly in PDF figures. Consider the minimal example:</p>
<pre><code>plt.gca().set_aspect('equal')
x,y = np.meshgrid(np.linspace(-1,1,100), np.linspace(-1,1,100))
r = x*x + y*y
plt.contour(np.log(r))
plt.savefig("test.pdf")
</code></pre>
<p>The central part of the resulting test.pdf file, shown below, clearly shows the problem. Is there a way to solve this or is it a bug/intrinsic inaccuracy of matploplib?</p>
<p><a href="http://i.stack.imgur.com/NJWKC.png" rel="nofollow"><img src="http://i.stack.imgur.com/NJWKC.png" alt="enter image description here"></a></p>
| 3 | 2016-10-06T21:12:00Z | 39,906,056 | <p>So I just did the same thing, but got closed contours (see images). Did you check for any updates on the package?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.gca().set_aspect('equal')
x,y = np.meshgrid(np.linspace(-1,1,100), np.linspace(-1,1,100))
r = x*x + y*y
plt.contour(np.log(r))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/QVrXD.png" rel="nofollow">Zoomed Out</a></p>
<p><a href="http://i.stack.imgur.com/y5P6n.png" rel="nofollow">Zoomed In</a></p>
| 0 | 2016-10-06T21:43:49Z | [
"python",
"pdf",
"matplotlib",
"contour"
]
|
Closed lines in matplotlib contour plots | 39,905,702 | <p>When looking closely at contour plots made with matplotlib, I noticed that smaller contours have inaccurate endpoints which do not close perfectly in PDF figures. Consider the minimal example:</p>
<pre><code>plt.gca().set_aspect('equal')
x,y = np.meshgrid(np.linspace(-1,1,100), np.linspace(-1,1,100))
r = x*x + y*y
plt.contour(np.log(r))
plt.savefig("test.pdf")
</code></pre>
<p>The central part of the resulting test.pdf file, shown below, clearly shows the problem. Is there a way to solve this or is it a bug/intrinsic inaccuracy of matploplib?</p>
<p><a href="http://i.stack.imgur.com/NJWKC.png" rel="nofollow"><img src="http://i.stack.imgur.com/NJWKC.png" alt="enter image description here"></a></p>
| 3 | 2016-10-06T21:12:00Z | 39,912,344 | <p>Disclaimer: this is more an <strong>explanation + hack</strong> than a real answer.</p>
<p>I believe that there is a fundamental problem in the way matplotlib makes contour plots. Essentially, all contours are collections of lines (<code>LineCollection</code>), while they should be collections of possibly closed lines (<code>PolyCollection</code>). There might be good reasons why things are done this way, but in the simple example I made this choice clearly produces artifacts. A not-very-nice solution is to convert a posteriori all <code>LineCollection</code>'s into <code>PolyCollection</code>'s. This is what is done in the following code:</p>
<pre><code>from matplotlib.collections import PolyCollection
eps = 1e-5
plt.gca().set_aspect('equal')
x,y = np.meshgrid(np.linspace(-1,1,100), np.linspace(-1,1,100))
r = x*x + y*y
plt.contour(np.log(r/1.2))
ca = plt.gca()
N = len(ca.collections)
for n in range(N):
c = ca.collections.pop(0)
for s in c.get_segments():
closed = (abs(s[0,0] - s[-1,0]) < eps) and (abs(s[0,1] - s[-1,1]) < eps)
p = PolyCollection([s], edgecolors=c.get_edgecolors(),
linewidths=c.get_linewidths(), linestyles=c.get_linestyles(),
facecolors=c.get_facecolors(), closed=closed)
ca.add_collection(p)
plt.savefig("test.pdf")
</code></pre>
<p>A zoom of the result obtained shows that everything is OK now:</p>
<p><a href="http://i.stack.imgur.com/rEP4i.png" rel="nofollow"><img src="http://i.stack.imgur.com/rEP4i.png" alt="enter image description here"></a></p>
<p>Some care is taken to check if a contour is closed: in the present code, this is done with an approximate equality check for the first and last point: I am wondering if there is a better way to do this (perhaps matplotlib returns some data to check closed contours?). In any case, again, this is a hack: I would be happy to hear if anyone has a better solution (or has a way to fix this within matplotlib).</p>
| 3 | 2016-10-07T08:01:22Z | [
"python",
"pdf",
"matplotlib",
"contour"
]
|
How to execute code on save in Django User model? | 39,905,713 | <p>I would like to run some code specifically when the is_active field is changed for a Django User, similar to how the save method works for other models:</p>
<pre><code>class Foo(models.Model):
...
def save(self, *args, **kwargs):
if self.pk is not None:
orig = Foo.objects.get(pk=self.pk)
if orig.is_active != self.is_active:
# code goes here
</code></pre>
<p>Can this be done through another model that is in one to one relation with the User model? Something like:</p>
<pre><code>class Bar(models.Model):
owner = models.OneToOneField(User, on_save=?)
...
</code></pre>
<p>I guess I could duplicate the is_active field on the related model and then set the is_active field on the User when saving the related model. But this seems like a bit of a messy solution.</p>
| 1 | 2016-10-06T21:13:02Z | 39,905,768 | <p>You're looking for this <a href="https://docs.djangoproject.com/en/1.10/ref/signals/#django.db.models.signals.pre_save" rel="nofollow">Signal</a></p>
<pre><code>from django.db.models.signals import pre_save
from django.contrib.auth.models import User
def do_your_thing(sender, instance, **kwargs):
# Do something
print(instance)
pre_save.connect(do_your_thing, sender=User)
</code></pre>
| 2 | 2016-10-06T21:17:21Z | [
"python",
"django"
]
|
Out of bounds nanosecond timestamp | 39,905,822 | <p>I have a variable <code>['date_hiring']</code> in a Google spreadsheet in a format like</p>
<pre><code>16.01.2016
</code></pre>
<p>I import it into Python, where the variable has object type. I try to convert it to datetime:</p>
<pre><code>from datetime import datetime
data['date_hiring'] = pd.to_datetime(data['date_hiring'])
</code></pre>
<p>and i get</p>
<pre><code>OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 16-01-06 00:00:00
</code></pre>
<p>I know from this <a href="http://stackoverflow.com/questions/32888124/pandas-out-of-bounds-nanosecond-timestamp-after-offset-rollforward-plus-adding-a">pandas out of bounds nanosecond timestamp after offset rollforward plus adding a month offset</a> that </p>
<blockquote>
<p>Since pandas represents timestamps in nanosecond resolution, the
timespan that can be represented using a 64-bit integer is limited to
approximately 584 years</p>
</blockquote>
<p>but in the original data in the Google spreadsheet I have no values like '16.01.06',</p>
<p>only values like '16.06.2006'.</p>
<p>So the problem is in the conversion.</p>
<p>How can I fix it?</p>
| 2 | 2016-10-06T21:22:15Z | 39,905,987 | <p>According to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow">documentation</a>, the <em>dayfirst</em> field defaults to false:</p>
<blockquote>
<p>dayfirst : boolean, default False</p>
</blockquote>
<p>So it must have decided that there was a malformed date there and tried to interpret it as a time-of-day. But even then it probably didn't think that 16 point anything could be hours or minutes, so it tried to convert it as seconds. But there is an extra decimal point so it gave up and said I don't like the fractional seconds. (Or something like that.)</p>
<p>I think you can fix it by giving an explicit format string or at least setting <em>dayfirst</em>.</p>
| 2 | 2016-10-06T21:36:01Z | [
"python",
"datetime",
"pandas"
]
|
Python Program Expecting a Return | 39,905,916 | <p>I'm having trouble getting this program I'm writing to execute correctly. I'm trying to build a table within a separate class. The method within the class seems to require a return in order for the table to show, and I'm trying to figure out why. If I use "return" or "return None" the table will not show when I run the program. However, if I use a nonsense reference like "return junk", it will compile with the error that the reference is not defined, but the table will appear when I run the program. Can someone help me with this? </p>
<pre><code>from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class MyMainWindow(QMainWindow):
def __init__(self, parent=None):
super(MyMainWindow, self).__init__(parent)
self.table_widget = TableWidget(self)
self.setCentralWidget(self.table_widget)
class TableWidget(QWidget):
def __init__(self, parent):
super(TableWidget, self).__init__(parent)
table = QTableWidget()
tableItem = QTableWidgetItem()
# initiate table
table.setWindowTitle("test_table")
table.resize(400, 250)
table.setRowCount(4)
table.setColumnCount(2)
# set data
table.setItem(0,0, QTableWidgetItem("Item (1,1)"))
table.setItem(0,1, QTableWidgetItem("Item (1,2)"))
table.setItem(1,0, QTableWidgetItem("Item (2,1)"))
table.setItem(1,1, QTableWidgetItem("Item (2,2)"))
table.setItem(2,0, QTableWidgetItem("Item (3,1)"))
table.setItem(2,1, QTableWidgetItem("Item (3,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
# show table
table.show()
return junk
def main():
app = QApplication(sys.argv)
GUI = MyMainWindow()
GUI.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-10-06T21:29:46Z | 39,906,819 | <p><code>__init__</code> is the constructor, so it can't return a value. </p>
<hr>
<p>Every <code>Widget</code> needs a parent - and so does <code>QTableWidget</code>.</p>
<pre><code> table = QTableWidget(parent)
</code></pre>
<p>Because you send <code>MainWindow</code> to your class as the parent, <code>MainWindow</code> is now the <code>table</code>'s parent and can show the <code>table</code> in its area.</p>
<hr>
<p><code>table</code> is not the (main) window, so setting <code>WindowTitle</code> on it has no visible effect. You have to set it on <code>MainWindow</code> (or use <code>parent</code> in <code>TableWidget</code>).</p>
<p>The same goes for <code>resize()</code> - to resize the window you have to call it on <code>MainWindow</code> (or use <code>parent</code> in <code>TableWidget</code>).</p>
<p><code>show()</code> is used to show the (main) window.</p>
<hr>
<p>You could define your class using <code>QTableWidget</code> </p>
<pre><code> class TableWidget(QTableWidget):
</code></pre>
<p>and then you can use <code>self</code> instead of <code>table</code> </p>
<hr>
<p><strong>EDIT:</strong> full code</p>
<pre><code>from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class MyMainWindow(QMainWindow):
def __init__(self, parent=None):
super(MyMainWindow, self).__init__(parent)
self.table_widget = TableWidget(self)
self.setCentralWidget(self.table_widget)
#self.setWindowTitle("test_table")
#self.resize(400, 250)
self.show()
class TableWidget(QTableWidget):
def __init__(self, parent):
super(TableWidget, self).__init__(parent)
# change main window
parent.setWindowTitle("test_table")
parent.resize(400, 250)
# initiate table
self.setRowCount(4)
self.setColumnCount(2)
# set data
self.setItem(0,0, QTableWidgetItem("Item (1,1)"))
self.setItem(0,1, QTableWidgetItem("Item (1,2)"))
self.setItem(1,0, QTableWidgetItem("Item (2,1)"))
self.setItem(1,1, QTableWidgetItem("Item (2,2)"))
self.setItem(2,0, QTableWidgetItem("Item (3,1)"))
self.setItem(2,1, QTableWidgetItem("Item (3,2)"))
self.setItem(3,0, QTableWidgetItem("Item (4,1)"))
self.setItem(3,1, QTableWidgetItem("Item (4,2)"))
self.setItem(3,0, QTableWidgetItem("Item (4,1)"))
self.setItem(3,1, QTableWidgetItem("Item (4,2)"))
def main():
app = QApplication(sys.argv)
win = MyMainWindow() # probably it has to be assigned to variable
#win.show() # use if you don't have `self.show()` in class
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 0 | 2016-10-06T22:51:41Z | [
"python",
"python-3.x",
"pyqt4"
]
|
Python Program Expecting a Return | 39,905,916 | <p>I'm having trouble getting this program I'm writing to execute correctly. I'm trying to build a table, within a separate class. The method within the class seems to require a return, in order for the table to show and I'm trying to figure out why. If I use "return" or "return None" the table will not show when I run the program. However, if I use a nonsense reference like "return junk", it will compile with the error that the reference is not defined but the table will appear when I run the program. Can someone help me with this? </p>
<pre><code>from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class MyMainWindow(QMainWindow):
def __init__(self, parent=None):
super(MyMainWindow, self).__init__(parent)
self.table_widget = TableWidget(self)
self.setCentralWidget(self.table_widget)
class TableWidget(QWidget):
def __init__(self, parent):
super(TableWidget, self).__init__(parent)
table = QTableWidget()
tableItem = QTableWidgetItem()
# initiate table
table.setWindowTitle("test_table")
table.resize(400, 250)
table.setRowCount(4)
table.setColumnCount(2)
# set data
table.setItem(0,0, QTableWidgetItem("Item (1,1)"))
table.setItem(0,1, QTableWidgetItem("Item (1,2)"))
table.setItem(1,0, QTableWidgetItem("Item (2,1)"))
table.setItem(1,1, QTableWidgetItem("Item (2,2)"))
table.setItem(2,0, QTableWidgetItem("Item (3,1)"))
table.setItem(2,1, QTableWidgetItem("Item (3,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
# show table
table.show()
return junk
def main():
app = QApplication(sys.argv)
GUI = MyMainWindow()
GUI.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-10-06T21:29:46Z | 39,907,189 | <p>You're not keeping a reference to the table, so it gets garbage-collected when <code>__init__</code> returns. You can fix this by putting the table in a layout, like this:</p>
<pre><code>class TableWidget(QWidget):
def __init__(self, parent):
super(TableWidget, self).__init__(parent)
table = QTableWidget()
tableItem = QTableWidgetItem()
table.setRowCount(4)
table.setColumnCount(2)
# set data
table.setItem(0,0, QTableWidgetItem("Item (1,1)"))
table.setItem(0,1, QTableWidgetItem("Item (1,2)"))
table.setItem(1,0, QTableWidgetItem("Item (2,1)"))
table.setItem(1,1, QTableWidgetItem("Item (2,2)"))
table.setItem(2,0, QTableWidgetItem("Item (3,1)"))
table.setItem(2,1, QTableWidgetItem("Item (3,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
table.setItem(3,0, QTableWidgetItem("Item (4,1)"))
table.setItem(3,1, QTableWidgetItem("Item (4,2)"))
layout = QVBoxLayout(self)
layout.addWidget(table)
</code></pre>
| 0 | 2016-10-06T23:33:07Z | [
"python",
"python-3.x",
"pyqt4"
]
|
How to sort a list of dictionaries by the value of a key and by the value of a value of a key? | 39,905,925 | <pre><code>{
"states": [
{
"timestamp": {
"double": 968628281.0
},
"sensorSerialNumber": 13020235
},
{
"timestamp": {
"double": 964069109.0
},
"sensorSerialNumber": 13020203
},
{
"timestamp": {
"double": 9641066.0
},
"sensorSerialNumber": 30785
}
]
}
</code></pre>
<p>Is there a way to sort this list of dictionaries by "sensorSerialNumber" and by this number inside the value of "timestamp" (9.68628281E8),</p>
<pre><code>"timestamp":{"double":9.68628281E8}
</code></pre>
<p>using the built-in function(<em>sorted</em>)</p>
<pre><code>from operator import itemgetter
newlist = sorted(list_to_be_sorted, key=itemgetter('name'))
</code></pre>
<p>However, my question is <strong>slightly different from other questions</strong>. If there wasn't a dictionary inside the dictionary I would just need to add the string of the second key to the built-in function, like:</p>
<pre><code>from operator import itemgetter
newlist = sorted(list_to_be_sorted, key=itemgetter('name', 'age'))
</code></pre>
| 1 | 2016-10-06T21:30:38Z | 39,905,948 | <p><code>key</code> can be any callable, it is passed each element in the <code>list_to_be_sorted</code> in turn. So a <code>lambda</code> would fit here:</p>
<pre><code>newlist = sorted(
data['states'],
key=lambda i: (i['sensorSerialNumber'], i['timestamp']['double']))
</code></pre>
<p>So the lambda returns the value for the <code>'sensorSerialNumber'</code> key as the first element of a tuple, and the value for the <code>'double'</code> key from the dictionary value for the <code>'timestamp'</code> key.</p>
<p>Demo:</p>
<pre><code>>>> data = {"states" :
... [{"timestamp":{"double":9.68628281E8},"sensorSerialNumber":13020235},
... {"timestamp":{"double":9.64069109E8},"sensorSerialNumber":13020203},
... {"timestamp":{"double":9641066.0},"sensorSerialNumber":30785}]
... }
>>> sorted(data['states'], key=lambda i: (i['sensorSerialNumber'], i['timestamp']['double']))
[{'timestamp': {'double': 9641066.0}, 'sensorSerialNumber': 30785}, {'timestamp': {'double': 964069109.0}, 'sensorSerialNumber': 13020203}, {'timestamp': {'double': 968628281.0}, 'sensorSerialNumber': 13020235}]
>>> from pprint import pprint
>>> pprint(_)
[{'sensorSerialNumber': 30785, 'timestamp': {'double': 9641066.0}},
{'sensorSerialNumber': 13020203, 'timestamp': {'double': 964069109.0}},
{'sensorSerialNumber': 13020235, 'timestamp': {'double': 968628281.0}}]
</code></pre>
<p>The demo is not that meaningful, as there are no equal serial numbers for the inclusion of the timestamp value to make a difference.</p>
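<p>To see the tie-breaking in action, here is a tiny made-up sample with duplicate serial numbers:</p>

```python
states = [
    {"timestamp": {"double": 20.0}, "sensorSerialNumber": 1},
    {"timestamp": {"double": 10.0}, "sensorSerialNumber": 1},
    {"timestamp": {"double": 5.0},  "sensorSerialNumber": 2},
]

# Same key as above: serial number first, timestamp as tie-breaker.
ordered = sorted(
    states,
    key=lambda i: (i["sensorSerialNumber"], i["timestamp"]["double"]))

print([(s["sensorSerialNumber"], s["timestamp"]["double"]) for s in ordered])
# [(1, 10.0), (1, 20.0), (2, 5.0)]
```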
| 1 | 2016-10-06T21:32:52Z | [
"python",
"list",
"sorting",
"dictionary"
]
|
Dataframe head not shown in PyCharm | 39,905,931 | <p>I have the following code in PyCharm</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib as plt
df = pd.read_csv("c:/temp/datafile.txt", sep='\t')
df.head(10)
</code></pre>
<p>I get the following output:</p>
<pre><code>Process finished with exit code 0
</code></pre>
<p>I am supposed to get the first ten rows of my datafile, but these do not appear in <code>PyCharm</code>.</p>
<p>I checked the Project interpreter and all settings seem to be alright there. The right packages are installed (numpy, pandas, matplotlib) under the right Python version. </p>
<p>What am I doing wrong? Thanks.</p>
| 0 | 2016-10-06T21:31:28Z | 39,906,582 | <p><code>PyCharm</code> runs your script as a normal program; unlike the interactive <code>Python Shell</code>, it does not automatically print the result of every expression. You have to use <code>print()</code> to display anything.</p>
<pre><code>print(df.head(10))
</code></pre>
| 6 | 2016-10-06T22:28:07Z | [
"python",
"pandas",
"dataframe",
"pycharm"
]
|
A query that works in MySQL terminal fails when executed via PyMySQL | 39,905,956 | <p>I've provided three sample SQL queries below. Each of these works fine, returning the expected table output when executed directly from terminal in the <code>MySQL [db] ></code> environment. </p>
<p>Each of these queries is saved in a python doc called <code>queries.py</code>. The second two queries work fine when passed to the db via <code>pymysql</code>, but the first one returns an empty array. </p>
<p>I've checked out <a href="http://stackoverflow.com/questions/28489396/mysql-connector-python-query-works-in-mysql-but-not-python">this post</a>, <a href="http://stackoverflow.com/questions/11705450/sql-query-works-in-console-but-not-in-python">this post</a>, and <a href="https://github.com/PyMySQL/PyMySQL/issues/399" rel="nofollow">this post</a> and none of them seem to be addressing the issue. </p>
<p>Here's the sample code that I'm using to test in Python (version <code>3.5</code>): </p>
<pre><code>import pymysql
import params
import queries
conn = pymysql.connect(
host = params.HOST,
user = params.USER,
password = params.PWD,
db = 'db',
charset='utf8',
cursorclass=pymysql.cursors.DictCursor,
autocommit = True)
test_queries = [queries.VETTED, queries.CREATED, queries.CLOSED_OPPS]
with conn.cursor() as cursor:
for query in test_queries:
cursor.execute(query)
print(cursor.fetchall())
() #..blank output -- doesn't make sense because corresponding query works in MySQL env
[...] #..expected output from query 2
[...] #..expected output from query 3
</code></pre>
<p>Here's what <code>queries.py</code> looks like. Each of those queries returns expected output when executed in MySQL, but the first one, <code>VETTED</code>, returns a blank array when passed to the DB via <code>pymysql</code>: </p>
<pre><code>VETTED = """
SELECT
date_format(oa.logged_at, '%Y-%m-%d') as `action_date`,
count(1) `count`
FROM
crm_audit_field oaf,
crm_audit oa,
crm_sales_lead lead
WHERE
oa.id = oaf.audit_id AND
oaf.field = 'status' AND
(
oaf.new_text = 'Qualified' OR
oaf.new_text = 'Disqualified' OR
oaf.new_text = 'Canceled'
) AND
oa.object_class = 'CRM\\Bundle\\SalesBundle\\Entity\\Lead' AND
lead.id = oa.object_id AND
(lead.status_id = 'qualified' OR lead.status_id = 'canceled')
GROUP BY
`action_date`;"""
CREATED = """
SELECT
DATE_FORMAT(lead.createdat, '%Y-%m-%d') as `creation_date`,
count(1)
FROM
crm_sales_lead `lead`
GROUP BY
creation_date;"""
CLOSED_OPPS = """
SELECT
date_format(closed_at, '%Y-%m-%d') `close_date`,
count(1) `count`
FROM
crm_sales_opportunity
WHERE
status_id = 'won'
GROUP BY
`close_date`;"""
</code></pre>
| -1 | 2016-10-06T21:33:24Z | 39,906,259 | <p>I think you need <em>four</em> backslashes in the Python string literal to represent <em>two</em> backslash characters needed by MySQL, to represent a backslash character.</p>
<p>MySQL needs <em>two</em> backslashes in a string literal to represent a backslash character. The SQL text you have works in MySQL because the string literals contain <em>two</em> backslash characters.</p>
<p>But in the Python code, the SQL statement being sent to MySQL contains only <em>single</em> backslash characters.</p>
<p>That's because Python also needs <em>two</em> backslashes in a string literal to represent a backslash character, just like MySQL does.</p>
<p>So, in Python... </p>
<pre><code> """CRM\\Bundle"""
^^
</code></pre>
<p>represents a string containing only <em>one</em> backslash character: <strong><code>'CRM\Bundle'</code></strong></p>
<p>To get a string containing <em>two</em> backslash characters: <strong><code>'CRM\\Bundle'</code></strong></p>
<p>You would need <em>four</em> backslashes in the Python literal, like this:</p>
<pre><code> """CRM\\\\Bundle"""
^^^^
</code></pre>
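<p>You can verify the escaping rules directly in Python (nothing MySQL-specific here):</p>

```python
two_in_source = "CRM\\Bundle"      # one real backslash in the string
four_in_source = "CRM\\\\Bundle"   # two real backslashes in the string

print(two_in_source)    # CRM\Bundle
print(four_in_source)   # CRM\\Bundle
print(len(two_in_source), len(four_in_source))  # 10 11
```

<p>A raw string literal (<code>r"CRM\\Bundle"</code>) also keeps both backslashes, which can be easier to read.</p>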
| 1 | 2016-10-06T21:59:46Z | [
"python",
"mysql",
"python-3.x",
"pymysql"
]
|
XML attribute parsing with python and ElementTree | 39,905,988 | <p>Folks!
I'm trying to parse some weirdly-formed XML:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<analytics>
<standard1>
...
<attributes>
<attribute name="agentname" value="userx userx" />
<attribute name="agentpk" value="5" />
<attribute name="analytics:callid" value="757004000003597" />
...
<attribute name="wrapuptime" value="0" />
</attributes>
</standard1>
<standard2>
...
<attributes>
<attribute name="agentname" value="userx userx" />
<attribute name="agentpk" value="5" />
<attribute name="analytics:callid" value="757004000003597" />
...
<attribute name="wrapuptime" value="0" />
</attributes>
</standard2>
<engines>
...
</engines>
</analytics>
</code></pre>
<p>Since both <em>name</em> and <em>value</em> are attributes, I have no idea how to access <em>value</em> by <em>name</em> without looping through the whole attributes subsection with a foreach cycle.</p>
<p>Any ideas how to get a direct access using ElementTree?</p>
| 0 | 2016-10-06T21:36:08Z | 39,906,156 | <p>You can use a simple <em>XPath expression</em> to filter <code>attribute</code> element by the value of <code>name</code> attribute. Sample working code:</p>
<pre><code>import xml.etree.ElementTree as ET
data = """<?xml version="1.0" encoding="UTF-8"?>
<analytics>
<standard>
<attributes>
<attribute name="agentname" value="userx userx" />
<attribute name="agentpk" value="5" />
<attribute name="analytics:callid" value="757004000003597" />
<attribute name="wrapuptime" value="0" />
</attributes>
</standard>
</analytics>
"""
root = ET.fromstring(data)
print(root.find(".//attribute[@name='agentname']").attrib["value"])
</code></pre>
<p>Prints:</p>
<pre><code>userx userx
</code></pre>
<p>Beware that <code>xml.etree.ElementTree</code> has a <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support" rel="nofollow">limited XPath support</a>.</p>
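<p>If you need many lookups on the same element, another option is to build a plain dict once (a small sketch of the same idea, using a trimmed-down sample):</p>

```python
import xml.etree.ElementTree as ET

data = """<attributes>
  <attribute name="agentname" value="userx userx" />
  <attribute name="agentpk" value="5" />
</attributes>"""

root = ET.fromstring(data)
# name attribute -> value attribute, for every <attribute> element
lookup = {a.get("name"): a.get("value") for a in root.iter("attribute")}
print(lookup["agentpk"])  # 5
```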
| 1 | 2016-10-06T21:50:59Z | [
"python",
"xml",
"elementtree"
]
|
Python help displaying strings | 39,906,046 | <pre><code>def main():
name = input("Enter your name, eg Zelle: ")
name = name.lower()
output = []
for character in name:
number = ord(character) - 96
output.append(number)
print(output)
main()
</code></pre>
<p>This is what I have so far, but I need to make this program run through each of the letters in the name and display the output like this:</p>
<pre><code>Enter a name (eg, Zelle): Zelle
Letter Z value is 26
Letter e value is 5
Letter l value is 12
Letter l value is 12
Letter e value is 5
The numeric value of the name 'Zelle' is 60
</code></pre>
<p>And honestly I'm not too sure how to do that</p>
| 0 | 2016-10-06T21:42:42Z | 39,906,171 | <p>here you go:</p>
<pre><code>def main():
name = input("Enter your name, eg Zelle: ")
name = name.lower()
output = []
name_value = 0
for character in name:
number = ord(character) - 96
output.append((character,number))
print("Letter {} value is {}".format(character, number))
name_value += number
print("The numeric value of the name: {} is {}".format(name, name_value))
main()
</code></pre>
<p>edited :)</p>
| 0 | 2016-10-06T21:51:57Z | [
"python"
]
|
Python help displaying strings | 39,906,046 | <pre><code>def main():
name = input("Enter your name, eg Zelle: ")
name = name.lower()
output = []
for character in name:
number = ord(character) - 96
output.append(number)
print(output)
main()
</code></pre>
<p>This is what I have so far, but I need to make this program run through each of the letters in the name and display the output like this:</p>
<pre><code>Enter a name (eg, Zelle): Zelle
Letter Z value is 26
Letter e value is 5
Letter l value is 12
Letter l value is 12
Letter e value is 5
The numeric value of the name 'Zelle' is 60
</code></pre>
<p>And honestly I'm not too sure how to do that</p>
| 0 | 2016-10-06T21:42:42Z | 39,906,292 | <pre><code>def main():
name = input("Enter your name, eg Zelle: ")
name = name.lower()
output = []
total = 0
for character in name:
number = ord(character) - 96
total += number
print("Letter {0} value is {1}".format(character, number))
print("The numeric value of the name {0} is {1}".format(name, total))
main()
</code></pre>
<p>This is what I have now and it works perfectly thank you everyone.</p>
| 0 | 2016-10-06T22:02:00Z | [
"python"
]
|
Python help displaying strings | 39,906,046 | <pre><code>def main():
name = input("Enter your name, eg Zelle: ")
name = name.lower()
output = []
for character in name:
number = ord(character) - 96
output.append(number)
print(output)
main()
</code></pre>
<p>This is what I have so far, but I need to make this program run through each of the letters in the name and display the output like this:</p>
<pre><code>Enter a name (eg, Zelle): Zelle
Letter Z value is 26
Letter e value is 5
Letter l value is 12
Letter l value is 12
Letter e value is 5
The numeric value of the name 'Zelle' is 60
</code></pre>
<p>And honestly I'm not too sure how to do that</p>
| 0 | 2016-10-06T21:42:42Z | 39,906,352 | <pre><code>def main():
    name = input("Enter your name, eg Zelle:").lower()
    char_values = [ord(char) - 96 for char in name]  # 'a' -> 1, ..., 'z' -> 26
    total = sum(char_values)
    for char, value in zip(name, char_values):
        print("Letter {} value is {}".format(char, value))
    print("The numeric value of the name {} is {}".format(name, total))

main()
</code></pre>
| 0 | 2016-10-06T22:06:50Z | [
"python"
]
|
Pattern matching python - return multiple items from input in one go | 39,906,048 | <p>Input: </p>
<pre><code>A->(B, 1), (C, 2), (AKSDFSDF, 1231231) ...
</code></pre>
<p>Expected output:</p>
<pre><code>[('A', 1, 2, 1231231)]
</code></pre>
<p>Cannot seem to get it to work. My code:</p>
<pre><code>import re
pattern = r"([a-zA-z]+)->(.*)"
r = re.compile(pattern)
print r.findall("A->(B, 1), (C, 2), (AKSDFSDF, 1231231)")
>>> [('A', '(B, 1), (C, 2), (AKSDFSDF, 1231231)')]
</code></pre>
<p>That's close enough, but surely it's possible to extract exactly what I want?</p>
<p>I would have though this could work, but it doesnt:</p>
<pre><code>pattern = r"([a-zA-z]+)->([\([a-zA-Z]+,([0-9]+)\)]*)"
</code></pre>
<p>That throws empty output (ie. <code>[]</code>), while this:</p>
<pre><code>pattern = r"([a-zA-z]+)->((\([a-zA-Z]+,([0-9]+)\))*)"
>>> [('A', '', '', '')]
</code></pre>
<p>Any idea?</p>
| 0 | 2016-10-06T21:42:59Z | 39,906,102 | <p>You can use a <em>positive lookahead assertion</em> to pick words starting with a word boundary <code>\b</code> and followed by <code>-</code> or <code>)</code>:</p>
<pre><code>import re
s = 'A->(B, 1), (C, 2), (AKSDFSDF, 1231231)'
pattern = re.compile(r'\b\w+(?=-|\))')
print pattern.findall(s)
#['A', '1', '2', '1231231']
</code></pre>
<p>Try it out: <a href="https://repl.it/DqSe/0" rel="nofollow">https://repl.it/DqSe/0</a></p>
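<p>If you also want the output shaped exactly like <code>[('A', 1, 2, 1231231)]</code>, here is a sketch that captures the head and the numbers separately:</p>

```python
import re

s = "A->(B, 1), (C, 2), (AKSDFSDF, 1231231)"

# The part before "->", then every "(word, number)" pair.
head = re.match(r"([A-Za-z]+)->", s).group(1)
numbers = [int(n) for _, n in re.findall(r"\((\w+),\s*(\d+)\)", s)]

result = [(head, *numbers)]
print(result)  # [('A', 1, 2, 1231231)]
```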
| 2 | 2016-10-06T21:47:21Z | [
"python",
"regex",
"pattern-matching"
]
|
tweepy.Cursor returns the same users over and over | 39,906,052 | <p>I am trying to get all the search results in a list.</p>
<p>Here is the code:</p>
<pre><code>cursor = tweepy.Cursor(api.search_users,"foo")
count = 0
for u in cursor.items(30):
count += 1
print count, u.id_str
print count
</code></pre>
<p>Alas, item 1 is the same as 21, 2 is the same as 22 &c:</p>
<pre><code>1 19081001
2 313527365
3 89528870
4 682463
5 2607583036
6 219840627
7 725883651280363520
8 371980318
9 860066587
10 4794574949
11 88633646
12 137482245
13 1447284511
14 15369494
15 171657474
16 442113112
17 6130932
18 2587755194
19 191338693
20 528804165
21 19081001
22 313527365
23 89528870
24 682463
25 2607583036
26 219840627
27 725883651280363520
28 371980318
29 860066587
30 4794574949
30
</code></pre>
<p>How do I get <em>all</em> the search results?</p>
<p>as requested:</p>
<pre><code>dir(cursor)
['__class__',
'__delattr__',
'__dict__',
'__doc__',
'__format__',
'__getattribute__',
'__hash__',
'__init__',
'__module__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'items',
'iterator',
'pages']
</code></pre>
| 1 | 2016-10-06T21:43:26Z | 40,032,109 | <p>As per the <a href="http://docs.tweepy.org/en/v3.5.0/api.html#API.search_users" rel="nofollow">tweepy documentation</a>, you should not pass a number greater than 20. You're passing 30; that's why you get repeated IDs after the first 20 entries.</p>
<p>I hacked up a bit and came up with the below code which will get all the users which match the search query (here <code>foo</code>).</p>
<pre><code>def get_users():
try:
count = 0
all_users = []
for page in tweepy.Cursor(api.search_users,"foo").pages():
#page[0] has the UserObj
id_str = page[0].id_str
scr_name = page[0].screen_name
print(count, id_str, scr_name)
count += 1
all_users.append((id_str, scr_name))
except tweepy.error.TweepError as twerr:
print(" sleep because of error.. ")
time.sleep(10)
</code></pre>
<p>Of course, this is a very crude implementation. Please write a proper sleeper function to not exceed the twitter rate limit.</p>
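<p>As a sketch of that "sleeper" idea (pure Python, nothing tweepy-specific; recent tweepy versions also accept <code>wait_on_rate_limit=True</code> on <code>tweepy.API</code>, which is usually simpler):</p>

```python
import time

def with_backoff(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure wait base_delay, 2*base_delay, ... then retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Toy demonstration: fails twice, succeeds on the third try.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # skip real sleeping here
print(result)  # ok
```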
| 1 | 2016-10-13T22:32:45Z | [
"python",
"twitter",
"tweepy"
]
|
list of list but specific representations in python | 39,906,109 | <p>I don't have much experience with lists in Python. So basically, I want to create N lists inside one huge list, reading from a file that looks like this:</p>
<pre><code>Fail = h yes
Sucess = h no
</code></pre>
<p>This is an example of 2 lines in the file, and the file contains N line, thus I want to create a list for each line such that it should be:</p>
<pre><code>List of list = [
["Fail", ("h", "yes")],
["Sucess", ("h", "no")]],]
</code></pre>
<p>And in case a line of:</p>
<pre><code>Fail = h
</code></pre>
<p>with only one output, its list should be like:</p>
<pre><code>["Fail", ("h", )),]
</code></pre>
<p>My code doesn't work for now:</p>
<pre><code>with open('file.txt') as f:
for l in f:
l_rule = l.split()
# print(l_rule)
G_list.append(l_rule[0])
length= len(l_rule)
if length == 3:
m = "(" + l_rule[2]+ "," +")"
G_list.append(m)
else:
m = "(" + l_rule[2]+","+l_rule[3] +")"
G_list.append(m)
print(G_list)
listoflist.append(G_list)
print(listoflist)
</code></pre>
<p>And it doesn't return what I need as I explained above. I appreciate the help.</p>
| 0 | 2016-10-06T21:47:30Z | 39,906,229 | <p>You can't use string concatenation to create a tuple - it only produces a string that <em>looks</em> like <code>("h", "yes")</code></p>
<p>You need</p>
<pre><code>m = ( l_rule[2], )
m = ( l_rule[2], l_rule[3] )
</code></pre>
<hr>
<p><strong>EDIT:</strong> I found you have to clear <code>G_list</code> before you add new elements</p>
<pre><code> G_list = []
G_list.append(l_rule[0])
</code></pre>
<hr>
<p><strong>EDIT:</strong> code with modifications</p>
<pre><code>with open('file.txt') as f:
for l in f:
l_rule = l.split()
G_list = [] # clear list
G_list.append(l_rule[0])
length= len(l_rule)
if length == 3:
m = (l_rule[2],) # create tuple instead of string
G_list.append(m)
else:
m = (l_rule[2], l_rule[3]) # create tuple instead of string
G_list.append(m)
print(G_list)
listoflist.append(G_list)
print(listoflist)
</code></pre>
| 1 | 2016-10-06T21:57:27Z | [
"python",
"list"
]
|
list of list but specific representations in python | 39,906,109 | <p>I don't have much experience with lists in Python. So basically, I want to create N lists inside one huge list, reading from a file that looks like this:</p>
<pre><code>Fail = h yes
Sucess = h no
</code></pre>
<p>This is an example of 2 lines in the file, and the file contains N line, thus I want to create a list for each line such that it should be:</p>
<pre><code>List of list = [
["Fail", ("h", "yes")],
["Sucess", ("h", "no")]],]
</code></pre>
<p>And in case a line of:</p>
<pre><code>Fail = h
</code></pre>
<p>with only one output, its list should be like:</p>
<pre><code>["Fail", ("h", )),]
</code></pre>
<p>My code doesn't work for now:</p>
<pre><code>with open('file.txt') as f:
for l in f:
l_rule = l.split()
# print(l_rule)
G_list.append(l_rule[0])
length= len(l_rule)
if length == 3:
m = "(" + l_rule[2]+ "," +")"
G_list.append(m)
else:
m = "(" + l_rule[2]+","+l_rule[3] +")"
G_list.append(m)
print(G_list)
listoflist.append(G_list)
print(listoflist)
</code></pre>
<p>And it doesn't return what I need as I explained above. I appreciate the help.</p>
| 0 | 2016-10-06T21:47:30Z | 39,906,232 | <p>Split each line and unpack it. Then you can check whether the necessary last element was found, and create the appropriate tuple. Append the result to your list of results.</p>
<pre><code>with open('file.txt') as f:
result = []
for line in f:
a,b,c,*d = line.split()
if d:
tup = c,d[0]
else:
tup = c,
result.append([a, tup])
</code></pre>
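<p>For instance, running the same logic against the sample lines (with <code>io.StringIO</code> standing in for the file):</p>

```python
import io

fake_file = io.StringIO("Fail = h yes\nSucess = h no\nFail = h\n")

result = []
for line in fake_file:
    a, b, c, *d = line.split()   # b is the "=", d is empty or one word
    tup = (c, d[0]) if d else (c,)
    result.append([a, tup])

print(result)
# [['Fail', ('h', 'yes')], ['Sucess', ('h', 'no')], ['Fail', ('h',)]]
```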
| 2 | 2016-10-06T21:57:40Z | [
"python",
"list"
]
|
list of list but specific representations in python | 39,906,109 | <p>I don't have much experience with lists in Python. So basically, I want to create N lists inside one huge list, reading from a file that looks like this:</p>
<pre><code>Fail = h yes
Sucess = h no
</code></pre>
<p>This is an example of 2 lines in the file, and the file contains N line, thus I want to create a list for each line such that it should be:</p>
<pre><code>List of list = [
["Fail", ("h", "yes")],
["Sucess", ("h", "no")]],]
</code></pre>
<p>And in case a line of:</p>
<pre><code>Fail = h
</code></pre>
<p>with only one output, its list should be like:</p>
<pre><code>["Fail", ("h", )),]
</code></pre>
<p>My code doesn't work for now:</p>
<pre><code>with open('file.txt') as f:
for l in f:
l_rule = l.split()
# print(l_rule)
G_list.append(l_rule[0])
length= len(l_rule)
if length == 3:
m = "(" + l_rule[2]+ "," +")"
G_list.append(m)
else:
m = "(" + l_rule[2]+","+l_rule[3] +")"
G_list.append(m)
print(G_list)
listoflist.append(G_list)
print(listoflist)
</code></pre>
<p>And it doesn't return what I need as I explained above. I appreciate the help.</p>
| 0 | 2016-10-06T21:47:30Z | 39,906,244 | <p>Try this:</p>
<pre><code>def format_data(line):
lst = line.split()
return lst[0], tuple(lst[2:])
with open('file.txt') as f:
for l in f:
G_list.append(format_data(l))
listoflist.append(G_list)
</code></pre>
<p>or you can do a one_liner inside your <code>with</code> statement.</p>
<pre><code>with open('file.txt') as f:
listoflist = [format_data(l) for l in f]
</code></pre>
| 0 | 2016-10-06T21:58:33Z | [
"python",
"list"
]
|
list of list but specific representations in python | 39,906,109 | <p>I don't have much experience with lists in Python. So basically, I want to create N lists inside one huge list, reading from a file that looks like this:</p>
<pre><code>Fail = h yes
Sucess = h no
</code></pre>
<p>This is an example of 2 lines in the file, and the file contains N line, thus I want to create a list for each line such that it should be:</p>
<pre><code>List of list = [
["Fail", ("h", "yes")],
["Sucess", ("h", "no")]],]
</code></pre>
<p>And in case a line of:</p>
<pre><code>Fail = h
</code></pre>
<p>with only one output, its list should be like:</p>
<pre><code>["Fail", ("h", )),]
</code></pre>
<p>My code doesn't work for now:</p>
<pre><code>with open('file.txt') as f:
for l in f:
l_rule = l.split()
# print(l_rule)
G_list.append(l_rule[0])
length= len(l_rule)
if length == 3:
m = "(" + l_rule[2]+ "," +")"
G_list.append(m)
else:
m = "(" + l_rule[2]+","+l_rule[3] +")"
G_list.append(m)
print(G_list)
listoflist.append(G_list)
print(listoflist)
</code></pre>
<p>And it doesn't return what I need as I explained above. I appreciate the help.</p>
| 0 | 2016-10-06T21:47:30Z | 39,906,299 | <p>One straightforward way is to do two passes of splitting, first by <code>=</code> and then by whitespace (default for <code>split</code>). Use <code>strip</code> as necessary:</p>
<pre><code>>>> import io
>>> s = """Fail = h yes
... Sucess = h no
... Fail = h """
>>>
>>> L = [line.strip().split('=') for line in io.StringIO(s)]
>>> [[sub[0].strip(),tuple(sub[1].strip().split())] for sub in L]
[['Fail', ('h', 'yes')], ['Sucess', ('h', 'no')], ['Fail', ('h',)]]
>>>
</code></pre>
<p>If memory is an issue, you can adapt the above to process the lines lazily.</p>
| 0 | 2016-10-06T22:02:24Z | [
"python",
"list"
]
|
why does my openpyxl load only pull one cell in worksheet | 39,906,162 | <p>I have this openpyxl script intended to read rows in an XLSX document.</p>
<p>But for some reason it is only reading the value in cell A1, then finishing.
What am I missing?</p>
<p>Thanks</p>
<pre><code>from openpyxl import load_workbook

Dutch = load_workbook(filename='languages/READY-Language Translation-- August (SOS) Dutch_dut_Compared_Results.xlsx', read_only=True)
Dws = Dutch.get_sheet_by_name(name='Ouput')
for row in Dws.iter_rows():
for cell in row:
print(cell.value)
</code></pre>
| 2 | 2016-10-06T21:51:29Z | 39,911,974 | <p>Read-only mode depends to a large extent on the metadata provided by the file you're reading, particularly the "dimensions". You can check this by seeing what <code>ws.max_row</code> and <code>ws.max_column</code> are. If these are different from what you expect, then the metadata in the file is incorrect. You can force openpyxl to recalculate these values with <code>ws.calculate_dimensions()</code>, but this is inadvisable as it forces openpyxl to read the whole sheet. Better is simply to reset the underlying values so that openpyxl knows the worksheet is unsized.</p>
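<p>A rough sketch of the idea (building a throwaway workbook in memory instead of the OP's real file; <code>reset_dimensions()</code> is available on read-only worksheets in recent openpyxl versions):</p>

```python
import io
from openpyxl import Workbook, load_workbook

# Build a tiny 3x2 sheet in memory as a stand-in for the real file.
wb = Workbook()
ws = wb.active
ws.title = "Ouput"
for r in range(1, 4):
    for c in range(1, 3):
        ws.cell(row=r, column=c, value="r{}c{}".format(r, c))
buf = io.BytesIO()
wb.save(buf)
buf.seek(0)

ro = load_workbook(buf, read_only=True)
rws = ro["Ouput"]
print(rws.max_row, rws.max_column)  # what the file's metadata claims

rws.reset_dimensions()  # tell openpyxl the sheet is unsized
cells = sum(1 for row in rws.iter_rows() for cell in row)
print(cells)  # 3 rows x 2 columns
```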
| 0 | 2016-10-07T07:40:49Z | [
"python",
"excel",
"openpyxl"
]
|
Inputs on how to achieve REST based interaction between Java and Python? | 39,906,167 | <p>I have a <strong>Java</strong> process whose REST API is called from my program's UI. When I receive the API call, I end up calling (non-REST) <strong>Python script(s)</strong> which do a bunch of work and return results, which are then sent back as the API response.</p>
<ul>
<li>I want to convert this interaction (UI API -> Java -> Python scripts) into an end-to-end REST one, so that in time it becomes immaterial whether I use Python or another language.</li>
<li>Any inputs on the best way of making the call end-to-end REST based?</li>
</ul>
| 0 | 2016-10-06T21:51:44Z | 39,906,371 | <p>Furthermore, in the future you might want to split them onto separate machines and communicate over the network.</p>
<p>You can use <strong>HTTP</strong> requests for this.</p>
<p>Make a contract in Java for the output you will provide to your Python script (or any other language you will use) and send that output as JSON to your script. That way you can easily change the language, as long as you keep sending the same JSON.</p>
| 0 | 2016-10-06T22:08:42Z | [
"java",
"python",
"rest",
"api"
]
|
Inputs on how to achieve REST based interaction between Java and Python? | 39,906,167 | <p>I have a <strong>Java</strong> process which interacts with its REST API called from my program's UI. When I receive the API call, I end up calling the (non-REST based) <strong>Python script(s)</strong> which do a bunch of work and return me back the results which are returned back as API response.
- I wanted to convert this interaction of UI API -> JAVA -> calling Python scripts into an end-to-end REST one, so that over time it becomes immaterial whether I am using Python or some other language.
- Any inputs on what's the best way of making the call end-to-end REST based?</p>
| 0 | 2016-10-06T21:51:44Z | 39,907,522 | <p>You can use Flask as a wrapper for converting your Python scripts into microservices. Then, call your Python scripts from Java by passing them a JSON string in a REST call.</p>
<p>For each python script, listen on a port between 50,000-65,000 and then send your Java-->Python REST calls to <code>http://127.0.0.1:50000</code>, <code>http://127.0.0.1:50001</code>, etc.</p>
<p>There really isn't a limit to how far you can break down your code, each function is a potential microservice. Try to encapsulate complexity in classes, create simple interfaces for these classes, and then expose these interfaces via REST. As a Java programmer I'm sure you will have no problem with this! Google "flask microservices example" for code samples to get yourself started.</p>
<p>It gets more complicated if your Java app doesn't block between calling each Python script, though. In that case you are using async calls or launching a new thread for each request.</p>
<p>(PS: There is a good article on microservices architecture <a href="https://eng.uber.com/building-tincup/" rel="nofollow">here</a>.) </p>
| 0 | 2016-10-07T00:18:01Z | [
"java",
"python",
"rest",
"api"
]
|
How to run django application in background | 39,906,305 | <p>I have a Django application that sends emails to customers. I want this application to run in the background at regular intervals, like a job/quote process. What would be the best way to do this with Python/Django?</p>
| 0 | 2016-10-06T22:03:00Z | 39,906,403 | <blockquote>
<p>How to run django application in background?</p>
</blockquote>
<p>This question makes no sense. Django is a web based framework. What do you mean by running a web application in background?</p>
<p>I think you want to ask: <em>How to run periodic background tasks in <code>Django</code> application?</em></p>
<p>For that purpose you may use <a href="http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html" rel="nofollow"><code>Celery</code></a>. Celery, with the help of a message-queueing service, allows you to perform tasks in the background. It also supports running custom tasks at scheduled intervals. Check: <a href="http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html" rel="nofollow">Periodic tasks in Celery</a></p>
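<p>A hedged sketch of what that can look like (this assumes Celery 4+ with a Redis broker; the module name, broker URL, task name, and schedule below are placeholders, not details from the question):</p>

```python
# tasks.py -- start with: celery -A tasks worker --beat
from celery import Celery

app = Celery('emails', broker='redis://localhost:6379/0')

@app.task
def send_customer_emails():
    # Call into the Django email-sending logic here.
    pass

# Ask celery beat to queue the task every 15 minutes.
app.conf.beat_schedule = {
    'send-emails-every-15-minutes': {
        'task': 'tasks.send_customer_emails',
        'schedule': 15 * 60.0,  # seconds
    },
}
```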
| 1 | 2016-10-06T22:11:39Z | [
"python",
"django"
]
|
Multithreaded file read python | 39,906,375 | <pre><code>import threading
def read_file():
f = open('text.txt')
for line in f:
print line.strip() ,' : ', threading.current_thread().getName()
if __name__ == '__main__':
threads = []
for i in range(15):
t = threading.Thread(target=read_file)
threads.append(t)
t.start()
</code></pre>
<p>Question: Will each thread read each line only once from the file above or there are chances that a given thread can end up reading a line twice?</p>
<p>My understanding was that a thread started later will overwrite the file handle for the thread started earlier causing the earlier thread to end up reading few lines twice or thrice or more times. </p>
<p>When I ran this code the outcome was different from what I expected to happen.</p>
<p>Any explanations are welcome.</p>
| 0 | 2016-10-06T22:09:17Z | 39,943,677 | <p>Each thread runs your function independently; each copy of the function opens the file as a local, which is not shared. Each Python file object tracks reading state completely independently; each has their own OS-level file handle here.</p>
<p>So no, if <em>nothing else is altering the file contents</em>, each thread will see each line just once, just the same as if separate <em>processes</em> tried to read the file.</p>
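<p>A quick self-contained demonstration of that independence (no threads needed, since the point is simply that each <code>open()</code> call returns a file object with its own read position):</p>

```python
import os
import tempfile

# Build a small sample file in a throwaway directory.
path = os.path.join(tempfile.mkdtemp(), "text.txt")
with open(path, "w") as f:
    f.write("line1\nline2\n")

# Two independent file objects over the same file, just like two
# threads each calling open() inside read_file().
a = open(path)
b = open(path)

first_from_a = a.readline().strip()
second_from_a = a.readline().strip()   # a's position advanced
first_from_b = b.readline().strip()    # b still starts at the beginning

a.close()
b.close()
```

<p>Neither handle "overwrites" the other; each one tracks its own offset into the file.</p>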
| 7 | 2016-10-09T12:34:03Z | [
"python",
"multithreading",
"python-multithreading"
]
|
Python function not running, interpreter not giving any errors | 39,906,388 | <p>I'm very new to Python and having a problem with a program I'm doing for a class. main() and create_file work, but when it gets to read_file, the interpreter just sits there. The program is running but nothing is happening.</p>
<p>The answer is probably something very simple, but I just can't see it. Thanks in advance for any help.</p>
<p>I'm using IDLE (Python and IDLE v. 3.5.2)</p>
<p>Here's the code:</p>
<pre><code>import random
FILENAME = "randomNumbers.txt"
def create_file(userNum):
#Create and open the randomNumbers.txt file
randomOutput = open(FILENAME, 'w')
#Generate random numbers and write them to the file
for num in range(userNum):
num = random.randint(1, 500)
randomOutput.write(str(num) + '\n')
#Confirm data written
print("Data written to file.")
#Close the file
randomOutput.close()
def read_file():
#Open the random number file
randomInput = open(FILENAME, 'r')
#Declare variables
entry = randomInput.readline()
count = 0
total = 0
#Check for eof, read in data, and add it
while entry != '':
num = int(entry)
total += num
count += 1
#Print the total and the number of random numbers
print("The total is:", total)
print("The number of random numbers generated and added is:", count)
#Close the file
randomInput.close()
def main():
#Get user data
numGenerate = int(input("Enter the number of random numbers to generate: "))
#Call create_file function
create_file(numGenerate)
#Call read_file function
read_file()
main()
</code></pre>
| 2 | 2016-10-06T22:10:37Z | 39,906,453 | <p>You have an infinite <code>while</code> loop in the function, since <code>entry</code> never changes during the loop.</p>
<p>The Pythonic way to process all the lines in a file is like this:</p>
<pre><code>for entry in randomInput:
num = int(entry)
total += num
count += 1
</code></pre>
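<p>Put together as a self-contained sketch (a small sample file is written first so the snippet runs on its own):</p>

```python
import os
import tempfile

# Stand-in for randomNumbers.txt.
path = os.path.join(tempfile.mkdtemp(), "randomNumbers.txt")
with open(path, "w") as out:
    for n in (10, 20, 30):
        out.write(str(n) + "\n")

total = 0
count = 0
with open(path) as random_input:
    for entry in random_input:  # visits every line exactly once, stops at EOF
        total += int(entry)
        count += 1
```

<p>The <code>for</code> loop advances through the file itself, so there is no separate <code>readline()</code> call whose result you could forget to refresh.</p>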
| 6 | 2016-10-06T22:16:17Z | [
"python",
"function"
]
|
How to put an argument of a function inside a raw string | 39,906,492 | <p>I want to create a function that will delete a character in a string of text.
I'll pass the string of text and the character as arguments of the function.
The function works fine but I don't know how to do this correctly if I want to treat it as a raw string.</p>
<p>For example:</p>
<pre><code>import re
def my_function(text, ch):
Regex=re.compile(r'(ch)') # <-- Wrong, obviously this will just search for the 'ch' characters
print(Regex.sub('',r'text')) # <-- Wrong too, same problem as before.
text= 'Hello there'
ch= 'h'
my_function(text, ch)
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-10-06T22:20:32Z | 39,906,518 | <p>How about changing:</p>
<pre><code>Regex=re.compile(r'(ch)')
print(Regex.sub('',r'text'))
</code></pre>
<p>to:</p>
<pre><code>Regex=re.compile(r'({})'.format(ch))
print(Regex.sub('',r'{}'.format(text)))
</code></pre>
<p>However, simpler way to achieve this is using <code>str.replace()</code> as:</p>
<pre><code>text= 'Hello there'
ch= 'h'
text = text.replace(ch, '')
# value of text: 'Hello tere'
</code></pre>
| 2 | 2016-10-06T22:22:16Z | [
"python",
"regex",
"string",
"variables"
]
|
How to put an argument of a function inside a raw string | 39,906,492 | <p>I want to create a function that will delete a character in a string of text.
I'll pass the string of text and the character as arguments of the function.
The function works fine but I don't know how to do this correctly if I want to treat it as a raw string.</p>
<p>For example:</p>
<pre><code>import re
def my_function(text, ch):
Regex=re.compile(r'(ch)') # <-- Wrong, obviously this will just search for the 'ch' characters
print(Regex.sub('',r'text')) # <-- Wrong too, same problem as before.
text= 'Hello there'
ch= 'h'
my_function(text, ch)
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-10-06T22:20:32Z | 39,906,534 | <pre><code>def my_function(text, ch):
    return text.replace(ch, "")
</code></pre>
<p>This will replace all occurrences of <strong>ch</strong> with an empty string. Note that <code>str.replace</code> returns a new string (strings are immutable), so make sure the function returns the result. No need to invoke the overhead of regular expressions for this.</p>
| 2 | 2016-10-06T22:23:49Z | [
"python",
"regex",
"string",
"variables"
]
|
how do I insert data to an arbitrary location in a binary file without overwriting existing file data? | 39,906,620 | <p>I've tried to do this using the 'r+b', 'w+b', and 'a+b' modes for <code>open()</code>. I'm using <code>seek()</code> and <code>write()</code> to move to and write to an arbitrary location in the file, but all I can get it to do is either 1) write new info at the end of the file or 2) overwrite existing data in the file.
Does anyone know of some other way to do this or where I'm going wrong here?</p>
| 0 | 2016-10-06T22:31:47Z | 39,906,690 | <p>What you're doing wrong is assuming that it can be done. :-)</p>
<p>You don't get to insert and shove the existing data over; it's already in that position on disk, and overwrite is all you get.</p>
<p>What you need to do is to mark the insert position, read the remainder of the file, write your insertion, and then write that remainder <em>after</em> the insertion.</p>
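<p>A minimal sketch of that rewrite approach (it holds the whole tail in memory, which is fine for modestly sized files; shown on an in-memory buffer, but a real file opened in <code>'r+b'</code> mode behaves the same way):</p>

```python
import io

def insert_bytes(f, pos, data):
    """Insert data at pos by rewriting everything after the insert point."""
    f.seek(pos)
    tail = f.read()   # remember the remainder of the file
    f.seek(pos)
    f.write(data)     # write the insertion...
    f.write(tail)     # ...then put the old remainder back after it

buf = io.BytesIO(b"HelloWorld")
insert_bytes(buf, 5, b", ")
result = buf.getvalue()  # b"Hello, World"
```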
| 1 | 2016-10-06T22:38:00Z | [
"python",
"file",
"binaryfiles",
"file-writing"
]
|
How to avoid .pyc files using selenium webdriver/python while running test suites? | 39,906,630 | <p>There's no relevant answer to this question. When I run my test cases inside a test suite using selenium webdriver with python the directory gets trashed with .pyc files. They do not appear if I run test cases separately, only when I run them inside one test suite. How to avoid them?</p>
<pre><code>import unittest
from FacebookLogin import FacebookLogin
from SignUp import SignUp
from SignIn import SignIn
class TestSuite(unittest.TestSuite):
def suite():
suite = unittest.TestSuite()
suite.addTest(FacebookLogin("test_FacebookLogin"))
suite.addTest(SignUp("test_SignUp"))
suite.addTest(SignIn("test_SignIn"))
return suite
if __name__ == "__main__":
unittest.main()
</code></pre>
| 0 | 2016-10-06T22:32:20Z | 39,906,839 | <p><code>pyc</code> files are created any time you <code>import</code> a module, but not when you run a module directly as a script. That's why you're seeing them when you import the modules with the test code but don't see them created when you run the modules separately.</p>
<p>If you're invoking Python from the command line, you can suppress the creation of <code>pyc</code> files by using the <code>-B</code> argument. You can also set the environment variable <code>PYTHONDONTWRITEBYTECODE</code> with the same effect. From Python code, the equivalent is setting <code>sys.dont_write_bytecode = True</code> before the modules in question are imported.</p>
<p>In Python 3.2 and later, the <code>pyc</code> files get put into a separate <code>__pycache__</code> folder, which might be visually nicer. It also allows multiple <code>pyc</code> files to exist simultaneously for different interpreter versions that have incompatible bytecode (a "tag" is added to the file name indicating which interpreter uses each file).</p>
<p>But even in earlier versions of Python, I think that saying the <code>pyc</code> files are "trashing" the directory is a bit hyperbolic. Usually you can exempt the created files from source control (e.g. by listing <code>.pyc</code> in a <code>.gitignore</code> or equivalent file), and otherwise ignore them. Having <code>pyc</code> files around speeds up repeated imports of the file, since the interpreter doesn't need to recompile the source to bytecode if the <code>pyc</code> file already has the bytecode available.</p>
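<p>Setting <code>sys.dont_write_bytecode</code> before a module is first imported has the same effect as <code>-B</code>; a quick sketch:</p>

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # suppress .pyc files for imports from here on

# Create a throwaway module to import.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "sample_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, tmp)
mod = importlib.import_module("sample_mod")

# No __pycache__ directory was created for the import.
pycache_written = os.path.isdir(os.path.join(tmp, "__pycache__"))
```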
| 1 | 2016-10-06T22:53:56Z | [
"python",
"selenium"
]
|
How to avoid .pyc files using selenium webdriver/python while running test suites? | 39,906,630 | <p>There's no relevant answer to this question. When I run my test cases inside a test suite using selenium webdriver with python the directory gets trashed with .pyc files. They do not appear if I run test cases separately, only when I run them inside one test suite. How to avoid them?</p>
<pre><code>import unittest
from FacebookLogin import FacebookLogin
from SignUp import SignUp
from SignIn import SignIn
class TestSuite(unittest.TestSuite):
def suite():
suite = unittest.TestSuite()
suite.addTest(FacebookLogin("test_FacebookLogin"))
suite.addTest(SignUp("test_SignUp"))
suite.addTest(SignIn("test_SignIn"))
return suite
if __name__ == "__main__":
unittest.main()
</code></pre>
| 0 | 2016-10-06T22:32:20Z | 39,906,859 | <p>You can supply the -B option to the interpreter stop the files from being generated. See: <a href="http://stackoverflow.com/questions/154443/how-to-avoid-pyc-files">How to avoid pyc files</a></p>
<p>If you really wanted to, you could also add to your test script a cleanup after running the test. Not that I'm recommending that.</p>
<pre><code>import os
pyc_files = [fn for fn in os.listdir(os.getcwd()) if fn.endswith(".pyc")]
for filename in pyc_files:
os.remove(filename)
</code></pre>
| 1 | 2016-10-06T22:56:36Z | [
"python",
"selenium"
]
|
Faster way to perform bulk insert, while avoiding duplicates, with SQLAlchemy | 39,906,704 | <p>I'm using the following method to perform a bulk insert, and to optionally avoid inserting duplicates, with SQLAlchemy:</p>
<pre><code>def bulk_insert_users(self, users, allow_duplicates = False):
if not allow_duplicates:
users_new = []
for user in users:
if not self.SQL_IO.db.query(User_DB.id).filter_by(user_id = user.user_id).scalar():
users_new.append(user)
users = users_new
self.SQL_IO.db.bulk_save_objects(users)
self.SQL_IO.db.commit()
</code></pre>
<p>Can the above functionality be implemented such that the function is faster?</p>
| 0 | 2016-10-06T22:39:58Z | 39,906,732 | <p>You can load all user ids first, put them into a set and then use <code>user.user_id in existing_user_ids</code> to determine whether to add a new user or not instead of sending a SELECT query every time. Even with ten thousands of users this will be quite fast, especially compared to contacting the database for each user.</p>
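<p>The filtering step itself is plain Python once the ids are in a set (placeholder data below; the set would come from a single up-front query, something like <code>{uid for (uid,) in self.SQL_IO.db.query(User_DB.user_id)}</code>):</p>

```python
# Stand-ins for the ORM objects in the question.
existing_ids = {1, 2, 3}  # fetched once, up front

incoming = [
    {"user_id": 2, "name": "b"},  # already present, should be skipped
    {"user_id": 4, "name": "d"},
    {"user_id": 5, "name": "e"},
]

# Constant-time membership tests instead of one SELECT per user.
users_new = [u for u in incoming if u["user_id"] not in existing_ids]
```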
| 1 | 2016-10-06T22:42:22Z | [
"python",
"sqlalchemy"
]
|
Faster way to perform bulk insert, while avoiding duplicates, with SQLAlchemy | 39,906,704 | <p>I'm using the following method to perform a bulk insert, and to optionally avoid inserting duplicates, with SQLAlchemy:</p>
<pre><code>def bulk_insert_users(self, users, allow_duplicates = False):
if not allow_duplicates:
users_new = []
for user in users:
if not self.SQL_IO.db.query(User_DB.id).filter_by(user_id = user.user_id).scalar():
users_new.append(user)
users = users_new
self.SQL_IO.db.bulk_save_objects(users)
self.SQL_IO.db.commit()
</code></pre>
<p>Can the above functionality be implemented such that the function is faster?</p>
| 0 | 2016-10-06T22:39:58Z | 39,906,752 | <p>How many users do you have? You're querying for the users one at a time, every single iteration of that loop. You might have more luck querying for ALL user Ids, put them in a list, then check against that list.</p>
<pre><code>existing_users = #query for all user IDs
for user in new_users:
if user not in existing_users:
#do_stuff
</code></pre>
| 1 | 2016-10-06T22:43:59Z | [
"python",
"sqlalchemy"
]
|
zerorpc: how to convert string data from python to node | 39,906,825 | <p>I need to call a python script from <a href="https://nodejs.org/en/" rel="nofollow">nodejs</a> and get back the result. I found the <a href="http://www.zerorpc.io" rel="nofollow">zerorpc</a> library which seems a good fit. The python script returns an array of strings but in node I get objects of binary data.</p>
<p>This is the python zerorpc server:</p>
<pre><code># python zerorpc server
import zerorpc
class HelloRPC(object):
def test(self):
return ["A", "B", "C"]
server = zerorpc.Server(HelloRPC())
server.bind("tcp://0.0.0.0:4242")
server.run()
</code></pre>
<p>This is the node zerorpc client:</p>
<pre><code>// nodejs zerorpc client
var zerorpc = require("zerorpc")
var client = new zerorpc.Client();
client.connect("tcp://127.0.0.1:4242");
client.invoke("test", function(error, response, more) {
if (response) {
for (var i = 0; i < response.length; i++) {
console.log(typeof response[i], response[i])
}
}
});
</code></pre>
<p>Which gives this output:</p>
<pre><code>object <Buffer 41>
object <Buffer 42>
object <Buffer 43>
</code></pre>
<p>What is the best way to convert these objects into strings in nodejs?</p>
| 0 | 2016-10-06T22:52:24Z | 39,906,976 | <p>Node JS Buffer class has the toString method</p>
<pre><code>strings[i] = response[i].toString("utf8")
</code></pre>
<p>See the method:
<a href="https://nodejs.org/api/buffer.html#buffer_buf_tostring_encoding_start_end" rel="nofollow">https://nodejs.org/api/buffer.html#buffer_buf_tostring_encoding_start_end</a></p>
| 0 | 2016-10-06T23:08:42Z | [
"python",
"node.js",
"string",
"binary",
"zerorpc"
]
|
How to debug cython in an IDE | 39,906,830 | <p>I am trying to debug Cython code that wraps a C++ class, and the error I am hunting is somewhere in the C++ code.</p>
<p>It would be awfully convenient if I could somehow debug as if it were written in one language, i.e. if there's an error in the C++ part, it show me the source code line there, if the error is in the python part it does the same.</p>
<p>Right now I always have to try and replicate the python code using the class in C++ and right now I have an error that only occurs when running through python ... I hope somebody can help me out :)</p>
| 1 | 2016-10-06T22:53:05Z | 39,906,972 | <p>It's been a while for me and I forgot how I exactly did it, but when I was writing my own C/C++ library and interfaced it with swig into python, I was able to debug the C code with <a href="https://www.gnu.org/software/ddd/" rel="nofollow">DDD</a>. It was important to compile with debug options. It wasn't great, but it worked for me. I think you had to run <code>ddd python</code> and within the python terminal run my faulty C code. You would have to make sure all linked libraries including yours is loaded with the source code so that you could set breakpoints. </p>
| 2 | 2016-10-06T23:08:35Z | [
"python",
"c++",
"debugging",
"cython"
]
|
Problems Reversing a Doubly Linked List | 39,906,837 | <p>I have to reverse a doubly linked list (DLL) between two nodes. I've done this with singly linked lists (SLL), but I find it harder to do with DLLs; it might be because of having to do it between two particular nodes.
Here is the code for my DLLNode & DLL. When I test this code it seems to do nothing to my DLL. Any tips on what I'm doing wrong??</p>
<p>EDIT: So I'm inputting a linked list 'a','b','c','d','e','f' and call twist('b','e'): This should result in the linked list 'a' 'b' 'e' 'd' 'c' 'f'</p>
<pre><code>class DoublyLinkedListNode:
def __init__(self, item, prevnode = None, nextnode = None):
self._data = item
self.setnext(nextnode)
self.setprev(prevnode)
def data(self):
return self._data
def next(self):
return self._next
def prev(self):
return self._prev
def setprev(self, prevnode):
self._prev = prevnode
def setnext(self, nextnode):
self._next = nextnode
class DoublyLinkedList:
def __init__(self):
self._head = None
self._tail = None
self._length = 0
def _newnode(self, item, nextnode = None, prevnode = None):
return DoublyLinkedListNode(item, nextnode, prevnode)
def addfirst(self, item):
if self.isempty():
return self._addtoempty(item)
node = self._newnode(item, None, self._head)
self._head.setprev(node)
self._head = node
self._length += 1
def addlast(self, item):
if self.isempty():
return self._addtoempty(item)
node = self._newnode(item, self._tail, None)
self._tail.setnext(node)
self._tail = node
self._length += 1
def _addtoempty(self, item):
node = self._newnode(item, None, None)
self._head = self._tail = node
self._length = 1
def removefirst(self):
if len(self) <= 1:
return self._removelastitem()
data = self._head.data()
self._head = self._head.next()
self._head.setprev(None)
self._length -= 1
return data
def removelast(self):
if len(self) <= 1:
return self._removelastitem()
data = self._tail.data()
self._tail = self._tail.prev()
self._tail.setnext(None)
self._length -= 1
return data
def _removelastitem(self):
if self.isempty():
return None # Should probably raise an error.
data = self._head.data()
self._head = self._tail = None
self._length = 0
return data
def twist(self, endpt1, endpt2):
current = self._head
while current != None:
if current.data() == endpt1:
current.next().setnext(endpt2.next())
endpt2.setnext(current.next())
current.setnext(endpt2)
else:
current = current.next()
def isempty(self):
return len(self) == 0
def _nodes(self):
node = self._head
while node is not None:
yield node
node = node.next()
def __iter__(self):
for node in self._nodes():
yield node.data()
def __len__(self):
return self._length
def __str__(self):
items = [str(data) for data in self]
return ", ".join(items)
</code></pre>
<p>Here is the test I'm running:</p>
<pre><code> def testtwist1(self):
n = [0,1,2,3,4,5]
L = DoublyLinkedList()
for i in n:
L.addlast(i)
L.twist(2,5)
print(L) # returns the list 0,1,2,3,4,5
</code></pre>
| 0 | 2016-10-06T22:53:32Z | 39,907,078 | <p>I don't see how this executes at all. You've apparently set up a DLL, and then called <strong>DLL.twist('b', 'e')</strong>. Thus, endpt1 = 'b' and endpt2 = 'e'. You then compare endpt1 to current.data, but then you access endpt2.next. <em>endpt2 is a single-character string</em>.</p>
<p>You never reference the <strong>prev</strong> pointers at all.</p>
<p>The only time you try to alter anything is when you hit node <strong>b</strong>. Then you seem to try to exchange the <strong>next</strong> pointers of <strong>b</strong> and <strong>e</strong> (so <strong>b.next</strong> is <strong>f</strong>, and <strong>e.next</strong> is <strong>c</strong>), but that's all.</p>
<p>At this point, I expect that your forward links give you two lists: a->b->f and c->d->e->c (a cycle), and that the backward links are unchanged.</p>
<p>Get pencil and paper or a white board and walk through these changes, one statement at a time; that's what I do ...</p>
<hr>
<p>I recovered <strong>twist</strong> from your previous post, fixed the indentation, and ran. As I explain above, this faults in execution:</p>
<pre><code>Traceback (most recent call last):
File "so.py", line 113, in <module>
L.twist(2,5)
File "so.py", line 102, in twist
current.next().setnext(endpt2.next())
AttributeError: 'int' object has no attribute 'next'
</code></pre>
<p>There is no such thing as <strong>2.next()</strong>. I don't think you're actually getting to your <strong>print</strong> statement. Also, <strong>print(L)</strong> will not print the node values in order; you haven't included a <strong>__repr__</strong> method for your DLL object. <strong>__str__</strong> would do the job, but you've used the list head pointer as if it were an iterable over the forward-linked list.</p>
<p>I strongly recommend that you back up and approach this with incremental programming: write a method or a few lines, test that, and don't continue until that works as you expect. Yes, this means that you're writing detailed tests as you go ... that's a good thing. Keep them, and keep running them.</p>
| 0 | 2016-10-06T23:19:48Z | [
"python",
"linked-list",
"doubly-linked-list"
]
|
Efficient Way to Permutate a Symmetric Square Matrix in Numpy | 39,906,919 | <p>What's the best way to do the following in Numpy when dealing with <strong>symmetric square matrices</strong> (<code>NxN</code>) where <code>N > 20000</code>? </p>
<pre><code>>>> a = np.arange(9).reshape([3,3])
>>> a = np.maximum(a, a.T)
>>> a
array([[0, 3, 6],
[3, 4, 7],
[6, 7, 8]])
>>> perm = np.random.permutation(3)
>>> perm
array([1, 0, 2])
>>> shuffled_arr = a[perm, :][:, perm]
>>> shuffled_arr
array([[4, 3, 7],
[3, 0, 6],
[7, 6, 8]])
</code></pre>
<p>This takes about 6-7 secs when N is about 19K, while the same operation in Matlab takes less than a second:</p>
<pre><code>perm = randperm(N);
shuffled_arr = arr(perm, perm);
</code></pre>
| 1 | 2016-10-06T23:02:50Z | 39,908,235 | <pre><code>In [703]: N=10000
In [704]: a=np.arange(N*N).reshape(N,N);a=np.maximum(a, a.T)
In [705]: perm=np.random.permutation(N)
</code></pre>
<p>One indexing step is quite a bit faster:</p>
<pre><code>In [706]: timeit a[perm[:,None],perm] # same as `np.ix_...`
1 loop, best of 3: 1.88 s per loop
In [707]: timeit a[perm,:][:,perm]
1 loop, best of 3: 8.88 s per loop
In [708]: timeit np.take(np.take(a,perm,0),perm,1)
1 loop, best of 3: 1.41 s per loop
</code></pre>
<p><code>a[perm,perm[:,None]]</code> is in the 8s category.</p>
| 2 | 2016-10-07T02:02:05Z | [
"python",
"matlab",
"numpy"
]
|
PATCH and PUT don't work as expected when pytest is interacting with REST framework | 39,906,956 | <p>I am building an API using the django REST framework.</p>
<p>To test this API I am using pytest and the test client like so:</p>
<pre><code>def test_doesnt_find(self, client):
resp = client.post(self.url, data={'name': '123'})
assert resp.status_code == 404
</code></pre>
<p>or</p>
<pre><code>def test_doesnt_find(self, client):
resp = client.get(self.url, data={'name': '123'})
assert resp.status_code == 404
</code></pre>
<p>both work when using the general GET, POST and DELETE Classes of the REST framework (like <code>DestroyAPIView</code>, <code>RetrieveUpdateAPIView</code> or just <code>APIView</code> using get and post functions)</p>
<p>Where I get problems is when using PATCH and PUT views. Such as <code>RetrieveUpdateAPIView</code>. Here I suddenly have to use:</p>
<pre><code>resp = client.patch(self.url, data="name=123", content_type='application/x-www-form-urlencoded')
</code></pre>
<p>or</p>
<pre><code>resp = client.patch(self.url, data=json.dumps({'name': '123'}), content_type='application/json')
</code></pre>
<p>If I simply try to use the test client like I am used to, I get errors:</p>
<pre><code>rest_framework.exceptions.UnsupportedMediaType: Unsupported media type "application/octet-stream" in request.
</code></pre>
<p>And when I specify 'application/json' in the client.patch() call:</p>
<pre><code>rest_framework.exceptions.ParseError: JSON parse error - Expecting property name enclosed in double quotes: line 1 column 2 (char 1)`
</code></pre>
<p>Can anyone explain this behavior to me? It is especially hard to catch as curl simply works as well using <code>-X PATCH -d"name=123"</code>.</p>
| 0 | 2016-10-06T23:06:56Z | 39,923,402 | <p>As for the request with JSON data, you are receiving this error because of <a href="http://www.json.org/" rel="nofollow">JSON syntax</a>: it requires <strong>double quotes</strong> around strings.</p>
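<p>Concretely, the "Expecting property name enclosed in double quotes" message is what Python's JSON parser raises when single-quoted, Python-style text reaches it, e.g. when a <code>dict</code> is stringified instead of serialized:</p>

```python
import json

payload = {'name': '123'}

good = json.dumps(payload)  # '{"name": "123"}' -- valid JSON, double quotes
bad = str(payload)          # "{'name': '123'}" -- Python repr, single quotes

json.loads(good)  # parses fine
try:
    json.loads(bad)
    parse_failed = False
except ValueError as err:  # json.JSONDecodeError subclasses ValueError
    parse_failed = True
    message = str(err)
```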
| -1 | 2016-10-07T18:06:08Z | [
"python",
"django-rest-framework",
"put",
"pytest-django"
]
|
PATCH and PUT don't work as expected when pytest is interacting with REST framework | 39,906,956 | <p>I am building an API using the django REST framework.</p>
<p>To test this API I am using pytest and the test client like so:</p>
<pre><code>def test_doesnt_find(self, client):
resp = client.post(self.url, data={'name': '123'})
assert resp.status_code == 404
</code></pre>
<p>or</p>
<pre><code>def test_doesnt_find(self, client):
resp = client.get(self.url, data={'name': '123'})
assert resp.status_code == 404
</code></pre>
<p>both work when using the general GET, POST and DELETE Classes of the REST framework (like <code>DestroyAPIView</code>, <code>RetrieveUpdateAPIView</code> or just <code>APIView</code> using get and post functions)</p>
<p>Where I get problems is when using PATCH and PUT views. Such as <code>RetrieveUpdateAPIView</code>. Here I suddenly have to use:</p>
<pre><code>resp = client.patch(self.url, data="name=123", content_type='application/x-www-form-urlencoded')
</code></pre>
<p>or</p>
<pre><code>resp = client.patch(self.url, data=json.dumps({'name': '123'}), content_type='application/json')
</code></pre>
<p>If I simply try to use the test client like I am used to, I get errors:</p>
<pre><code>rest_framework.exceptions.UnsupportedMediaType: Unsupported media type "application/octet-stream" in request.
</code></pre>
<p>And when I specify 'application/json' in the client.patch() call:</p>
<pre><code>rest_framework.exceptions.ParseError: JSON parse error - Expecting property name enclosed in double quotes: line 1 column 2 (char 1)`
</code></pre>
<p>Can anyone explain this behavior to me? It is especially hard to catch as curl simply works as well using <code>-X PATCH -d"name=123"</code>.</p>
| 0 | 2016-10-06T23:06:56Z | 39,953,904 | <blockquote>
<p>rest_framework.exceptions.ParseError: JSON parse error - Expecting property name enclosed in double quotes: line 1 column 2 (char 1)`</p>
</blockquote>
<p>This is usually a sign that you are sending a string inside a string in JSON.
For example:</p>
<pre><code>resp = client.patch(self.url, data=json.dumps("name=123"), content_type='application/json')
</code></pre>
<p>will cause this kind of issue.</p>
<blockquote>
<p>rest_framework.exceptions.UnsupportedMediaType: Unsupported media type "application/octet-stream" in request.</p>
</blockquote>
<p>This means that the request has been sent as "application/octet-stream" which is Django's test default.</p>
<p>To ease the pain of dealing with all that, Django REST framework provides a client of its own: <a href="http://www.django-rest-framework.org/api-guide/testing/#apiclient" rel="nofollow">http://www.django-rest-framework.org/api-guide/testing/#apiclient</a></p>
<p>Note that the syntax is slightly different from Django's one and that you won't have to deal with json encoding.</p>
| 0 | 2016-10-10T08:11:17Z | [
"python",
"django-rest-framework",
"put",
"pytest-django"
]
|
python read in a txt file and assign strings to combined variable (concatenation) | 39,906,991 | <p>I have a text file (Grades) where I'm given last name, first name, and grades. I need to read them in, assign them to variables, and write them to a different text file (Graded). I'm a beginner and this is my first hw assignment. There are errors I know, but the biggest thing I want to know is how to read in and then write to the file, as well as how to combine the 3 variables into one. </p>
<pre><code>class Student:
def __init__(self, str_fname, str_lname, str_grade):
"""Initialization method"""
self.str_fname = str_fname
self.str_lname = str_lname
self.str_grade = str_grade
@property
def str_fname(self):
return self.str_fname
@str_fname.setter
def str_fname(self, str_fname):
"""Setter for first name attribute"""
if not isinstance(str_fname, str):
#This is not a string, raise an error
raise TypeError
self.fname = str_fname
@property
def str_lname(self):
"""Return the name"""
return self.str_lname
@str_lname.setter
def str_fname(self, str_lname):
"""Setter for last name attribute"""
if not isinstance(str_lname, str):
# This is not a string, raise an error
raise TypeError
self.str_lname = str_lname
@property
def str_grade(self):
"""Return the name"""
return self.str_grade
@str_grade.setter
def str_grade(self, str_grade):
"""Setter for grade attribute"""
if not isinstance(str_grade, str):
# This is not a string, raise an error
raise TypeError
self.str_grade = str_grade
Student = str_lname+str_fname+str_fname
f = open('Grades.txt', 'r')
g = open('Graded.text', 'w')
f.read()
g.write()
</code></pre>
| -1 | 2016-10-06T23:10:32Z | 39,907,047 | <p>Have you read through the Python tutorial?
<a href="https://docs.python.org/3.6/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow">https://docs.python.org/3.6/tutorial/inputoutput.html#reading-and-writing-files</a></p>
| 1 | 2016-10-06T23:16:06Z | [
"python",
"file",
"read-write"
]
|
concatenating variables into one large variable with python | 39,907,006 | <p>I have the below program in python which is supposed to print values from an array.</p>
<pre><code>import sys
from datetime import *
import pprint
txtfile=sys.argv[1]
f = open(txtfile,'r')
lines=f.readlines()
f.close()
for line in lines:
col = line.split(',')
wrow=[]
grid=[]
lon=int(round(float(col[0]),0))
lat=int(round(float(col[1]),0))
val=float(col[2])
my_tuple=(lon,lat,val)
wrow.append(my_tuple)
if(len(wrow)==720):
row = wrow[0:] + wrow[:720]
grid.append(row)
wrow=[]
row=[]
if(len(grid)==360):
grid.reverse()
for row in grid:
string=''
i=1
for mytuple in row:
print "count is ",i
val=str(mytuple[2])
print "val is ",val
#string = string + val + '\t'
#print "string is ",string
i=i+1
if(i==4):
print "****************************"
string = string + val + '\t'
#print string
#string = string + '\n'
i=1
else:
string = string + '\n'
#print string
#pprint.pprint(grid)
grid=[]
wrow=[]
exit()
</code></pre>
<p>The output of this program looks something like this:</p>
<pre><code>****************************
count is 1
val is 0.51
count is 2
val is 0.69
count is 3
val is 0.83
****************************
count is 1
val is 0.7
count is 2
val is 0.59
count is 3
val is 0.93
*****************************
</code></pre>
<p>Now I want to set up the python script in such a way that I have a variable string that concatenates all the variables val. Therefore if I have:</p>
<pre><code>print "***************************"
print string
</code></pre>
<p>I would get the output:</p>
<pre><code>*******************************
0.51 0.69 0.83
*******************************
0.7 0.59 0.93
</code></pre>
<p>Any ideas on how to tweak this script to do this?</p>
| 0 | 2016-10-06T23:11:42Z | 39,907,072 | <p>You'll have to do some kind of loop or join to put them all on the same line, but it's not that bad. I'd do something like this:</p>
<pre><code>tuples = [(1, .51), (2, 0.69), (3, 0.83)]
joined = " ".join([repr(x[1]) for x in tuples])
print(joined)
0.51 0.69 0.83
</code></pre>
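<p>And if you want one line per group of three values, as in the desired output above, a sketch of the same idea applied per chunk (the chunk size of 3 and the sample values are taken from the question):</p>

```python
vals = [0.51, 0.69, 0.83, 0.7, 0.59, 0.93]

# Join each chunk of three values with spaces, then join the chunks with newlines.
lines = []
for i in range(0, len(vals), 3):
    lines.append(" ".join(repr(v) for v in vals[i:i + 3]))
out = "\n".join(lines)
print(out)
```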
| 2 | 2016-10-06T23:19:16Z | [
"python",
"string",
"loops",
"variables",
"concatenation"
]
|
PySp-Pyomo error: 'dict' has no attribute 'f' | 39,907,057 | <p>I am new to Pyomo and PySP. I am trying to replicate the solutions for the Stochastic Programming tutorial under Vehicle Routing Problems from <code>https://projects.coin-or.org/Coopr/browser/pyomo.data/trunk/pyomo/data/pysp/vehicle_routing/3-7f?rev=9398&order=name</code>
But with the exception of PS3-7b, all the other codes, once I replicate them in their respective folders and run the command</p>
<pre><code>`pyomo solve --solver=glpk ReferenceModel.py ReferenceModel.dat`
</code></pre>
<p>throws the following errors</p>
<pre><code>[ 0.00] Setting up Pyomo environment
[ 0.00] Applying Pyomo preprocessing actions
[ 0.78] Pyomo Finished
ERROR: Unexpected exception while loading model:
'dict' object has no attribute 'f'
</code></pre>
<p>This has been bugging me for several days now. Any help as to what I am doing wrong?</p>
<pre><code>I am running Pyomo 4.3.11388 (Python 2.7.10 on Darwin 15.6.0) on MacBook Late 2008 model.
</code></pre>
<p>Thanks</p>
| 0 | 2016-10-06T23:16:46Z | 39,907,309 | <p>Try adding -c at the end of the command. It will provide you with a full stack trace showing the source of the error.</p>
<p>You should also note that the Coopr project has been renamed to Pyomo, and we are now hosted on Github. The most up to date documentation can be found at pyomo.org</p>
<p>Edit:</p>
<p>I took a closer look at that example, and fixed a few bugs. You can find the updated code here: <a href="https://github.com/Pyomo/pyomo-model-libraries/blob/master/pysp/vehicle_routing/3-7b/ReferenceModel.py" rel="nofollow">https://github.com/Pyomo/pyomo-model-libraries/blob/master/pysp/vehicle_routing/3-7b/ReferenceModel.py</a>.</p>
<p>You should note that 3-7b is set up to run as a stand-alone script. That is, you should not run it using the pyomo command, but instead run it using the python interpreter that Pyomo was installed into</p>
<pre><code>python ReferenceModel.py
</code></pre>
<p>If you look at the bottom of that file, you will see code that (1) creates a concrete instance using the .dat file, (2) creates a solver and solves the model with it, and (3) interrogates the solution by printing the value of the objective and variables on the instance. This is basically what the pyomo command does when you provide it with a model file, so you should not provide it with a model file that contains this kind of code.</p>
| 0 | 2016-10-06T23:47:25Z | [
"python",
"python-2.7",
"pyomo"
]
|
How to pass dictionary as an argument of function and how to access them in the function | 39,907,174 | <p>I tried doing this:</p>
<pre><code>def func(dict):
    if dict[a] == dict[b]:
        dict[c] = dict[a]
    return dict

num = { "a": 1, "b": 2, "c": 2}
print(func(**num))
</code></pre>
<p>But it gives a TypeError:
func() got an unexpected keyword argument 'a'</p>
| -2 | 2016-10-06T23:31:24Z | 39,907,227 | <p>Two main problems:</p>
<ul>
<li>passing the ** argument is incorrect. Just pass the dictionary; Python will handle the referencing.</li>
<li>you tried to reference locations with uninitialized variable names. Letters a/b/c are literal strings in your dictionary.</li>
</ul>
<p>Code:</p>
<pre><code>def func(table):
    if table['a'] == table['b']:
        table['c'] = table['a']
    return table

num = { "a": 1, "b": 2, "c": 2}
print(func(num))
</code></pre>
<hr>
<p>Now, let's try a couple of test cases: one with a & b different, one matching:</p>
<pre><code>>>> letter_count = { "a": 1, "b": 2, "c": 2}
>>> print(func(letter_count))
{'b': 2, 'c': 2, 'a': 1}
>>> letter_count = { "a": 1, "b": 1, "c": 2}
>>> print(func(letter_count))
{'b': 1, 'c': 1, 'a': 1}
</code></pre>
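<p>For contrast, <code>func(**num)</code> from the question only works if the function is written to accept keyword arguments -- a sketch:</p>

```python
def func(**kwargs):  # ** here collects the keyword arguments into a dict
    if kwargs["a"] == kwargs["b"]:
        kwargs["c"] = kwargs["a"]
    return kwargs

num = {"a": 1, "b": 1, "c": 2}
print(func(**num))  # {'a': 1, 'b': 1, 'c': 1}
```

<p>Note that <code>kwargs</code> is a new dict built from the unpacked arguments, so <code>num</code> itself is left unchanged by this version.</p>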
| 1 | 2016-10-06T23:37:38Z | [
"python",
"python-3.x"
]
|
How to pass dictionary as an argument of function and how to access them in the function | 39,907,174 | <p>I tried doing this:</p>
<pre><code>def func(dict):
    if dict[a] == dict[b]:
        dict[c] = dict[a]
    return dict

num = { "a": 1, "b": 2, "c": 2}
print(func(**num))
</code></pre>
<p>But it gives a TypeError:
func() got an unexpected keyword argument 'a'</p>
| -2 | 2016-10-06T23:31:24Z | 39,907,260 | <p>Using ** will unpack the dictionary, in your case you should just pass a reference to <code>num</code> to func, i.e.</p>
<p><code>print(func(num))</code></p>
<p>(Unpacking <code>**</code> is the equivalent of <code>func(a=1,b=2,c=2)</code>), e.g.</p>
<pre><code>def func(arg1,arg2):
    return arg1 + arg2

args = {"arg1":3,"arg2":4}
print(func(**args))
</code></pre>
| 1 | 2016-10-06T23:40:51Z | [
"python",
"python-3.x"
]
|
Python not identifying a white space character | 39,907,195 | <p>I am near my wit's end with this problem: Basically, I need to remove a double space gap between words. My program happens to be in Hebrew, but this is the basic idea:</p>
<pre><code>TITLE: ××××ת â â×ש××תâ â×××קרâ
</code></pre>
<p>Notice there is an extra space between the first two words (Hebrew reads right to left).</p>
<p>I tried many, many different methods, here are a few:</p>
<pre><code># tried all these with and without unicode
title = re.sub(u'\s+',u' ',title.decode('utf-8'))
title = title.replace("  "," ")
title = title.replace(u"  ××××ת",u" ××××ת")
</code></pre>
<p>Until finally I resorted to making a very unnecessary method (some of the formatting got messed up when pasting):</p>
<pre><code>def remove_blanks(s):
    word_list = s.split(" ")
    final_word_list = []
    for word in word_list:
        print "word: " + word
        #tried every qualifier I could think of...
        if not_blank(word) and word != " " and True != re.match("s*", word):
            print "^NOT BLANK^"
            final_word_list.append(word)
    return ' '.join(final_word_list)

def not_blank(s):
    while " " in s:
        s = s.replace(" ", "")
    return (len(s.replace("\n","").replace("\r","").replace("\t","")) != 0);
</code></pre>
<p>And, to my utter amazement, this is what I got back:</p>
<pre><code>word: ××××ת
^NOT BLANK^
word: â #this should be tagged as Blank!!
^NOT BLANK^
word: â×ש××תâ
^NOT BLANK^
word: â×××קרâ
^NOT BLANK^
</code></pre>
<p>So apparently my qualifier didn't work. What is going on here?</p>
| -1 | 2016-10-06T23:33:33Z | 39,907,279 | <p>There was a hidden \xe2\x80\x8e, LEFT-TO-RIGHT MARK. Found it using repr(word). Thanks @mgilson!</p>
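<p>For reference, a small sketch (Python 3 syntax) of spotting and stripping such marks; the character class below -- LRM, RLM, zero-width space, BOM -- is an assumed list of common invisible format characters, not something specific to this post:</p>

```python
import re

# A token that *looks* blank but is actually a LEFT-TO-RIGHT MARK (U+200E).
word = u"\u200e"
print(repr(word))  # repr() exposes the invisible character

def strip_marks(s):
    # Remove common invisible format characters before testing for blankness.
    return re.sub(u"[\u200e\u200f\u200b\ufeff]", u"", s)

print(strip_marks(word) == u"")  # the "word" really was blank
```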
| 0 | 2016-10-06T23:44:24Z | [
"python",
"regex"
]
|
Python attach list of dict to another dict as new key | 39,907,209 | <p>I have two lists. The first one (list a) contains lists of dicts, and every list represents the comments from a specific post. They all have the same 'id' value. The second list (list b) contains dicts only, and these dicts are posts.</p>
<p>Now I need to create a new key named 'comments' for every dict in b_list and assign the appropriate list from a_list as its value. So the targeted list is the one whose dict['id'] values are the same as the post value.</p>
<pre><code>a_list=[
    [{'id':'123', 'user':'Foo'}, {'id':'123','user':'Jonny'}, ...],
    [{'id':'456', 'user':'Bar'}, {'id':'456','user':'Mary'}, ...],
    ...
]

b_list=[{'post':'123','text': 'Something'}, {'post':'456', 'text': 'Another thing'}, ...]
</code></pre>
<p>What will be the best and more pythonic way then to do that?</p>
| 0 | 2016-10-06T23:35:20Z | 39,907,286 | <p>Build a dictionary of IDs and then go through them:</p>
<pre><code>>>> a_list=[
... [{'id':'123', 'user':'Foo'}, {'id':'123','user':'Jonny'}, ],
... [{'id':'456', 'user':'Bar'}, {'id':'456','user':'Mary'},],
... ]
>>> b_list=[{'post':'123','text': 'Something'}, {'post':'456', 'text':'Another thing'}, ]
>>> d = {l[0]['id']:l for l in a_list}
>>> for item in b_list:
...     item['comments'] = d[item['post']]
...
>>> import pprint
>>> pprint.pprint(b_list)
[{'comments': [{'id': '123', 'user': 'Foo'}, {'id': '123', 'user': 'Jonny'}],
  'post': '123',
  'text': 'Something'},
 {'comments': [{'id': '456', 'user': 'Bar'}, {'id': '456', 'user': 'Mary'}],
  'post': '456',
  'text': 'Another thing'}]
</code></pre>
| 0 | 2016-10-06T23:45:00Z | [
"python",
"django",
"list",
"python-3.x",
"dictionary"
]
|
Python attach list of dict to another dict as new key | 39,907,209 | <p>I have two lists. The first one (list a) contains lists of dicts, and every list represents the comments from a specific post. They all have the same 'id' value. The second list (list b) contains dicts only, and these dicts are posts.</p>
<p>Now I need to create a new key named 'comments' for every dict in b_list and assign the appropriate list from a_list as its value. So the targeted list is the one whose dict['id'] values are the same as the post value.</p>
<pre><code>a_list=[
    [{'id':'123', 'user':'Foo'}, {'id':'123','user':'Jonny'}, ...],
    [{'id':'456', 'user':'Bar'}, {'id':'456','user':'Mary'}, ...],
    ...
]

b_list=[{'post':'123','text': 'Something'}, {'post':'456', 'text': 'Another thing'}, ...]
</code></pre>
<p>What will be the best and more pythonic way then to do that?</p>
| 0 | 2016-10-06T23:35:20Z | 39,907,313 | <p><em>I am assuming that in the <code>a_list</code>, each nested <code>list</code> will have the same <code>'id'</code>, and that there will be only one list per id.</em></p>
<p>To achieve this, iterate over b_list and check for a match in <code>a_list</code>. In case of a match, assign the matching list to the dict object from <code>b_list</code>:</p>
<pre><code>>>> a_list=[
... [{'id':'123', 'user':'Foo'}, {'id':'123','user':'Jonny'}],
... [{'id':'456', 'user':'Bar'}, {'id':'456','user':'Mary'}],
... ]
>>> b_list=[{'post':'123','text': 'Something'}, {'post':'456', 'text': 'Another thing'}]
>>>
>>> for dict_item in b_list:
...     id = dict_item['post']
...     for list_item in a_list:
...         if list_item[0]['id'] == id:
...             dict_item['comments'] = list_item
...             break
...
>>> b_list
[{
    'text': 'Something',
    'post': '123',
    'comments': [
        {
            'id': '123',
            'user': 'Foo'
        },
        {
            'id': '123',
            'user': 'Jonny'
        }
    ]
},
{
    'post': '456',
    'text': 'Another thing',
    'comments': [
        {
            'id': '456',
            'user': 'Bar'
        },
        {
            'id': '456',
            'user': 'Mary'
        }
    ]
}
]
</code></pre>
| 0 | 2016-10-06T23:49:18Z | [
"python",
"django",
"list",
"python-3.x",
"dictionary"
]
|
How to put swig wrappers in a reachable location to be tested by python tests? | 39,907,237 | <p>I got a simple test example like this one:</p>
<p><a href="http://i.stack.imgur.com/4GHE0.png" rel="nofollow"><img src="http://i.stack.imgur.com/4GHE0.png" alt="enter image description here"></a></p>
<p>And my CMakeLists.txt looks like this:</p>
<pre><code>cmake_minimum_required(VERSION 3.7)
FIND_PACKAGE(SWIG REQUIRED)
INCLUDE(${SWIG_USE_FILE})
FIND_PACKAGE(PythonLibs)
INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_SOURCE_DIR})
SET(CMAKE_SWIG_FLAGS "")
SET_SOURCE_FILES_PROPERTIES(example.i PROPERTIES CPLUSPLUS ON)
set(SWIG_MODULE_example_EXTRA_DEPS example.cxx example.h)
SWIG_ADD_MODULE(example python example.i example.cxx)
SWIG_LINK_LIBRARIES(example ${PYTHON_LIBRARIES})
</code></pre>
<p>After generating the ST files and building the wrapper I'll get this output in the build directory:</p>
<p><a href="http://i.stack.imgur.com/NIPIL.png" rel="nofollow"><img src="http://i.stack.imgur.com/NIPIL.png" alt="enter image description here"></a></p>
<p>Problem here is my test <code>runme.py</code> isn't able to load those libraries. So, my question is, how can I place the <code>example.py</code> and <code>example.pyd</code> (only those ones) to be in a reachable place to be tested properly? I've tried adding this line to the CMakeLists.txt <code>set(CMAKE_SWIG_OUTDIR ${CMAKE_CURRENT_BINARY_DIR}/..)</code> and the result was placing the .py and the generated wrappers alone:</p>
<p><a href="http://i.stack.imgur.com/KrrcL.png" rel="nofollow"><img src="http://i.stack.imgur.com/KrrcL.png" alt="enter image description here"></a></p>
<p>And the .pyd remaining in the build folder:</p>
<p><a href="http://i.stack.imgur.com/9R3yO.png" rel="nofollow"><img src="http://i.stack.imgur.com/9R3yO.png" alt="enter image description here"></a></p>
<p>So, how can I put the wrappers in a reachable place to be tested by my python test files?</p>
| 0 | 2016-10-06T23:38:46Z | 39,907,318 | <p>I am assuming that you want to import the libraries into your runme.py. In order to import python files in a sub-directory, that sub-directory needs an __init__.py file in it. You can place an empty one into your build folder. This should allow your program to import the files.</p>
<p>You could then use:</p>
<pre><code>import build.example
import build._example
</code></pre>
<p>This way it wouldn't matter if the files aren't in the same directory, assuming that there is not another reason you want them there.</p>
<p>Edit:</p>
<p>Since the build folder doesn't exist ahead of time you can always create and compile the __init__.py file in the runme.py file by doing:</p>
<pre><code>import os
import py_compile
dir_path = os.path.dirname(os.path.realpath(__file__))
open(dir_path + os.sep + "build" + os.sep + "__init__.py", "w+")
py_compile.compile(dir_path + os.sep + "build" + os.sep + "__init__.py")
py_compile.compile(dir_path + os.sep + "build" + os.sep + "example.py")
import build.example
import build._example
</code></pre>
<p>This will create the file you need and then be able to do the import without using CMake.</p>
| 0 | 2016-10-06T23:50:03Z | [
"python",
"cmake",
"swig"
]
|
How to put swig wrappers in a reachable location to be tested by python tests? | 39,907,237 | <p>I got a simple test example like this one:</p>
<p><a href="http://i.stack.imgur.com/4GHE0.png" rel="nofollow"><img src="http://i.stack.imgur.com/4GHE0.png" alt="enter image description here"></a></p>
<p>And my CMakeLists.txt looks like this:</p>
<pre><code>cmake_minimum_required(VERSION 3.7)
FIND_PACKAGE(SWIG REQUIRED)
INCLUDE(${SWIG_USE_FILE})
FIND_PACKAGE(PythonLibs)
INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH})
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_SOURCE_DIR})
SET(CMAKE_SWIG_FLAGS "")
SET_SOURCE_FILES_PROPERTIES(example.i PROPERTIES CPLUSPLUS ON)
set(SWIG_MODULE_example_EXTRA_DEPS example.cxx example.h)
SWIG_ADD_MODULE(example python example.i example.cxx)
SWIG_LINK_LIBRARIES(example ${PYTHON_LIBRARIES})
</code></pre>
<p>After generating the ST files and building the wrapper I'll get this output in the build directory:</p>
<p><a href="http://i.stack.imgur.com/NIPIL.png" rel="nofollow"><img src="http://i.stack.imgur.com/NIPIL.png" alt="enter image description here"></a></p>
<p>Problem here is my test <code>runme.py</code> isn't able to load those libraries. So, my question is, how can I place the <code>example.py</code> and <code>example.pyd</code> (only those ones) to be in a reachable place to be tested properly? I've tried adding this line to the CMakeLists.txt <code>set(CMAKE_SWIG_OUTDIR ${CMAKE_CURRENT_BINARY_DIR}/..)</code> and the result was placing the .py and the generated wrappers alone:</p>
<p><a href="http://i.stack.imgur.com/KrrcL.png" rel="nofollow"><img src="http://i.stack.imgur.com/KrrcL.png" alt="enter image description here"></a></p>
<p>And the .pyd remaining in the build folder:</p>
<p><a href="http://i.stack.imgur.com/9R3yO.png" rel="nofollow"><img src="http://i.stack.imgur.com/9R3yO.png" alt="enter image description here"></a></p>
<p>So, how can I put the wrappers in a reachable place to be tested by my python test files?</p>
| 0 | 2016-10-06T23:38:46Z | 39,967,741 | <p>I use C++ for unit testing and Python for testing against reference implementations or integration testing. </p>
<p>The approach I use is to add a custom target, which copies the libraries as well as any extra generated files to my source directory, where the python test files are located. I have configured my <code>.gitignore</code> file such that they are never added by mistake to my repository.</p>
<pre><code>add_custom_target(copy ALL)
# Some library my Swig wrapped library depends on (say gl)
add_custom_command(TARGET copy
PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy $<TARGET_FILE:gl> ${CMAKE_CURRENT_SOURCE_DIR}
DEPENDS $<TARGET_FILE:gl>)
# The Swig library, say swig_example (Note the _ in the target name)
add_custom_command(TARGET copy
PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_FILE:_swig_example> ${CMAKE_CURRENT_SOURCE_DIR}
DEPENDS $<TARGET_FILE:_swig_example>)
# The extra .py files generated by SWIG
add_custom_command(TARGET copy
PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${swig_extra_generated_files} ${CMAKE_CURRENT_SOURCE_DIR}
DEPENDS $<TARGET_FILE:_swig_example>)
enable_testing()
add_test(
NAME "${CMAKE_CURRENT_SOURCE_DIR}/some_test.py"
COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/some_test.py
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
</code></pre>
<p>I believe this is a very neat setup. Alternatively, you could start adding paths in your Python test scripts, but it is often desired to support an arbitrary build location.</p>
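<p>If you do go the path-manipulation route, a sketch of what that might look like (the relative location of the build directory is an assumption -- adjust it to your layout):</p>

```python
import os
import sys

# Hypothetical layout: the SWIG output (example.py, _example.pyd) lands in a
# "build" sub-directory of the directory the tests run from.
build_dir = os.path.join(os.getcwd(), "build")
sys.path.insert(0, build_dir)

# Now "import example" would pick up build/example.py directly, without
# needing an __init__.py, since build is on sys.path itself.
```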
| 0 | 2016-10-10T22:31:34Z | [
"python",
"cmake",
"swig"
]
|
How do I apply a regex substitution in a string column | 39,907,239 | <p>I have a data frame with a column like below</p>
<pre><code>Years in current job
< 1 year
10+ years
9 years
1 year
</code></pre>
<p>I want to use regex or any other technique in python to get the result as</p>
<pre><code>Years in current job
1
10
9
1
</code></pre>
<p>I got something like this but, i guess it can be done in a better way using regex</p>
<pre><code>frame["Years in current job"] = frame["Years in current job"].str.replace(" ","")
frame["Years in current job"] = frame["Years in current job"].str.replace("<","")
frame["Years in current job"] = frame["Years in current job"].str.replace("year","")
frame["Years in current job"] = frame["Years in current job"].str.replace("years","")
</code></pre>
| -3 | 2016-10-06T23:39:04Z | 39,907,379 | <pre><code>df['Years in current job'] = df['Years in current job'].str.replace('\D+', '').astype('int')
</code></pre>
<p>The regex <code>\D+</code> matches runs of non-digit characters (which are replaced with an empty string).</p>
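<p>The same substitution can be tried outside pandas with the plain <code>re</code> module, using the values from the question:</p>

```python
import re

raw = ["< 1 year", "10+ years", "9 years", "1 year"]
years = [int(re.sub(r"\D+", "", s)) for s in raw]  # strip everything non-digit
print(years)  # [1, 10, 9, 1]
```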
<hr>
<p>I found this on SO: <a href="http://stackoverflow.com/a/22591024/1832058">http://stackoverflow.com/a/22591024/1832058</a></p>
| 1 | 2016-10-06T23:57:26Z | [
"python",
"regex",
"pandas"
]
|
How do I apply a regex substitution in a string column | 39,907,239 | <p>I have a data frame with a column like below</p>
<pre><code>Years in current job
< 1 year
10+ years
9 years
1 year
</code></pre>
<p>I want to use regex or any other technique in python to get the result as</p>
<pre><code>Years in current job
1
10
9
1
</code></pre>
<p>I got something like this but, i guess it can be done in a better way using regex</p>
<pre><code>frame["Years in current job"] = frame["Years in current job"].str.replace(" ","")
frame["Years in current job"] = frame["Years in current job"].str.replace("<","")
frame["Years in current job"] = frame["Years in current job"].str.replace("year","")
frame["Years in current job"] = frame["Years in current job"].str.replace("years","")
</code></pre>
| -3 | 2016-10-06T23:39:04Z | 39,907,487 | <pre><code>import re
def extract_nums(txt):
    try:
        return int(re.search('([0-9]+)', txt).group(1))
    except:
        return -1
df['Years in current job'] = df['Years in current job'].apply(extract_nums)
</code></pre>
<p>EDIT - adding context per suggestion below</p>
<p>this could be done easily enough with string methods, but I'll throw out an approach using regex since that might be helpful for more complicated tasks.</p>
<p>re.search and the parentheses will find the digits you're looking for... group(1) extracts the match inside the parentheses... and try/except will handle any problems that arise if there is no match. Then just pass that function to the pandas.Series apply() method.</p>
<p>regex search: <a href="https://docs.python.org/2/library/re.html#regular-expression-objects" rel="nofollow">https://docs.python.org/2/library/re.html#regular-expression-objects</a></p>
<p>apply method: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html</a></p>
| 0 | 2016-10-07T00:12:48Z | [
"python",
"regex",
"pandas"
]
|
Pandas df.isnull().all() count across multiple files | 39,907,249 | <p>I have 2000 csv files in a data set with 88 columns each: </p>
<pre><code>filenames = glob.glob('path\*.csv')

for f in filenames:
    df = pd.read_csv(f, error_bad_lines = False)
    df = df.isnull().all()
</code></pre>
<p>This returns a series with the column title, and True if the entire column is missing. How can I count the number of Trues (completely missing columns) across the entire dataset (2000 csv files) so I can express as a % how much data is missing on a per-file basis?</p>
<p>If an entire column is missing per file, I would want to add 1 and keep a running total of that </p>
| 1 | 2016-10-06T23:40:03Z | 39,907,334 | <p>The way you've phrased it, you're getting the number of missing columns per dataset. </p>
<p>However, you can get the number of missing rows per column, you can modify that code and call this:</p>
<pre><code>df.isnull().sum()
</code></pre>
<p>which will yield a count of missing rows per column. Something like: </p>
<pre><code>column1      0
column2      0
column3    171
column4    798
column5      0
dtype: int64
</code></pre>
<p>Simply calling <code>.sum()</code> again will give you the sum of missing observations. </p>
<p>The total number of cells in the data frame will be equal to the columns multiplied by rows, which you can calculate by calling this: </p>
<pre><code>df.shape[0]*df.shape[1]
</code></pre>
<p>Which means you can calculate the missing percent by calling this:</p>
<pre><code>total = df.shape[0]*df.shape[1]
missing = df.isnull().sum().sum()
percent = missing/float(total)
</code></pre>
<p>Just append those values to a list, so you can save them for reference later. Try something like this: </p>
<pre><code>misscount = []
for f in filenames:
    df = pd.read_csv(f, error_bad_lines = False)
    total = df.shape[0]*df.shape[1]
    missing = df.isnull().sum().sum()
    percent = missing/float(total)
    misscount.append(percent)
</code></pre>
<h3>EDIT:</h3>
<p>based on feedback in the comments:</p>
<p>"....I actually do want the number of columns missing per the entire dataset(2000 csv files)....So if an entire column is missing, I'd want to add that to a "missing variable" like yours and then divide by the length of the entire dataset(2000)."</p>
<p>So, in order to calculate the total number of columns for a given csv file, you can call this: </p>
<pre><code>total =len(df.columns)
</code></pre>
<p>In order to calculate the total number of missing columns per csv file, you can call this:</p>
<pre><code>missing = df.isnull().all().sum()
</code></pre>
<p>So the missing column percent per csv file can be calculated like this:</p>
<pre><code>percent = missing/float(total)
</code></pre>
<p>But it sounds like you want a running tally. So let's use this loop:</p>
<pre><code>colcount = 0
misscount = 0
for f in filenames:
    df = pd.read_csv(f, error_bad_lines = False)
    colcount += len(df.columns)
    misscount += df.isnull().all().sum()

percent = misscount/float(colcount)
</code></pre>
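<p>A tiny worked example of those building blocks (a toy 3-column frame, not the real 88-column files):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [1.0, 2.0],        # complete column
    "b": [np.nan, np.nan],  # entirely missing column
    "c": [np.nan, 3.0],     # partially missing column
})

print(df.isnull().sum())                       # missing rows per column
print(int(df.isnull().all().sum()))            # completely missing columns: 1
total = df.shape[0] * df.shape[1]              # 6 cells in total
print(df.isnull().sum().sum() / float(total))  # overall missing fraction: 0.5
```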
| 1 | 2016-10-06T23:51:25Z | [
"python",
"pandas"
]
|
PIL OverflowError on loading PIL.Image.fromArray | 39,907,275 | <p>I am trying to store large images using pillow 3.3.1 on python 3.4. These images tend to be in the range from 1 to 4 GB, as uint8 RGB pixels. Linux and OSX give me the same result.</p>
<pre><code>from PIL import Image
import numpy as np
imgArray = np.random.randint(255, size=(39000, 35000, 3)).astype(np.uint8)
print("buffer size:", imgArray.size)
print("image max bytes:", 2**32)
pilImage = Image.fromarray(imgArray)
</code></pre>
<p>I get the following output</p>
<pre><code>buffer size: 4095000000
image max bytes: 4294967296
Traceback (most recent call last):
  File "storeLargeImage.py", line 6, in <module>
    pilImage = Image.fromarray(imgArray)
  File "/home/mpesavento/miniconda3/lib/python3.4/site-packages/PIL/Image.py", line 2189, in fromarray
    return frombuffer(mode, size, obj, "raw", rawmode, 0, 1)
  File "/home/mpesavento/miniconda3/lib/python3.4/site-packages/PIL/Image.py", line 2139, in frombuffer
    return frombytes(mode, size, data, decoder_name, args)
  File "/home/mpesavento/miniconda3/lib/python3.4/site-packages/PIL/Image.py", line 2074, in frombytes
    im.frombytes(data, decoder_name, args)
  File "/home/mpesavento/miniconda3/lib/python3.4/site-packages/PIL/Image.py", line 736, in frombytes
    s = d.decode(data)
OverflowError: size does not fit in an int
</code></pre>
<p>The buffer is smaller than the max I think PIL uses in python 3, which I thought used a uint32 for the buffer length. PIL in python 2 uses an int32, making the max 2**31-1.</p>
<p>This bug comes up before we determine the codec for storing. For example, I would like to either store lossless png or tif, via<br>
<code>pilImage.save(BytesIO(), format="png")</code><br>
or<br>
<code>pilImage.save(BytesIO(), format="tiff")</code> </p>
<p>How can one go about saving an image larger than 2 GB (2147483647 bytes)?</p>
<p>Edit:
It looks like it should have been <a href="https://github.com/python-pillow/Pillow/issues/436" rel="nofollow">fixed a while ago</a>. Not sure why the problem still shows up.</p>
| 0 | 2016-10-06T23:43:54Z | 39,915,609 | <p>I realize you asked about PIL, but you could try <a href="http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS" rel="nofollow">libvips</a>. It specializes in large images (images larger than your available RAM) and should have no problems with your 4gb files. There are some <a href="http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use#Results" rel="nofollow">speed and memory use benchmarks on the vips website</a>.</p>
<p>For example, I just did:</p>
<pre><code>$ python
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from gi.repository import Vips
>>> x = Vips.Image.perlin(100000, 100000, uchar = True)
>>> x.write_to_file("x.tif", bigtiff = True)
>>>
$ ls -l x.tif
-rw-rw-r-- 1 john john 10000012844 Oct 7 11:43 x.tif
</code></pre>
<p>Makes an 10gb, 8 bit, 100,000 x 100,000 pixel image of Perlin noise. It takes a little while, of course --- the final write is about three minutes on my laptop and needs 100MB of RAM. The Python binding is documented <a href="http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/using-from-python.html" rel="nofollow">here</a>. </p>
<p>You can <a href="http://stackoverflow.com/questions/31632265/extract-vips-image-and-save-it-to-numpy-array-same-as-with-pil">move images between numpy and vips</a>, though it's done by allocating and copying huge strings, sadly, so you need plenty of memory for that. You might be able to do what you want with just the vips operations, it depends on your needs. </p>
| 0 | 2016-10-07T10:59:21Z | [
"python",
"pillow"
]
|
Django, uwsgi static files not being served even after collectstatic | 39,907,281 | <p>I'm having trouble with the deployment of a Django application on a Debian 8 VPS. Python version 2.7, Django 1.10.2.</p>
<p>My problem is that it will not serve static files in production mode (DEBUG = False) even after having run 'collectstatic' and assigning a STATIC_ROOT directory.</p>
<p>I've followed every instruction regarding this deployment (nginx, uwsgi, python) but still get 404's for all my static files. When collectstatic is run, it does put all the files into a /static/ directory at the top of the application. When I run uwsgi or the development server, the HTML and python functions fine, but all my static CSS,JS,IMG are not accessible.</p>
<p>When I switch back to DEBUG=True and run the dev server, my static files are present again.</p>
<p>Can someone have a look and see what I could be doing wrong? If you need any more context of files please let me know.</p>
<p>My settings.py reads as follows:</p>
<pre><code>"""
Django settings for mysite project.

Generated by 'django-admin startproject' using Django 1.10.2.

For more information on this file, see
https://docs.djangoproject.com/en/1.10/topics/settings/

For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.10/ref/settings/
"""

import os

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'removedforstackoverflow'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False

ALLOWED_HOSTS = ['removedforstackoverflow']

# Application definition

INSTALLED_APPS = [
    'main',
    'instant',
    'opengig',
    'widget_tweaks',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'mysite.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'mysite.wsgi.application'

# Database
# https://docs.djangoproject.com/en/1.10/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

# Password validation
# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators

AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/1.10/topics/i18n/

LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.10/howto/static-files/

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static/")
</code></pre>
<p>Here's the header.html file where I call my static files. If it was just Bootstrap I'd use the CDN, but I have a few images that I use and I'd rather not set up a server just to host my static files when this application is so small.</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<title>Instant Backoffice</title>
<meta charset="utf-8" />
{% load staticfiles %}
<link rel="stylesheet" href="{% static 'css/bootstrap.min.css' %}" type = "text/css"/>
<meta name="viewport" content = "width=device-width, initial-scale=1.0">
</head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script type="text/javascript" src="{% static 'js/bootstrap.min.js' %}"></script>
<body>
<nav class="navbar navbar-default">
<div class="container-fluid">
<div class="navbar-header">
<a class="navbar-brand" href="/">
<img alt="Brand" src="{% static 'img/logo.svg' %}" height="25">
</a>
</div>
<ul class="nav navbar-nav navbar-right">
<ul class="nav nav-pills">
<li><a href="/instant/allorders">List of Orders</a></li>
<li><a href="/instant/payment">Change Payment Status</a></li>
<li><a href="/instant/review">Add a Review</a></li>
<li><a href="/instant/cancel">Cancel an Order</a></li>
<li><a href="/logout/">Logout</a></li>
</ul>
</li>
</ul>
</div>
</nav>
<div class="row">
<div class='container-fluid'>
<div class="col-sm-12">
{% block content %}
{% endblock %}
</div>
</div>
</div>
</body>
</html>
</code></pre>
<p>Error list from the development server:</p>
<pre><code>[06/Oct/2016 23:40:43] "GET /static/css/bootstrap.min.css HTTP/1.1" 404 102
[06/Oct/2016 23:40:43] "GET /static/js/bootstrap.min.js HTTP/1.1" 404 100
[06/Oct/2016 23:40:44] "GET /static/img/logo.svg HTTP/1.1" 404 93
[06/Oct/2016 23:40:44] "GET /static/js/bootstrap.min.js HTTP/1.1" 404 100
[06/Oct/2016 23:40:44] "GET /static/img/logo.svg HTTP/1.1" 404 93
[06/Oct/2016 23:40:44] "GET /static/img/bg.jpg HTTP/1.1" 404 91
</code></pre>
<p>And from uWSGI:</p>
<pre><code>spawned uWSGI worker 1 (and the only) (pid: 21574, cores: 1)
[pid: 21574|app: 0|req: 1/1] 82.47.105.96 () {42 vars in 795 bytes} [Thu Oct 6 23:41:47 2016] GET / => generated 1515 bytes in 36 msecs (HTTP/1.1 200) 3 headers in 102 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 2/2] 82.47.105.96 () {40 vars in 780 bytes} [Thu Oct 6 23:41:47 2016] GET /static/css/bootstrap.min.css => generated 102 bytes in 3 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 3/3] 82.47.105.96 () {40 vars in 761 bytes} [Thu Oct 6 23:41:47 2016] GET /static/js/bootstrap.min.js => generated 100 bytes in 1 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 4/4] 82.47.105.96 () {40 vars in 772 bytes} [Thu Oct 6 23:41:47 2016] GET /static/img/logo.svg => generated 93 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 5/5] 82.47.105.96 () {40 vars in 761 bytes} [Thu Oct 6 23:41:48 2016] GET /static/js/bootstrap.min.js => generated 100 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 6/6] 82.47.105.96 () {40 vars in 772 bytes} [Thu Oct 6 23:41:48 2016] GET /static/img/logo.svg => generated 93 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
[pid: 21574|app: 0|req: 7/7] 82.47.105.96 () {40 vars in 768 bytes} [Thu Oct 6 23:41:48 2016] GET /static/img/bg.jpg => generated 91 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
</code></pre>
<p>As I said, I'm just getting 404's and I don't understand why. Any help would be great.</p>
| 0 | 2016-10-06T23:44:30Z | 39,907,426 | <p>You need to make sure the <code>urls.py</code> file of your project is updated to serve the static files in production.</p>
<p>Update the <code>urls.py</code> file of your project with the following code.</p>
<pre><code>from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
# ... the rest of your URLconf goes here ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>Another thing: I don't see a STATICFILES_DIRS setting in your settings.py either. You should add one, so that when you run the <code>./manage.py collectstatic</code> command all the static assets from your apps' static directories get collected into your root <code>static</code> directory. That is the best practice in Django, even though placing the static files directly in the <code>STATIC_ROOT</code> directory also works for production.</p>
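<p>For reference, a minimal <code>STATICFILES_DIRS</code> entry could look like the sketch below. The <code>assets</code> directory name is only an example, and in a real <code>settings.py</code> you would use the <code>BASE_DIR</code> already computed at the top of the file instead of the stand-in here:</p>

```python
import os

# Stand-in for the BASE_DIR that settings.py already defines at the top;
# defined here only so the snippet runs on its own.
BASE_DIR = os.getcwd()

# Extra directories for collectstatic to scan, in addition to each app's
# own static/ folder. "assets" is a hypothetical name -- use your folder.
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "assets"),
]
```

<p>After adding it, re-run <code>./manage.py collectstatic</code> so the files land in <code>STATIC_ROOT</code>.</p>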
| 0 | 2016-10-07T00:03:52Z | [
"python",
"django",
"django-staticfiles"
]
|
Replace duplicate values across columns in Pandas | 39,907,315 | <p>I have a simple dataframe as such: </p>
<pre><code>df = [ {'col1' : 'A', 'col2': 'B', 'col3': 'C', 'col4':'0'},
{'col1' : 'M', 'col2': '0', 'col3': 'M', 'col4':'0'},
{'col1' : 'B', 'col2': 'B', 'col3': '0', 'col4':'B'},
{'col1' : 'X', 'col2': '0', 'col3': 'Y', 'col4':'0'}
]
df = pd.DataFrame(df)
df = df[['col1', 'col2', 'col3', 'col4']]
df
</code></pre>
<p>Which looks like this: </p>
<pre><code>| col1 | col2 | col3 | col4 |
|------|------|------|------|
| A | B | C | 0 |
| M | 0 | M | 0 |
| B | B | 0 | B |
| X | 0 | Y | 0 |
</code></pre>
<p>I just want to replace repeated characters with the character '0', across the rows. It boils down to keeping the first duplicate value we come across, as like this: </p>
<pre><code>| col1 | col2 | col3 | col4 |
|------|------|------|------|
| A | B | C | 0 |
| M | 0 | 0 | 0 |
| B | 0 | 0 | 0 |
| X | 0 | Y | 0 |
</code></pre>
<p>This seems so simple but I'm stuck. Any nudges in the right direction would be really appreciated. </p>
| 3 | 2016-10-06T23:49:43Z | 39,907,637 | <p>You can use the <code>duplicated</code> method to return a boolean indexer of whether elements are duplicates or not:</p>
<pre><code>In [214]: pd.Series(['M', '0', 'M', '0']).duplicated()
Out[214]:
0 False
1 False
2 True
3 True
dtype: bool
</code></pre>
<p>Then you could create a mask by mapping this across the rows of your dataframe, and using <code>where</code> to perform your substitution:</p>
<pre><code>is_duplicate = df.apply(pd.Series.duplicated, axis=1)
df.where(~is_duplicate, 0)
col1 col2 col3 col4
0 A B C 0
1 M 0 0 0
2 B 0 0 0
3 X 0 Y 0
</code></pre>
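<p>For a copy-paste check, here is the whole pipeline as one runnable snippet. I substitute the string <code>'0'</code> rather than the integer <code>0</code>, so the fill value matches the frame's existing <code>'0'</code> entries:</p>

```python
import pandas as pd

# Rebuilding the question's frame so the snippet is self-contained
df = pd.DataFrame(
    [{'col1': 'A', 'col2': 'B', 'col3': 'C', 'col4': '0'},
     {'col1': 'M', 'col2': '0', 'col3': 'M', 'col4': '0'},
     {'col1': 'B', 'col2': 'B', 'col3': '0', 'col4': 'B'},
     {'col1': 'X', 'col2': '0', 'col3': 'Y', 'col4': '0'}],
    columns=['col1', 'col2', 'col3', 'col4'])

# True wherever a value repeats an earlier value in the same row
is_duplicate = df.apply(pd.Series.duplicated, axis=1)

# Keep first occurrences, blank out the repeats
result = df.where(~is_duplicate, '0')
print(result)
```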
| 4 | 2016-10-07T00:32:49Z | [
"python",
"pandas"
]
|
Plot a CSV file where the delimiter is '; ' (semicolon + space) | 39,907,407 | <p>I'm learning how to plot things (CSV files) in Python, using <code>import matplotlib.pyplot as plt</code>. </p>
<pre><code>Column1;Column2;Column3;
1;4;6;
2;2;6;
3;3;8;
4;1;1;
5;4;2;
</code></pre>
<p>I can plot the one above with <code>plt.plotfile('test0.csv', (0, 1), delimiter=';')</code>, getting the figure below.</p>
<p><a href="http://i.stack.imgur.com/mYkFJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/mYkFJ.png" alt="enter image description here"></a></p>
<p>I can also plot that data if I change the separator from <code>';'</code> (semicolon) to <code>','</code> (comma).</p>
<pre><code>Column1,Column2,Column3,
1,4,6,
2,2,6,
3,3,8,
4,1,1,
5,4,2,
</code></pre>
<p>using <code>plt.plotfile('test0.csv', (0, 1), delimiter=',')</code>.</p>
<p>But I didn't manage to plot data where the separator is <code>'; '</code> (semicolon + space), as shown below. Can I still do this with <code>matplotlib.pyplot</code>, or is it time for something else?</p>
<pre><code>Column1; Column2; Column3;
1; 4; 6;
2; 2; 6;
3; 3; 8;
4; 1; 1;
5; 4; 2;
</code></pre>
| 2 | 2016-10-07T00:01:13Z | 39,918,711 | <p>So the error matplotlib is throwing you is</p>
<pre><code> TypeError: "delimiter" must be a 1-character string
</code></pre>
<p>Which makes it seem very unlikely you can use <code>'; '</code>. I also had errors thrown when I tried <code>delimiter=';'</code> although you may wish to check that that is reproducible.</p>
<p>However pandas handles this just fine with <code>pd.read_csv</code> if you have it available.</p>
<pre><code>import pandas as pd
alpha = pd.read_csv(filepath,delimiter=';')
alpha.Column1
0 1
1 2
2 3
3 4
4 5
Name: Column1, dtype: int64
</code></pre>
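<p>If you do want to honour the two-character <code>'; '</code> separator from the question, <code>read_csv</code> also accepts a regex separator with the python engine. A sketch, where the regex additionally soaks up the padding spaces and the empty column produced by the trailing semicolon:</p>

```python
import io
import pandas as pd

data = ("Column1; Column2; Column3;\n"
        "1; 4; 6;\n"
        "2; 2; 6;\n"
        "3; 3; 8;\n")

# A separator longer than one character requires engine='python'; writing it
# as a small regex tolerates the spaces after each ';'.
df = pd.read_csv(io.StringIO(data), sep=r';\s*', engine='python')
df = df.dropna(axis=1, how='all')  # drop the empty column from the trailing ';'
print(df)
```

<p>From there the plotting is ordinary matplotlib, e.g. <code>plt.plot(df['Column1'], df['Column2'])</code>.</p>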
| 0 | 2016-10-07T13:41:04Z | [
"python",
"python-2.7",
"matplotlib"
]
|
Django python trying to use forms with token does not work for me | 39,907,411 | <p>I'm learning django, in this moment I'm trying to implement web forms, actually some of them works fine with the data base model but I'm trying to make a new one without use the models. The problem is that django show me the token and not the value typed in the form.</p>
<p>I hope you can help me, to understand more about it.</p>
<p>URLS:</p>
<pre><code>url(r'^test', views.test),
</code></pre>
<p>VIEWS:</p>
<pre><code>def test(request):
if request.method == "POST":
return HttpResponse(request.POST)
return render(request, 'datos.html')
</code></pre>
<p>DATOS HTML:</p>
<pre><code><form action="/test" method="post" name="myForm"> {% csrf_token %}
<input type="text">
<input type="submit">
</form>
</code></pre>
<p>When I run this django show me:</p>
<p><strong>csrfmiddlewaretoken</strong></p>
<p>Can any one help me please?</p>
| 0 | 2016-10-07T00:01:45Z | 39,907,550 | <p>To protect from <a href="https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)" rel="nofollow">Cross-Site Request Forgery</a> attacks, every POST request has to carry a CSRF token from the form. Your template already includes <code>{% csrf_token %}</code>, which is exactly why you see <strong>csrfmiddlewaretoken</strong> in the output: it is the only field in your form with a <code>name</code> attribute, so it is the only value submitted. Give the text input a <code>name</code>:</p>
<pre><code><form action="/test" method="POST" name="myForm">
    {% csrf_token %}
    <input type="text" name="text">
    <input type="submit">
</form>
</code></pre>
<p>And you need to read the submitted data in your view:</p>
<pre><code>def test(request):
if request.method == "POST":
# Getting the value of text field from the form
# If the value is empty set the default value to None
text = request.POST.get('text', None)
# Do not return the POST request
# Return the value you get from the form
return HttpResponse(text)
    # For GET requests execution falls through to here
    # and the empty form is rendered
return render(request, 'datos.html')
# To send data to form do,
# return render(request, 'datos.html', {'my_data': text})
</code></pre>
| 0 | 2016-10-07T00:20:26Z | [
"python",
"django",
"forms",
"token"
]
|
Computing the sum of the numbers with a define statement-python | 39,907,438 | <pre><code>data = [92.5, 87.7, 74.8, 93., 91.7, 90.0, 90.3, 92.5, 100.0,
100.0, 35.7, 37.4, 21.0]
def data_sum(data):
total=0.0
for element in data:
total+=element
return total
</code></pre>
<p>My task is to find the sum of the list above and that is what I came up with so far, however when I go to run the file it returns empty. Thanks for your time.</p>
| -1 | 2016-10-07T00:06:16Z | 39,907,451 | <p>Your code is fine. You are not making any call to this function; that is why your code is not executing. You can call your function as:</p>
<pre><code>data_sum(data)
</code></pre>
<p>However, there is a <code>sum()</code> function in Python that returns the sum of a <code>list</code> of numbers. The ideal way to total a numeric list is to use <a href="https://docs.python.org/2/library/functions.html#sum" rel="nofollow"><code>sum()</code></a>. Hence your code can simply be:</p>
<pre><code>sum(data)
</code></pre>
<p>where <code>data</code> is the <code>list</code> as is mentioned in the question. </p>
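<p>Both approaches can be checked side by side against the question's data:</p>

```python
data = [92.5, 87.7, 74.8, 93., 91.7, 90.0, 90.3, 92.5, 100.0,
        100.0, 35.7, 37.4, 21.0]

def data_sum(data):
    total = 0.0
    for element in data:
        total += element
    return total

# The hand-written loop and the builtin give the same total
print(data_sum(data))
print(sum(data))
```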
| 0 | 2016-10-07T00:08:15Z | [
"python",
"python-2.7"
]
|
Computing the sum of the numbers with a define statement-python | 39,907,438 | <pre><code>data = [92.5, 87.7, 74.8, 93., 91.7, 90.0, 90.3, 92.5, 100.0,
100.0, 35.7, 37.4, 21.0]
def data_sum(data):
total=0.0
for element in data:
total+=element
return total
</code></pre>
<p>My task is to find the sum of the list above and that is what I came up with so far, however when I go to run the file it returns empty. Thanks for your time.</p>
| -1 | 2016-10-07T00:06:16Z | 39,907,561 | <p>You need to call your function like the below example:</p>
<pre><code>data = [92.5, 87.7, 74.8, 93., 91.7, 90.0, 90.3, 92.5, 100.0,
100.0, 35.7, 37.4, 21.0]
def data_sum(data):
total = 0.0
for element in data:
total += element
return total
print data_sum(data)
</code></pre>
<p>Other possible ways to sum the values of a list are the built-in functions <a href="https://docs.python.org/2/library/functions.html#sum" rel="nofollow">sum</a> or <a href="https://docs.python.org/2/library/functions.html#reduce" rel="nofollow">reduce</a>, like this:</p>
<pre><code>print sum(data)
print reduce(lambda x, y: x + y, data)
</code></pre>
| 0 | 2016-10-07T00:22:48Z | [
"python",
"python-2.7"
]
|
Assist with re Module | 39,907,452 | <p>I need to extract all those strings between the patterns nr: or /nr:. Can anyone please help me?</p>
<p>The string is </p>
<pre><code>nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID
</code></pre>
<p>Output I am expecting is</p>
<pre><code>Organization, Customer, Agreement,...OptionID
</code></pre>
| -1 | 2016-10-07T00:08:15Z | 39,907,532 | <p>Use the <code>re</code> module, <code>split</code> the string and filter out matches whose lengths are 0:</p>
<pre><code>import re
test_string = "nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID"
columns = [x for x in re.split("/{0,1}nr:",test_string) if len(x) > 0]
print(columns)
['Organization', 'Customer', 'Agreement', 'date', 'coverage', 'Premium', 'Option', 'OptionID']
</code></pre>
<p>I hope this helps.</p>
| 0 | 2016-10-07T00:18:54Z | [
"python"
]
|
Assist with re Module | 39,907,452 | <p>I need to extract all those strings between the patterns nr: or /nr:. Can anyone please help me?</p>
<p>The string is </p>
<pre><code>nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID
</code></pre>
<p>Output I am expecting is</p>
<pre><code>Organization, Customer, Agreement,...OptionID
</code></pre>
| -1 | 2016-10-07T00:08:15Z | 39,907,542 | <p>Here is a way with regular expression and split:</p>
<pre><code>import re
string = 'nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID'
x = filter(None, re.split('/*nr:', string))
print x
</code></pre>
<p>The regular expression looks for <code>nr:</code> preceded (or not) by a '/'. Since the <code>re.split</code> function leaves some empty elements, applying a <code>filter</code> to the result gives you the desired output.</p>
<p>Also see it in action here: <a href="https://eval.in/656744" rel="nofollow">https://eval.in/656744</a></p>
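<p>Note the snippet above is Python 2 (<code>print x</code>, and <code>filter</code> returning a list). Under Python 3, <code>filter</code> is lazy, so wrap it in <code>list()</code>:</p>

```python
import re

string = ('nr:Organization/nr:Customer/nr:Agreement/nr:date/'
          'nr:coverage/nr:Premium/nr:Option/nr:OptionID')

# list() is needed in Python 3 because filter() returns an iterator there
x = list(filter(None, re.split('/*nr:', string)))
print(x)
```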
| 0 | 2016-10-07T00:19:55Z | [
"python"
]
|
Assist with re Module | 39,907,452 | <p>I need to extract all those strings between the patterns nr: or /nr:. Can anyone please help me?</p>
<p>The string is </p>
<pre><code>nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID
</code></pre>
<p>Output I am expecting is</p>
<pre><code>Organization, Customer, Agreement,...OptionID
</code></pre>
| -1 | 2016-10-07T00:08:15Z | 39,907,570 | <p>If you want to do it without using the "re" module, it looks like this would work:</p>
<pre><code>test_string = 'nr:Organization/nr:Customer/nr:Agreement/nr:date/nr:coverage/nr:Premium/nr:Option/nr:OptionID'
columns = [f[3:] for f in test_string.split('/')]
print(columns)
</code></pre>
| 1 | 2016-10-07T00:23:44Z | [
"python"
]
|
Cannot import sqlite3 in Python3 | 39,907,475 | <p>I am unable to import the sqlite3 module in Python, version 3.5.0. Here's what I get:</p>
<p><code>>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.5/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named '_sqlite3'</code></p>
<p>I know, I know, there are PLENTY of StackOverflow posts and support forums across the web where people complain about this problem, but none of the posted solutions have worked for me so far. Here's where I've been:</p>
<ol>
<li><p>I also have Python 2.6.6 installed on this server, which is running CentOS 6.8 x86_64. I can open up the Python REPL and import sqlite3 just fine when using Python 2.6.6. I can also use sqlite3 from straight from bash and nothing seems awry.</p></li>
<li><p><a href="http://stackoverflow.com/questions/1210664/no-module-named-sqlite3">This helpful question</a> looked promising. I tried to re-configure and re-compile Python3.5 with the <code>--enable-loadable-sqlite-extensions</code> option, as user jammyWolf suggested. Nope, same error still occurs.</p></li>
<li><p>I've been using virtual environments like a good boy, but I have root access to this server. So, I was a bad boy and ran python3 as root without any virtualenvs activated. Still no luck. So I don't think it has anything to do with permissions.</p></li>
<li><p>I noticed that in the error message, it says <code>No module named '_sqlite3'</code>. <a href="http://stackoverflow.com/questions/20757763/why-are-modules-imported-as-name-in-another-module">This thread</a> suggests that the underscore before the module name means that the module is an implementation detail, and isn't exposed in the API. ... I'm not sure what to make of this information, but there may be a hint somewhere in there.</p></li>
</ol>
<p>Any ideas?</p>
| 1 | 2016-10-07T00:10:56Z | 39,907,500 | <p>Install the <a href="http://centos-packages.com/7/package/sqlite-devel/" rel="nofollow"><code>sqlite-devel</code></a> package, which includes the header and library required to build the <code>sqlite3</code> extension.</p>
<pre><code>yum install sqlite-devel
</code></pre>
<p><strong>NOTE</strong>: Python does not include the <code>sqlite3</code> library itself, only an extension module (wrapper) around it. After installing the headers, re-run Python's <code>./configure</code> and <code>make</code> so the <code>_sqlite3</code> extension actually gets built.</p>
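<p>Once the interpreter has been rebuilt against the installed headers, a quick sanity check along these lines should succeed where the original import failed:</p>

```python
import sqlite3

# If the _sqlite3 extension was built, this connects to an in-memory
# database and reports the linked SQLite library version.
conn = sqlite3.connect(':memory:')
version = conn.execute('SELECT sqlite_version()').fetchone()[0]
print(version)
conn.close()
```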
| -1 | 2016-10-07T00:14:19Z | [
"python",
"linux",
"python-3.x",
"sqlite3",
"python-import"
]
|
Calculating cube root: OverflowError: ('Result too large') | 39,907,504 | <p>So I'm supposed to create a code that calculates the cube root of an inputted number with the approximation of up to 2 decimal places. This code above calculates the square root of a number with up to 2 decimal places:</p>
<pre><code>epsilon = 0.01
guess = num/2.0
while abs(guess**2 - num) >= epsilon:
guess = guess - abs(guess**2 - num) / (2 * guess)
print("Guess:", guess)
</code></pre>
<p>So apparently I am able to do the cube root with that criteria by modifying this code that was given and using this in the code:</p>
<pre><code>delta = abs(guess**3 - num) / 100.0
</code></pre>
<p>I tried using that line and modifying the code used for square root and I keep getting: </p>
<pre><code>OverflowError: ('Result too large')
</code></pre>
<p>This is what my code looks like so far:</p>
<pre><code>num = float(input("Enter a number: "))
epsilon = 0.01
guess = num/2.0
while abs(guess**3 - num) >= epsilon:
guess = abs(guess - (guess**3 - num)/100.0)
print("Guess:", guess)
</code></pre>
<p>When I run that code above this is what happens:</p>
<blockquote>
<p>runfile('C:/Users/100617828/Documents/CSCI1040U/edits.py', wdir='C:/Users/100617828/Documents/CSCI1040U')</p>
<p>Enter a number: 34 Traceback (most recent call last):</p>
<p>File "", line 1, in runfile('C:/Users/100617828/Documents/CSCI1040U/edits.py', wdir='C:/Users/100617828/Documents/CSCI1040U')</p>
<p>File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile execfile(filename, namespace)</p>
<p>File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 88, in execfile exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)</p>
<p>File "C:/Users/100617828/Documents/CSCI1040U/edits.py", line 11, in while abs(guess**3 - num) >= epsilon:</p>
<p>OverflowError: (34, 'Result too large')</p>
</blockquote>
<p><strong>Edit</strong>
<a href="http://i.stack.imgur.com/qT1nZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/qT1nZ.png" alt="enter image description here"></a></p>
<p>This is what my assignment sheet is telling me to do but it seems I don't need to use <code>delta = abs(guess**3 - num)/100.0</code> ?</p>
| 0 | 2016-10-07T00:15:30Z | 39,907,815 | <p>The method you are using is called <a href="https://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow">Newton-Raphson approximation</a>, and you should use the first derivative of the function you are trying to solve as the denominator. Because the first derivative of <code>x^3</code> is <code>3*x^2</code>, the iteration line must be:</p>
<pre><code>guess = guess - (guess**3 - num) / (3 * guess**2)
</code></pre>
<p>See the working code at <a href="https://repl.it/DqZA/0" rel="nofollow">https://repl.it/DqZA/0</a></p>
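<p>Folding that update into the question's loop gives a complete sketch. The <code>cube_root</code> name is mine, and it assumes a positive input:</p>

```python
def cube_root(num, epsilon=0.01):
    """Newton-Raphson cube root for num > 0, accurate to within epsilon."""
    guess = num / 2.0
    while abs(guess**3 - num) >= epsilon:
        # Newton step: x <- x - f(x)/f'(x) with f(x) = x**3 - num
        guess = guess - (guess**3 - num) / (3 * guess**2)
    return guess

print(cube_root(34.0))  # about 3.2396
```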
| 4 | 2016-10-07T00:57:36Z | [
"python",
"python-3.x"
]
|
How to perform a root command with Pexpect? | 39,907,546 | <p>I'm working on a python program to assist with the apt-get tool. I want to use pexpect to download the chosen package. I believe I'm getting stuck at the child.expect line. It seems to timeout when it comes to that line.</p>
<pre><code>butt = "vlc"
child = pexpect.spawn('sudo apt-get install ' + butt)
child.logfile = sys.stdout
child.expect('[sudo] password for user1: ')
child.sendline('mypassword')
</code></pre>
<p>This is the log file.</p>
<pre class="lang-none prettyprint-override"><code>TIMEOUT: Timeout exceeded.
<pexpect.spawn object at 0xb5ec558c>
version: 3.2
command: /usr/bin/sudo
args: ['/usr/bin/sudo', 'apt-get', 'install', 'vlc']
searcher: <pexpect.searcher_re object at 0xb565710c>
buffer (last 100 chars): '[sudo] password for user1: '
before (last 100 chars): '[sudo] password for user1: '
after: <class 'pexpect.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 27641
child_fd: 4
closed: False
timeout: 30
delimiter: <class 'pexpect.EOF'>
logfile: <open file '<stdout>', mode 'w' at 0xb74d8078>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
</code></pre>
<p>UPDATE:</p>
<p>The password gets sent just fine. It also expects the next line as well, but then enters "Y" and does nothing.</p>
<pre><code>child = pexpect.spawn('sudo apt-get install ' + butt)
child.logfile = sys.stdout
child.expect_exact('[sudo] password for user1: ')
child.sendline('mypass')
child.expect_exact('Do you want to continue? [Y/n] ')
child.sendline('Y')
</code></pre>
<p>SOLVED: </p>
<p>I needed to add this line at the end.</p>
<pre><code>child.expect(pexpect.EOF, timeout=None)
</code></pre>
| 2 | 2016-10-07T00:20:14Z | 39,907,632 | <p>Try <code>child.expect_exact()</code>.</p>
<p>From the docs:</p>
<blockquote>
<p>The expect() method waits for the child application to return a given string. The string you specify is a regular expression, so you can match complicated patterns.</p>
</blockquote>
<p>It is good practice to use <code>expect()</code> only when the intent is to match a regular expression.</p>
| 1 | 2016-10-07T00:32:34Z | [
"python",
"linux",
"ubuntu",
"pexpect"
]
|
How to perform a root command with Pexpect? | 39,907,546 | <p>I'm working on a python program to assist with the apt-get tool. I want to use pexpect to download the chosen package. I believe I'm getting stuck at the child.expect line. It seems to timeout when it comes to that line.</p>
<pre><code>butt = "vlc"
child = pexpect.spawn('sudo apt-get install ' + butt)
child.logfile = sys.stdout
child.expect('[sudo] password for user1: ')
child.sendline('mypassword')
</code></pre>
<p>This is the log file.</p>
<pre class="lang-none prettyprint-override"><code>TIMEOUT: Timeout exceeded.
<pexpect.spawn object at 0xb5ec558c>
version: 3.2
command: /usr/bin/sudo
args: ['/usr/bin/sudo', 'apt-get', 'install', 'vlc']
searcher: <pexpect.searcher_re object at 0xb565710c>
buffer (last 100 chars): '[sudo] password for user1: '
before (last 100 chars): '[sudo] password for user1: '
after: <class 'pexpect.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 27641
child_fd: 4
closed: False
timeout: 30
delimiter: <class 'pexpect.EOF'>
logfile: <open file '<stdout>', mode 'w' at 0xb74d8078>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
</code></pre>
<p>UPDATE:</p>
<p>The password gets sent just fine. It also expects the next line as well, but then enters "Y" and does nothing.</p>
<pre><code>child = pexpect.spawn('sudo apt-get install ' + butt)
child.logfile = sys.stdout
child.expect_exact('[sudo] password for user1: ')
child.sendline('mypass')
child.expect_exact('Do you want to continue? [Y/n] ')
child.sendline('Y')
</code></pre>
<p>SOLVED: </p>
<p>I needed to add this line at the end.</p>
<pre><code>child.expect(pexpect.EOF, timeout=None)
</code></pre>
| 2 | 2016-10-07T00:20:14Z | 39,907,724 | <p>The immediate problem is that this:</p>
<pre><code>child.expect('[sudo] password for user1: ')
</code></pre>
<p>uses a regular expression. The <code>[...]</code> construct has special meaning in regular expressions, so what you're actually waiting for there is one of the letters "d", "o", "s", or "u" followed by the text <code>password for user1:</code>. But <code>sudo</code> is sending the text <code>[sudo]</code> first and the regular expression doesn't match that, because the final character of it is <code>]</code> not one of those letters.</p>
<p>There are numerous possible solutions to this. You can just have it match <code>password for user1:</code>. You can use <code>expect_exact()</code> as suggested by JLeClerc (which is the solution I also favor). You can escape the brackets in your regular expression so they don't have their usual meaning: <code>\[sudo\]</code> (note that when specifying this as a Python string, you will need to double the backslashes or use a raw string literal).</p>
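<p>The character-class effect is easy to see with the <code>re</code> module directly, which is what pexpect uses to compile string patterns:</p>

```python
import re

prompt = '[sudo] password for user1: '

# As a regex, [sudo] means "one of s, u, d, o", so this never matches the
# literal prompt: the character before ' password' is ']'.
print(re.search(r'[sudo] password for user1: ', prompt))  # None

# Escaping the brackets (re.escape does it for you) matches literally.
print(re.search(re.escape('[sudo] password for user1: '), prompt))
```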
<p>The other problem is that if you have already given your password in the last few minutes, you may not be prompted for it. Then the <code>expect()</code> call will definitely time out waiting for it. The easiest way to address this is to issue <code>sudo -k</code> first. You can even do it on the same command line:</p>
<pre><code>child = pexpect.spawn('sudo -k; sudo apt-get install ' + butt)
</code></pre>
| 1 | 2016-10-07T00:45:57Z | [
"python",
"linux",
"ubuntu",
"pexpect"
]
|
pandas groupby a list of strings | 39,907,589 | <p>Imagine if you have a list of strings and a pandas dataframe with a column <code>Foo</code> that has words which may contain those strings:</p>
<p><code>
my_list = ['A', 'B', 'C']
</code> </p>
<p>df['Foo'] has words that contain 'A' or 'B' or 'C',</p>
<p>you can extract the ones that contain by <code>df.Foo.str.contains(my_list[0])</code> etc., but can you group by the rows that match the list? So the groupby would be by contains 'A' or 'B' or 'C'</p>
| 0 | 2016-10-07T00:26:08Z | 39,908,159 | <p>Yes, you can do this by passing a function to groupby()</p>
<pre><code>data = {'Foo': {0: 'apple',
1: 'body',
2: 'animal',
3: 'cot',
4: 'cord',
5: 'bed',
6: 'ant'}}
df = pd.DataFrame(data)
print (df)
Foo
0 apple
1 body
2 animal
3 cot
4 cord
5 bed
6 ant
</code></pre>
<p>get_grp() will be called for each value in df['Foo']. Note: x is only the index of df so we have to pass some additional things</p>
<pre><code>def get_grp(x, df, col_name, my_list):
for c in my_list:
if c in df[col_name][x]:
return c
my_list = ['a', 'b', 'c']
g = df.groupby(lambda x : get_grp(x, df, 'Foo', my_list))
print (type(g))
print (g.count())
<class 'pandas.core.groupby.DataFrameGroupBy'>
Foo
a 3
b 2
c 2
</code></pre>
<p>Note: get_grp() only returns one item of my_list. So 'Ball' would only fall into one grouping, and it would be 'a' because it's the first item in my_list that we check.</p>
| 0 | 2016-10-07T01:50:43Z | [
"python",
"pandas",
"data-analysis"
]
|
How do I create a list of true and false values when comparing two numpy arrays? | 39,907,614 | <p>I'm creating a list of true and false values out of the following array using list comprehension:</p>
<pre><code>array([[ True, True, False, ..., False, True, False],
[ True, True, False, ..., False, True, True],
[ True, False, True, ..., False, True, False],
...,
[ True, True, False, ..., True, True, False],
[ True, True, False, ..., True, True, False],
[ True, True, False, ..., True, True, False]], dtype=bool)
</code></pre>
<p>(It has 50 x 85 dimensions)</p>
<p>This is my list comprehension:</p>
<pre><code>list_1 = [features[index] & features[i] for i in features]
</code></pre>
<p>where <code>index</code> is an integer, (in my case, it's <code>14</code>).</p>
<p>This is what <code>print(features[index])</code> looks like:</p>
<pre><code>[ True True False False False False False False True False False True
True False False False False True True False False False True False
True True True True False False True True True True True False
False False False False False False False False True True False False
False True True False True False True False True False True True
False True True True False False True True False False True False
False True True False False True True False]
</code></pre>
<p>and an example of <code>print(features[i])</code> looks like:</p>
<pre><code>[False False False False True False True False True False False False
False False False False False False False True False False False False
False True False True False False True False False False False True
False False False False True True True False False True True True
False False False True True False False False False False False False
True True True False False True False True False True False False
True True True False False True False False]
</code></pre>
<p>Thus, both arrays seem to be of the same length. However, when I compare the index array to all of the other arrays in the feature array, I get the following error:</p>
<pre><code>IndexError: index 54 is out of bounds for axis 0 with size 50
</code></pre>
<p><strong>Question:</strong> how do I create a list of true and false values when comparing the <code>features[index]</code> array to <strong>ALL</strong> other arrays in the "features" array?
Ultimately, the list should be composed of true values where the elements in <code>features[index]</code> matches the other elements in <strong>ALL</strong> of the other arrays. Thus, it's unlikely that there'll be many true values in the list. The list should be composed of false values where the elements in <code>features[index]</code> doesn't match the elements in <strong>ANY</strong> of the arrays of the features array. </p>
| 0 | 2016-10-07T00:29:55Z | 39,907,905 | <p>Assuming my understanding is correct you just need to:</p>
<pre><code>numpy.logical_and.reduce(features[index] == features)
</code></pre>
<p>Here we first produce the matches between all rows and <code>features[index]</code> with:</p>
<pre><code>features[index] == features
</code></pre>
<p>Then, we reduce the matrix along its first axis, which effectively tests, for a column <code>j</code>, whether <code>features[i][j] == features[index][j]</code> holds for every row <code>i</code>.</p>
<p>As an example:</p>
<pre><code>>>> features = numpy.asarray(numpy.random.randint(2, size=(5, 10)), dtype=bool)
>>> features
array([[False, True, True, True, False, False, True, False, False,
False],
[ True, False, True, True, True, True, True, True, False,
True],
[ True, False, True, False, False, True, True, True, True,
True],
[False, False, True, False, True, False, False, True, False,
True],
[False, True, True, True, False, True, False, True, True,
True]], dtype=bool)
>>> numpy.logical_and.reduce(features[3] == features)
array([False, False, True, False, False, False, False, False, False, False])
</code></pre>
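<p>An equivalent spelling, if you prefer to name the reduction axis explicitly, is <code>.all(axis=0)</code>. A small sketch with a fixed array:</p>

```python
import numpy as np

features = np.array([[True, False, True],
                     [True, True,  True],
                     [True, False, False]])
index = 0

reduced = np.logical_and.reduce(features[index] == features)
same = (features[index] == features).all(axis=0)  # identical result

print(reduced)
print(same)
```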
| 0 | 2016-10-07T01:11:59Z | [
"python",
"arrays",
"numpy"
]
|
How do I create a list of true and false values when comparing two numpy arrays? | 39,907,614 | <p>I'm creating a list of true and false values out of the following array using list comprehension:</p>
<pre><code>array([[ True, True, False, ..., False, True, False],
[ True, True, False, ..., False, True, True],
[ True, False, True, ..., False, True, False],
...,
[ True, True, False, ..., True, True, False],
[ True, True, False, ..., True, True, False],
[ True, True, False, ..., True, True, False]], dtype=bool)
</code></pre>
<p>(It has 50 x 85 dimensions)</p>
<p>This is my list comprehension:</p>
<pre><code>list_1 = [features[index] & features[i] for i in features]
</code></pre>
<p>where <code>index</code> is an integer, (in my case, it's <code>14</code>).</p>
<p>This is what <code>print(features[index])</code> looks like:</p>
<pre><code>[ True True False False False False False False True False False True
True False False False False True True False False False True False
True True True True False False True True True True True False
False False False False False False False False True True False False
False True True False True False True False True False True True
False True True True False False True True False False True False
False True True False False True True False]
</code></pre>
<p>and an example of <code>print(features[i])</code> looks like:</p>
<pre><code>[False False False False True False True False True False False False
False False False False False False False True False False False False
False True False True False False True False False False False True
False False False False True True True False False True True True
False False False True True False False False False False False False
True True True False False True False True False True False False
True True True False False True False False]
</code></pre>
<p>Thus, both arrays seem to be of the same length. However, when I compare the index array to all of the other arrays in the feature array, I get the following error:</p>
<pre><code>IndexError: index 54 is out of bounds for axis 0 with size 50
</code></pre>
<p><strong>Question:</strong> how do I create a list of true and false values when comparing the <code>features[index]</code> array to <strong>ALL</strong> other arrays in the "features" array?
Ultimately, the list should be composed of true values where the elements in <code>features[index]</code> match the corresponding elements in <strong>ALL</strong> of the other arrays. Thus, it's unlikely that there'll be many true values in the list. The list should be composed of false values where the elements in <code>features[index]</code> don't match the elements in <strong>ANY</strong> of the arrays of the features array.</p>
| 0 | 2016-10-07T00:29:55Z | 39,922,058 | <p>With your array excerpt I get the same sort of error with just the indexing:</p>
<pre><code>In [726]: features
Out[726]:
array([[ True, True, False, True, False, True, False],
[ True, True, False, True, False, True, True],
[ True, False, True, True, False, True, False],
[ True, True, False, True, True, True, False],
[ True, True, False, True, True, True, False],
[ True, True, False, True, True, True, False]], dtype=bool)
In [727]: [features[i] for i in features]
/usr/local/bin/ipython3:1: VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 6 but corresponding boolean dimension is 7
...
IndexError: index 6 is out of bounds for axis 0 with size 6
</code></pre>
<p><code>[... for i in features]</code> iterates over the rows of <code>features</code>, so each <code>i</code> is itself a boolean row of length 7, not an integer. <code>features[i]</code> therefore tries to boolean-index the 6 rows of <code>features</code> with a 7-element mask, hence the error.</p>
<p>The correct iteration is:</p>
<pre><code>In [730]: [features[0] & features[i] for i in range(features.shape[0])]
Out[730]:
[array([ True, True, False, True, False, True, False], dtype=bool),
array([ True, True, False, True, False, True, False], dtype=bool),
array([ True, False, False, True, False, True, False], dtype=bool),
array([ True, True, False, True, False, True, False], dtype=bool),
array([ True, True, False, True, False, True, False], dtype=bool),
array([ True, True, False, True, False, True, False], dtype=bool)]
</code></pre>
<p>or even <code>[features[0] & i for i in features]</code>.</p>
<p>But you don't need to do this in a list comprehension. A single row such as <code>features[0]</code> has shape (7,), which can be broadcast against the whole <code>features</code> array of shape (6, 7):</p>
<pre><code>features[0] & features
</code></pre>
<p>This addresses the error, but not the question of how you want to compare <code>features[index]</code> with the rest. I haven't fully digested that. Show how <code>features[i]</code> should be compared with <code>features[j]</code>.</p>
| 0 | 2016-10-07T16:34:33Z | [
"python",
"arrays",
"numpy"
]
|
Plot specifying column by name, upper case issue | 39,907,708 | <p>I'm learning how to plot things (CSV files) in Python, using <code>import matplotlib.pyplot as plt</code>. </p>
<pre><code>Column1;Column2;Column3;
1;4;6;
2;2;6;
3;3;8;
4;1;1;
5;4;2;
</code></pre>
<p>I can plot the one above with <code>plt.plotfile('test0.csv', (0, 1), delimiter=';')</code>, which gives me the figure below.</p>
<p>Do you see the axis labels, <code>column1</code> and <code>column2</code>? They are in lower case in the figure, but in the data file they begin with upper case.</p>
<p>Also, I tried <code>plt.plotfile('test0.csv', ('Column1', 'Column2'), delimiter=';')</code>, which does not work.</p>
<p>So it seems <code>matplotlib.pyplot</code> works only with lowercase names :(</p>
<p>Putting this issue together with <a href="http://stackoverflow.com/questions/39907407/plot-a-csv-file-where-the-delimiter-is-semicolon-space">this other one</a>, I guess it's time to try something else.</p>
<p>As I am pretty new to plotting in Python, I would like to ask: Where should I go from here, to get a little more than what <code>matplotlib.pyplot</code> provides?</p>
<p>Should I go to <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>?</p>
<p><a href="http://i.stack.imgur.com/Mvqbd.png" rel="nofollow"><img src="http://i.stack.imgur.com/Mvqbd.png" alt="enter image description here"></a></p>
| 0 | 2016-10-07T00:44:14Z | 39,912,122 | <p>You are mixing up two things here.<br>
Matplotlib is designed for plotting data. It is not designed for managing data.<br>
Pandas is designed for data analysis. Even if you were using pandas, you would still need to plot the data. How? Well, probably using matplotlib!
Independently of what you're doing, think of it as a three step process:</p>
<ol>
<li>Data acquisition, data read-in</li>
<li>Data processing</li>
<li>Data representation / plotting</li>
</ol>
<p><code>plt.plotfile()</code> is a convenience function, which you can use if you don't need step 2 at all. But it surely has its limitations.</p>
<p>Methods to read in data (not complete of course) are using pure python <code>open</code>, python <code>csvReader</code> or similar, <code>numpy</code> / <code>scipy</code>, <code>pandas</code> etc.</p>
<p>Depending on what you want to do with your data, you can already choose a suitable input method: <code>numpy</code> for large numerical data sets, <code>pandas</code> for datasets which include qualitative data or heavily rely on cross correlations, etc.</p>
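<p>A minimal sketch of that three step process for the file in the question (the inline string stands in for <code>test0.csv</code>; note that <code>read_csv</code> keeps the header capitalization exactly as written in the file, which sidesteps the lower-case label problem):</p>

```python
import io

import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# stand-in for the question's semicolon-delimited test0.csv
raw = "Column1;Column2;Column3;\n1;4;6;\n2;2;6;\n3;3;8;\n4;1;1;\n5;4;2;\n"

# 1. read-in: the trailing ';' creates an empty column, dropped here
df = pd.read_csv(io.StringIO(raw), sep=';').dropna(axis=1, how='all')

# 3. plotting: label the axes with the original (upper-case) column names
plt.plot(df['Column1'], df['Column2'])
plt.xlabel('Column1')
plt.ylabel('Column2')
print(df.columns.tolist())
```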
| 1 | 2016-10-07T07:49:14Z | [
"python",
"python-2.7",
"matplotlib"
]
|
Pandas: How to do analysis on array-like field? | 39,907,720 | <p>I'm doing analysis on movies, and each movie have a <code>genre</code> attribute, it might be several specific genre, like <code>drama</code>, <code>comedy</code>, the data looks like this:</p>
<pre><code>movie_list = [
{'name': 'Movie 1',
'genre' :'Action, Fantasy, Horror'},
{'name': 'Movie 2',
'genre' :'Action, Comedy, Family'},
{'name': 'Movie 3',
'genre' :'Biography, Drama'},
{'name': 'Movie 4',
'genre' :'Biography, Drama, Romance'},
{'name': 'Movie 5',
'genre' :'Drama'},
{'name': 'Movie 6',
'genre' :'Documentary'},
]
</code></pre>
<p>The problem is: how do I do analysis on this? For example, how do I know how many action movies are here, and how do I query for the category action? Specifically:</p>
<ol>
<li><p>How do I get all the categories in this list? So I know how many movies each contains</p></li>
<li><p>How do I query for a certain kind of movies, like action?</p></li>
<li><p>Do I need to turn the <code>genre</code> into <code>array</code>?</p></li>
</ol>
<p>Currently I can get by on the 2nd question with <code>df[df['genre'].str.contains("Action")].describe()</code>, but is there better syntax?</p>
| 0 | 2016-10-07T00:45:22Z | 39,908,150 | <p>If your data isn't too huge, I would do some pre-processing and get 1 record per genre. That is, I would structure your data frame like this:</p>
<pre><code> Name Genre
Movie 1 Action
Movie 1 Fantasy
Movie 1 Horror
...
</code></pre>
<p>Note that the names should be repeated. While this may make your data set much bigger, if your system can handle it, it can make data analysis very easy.
Use the following code to do the transformation:</p>
<pre><code>import pandas as pd
def reformat_movie_list(movies):
name = []
genre = []
result = pd.DataFrame()
for movie in movies:
movie_name = movie["name"]
movie_genres = movie["genre"].split(",")
for movie_genre in movie_genres:
name.append(movie_name.strip())
genre.append(movie_genre.strip())
result["name"] = name
result["genre"] = genre
return result
</code></pre>
<p>In this format, your 3 questions become</p>
<ol>
<li><p>How do I get all the categories in this list? So I know each contains how many movies?</p>
<p>movie_df.groupby("genre").agg("count")</p></li>
</ol>
<p>see <a href="http://stackoverflow.com/questions/19384532/how-to-count-number-of-rows-in-a-group-in-pandas-group-by-object">How to count number of rows in a group in pandas group by object?</a></p>
<ol start="2">
<li><p>How do I query for a certain kind of movies, like action?</p>
    <p>horror_movies = movie_df[movie_df["genre"] == "Horror"]</p></li>
</ol>
<p>see <a href="http://stackoverflow.com/questions/11869910/pandas-filter-rows-of-dataframe-with-operator-chaining">pandas: filter rows of DataFrame with operator chaining</a></p>
<ol start="3">
<li>Do I need to turn the genre into array?</li>
</ol>
<p>Your de-normalization of the data should take care of it.</p>
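<p>As a further option that keeps one row per movie (a sketch, separate from the de-normalized approach above), pandas can build one indicator column per genre with <code>str.get_dummies</code>:</p>

```python
import pandas as pd

movies = pd.DataFrame({
    'name':  ['Movie 1', 'Movie 2', 'Movie 5'],
    'genre': ['Action, Fantasy, Horror', 'Action, Comedy, Family', 'Drama'],
})

# one 0/1 column per genre, split on ', '
dummies = movies['genre'].str.get_dummies(sep=', ')

counts = dummies.sum()                    # movies per genre
action = movies[dummies['Action'] == 1]   # query one category
print(counts.to_dict())
```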
| 0 | 2016-10-07T01:49:37Z | [
"python",
"pandas",
"statistics"
]
|
Creating a time range in python from a set of custom dates | 39,907,791 | <p>Let's say I have a set of dates in a DateTimeIndex. There are no times, just dates, and for each date in the set I would like to have multiple DateTimes. For example, for each day I would like index values every hour from 10am-2pm. I have been using pd.date_range, which works well for one datetime, but I am not sure how to apply it across a list of custom dates.</p>
| 0 | 2016-10-07T00:54:54Z | 39,908,209 | <p>Consider a cross join with <code>date</code> + <code>time</code> operation.</p>
<pre><code># Example data:
# NumData1 NumData2 NumData3 NumData4 NumData5
# DateExample
# 2016-10-01 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-02 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-03 0.376473 0.215968 0.757137 0.713013 0.337774
# 2016-10-04 0.078788 0.055791 0.766027 0.507360 0.808768
# 2016-10-05 0.860383 0.920024 0.922637 0.501969 0.097542
df['Date'] = df.index # CREATE A DATE COLUMN FROM INDEX
df['key'] = 1 # CROSS JOIN MERGE KEY
timedf = pd.DataFrame({'Hour': [pd.Timedelta(hours=h) for h in list(range(10,15))],
'key': 1})
df = pd.merge(df, timedf, on=['key']) # CROSS JOIN (M x N cols)
df['Date'] = df['Date'] + df['Hour'] # DATE + TIME OPERATION
df = df.set_index('Date').drop(['key', 'Hour'], axis=1) # CREATE NEW INDEX W/ FINAL COLS
print(df)
# NumData1 NumData2 NumData3 NumData4 NumData5
# Date
# 2016-10-01 10:00:00 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-01 11:00:00 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-01 12:00:00 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-01 13:00:00 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-01 14:00:00 0.299950 0.740431 0.275306 0.168967 0.902464
# 2016-10-02 10:00:00 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-02 11:00:00 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-02 12:00:00 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-02 13:00:00 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-02 14:00:00 0.335424 0.751552 0.458261 0.277734 0.204546
# 2016-10-03 10:00:00 0.376473 0.215968 0.757137 0.713013 0.337774
# 2016-10-03 11:00:00 0.376473 0.215968 0.757137 0.713013 0.337774
# ...
</code></pre>
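<p>If all you need is the expanded index itself, a plain comprehension over the custom dates is a lighter alternative sketch (hours 10 through 14 give 10am-2pm inclusive):</p>

```python
import pandas as pd

dates = pd.DatetimeIndex(['2016-10-01', '2016-10-02'])  # dates only, no times

idx = pd.DatetimeIndex([d + pd.Timedelta(hours=h)
                        for d in dates
                        for h in range(10, 15)])
print(len(idx), idx[0], idx[-1])
```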
| 1 | 2016-10-07T01:58:24Z | [
"python",
"pandas"
]
|
subprocess.Popen execution of a script stuck | 39,907,800 | <p>I am trying to execute a command as follows, but it is STUCK in the <code>try</code> block below until the timeout kicks in. The python script executes fine by itself independently. Can anyone suggest why this happens and how to debug it?</p>
<pre><code>cmd = "python complete.py"
proc = subprocess.Popen(cmd.split(' '),stdout=subprocess.PIPE )
print "Executing %s"%cmd
try:
print "In try" **//Stuck here**
proc.wait(timeout=time_out)
except TimeoutExpired as e:
print e
proc.kill()
with proc.stdout as stdout:
for line in stdout:
print line,
</code></pre>
| 0 | 2016-10-07T00:56:02Z | 39,907,869 | <p><code>proc.stdout</code> isn't available to be read <em>after the process exits</em>. Instead, you need to read it <em>while the process is running</em>. <code>communicate()</code> will do that for you, but since you're not using it, you get to do it yourself.</p>
<p>Right now, your process is almost certainly hanging trying to write to its stdout -- which it can't do, because the other end of the pipe isn't being read from.</p>
<p>See also <a href="http://stackoverflow.com/questions/1191374/using-module-subprocess-with-timeout">Using module 'subprocess' with timeout</a>.</p>
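<p>A sketch of the <code>communicate()</code> approach (the <code>timeout=</code> API assumes Python 3.3+, matching the <code>TimeoutExpired</code> used in the question; the inline child command is a stand-in for <code>python complete.py</code>):</p>

```python
import subprocess
import sys

proc = subprocess.Popen([sys.executable, '-c', "print('hello from child')"],
                        stdout=subprocess.PIPE)
try:
    # drains stdout while waiting, so the child can never block on a full pipe
    out, _ = proc.communicate(timeout=30)
except subprocess.TimeoutExpired:
    proc.kill()
    out, _ = proc.communicate()  # collect whatever was written before the kill
print(out.decode())
```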
| 1 | 2016-10-07T01:06:51Z | [
"python"
]
|
Word guessing game -- Can this be written any better? | 39,907,806 | <p>This is just a portion of the game, a function that takes in the secret word and the letters guessed as arguments and tells you if they guessed the word correctly.</p>
<p>I'll be completely honest, this is from an assignment on an edX course, <strong>however</strong> I have already passed this assignment, this code works. I am just wondering if it can be written any better. Some people in the discussion forums were talking about how they solved it with 1 line, which is why I'm asking.</p>
<pre class="lang-py prettyprint-override"><code>def isWordGuessed(secretWord, lettersGuessed):
guessed = []
l= str(lettersGuessed)
s= list(secretWord)
for i in l:
if i in s:
guessed.append(i)
guessed.sort()
s.sort()
return guessed == s
</code></pre>
<p>Here is one of the test cases from the grader as an example:</p>
<p><code>isWordGuessed('durian', ['h', 'a', 'c', 'd', 'i', 'm', 'n', 'r', 't', 'u'])</code></p>
| 3 | 2016-10-07T00:56:47Z | 39,907,887 | <p>Something like this is pretty short:</p>
<pre><code>def isWordGuessed(secretWord, lettersGuessed):
return all([c in lettersGuessed for c in secretWord])
</code></pre>
<p>For every character in the <code>secretWord</code> ensure it's in the <code>lettersGuessed</code>. This basically creates a list of booleans and the built-in <a href="https://docs.python.org/3/library/functions.html#all" rel="nofollow">all</a> returns <code>True</code> if every element in the array is <code>True</code>.</p>
<p>Also, FWIW: Idiomatic python would use underscores and not camel case.</p>
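<p>For instance, the same one-liner with underscores and set containment, which also avoids building the intermediate list:</p>

```python
def is_word_guessed(secret_word, letters_guessed):
    # every distinct letter of the word must appear among the guesses
    return set(secret_word) <= set(letters_guessed)

print(is_word_guessed('durian', ['h', 'a', 'c', 'd', 'i', 'm', 'n', 'r', 't', 'u']))
```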
| 6 | 2016-10-07T01:09:25Z | [
"python",
"python-3.x"
]
|
How to modify deprecated imports for a reusable app? | 39,907,808 | <p>My project depends on an OSS reusable app, and that app includes a Django import which is deprecated in Django 1.10:</p>
<p><code>from django.db.models.sql.aggregates import Aggregate</code></p>
<p>is changing to:</p>
<p><code>from django.db.models.aggregates import Aggregate</code></p>
<p>We get a warning on Django 1.9, which will become an error on Django 1.10. This is blocking our upgrade, and I want to contribute a fix to the app so we can upgrade.</p>
<p>One option would be to modify the requirements in setup.py so that Django 1.10 is required. But I'm sure my contribution would be rejected since it would break for everyone else. </p>
<p>To maintain backwards compatibility, I can do the import as a <code>try/except</code> but that feels hacky. It seems like I need to do some Django version checking in the imports. Should I do a Django version check, which returns a string, convert that to a float, and do an <code>if version > x</code>? That feels hacky too. </p>
<p>What's the best practice on this? Examples?</p>
| 1 | 2016-10-07T00:57:13Z | 39,910,769 | <p>Django has a strict backwards compatibility policy. If it's raising a deprecation warning, then the new version <em>works already</em> in 1.9. You should just switch to it before you upgrade.</p>
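<p>That said, if the reusable app must also keep supporting releases where only the old location exists, the common pattern is a guarded import. A generic sketch (the two Django paths in the comment are the ones from the question; <code>first_importable</code> is a made-up helper name):</p>

```python
import importlib

def first_importable(*paths):
    """Return the first 'module:attr' in paths that can be imported."""
    for path in paths:
        module_name, _, attr = path.partition(':')
        try:
            return getattr(importlib.import_module(module_name), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError('none of %r could be imported' % (paths,))

# For the question's case this would read:
# Aggregate = first_importable('django.db.models.aggregates:Aggregate',
#                              'django.db.models.sql.aggregates:Aggregate')
```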
| 1 | 2016-10-07T06:27:31Z | [
"python",
"django"
]
|
Python Scripting in TIBCO Spotfire to show custom Messages | 39,907,870 | <p>I am loading a data table on demand and have linked it to markings in a previous tab. The data table looks like:</p>
<pre><code> ID Values
1 365
2 65
3 32
3 125
4 74
5 98
6 107
</code></pre>
<p>I want to limit the data that is brought into this new visualization based on distinct count of ID. </p>
<p>I am currently doing it using the "Limit Data by Expression" section in the properties, where my expression is:</p>
<pre><code>UniqueCount(ID) <= 1000
</code></pre>
<p>This works perfectly, however, I'd also like it to display a message like
"Too many IDs selected. Max Limit is 1000"</p>
<p>I was thinking of doing this using a property control where the property control triggers an iron python script. Any suggestions on how to write that script ?</p>
| 0 | 2016-10-07T01:06:57Z | 39,979,462 | <p>One work-around would be use of a small Text Area on the page (perhaps as a header). You can <code>Insert Dynamic Item</code> -> Calculated Value or Icon and have it perform a count or unique count based on marking (selection). For example, once the Count(ID) is > 1000, the text can change to red, or the icon can change color. This is a processing-light alternative which won't necessarily create a pop-up, but can still provide instant notification to a user that too many rows have been selected.</p>
<p>Edit below:</p>
<pre><code><!-- Invisible span to contain a copy of your value -->
<span style="display:none;" id="stores-my-count"><span id="seek-ender"></span></span>
// Javascript to grab the dynamic item element's value and put it into a span;
// to be executed every time the dynamic value changes
$("#DynamicItemSpotfireID").on("change", function() {
    var myCount = $("#DynamicItemSpotfireID").text();
    $("#stores-my-count").append(myCount);
});
#IronPython script to locate the value placed into the span by the JS above
#and put only that portion of the page's HTML (only the value) into a document property
parentSpan = '<span id="stores-my-count">'
endingSpan = '<span id="seek-ender">'
startingHTML = myTextArea.As[VisualContent]().HtmlContent
startVal = startingHTML.find(parentSpan) + len(parentSpan)
endVal = startingHTML.find(endingSpan)
Document.Properties["myMarkingCount"] = startingHTML[startVal:endVal]
</code></pre>
<p>I haven't tested most of this code and I provide it as a place to start thinking about the problem, rather than a turnkey solution. Hopefully it works with only minor tweaking necessary.</p>
| 1 | 2016-10-11T14:21:51Z | [
"python",
"visualization",
"data-visualization",
"ironpython",
"spotfire"
]
|
Merging files based on partially matching file names | 39,907,927 | <p>I am measuring the dependent variable vs independent variable (let's say current vs voltage from a device measurement) and the measurement setup will give me a separate file for positive measurement and negative measurement values. Each file is an excel file and has 2 columns, one for voltage and current each. I can name them whatever I want so I name them as device1_pos, device1_neg, device2_pos, device2_neg, device3_pos, device3_neg and so on. Additionally, I could have a repeat measurement for a given device so I will name it as device1_pos_meas2, device1_neg_meas2. After I have collected all my data I would like to merge the positive and negative measurement values in a single file for a given device. So I would like to have files like device1 (combining device1_pos and device1_neg, however I will combine the second measurement for the same device in a separate file like device1_meas2) and so on for every device.</p>
<p>Is there a way I can automate this process in python or shell script? If there is a smarter way I could be naming my files to make the process easier, that will be a helpful suggestion as well.</p>
<p>Adding more information to my initial question- I guess I am OK with merging the 2 files as shown below. I am concatenating the 2 files but since I don't want the header and row index, I read it line by line into a csv file (not the most efficient way but one that I could figure out).</p>
<pre><code>import os
import pandas as pd
from xlrd import open_workbook
import xlwt
os.chdir('C:\Users\fg7xmx\Documents\Projects\ESD\TestBench\Measurement\100616')
path=os.getcwd()
file_pos=raw_input("Enter pos data file:")
file_neg=raw_input("Enter neg data file:")
file_allData=raw_input("Enter all data file name:")
file_csv=raw_input("Enter csv file name:")
file1=pd.read_excel(file_pos)
file2=pd.read_excel(file_neg)
file3=pd.concat([file1,file2],axis=0)
file3.to_excel(file_allData)
wb=open_workbook(file_allData)
for sheet in wb.sheets():
workbook=xlwt.Workbook()
newSheet = workbook.add_sheet('TLP_IV')
for row in range(sheet.nrows):
if row==0: continue
for col in range(sheet.ncols):
if col==0: continue
newSheet.write(row-1,col-1,sheet.cell_value(row,col))
workbook.save(file_csv)
</code></pre>
<p>However, as you can see, I am manually entering each file name, which is not reasonable for a large number of files. My actual file names look like</p>
<p>Mod5_pin10_pin8_pos_dev1_10-06-16_10'01'21_AM.xls</p>
<p>I know that using regular expressions I can match the given pattern, but here I need to group together the files which have the same mod number, same pin number, same measurement domain (pos or neg), same dev number, and same date stamp, ignoring the time stamp. I am not sure what command I can use for such grouping.</p>
| 0 | 2016-10-07T01:15:34Z | 39,966,762 | <p>Here is how I went about doing this</p>
<pre><code>import os
from collections import defaultdict

groups = defaultdict(list)
group_sweep = defaultdict(list)
for filename in os.listdir('C:\\Users\\TLP_IV'):
basename, extension = os.path.splitext(filename)
mod, name, pin1, pin2, sweep, dev, meas, date, time, hour=basename.split('_')
groups[mod, name, pin1, pin2, dev, meas, date].append(filename)
group_sweep[sweep].append(filename)
</code></pre>
<p>I made sure that I named each file in the same naming convention with '_' as a separator. Once I could create a dictionary with a list of various attributes, I could group by keys and read the values for a given key as below</p>
<pre><code>for sweep, filenames in group_sweep.items():
    if sweep.startswith('n'):  # the negative ('neg') sweep group
        for filename in filenames:
            print filename
</code></pre>
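<p>A regex variant of the same grouping (a sketch based on the single file name shown in the question; the second file name and the field names in the pattern are assumptions) makes the "ignore the time stamp" part explicit by simply not capturing it:</p>

```python
import re
from collections import defaultdict

pattern = re.compile(
    r"(?P<mod>Mod\d+)_(?P<pin1>pin\d+)_(?P<pin2>pin\d+)_"
    r"(?P<sweep>pos|neg)_(?P<dev>dev\d+)_(?P<date>[\d-]+)_.*\.xls$")

filenames = [
    "Mod5_pin10_pin8_pos_dev1_10-06-16_10'01'21_AM.xls",
    "Mod5_pin10_pin8_neg_dev1_10-06-16_10'05'43_AM.xls",
]

groups = defaultdict(dict)
for fn in filenames:
    m = pattern.match(fn)
    if m:
        # everything except sweep direction and time stamp keys the group
        key = (m.group('mod'), m.group('pin1'), m.group('pin2'),
               m.group('dev'), m.group('date'))
        groups[key][m.group('sweep')] = fn

for key, pair in groups.items():
    print(key, '->', pair.get('pos'), '+', pair.get('neg'))
```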
| 0 | 2016-10-10T21:06:46Z | [
"python",
"shell"
]
|
How to merge two dataframes with different column names but same number of rows? | 39,907,958 | <p>I have two different data frames in pandas. Example:</p>
<pre><code>df1=a b df2= c
0 1 1
1 2 2
2 3 3
</code></pre>
<p>I want to merge them so </p>
<pre><code>df1= a b c
0 1 1
1 2 2
2 3 3
</code></pre>
<p>I tried using <code>df1['c'] = df2['c']</code> but I got a SettingWithCopyWarning</p>
| 1 | 2016-10-07T01:19:55Z | 39,908,187 | <p>In order to merge two dataframes you can use these two examples. Both return the same result.</p>
<p>Using <code>merge</code> plus additional arguments instructing it to use the indexes</p>
<p>Try this:</p>
<pre><code>response = pandas.merge(df1, df2, left_index=True, right_index=True)
In [2]: response
Out[2]:
b c
0 1 1
1 2 2
2 3 3
</code></pre>
<p>Or you can use <code>join</code>, in case your dataframes are differently-indexed.</p>
<blockquote>
<p>DataFrame.join is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.</p>
</blockquote>
<p>Here is a basic example:</p>
<pre><code>result = df1.join(df2)
In [3]: result
Out[3]:
b c
0 1 1
1 2 2
2 3 3
</code></pre>
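<p>A third option is <code>pd.concat</code> along the column axis, which likewise aligns on the index:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'b': [1, 2, 3]})
df2 = pd.DataFrame({'c': [1, 2, 3]})

result = pd.concat([df1, df2], axis=1)  # column-wise, aligned on the shared index
print(result)
```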
| 1 | 2016-10-07T01:55:32Z | [
"python",
"pandas",
"dataframe"
]
|
Restructuring Pandas DataFrame | 39,907,981 | <p>It has been suggested that I move from the class structure (defining my own class) to the pandas DataFrame realm, as I envision many operations on my data.</p>
<p>At this point I have a dataframe that looks like this:</p>
<pre><code> ID Name Recording Direction Duration Distance Path Raw
0 129 Houston Woodlands X 12.3 8 HWX.txt
1 129 Houston Woodlands Y 12.3 8 HWY.txt
2 129 Houston Woodlands Z 12.3 8 HWZ.txt
3 129 Houston Downtown X 11.8 10 HDX.txt
4 129 Houston Downtown Y 11.8 10 HDY.txt
5 129 Houston Downtown Z 11.8 10 HDZ.txt
... ... ... .. .. ... ... ...
2998 333 Chicago Downtown X 3.4 50 CDX.txt
2999 333 Chicago Downtown Y 3.4 50 CDY.txt
3000 333 Chicago Downtown Z 3.4 50 CDZ.txt
</code></pre>
<p>This is ok at the time, however, I would like to group all the X Y Z after loading the files/arrays (add columns) and, in addition to that, add new column with products of the array manipulation (e.g. FFT).</p>
<p>Finally I would like a DataFrame that would look like this:</p>
<pre><code> ID Name Recording Duration Distance Rawx Rawy Raxz FFT-Rawx FFT-Rawy FFT-Raxz
0 129 Houston Woodlands 12.3 8 HWX.txt HWY.txt HWZ.txt FFT-HWX.txt FFT-HWY.txt FFT-HWZ.txt
1 129 Houston Downtown 11.8 10 HDX.txt HDY.txt HDZ.txt FFT-HDX.txt FFT-HDY.txt FFT-HDZ.txt
... ... ... .. ... ... ... ... ... ... ... ...
1000 333 Chicago Downtown 3.4 50 CDX.txt CDY.txt CDZ.txt FFT-CDX.txt FFT-CDY.txt FFT-CDZ.txt
</code></pre>
<p>Any idea how?</p>
<p>Unfortunately, not all my cells have this nice structure.</p>
<p>Instead of</p>
<p>HDX HDY HDZ</p>
<p>I can have "random names". However, I know that they are in this order:</p>
<p>First is Z, second is Y, and third is X always. Each record has those three signals and then the next record comes.</p>
<p>I was thinking something along the lines of:</p>
<pre><code>k =1
for row in df:
if k % 3 == 0:
# Do something
elif k % 3 == 2:
# Do something
else:
# Do something
k += 1
</code></pre>
<p>However, I don't know if there is an option to add an empty column to an already existing dataframe and fill it through a loop. If there is such an option, please let me know.</p>
| 1 | 2016-10-07T01:23:09Z | 39,908,393 | <p>I think I have a partial answer! I got a little confused about what you wanted with regard to the FFT (fast fourier transform?) and where the data were coming from. </p>
<p>HOWEVER, I got everything else. </p>
<p>First, I'm gonna make some sample data. </p>
<pre><code>import pandas as pd
df = pd.DataFrame({"ID": [0, 1, 2, 3, 4, 5], "Name":[129, 129, 129, 129, 129, 129],
"Recording":['Houston Woodlands', 'Houston Woodlands', 'Houston Woodlands',
'Houston Downtown', 'Houston Downtown', 'Houston Downtown'],
"Direction": ["X", "Y", "Z", "X", "Y", "Z"], "Duration":[12.3, 12.3, 12.3, 11.8, 11.8, 11.8],
"Path_Raw":["HWX.txt", "HWY.txt", "HWZ.txt", 'HDX.txt', 'HDY.txt', 'HDZ.txt'],
"Distance": [8, 8, 8, 10, 10, 10]})
</code></pre>
<p>NOW I'll define some new functions. I've split these apart, so they'll be a little easier to customize. Basically, I'm calling .unique and saving each Path Raw as a new variable. </p>
<pre><code>def splitunique0(group):
ulist = group.unique()
return(ulist[0])
def splitunique1(group):
ulist = group.unique()
return(ulist[1])
def splitunique2(group):
ulist = group.unique()
return(ulist[2])
dothis = {"Duration":"first", "Distance":"first", 'Path_Raw': {'Rawx': splitunique0,
'Rawy': splitunique1,
'Raxz': splitunique2}}
new = df.groupby(["Name", "Recording"]).agg(dothis)
new.columns = ["Duration", "Distance", "Raxz", "Rawx", "Rawy"]
</code></pre>
<p>Here's the finished dataframe!</p>
<pre><code>                        Duration  Distance     Raxz     Rawx     Rawy
Name Recording
129  Houston Downtown        11.8        10  HDZ.txt  HDX.txt  HDY.txt
     Houston Woodlands       12.3         8  HWZ.txt  HWX.txt  HWY.txt
</code></pre>
| 1 | 2016-10-07T02:22:20Z | [
"python",
"pandas",
"dataframe"
]
|
Restructuring Pandas DataFrame | 39,907,981 | <p>It has been suggested that I move from the class structure (defining my own class) to the pandas DataFrame realm, as I envision many operations on my data.</p>
<p>At this point I have a dataframe that looks like this:</p>
<pre><code> ID Name Recording Direction Duration Distance Path Raw
0 129 Houston Woodlands X 12.3 8 HWX.txt
1 129 Houston Woodlands Y 12.3 8 HWY.txt
2 129 Houston Woodlands Z 12.3 8 HWZ.txt
3 129 Houston Downtown X 11.8 10 HDX.txt
4 129 Houston Downtown Y 11.8 10 HDY.txt
5 129 Houston Downtown Z 11.8 10 HDZ.txt
... ... ... .. .. ... ... ...
2998 333 Chicago Downtown X 3.4 50 CDX.txt
2999 333 Chicago Downtown Y 3.4 50 CDY.txt
3000 333 Chicago Downtown Z 3.4 50 CDZ.txt
</code></pre>
<p>This is ok at the time, however, I would like to group all the X Y Z after loading the files/arrays (add columns) and, in addition to that, add new column with products of the array manipulation (e.g. FFT).</p>
<p>Finally I would like a DataFrame that would look like this:</p>
<pre><code> ID Name Recording Duration Distance Rawx Rawy Raxz FFT-Rawx FFT-Rawy FFT-Raxz
0 129 Houston Woodlands 12.3 8 HWX.txt HWY.txt HWZ.txt FFT-HWX.txt FFT-HWY.txt FFT-HWZ.txt
1 129 Houston Downtown 11.8 10 HDX.txt HDY.txt HDZ.txt FFT-HDX.txt FFT-HDY.txt FFT-HDZ.txt
... ... ... .. ... ... ... ... ... ... ... ...
1000 333 Chicago Downtown 3.4 50 CDX.txt CDY.txt CDZ.txt FFT-CDX.txt FFT-CDY.txt FFT-CDZ.txt
</code></pre>
<p>Any idea how?</p>
<p>Unfortunately, not all my cells have this nice structure.</p>
<p>Instead of</p>
<p>HDX HDY HDZ</p>
<p>I can have "random names". However, I know that they are in this order:</p>
<p>First is Z, second is Y, and third is X always. Each record has those three signals and then the next record comes.</p>
<p>I was thinking something along the lines of:</p>
<pre><code>k =1
for row in df:
if k % 3 == 0:
# Do something
elif k % 3 == 2:
# Do something
else:
# Do something
k += 1
</code></pre>
<p>However, I don't know if there is an option to add an empty column to an already existing dataframe and fill it through a loop. If there is such an option, please let me know.</p>
| 1 | 2016-10-07T01:23:09Z | 39,909,040 | <p>Consider concatenating a list of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow">pandas.pivot_tables</a>. However, prior to concatenating, the dataframe must be sliced by the <em>Raw</em> value common stems --<em>HW.txt</em>, <em>HD.txt</em>, <em>CD.txt</em>-- grouped using regex:</p>
<pre><code>from io import StringIO
import pandas as pd
import re
df = pd.read_csv(StringIO('''
ID,Name,Recording,Direction,Duration,Distance,Path,Raw
0,129,Houston,Woodlands,X,12.3,8,HWX.txt
1,129,Houston,Woodlands,Y,12.3,8,HWY.txt
2,129,Houston,Woodlands,Z,12.3,8,HWZ.txt
3,129,Houston,Downtown,X,11.8,10,HDX.txt
4,129,Houston,Downtown,Y,11.8,10,HDY.txt
5,129,Houston,Downtown,Z,11.8,10,HDZ.txt
6,333,Chicago,Downtown,X,3.4,50,CDX.txt
7,333,Chicago,Downtown,Y,3.4,50,CDY.txt
8,333,Chicago,Downtown,Z,3.4,50,CDZ.txt'''))
# UNIQUE 'RAW' STEM GROUPINGS
grp = set([re.sub(r'X|Y|Z', '', i) for i in df['Raw'].tolist()])
dfList = []
for i in grp:
# FILTER FOR 'RAW' VALUES THAT CONTAIN STEMS
temp = df[df['Raw'].isin([i.replace('.txt', txt+'.txt') for txt in ['X','Y','Z']])]
# RUN PIVOT (LONG TO WIDE)
temp = temp.pivot_table(values='Raw',
index=['Name', 'Recording', 'Direction','Distance', 'Path'],
columns=['Duration'], aggfunc='min')
dfList.append(temp)
# CONCATENATE (STACK) DFS IN LIST
finaldf = pd.concat(dfList).reset_index()
# RENAME AND CREATE FFT COLUMNS
finaldf = finaldf.rename(columns={'X': 'Rawx', 'Y': 'Rawy', 'Z': 'Rawz'})
finaldf[['FFT-Rawx', 'FFT-Rawy', 'FFT-Rawz']] = 'FFT-' + finaldf[['Rawx', 'Rawy', 'Rawz']]
</code></pre>
<p><strong>Output</strong></p>
<pre><code># Duration Name Recording Direction Distance Path Rawx Rawy Rawz FFT-Rawx FFT-Rawy FFT-Rawz
# 0 129 Houston Downtown 11.8 10 HDX.txt HDY.txt HDZ.txt FFT-HDX.txt FFT-HDY.txt FFT-HDZ.txt
# 1 129 Houston Woodlands 12.3 8 HWX.txt HWY.txt HWZ.txt FFT-HWX.txt FFT-HWY.txt FFT-HWZ.txt
# 2 333 Chicago Downtown 3.4 50 CDX.txt CDY.txt CDZ.txt FFT-CDX.txt FFT-CDY.txt FFT-CDZ.txt
</code></pre>
| 1 | 2016-10-07T03:47:00Z | [
"python",
"pandas",
"dataframe"
]
|
Output list of files from slideshow | 39,908,060 | <p>I have adapted a python script to display a slideshow of images. The original script can be found at <a href="https://github.com/cgoldberg/py-slideshow" rel="nofollow">https://github.com/cgoldberg/py-slideshow</a></p>
<p>I want to be able to record the filename of each of the images that is displayed so that I may more easily debug any errors (i.e., remove incompatible images).</p>
<p>I have attempted to include a command to write the filename to a text file in the <code>def get_image_paths</code> function. However, that has not worked. My code appears below - any help is appreciated.</p>
<pre><code>import pyglet
import os
import random
import argparse
window = pyglet.window.Window(fullscreen=True)
def get_scale(window, image):
if image.width > image.height:
scale = float(window.width) / image.width
else:
scale = float(window.height) / image.height
return scale
def update_image(dt):
img = pyglet.image.load(random.choice(image_paths))
sprite.image = img
sprite.scale = get_scale(window, img)
if img.height >= img.width:
sprite.x = ((window.width / 2) - (sprite.width / 2))
sprite.y = 0
elif img.width >= img.height:
sprite.y = ((window.height / 2) - (sprite.height / 2))
sprite.x = 0
else:
sprite.x = 0
sprite.y = 0
window.clear()
thefile=open('test.txt','w')
def get_image_paths(input_dir='.'):
paths = []
for root, dirs, files in os.walk(input_dir, topdown=True):
for file in sorted(files):
if file.endswith(('jpg', 'png', 'gif')):
path = os.path.abspath(os.path.join(root, file))
paths.append(path)
thefile.write(file)
return paths
@window.event()
def on_draw():
sprite.draw()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dir', help='directory of images',
nargs='?', default=os.getcwd())
args = parser.parse_args()
image_paths = get_image_paths(args.dir)
img = pyglet.image.load(random.choice(image_paths))
sprite = pyglet.sprite.Sprite(img)
pyglet.clock.schedule_interval(update_image, 3)
pyglet.app.run()
</code></pre>
| 0 | 2016-10-07T01:37:19Z | 39,908,247 | <p>The system doesn't have to write to the file at once; it can keep the text in a buffer and save it when you close the file. So you probably have to close the file.</p>
<p>Or you can use <code>thefile.flush()</code> after every <code>thefile.write()</code> to push the new text from the buffer to the file immediately.</p>
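<p>A minimal sketch of the difference (the file name and contents here are illustrative):</p>
<pre><code class="language-python">thefile = open('test.txt', 'w')
thefile.write('first.png\n')
thefile.flush()   # the buffered text reaches the file now, not at close
# ... the program keeps running; 'test.txt' already contains 'first.png' ...
thefile.write('second.png\n')
thefile.close()   # closing flushes whatever is still buffered
</code></pre>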
| 1 | 2016-10-07T02:04:12Z | [
"python",
"for-loop",
"pyglet"
]
|
Output list of files from slideshow | 39,908,060 | <p>I have adapted a python script to display a slideshow of images. The original script can be found at <a href="https://github.com/cgoldberg/py-slideshow" rel="nofollow">https://github.com/cgoldberg/py-slideshow</a></p>
<p>I want to be able to record the filename of each of the images that is displayed so that I may more easily debug any errors (i.e., remove incompatible images).</p>
<p>I have attempted to include a command to write the filename to a text file in the <code>def get_image_paths</code> function. However, that has not worked. My code appears below - any help is appreciated.</p>
<pre><code>import pyglet
import os
import random
import argparse
window = pyglet.window.Window(fullscreen=True)
def get_scale(window, image):
if image.width > image.height:
scale = float(window.width) / image.width
else:
scale = float(window.height) / image.height
return scale
def update_image(dt):
img = pyglet.image.load(random.choice(image_paths))
sprite.image = img
sprite.scale = get_scale(window, img)
if img.height >= img.width:
sprite.x = ((window.width / 2) - (sprite.width / 2))
sprite.y = 0
elif img.width >= img.height:
sprite.y = ((window.height / 2) - (sprite.height / 2))
sprite.x = 0
else:
sprite.x = 0
sprite.y = 0
window.clear()
thefile=open('test.txt','w')
def get_image_paths(input_dir='.'):
paths = []
for root, dirs, files in os.walk(input_dir, topdown=True):
for file in sorted(files):
if file.endswith(('jpg', 'png', 'gif')):
path = os.path.abspath(os.path.join(root, file))
paths.append(path)
thefile.write(file)
return paths
@window.event()
def on_draw():
sprite.draw()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dir', help='directory of images',
nargs='?', default=os.getcwd())
args = parser.parse_args()
image_paths = get_image_paths(args.dir)
img = pyglet.image.load(random.choice(image_paths))
sprite = pyglet.sprite.Sprite(img)
pyglet.clock.schedule_interval(update_image, 3)
pyglet.app.run()
</code></pre>
| 0 | 2016-10-07T01:37:19Z | 39,917,033 | <p>I ended up assigning the randomly chosen image path to a variable, which I then wrote to a txt file. The relevant code, with my changes, appears below:</p>
<pre><code>thefile=open('test.txt','w')
def update_image(dt):
pic = random.choice(image_paths)
img = pyglet.image.load(pic)
thefile.write(pic+'\n')
thefile.flush()
sprite.image = img
sprite.scale = get_scale(window, img)
if img.height >= img.width:
sprite.x = ((window.width / 2) - (sprite.width / 2))
sprite.y = 0
elif img.width >= img.height:
sprite.y = ((window.height / 2) - (sprite.height / 2))
sprite.x = 0
else:
sprite.x = 0
sprite.y = 0
window.clear()
</code></pre>
<p>Thank you @furas for pointing me in the right direction regarding log files and flushing the buffer to be certain to capture all instances.</p>
| 0 | 2016-10-07T12:13:41Z | [
"python",
"for-loop",
"pyglet"
]
|
Regex find ALL patterns between string | 39,908,061 | <p>I want to match digits between "000", or between \b and "000", or between "000" and \b, from a string like this: </p>
<pre><code>11101110001011101000000011101010111
</code></pre>
<p>I have tried with expressions like this:</p>
<pre><code>(?<=000)\d+(?=000)
</code></pre>
<p>but I only get the largest occurrence </p>
<p>I expect to get:</p>
<pre><code>1110111
1011101
0
11101010111
</code></pre>
| 0 | 2016-10-07T01:37:36Z | 39,908,125 | <p>You can use the <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><code>regex</code> package</a> and the <code>.findall()</code> method:</p>
<pre><code>In [1]: s = "11101110001011101000000011101010111"
In [2]: import regex
In [3]: regex.findall(r"(?<=000|^)\d+?(?=000|$)", s)
Out[3]: ['1110111', '1011101', '0', '00011101010111']
</code></pre>
<p>The <code>000|^</code> and <code>000|$</code> would help to match either the <code>000</code> or the beginning and the end of a string respectively. Also note the <code>?</code> after the <code>\d+</code> - we are making it <a href="http://stackoverflow.com/questions/2301285/what-do-lazy-and-greedy-mean-in-the-context-of-regular-expressions"><em>non-greedy</em></a>.</p>
<p>Note that the regular <code>re.findall()</code> would fail with the following error in this case:</p>
<blockquote>
<p>error: look-behind requires fixed-width pattern</p>
</blockquote>
<p>This is because <code>re</code> does not support <em>variable-length lookarounds</em> but <code>regex</code> does.</p>
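<p>For illustration, compiling the same pattern with the standard <code>re</code> module raises that error (a quick sketch):</p>
<pre><code class="language-python">import re

try:
    re.findall(r"(?&lt;=000|^)\d+?(?=000|$)", "11101110001011101000000011101010111")
except re.error as exc:
    print(exc)  # look-behind requires fixed-width pattern
</code></pre>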
| 1 | 2016-10-07T01:44:47Z | [
"python",
"regex",
"string",
"digits"
]
|
Regex find ALL patterns between string | 39,908,061 | <p>I want to match digits between "000", or between \b and "000", or between "000" and \b, from a string like this: </p>
<pre><code>11101110001011101000000011101010111
</code></pre>
<p>I have tried with expressions like this:</p>
<pre><code>(?<=000)\d+(?=000)
</code></pre>
<p>but I only get the largest occurrence </p>
<p>I expect to get:</p>
<pre><code>1110111
1011101
0
11101010111
</code></pre>
| 0 | 2016-10-07T01:37:36Z | 39,915,616 | <p>You can do it with the <code>re</code> module like this:</p>
<pre><code>re.findall(r'(?:\b|(?<=000))(\d+?)(?:000|\b)', s)
</code></pre>
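<p>Run against the sample string from the question, this yields the expected four groups (since the string is all digits, <code>\b</code> only matches at its two ends):</p>
<pre><code class="language-python">import re

s = "11101110001011101000000011101010111"
print(re.findall(r'(?:\b|(?&lt;=000))(\d+?)(?:000|\b)', s))
# ['1110111', '1011101', '0', '11101010111']
</code></pre>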
| 1 | 2016-10-07T10:59:41Z | [
"python",
"regex",
"string",
"digits"
]
|
Extract the year and the month from a line in a file and use a map to print every time its found to add 1 to the value | 39,908,080 | <pre><code>def Stats():
file = open('mbox.txt')
d = dict()
for line in file:
if line.startswith('From'):
words = line.split()
for words in file:
key = words[3] + " " + words[6]
if key:
d[key] +=1
return d
</code></pre>
<p>The line reads</p>
<pre><code>From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
</code></pre>
<p>I want to pull "Jan 2008" as the key </p>
<p>My error message: </p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 78, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 76, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 63, in <module>
builtins.KeyError: 'u -'
</code></pre>
| -1 | 2016-10-07T01:39:45Z | 39,908,172 | <p>Not the direct answer to the question, but a possible alternative solution - use the <a href="https://labix.org/python-dateutil" rel="nofollow"><code>dateutil</code></a> datetime parser in a "fuzzy" mode and simply format the extracted <code>datetime</code> object via <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strftime" rel="nofollow"><code>.strftime()</code></a>:</p>
<pre><code>In [1]: from dateutil.parser import parse
In [2]: s = "From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008"
In [3]: parse(s, fuzzy=True).strftime("%b %Y")
Out[3]: 'Jan 2008'
</code></pre>
| 0 | 2016-10-07T01:52:29Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Extract the year and the month from a line in a file and use a map to print every time its found to add 1 to the value | 39,908,080 | <pre><code>def Stats():
file = open('mbox.txt')
d = dict()
for line in file:
if line.startswith('From'):
words = line.split()
for words in file:
key = words[3] + " " + words[6]
if key:
d[key] +=1
return d
</code></pre>
<p>The line reads</p>
<pre><code>From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
</code></pre>
<p>I want to pull "Jan 2008" as the key </p>
<p>My error message: </p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 78, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 76, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 63, in <module>
builtins.KeyError: 'u -'
</code></pre>
| -1 | 2016-10-07T01:39:45Z | 39,908,413 | <p>You need something more like this</p>
<pre><code>def Stats():
with open('mbox.txt') as f:
d = {}
for line in f:
if line.startswith('From'):
words = line.split()
key = words[3] + " " + words[6]
if key in d:
d[key] += 1
else:
d[key] = 1
return d
print(Stats())
</code></pre>
<p>There are a few issues with your code. The first is that you are reusing the variable name <code>words</code>: </p>
<pre><code>words = line.split()
for words in file:
</code></pre>
<p>The original <code>words</code> (the split string) is lost, overwritten by the next line from your file. Which leads to the next error.</p>
<pre><code>for words in file:
</code></pre>
<p>You are iterating over the file again, inside the loop where you are iterating over the file. Not what you intended I'll bet. Instead you probably wanted to remove that loop, it serves no purpose.</p>
<p>Next you want to add to the dict value if the key exists. </p>
<pre><code>if key:
</code></pre>
<p>Won't do that. It just checks that the string isn't <code>None</code> or empty. Instead you want </p>
<pre><code>if key in d:
</code></pre>
<p>Finally, you want to add to your total if the key exists, otherwise initialise the value to 1. There is a class <code>defaultdict</code> that could neaten that up a touch. I'll leave it as an exercise to the reader to use it if you want.</p>
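<p>A sketch of that <code>defaultdict</code> variant. Note one assumption: <code>startswith('From ')</code>, with a trailing space, is used here so that <code>From:</code> header lines are skipped — a detail the original <code>startswith('From')</code> glosses over:</p>
<pre><code class="language-python">from collections import defaultdict

def stats(path='mbox.txt'):
    d = defaultdict(int)                     # missing keys start at 0
    with open(path) as f:
        for line in f:
            if line.startswith('From '):
                words = line.split()
                d[words[3] + " " + words[6]] += 1   # e.g. "Jan 2008"
    return dict(d)
</code></pre>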
<p>hth.</p>
| 0 | 2016-10-07T02:25:22Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Problems cropping entire white lines from .png file | 39,908,104 | <p>What I want to do is to crop out the white lines above a given instagram print screen. I tried doing that by finding the center of the image and going up, line by line, until I found the first line entirely white. Any idea why my code is not working? </p>
<pre><code>from PIL import Image
image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
entireWhiteLine = set()
entireWhiteLine.add(im.getpixel((0, 0)))
terminateUpperCrop = 1
while terminateUpperCrop != 2 :
for i in range(centerLine, 1, -1) :
entireLine = set()
upperBorder = i - 1
for j in range(0, width, 1) :
entireLine.add((im.getpixel((i, j))))
if entireLine == im.getpixel((0,0)):
box = (0, upperBorder, width, height)
crop = im.crop((box))
crop.save('test2.png')
terminateUpperCrop = 2
</code></pre>
| 0 | 2016-10-07T01:41:58Z | 39,911,102 | <p>Your <code>getpixel()</code> call is actually searching with the coordinates the wrong way around, so in effect you were scanning for the left edge. You could use the following approach. This creates a row of data containing only white pixels. If the length of the row equals your width, then you know they are all white.</p>
<pre><code>from PIL import Image
image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
white = (255, 255, 255)
for y in range(centerLine, 0, -1) :
if len([1 for x in range(width) if im.getpixel((x, y)) == white]) == width - 1:
box = (0, y, width, height)
crop = im.crop((box))
crop.save('test2.png')
break
</code></pre>
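<p>As an aside, the <code>== width - 1</code> comparison above demands exactly one non-white pixel per row (possibly a deliberate tolerance); expressing "every pixel is white" with <code>all()</code> avoids that ambiguity. A self-contained sketch on a small synthetic image (sizes and colours are illustrative):</p>
<pre><code class="language-python">from PIL import Image

# stand-in screenshot: black image with one fully white row at y = 2
im = Image.new("RGB", (8, 8), (0, 0, 0))
for x in range(8):
    im.putpixel((x, 2), (255, 255, 255))

white = (255, 255, 255)
width, height = im.size
crop = None
for y in range(height // 2, -1, -1):            # walk up from the centre line
    if all(im.getpixel((x, y)) == white for x in range(width)):
        crop = im.crop((0, y, width, height))   # keep everything below the white row
        break

print(crop.size)  # (8, 6)
</code></pre>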
| 0 | 2016-10-07T06:51:07Z | [
"python",
"set",
"python-imaging-library",
"crop"
]
|
cx_Freeze error: no commands supplied | 39,908,111 | <p>I'm trying to create an executable from my .py file.</p>
<p>I did this:</p>
<pre><code>import cx_Freeze
executables = [cx_Freeze.Executable("Cobra.py")]
cx_Freeze.setup(name="Snake Python", options={"build_exe":{"packages":["pygame","time","sys","os","random"]}}, executables = executables)
</code></pre>
<p>And ran it from the Python GUI. It returns the following errors:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/victor/Desktop/tcc/Jogo_Cobra/setup.py", line 5, in <module>
cx_Freeze.setup(name="Snake Python", options={"build_exe":{"packages":["pygame","time","sys","os","random"]}}, executables = executables)
File "C:\Python32\lib\site-packages\cx_Freeze\dist.py", line 365, in setup
distutils.core.setup(**attrs)
File "C:\Python32\lib\distutils\core.py", line 137, in setup
raise SystemExit(gen_usage(dist.script_name) + "\nerror: %s" % msg)
SystemExit: usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: no commands supplied
</code></pre>
<p>I am using Python 3.2 on Windows 8.1 with cx_Freeze-4.3.2.win32-py3.2.</p>
<p>thanks for any help!</p>
| 0 | 2016-10-07T01:42:59Z | 39,908,178 | <p>You need to actually pass a command to <code>setup.py</code> to tell it to build the executable</p>
<pre><code>python setup.py build
</code></pre>
<blockquote>
<p>This command will create a subdirectory called <code>build</code> with a further subdirectory starting with the letters <code>exe.</code> and ending with the typical identifier for the platform that distutils uses</p>
</blockquote>
<p>Alternately you can have it build the executable and wrap it in an installer for easy distribution</p>
<pre><code>python setup.py bdist_msi
</code></pre>
| 0 | 2016-10-07T01:53:45Z | [
"python",
"cx-freeze"
]
|
control of user input | 39,908,116 | <p>I'm wondering why my input control here doesn't work. I tried it with only <code>userOption != "1"</code> and it worked fine (Side note: Ignore the other functions.)</p>
<pre><code>userOption = str(input("Chose option 1, 2 or \"E/e to exit\": "))
print(userOption)
while(userOption != "1" or userOption != "2" or userOption != "e"
or userOption != "E"):
reply = (userOption + " is not a valid option")
print(reply)
userOption = input("Chose option 1, 2 or \"E/e to exit\": ")
if userOption == "1":
validEntry("Celsius")
C_to_F(int(userInput))
print(F)
elif userOption == "2":
validEntry("Fahrenheit")
F_to_C(int(userInput))
print(C)
else:
input("Thank you. Press enter to close this screen in the exe mode.")
</code></pre>
| 0 | 2016-10-07T01:43:30Z | 39,908,157 | <p>You have to use <code>and</code> instead of <code>or</code></p>
<pre><code>while userOption != "1" and userOption != "2" and userOption != "e" and userOption != "E":
</code></pre>
<p>or <code>not</code> with <code>==</code></p>
<pre><code>while not(userOption == "1" or userOption == "2" or userOption == "e" or userOption == "E"):
</code></pre>
<p>Or you can use <code>not in</code></p>
<pre><code>while userOption not in ("1", "2", "e", "E"):
</code></pre>
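<p>The membership test also reads well factored into a small helper; a sketch (the names and the attempts list are illustrative, not from the question):</p>
<pre><code class="language-python">def first_valid(options, attempts):
    """Return the first attempt found in options, or None if none match."""
    for choice in attempts:
        if choice in options:
            return choice
        print(choice + " is not a valid option")
    return None

print(first_valid(("1", "2", "e", "E"), ["x", "3", "2"]))  # 2
</code></pre>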
| 1 | 2016-10-07T01:50:24Z | [
"python"
]
|
control of user input | 39,908,116 | <p>I'm wondering why my input control here doesn't work. I tried it with only <code>userOption != "1"</code> and it worked fine (Side note: Ignore the other functions.)</p>
<pre><code>userOption = str(input("Chose option 1, 2 or \"E/e to exit\": "))
print(userOption)
while(userOption != "1" or userOption != "2" or userOption != "e"
or userOption != "E"):
reply = (userOption + " is not a valid option")
print(reply)
userOption = input("Chose option 1, 2 or \"E/e to exit\": ")
if userOption == "1":
validEntry("Celsius")
C_to_F(int(userInput))
print(F)
elif userOption == "2":
validEntry("Fahrenheit")
F_to_C(int(userInput))
print(C)
else:
input("Thank you. Press enter to close this screen in the exe mode.")
</code></pre>
| 0 | 2016-10-07T01:43:30Z | 39,908,747 | <pre><code>while(userOption != "1" or userOption != "2" or userOption != "e"
or userOption != "E"):
</code></pre>
<p>That logic will produce a true result <em>no matter what the user types</em>. As others have said, you need to use <code>and</code> here instead of <code>or</code>.</p>
| 0 | 2016-10-07T03:07:47Z | [
"python"
]
|
Check parallelism of files using their suffixes | 39,908,156 | <p>Given a directory of files, e.g.:</p>
<pre><code>mydir/
test1.abc
set123.abc
jaja98.abc
test1.xyz
set123.xyz
jaja98.xyz
</code></pre>
<p>I need to check that for every <code>.abc</code> file there is an equivalent <code>.xyz</code> file. I could do it like this:</p>
<pre><code>>>> filenames = ['test1.abc', 'set123.abc', 'jaja98.abc', 'test1.xyz', 'set123.xyz', 'jaja98.xyz']
>>> suffixes = ('.abc', '.xyz')
>>> assert all( os.path.splitext(_filename)[0]+suffixes[1] in filenames for _filename in filenames if _filename.endswith(suffixes[0]) )
</code></pre>
<p>The above code should pass the assertion, while something like this would fail:</p>
<pre><code>>>> filenames = ['test1.abc', 'set123.abc', 'jaja98.abc', 'test1.xyz', 'set123.xyz']
>>> suffixes = ('.abc', '.xyz')
>>> assert all(os.path.splitext(_filename)[0]+suffixes[1] in filenames for _filename in filenames if _filename.endswith(suffixes[0]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
</code></pre>
<p>But that's a little too verbose.<br>
Is there a better way to do the same checks?</p>
| 0 | 2016-10-07T01:50:09Z | 39,908,318 | <p>You could define a helper function that returns the <code>set</code> of filenames, stripped of their extension, that match a given suffix. Then you can easily check whether the files with suffix <code>.abc</code> are a subset of the files with suffix <code>.xyz</code>:</p>
<pre><code>filenames = ['test1.abc', 'set123.abc', 'jaja98.abc', 'test1.xyz', 'set123.xyz', 'jaja98.xyz']
filenames2 = ['test1.abc', 'set123.abc', 'jaja98.abc', 'test1.xyz', 'set123.xyz']
suffixes = ('.abc', '.xyz')
def filter_ext(names, ext):
return {n[:-len(ext)] for n in names if n.endswith(ext)}
assert filter_ext(filenames, suffixes[0]) <= filter_ext(filenames, suffixes[1])
assert filter_ext(filenames2, suffixes[0]) <= filter_ext(filenames2, suffixes[1]) # fail
</code></pre>
<p>The above approach is also more efficient, since it has <strong>O(n)</strong> time complexity whereas the original is <strong>O(n^2)</strong>. Of course, if the list is small this doesn't really matter.</p>
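<p>The same helper also pinpoints which stems lack a counterpart, via set difference (a short sketch that repeats the definition so it runs standalone):</p>
<pre><code class="language-python">def filter_ext(names, ext):
    return {n[:-len(ext)] for n in names if n.endswith(ext)}

filenames2 = ['test1.abc', 'set123.abc', 'jaja98.abc', 'test1.xyz', 'set123.xyz']
missing = filter_ext(filenames2, '.abc') - filter_ext(filenames2, '.xyz')
print(missing)  # {'jaja98'}
</code></pre>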
| 2 | 2016-10-07T02:13:34Z | [
"python",
"operating-system",
"filepath",
"file-extension",
"suffix"
]
|
drop datetimeindex where df shows nan | 39,908,183 | <p>I have a pandas dataframe with datetime index which has nan values on some rows. How do I remove the datetimeindex along with the nan rows? </p>
<pre><code>2016-10-06 13:15:00 2.923383 0.007970 -0.001883
2016-10-06 13:30:00 2.809612 0.007389 0.001466
2016-10-06 13:45:00 3.022803 0.028234 -0.005162
2016-10-06 14:00:00 3.005836 0.017393 -0.000727
2016-10-06 14:15:00 3.031413 0.002826 -0.001097
2016-10-06 14:30:00 3.107922 0.011489 0.001837
2016-10-06 14:45:00 3.090017 -0.015071 0.006606
2016-10-06 15:00:00 3.032213 -0.028361 -0.008619
2016-10-06 15:15:00 3.010773 -0.020547 0.008827
2016-10-06 15:30:00 2.948293 -0.002611 0.013339
2016-10-06 15:45:00 2.965507 -0.012090 0.004819
2016-10-06 16:00:00 2.939935 0.009255 -0.016812
2016-10-06 16:15:00 NaN NaN NaN
2016-10-06 16:30:00 NaN NaN NaN
2016-10-06 16:45:00 NaN NaN NaN
2016-10-06 17:00:00 NaN NaN NaN
2016-10-06 17:15:00 NaN NaN NaN
2016-10-06 17:30:00 NaN NaN NaN
2016-10-06 17:45:00 NaN NaN NaN
2016-10-06 18:00:00 2.790215 -0.006258 -0.006561
2016-10-06 18:15:00 2.760398 -0.019173 -0.005650
2016-10-06 18:30:00 2.806837 -0.004759 0.003778
2016-10-06 18:45:00 2.707243 -0.011007 0.000657
2016-10-06 19:00:00 2.690583 -0.011315 0.011752
2016-10-06 19:15:00 2.632939 -0.010978 0.018907
2016-10-06 19:30:00 2.665248 -0.009146 0.016380
2016-10-06 19:45:00 2.637122 -0.015417 0.021086
2016-10-06 20:00:00 2.688877 -0.004790 0.009998
2016-10-06 20:15:00 2.574410 -0.000862 0.014240
2016-10-06 20:30:00 2.641405 0.010043 0.010205
</code></pre>
<p>I tried: </p>
<pre><code>for row in range(len(df)):
if df.iloc[row,:] is None:
df.index.drop(row)
</code></pre>
<p>but what is returned is the above. </p>
<p>Note that <code>df.dropna()</code> is not what I'm looking for... I am dealing with missing time-series data interpolation done on plotly. I am looking to connect the missing data so I don't have gaps (caused by NaN) or interpolation (caused by missing data but datatimeindex is still there) on charts. I don't want to go into it further but apparently I've been told removing the NaN rows <em>and</em> datetimeindex where the NaN rows appear is a solution... </p>
| 0 | 2016-10-07T01:54:15Z | 39,908,682 | <p>If you just want to find the indices where there is <em>no</em> NaN in a row, you can do this:</p>
<pre><code># get a boolean Series that is True for rows
# where at least one value is null
msk = df.isnull().any(axis=1)
# now drop all of the index values where msk = True
new_index = df.index.drop(msk[msk == True].index)
</code></pre>
<p>However, <code>new_index</code> is just a list of the index values where no columns are NaN. Doing this does not modify the DataFrame instance.</p>
<p>You can also interpolate the data yourself with <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html#interpolation" rel="nofollow"><code>interpolate()</code>.</a></p>
<p>You will likely want to do something like the following:</p>
<pre><code>df.interpolate(method="time")
</code></pre>
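<p>A minimal sketch of time-based interpolation on a datetime-indexed frame (synthetic data; with evenly spaced timestamps the gap is filled linearly):</p>
<pre><code class="language-python">import pandas as pd

idx = pd.date_range("2016-10-06 13:15", periods=3, freq="15min")
df = pd.DataFrame({"a": [1.0, None, 3.0]}, index=idx)
filled = df.interpolate(method="time")
print(filled["a"].tolist())  # [1.0, 2.0, 3.0]
</code></pre>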
| 0 | 2016-10-07T02:58:59Z | [
"python",
"pandas",
"dataframe"
]
|
Printing Parallel Tuples in python | 39,908,251 | <p>I'm trying to make a word guessing program and I'm having trouble printing parallel tuples. I need to print the <code>"secret word"</code> with the corresponding hint, but the code that I wrote doesn't work. I can't figure out where I'm going wrong.</p>
<p>Any help would be appreciated :)</p>
<p>This is my code so far:</p>
<pre><code>import random
Words = ("wallet","canine")
Hints = ("Portable money holder","Man's best friend")
vowels = "aeiouy"
secret_word = random.choice(Words)
new_word = ""
for letter in secret_word:
if letter in vowels:
new_word += '_'
else:
new_word += letter
maxIndex = len(Words)
for i in range(1):
random_int = random.randrange(maxIndex)
print(new_word,"\t\t\t",Hints[random_int])
</code></pre>
| 0 | 2016-10-07T02:04:41Z | 39,919,414 | <p>The issue here is that <code>random_int</code> is, as defined, random. As a result you'll randomly get the right result sometimes.</p>
<p>A quick fix is to use the <code>tuple.index</code> method: get the index of the element inside the tuple <code>Words</code>, then use that index on <code>Hints</code> to get the corresponding hint. Your print statement then looks like:</p>
<pre><code>print(new_word,"\t\t\t",Hints[Words.index(secret_word)])
</code></pre>
<p>This does the trick but is clunky. Python has a data structure called a dictionary with which you can map one value to another. This could make your life easier in the long run. To create a dictionary from the two tuples we can <code>zip</code> them together:</p>
<pre><code>mapping = dict(zip(Words, Hints))
</code></pre>
<p>and create a structure that looks like:</p>
<pre><code>{'canine': "Man's best friend", 'wallet': 'Portable money holder'}
</code></pre>
<p>This helps. </p>
<p>Another detail you could fix is in how you create the <code>new_word</code>; instead of looping you can use a comprehension to create the respective letters and then <code>join</code> these on the empty string <code>""</code> to create the resulting string:</p>
<pre><code>new_word = "".join("_" if letter in vowels else letter for letter in secret_word)
</code></pre>
<p>with exactly the same effect. Now, since you also have the dictionary <code>mapping</code>, getting the respective hint is easy: just supply the key <code>secret_word</code> to <code>mapping</code> and it'll return the hint. </p>
<p>A revised version of your code looks like this:</p>
<pre><code>import random
Words = ("wallet", "canine")
Hints = ("Portable money holder", "Man's best friend")
mapping = dict(zip(Words, Hints))
vowels = "aeiouy"
secret_word = random.choice(Words)
new_word = "".join("_" if letter in vowels else letter for letter in secret_word)
print(new_word, "\t\t\t", mapping[secret_word])
</code></pre>
| 0 | 2016-10-07T14:16:02Z | [
"python",
"python-3.x",
"tuples"
]
|
pass data result from python to Java variable by processbuilder | 39,908,283 | <p>I used ProcessBuilder to run a Python script from Java. I can send data from Java to Python variables (using <code>import sys</code> in Python to read the arguments) and print the Python output in Java.
For example: </p>
<pre><code>public static void main(String a[]){
try{
int number1 = 100;
int number2 = 200;
String searchTerm="water";
ProcessBuilder pb = new ProcessBuilder("C:/Python27/python","D://my_utils.py",""+number1,""+number2,""+searchTerm);
Process p = pb.start();
BufferedReader bfr = new BufferedReader(new InputStreamReader(p.getInputStream()));
System.out.println(".........start process.........");
String line = "";
while ((line = bfr.readLine()) != null){
System.out.println("Python Output: " + line);
}
System.out.println("........end process.......");
}catch(Exception e){System.out.println(e);}
}
}
</code></pre>
<p>However, I do not know how to get the result back from Python and pass it to a <strong>Java variable</strong> for further use. How can I do that?
I also have an issue with passing a list as a method argument in Java.
How can I pass the variable (the <strong>return value</strong> of <code>def MatchSearch()</code> in Python) to a <strong>Java variable</strong>? </p>
| 0 | 2016-10-07T02:09:42Z | 39,955,186 | <p>As far as I am aware, you can't do that with ProcessBuilder. You can print your variable to the output and let Java interpret it. Or you can use an API that connects Java and Python more tightly (for example <a href="http://www.jython.org/" rel="nofollow">Jython</a>).</p>
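<p>On the Python side, printing a single machine-readable line keeps the Java parsing trivial. A hedged sketch (the function name and result shape are illustrative, not from the question):</p>
<pre><code class="language-python">import json

def match_search(term):
    # real search work would go here; the result shape is illustrative
    return {"term": term, "matches": ["water bottle", "water pump"]}

# Java's readLine() loop will receive exactly this line on stdout
print(json.dumps(match_search("water")))
</code></pre>
<p>In the Java loop, the captured line can then be handed to any JSON library and assigned to a variable.</p>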
| 0 | 2016-10-10T09:27:09Z | [
"java",
"python",
"json",
"processbuilder",
"sys"
]
|