title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Should I handle AJAX requests in vanilla Django or Django REST Framework?
| 39,608,377 |
<p>I have a bunch of AJAX requests on my website (e.g. an upvote sends a request to the server).</p>
<p>Should I integrate this functionality server-side just with another view function,</p>
<p>or is it recommended that I move all the necessary views into Django REST Framework?</p>
| 0 |
2016-09-21T05:44:29Z
| 39,608,464 |
<p>I usually follow a DDD (domain-driven design) approach, so all my requests end up being just CRUD operations on an entity. I always prefer REST APIs, so I would say: if you already have a DDD approach in place, go with django-rest-framework.</p>
<p>Otherwise it really does not matter; it depends on your needs.</p>
| 0 |
2016-09-21T05:52:01Z
|
[
"python",
"ajax",
"django",
"rest",
"django-rest-framework"
] |
Binary converter algorithm
| 39,608,380 |
<p>Create a function called binary_converter. Inside the function, implement an algorithm to convert decimal numbers between 0 and 255 to their binary equivalents.</p>
<p>For any invalid input, return the string <code>Invalid input</code>.</p>
<p>Example: for the number 5, return the string <code>101</code>.</p>
<p>My code:</p>
<pre><code>import unittest
class BinaryConverterTestCases(unittest.TestCase):
def test_conversion_one(self):
result = binary_converter(0)
self.assertEqual(result, '0', msg='Invalid conversion')
def test_conversion_two(self):
result = binary_converter(62)
self.assertEqual(result, '111110', msg='Invalid conversion')
def test_no_negative_numbers(self):
result = binary_converter(-1)
self.assertEqual(result, 'Invalid input', msg='Input below 0 not allowed')
def test_no_numbers_above_255(self):
result = binary_converter(300)
self.assertEqual(result, 'Invalid input', msg='Input above 255 not allowed')
</code></pre>
<p>My code has an error. Please help; I'm new to programming (self-taught at home).</p>
<p>Edited code:</p>
<pre><code>def binary_converter(n):
if(n==0):
return "0"
elif(n>255):
print("out of range")
return ""
else:
ans=""
while(n>0):
temp=n%2
ans=str(temp)+ans
n=n/2
return ans
</code></pre>
<p>Error report</p>
<p>THERE IS AN ERROR/BUG IN YOUR CODE</p>
<blockquote>
<p>Results: {"finished": true, "success": [{"fullName":
"test_conversion_one", "passedSpecNumber": 1}, {"fullName":
"test_conversion_two", "passedSpecNumber": 2}], "passed": false,
"started": true, "failures": [{"failedSpecNumber": 1, "fullName":
"test_no_negative_numbers", "failedExpectations": [{"message":
"Failure in line 19, in test_no_negative_numbers\n<br>
self.assertEqual(result, 'Invalid input', msg='Input below 0 not
allowed')\nAssertionError: Input below 0 not allowed\n"}]},
{"failedSpecNumber": 2, "fullName": "test_no_numbers_above_255",
"failedExpectations": [{"message": "Failure in line 23, in
test_no_numbers_above_255\n self.assertEqual(result, 'Invalid
input', msg='Input above 255 not allowed')\nAssertionError: Input
above 255 not allowed\n"}]}], "specs": {"count": 4, "pendingCount": 0,
"time": "0.000112"}} out of range</p>
</blockquote>
| -2 |
2016-09-21T05:44:35Z
| 39,609,447 |
<p>Try this code. It returns the string <code>Invalid input</code> for values outside the 0-255 range (which is exactly what the two failing tests check for), and uses floor division so it also works on Python 3:</p>
<pre><code>def binary_converter(n):
    if n < 0 or n > 255:
        return "Invalid input"
    if n == 0:
        return "0"
    ans = ""
    while n > 0:
        ans = str(n % 2) + ans
        n = n // 2  # floor division; plain / would produce floats on Python 3
    return ans
</code></pre>
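As a cross-check, the whole conversion (including the validation the tests expect) can also be sketched with Python's built-in `format()`, which produces binary digits directly:

```python
def binary_converter(n):
    # Reject anything outside the required 0-255 range.
    if not isinstance(n, int) or n < 0 or n > 255:
        return "Invalid input"
    # format(n, 'b') yields the binary digits without a '0b' prefix.
    return format(n, 'b')

print(binary_converter(5))    # 101
print(binary_converter(0))    # 0
print(binary_converter(-1))   # Invalid input
print(binary_converter(300))  # Invalid input
```

This version passes all four unit tests from the question.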
| 0 |
2016-09-21T06:57:04Z
|
[
"python"
] |
Showing ValueError: shapes (1,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)
| 39,608,421 |
<p>I am trying to run this code, and the last 2 dot products are showing the error given in the title. I checked the shape of the matrices and both are (1, 3), so why is it showing me an error while doing the dot product?</p>
<pre><code>coordinate1 = [-7.173, -2.314, 2.811]
coordinate2 = [-5.204, -3.598, 3.323]
coordinate3 = [-3.922, -3.881, 4.044]
coordinate4 = [-2.734, -3.794, 3.085]
import numpy as np
from numpy import matrix
coordinate1i=matrix(coordinate1)
coordinate2i=matrix(coordinate2)
coordinate3i=matrix(coordinate3)
coordinate4i=matrix(coordinate4)
b0 = coordinate1i - coordinate2i
b1 = coordinate3i - coordinate2i
b2 = coordinate4i - coordinate3i
n1 = np.cross(b0, b1)
n2 = np.cross(b2, b1)
n12cross = np.cross(n1,n2)
x1= np.cross(n1,b1)/np.linalg.norm(b1)
print np.shape(x1)
print np.shape(n2)
np.asarray(x1)
np.asarray(n2)
y = np.dot(x1,n2)
x = np.dot(n1,n2)
return np.degrees(np.arctan2(y, x))
</code></pre>
| 0 |
2016-09-21T05:47:09Z
| 39,608,560 |
<p>Converting the matrices to plain arrays with<br>
<code>n12 = np.squeeze(np.asarray(n2))</code></p>
<p><code>x12 = np.squeeze(np.asarray(x1))</code></p>
<p>solved the issue.</p>
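Why squeezing helps can be seen in a few lines; the values below are made up for the demo, standing in for `n2` and `x1` from the question:

```python
import numpy as np

a = np.matrix([1.0, 2.0, 3.0])   # shape (1, 3), like n2 in the question
b = np.matrix([4.0, 5.0, 6.0])   # shape (1, 3), like x1 in the question

# np.dot on two (1, 3) matrices is invalid matrix multiplication: 3 != 1.
# Squeezing them down to plain 1-D arrays turns np.dot into an inner product.
a1 = np.squeeze(np.asarray(a))   # shape (3,)
b1 = np.squeeze(np.asarray(b))   # shape (3,)

print(a1.shape, b1.shape)  # (3,) (3,)
print(np.dot(a1, b1))      # 32.0
```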
| 0 |
2016-09-21T05:59:16Z
|
[
"python",
"arrays",
"numpy"
] |
Showing ValueError: shapes (1,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)
| 39,608,421 |
<p>I am trying to run this code, and the last 2 dot products are showing the error given in the title. I checked the shape of the matrices and both are (1, 3), so why is it showing me an error while doing the dot product?</p>
<pre><code>coordinate1 = [-7.173, -2.314, 2.811]
coordinate2 = [-5.204, -3.598, 3.323]
coordinate3 = [-3.922, -3.881, 4.044]
coordinate4 = [-2.734, -3.794, 3.085]
import numpy as np
from numpy import matrix
coordinate1i=matrix(coordinate1)
coordinate2i=matrix(coordinate2)
coordinate3i=matrix(coordinate3)
coordinate4i=matrix(coordinate4)
b0 = coordinate1i - coordinate2i
b1 = coordinate3i - coordinate2i
b2 = coordinate4i - coordinate3i
n1 = np.cross(b0, b1)
n2 = np.cross(b2, b1)
n12cross = np.cross(n1,n2)
x1= np.cross(n1,b1)/np.linalg.norm(b1)
print np.shape(x1)
print np.shape(n2)
np.asarray(x1)
np.asarray(n2)
y = np.dot(x1,n2)
x = np.dot(n1,n2)
return np.degrees(np.arctan2(y, x))
</code></pre>
| 0 |
2016-09-21T05:47:09Z
| 39,609,781 |
<p>Unlike standard elementwise arithmetic, which requires matching dimensions, dot products require that the dimensions are one of:</p>
<ul>
<li><code>(X..., A, B) dot (Y..., B, C) -> (X..., Y..., A, C)</code>, where <code>...</code> means "0 or more different values"</li>
<li><code>(B,) dot (B, C) -> (C,)</code></li>
<li><code>(A, B) dot (B,) -> (A,)</code></li>
<li><code>(B,) dot (B,) -> ()</code></li>
</ul>
<p>Your problem is that you are using <code>np.matrix</code>, which is totally unnecessary in your code - the main purpose of <code>np.matrix</code> is to translate <code>a * b</code> into <code>np.dot(a, b)</code>. As a general rule, <code>np.matrix</code> is probably not a good choice.</p>
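A small demonstration of these shape rules, rewriting the question's setup with plain ndarrays instead of `np.matrix` (coordinates taken from the question):

```python
import numpy as np

p2 = np.array([-5.204, -3.598, 3.323])
b0 = np.array([-7.173, -2.314, 2.811]) - p2
b1 = np.array([-3.922, -3.881, 4.044]) - p2

n1 = np.cross(b0, b1)   # 1-D inputs give a 1-D result: shape (3,)
print(n1.shape)         # (3,)

x = np.dot(n1, n1)      # (B,) dot (B,) -> (), i.e. a scalar
print(x.shape)          # ()
```

With 1-D arrays, `np.dot` falls into the `(B,) dot (B,) -> ()` case and simply returns the inner product, which is what the original code wanted.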
| 0 |
2016-09-21T07:14:33Z
|
[
"python",
"arrays",
"numpy"
] |
Selenium text not working
| 39,608,637 |
<p>I have this span tag:</p>
<pre><code><span class="pricefield" data-usd="11000">$11,000</span>
</code></pre>
<p>And I want to get the text for it (<code>$11,000</code>). Naturally I should be using <code>text</code> like so:</p>
<pre><code># empty
print self.selenium.find_element_by_css_selector(".pricefield").text
</code></pre>
<p>But it prints nothing.</p>
<p>This doesn't work either:</p>
<pre><code># None
print self.selenium.find_element_by_css_selector(".pricefield").get_attribute("text")
# None
print self.selenium.find_element_by_css_selector(".pricefield").get_attribute("value")
</code></pre>
<p>However, I can get the elements attributes just fine:</p>
<pre><code># 11000
print self.selenium.find_element_by_css_selector(".pricefield").get_attribute("data-usd")
</code></pre>
<p>Why doesn't this work? Did something change in Selenium? I am using 2.52.0.</p>
| 0 |
2016-09-21T06:05:44Z
| 39,608,816 |
<p><code>get_attribute("textContent")</code> or <code>get_attribute("innerText")</code> are the attributes you are looking for.</p>
| 2 |
2016-09-21T06:18:54Z
|
[
"python",
"selenium"
] |
readable socket times out on recv
| 39,608,697 |
<p>I have a 'jobs' server which accepts requests from a client (there are 8 clients sending requests from another machine). The server then submits a 'job' (a 'job' is just an executable which writes a results file to disk), and on a 'jobs manager' thread waits until the job is done. When a job is done it sends a message to the client that a results files is ready to be copied back to the client.</p>
<p>On the main thread I use <code>select</code> to read incoming connections from clients, as well as jobs requests:</p>
<pre><code>readable, writable, exceptional = select.select(inputs, [], [])
</code></pre>
<p>where <code>inputs</code> is a list of accepted connections (sockets), and this list also includes the <code>server</code> socket. All sockets are set to non-blocking. To my best understanding, if this call to <code>select</code> returns a non-empty <code>readable</code>, it means some elements of <code>inputs</code> has incoming data waiting to be read.
I am reading data using the following logic (<code>SIZE</code> is a constant):</p>
<pre><code>for s in readable:
if s is not server:
try:
socket_ok = True
data = s.recv(SIZE)
except socket.error as e:
print ('ERROR socket error: ' + str(e) )
socket_ok = False
except Exception as e:
print ('ERROR error reading from socket: ' + str(e))
socket_ok = False
if not socket_ok:
# do something
</code></pre>
<p>I have 2 problems:</p>
<ul>
<li>Sometimes I get a <code>[Errno 110] Connection timed out</code> exception, and I don't understand why - if I have a readable socket, doesn't it mean it has some data to be read? </li>
<li>How to deal with this exception - the <code>#do something</code> part. I can do a 'cleanup' - delete the running jobs which were requested by the timed-out socket, and remove the dead socket from the list. But I have no way of letting the client know that it should stop waiting for these jobs' results. Ideally I would like to reconnect somehow, because the jobs themselves keep running and produce results which I don't want to throw away.</li>
</ul>
<p><strong>EDIT</strong> I realized now that the jobs manager thread also have access to the sockets via a <code>Queue</code> instance - if a job is finished, the thread sends a 'job done' message through the relevant socket - so maybe the <code>send</code> and <code>recv</code> methods of the same socket cause some kind of race condition? But anyway, I don't see how this can cause a 'connection timed out' error.</p>
| 1 |
2016-09-21T06:10:54Z
| 39,684,075 |
<p>A solution that was just a guess, but seems to work: on the client side, I was using a blocking <code>recv</code> to get the 'job done' message from the server. Since a job can take a long time (e.g. if the cluster running the jobs is low on resources), I guessed that the long-waiting socket was the cause of the time-out. So instead of using <code>recv</code> in blocking mode, I use it with a time-out of 5 seconds, and send a dummy message to the server every 5 seconds to keep the connection alive until the real message is received. Now I no longer get the exception on the server side.</p>
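The fix can be sketched without a real network: a `socketpair` stands in for the client/server connection, and the answer's 5-second timeout is shortened to 0.1s so the demo finishes quickly:

```python
import socket

# A connected pair of sockets stands in for the client/server link.
client, server = socket.socketpair()

# With a timeout set, recv() raises socket.timeout instead of blocking
# forever, giving the client a chance to send a keep-alive message.
client.settimeout(0.1)
try:
    client.recv(4096)              # nothing has been sent, so this times out
except socket.timeout:
    client.sendall(b"keep-alive")  # dummy message to keep the link warm

print(server.recv(4096))           # b'keep-alive'
client.close()
server.close()
```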
| 0 |
2016-09-25T06:27:38Z
|
[
"python",
"sockets",
"client-server"
] |
MySQL database error using scrapy
| 39,608,990 |
<p>I am trying to save scraped data in a MySQL database. My script.py is</p>
<pre><code> # -*- coding: utf-8 -*-
import scrapy
import unidecode
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html
class ElementSpider(scrapy.Spider):
name = 'books'
download_delay = 3
allowed_domains = ["goodreads.com"]
start_urls = ["https://www.goodreads.com/list/show/19793.I_Marked_My_Calendar_For_This_Book_s_Release",]
rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="next_page"]',)), callback="parse", follow= True),)
def parse(self, response):
for href in response.xpath('//div[@id="all_votes"]/table[@class="tableList js-dataTooltip"]/tr/td[2]/div[@class="js-tooltipTrigger tooltipTrigger"]/a/@href'):
full_url = response.urljoin(href.extract())
print full_url
yield scrapy.Request(full_url, callback = self.parse_books)
break;
next_page = response.xpath('.//a[@class="next_page"]/@href').extract()
if next_page:
next_href = next_page[0]
next_page_url = 'https://www.goodreads.com' + next_href
print next_page_url
request = scrapy.Request(next_page_url, self.parse)
yield request
def parse_books(self, response):
yield{
'url': response.url,
'title':response.xpath('//div[@id="metacol"]/h1[@class="bookTitle"]/text()').extract(),
'link':response.xpath('//div[@id="metacol"]/h1[@class="bookTitle"]/a/@href').extract(),
}
</code></pre>
<p>And pipeline.py is </p>
<pre><code> # -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import MySQLdb
import hashlib
from scrapy.exceptions import DropItem
from scrapy.http import Request
import sys
class SQLStore(object):
def __init__(self):
self.conn = MySQLdb.connect("localhost","root","","books" )
self.cursor = self.conn.cursor()
print "connected to DB"
def process_item(self, item, spider):
print "hi"
try:
self.cursor.execute("""INSERT INTO books_data(next_page_url) VALUES (%s)""", (item['url']))
self.conn.commit()
except Exception, e:
print e
</code></pre>
<p>When I run the script there are no errors. The spider runs well, but I don't think <code>process_item</code> is ever called; it doesn't even print <code>hi</code>.</p>
| 2 |
2016-09-21T06:31:28Z
| 39,612,675 |
<p>Your method signature is wrong, it should take item and spider parameters:</p>
<pre><code>process_item(self, item, spider)
</code></pre>
<p>Also you need to have the pipeline setup in your <em>settings.py</em> file:</p>
<pre><code> ITEM_PIPELINES = {"project_name.path.SQLStore"}
</code></pre>
<p>Your syntax is also incorrect, you need to pass a tuple:</p>
<pre><code> self.cursor.execute("""INSERT INTO books_data(next_page_url) VALUES (%s)""",
(item['url'],) # <- add ,
</code></pre>
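The tuple point can be verified with the standard library's `sqlite3` module, which follows the same DB-API parameter convention as MySQLdb (it uses `?` placeholders where MySQLdb uses `%s`; the URL below is made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books_data (next_page_url TEXT)")

url = "https://www.goodreads.com/book/show/12345"  # hypothetical URL
# Parameters must be a sequence: without the trailing comma,
# (url) is just a parenthesized string, not a 1-tuple, and the
# driver would try to bind each character separately.
conn.execute("INSERT INTO books_data (next_page_url) VALUES (?)", (url,))
conn.commit()

print(conn.execute("SELECT next_page_url FROM books_data").fetchone()[0])
```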
| 2 |
2016-09-21T09:31:10Z
|
[
"python",
"mysql",
"scrapy"
] |
Is there any way to read MICR font characters from cheques using Tesseract OCR or any other package for Python?
| 39,609,054 |
<p>When I use pytesseract for character recognition on cheques, the MICR characters are not recognized properly.</p>
| 0 |
2016-09-21T06:35:15Z
| 39,621,411 |
<p>You will need to use the MICR language file in order to properly recognize the MICR characters. See this post:</p>
<p><a href="http://stackoverflow.com/questions/25279271/android-how-to-recognize-micr-codes">Android : How to recognize MICR codes</a></p>
<p>Excerpt from post: </p>
<blockquote>
<p>You can use Tesseract with mcr.traineddata language data file.</p>
</blockquote>
<p>or you can use this Github project: <a href="https://github.com/BigPino67/Tesseract-MICR-OCR" rel="nofollow">Tesseract MICR OCR</a></p>
<p>If you are not opposed to calling .NET libraries in Python, another option is to use a 3rd party commercial library that support MICR fonts as well as other languages and characters. The <a href="https://www.leadtools.com/corporate/press/2015/leadtools-micr-sdk-becomes-go-to-solution-for-check-recognition" rel="nofollow">LEADTOOLS SDK has MICR</a> support that's very easy to use. Please note that I am an employee of this product. Also included is <a href="https://www.leadtools.com/help/leadtools/v19/dh/po/leadtools.imageprocessing.core~leadtools.imageprocessing.core.micrcodedetectioncommand.html" rel="nofollow">an image processing function</a> that will automatically detect and find where the MICR is, so you do not have to have any manual input.</p>
<p>To call .NET functions from Python, you can check out this post:</p>
<p><a href="http://stackoverflow.com/questions/7367976/calling-a-c-sharp-library-from-python">Calling a C# library from python</a></p>
<p>Here is an example of extracting the MICR field from a check using one of the OCR demos:</p>
<p><a href="http://i.stack.imgur.com/DT13i.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/DT13i.jpg" alt="enter image description here"></a></p>
<p>Here is a .NET C# method that will load a file and auto-detect the MICR zone and OCR it then return the results using LEADTOOLS:</p>
<pre><code> private string DetectandRecognizeMICR(string fileName)
{
string micr = "";
//Initialize the codecs class and load the image
using (RasterCodecs codecs = new RasterCodecs())
{
using (RasterImage img = codecs.Load(fileName))
{
//prepare the MICR detector command and run it
MICRCodeDetectionCommand micrDetector = new MICRCodeDetectionCommand();
micrDetector.SearchingZone = new LeadRect(0, 0, img.Width, img.Height);
micrDetector.Run(img);
//See if there is a MICR detected - if not return
if (micrDetector.MICRZone == LeadRect.Empty)
return "No MICR detected in this file.";
//if there is a MICR zone detected startup OCR
using (IOcrEngine ocrEngine = OcrEngineManager.CreateEngine(OcrEngineType.Advantage, false))
{
ocrEngine.Startup(null, null, null, null);
//create the OCR Page
IOcrPage ocrPage = ocrEngine.CreatePage(img, OcrImageSharingMode.None);
//Create the OCR zone for the MICR and add to the Ocr Page and recognize
OcrZone micrZone = new OcrZone();
micrZone.Bounds = LogicalRectangle.FromRectangle(micrDetector.MICRZone);
micrZone.ZoneType = OcrZoneType.Micr;
ocrPage.Zones.Add(micrZone);
ocrPage.Recognize(null);
//return the MICR text
micr = ocrPage.GetText(-1);
}
}
}
return micr;
}
</code></pre>
| 0 |
2016-09-21T16:01:32Z
|
[
"python",
"ocr",
"tesseract",
"python-tesseract"
] |
How to do this Class inheritance in Python?
| 39,609,108 |
<p>I have a Python/<a href="https://github.com/tornadoweb/tornado" rel="nofollow">Tornado</a> application that responds to HTTP requests with the following 3 classes:</p>
<pre><code>import tornado.web
class MyClass1(tornado.web.RequestHandler):
x = 1
y = 2
def my_method1(self):
print "Hello World"
class MyClass2(MyClass1):
@tornado.web.authenticated
def get(self):
#Do Something 1
pass
@tornado.web.authenticated
def post(self):
#Do Something 2
pass
class MyClass3(MyClass2):
pass
</code></pre>
<p>I would like all instances of <code>MyClass2</code> to have an instance variable <code>m</code> set to the integer 3. But any instances of <code>MyClass3</code> should override that and have <code>m</code> set to the integer 4. How can I do it?</p>
<p>I tried adding the following constructors to <code>MyClass2</code> and <code>MyClass3</code> respectively, but then when I try to create an instance of <code>MyClass3</code>, I get the following error: <code>TypeError: __init__() takes exactly 1 argument (3 given)</code></p>
<p><code>MyClass2.__init__()</code>:</p>
<pre><code>def __init__(self):
self.m = 3 # This value will be overridden by a subclass
</code></pre>
<p><code>MyClass3.__init__()</code>:</p>
<pre><code>def __init__(self):
self.m = 4
</code></pre>
| 0 |
2016-09-21T06:38:47Z
| 39,609,254 |
<p><code>RequestHandler</code>'s <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.RequestHandler" rel="nofollow">constructor takes arguments</a>:</p>
<pre><code>class RequestHandler(object):
...
def __init__(self, application, request, **kwargs):
...
</code></pre>
<p>When you inherit from <code>RequestHandler</code>, you either:</p>
<p>a) do not override <code>__init__</code> (i.e. do not provide your own constructor),
or
b) override <code>__init__</code> (provide your own constructor), in which case your constructor should have the same signature, since the framework will call it.</p>
| 0 |
2016-09-21T06:46:37Z
|
[
"python",
"inheritance",
"tornado"
] |
How to do this Class inheritance in Python?
| 39,609,108 |
<p>I have a Python/<a href="https://github.com/tornadoweb/tornado" rel="nofollow">Tornado</a> application that responds to HTTP requests with the following 3 classes:</p>
<pre><code>import tornado.web
class MyClass1(tornado.web.RequestHandler):
x = 1
y = 2
def my_method1(self):
print "Hello World"
class MyClass2(MyClass1):
@tornado.web.authenticated
def get(self):
#Do Something 1
pass
@tornado.web.authenticated
def post(self):
#Do Something 2
pass
class MyClass3(MyClass2):
pass
</code></pre>
<p>I would like all instances of <code>MyClass2</code> to have an instance variable <code>m</code> set to the integer 3. But any instances of <code>MyClass3</code> should override that and have <code>m</code> set to the integer 4. How can I do it?</p>
<p>I tried adding the following constructors to <code>MyClass2</code> and <code>MyClass3</code> respectively, but then when I try to create an instance of <code>MyClass3</code>, I get the following error: <code>TypeError: __init__() takes exactly 1 argument (3 given)</code></p>
<p><code>MyClass2.__init__()</code>:</p>
<pre><code>def __init__(self):
self.m = 3 # This value will be overridden by a subclass
</code></pre>
<p><code>MyClass3.__init__()</code>:</p>
<pre><code>def __init__(self):
self.m = 4
</code></pre>
| 0 |
2016-09-21T06:38:47Z
| 39,609,290 |
<p>The <code>tornado.web.RequestHandler</code> already has an <code>__init__</code> method, and Tornado expects it to take two positional arguments (plus the <code>self</code> argument of a bound method). Your overridden versions don't take these.</p>
<p>Update your <code>__init__</code> methods to take <em>arbitrary extra arguments</em> and pass these on via <a href="https://docs.python.org/2/library/functions.html#super" rel="nofollow"><code>super()</code></a>:</p>
<pre><code>class MyClass2(MyClass1):
def __init__(self, *args, **kwargs):
super(MyClass2, self).__init__(*args, **kwargs)
self.m = 3
@tornado.web.authenticated
def get(self):
#Do Something 1
pass
@tornado.web.authenticated
def post(self):
#Do Something 2
pass
class MyClass3(MyClass2):
def __init__(self, *args, **kwargs):
super(MyClass3, self).__init__(*args, **kwargs)
self.m = 4
</code></pre>
<p>You could also use the <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.RequestHandler.initialize" rel="nofollow"><code>RequestHandler.initialize()</code> method</a> to set up per-request instance variables; you <em>may</em> have to use <code>super()</code> again to pass on the call to the parent class, if your parent class <code>initialize()</code> does <em>more</em> work than just set <code>self.m</code>.</p>
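The pattern can be checked without Tornado installed; in this sketch, `Base` is only a stand-in for `RequestHandler`'s constructor signature:

```python
class Base(object):
    # Stands in for tornado.web.RequestHandler, whose __init__
    # takes (application, request, **kwargs).
    def __init__(self, application, request, **kwargs):
        self.application = application
        self.request = request

class Handler2(Base):
    def __init__(self, *args, **kwargs):
        super(Handler2, self).__init__(*args, **kwargs)
        self.m = 3

class Handler3(Handler2):
    def __init__(self, *args, **kwargs):
        super(Handler3, self).__init__(*args, **kwargs)
        self.m = 4   # runs after Handler2.__init__, so it wins

print(Handler2("app", "req").m)  # 3
print(Handler3("app", "req").m)  # 4
```

Because each override forwards `*args, **kwargs` up the chain, the base constructor still receives the arguments the framework passes, and each subclass gets the last word on `m`.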
| 4 |
2016-09-21T06:48:41Z
|
[
"python",
"inheritance",
"tornado"
] |
Login to Odoo from external php system
| 39,609,130 |
<p>I have a requirement where I need to have a redirect from the external php system to Odoo, and the user should be logged in as well. I thought of the following two ways to get this done:</p>
<ol>
<li><p>A url redirection from the php side which calls a particular controller, and passes the credentials along with the url, which is not a secure option for obvious reasons</p></li>
<li><p>A call of method using xmlrpc from php, and pass the necessary arguments along from php, use the arguments to sign in and then in the method over here a call for redirect is made. Will have to check further whether this method will work, as the controller and normal functions work in different ways when it comes to redirections within odoo.</p></li>
</ol>
<p>Please suggest as to which way would be better or are there any other ways to get this done which might be simpler. And, would it make sense to add a new method in the openerp/service/common.py and call that method, and then will it be possible to redirect to the odoo logged in page from there?</p>
<p>Hoping for inputs on the above, and I also hope that this helps out with other external system integration queries which are frequent in Odoo.</p>
<p>Thanks And Regards,
Yaseen Shareef</p>
| 1 |
2016-09-21T06:39:58Z
| 39,617,759 |
<p>In your php code you could make a jsonrpc call to <code>/web/session/authenticate</code> and receive the session_id in the response. You could pass the session_id as the hash of your url in your redirect. Create a page in odoo that uses javascript to read the hash and write the cookie <code>"session_id=733a54f4663629ffb89d3895f357f6b1715e8666"</code> (obviously an example) to your browser on your odoo page. At this point you should be able to navigate in odoo as the user you authenticated as in your php code. </p>
<pre><code> from requests import Request,Session
import json
base_url = "127.0.0.1:8069"
url = "%s/web/session/authenticate" % base_url
db = <db_name>
user = <login>
passwd = <password>
s = Session()
data = {
'jsonrpc': '2.0',
'params': {
'context': {},
'db': db,
'login': user,
'password': passwd,
},
}
headers = {
'Content-type': 'application/json'
}
req = Request('POST',url,data=json.dumps(data),headers=headers)
prepped = req.prepare()
resp = s.send(prepped)
r_data = json.loads(resp.text)
session_id = r_data['result']['session_id']
</code></pre>
<p>The example above uses Python's requests library, which I know is not PHP; I may update this post later for PHP. The principle still stands, however; just convert it to PHP.</p>
<p>How you pass this session_id with your redirect is up to you. There are security concerns which you will need to be wary of. So be careful not to pass the session_id insecurely or someone could sniff it and become your logged in user.</p>
<p>Here is an untested PHP example; you will have to create a JSON-encoded string similar to the Python example above. Hope this helps.</p>
<pre><code>$data = <your_json_data>
$data_string = json_encode($data);
$ch = curl_init('http://127.0.0.1:8069/web/session/authenticate');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
'Content-Type: application/json',
'Content-Length: ' . strlen($data_string))
);
$result = curl_exec($ch);
</code></pre>
<p>Once you have your <code>session_id</code> you will redirect to your odoo server to a route which is handled by a controller in your addon.</p>
<p>Controller</p>
<pre><code>imports ...
class MyAddon(http.Controller):
@http.route('/path/for/controller', type='http', auth='public', website=True)
def ext_login_redirect(self, **kw):
return http.request.render('myaddon.template',{})
</code></pre>
<p>The important part is that the hash in the url contains the session_id you obtained in your php.</p>
<p>Your template <code>your_template.xml</code></p>
<pre><code><openerp>
<data>
<template id="template" name="Redirect Template" page="True">
document.cookie = "session_id=" + window.location.hash.substring(1);
window.location = "http://127.0.0.1:8069/web";
</template>
</data>
</openerp>
</code></pre>
| 2 |
2016-09-21T13:19:44Z
|
[
"python",
"openerp",
"xml-rpc",
"odoo-9"
] |
Django Ajax response date and time format
| 39,609,234 |
<p>Got a problem... </p>
<p>I use <code>Django</code>, <code>SQLite</code>, <code>jquery</code> and <code>AJAX</code>.</p>
<p>The problem is that when I get date and time from my database, it looks weird.</p>
<p>Is there any way to display it normally, as <code>dd/mm/yyyy HH:MM</code> ?</p>
<p><strong>Models.py</strong></p>
<pre><code>class QueryHistory(models.Model) :
userID = models.CharField(max_length=100)
date = models.DateTimeField(auto_now_add=True, blank=True)
</code></pre>
<p><strong>View.py</strong></p>
<pre><code>queries_list = serializers.serialize(
'json',
(QueryHistory.objects.filter(
userID = request.session['user_id']
).order_by('-id')[:5])
)
return HttpResponse(json.dumps(queries_list), content_type="application/json")
</code></pre>
<p><strong>Js.js</strong></p>
<pre><code>success : function(response) {
var queries_list = jQuery.parseJSON(response);
console.log(response);
}
</code></pre>
<blockquote>
<p><strong>Result:</strong> 2016-09-21T06:43:26.693Z</p>
<p><strong>Should be:</strong> 21/09/2016 06:43</p>
</blockquote>
| 0 |
2016-09-21T06:45:54Z
| 39,610,860 |
<p>It's not weird, it's ISO 8601 and it's not a good idea to change it. But you can by defining your own encoder:</p>
<pre><code>import json
from datetime import date
from django.forms.models import model_to_dict
from django.core.serializers.json import DjangoJSONEncoder
class MyEncoder(DjangoJSONEncoder):
def default(self, obj):
if isinstance(obj, date):
return obj.strftime('%d.%m.%Y')
return super(MyEncoder, self).default(obj)
json.dumps([model_to_dict(o) for o in QueryHistory.objects.all()], cls=MyEncoder)
</code></pre>
<p>Or by your own serializer:</p>
<pre><code>import json
from datetime import date
from django.core.serializers.python import Serializer
class MySerializer(Serializer):
def handle_field(self, obj, field):
value = field.value_from_object(obj)
if isinstance(value, date):
self._current['date'] = value.strftime('%d.%m.%Y')
else:
super(MySerializer, self).handle_field(obj, field)
serializer = MySerializer()
queries_list = serializer.serialize(QueryHistory.objects.all())
json.dumps(queries_list)
</code></pre>
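The same idea works with the standard library alone; this sketch subclasses `json.JSONEncoder` (no Django required) and emits the `dd/mm/yyyy HH:MM` format the question asks for (the sample record is made up):

```python
import json
from datetime import datetime

class DateTimeEncoder(json.JSONEncoder):
    # json.dumps calls default() only for objects it cannot
    # serialize on its own, such as datetime instances.
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.strftime('%d/%m/%Y %H:%M')
        return super(DateTimeEncoder, self).default(obj)

record = {'userID': '42', 'date': datetime(2016, 9, 21, 6, 43, 26)}
print(json.dumps(record, cls=DateTimeEncoder))
# {"userID": "42", "date": "21/09/2016 06:43"}
```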
| 0 |
2016-09-21T08:07:54Z
|
[
"jquery",
"python",
"ajax",
"django",
"response"
] |
Django Ajax response date and time format
| 39,609,234 |
<p>Got a problem... </p>
<p>I use <code>Django</code>, <code>SQLite</code>, <code>jquery</code> and <code>AJAX</code>.</p>
<p>The problem is that when I get date and time from my database, it looks weird.</p>
<p>Is there any way to display it normally, as <code>dd/mm/yyyy HH:MM</code> ?</p>
<p><strong>Models.py</strong></p>
<pre><code>class QueryHistory(models.Model) :
userID = models.CharField(max_length=100)
date = models.DateTimeField(auto_now_add=True, blank=True)
</code></pre>
<p><strong>View.py</strong></p>
<pre><code>queries_list = serializers.serialize(
'json',
(QueryHistory.objects.filter(
userID = request.session['user_id']
).order_by('-id')[:5])
)
return HttpResponse(json.dumps(queries_list), content_type="application/json")
</code></pre>
<p><strong>Js.js</strong></p>
<pre><code>success : function(response) {
var queries_list = jQuery.parseJSON(response);
console.log(response);
}
</code></pre>
<blockquote>
<p><strong>Result:</strong> 2016-09-21T06:43:26.693Z</p>
<p><strong>Should be:</strong> 21/09/2016 06:43</p>
</blockquote>
| 0 |
2016-09-21T06:45:54Z
| 39,628,900 |
<p>I came up with my own solution. </p>
<p>To display the date in your own format, you just need to create a new <code>Date</code> object:</p>
<pre><code>var date = new Date();
</code></pre>
<p>After that we pass the date from our response into the constructor, so change it to:</p>
<pre><code>var date = new Date(ourAjaxResponse.date);
</code></pre>
<p>Now if you try to display it, you will get the standard JavaScript date format:</p>
<blockquote>
<p>Wed Mar 25 2015 13:00:00 GMT+1300 (New Zealand ())</p>
</blockquote>
<p>JavaScript has a couple of methods that allow us to get the hours, minutes and everything else we need to display the date as we wish.</p>
<pre><code>var date = new Date(ourAjaxResponse.date);
var day = date.getDate();
var mnth = date.getMonth();
var year = date.getFullYear();
var hrs = date.getHours();
var mnts = date.getMinutes();
</code></pre>
<p><code>var mnth</code> will be a number from 0 to 11 (<code>getMonth()</code> is zero-based), so you can use it to index an array with the month names in your own language:</p>
<pre><code>var monthNames = [
"ЯнваÑÑ", "ФевÑалÑ", "ÐаÑÑа",
"ÐпÑелÑ", "ÐаÑ", "ÐÑнÑ", "ÐÑлÑ",
"ÐвгÑÑÑа", "СенÑÑбÑÑ", "ÐкÑÑбÑÑ",
"ÐоÑбÑÑ", "ÐекабÑÑ"
]; // For Example: in Russian
var monthNames = [
"January", "February", "March",
"April", "May", "June", "July",
"August", "September", "October",
"November", "December"
]; //or in English
</code></pre>
<p><strong>Final result:</strong> </p>
<p>I have function that gets date variable:</p>
<pre><code>function get_date(date){
var date = new Date(date);
var day = date.getDate();
var monthIndex = date.getMonth();
var year = date.getFullYear();
var hours = addZero(date.getHours());//addZero() function described below
var minutes = addZero(date.getMinutes());
return "<i class='fa fa-calendar-o' aria-hidden='true'></i> " + day + " " + monthNames[monthIndex] + " " + year + " <i class='fa fa-clock-o' aria-hidden='true'></i> " + hours + ":" + minutes;
}
</code></pre>
<p>Another small function to add zero in front of hours and minutes, because if these are less than 10, the result will be </p>
<blockquote>
<p>8:21 or 19:8 instead of 08:21 or 19:08 </p>
</blockquote>
<pre><code>function addZero(i) {
if (i < 10) {
i = "0" + i;
}
return i;
}
</code></pre>
<p>Now, wherever you want in your code you can call this function and pass a date to it from your server or any other source, and you will always get the same result:</p>
<pre><code>console.log(get_date(ourAjaxResponse.date));
</code></pre>
<blockquote>
<p>22 September 2016 01:16</p>
</blockquote>
<p>if you change the <code>return</code> in the <code>get_date()</code> function to:</p>
<pre><code>return day + "/" + addZero(monthIndex + 1) + "/" + year + " " + hours + ":" + minutes;
</code></pre>
<blockquote>
<p>22/09/2016 01:16</p>
</blockquote>
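<p>As a side note (my own addition, not part of the original answer): on modern engines <code>String.prototype.padStart</code> can replace <code>addZero</code>, so the whole formatter shrinks to a few lines. The function name here is hypothetical:</p>

```javascript
// Same day/month/year hours:minutes layout as get_date(), but with a numeric month.
function get_date_native(dateString) {
  var d = new Date(dateString);
  var pad = function (n) { return String(n).padStart(2, "0"); };
  // getMonth() is zero-based, hence the + 1 for a numeric month
  return pad(d.getDate()) + "/" + pad(d.getMonth() + 1) + "/" + d.getFullYear() +
         " " + pad(d.getHours()) + ":" + pad(d.getMinutes());
}
```

<p>Note that a date string without a timezone offset is parsed as local time on modern engines, so the printed hours match the user's clock.</p>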
| 0 |
2016-09-22T01:31:50Z
|
[
"jquery",
"python",
"ajax",
"django",
"response"
] |
Pandas how to split dataframe by column by interval
| 39,609,391 |
<p>I have a gigantic dataframe with a datetime type column called <code>dt</code>, the data frame is sorted based on <code>dt</code> already. I want to split the dataframe into several dataframes based on <code>dt</code>, each dataframe contains rows within <code>1 hr</code> range.</p>
<p>Split</p>
<pre><code> dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
3 20160811 12:36 d
4 20160811 12:52 e
5 20160811 14:32 f
</code></pre>
<p>into </p>
<pre><code> dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
dt text
0 20160811 12:36 d
1 20160811 12:52 e
dt text
0 20160811 14:32 f
</code></pre>
| 4 |
2016-09-21T06:53:53Z
| 39,609,439 |
<p>You need to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by the difference from the first value of column <code>dt</code>, converted to hours with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a>:</p>
<pre><code>S = pd.to_datetime(df.dt)
for i, g in df.groupby([(S - S[0]).astype('timedelta64[h]')]):
print (g.reset_index(drop=True))
dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
dt text
0 20160811 12:36 d
1 20160811 12:52 e
dt text
0 20160811 14:32 f
</code></pre>
<p><code>List comprehension</code> solution:</p>
<pre><code>S = pd.to_datetime(df.dt)
print ((S - S[0]).astype('timedelta64[h]'))
0 0.0
1 0.0
2 0.0
3 1.0
4 1.0
5 3.0
Name: dt, dtype: float64
L = [g.reset_index(drop=True) for i, g in df.groupby([(S - S[0]).astype('timedelta64[h]')])]
print (L[0])
dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
print (L[1])
dt text
0 20160811 12:36 d
1 20160811 12:52 e
print (L[2])
dt text
0 20160811 14:32 f
</code></pre>
<hr>
<p>Old solution, which split by <code>hour</code>:</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html" rel="nofollow"><code>dt.hour</code></a>, but first you need to convert <code>dt</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a>:</p>
<pre><code>for i, g in df.groupby([pd.to_datetime(df.dt).dt.hour]):
print (g.reset_index(drop=True))
dt text
0 20160811 11:05 a
1 20160811 11:35 b
dt text
0 20160811 12:03 c
1 20160811 12:36 d
2 20160811 12:52 e
dt text
0 20160811 14:32 f
</code></pre>
<p><code>List comprehension</code> solution:</p>
<pre><code>L = [g.reset_index(drop=True) for i, g in df.groupby([pd.to_datetime(df.dt).dt.hour])]
print (L[0])
dt text
0 20160811 11:05 a
1 20160811 11:35 b
print (L[1])
dt text
0 20160811 12:03 c
1 20160811 12:36 d
2 20160811 12:52 e
print (L[2])
dt text
0 20160811 14:32 f
</code></pre>
<hr>
<p>Or use <code>list comprehension</code> with converting column <code>dt</code> to <code>datetime</code>:</p>
<pre><code>df.dt = pd.to_datetime(df.dt)
L =[g.reset_index(drop=True) for i, g in df.groupby([df['dt'].dt.hour])]
print (L[1])
dt text
0 2016-08-11 12:03:00 c
1 2016-08-11 12:36:00 d
2 2016-08-11 12:52:00 e
print (L[2])
dt text
0 2016-08-11 14:32:00 f
</code></pre>
<hr>
<p>If need split by <code>date</code>s and <code>hour</code>s:</p>
<pre><code>#changed dataframe for testing
print (df)
dt text
0 20160811 11:05 a
1 20160812 11:35 b
2 20160813 12:03 c
3 20160811 12:36 d
4 20160811 12:52 e
5 20160811 14:32 f
serie = pd.to_datetime(df.dt)
for i, g in df.groupby([serie.dt.date, serie.dt.hour]):
print (g.reset_index(drop=True))
dt text
0 20160811 11:05 a
dt text
0 20160811 12:36 d
1 20160811 12:52 e
dt text
0 20160811 14:32 f
dt text
0 20160812 11:35 b
dt text
0 20160813 12:03 c
</code></pre>
| 3 |
2016-09-21T06:56:35Z
|
[
"python",
"python-2.7",
"pandas",
"numpy",
"scipy"
] |
Pandas how to split dataframe by column by interval
| 39,609,391 |
<p>I have a gigantic dataframe with a datetime type column called <code>dt</code>, the data frame is sorted based on <code>dt</code> already. I want to split the dataframe into several dataframes based on <code>dt</code>, each dataframe contains rows within <code>1 hr</code> range.</p>
<p>Split</p>
<pre><code> dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
3 20160811 12:36 d
4 20160811 12:52 e
5 20160811 14:32 f
</code></pre>
<p>into </p>
<pre><code> dt text
0 20160811 11:05 a
1 20160811 11:35 b
2 20160811 12:03 c
dt text
0 20160811 12:36 d
1 20160811 12:52 e
dt text
0 20160811 14:32 f
</code></pre>
| 4 |
2016-09-21T06:53:53Z
| 39,609,723 |
<p>Take the difference of each date from the first date and group by <code>total_seconds()</code> floored to whole hours:</p>
<pre><code>df.groupby((df.dt - df.dt[0]).dt.total_seconds() // 3600,
as_index=False).apply(pd.DataFrame.reset_index, drop=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/9MxQ6.png" rel="nofollow"><img src="http://i.stack.imgur.com/9MxQ6.png" alt="enter image description here"></a></p>
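<p>To make the bucketing explicit, here is the same idea in plain Python (my own illustration, standard library only): group rows by whole hours elapsed since the first timestamp.</p>

```python
from datetime import datetime
from itertools import groupby

rows = [
    ("20160811 11:05", "a"), ("20160811 11:35", "b"), ("20160811 12:03", "c"),
    ("20160811 12:36", "d"), ("20160811 12:52", "e"), ("20160811 14:32", "f"),
]

first = datetime.strptime(rows[0][0], "%Y%m%d %H:%M")

def bucket(row):
    # whole hours elapsed since the first row, same as the // 3600 above
    delta = datetime.strptime(row[0], "%Y%m%d %H:%M") - first
    return int(delta.total_seconds() // 3600)

# rows are already sorted, so grouping consecutive buckets is enough
groups = [list(g) for _, g in groupby(rows, key=bucket)]
```

<p>Each entry in <code>groups</code> corresponds to one of the sub-dataframes above.</p>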
| 2 |
2016-09-21T07:11:50Z
|
[
"python",
"python-2.7",
"pandas",
"numpy",
"scipy"
] |
Pandas: remove encoding from the string
| 39,609,426 |
<p>I have the following data frame:</p>
<pre><code> str_value
0 Mock%20the%20Week
1 law
2 euro%202016
</code></pre>
<p>There are many such special characters such as <code>%20%</code>, <code>%2520</code>, etc..How do I remove them all. I have tried the following but the dataframe is large and I am not sure how many such different characters are there.</p>
<pre><code>dfSearch['str_value'] = dfSearch['str_value'].str.replace('%2520', ' ')
dfSearch['str_value'] = dfSearch['str_value'].str.replace('%20', ' ')
</code></pre>
| 3 |
2016-09-21T06:56:08Z
| 39,609,639 |
<p>You can use the <code>urllib</code> library and apply it using the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html"><code>map</code></a> method of a series.
Example - </p>
<pre><code>In [23]: import urllib
In [24]: dfSearch["str_value"].map(lambda x:urllib.unquote(x).decode('utf8'))
Out[24]:
0 Mock the Week
1 law
2 euro 2016
</code></pre>
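<p>For Python 3 (my own addition; the snippet above targets Python 2), the same function lives in <code>urllib.parse</code>, and a double-encoded value such as <code>%2520</code> needs two passes:</p>

```python
from urllib.parse import unquote

def decode(value):
    # unquote once handles %20; a second pass handles double-encoded %2520
    return unquote(unquote(value))

print(decode("Mock%20the%20Week"))
print(decode("euro%25202016"))
```

<p>With pandas this would then be, hypothetically, <code>dfSearch['str_value'].map(decode)</code>.</p>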
| 7 |
2016-09-21T07:07:57Z
|
[
"python",
"python-2.7",
"pandas"
] |
how to read binary after resize image on python?
| 39,609,455 |
<p>I use two packages:
<code>from PIL import Image</code> and <code>from resizeimage import resizeimage</code>. When I use these packages I have to save the resized image to another file, but I need the image binary after resizing, without saving to another file.</p>
<p>code :</p>
<pre><code>with open(imgpath, 'r+b') as f:
with Image.open(f) as image:
cover = resizeimage.resize_cover(image, [100, 100])
img = bytearray(cover.read())
</code></pre>
<p>I need to read the binary like this: <code>bytearray(cover.read())</code>, but this code does not work.
How can I read the binary after resizing the image?</p>
| 0 |
2016-09-21T06:57:15Z
| 39,610,574 |
<p>Please use this code (resize with PIL, then read the bytes back from an in-memory buffer):</p>
<pre><code>import io
from PIL import Image

basewidth = 100
hsize = 100
img = Image.open(imgpath)
img = img.resize((basewidth, hsize), Image.ANTIALIAS)
byte_io = io.BytesIO()
img.save(byte_io, format='PNG')
byte_io = byte_io.getvalue()
print byte_io
</code></pre>
| 0 |
2016-09-21T07:53:04Z
|
[
"python",
"python-2.7",
"opencv"
] |
How to identify ordered substrings in string using python?
| 39,609,537 |
<p>I am new to Python, so my question may be naive, but I would appreciate any suggestions that help me get to a solution.</p>
<p>For example, I have the string "aeshfytifghkjgiomntrop" and I want to find the ordered substrings in it. How should I approach this?</p>
| 0 |
2016-09-21T07:01:56Z
| 39,609,623 |
<p>Scan the string using <code>enumerate</code> to get each index and character.
Print the current substring whenever a character is smaller than the previous one, and start a new substring there:</p>
<pre><code>s = "aeshfytifghkjgiomntrop"

prev_c = None
prev_i = 0
for i, c in enumerate(s):
    if prev_c is not None and prev_c > c:
        print(s[prev_i:i])
        prev_i = i
    prev_c = c
print(s[prev_i:])
</code></pre>
<p>result:</p>
<pre><code>aes
h
fy
t
i
fghk
j
gio
mnt
r
op
</code></pre>
<p>(No particular Python magic here, could be done simply even with C)</p>
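<p>For reference, the same scan wrapped in a function (my own variant; it avoids comparing the first character against <code>None</code>, so it also runs unchanged on Python 3):</p>

```python
def ordered_runs(s):
    # Split s into maximal runs of non-decreasing characters.
    if not s:
        return []
    runs, start = [], 0
    for i in range(1, len(s)):
        if s[i] < s[i - 1]:      # order breaks here: close the current run
            runs.append(s[start:i])
            start = i
    runs.append(s[start:])       # the trailing run
    return runs

print(ordered_runs("aeshfytifghkjgiomntrop"))
```
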
| 1 |
2016-09-21T07:07:01Z
|
[
"python",
"string"
] |
How to set width of Treeview in tkinter of python
| 39,609,865 |
<p>Recently, I use <code>tkinter</code> <code>TreeView</code> to show many columns in <code>Python</code>. Specifically, 49 columns data in a treeview. I use <code>grid</code> to manage my widgets.</p>
<p>I found out the width of the treeview only depends on the width of columns.</p>
<p>My question is, How can I set the width of the Treeview. (The default width is the summation of all columns' width)</p>
<p>When all my column widths are set to 20. This is 49 columns. :)
<a href="http://i.stack.imgur.com/zvqmh.png" rel="nofollow"><img src="http://i.stack.imgur.com/zvqmh.png" alt="enter image description here"></a></p>
<p>Here is my main code :</p>
<pre><code>frame = Frame(self.master, width = 1845, height = 670)
frame.place(x = 20, y = 310)
tree_view = Treeview(frame, height = 33, selectmode = "extended")
for i in range(len(id_list)):
tree_view.column(id_list[i], width = width_list[i], anchor = CENTER)
tree_view.heading(id_list[i] , text = text_list[i])
tree_view["show"] = "headings"
# Omit the declaration of scrollbar
tree_view['yscroll'] = self.y_scollbar.set
tree_view['xscroll'] = self.x_scollbar.set
tree_view.grid(row = 0, column = 0, sticky = NSEW)
for item in detail_info_list:
tree_view.insert("",0, text = str(item.id), value=(...))
</code></pre>
<p><code>id_list</code>,<code>width_list</code>,<code>text_list</code> is used to store columns information.
<code>detail_info_list</code> is to store the data showed in the Treeview.</p>
<p>My goal is that when I define a large width (for example, 3000) for some column, the treeview still shows the data as I expect. But the treeview extends across my screen and the horizontal scrollbar can't slide.</p>
<p>When 17th column is set as 1500:
<a href="http://i.stack.imgur.com/A3rOb.png" rel="nofollow"><img src="http://i.stack.imgur.com/A3rOb.png" alt="enter image description here"></a></p>
<p>I can't see my buttons, and I can't slide the horizontal scrollbar.</p>
<p>Also, I have tried some solutions.</p>
<ul>
<li>Define a frame to be the Treeview parent, and define the width and height to constrain the Treeview. But it didn't work.</li>
<li>I looked up some documents and searched for this question. I didn't find any solution that sets the width to a constant number.</li>
</ul>
| 0 |
2016-09-21T07:19:11Z
| 39,629,542 |
<p>After trying many ways to solve this problem, I found a simple but silly solution. After initializing the width of the <code>Treeview</code>, you just need to resize the width of the columns, and the width of the <code>Treeview</code> will NOT change.</p>
<p>Perhaps this is a constraint of Tcl/Tk.</p>
| 0 |
2016-09-22T02:57:05Z
|
[
"python",
"tkinter",
"treeview"
] |
Create a user defined number of loops for quiz in python
| 39,609,876 |
<p>I need the questions to be asked a user-defined number of times, and to keep track of the attempts.</p>
<pre><code>def main():
print("Thank you for taking the Quiz")
score= 0
question1 = input("How many donuts are in a dozen? ")
if question1 == "12":
print("Correct")
score = score +1
else:
print("Incorrect")
question2 = input("How many mb are in a gb? ")
if question2 == "1024":
print("Correct")
score = score +1
else:
print("Incorrect")
question3 = input("How many pokemon are in gen 1? ")
if question3 == "151":
print("Correct")
score = score +1
else:
print("Incorrect")
print("The final score is: " + str(score), "out of 3")
main()
</code></pre>
| -4 |
2016-09-21T07:19:42Z
| 39,609,967 |
<p>I would do something like this (move the questions into their own function that returns the score, then loop a user-defined number of times):</p>
<pre><code>def main():
    print("Thank you for taking the Quiz")
    times = input("How many times will you play? ")
    for i in range(int(times)):
        print("The final score is: " + str(question()), "out of 3")

def question():
    score = 0
    question1 = input("How many donuts are in a dozen? ")
    if question1 == "12":
        print("Correct")
        score = score + 1
    else:
        print("Incorrect")
    question2 = input("How many mb are in a gb? ")
    if question2 == "1024":
        print("Correct")
        score = score + 1
    else:
        print("Incorrect")
    question3 = input("How many pokemon are in gen 1? ")
    if question3 == "151":
        print("Correct")
        score = score + 1
    else:
        print("Incorrect")
    return score

main()
</code></pre>
| 0 |
2016-09-21T07:23:44Z
|
[
"python",
"python-3.x"
] |
I got an error while using "git push heroku master" command
| 39,609,912 |
<p>When I try to push a Python project to Heroku using the <code>git push heroku master</code>
command, I get an error like this. </p>
<pre><code>Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
</code></pre>
<p>How can I solve this error?</p>
| 0 |
2016-09-21T07:21:26Z
| 39,611,081 |
<p>You have to add your ssh keys to heroku by running <code>heroku keys:add</code>. Heroku can then verify you and allow access to your repository on heroku. More on this here: <a href="https://devcenter.heroku.com/articles/keys" rel="nofollow">https://devcenter.heroku.com/articles/keys</a></p>
| 1 |
2016-09-21T08:18:32Z
|
[
"python",
"git",
"heroku"
] |
Tokenizing a concatenated string
| 39,609,925 |
<p>I got a set of strings that contain concatenated words like the followings:</p>
<pre><code>longstring (two English words)
googlecloud (a name and an English word)
</code></pre>
<p>When I type these terms into Google, it recognizes the words with "did you mean?" ("long string", "google cloud"). I need similar functionality in my application.</p>
<p>I looked into the options provided by Python and ElasticSearch. All the tokenizing examples I found are based on whitespace, upper case, special characters etc. </p>
<p>What are my options provided the strings are in English (but they may contain names)? It doesn't have to be on a specific technology.</p>
<p>Can I get this done with Google BigQuery?</p>
| 1 |
2016-09-21T07:21:54Z
| 39,610,850 |
<p>Can you also roll your own implementation? I am thinking of an algorithm like this:</p>
<ol>
<li>Get a dictionary with all words you want to distinguish</li>
<li>Build a data structure that allows quick lookup (I am thinking of a <a href="https://en.wikipedia.org/wiki/Trie" rel="nofollow"><code>trie</code></a>)</li>
<li>Try to find the first word (starting with one character and increasing it until a word is found); if found, use the remaining string and do the same until nothing is left. If it doesn't find anything, backtrack and extend the previous word. </li>
</ol>
<p>This should be OK-ish if the string can be split, but it will try all possibilities if it's gibberish. Of course, it depends on how big your dictionary is going to be. But this was just a quick thought, maybe it helps.</p>
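<p>A minimal sketch of this idea in Python (my own illustration: a plain set stands in for the trie, and memoisation handles the backtracking):</p>

```python
from functools import lru_cache

def split_words(s, dictionary):
    """Return s split into dictionary words, or None if impossible."""
    words = set(dictionary)
    max_len = max(map(len, words)) if words else 0

    @lru_cache(maxsize=None)
    def solve(i):
        if i == len(s):
            return ()
        # prefer longer matches first; the loop backtracks if they dead-end
        for j in range(min(len(s), i + max_len), i, -1):
            if s[i:j] in words:
                rest = solve(j)
                if rest is not None:
                    return (s[i:j],) + rest
        return None

    parts = solve(0)
    return list(parts) if parts is not None else None

print(split_words("googlecloud", ["google", "cloud", "goo"]))
```

<p>A real trie would make each prefix lookup cheaper, but for a rough prototype the set lookup behaves the same.</p>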
| 1 |
2016-09-21T08:07:21Z
|
[
"python",
"elasticsearch",
"machine-learning",
"google-bigquery"
] |
Tokenizing a concatenated string
| 39,609,925 |
<p>I got a set of strings that contain concatenated words like the followings:</p>
<pre><code>longstring (two English words)
googlecloud (a name and an English word)
</code></pre>
<p>When I type these terms into Google, it recognizes the words with "did you mean?" ("long string", "google cloud"). I need similar functionality in my application.</p>
<p>I looked into the options provided by Python and ElasticSearch. All the tokenizing examples I found are based on whitespace, upper case, special characters etc. </p>
<p>What are my options provided the strings are in English (but they may contain names)? It doesn't have to be on a specific technology.</p>
<p>Can I get this done with Google BigQuery?</p>
| 1 |
2016-09-21T07:21:54Z
| 39,641,912 |
<p>If you do choose to solve this with BigQuery, then the following is a candidate solution:</p>
<ol>
<li><p>Load list of all possible English words into a table called <code>words</code>. For example, <a href="https://github.com/dwyl/english-words" rel="nofollow">https://github.com/dwyl/english-words</a> has list of ~350,000 words. There are other datasets (i.e. WordNet) freely available in Internet too.</p></li>
<li><p>Using Standard SQL, run the following query over list of candidates:</p></li>
</ol>
<pre><code>SELECT first, second FROM (
  SELECT word AS first, SUBSTR(candidate, LENGTH(word) + 1) AS second
  FROM dataset.words
  CROSS JOIN (
    SELECT candidate
    FROM UNNEST(["longstring", "googlecloud", "helloxiuhiewuh"]) candidate)
  WHERE STARTS_WITH(candidate, word))
WHERE second IN (SELECT word FROM dataset.words)
</code></pre>
<p>For this example it produces:</p>
<pre><code>Row first second
1 long string
2 google cloud
</code></pre>
<p>Even very big list of English words would be only couple of MBs, so the cost of this query is minimal. First 1 TB scan is free - which is good enough for about 500,000 scans on 2 MB table. After that each additional scan is 0.001 cents.</p>
| 1 |
2016-09-22T14:30:37Z
|
[
"python",
"elasticsearch",
"machine-learning",
"google-bigquery"
] |
How to add dynamically C function in embedded Python
| 39,610,280 |
<p>I declare a C function with the standard Python method prototype:</p>
<pre><code>static PyObject* MyFunction(PyObject* self, PyObject* args)
{
return Py_None ;
}
</code></pre>
<p>Now I want to add it into a dynamically loaded module</p>
<pre><code>PyObject *pymod = PyImport_ImportModule("mymodule");
PyObject_SetAttrString( pymod, "myfunction", ? );
</code></pre>
<p>How do I convert the C function into a callable PyObject?</p>
| -1 |
2016-09-21T07:38:19Z
| 39,611,394 |
<p>You need to construct a new <code>PyCFunctionObject</code> object from the <code>MyFunction</code>. Usually this is done under the hood using the module initialization code, but as you're now doing it the opposite way, you need to construct the <code>PyCFunctionObject</code> yourself, using the undocumented <code>PyCFunction_New</code> or <code>PyCFunction_NewEx</code>, and a suitable <a href="https://docs.python.org/3/c-api/structures.html#c.PyMethodDef" rel="nofollow"><code>PyMethodDef</code></a>:</p>
<pre><code>static PyMethodDef myfunction_def = {
"myfunction",
MyFunction,
METH_VARARGS,
"the doc string for myfunction"
};
...
// Use PyUnicode_FromString in Python 3.
PyObject* module_name = PyString_FromString("mymodule");
if (module_name == NULL) {
// error exit!
}
// this is adapted from code in
// Objects/moduleobject.c, for Python 3.3+ and perhaps 2.7
PyObject *func = PyCFunction_NewEx(&myfunction_def, pymod, module_name);
if (func == NULL) {
// error exit!
}
if (PyObject_SetAttrString(module, myfunction_def.ml_name, func) != 0) {
Py_DECREF(func);
// error exit!
}
Py_DECREF(func);
</code></pre>
<p>Again, this is not the preferred way to do things; usually a C extension creates concrete module objects (such as <code>_mymodule</code>) and <code>mymodule.py</code> would import <code>_mymodule</code> and put things into proper places.</p>
| 0 |
2016-09-21T08:35:13Z
|
[
"python",
"c",
"function"
] |
Invalid syntax error in for loop pandas
| 39,610,336 |
<p>I am new to pandas and While trying to iterate over the rows through a for loop I am getting a Invalid syntax error in pandas </p>
<p>Below is the code I tried </p>
<pre><code>for index, row in df.iterrows():
if ((df['c1'] == 'cond1')&(df['c2']=='cond2')):
df['c3']='cond2'
elif ((df['c1'] == 'cond3')&(df['c2']=='cond2')):
df['c3']='cond2'
elif ((df['c1'] == 'cond4')&(df['c2']=='cond2')):
df['c3']='cond2'
else:
df['c3']='Nan'
df
</code></pre>
| -2 |
2016-09-21T07:41:07Z
| 39,610,388 |
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>, but are you sure you always need the string <code>cond2</code> to go into column <code>c3</code>?</p>
<pre><code>df['c3']='Nan'
df.ix[(df['c1'] == 'cond1')&(df['c2']=='cond2'), 'df3'] = 'cond2'
df.ix[(df['c1'] == 'cond3')&(df['c2']=='cond2'), 'df3'] = 'cond2'
df.ix[(df['c1'] == 'cond4')&(df['c2']=='cond2'), 'df3'] = 'cond2'
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'c1':['cond1','cond2','cond3','cond4'],
'c2':['cond2','cond2','cond2','cond2']})
print (df)
c1 c2
0 cond1 cond2
1 cond2 cond2
2 cond3 cond2
3 cond4 cond2
#set to NaN value
df['c3']= np.nan
#set to string 'Nan'
#df['c3']= 'Nan'
df.ix[(df['c1'] == 'cond1')&(df['c2']=='cond2'), 'c3'] = 'a'
df.ix[(df['c1'] == 'cond3')&(df['c2']=='cond2'), 'c3'] = 'b'
df.ix[(df['c1'] == 'cond4')&(df['c2']=='cond2'), 'c3'] = 'c'
print (df)
c1 c2 c3
0 cond1 cond2 a
1 cond2 cond2 NaN
2 cond3 cond2 b
3 cond4 cond2 c
</code></pre>
<hr>
<p>The looping solution is very slow, but if you need it, assign through <code>df.loc</code> inside the loop (<code>iterrows</code> yields copies, so writing to <code>row</code> would not change the dataframe):</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'c1':['cond1','cond2','cond3','cond4'],
                   'c2':['cond2','cond2','cond2','cond2']})

print (df)
df['c3'] = ''
for index, row in df.iterrows():
    if (row['c1'] == 'cond1') & (row['c2'] == 'cond2'):
        df.loc[index, 'c3'] = 'a'
    elif (row['c1'] == 'cond3') & (row['c2'] == 'cond2'):
        df.loc[index, 'c3'] = 'b'
    elif (row['c1'] == 'cond4') & (row['c2'] == 'cond2'):
        df.loc[index, 'c3'] = 'c'
    else:
        df.loc[index, 'c3'] = np.nan
print (df)
c1 c2 c3
0 cond1 cond2 a
1 cond2 cond2 NaN
2 cond3 cond2 b
3 cond4 cond2 c
</code></pre>
| 0 |
2016-09-21T07:43:48Z
|
[
"python",
"pandas",
"for-loop"
] |
Django rest framework one to one relation Update serializer
| 39,610,427 |
<p>I'm a beginner with the Django REST Framework. I have had this problem for a long time and have tried to find a solution through many forums, but unfortunately I didn't succeed. I hope you can help me.</p>
<p>models.py</p>
<pre><code>from __future__ import unicode_literals
from django.contrib.auth.models import User
from django.db import models
class Account(models.Model):
my_user=models.OneToOneField(User,on_delete=models.CASCADE)
statut=models.CharField(max_length=80)
date=models.DateField(auto_now=True,auto_now_add=False)
def __unicode__(self):
return self.my_user.first_name
</code></pre>
<p>Now I want to update the Account serializer.
serializers.py</p>
<pre><code>class AccountUpdateSerializer(serializers.ModelSerializer):
username=serializers.CharField(source ='my_user.username')
class Meta:
model= Account
fields=['id','username','statut','date']
def update(self, instance, validated_data):
print(instance)
instance.statut = validated_data.get('statut', instance.statut)
instance.my_user.username=validated_data['username']
return instance
</code></pre>
<p>Traceback:
Environment:</p>
<pre><code>Request Method: PUT
Request URL: http://127.0.0.1:9000/api/account/edit/1/
Django Version: 1.9
Python Version: 2.7.6
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'project',
'sponsors',
'contacts',
'medias',
'conferencier',
'competition',
'poste',
'account']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
474. response = self.handle_exception(exc)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/views.py" in handle_exception
434. self.raise_uncaught_exception(exc)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
471. response = handler(request, *args, **kwargs)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/generics.py" in put
256. return self.update(request, *args, **kwargs)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/mixins.py" in update
70. self.perform_update(serializer)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/mixins.py" in perform_update
74. serializer.save()
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/serializers.py" in save
187. self.instance = self.update(self.instance, validated_data)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/rest_framework/serializers.py" in update
907. setattr(instance, attr, value)
File "/home/asus/Documents/Gsource/gsource/local/lib/python2.7/site-packages/django/db/models/fields/related_descriptors.py" in __set__
207. self.field.remote_field.model._meta.object_name,
Exception Type: ValueError at /api/account/edit/1/
Exception Value: Cannot assign "{u'username': u'kais'}": "Account.my_user" must be a "User" instance.
</code></pre>
| 0 |
2016-09-21T07:45:49Z
| 39,611,626 |
<p>Your <code>update</code> method is not called, because it is a method of the meta class of the serializer (<code>AccountUpdateSerializer.Meta</code>), not the serializer class <code>AccountUpdateSerializer</code> itself.</p>
<p>Here is how it should look:</p>
<pre><code>class AccountUpdateSerializer(serializers.ModelSerializer):
username=serializers.CharField(source ='my_user.username')
class Meta:
model= Account
fields=['id','username','statut','date']
def update(self, instance, validated_data):
print(instance)
instance.statut = validated_data.get('statut', instance.statut)
instance.my_user.username = validated_data['username']
return instance
</code></pre>
<p>(Or did you just post your code incorrectly?)</p>
| 1 |
2016-09-21T08:45:29Z
|
[
"python",
"django",
"django-rest-framework"
] |
Nesting item data in Scrapy
| 39,610,761 |
<p>I'm fairly new to Python and Scrapy and have issues wrapping my head around how to create nested JSON with the help of Scrapy.</p>
<p>Selecting the elements I want from HTML has not been a problem with the help of XPath Helper and some Googling. I am however not quite sure how I'm supposed to get the JSON structure that I want. </p>
<p>The JSON structure I desire would look like:</p>
<pre><code>{"menu": {
"Monday": {
"alt1": "Item 1",
"alt2": "Item 2",
"alt3": "Item 3"
},
"Tuesday": {
"alt1": "Item 1",
"alt2": "Item 2",
"alt3": "Item 3"
}
}}
</code></pre>
<p>The HTML looks like:</p>
<pre><code><ul>
<li class="title"><h2>Monday</h2></li>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<ul>
<li class="title"><h2>Tuesday</h2></li>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
</code></pre>
<p>I did find <a href="http://stackoverflow.com/a/25096896/6856987">http://stackoverflow.com/a/25096896/6856987</a>, I was however not able to adapt this to fit my needs. I would greatly appreciate a nudge in the right direction on how I would accomplish this.</p>
<p>Edit: With the nudge provided by Padraic I managed to get one step closer to what I want to accomplish. I've come up with the following, which is a slight improvement over my previous situation. The JSON is still not quite where I want it.</p>
<p>Scrapy spider:</p>
<pre><code>import scrapy
from dmoz.items import DmozItem
class DmozSpider(scrapy.Spider):
name = "dmoz"
start_urls = ['http://urlto.com']
def parse(self, response):
uls = response.xpath('//ul[position() >= 1 and position() < 6]')
item = DmozItem()
item['menu'] = {}
item['menu'] = {"restaurant": "name"}
for ul in uls:
item['menu']['restaurant']['dayOfWeek'] = ul.xpath("li/h2/text()").extract()
item['menu']['restaurant']['menuItem'] = ul.xpath("li/text()").extract()
yield item
</code></pre>
<p>Resulting JSON: </p>
<pre><code>[
{
"menu":{
"dayOfWeek":[
"Monday"
],
"menuItem":[
"Item 1",
"Item 2",
"Item 3"
]
}
},
{
"menu":{
"dayOfWeek":[
"Tuesday"
],
"menuItem":[
"Item 1",
"Item 2",
"Item 3"
]
}
}
]
</code></pre>
<p>It sure feels like I'm doing a thousand and one things wrong with this; hopefully someone more clever than me can point me in the right direction.</p>
| 2 |
2016-09-21T08:02:46Z
| 39,612,963 |
<p>You just need to find all the <code>ul</code>s and then extract and group their <code>li</code>s; an example using lxml below:</p>
<pre><code>from lxml import html
h = """<ul>
<li class="title"><h2>Monday</h2></li>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<ul>
<li class="title"><h2>Tuesday</h2></li>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>"""
tree = html.fromstring(h)
uls = tree.xpath("//ul")
data = {}
# iterate over all uls
for ul in uls:
# extract the ul's li's
lis = ul.xpath("li")
# use the h2 text as the key and all the text from the remaining as values
# with enumerate to add the alt logic
data[lis[0].xpath("h2")[0].text] = {"alt{}".format(i): node.text for i, node in enumerate(lis[1:], 1)}
print(data)
</code></pre>
<p>Which would give you:</p>
<pre><code>{'Monday': {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'},
'Tuesday': {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'}}
</code></pre>
<p>If you wanted to put it into a single comprehension:</p>
<pre><code>data = {lis[0].xpath("h2")[0].text:
{"alt{}".format(i): node.text for i, node in enumerate(lis[1:], 1)}
for lis in (ul.xpath("li") for ul in tree.xpath("//ul"))}
</code></pre>
<p>Working with your edited code in your question and following the same required output:</p>
<pre><code>def parse(self, response):
    uls = response.xpath('//ul[position() >= 1 and position() < 6]')
    item = DmozItem()
    # just create an empty dict
    item['menu'] = {}
    for ul in uls:
        # for each ul, add a key value pair {day: {alt_i: each li text, skipping the first}}
        item['menu'][ul.xpath("li/h2/text()").extract_first()]\
            = {"alt{}".format(i): text for i, text in enumerate(ul.xpath("li[position() > 1]/text()").extract(), 1)}
    # yield outside the loop
    yield item
</code></pre>
<p>That will give you data in one dict like:</p>
<pre><code>In [15]: d = {"menu":{'Monday': {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'},
'Tuesday': {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'}}}
In [16]: d["menu"]["Tuesday"]
Out[16]: {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'}
In [17]: d["menu"]["Monday"]
Out[17]: {'alt1': 'Item 1', 'alt2': 'Item 2', 'alt3': 'Item 3'}
In [18]: d["menu"]["Monday"]["alt1"]
Out[18]: 'Item 1'
</code></pre>
<p>That matches your original question's expected output more than your new one, but I see no advantage to the new logic adding <code>"dayOfWeek"</code> etc.</p>
| 0 |
2016-09-21T09:44:28Z
|
[
"python",
"json",
"scrapy"
] |
How to pass value from def to a view openerp
| 39,610,896 |
<p>This is my code: </p>
<pre><code>def view_purchase(self, cr, uid, ids, context=None):
return {
'type': 'ir.actions.act_window',
'name': 'diary_purchase',
'view_mode': 'form',
'view_type': 'form',
'context': "{'name': 'my purchase'}",
'res_model': 'diaries_purchases',
'target': 'current',
'flags': {'form': {'action_buttons': True}}
}
</code></pre>
<p>I need to send the value "my purchase" to the specific field called <code>name</code> of the view diaries_purchases. I guess this goes through the context, but my code does not work. Thanks for your help!</p>
| -1 |
2016-09-21T08:09:29Z
| 39,619,596 |
<p>In your destination model (the one you open with your <code>view_purchase()</code> function), access the context via <code>self.env.context</code>; there you should find your value. You should use a computed or default value on the field <code>name</code>.</p>
<pre><code>def _get_name(self):
    self.name = self.env.context.get('name')

name = fields.Char(string="Name", compute=_get_name, store=True)
</code></pre>
| 0 |
2016-09-21T14:36:32Z
|
[
"python",
"view",
"openerp"
] |
How to pass value from def to a view openerp
| 39,610,896 |
<p>This is my code: </p>
<pre><code>def view_purchase(self, cr, uid, ids, context=None):
return {
'type': 'ir.actions.act_window',
'name': 'diary_purchase',
'view_mode': 'form',
'view_type': 'form',
'context': "{'name': 'my purchase'}",
'res_model': 'diaries_purchases',
'target': 'current',
'flags': {'form': {'action_buttons': True}}
}
</code></pre>
<p>I need to send the value "my purchase" to the specific field called <code>name</code> of the view diaries_purchases. I guess this goes through the context, but my code does not work. Thanks for your help!</p>
| -1 |
2016-09-21T08:09:29Z
| 39,619,992 |
<p>Pass the context like this, by prefixing the field name with <code>default_</code></p>
<pre><code>def view_purchase(self, cr, uid, ids, context=None):
return {
'type': 'ir.actions.act_window',
'name': 'diary_purchase',
'view_mode': 'form',
'view_type': 'form',
'context': {'default_name': 'my purchase'},
'res_model': 'diaries_purchases',
'target': 'current',
'flags': {'form': {'action_buttons': True}}
}
</code></pre>
| 0 |
2016-09-21T14:55:28Z
|
[
"python",
"view",
"openerp"
] |
Python script to change dir for user
| 39,611,036 |
<p>I am trying to create a Python script <code>jump.py</code> that allows me to jump to a set of predefined dirs, but it looks like this is not possible, because after the script exits, the previous directory is restored:</p>
<pre><code>import os
print(os.getcwd())
os.chdir('..')
print(os.getcwd())
</code></pre>
<hr>
<pre><code>[micro]$ python jump.py
/home/iset/go/src/github.com/zyedidia/micro
/home/iset/go/src/github.com/zyedidia
[micro]$
</code></pre>
<p>Is it at all possible to land in the <code>[zyedidia]</code> dir after the script finishes?</p>
| 0 |
2016-09-21T08:15:56Z
| 39,612,023 |
<p>You can try this; a child process cannot change its parent shell's working directory, so the script instead replaces itself (via <code>os.execl</code>) with a new shell started in the changed directory:</p>
<pre><code>import os
print(os.getcwd())
os.chdir('..')
print(os.getcwd())
shell = os.environ.get('SHELL', '/bin/sh')
os.execl(shell, shell)
</code></pre>
| 0 |
2016-09-21T09:03:15Z
|
[
"python",
"shell",
"cd"
] |
Use multi-processing/threading to break numpy array operation into chunks
| 39,611,045 |
<p>I have a function defined which renders an MxN array.
The array is very large, hence I want to use the function to produce small arrays (M1xN, M2xN, M3xN --- MixN, M1+M2+M3+---+Mi = M) simultaneously using multi-processing/threading and eventually join these arrays to form the MxN array. As Mr. Boardrider rightfully suggested to provide a viable example, the following example would broadly convey what I intend to do </p>
<pre><code>import numpy as n
def mult(y,x):
r = n.empty([len(y),len(x)])
for i in range(len(r)):
r[i] = y[i]*x
return r
x = n.random.rand(10000)
y = n.arange(0,100000,1)
test = mult(y=y,x=x)
</code></pre>
<p>As the lengths of <code>x</code> and <code>y</code> increase the system will take more and more time. With respect to this example, I want to run this code such that if I have 4 cores, I can give quarter of the job to each, i.e give job to compute elements <code>r[0]</code> to <code>r[24999]</code> to the 1st core, <code>r[25000]</code> to <code>r[49999]</code> to the 2nd core, <code>r[50000]</code> to <code>r[74999]</code> to the 3rd core and <code>r[75000]</code> to <code>r[99999]</code> to the 4th core. Eventually club the results, append them to get one single array <code>r[0]</code> to <code>r[99999]</code>. </p>
<p>I hope this example makes things clear. If my problem is still not clear, please tell.</p>
| 8 |
2016-09-21T08:16:21Z
| 39,856,310 |
<p>The first thing to say is: if it's about multiple cores on the same processor, <code>numpy</code> is already capable of parallelizing the operation better than we could ever do by hand (see the discussion at <a href="https://stackoverflow.com/questions/38000663/multiplication-of-large-arrays-in-python">multiplication of large arrays in python</a> )</p>
<p>In this case the key would be simply to ensure that the multiplication is all done in a wholesale array operation rather than a Python <code>for</code>-loop:</p>
<pre><code>test2 = x[n.newaxis, :] * y[:, n.newaxis]
n.abs( test - test2 ).max() # verify equivalence to mult(): output should be 0.0, or very small reflecting floating-point precision limitations
</code></pre>
<p>[If you actually wanted to spread this across multiple separate CPUs, that's a different matter, but the question seems to suggest a single (multi-core) CPU.]</p>
<hr>
<p>OK, bearing the above in mind: let's suppose you want to parallelize an operation more complicated than just <code>mult()</code>. Let's assume you've tried hard to optimize your operation into wholesale array operations that <code>numpy</code> can parallelize itself, but your operation just isn't susceptible to this. In that case, you can use a shared-memory <code>multiprocessing.Array</code> created with <code>lock=False</code>, and <code>multiprocessing.Pool</code> to assign processes to address non-overlapping chunks of it, divided up over the <code>y</code> dimension (and also simultaneously over <code>x</code> if you want). An example listing is provided below. Note that this approach does not explicitly do exactly what you specify (club the results together and append them into a single array). Rather, it does something more efficient: multiple processes simultaneously assemble their portions of the answer in non-overlapping portions of shared memory. Once done, no collation/appending is necessary: we just read out the result.</p>
<pre><code>import os, numpy, multiprocessing, itertools
SHARED_VARS = {} # the best way to get multiprocessing.Pool to send shared multiprocessing.Array objects between processes is to attach them to something global - see http://stackoverflow.com/questions/1675766/
def operate( slices ):
# grok the inputs
yslice, xslice = slices
y, x, r = get_shared_arrays('y', 'x', 'r')
# create views of the appropriate chunks/slices of the arrays:
y = y[yslice]
x = x[xslice]
r = r[yslice, xslice]
# do the actual business
for i in range(len(r)):
r[i] = y[i] * x # If this is truly all operate() does, it can be parallelized far more efficiently by numpy itself.
# But let's assume this is a placeholder for something more complicated.
return 'Process %d operated on y[%s] and x[%s] (%d x %d chunk)' % (os.getpid(), slicestr(yslice), slicestr(xslice), y.size, x.size)
def check(y, x, r):
    r2 = x[numpy.newaxis, :] * y[:, numpy.newaxis] # obviously this check will only be valid if operate() literally does only multiplication (in which case this whole business is unnecessary)
print( 'max. abs. diff. = %g' % numpy.abs(r - r2).max() )
return y, x, r
def slicestr(s):
return ':'.join( '' if x is None else str(x) for x in [s.start, s.stop, s.step] )
def m2n(buf, shape, typecode, ismatrix=False):
"""
Return a numpy.array VIEW of a multiprocessing.Array given a
handle to the array, the shape, the data typecode, and a boolean
flag indicating whether the result should be cast as a matrix.
"""
a = numpy.frombuffer(buf, dtype=typecode).reshape(shape)
if ismatrix: a = numpy.asmatrix(a)
return a
def n2m(a):
"""
Return a multiprocessing.Array COPY of a numpy.array, together
with shape, typecode and matrix flag.
"""
if not isinstance(a, numpy.ndarray): a = numpy.array(a)
return multiprocessing.Array(a.dtype.char, a.flat, lock=False), tuple(a.shape), a.dtype.char, isinstance(a, numpy.matrix)
def new_shared_array(shape, typecode='d', ismatrix=False):
"""
Allocate a new shared array and return all the details required
to reinterpret it as a numpy array or matrix (same order of
output arguments as n2m)
"""
typecode = numpy.dtype(typecode).char
return multiprocessing.Array(typecode, int(numpy.prod(shape)), lock=False), tuple(shape), typecode, ismatrix
def get_shared_arrays(*names):
return [m2n(*SHARED_VARS[name]) for name in names]
def init(*pargs, **kwargs):
SHARED_VARS.update(pargs, **kwargs)
if __name__ == '__main__':
ylen = 1000
xlen = 2000
init( y=n2m(range(ylen)) )
init( x=n2m(numpy.random.rand(xlen)) )
init( r=new_shared_array([ylen, xlen], float) )
print('Master process ID is %s' % os.getpid())
#print( operate([slice(None), slice(None)]) ); check(*get_shared_arrays('y', 'x', 'r')) # local test
pool = multiprocessing.Pool(initializer=init, initargs=SHARED_VARS.items())
yslices = [slice(0,333), slice(333,666), slice(666,None)]
xslices = [slice(0,1000), slice(1000,None)]
#xslices = [slice(None)] # uncomment this if you only want to divide things up in the y dimension
reports = pool.map(operate, itertools.product(yslices, xslices))
print('\n'.join(reports))
y, x, r = check(*get_shared_arrays('y', 'x', 'r'))
</code></pre>
| 6 |
2016-10-04T15:30:13Z
|
[
"python",
"arrays",
"multithreading",
"multiprocessing"
] |
Converting python dictionary to unique key-value pairs
| 39,611,077 |
<p>I want to convert a python dictionary to a list which contains all possible key-value pair. For example, if the dict is like:</p>
<pre><code>{
"x": {
"a1": { "b": {
"c1": { "d": { "e1": {}, "e2": {} } },
"c2": { "d": { "e3": {}, "e4": {} } }
}
},
"a2": { "b": {
"c3": { "d": { "e1": {}, "e5": {} } },
"c4": { "d": { "e6": {} } }
}
}
}
}
</code></pre>
<p>I would like to get a list like:</p>
<pre><code>[
{ "x": "a1", "b": "c1", "d": "e1" },
{ "x": "a1", "b": "c1", "d": "e2" },
{ "x": "a1", "b": "c2", "d": "e3" },
{ "x": "a1", "b": "c2", "d": "e4" },
{ "x": "a2", "b": "c3", "d": "e1" },
{ "x": "a2", "b": "c3", "d": "e5" },
{ "x": "a2", "b": "c4", "d": "e6" }
]
</code></pre>
<p>I am struggling to write a recursive function.
I have written this, but it is not working</p>
<pre><code>def get_list(groups, partial_row):
row = []
for k, v in groups.items():
if isinstance(v, dict):
for k2, v2 in v.items():
partial_row.update({k: k2})
if isinstance(v2, dict):
row.extend(get_list(v2, partial_row))
else:
row.append(partial_row)
return row
</code></pre>
| -2 |
2016-09-21T08:18:12Z
| 39,612,032 |
<pre><code>a = {
"x": {
"a1": { "b": {
"c1": { "d": { "e1": {}, "e2": {} } },
"c2": { "d": { "e3": {}, "e4": {} } }
}
},
"a2": { "b": {
"c3": { "d": { "e1": {}, "e5": {} } },
"c4": { "d": { "e6": {} } }
}
}
}
}
def print_dict(d, depth, *arg):
if type(d) == dict and len(d):
for key in d:
if not len(arg):
new_arg = key
else:
new_arg = arg[0] + (': ' if depth % 2 else ', ') + key
print_dict(d[key], depth+1, new_arg)
else:
print(arg[0])
print_dict(a, depth=0)
</code></pre>
<p>Result:</p>
<pre><code>x: a1, b: c1, d: e1
x: a1, b: c1, d: e2
x: a1, b: c2, d: e4
x: a1, b: c2, d: e3
x: a2, b: c4, d: e6
x: a2, b: c3, d: e1
x: a2, b: c3, d: e5
</code></pre>
| 0 |
2016-09-21T09:03:32Z
|
[
"python",
"list",
"dictionary",
"flatten"
] |
Converting python dictionary to unique key-value pairs
| 39,611,077 |
<p>I want to convert a python dictionary to a list which contains all possible key-value pair. For example, if the dict is like:</p>
<pre><code>{
"x": {
"a1": { "b": {
"c1": { "d": { "e1": {}, "e2": {} } },
"c2": { "d": { "e3": {}, "e4": {} } }
}
},
"a2": { "b": {
"c3": { "d": { "e1": {}, "e5": {} } },
"c4": { "d": { "e6": {} } }
}
}
}
}
</code></pre>
<p>I would like to get a list like:</p>
<pre><code>[
{ "x": "a1", "b": "c1", "d": "e1" },
{ "x": "a1", "b": "c1", "d": "e2" },
{ "x": "a1", "b": "c2", "d": "e3" },
{ "x": "a1", "b": "c2", "d": "e4" },
{ "x": "a2", "b": "c3", "d": "e1" },
{ "x": "a2", "b": "c3", "d": "e5" },
{ "x": "a2", "b": "c4", "d": "e6" }
]
</code></pre>
<p>I am struggling to write a recursive function.
I have written this, but it is not working</p>
<pre><code>def get_list(groups, partial_row):
row = []
for k, v in groups.items():
if isinstance(v, dict):
for k2, v2 in v.items():
partial_row.update({k: k2})
if isinstance(v2, dict):
row.extend(get_list(v2, partial_row))
else:
row.append(partial_row)
return row
</code></pre>
| -2 |
2016-09-21T08:18:12Z
| 39,612,229 |
<p>Alternative solution:</p>
<pre><code>from pprint import pprint
dic = {
"x": {
"a1": { "b": {
"c1": { "d": { "e1": {}, "e2": {} } },
"c2": { "d": { "e3": {}, "e4": {} } }
}
},
"a2": { "b": {
"c3": { "d": { "e1": {}, "e5": {} } },
"c4": { "d": { "e6": {} } }
}
}
}
}
def rec(dic, path=[], all_results=[]):
if not dic:
# No items in the dictionary left, add the path
# up to this point to all_results
# This is based on the assumption that there is an even
        # number of items in the path, otherwise your output format
# makes no sense
even_items = path[::2]
odd_items = path[1::2]
result = dict(zip(even_items, odd_items))
all_results.append(result)
return all_results
for key in dic:
# Make a copy of the current path
path_cp = list(path)
path_cp.append(key)
all_results = rec(dic[key], path_cp, all_results)
return all_results
results = rec(dic)
pprint(results)
</code></pre>
<p>Output:</p>
<pre><code>[{'b': 'c2', 'd': 'e4', 'x': 'a1'},
{'b': 'c2', 'd': 'e3', 'x': 'a1'},
{'b': 'c1', 'd': 'e1', 'x': 'a1'},
{'b': 'c1', 'd': 'e2', 'x': 'a1'},
{'b': 'c3', 'd': 'e5', 'x': 'a2'},
{'b': 'c3', 'd': 'e1', 'x': 'a2'},
{'b': 'c4', 'd': 'e6', 'x': 'a2'}]
</code></pre>
| 1 |
2016-09-21T09:12:41Z
|
[
"python",
"list",
"dictionary",
"flatten"
] |
Converting python dictionary to unique key-value pairs
| 39,611,077 |
<p>I want to convert a python dictionary to a list which contains all possible key-value pair. For example, if the dict is like:</p>
<pre><code>{
"x": {
"a1": { "b": {
"c1": { "d": { "e1": {}, "e2": {} } },
"c2": { "d": { "e3": {}, "e4": {} } }
}
},
"a2": { "b": {
"c3": { "d": { "e1": {}, "e5": {} } },
"c4": { "d": { "e6": {} } }
}
}
}
}
</code></pre>
<p>I would like to get a list like:</p>
<pre><code>[
{ "x": "a1", "b": "c1", "d": "e1" },
{ "x": "a1", "b": "c1", "d": "e2" },
{ "x": "a1", "b": "c2", "d": "e3" },
{ "x": "a1", "b": "c2", "d": "e4" },
{ "x": "a2", "b": "c3", "d": "e1" },
{ "x": "a2", "b": "c3", "d": "e5" },
{ "x": "a2", "b": "c4", "d": "e6" }
]
</code></pre>
<p>I am struggling to write a recursive function.
I have written this, but it is not working</p>
<pre><code>def get_list(groups, partial_row):
row = []
for k, v in groups.items():
if isinstance(v, dict):
for k2, v2 in v.items():
partial_row.update({k: k2})
if isinstance(v2, dict):
row.extend(get_list(v2, partial_row))
else:
row.append(partial_row)
return row
</code></pre>
| -2 |
2016-09-21T08:18:12Z
| 39,612,796 |
<p>You are missing one condition check in the following section in your original solution:</p>
<pre><code>if isinstance(v2, dict):
row.extend(get_list(v2, partial_row))
else:
row.append(partial_row)
</code></pre>
<p>Rather it has to be</p>
<pre><code>if isinstance(v2, dict) and v2:
row.extend(get_list(v2, partial_row))
else:
row.append(partial_row)
</code></pre>
<p>Due to the missing <code>and v2</code>, the <code>else</code> block never gets executed and you are always getting the empty list as a result.</p>
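<p>For completeness: besides the <code>and v2</code> check, each branch also needs its own copy of <code>partial_row</code>. Otherwise every appended row is a reference to the same mutated dict and ends up holding only the last values. A self-contained sketch with both fixes, checked against a small slice of the input:</p>

```python
def get_list(groups, partial_row):
    row = []
    for k, v in groups.items():
        if isinstance(v, dict):
            for k2, v2 in v.items():
                branch = dict(partial_row)   # copy so sibling branches don't share state
                branch[k] = k2
                if isinstance(v2, dict) and v2:
                    row.extend(get_list(v2, branch))
                else:
                    row.append(branch)
    return row

data = {"x": {"a1": {"b": {"c1": {"d": {"e1": {}, "e2": {}}}}}}}
print(get_list(data, {}))
# [{'x': 'a1', 'b': 'c1', 'd': 'e1'}, {'x': 'a1', 'b': 'c1', 'd': 'e2'}]
```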
| 0 |
2016-09-21T09:36:48Z
|
[
"python",
"list",
"dictionary",
"flatten"
] |
Django: Test content of email sent
| 39,611,105 |
<p>I'm sending out an email to users with a generated link, and I wanna write a test that verifies whether the link is correct, but I can't find a way to get the content of the email inside the test.</p>
<p>Is there a way to do that?</p>
<p>If it helps at all, this is how I'm sending the email:</p>
<pre><code>content = template.render(Context({'my_link': my_link}))
subject = _('Email with link')
msg = EmailMultiAlternatives(subject=subject,
from_email='MyWebsite Team <mywebsite@mywebsite.com>',
to=[user.email, ])
msg.attach_alternative(content, 'text/html')
msg.send()
</code></pre>
| 0 |
2016-09-21T08:20:04Z
| 39,611,137 |
<p>The docs have <a href="https://docs.djangoproject.com/en/1.10/topics/testing/tools/#email-services" rel="nofollow">an entire section</a> on testing emails: during tests Django swaps in a local-memory backend, so every sent message lands in <code>django.core.mail.outbox</code>. The HTML content you attached via <code>attach_alternative</code> is available as <code>mail.outbox[0].alternatives</code>, a list of <code>(content, mimetype)</code> tuples.</p>
<pre><code>self.assertEqual(mail.outbox[0].subject, 'Email with link')
self.assertIn(my_link, mail.outbox[0].alternatives[0][0])  # the rendered HTML body
</code></pre>
| 1 |
2016-09-21T08:21:54Z
|
[
"python",
"django",
"django-testing"
] |
Single producer to multi consumers (Same consumer group)
| 39,611,124 |
<p>I've tried sending messages from a single producer to 2 different consumers with <strong>DIFFERENT</strong> consumer group ids. The result is that both consumers are able to read the complete messages (both consumers get the same messages). But I would like to ask: is it possible for these 2 consumers to read different messages while setting them under the <strong>SAME</strong> consumer group name? </p>
| 0 |
2016-09-21T08:21:25Z
| 39,652,029 |
<p>I found the answer already: just make sure the number of partitions is greater than one when creating the new topic. Within a single consumer group, each partition is consumed by at most one consumer, so with multiple partitions the messages get split between the two consumers. </p>
| 0 |
2016-09-23T03:19:29Z
|
[
"python",
"producer-consumer",
"python-kafka"
] |
Trouble freezing (with PyInstaller) python source including multiprocessing module in Solaris
| 39,611,127 |
<p>I'm trying to freeze and distribute among my Solaris11 machines the following python code which makes use of multiprocessing module:</p>
<pre><code>import multiprocessing
def f(name):
print 'hello', name
if __name__ == '__main__':
p = multiprocessing.Process(target=f, args=('fer',))
p.start()
p.join()
</code></pre>
<p>However, even if the executable works fine under the compiler machine (Solaris11) ...</p>
<pre><code>[root@zgv-wodbuild01 pyinstaller]# testfer/dist/testfer
hello fer
[root@zgv-wodbuild01 pyinstaller]# echo $?
0
</code></pre>
<p>...complains about multiprocessing library in any other machine (Solaris11):</p>
<pre><code>root@dest01a # ./testfer.r004
Traceback (most recent call last):
File "testfer.py", line 1, in <module>
File "/root/pyinstaller/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
File "multiprocessing/__init__.py", line 84, in <module>
File "/root/pyinstaller/PyInstaller/loader/pyimod03_importers.py", line 546, in load_module
ImportError: ld.so.1: testfer.r004: fatal: relocation error: file /tmp/_MEIlBa4uh/_multiprocessing.so: symbol __xnet_sendmsg: referenced symbol not found
Failed to execute script testfer
root@dest01a # echo $?
255
</code></pre>
<p>PyInstaller command has been launched with <code>--onefile</code> flag so every needed library should be included inside final ELF file (multiprocessing too). But I have also tried to include explicitly multiprocessing library by editing <code>hidden-import</code> section in the .spec file.</p>
<p>I have also tried to freeze the source in an older Solaris 10 machine to ensure backwards compatibility. Compiling the PyInstaller bootloaders both with and without special LDFLAGS like <code>-lrt</code>. Using <code>--debug</code> flag. But so far nothing worked nor gave me a clue.</p>
<p>Apparently the binary is properly built for the right architecture and there are no library issues:</p>
<pre><code>root@dest01a # file testfer.r004
testfer.r004: ELF 32-bit MSB executable SPARC32PLUS Version 1, V8+ Required, UltraSPARC1 Extensions Required, dynamically linked, stripped
root@dest01a # crle
Platform: 32-bit MSB SPARC
Default Library Path (ELF): /lib:/usr/lib (system default)
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
root@dest01a # ldd -r testfer.r004
libdl.so.1 => /lib/libdl.so.1
libm.so.2 => /lib/libm.so.2
libz.so.1 => /lib/libz.so.1
librt.so.1 => /lib/librt.so.1
libc.so.1 => /lib/libc.so.1
root@dest01a # ldd -r /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libsoftcrypto.so.1 => /lib/libsoftcrypto.so.1
libelf.so.1 => /lib/libelf.so.1
libcryptoutil.so.1 => /lib/libcryptoutil.so.1
libz.so.1 => /lib/libz.so.1
libm.so.2 => /lib/libm.so.2
</code></pre>
<p>So I ran out of ideas.
Thanks in advance for any insight.</p>
<p><strong>Update</strong>: Thanks to Andrew Henle's comments I've been able to make some progress. I have recompiled the PyInstaller bootloaders again, but this time setting the LDFLAGS environment variable to <code>LDFLAGS="-lsocket -lrt"</code>.</p>
<p>Now it's not complaining about <code>__xnet</code> symbols anymore, but about the symbol <code>get_fips_mode</code> instead. As you can see:</p>
<pre><code>root@lnrep01a # ./testfer.r004
ld.so.1: testfer.r004: fatal: relocation error: file /lib/libsoftcrypto.so.1: symbol get_fips_mode: referenced symbol not found
Killed
</code></pre>
<p>So probably I just need to add some extra flags to compiling process. I will look for them across the internet but if someone knows what is missing, that would be more than welcome.</p>
| 1 |
2016-09-21T08:21:28Z
| 39,704,878 |
<p>Turned out it wasn't a gcc bug. After analysing both environments in depth, I realized the <code>/lib/libsoftcrypto.so.1</code> libraries differ a lot. As a matter of fact, the library on the compiler machine contains the <code>get_fips_mode</code> symbol...</p>
<pre><code>[root@zgv-wodbuild01 pyinstaller]# nm /lib/libsoftcrypto.so.1 | grep -i get_fips_mode
[3438] | 623016| 212|FUNC |GLOB |0 |18 |get_fips_mode
</code></pre>
<p>...while the destination machine does not:</p>
<pre><code>root@dest01a # nm /lib/libsoftcrypto.so.1 | grep -i get_fips_mode
[3755] | 0| 0|FUNC |GLOB |0 |UNDEF |get_fips_mode
</code></pre>
<p>After trying another compiler machine with similar library version, compiling the pyinstaller bootloaders with <code>-lsocket</code> ldflag was enough:</p>
<pre><code>root@dest01a # ./testferv3
hello fer
root@dest01a # echo $?
0
</code></pre>
| 0 |
2016-09-26T13:56:58Z
|
[
"python",
"multiprocessing",
"solaris",
"pyinstaller"
] |
pandas memory consumption hdf file grouping
| 39,611,197 |
<p>I wrote the following script, but I have an issue with memory consumption: pandas allocates more than 30 GB of RAM, while the data files sum to roughly 18 GB.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import time
mean_wo = pd.DataFrame()
mean_w = pd.DataFrame()
std_w = pd.DataFrame()
std_wo = pd.DataFrame()
start_time=time.time() #taking current time as starting time
data_files=['2012.h5','2013.h5','2014.h5','2015.h5', '2016.h5', '2008_2011.h5']
for data_file in data_files:
print data_file
df = pd.read_hdf(data_file)
grouped = df.groupby('day')
mean_wo_tmp=grouped['Significance_without_muons'].agg([np.mean])
mean_w_tmp=grouped['Significance_with_muons'].agg([np.mean])
std_wo_tmp=grouped['Significance_without_muons'].agg([np.std])
std_w_tmp=grouped['Significance_with_muons'].agg([np.std])
mean_wo = pd.concat([mean_wo, mean_wo_tmp])
mean_w = pd.concat([mean_w, mean_w_tmp])
std_w = pd.concat([std_w,std_w_tmp])
std_wo = pd.concat([std_wo,std_wo_tmp])
print mean_wo.info()
print mean_w.info()
del df, grouped, mean_wo_tmp, mean_w_tmp, std_w_tmp, std_wo_tmp
std_wo=std_wo.reset_index()
std_w=std_w.reset_index()
mean_wo=mean_wo.reset_index()
mean_w=mean_w.reset_index()
#setting the field day as date
std_wo['day']= pd.to_datetime(std_wo['day'], format='%Y-%m-%d')
std_w['day']= pd.to_datetime(std_w['day'], format='%Y-%m-%d')
mean_w['day']= pd.to_datetime(mean_w['day'], format='%Y-%m-%d')
mean_wo['day']= pd.to_datetime(mean_wo['day'], format='%Y-%m-%d')
</code></pre>
<p>So, does someone have an idea how to decrease the memory consumption?</p>
<p>Cheers,</p>
| 2 |
2016-09-21T08:25:05Z
| 39,611,770 |
<p>I'd do something like this<br>
<strong><em>Solution</em></strong> </p>
<pre><code>data_files=['2012.h5', '2013.h5', '2014.h5', '2015.h5', '2016.h5', '2008_2011.h5']
cols = ['Significance_without_muons', 'Significance_with_muons']
def agg(data_file):
return pd.read_hdf(data_file).groupby('day')[cols].agg(['mean', 'std'])
big_df = pd.concat([agg(fn) for fn in data_files], axis=1, keys=data_files)
mean_wo_tmp = big_df.xs(('Significance_without_muons', 'mean'), axis=1, level=[1, 2])
mean_w_tmp = big_df.xs(('Significance_with_muons', 'mean'), axis=1, level=[1, 2])
std_wo_tmp = big_df.xs(('Significance_without_muons', 'std'), axis=1, level=[1, 2])
std_w_tmp = big_df.xs(('Significance_with_muons', 'mean'), axis=1, level=[1, 2])
del big_df
</code></pre>
<hr>
<p><strong><em>Setup</em></strong> </p>
<pre><code>data_files=['2012.h5', '2013.h5', '2014.h5', '2015.h5', '2016.h5', '2008_2011.h5']
cols = ['Significance_without_muons', 'Significance_with_muons']
np.random.seed([3,1415])
data_df = pd.DataFrame(np.random.rand(1000, 2), columns=cols)
data_df['day'] = np.random.choice(list('ABCDEFG'), 1000)
for fn in data_files:
data_df.to_hdf(fn, 'day', append=False)
</code></pre>
<p><strong><em>Run Above Solution</em></strong><br>
Then</p>
<pre><code>mean_wo_tmp
</code></pre>
<p><a href="http://i.stack.imgur.com/iHKHj.png" rel="nofollow"><img src="http://i.stack.imgur.com/iHKHj.png" alt="enter image description here"></a></p>
| 2 |
2016-09-21T08:51:43Z
|
[
"python",
"pandas",
"ram"
] |
Use of Parameter and Signature
| 39,611,265 |
<p>I am a relatively new Python learner. So, while going through different coding techniques, I came across this:</p>
<pre><code>from inspect import Parameter, Signature
def make_signature(names):
return Signature(Parameter(name, Parameter.POSITIONAL_OR_KEYWORD) for name in names)
class Structure:
list_fields = []
def __init__(self, *args):
for name, val in zip(self.list_fields, args):
setattr(self, name, val)
class Stock(Structure):
__signature__ = make_signature(['name', 'shares', 'price'])
#list_fields = ['name', 'shares', 'price']
class Point(Structure):
list_fields = ['x', 'y']
obj2=Point(20,40)
obj1=Stock('googl', 100, 8000)
print(obj1.name)
</code></pre>
<p>I understand the <code>Structure</code> class and its integration with the <code>Point</code> class, which inherits from the <code>Structure</code> class and hence its <code>__init__</code> method. But when I create an object of the <code>Point</code> class it doesn't support <strong>positional arguments</strong>, while the <code>Stock</code> class object does support the feature.</p>
<ul>
<li>Can anyone please explain to me why & how this happens?</li>
<li>When to use Parameter, Signature?</li>
<li>Also how is this related to meta programming?</li>
<li>Some more examples like this.</li>
<li>What is the use of the <code>Parameter</code> class in the <code>make_signature</code> function, and what is <code>make_signature</code> doing?</li>
<li>The flow of the program, i.e. which function is returning to whom and vice versa. To my knowledge the <code>Stock</code> class and <code>Point</code> class are calling the <code>Structure</code> class, but when does the <code>make_signature</code> method come in?</li>
</ul>
<p>I tried to read some documentation about <code>Signature</code>, but those examples aren't of this kind and are too heavy for me; I keep getting lost backtracking through the code documentation. I also couldn't find any good explanatory documentation on metaprogramming in Python.</p>
| 2 |
2016-09-21T08:28:50Z
| 39,612,065 |
<blockquote>
<p>Can anyone please explain to me why & how this happens?</p>
</blockquote>
<p>Both classes <em>only</em> accept positional arguments as dictated by <code>*args</code> in <code>Structure.__init__</code>:</p>
<pre><code>s = Stock('pos_arg1', 'pos_arg2', 'pos_arg3')
p = Point('pos_arg1', 'pos_arg2', 'pos_arg3')
</code></pre>
<p>The difference is that <code>Stock</code> doesn't actually <em>set</em> any arguments because, since you commented out <code>list_fields</code>, <code>Structure.__init__</code> will use <code>Structure.list_fields</code> which is empty. That is why trying to access <code>name</code> on a <code>Stock</code> instance raises an <code>AttributeError</code>.</p>
<p>In both cases, <code>list_fields</code> <strong>limits</strong> what arguments can be set. For the <code>Point</code> instance in the previous snippet, <code>x</code> will equal <code>pos_arg1</code> and <code>y</code> will equal <code>pos_arg2</code>; <code>pos_arg3</code> is essentially tossed. This is due to <code>zip</code> which builds tuples until one of the iterables is exhausted:</p>
<pre><code>for i, j in zip(['x', 'y'], ['pos_arg1', 'pos_arg2', 'pos_arg3']):
print(i, j)
</code></pre>
<p>Prints:</p>
<pre><code>x pos_arg1
y pos_arg2
</code></pre>
<p>When an empty list is supplied, it doesn't even loop. This is what happens when you initialize <code>Stock</code>, <code>Structure.list_fields = []</code> is used:</p>
<pre><code>for i, j in zip([], ['pos_arg1', 'pos_arg2', 'pos_arg3']):
print(i, j)
</code></pre>
<p>Prints nothing so no <code>setattr</code>s are going to get called.</p>
<blockquote>
<p>When to use Parameter, Signature?</p>
</blockquote>
<p>When you want to support further introspection of your classes (or, callables in the general case) you could add a <code>__signature__</code> attribute to your class (as is done with <code>Stock</code>) and get it picked up by tools like <code>inspect.signature</code>, i.e:</p>
<pre><code>inspect.signature(Stock)
Out[16]: <Signature (name, shares, price)>
inspect.signature(Point)
Out[17]: <Signature (*args)>
</code></pre>
<p>Signature tries to see if the object has a <code>object.__signature__</code> and if so constructs a representation of the signature when called.</p>
<p>Furthermore you could <code>bind</code> a signature to self and get it to support <code>POSITIONAL_OR_KEYWORD</code> arguments.</p>
<p>In general this will only present itself as requirement in very few cases. In short: you'll know if you'll need it. </p>
<blockquote>
<p>Also how is this related to meta programming? </p>
</blockquote>
<p>This specific example isn't related. You could enhance it with meta programming as you'll see in the presentation I'll link.</p>
<blockquote>
<p>Some more examples like this.</p>
</blockquote>
<p>This is too broad of a request. But your example comes from david beazly's metaprogramming presentation, <a href="http://pyvideo.org/pycon-us-2013/python-3-metaprogramming.html" rel="nofollow">here's the presentation</a>.</p>
<blockquote>
<p>To my knowledge Stock function and Point function is calling Structure class but where does the make_signature method comes in?</p>
</blockquote>
<p>Both <code>Stock</code> and <code>Point</code> use <code>Structure.__init__</code> which will populate the instance dictionary with attributes defined in <code>list_fields</code>. </p>
<p><code>__signature__ = make_signature([...])</code> is executing <em>when the class is being created</em>, Python executes the body of the <code>class</code> when it encounters it. <code>make_signature</code> is going to get called and create a <code>Signature</code> object and the assignment to <code>__signature__</code> is going to be made.</p>
| 2 |
2016-09-21T09:05:00Z
|
[
"python",
"python-3.x",
"metaprogramming"
] |
How do I get data sensor data from Microsoft band 2 to a Raspberry pi without SDK? (To a programming language)
| 39,611,281 |
<p>How can I get sensor data from the Microsoft Band 2 to a Raspberry Pi 3 (Raspbian Jessie)? The data should be available in a higher level programming language such as Python, Java or similar.</p>
<p>I want to be able to run a program (Java, Python or similar) that automatically receives data for processing when the Band is in range.</p>
<p>It is ok to pair the MS Band with a supported phone (and app) first in order to get past the setup on the Band. It is also ok to run some command-line tools on the Raspberry in order to pair the devices the first time (keys etc).</p>
<p>I have managed to pair and connect to the device using command-line tool:
sudo bluetoothctl -a</p>
<p>But I can't create any connection from Python using BluePy or following Tony DiCola's tutorial:
<a href="https://learn.adafruit.com/bluefruit-le-python-library/installation" rel="nofollow">https://learn.adafruit.com/bluefruit-le-python-library/installation</a></p>
<p>My guess is that the Bluetooth LE privacy is messing things up?</p>
<p>Thanks for your time!</p>
| -1 |
2016-09-21T08:29:35Z
| 39,612,102 |
<p>You should take a look at the documentation which for some reason is all jammed into a PDF <a href="https://developer.microsoftband.com/Content/docs/Microsoft%20Band%20SDK.pdf" rel="nofollow">here</a>.</p>
<p>It appears at the moment the only supported ways to get data of the band is using the SDK which only has support for iOS, Windoze and Andriod. </p>
| 0 |
2016-09-21T09:06:32Z
|
[
"java",
"python",
"raspberry-pi3",
"microsoft-band"
] |
How do I get data sensor data from Microsoft band 2 to a Raspberry pi without SDK? (To a programming language)
| 39,611,281 |
<p>How can I get sensor data from the Microsoft Band 2 to a Raspberry Pi 3 (Raspbian Jessie)? The data should be available in a higher level programming language such as Python, Java or similar.</p>
<p>I want to be able to run a program (Java, Python or similar) that automatically receive data for processing when the Band is in range.</p>
<p>It is ok to pair the MS Band with a supported phone (and app) first in order to get past the setup on the Band. It is also ok to run some command-line tools on the Raspberry in order to pair the devices the first time (keys etc).</p>
<p>I have managed to pair and connect to the device using command-line tool:
sudo bluetoothctl -a</p>
<p>But I can't create any connection from Python using BluePy or following Tony DiCola's tutorial:
<a href="https://learn.adafruit.com/bluefruit-le-python-library/installation" rel="nofollow">https://learn.adafruit.com/bluefruit-le-python-library/installation</a></p>
<p>My guess is that the Bluetooth LE privacy is messing things up?</p>
<p>Thanks for your time!</p>
| -1 |
2016-09-21T08:29:35Z
| 40,037,325 |
<p>I'm trying to achieve a similar result (direct connection via BLE without SDK on Windows/other OS) but even if I managed to connect to the band "manually" it is impossible for me to understand the GATT profiles to read the data from sensors. I can't find any (unofficial) documentation on the web to read data directly.</p>
| 0 |
2016-10-14T07:25:16Z
|
[
"java",
"python",
"raspberry-pi3",
"microsoft-band"
] |
Open files older than 3 days of date stamp in file name - Python 2.7
| 39,611,418 |
<p>** Problem **
I'm trying to open (in python) files older than 3 days of the date stamp which is in the current name. Example: 2016_08_18_23_10_00 - JPN - MLB - Mickeymouse v Burgerface.ply. So far I can create a date variable, however I do not know how to search for this variable in a filename. I presume I need to convert it to a string first?</p>
<pre><code>from datetime import datetime, timedelta
import os
import re
path = "C:\Users\michael.lawton\Desktop\Housekeeper"
## create variable d where current date time is subtracted by 3 days ##
days_to_subtract = 3
d = datetime.today() - timedelta(days=days_to_subtract)
print d
## open file in dir where date in filename = d or older ##
for filename in os.listdir(path):
    if re.match(d, filename):
        with open(os.path.join(path, filename), 'r') as f:
            print line,
</code></pre>
<p>Any help will be much appreciated</p>
| 0 |
2016-09-21T08:36:30Z
| 39,611,676 |
<p>You can use <code>strptime</code> for this. It will convert your string (assuming it is correctly formatted) into a datetime object which you can use to compare if your file is older than 3 days based on the filename:</p>
<pre><code>from datetime import datetime, timedelta
...
lines = []
for filename in os.listdir(path):
    date_filename = datetime.strptime(filename.split(" ")[0], '%Y_%m_%d_%H_%M_%S')
    if date_filename < datetime.now() - timedelta(days=days_to_subtract):
        with open(os.path.join(path, filename), 'r') as f:
            lines.extend(f.readlines())  # put all lines into array
</code></pre>
<p>If the filename is <code>2016_08_18_23_10_00 - JPN - MLB - Mickeymouse v Burgerface.ply</code> the datetime part will be extracted with <code>filename.split(" ")[0]</code>. Then we can use that to check if it is older than three days using <code>datetime.timedelta</code></p>
| 0 |
2016-09-21T08:47:49Z
|
[
"python",
"python-2.7",
"datestamp"
] |
Open files older than 3 days of date stamp in file name - Python 2.7
| 39,611,418 |
<p>** Problem **
I'm trying to open (in python) files older than 3 days of the date stamp which is in the current name. Example: 2016_08_18_23_10_00 - JPN - MLB - Mickeymouse v Burgerface.ply. So far I can create a date variable, however I do not know how to search for this variable in a filename. I presume I need to convert it to a string first?</p>
<pre><code>from datetime import datetime, timedelta
import os
import re
path = "C:\Users\michael.lawton\Desktop\Housekeeper"
## create variable d where current date time is subtracted by 3 days ##
days_to_subtract = 3
d = datetime.today() - timedelta(days=days_to_subtract)
print d
## open file in dir where date in filename = d or older ##
for filename in os.listdir(path):
    if re.match(d, filename):
        with open(os.path.join(path, filename), 'r') as f:
            print line,
</code></pre>
<p>Any help will be much appreciated</p>
| 0 |
2016-09-21T08:36:30Z
| 39,622,972 |
<p>To open all files in the given directory that contain a timestamp in their name older than 3 days:</p>
<pre><code>#!/usr/bin/env python2
import os
import time
DAY = 86400 # POSIX day in seconds
three_days_ago = time.time() - 3 * DAY
for filename in os.listdir(dirpath):
    time_string = filename.partition(" ")[0]
    try:
        timestamp = time.mktime(time.strptime(time_string, '%Y_%m_%d_%H_%M_%S'))
    except Exception: # can't get timestamp
        continue
    if timestamp < three_days_ago: # old enough to open
        with open(os.path.join(dirpath, filename)) as file: # assume it is a file
            for line in file:
                print line,
</code></pre>
<p>The code assumes that the timestamps are in the local timezone. It may take DST transitions into account on platforms where C <code>mktime()</code> has access to <a href="https://en.wikipedia.org/wiki/Tz_database" rel="nofollow">the tz database</a> (if it doesn't matter whether the file is 72 or 73 hours old in your case then just ignore this paragraph).</p>
<p>Consider using file metadata such as "the last modification time of a file" instead of extracting the timestamp from its name: <code>timestamp = os.path.getmtime(path)</code>.</p>
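<p>A sketch of that metadata-based variant (same idea as the loop above, but keyed on <code>os.path.getmtime</code> instead of the name-embedded timestamp; <code>dirpath</code> is assumed as before):</p>

```python
import os
import time

def files_older_than(dirpath, days=3):
    # Yield paths of regular files whose modification time is older than
    # `days` days, regardless of what the file is named.
    cutoff = time.time() - days * 86400  # 86400 = POSIX day in seconds
    for filename in os.listdir(dirpath):
        path = os.path.join(dirpath, filename)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            yield path
```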
| 0 |
2016-09-21T17:28:03Z
|
[
"python",
"python-2.7",
"datestamp"
] |
__init__ vs __enter__ in context managers
| 39,611,520 |
<p>As far as I understand, <code>__init__()</code> and <code>__enter__()</code> methods of the context manager are called exactly once each, one after another, leaving no chance for any other code to be executed in between. What is the purpose of separating them into two methods, and what should I put into each?</p>
<p>Edit: sorry, wasn't paying attention to the docs.</p>
<p>Edit 2: actually, the reason I got confused is because I was thinking of <code>@contextmanager</code> decorator. A context manager created using <code>@contextmananger</code> can only be used once (the generator will be exhausted after the first use), so often they are written with the constructor call inside <code>with</code> statement; and if that was the only way to use <code>with</code> statement, my question would have made sense. Of course, in reality, context managers are more general than what <code>@contextmanager</code> can create; in particular context managers can, in general, be reused. I hope I got it right this time?</p>
| 0 |
2016-09-21T08:40:39Z
| 39,611,597 |
<blockquote>
<p>As far as I understand, <code>__init__()</code> and <code>__enter__()</code> methods of the context manager are called exactly once each, one after another, leaving no chance for any other code to be executed in between.</p>
</blockquote>
<p>And your understanding is incorrect. <code>__init__</code> is called when the object is created, <code>__enter__</code> when it is entered with <code>with</code> statement, and these are 2 quite distinct things. Often it is so that the constructor is directly called in <code>with</code> initialization, with no intervening code, but this doesn't have to be the case.</p>
<p>Consider this example:</p>
<pre><code>class Foo:
    def __init__(self):
        print('__init__ called')

    def __enter__(self):
        print('__enter__ called')
        return self

    def __exit__(self, *a):
        print('__exit__ called')

myobj = Foo()

print('\nabout to enter with 1')
with myobj:
    print('in with 1')

print('\nabout to enter with 2')
with myobj:
    print('in with 2')
</code></pre>
<p><code>myobj</code> can be initialized separately and entered in multiple <code>with</code> blocks:</p>
<p>Output:</p>
<pre><code>__init__ called
about to enter with 1
__enter__ called
in with 1
__exit__ called
about to enter with 2
__enter__ called
in with 2
__exit__ called
</code></pre>
<p>Furthermore if <code>__init__</code> and <code>__enter__</code> weren't separated, it wouldn't be possible to even use the following:</p>
<pre><code>def open_etc_file(name):
    return open(os.path.join('/etc', name))

with open_etc_file('passwd'):
    ...
</code></pre>
<p>since the initialization (within <code>open</code>) is clearly separate from <code>with</code> entry.</p>
<hr>
<p>The managers created by <a href="https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager" rel="nofollow"><code>contextlib.manager</code></a> are single-entrant, but they again can be constructed outside the <code>with</code> block. Take the example:</p>
<pre><code>from contextlib import contextmanager

@contextmanager
def tag(name):
    print("<%s>" % name)
    yield
    print("</%s>" % name)
</code></pre>
<p>you can use this as:</p>
<pre><code>def heading(level=1):
    return tag('h{}'.format(level))

my_heading = heading()
print('Below be my heading')
with my_heading:
    print('Here be dragons')
</code></pre>
<p>output:</p>
<pre><code>Below be my heading
<h1>
Here be dragons
</h1>
</code></pre>
<p>However, if you try to reuse the returned object in a second <code>with</code> block, you will get</p>
<pre><code>RuntimeError: generator didn't yield
</code></pre>
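<p>One detail worth adding: the limitation is on the returned manager object, not on the decorated function. Calling the factory again creates a fresh, usable manager each time:</p>

```python
from contextlib import contextmanager

@contextmanager
def tag(name):
    print("<%s>" % name)
    yield
    print("</%s>" % name)

with tag('h1'):   # first, fresh manager
    print('first')
with tag('h1'):   # another fresh manager - no RuntimeError
    print('second')
```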
| 8 |
2016-09-21T08:44:13Z
|
[
"python",
"python-3.x",
"contextmanager"
] |
Select Columns in pandas DF
| 39,611,568 |
<p>Below is my data and I am trying to access a column. It was working fine until yesterday, but now I'm not sure if I am doing something wrong:</p>
<pre><code> DISTRICT;CPE;EQUIPMENT,NR_EQUIPM
0 47;CASTELO BRANCO;17520091VM;101
1 48;CASTELO BRANCO;17520103VV;160
2 49;CASTELO BRANCO;17520103VV;160
</code></pre>
<p>When I try this, it gives me an error:</p>
<pre><code>df = pd.read_csv(archiv, sep=",")
df['EQUIPMENT']
</code></pre>
<p>ERROR:</p>
<blockquote>
<p>KeyError: 'EQUIPMENT'</p>
</blockquote>
<p>Also I am trying this, but it doesn't work either:</p>
<pre><code>df.EQUIPMENT
</code></pre>
<p>ERROR:</p>
<blockquote>
<p>AttributeError: 'DataFrame' object has no attribute 'EQUIPMENT'</p>
</blockquote>
<p>BTW, I am using:</p>
<blockquote>
<p>Python 2.7.12 |Anaconda 4.1.1 (32-bit)| (default, Jun 29 2016,
11:42:13) [MSC v.1500 32 bit (Intel)]</p>
</blockquote>
<p>Any idea?</p>
| 2 |
2016-09-21T08:43:14Z
| 39,611,585 |
<p>You need to change <code>sep</code> to <code>;</code>, because that is the separator used in the <code>csv</code>:</p>
<pre><code>df = pd.read_csv(archiv, sep=";")
</code></pre>
<p>If you check the last column separator, it is <code>,</code>, so you can use both separators - <code>[;,]</code> - but then it is necessary to add the parameter <code>engine='python'</code> because of this warning:</p>
<blockquote>
<p>ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
for index, row in df.iterrows():</p>
</blockquote>
<p>Sample:</p>
<pre><code>import pandas as pd
import io
temp=u"""DISTRICT;CPE;EQUIPMENT,NR_EQUIPM
47;CASTELO BRANCO;17520091VM;101
48;CASTELO BRANCO;17520103VV;160
49;CASTELO BRANCO;17520103VV;160"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep="[;,]", engine='python')
print (df)
DISTRICT CPE EQUIPMENT NR_EQUIPM
0 47 CASTELO BRANCO 17520091VM 101
1 48 CASTELO BRANCO 17520103VV 160
2 49 CASTELO BRANCO 17520103VV 160
</code></pre>
| 2 |
2016-09-21T08:43:52Z
|
[
"python",
"csv",
"pandas",
"multiple-columns",
"separator"
] |
Fill area of polygons with python
| 39,611,644 |
<p>I use this part of a code to plot polygons :</p>
<pre><code>plt.plot(x,y,'k',linewidth=2)
</code></pre>
<p>And I get this image:
<a href="http://i.stack.imgur.com/wqV8n.png" rel="nofollow"><img src="http://i.stack.imgur.com/wqV8n.png" alt="enter image description here"></a></p>
<p>I would like to fill the area of my polygons in blue, but I can't find any properties to do it... How can I do that ?</p>
| 0 |
2016-09-21T08:46:29Z
| 39,611,758 |
<p>You should consider using matplotlib's <a href="http://matplotlib.org/api/patches_api.html" rel="nofollow">Polygon</a> patch.</p>
| 1 |
2016-09-21T08:51:07Z
|
[
"python",
"matplotlib"
] |
Fill area of polygons with python
| 39,611,644 |
<p>I use this part of a code to plot polygons :</p>
<pre><code>plt.plot(x,y,'k',linewidth=2)
</code></pre>
<p>And I get this image:
<a href="http://i.stack.imgur.com/wqV8n.png" rel="nofollow"><img src="http://i.stack.imgur.com/wqV8n.png" alt="enter image description here"></a></p>
<p>I would like to fill the area of my polygons in blue, but I can't find any properties to do it... How can I do that ?</p>
| 0 |
2016-09-21T08:46:29Z
| 39,617,877 |
<p>I finally found the solution: <code>plt.fill()</code> instead of <code>plt.plot()</code>.</p>
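<p>A minimal sketch of that fix (the vertex arrays here are made up; substitute the <code>x</code>, <code>y</code> from the question). <code>plt.fill()</code> draws the blue interior and <code>plt.plot()</code> can still draw the black outline on top:</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

x = [0.0, 1.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 1.0]

plt.fill(x, y, color='blue')                      # blue filled interior
plt.plot(x + x[:1], y + y[:1], 'k', linewidth=2)  # closed black outline
plt.savefig('polygon.png')
```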
| 0 |
2016-09-21T13:24:19Z
|
[
"python",
"matplotlib"
] |
python - beautifulsoup find_all() resulting invalid date
| 39,611,789 |
<p>my code:</p>
<pre><code>import requests
import re
from bs4 import BeautifulSoup
r = requests.get(
"https://www.traveloka.com/hotel/detail?spec=22-9-2016.24-9-2016.2.1.HOTEL.3000010016588.&nc=1474427752464")
data = r.content
soup = BeautifulSoup(data, "html.parser")
ratingdates = soup.find_all("div", {"class": "reviewDate"})
for i in range(0,10):
    print(ratingdates[i].get_text())
</code></pre>
<p>Those code will print "Invalid date". How to get the date?</p>
<p>Additional note:</p>
<p>It seems the solution is using selenium or spynner but I don't know how to use it. Moreover I can't install spynner, it always stuck on installing lxml</p>
| 1 |
2016-09-21T08:52:43Z
| 39,613,919 |
<p>It's really simple if you use Selenium. Here's a basic example with some explanation:</p>
<p>To install selenium run <code>pip install selenium</code></p>
<pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
# set webdriver's browser to Firefox
driver = webdriver.Firefox()
#load page in browser
driver.get(
"https://www.traveloka.com/hotel/detail?spec=22-9-2016.24-9-2016.2.1.HOTEL.3000010016588.&nc=1474427752464")
#Wait 5 seconds after page load so dates are loaded
driver.implicitly_wait(5)
#get page's source
data = driver.page_source
#rest is pretty much the same
soup = BeautifulSoup(data, "html.parser")
ratingdates = soup.find_all("div", {"class": "reviewDate"})
#I changed this bit to always print all dates without range issues
for i in ratingdates:
    print(i.get_text())
</code></pre>
<p>For more on using Selenium take a look at the docs here - <a href="http://selenium-python.readthedocs.io/" rel="nofollow">http://selenium-python.readthedocs.io/</a></p>
<p>If you don't want to get Firefox popping up every time you run the script, you could use <code>PhantomJS</code> - a lightweight headless browser. After <a href="http://phantomjs.org/download.html" rel="nofollow">downloading</a> and setting it up you can just change <code>driver = webdriver.Firefox()</code> to <code>driver = webdriver.PhantomJS()</code> in the example above.</p>
| 1 |
2016-09-21T10:26:01Z
|
[
"python",
"beautifulsoup"
] |
how can i change a grayscale pixel value by a colored value given the value of the pixel not the coordiante
| 39,611,802 |
<p>Hello, I want to change a grayscale pixel value to a colored value given the value of the pixel, not the coordinate.
I know how to do it given the coordinate:</p>
<pre><code>I = np.dstack([im, im, im])
x = 5
y = 5
I[x, y, :] = [1, 0, 0]
plt.imshow(I, interpolation='nearest' )
</code></pre>
<p>But how can I do it by value? Something like <code>I[im == 10] = [1, 0, 0]</code>
doesn't work.</p>
| 0 |
2016-09-21T08:53:15Z
| 39,613,530 |
<p>I suggest this, although it might be dirty :</p>
<pre><code>while [10,10,10] in I:
    I[I.index([10,10,10])] = [1,0,0]
</code></pre>
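<p>A vectorised alternative (assuming, as in the question, that <code>im</code> is the 2-D grayscale array and <code>I</code> the stacked 3-channel array): NumPy's boolean-mask assignment should recolour every matching pixel in one step, with no Python loop:</p>

```python
import numpy as np

im = np.array([[10, 5],
               [7, 10]], dtype=np.uint8)         # toy grayscale image
I = np.dstack([im, im, im]).astype(float) / 255  # 3-channel float copy

# every pixel whose grayscale value is 10 becomes pure red
I[im == 10] = [1.0, 0.0, 0.0]
print(I[0, 0])
```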
| 0 |
2016-09-21T10:08:10Z
|
[
"python",
"image",
"image-processing",
"grayscale"
] |
Creating a list where a certain char occur a set number of times
| 39,611,916 |
<p>I am creating a list in python 2.7
The list consists of 1's and 0's however I need the 1's to appear randomly in the list and a set amount of times.</p>
<p>Here is a way I found of doing this however can take a long time to create the list</p>
<pre><code>numcor = 0
while numcor != (wordlen):  # wordlen being the set amount of times
    usewrong = []
    for l in list(mymap):
        if l == "L":  # L is my map, telling how long the list needs to be
            use = random.choice((True, False))
            if use == True:
                usewrong.append(0)
            else:
                usewrong.append(1)
                numcor = numcor + 1
</code></pre>
<p>Is there a more efficient way of doing this?</p>
| 1 |
2016-09-21T08:58:46Z
| 39,611,999 |
<p>A simpler way to create the list with <code>0</code>s and <code>1</code>s is:</p>
<pre><code>>>> n, m = 5, 10
>>> [0]*n + [1]*m
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
</code></pre>
<p>where <code>n</code> is the number of <code>0</code>s and <code>m</code> is the number of <code>1</code>s</p>
<p>However if you want <em>the list to be shuffled in random order</em>, you may use <a href="https://docs.python.org/2/library/random.html#random.shuffle" rel="nofollow"><code>random.shuffle()</code></a> as:</p>
<pre><code>>>> from random import shuffle
>>> mylist = [0]*n + [1]*m # n and m are from above example
>>> shuffle(mylist)
>>> mylist
[1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1]
</code></pre>
| 1 |
2016-09-21T09:02:24Z
|
[
"python",
"arrays",
"list",
"python-2.7"
] |
Creating a list where a certain char occur a set number of times
| 39,611,916 |
<p>I am creating a list in python 2.7
The list consists of 1's and 0's however I need the 1's to appear randomly in the list and a set amount of times.</p>
<p>Here is a way I found of doing this however can take a long time to create the list</p>
<pre><code>numcor = 0
while numcor != (wordlen):  # wordlen being the set amount of times
    usewrong = []
    for l in list(mymap):
        if l == "L":  # L is my map, telling how long the list needs to be
            use = random.choice((True, False))
            if use == True:
                usewrong.append(0)
            else:
                usewrong.append(1)
                numcor = numcor + 1
</code></pre>
<p>Is there a more efficient way of doing this?</p>
| 1 |
2016-09-21T08:58:46Z
| 39,612,340 |
<p>Here is a different approach:</p>
<pre><code>from random import *
# create a list full of 0's
ls = [0 for _ in range(10)]
# pick e.g. 3 non-duplicate random indexes in range(len(ls))
random_indexes = sample(range(len(ls)), 3)
# create in-place our random list which contains 3 1's in random indexes
ls = [1 if i in random_indexes else v for i, v in enumerate(ls)]
</code></pre>
<p>The output will be:</p>
<pre><code>>>> ls
[0, 1, 0, 1, 0, 0, 0, 0, 1, 0]
</code></pre>
| 1 |
2016-09-21T09:17:43Z
|
[
"python",
"arrays",
"list",
"python-2.7"
] |
how to get all the elements of a html table with pagination using selenium?
| 39,611,956 |
<p>I have a webpage with a table. The table has pagination with 7 entries per page. I want to get access to all the elements of the table. </p>
<pre><code>table_element = driver.find_element_by_xpath(xpath)
for tr in table_element.find_elements_by_tag_name('tr'):
    print tr.text
</code></pre>
<p>With the above code, I am able to get the elements in the first page only but not the subsequent pages of the same table. How can I get all the elements?</p>
<p><a href="http://i.stack.imgur.com/6158V.png" rel="nofollow">sample image of the table</a></p>
| 0 |
2016-09-21T09:00:11Z
| 39,619,105 |
<p>You will need a loop outside of the code you provided that clicks the next page link (or the equivalent) until the next page link no longer exists. Without any HTML, we can't provide a more specific answer.</p>
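<p>As a hedged sketch only (the page's HTML isn't shown, so both locators below are placeholders you must adapt), the loop could look like this, assuming <code>driver</code> is an already-created Selenium webdriver:</p>

```python
def scrape_all_pages(driver, table_xpath, next_selector="a.next"):
    # Collect the text of every row across all pages of a paginated table.
    rows = []
    while True:
        table = driver.find_element_by_xpath(table_xpath)
        rows.extend(tr.text for tr in table.find_elements_by_tag_name('tr'))
        # find_elements (plural) returns [] instead of raising when absent
        next_links = driver.find_elements_by_css_selector(next_selector)
        if not next_links:
            break              # no next-page link left: last page reached
        next_links[0].click()  # go to the next page of the table
    return rows
```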
| 0 |
2016-09-21T14:14:04Z
|
[
"python",
"python-2.7",
"selenium",
"selenium-webdriver",
"selenium-chromedriver"
] |
Django context variable names for the templates
| 39,611,973 |
<p>EDIT:
I know I can change the names of the variables. My question is in the case that I don't want to do that. I want to know what are all the variables that django generates automatically.</p>
<hr>
<p>I'm doing Django's getting started tutorial and I'm on the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial04/" rel="nofollow">generic views section</a> where at one point it explains:</p>
<blockquote>
<p>In previous parts of the tutorial, the templates have been provided
with a context that contains the question and latest_question_list
context variables. For DetailView the question variable is provided
automatically - since we're using a Django model (Question), Django is
able to determine an appropriate name for the context variable.
However, for ListView, the automatically generated context variable is
question_list.</p>
</blockquote>
<p>My problem is that I don't know how Django determines these "appropriate names". I want to know this for when I write my own template. I would want to know what context variable names to use in such a template.</p>
<p>From what I can understand, if my model is <code>Question</code>, the <code>question</code> context variable would store that question, and the <code>question_list</code> context variable would store every question.</p>
<p>So my doubt is: What other context variables names can I use? and what would they store? I can't seem to find this on the documentation, please redirect me to it if you know where it is.</p>
| 1 |
2016-09-21T09:00:53Z
| 39,612,425 |
<p>I think this default context variable name only applies when dealing with Django's Class Based Views.</p>
<p>E.g. if you are using a DetailView for an Animal model, Django will auto-create a context variable called 'animal' for you to use in the template. I think it also allows the use of 'object'. </p>
<p>Another example is, as you mentioned, the ListView for an Animal model, which would generate a context variable called animal_list. </p>
<p>However, in both of these cases, there are ways to change the default context variable name. If you specify 'context_object_name' in your DetailView, this will be the name you refer to in your template. This will also work for ListViews.</p>
<p>This website has all info on CBVs of all Django versions:</p>
<p><a href="https://ccbv.co.uk/projects/Django/1.9/django.views.generic.detail/DetailView/" rel="nofollow">https://ccbv.co.uk/projects/Django/1.9/django.views.generic.detail/DetailView/</a></p>
| 1 |
2016-09-21T09:20:47Z
|
[
"python",
"django",
"django-templates",
"django-class-based-views"
] |
Django context variable names for the templates
| 39,611,973 |
<p>EDIT:
I know I can change the names of the variables. My question is in the case that I don't want to do that. I want to know what are all the variables that django generates automatically.</p>
<hr>
<p>I'm doing Django's getting started tutorial and I'm on the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial04/" rel="nofollow">generic views section</a> where at one point it explains:</p>
<blockquote>
<p>In previous parts of the tutorial, the templates have been provided
with a context that contains the question and latest_question_list
context variables. For DetailView the question variable is provided
automatically â since weâre using a Django model (Question), Django is
able to determine an appropriate name for the context variable.
However, for ListView, the automatically generated context variable is
question_list.</p>
</blockquote>
<p>My problem is that I don't know how Django determines this "appropriate names". I want to know this for when I write my own template. I would want to know what context variable names to use in such template.</p>
<p>From what I can understand, if my model is <code>Question</code>, the <code>question</code> context variable would store that question, and the <code>question_list</code> context variable would store every question.</p>
<p>So my doubt is: What other context variables names can I use? and what would they store? I can't seem to find this on the documentation, please redirect me to it if you know where it is.</p>
| 1 |
2016-09-21T09:00:53Z
| 39,612,452 |
<p>You can change question_list to something else by using <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-multiple-object/#django.views.generic.list.MultipleObjectMixin.context_object_name" rel="nofollow">context_object_name</a>. This isn't explained all that well in that part of the documentation, but ...</p>
<blockquote>
<p>Return the context variable name that will be used to contain the list
of data that this view is manipulating. If object_list is a queryset
of Django objects and context_object_name is not set, the context name
will be the model_name of the model that the queryset is composed
from, with postfix '_list' appended. For example, the model Article
would have a context object named article_list.</p>
</blockquote>
<p>is given under <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-multiple-object/#django.views.generic.list.MultipleObjectMixin.get_context_object_name" rel="nofollow">get_context_object_name</a> method</p>
<p>This is what that method's <a href="https://github.com/django/django/blob/master/django/views/generic/list.py#L115" rel="nofollow">code</a> looks like; it ought to clear up all doubts:</p>
<pre><code>def get_context_object_name(self, object_list):
    """
    Get the name of the item to be used in the context.
    """
    if self.context_object_name:
        return self.context_object_name
    elif hasattr(object_list, 'model'):
        return '%s_list' % object_list.model._meta.model_name
    else:
        return None
</code></pre>
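<p>To see the effect of that rule without a Django project, the naming logic can be mimicked in a few lines (a simplified stand-in, not Django's actual code):</p>

```python
def default_context_name(context_object_name, model_name=None):
    # An explicit context_object_name always wins; otherwise the default
    # is "<model_name>_list"; with neither, there is no named variable.
    if context_object_name:
        return context_object_name
    elif model_name is not None:
        return '%s_list' % model_name
    else:
        return None

print(default_context_name(None, 'question'))                # question_list
print(default_context_name('latest_questions', 'question'))  # latest_questions
```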
| 1 |
2016-09-21T09:21:30Z
|
[
"python",
"django",
"django-templates",
"django-class-based-views"
] |
Not able to convert cassandra blob/bytes string to integer
| 39,611,995 |
<p>I have a column-family/table in cassandra-3.0.6 which has a column named "value" which is defined as a blob data type.</p>
<p>CQLSH query <code>select * from table limit 2;</code> returns me:</p>
<blockquote>
<p>id | name | value</p>
<p>id_001 | john | 0x010000000000000000</p>
<p>id_002 | terry | 0x044097a80000000000</p>
</blockquote>
<p>If I read this value using cqlengine(Datastax Python Driver), I get the output something like:</p>
<blockquote>
<p>{'id':'id_001', 'name':'john', 'value': '\x01\x00\x00\x00\x00\x00\x00\x00\x00'}</p>
<p>{'id':'id_002', 'name':'terry', 'value': '\x04@\x97\xa8\x00\x00\x00\x00\x00'}</p>
</blockquote>
<p>Ideally the values in the "value" field are <code>0</code> and <code>1514</code> for <code>row1</code> and <code>row2</code> resp. </p>
<p>However, I am not sure how I can convert the "value" field values extracted using cqlengine to <code>0</code> and <code>1514</code>. I tried few methods like <code>ord()</code>, <code>decode()</code>, etc but nothing worked. :(</p>
<p><strong>Questions:</strong></p>
<ol>
<li>What is this format?
<code>'\x01\x00\x00\x00\x00\x00\x00\x00\x00'</code> or
<code>'\x04@\x97\xa8\x00\x00\x00\x00\x00'</code>?</li>
<li>How I can convert these arbitrary values to <code>0</code> and <code>1514</code>?</li>
</ol>
<p>NOTE: I am using python 2.7.9 on Linux</p>
<p>Any help or pointers would be useful.</p>
<p>Thanks,</p>
| 0 |
2016-09-21T09:02:13Z
| 39,628,303 |
<ol>
<li><p>A blob is returned as a raw byte string if you read it directly in Python; what you see is the repr of those bytes (the <code>0x...</code> form shown by cqlsh and the <code>\x..</code> escapes are the same data).</p></li>
<li><p>One way is to explicitly do the conversion in your query.</p>
<p><code>select id, name, blobasint(value) from table limit 3</code></p>
<p>There should be a conversion method with the Python driver as well. </p></li>
</ol>
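<p>A hedged guess at decoding those particular 9-byte values: the two samples look like a one-byte type marker followed by a big-endian IEEE-754 double (<code>0x4097A80000000000</code> is 1514.0, and all-zero bytes are 0.0). This is inferred from the sample rows only, not documented behaviour, so verify it against more of your data:</p>

```python
import struct

def blob_to_number(raw):
    # raw is the byte string returned by the driver,
    # e.g. '\x04@\x97\xa8\x00\x00\x00\x00\x00'.
    # Skip the assumed 1-byte type marker, then unpack a big-endian double.
    return struct.unpack('>d', raw[1:])[0]

print(blob_to_number(b'\x01\x00\x00\x00\x00\x00\x00\x00\x00'))  # 0.0
print(blob_to_number(b'\x04\x40\x97\xa8\x00\x00\x00\x00\x00'))  # 1514.0
```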
| 0 |
2016-09-22T00:09:02Z
|
[
"python",
"cassandra",
"cqlsh",
"cqlengine"
] |
Parsing NBA reference with python beautiful soup
| 39,612,000 |
<p>So I'm trying to scrape out the miscellaneous stats table from this site <a href="http://www.basketball-reference.com/leagues/NBA_2016.html" rel="nofollow">http://www.basketball-reference.com/leagues/NBA_2016.html</a> using python and beautiful soup. This is the basic code so far I just want to see if it is even reading the table but when I do print table I just get none.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "http://www.basketball-reference.com/leagues/NBA_2016.html"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)
table = soup.find('table', id='misc_stats')
print table
</code></pre>
<p>When I inspect the html on the webpage itself, the table that I want appears with this symbol in front <code><!--</code> and the html text is green for the portion. What can I do? </p>
| 0 |
2016-09-21T09:02:26Z
| 39,612,132 |
<p><code><!--</code> is the start of a comment and <code>--></code> is the end in html so just remove the comments before you parse it:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import re

comm = re.compile("<!--|-->")
html = requests.get("http://www.basketball-reference.com/leagues/NBA_2016.html").content
cleaned_soup = BeautifulSoup(comm.sub("", html), "html.parser")
tableStats = cleaned_soup.find('table', {'id':'team_stats'})
print(tableStats)
</code></pre>
| 1 |
2016-09-21T09:07:59Z
|
[
"python",
"html",
"parsing",
"beautifulsoup"
] |
store function values for future integration
| 39,612,016 |
<p>I have a function H(t) returning a float. I then want to numerically compute several integrals involving this function. However, one of the integrals basically invokes the previous integral, e.g.</p>
<pre><code>from scipy.integrate import quad
def H(t):
    return t

def G1(t):
    return quad(lambda x: H(x)*H(t-x),0,1)[0]

def G2(t):
    return quad(lambda x: G1(x)*H(t-x),0,1)[0]

res = quad(G1,0,1) + quad(G2,0,1)
</code></pre>
<p>I will need to do this for a large number of functions (i.e. continue like this with G3, G4... Gn) and my end-result will depend on all of them. Since I have to compute G1 for many values of t anyway in order to perform quad(G1,0,1), is it possible to store those computed values and use them when computing quad(G2,0,1)? I could maybe try to write my own simplistic integration function, but how would I go about storing and re-using these values?</p>
| 0 |
2016-09-21T09:02:57Z
| 39,613,802 |
<p>I guess it's exactly the same as lru_cache, but since I already wrote it, I might as well share:</p>
<pre><code>def store_result(func):
    def remember(t):
        if t not in func.results:
            func.results[t] = func(t)
        return func.results[t]
    setattr(func, 'results', {})  # make dictionary to store results
    return remember
</code></pre>
<p>This is a decorator that modifies a function so it stores results in a dictionary and retrieves them when it is called with the same arguments. You can modify it to suit your needs.</p>
<p>usage:</p>
<pre><code>@store_result
def G1(t):
    return quad(lambda x: H(x)*H(t-x),0,1)[0]
</code></pre>
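<p>For completeness: on Python 3 the standard library's <code>functools.lru_cache</code> gives the same caching without a hand-rolled decorator. A self-contained demonstration with a stand-in for <code>G1</code> (so it runs without scipy):</p>

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=None)
def g1(t):
    calls.append(t)   # record how often the body really runs
    return t * t      # stand-in for the quad(...) integral

g1(2.0)
g1(2.0)               # served from the cache, body not re-run
g1(3.0)
print(len(calls))
```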
| 0 |
2016-09-21T10:19:58Z
|
[
"python",
"function",
"store",
"integrate"
] |
python - scatter plot with dates and 3rd variable as color
| 39,612,054 |
<p>I am trying to plot an x-y plot, with x or y as date variable, and using a 3rd variable to color the points.
I managed to do it if none of the 3 variables are date, using:</p>
<pre><code>ax.scatter(df['x'],df['y'],s=20,c=df['z'], marker = 'o', cmap = cm.jet )
</code></pre>
<p>After searching, I found out that for a normal plot, we have to use plot_date(). Unfortunately, I haven't been able to color the points.
Can anybody help me?</p>
<p>Here is a small example:</p>
<pre><code>import matplotlib, datetime
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import pandas as pd
todayTime=datetime.datetime.now();
df = pd.DataFrame({'x': [todayTime+datetime.timedelta(hours=i) for i in range(10)], 'y': range(10),'z' : [2*j for j in range(10)]});
xAlt=[0.5*i for i in range(10)];
fig, ax = plt.subplots()
ax.scatter(df['x'],df['y'],s=20,c=df['z'], marker = 'o', cmap = cm.jet )
plt.show()
</code></pre>
<p>You can replace df['x'] by xAlt to see the desired result</p>
<p>Thank you</p>
| 1 |
2016-09-21T09:04:44Z
| 39,612,731 |
<p>As far as I know, one has to use <code>scatter</code> in order to color the points as you describe. One workaround could be to use a <code>FuncFormatter</code> to convert the tick labels into times on the x-axis. The code below converts the dates into numbers, makes the scatter plot, and uses a <code>FuncFormatter</code> to convert the tick labels back into dates.</p>
<pre><code>import matplotlib, datetime
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
from matplotlib.ticker import FuncFormatter
todayTime=datetime.datetime.now()
df = pd.DataFrame({'x': [todayTime+datetime.timedelta(hours=i) for i in range(10)], 'y': range(10),'z' : [2*j for j in range(10)]})
def my_formatter(x, pos=None):
d = matplotlib.dates.num2date(x)
if len(str(d.minute)) == 1:
mn = '0{}'.format(d.minute)
else:
mn = str(d.minute)
if len(str(d.hour)) == 1:
hr = '0{}'.format(d.hour)
else:
hr = str(d.hour)
return hr+':'+mn
major_formatter=FuncFormatter(my_formatter)
nums = np.array([matplotlib.dates.date2num(di) for di in df['x']])
fig, ax = plt.subplots()
ax.xaxis.set_major_formatter(major_formatter)
ax.scatter(nums,df['y'],s=20,c=df['z'], marker = 'o', cmap = cm.jet )
xmin = df['x'][0]-datetime.timedelta(hours=1)
xmax = df['x'][len(df['x'])-1]+datetime.timedelta(hours=1)
ax.set_xlim((xmin,xmax))
plt.show()
</code></pre>
| 1 |
2016-09-21T09:33:39Z
|
[
"python",
"date",
"matplotlib",
"scatter"
] |
How do I replace a line in Python
| 39,612,059 |
<p>I am trying to write a program in Python for an assessment where I have to update a stock file for a shop.</p>
<p><a href="http://i.stack.imgur.com/0OV0L.jpg" rel="nofollow">My instructions</a></p>
<p>My code so far:</p>
<pre class="lang-python prettyprint-override"><code>#open a file in read mode
file = open("data.txt","r")
#read each line of data to a vairble
FileData = file.readlines()
#close the file
file.close()
total = 0 #create the vairble total
AnotherItem = "y" # create
while AnotherItem == "y" or AnotherItem == "Y" or AnotherItem == "yes" or AnotherItem == "Yes" or AnotherItem == "YES":
print("please enter the barcode")
UsersItem=input()
for line in FileData:
#split the line in to first and second section
barcode = line.split(":")[0]
item = line.split(":")[1]
price = line.split(":")[2]
stock = line.split(":")[3]
if barcode==UsersItem:
file = open("data.txt","r")
#read each line of data to a vairble
FileData = file.readlines()
#close the file
file.close()
print(item +" £" + str(float(price)/100) + " Stock: " + str(stock))
print("how many do you want to buy")
HowMany= input()
total+=float(price) * float( HowMany)
file = open("data.txt","a")
StockLevel=float(stock)
file.write(item + ":" + price + ":" +str(float((StockLevel-float(HowMany)))))
file.close()
print("Do you want to buy another item? [Yes/No]")
AnotherItem = input()
if AnotherItem == "n" or "N" or "no" or "No" or "NO":
print("total price: £"+ str(total/100))
</code></pre>
<p>When I try to update my stock file, it writes to the end of the data file instead of over the line I want it to.</p>
| 0 |
2016-09-21T09:04:51Z
| 39,613,991 |
<pre><code>file = open("data.txt", "a")
</code></pre>
<p>Opens the file in append mode, which means writing to the end of the file without overwriting existing data (adding to it, not changing it).</p>
<p>Use </p>
<pre><code>file = open("data.txt", "w")
</code></pre>
<p>Instead, to flush the file contents and overwrite them.</p>
<hr>
<p>If you have some lines and you only want to change the last one, do as follows:</p>
<pre><code>data = open("data.txt").read().split("\n")
</code></pre>
<p>get an array of the file lines.</p>
<pre><code>data[-1] = new_line_contents
</code></pre>
<p>change the last line to the new content.</p>
<pre><code>open("data.txt", "w").write("\n".join(data))
</code></pre>
<p>write back with flushing the new data (join the elements in data with newline between them)</p>
| 0 |
2016-09-21T10:28:46Z
|
[
"python"
] |
How can I convert from Datetime to Milliseconds in Django?
| 39,612,125 |
<p>I want to convert date (ex: 2016-09-20 22:00:00+00:00) to milliseconds. I will apply this block:</p>
<pre><code>def get_calendar_events(request):
user = request.user
taken_course = Course.objects.get_enrollments(user=user)
homework_list = []
for course in taken_course:
homework_list = course.get_homeworks()
data_l = []
for homework in homework_list:
data_l.append({
"id": user.id,
"title": homework.title,
"class": "event-important",
"start": homework.start_date, # Milliseconds
"end": homework.end_date # Milliseconds
})
data = {
"success": 1,
"result": [data_l]
}
return JsonResponse(data, safe=False)
</code></pre>
<p>I need to edit "start" and "end" tags in data_l. Thank you.</p>
| -1 |
2016-09-21T09:07:37Z
| 39,612,626 |
<p>In Python 3, you can use the <a href="https://docs.python.org/3/library/datetime.html?highlight=re#datetime.datetime.timestamp" rel="nofollow"><code>timestamp()</code></a> method to get the number of seconds elapsed since Jan 1, 1970; multiply by 1000 for milliseconds.</p>
<pre><code>import datetime
d = datetime.datetime.now()
print(d.timestamp())
# 1474450000.164866
print(int(d.timestamp() * 1000))
# 1474450000164
</code></pre>
<p>This is assuming the <code>homework.start_date</code> field is a <code>datetime</code> object.</p>
| 0 |
2016-09-21T09:29:23Z
|
[
"python",
"json",
"django",
"database",
"datetime"
] |
Unable to write a squarefree algorithm
| 39,612,207 |
<p>I'm trying to write a function that returns true, if an integer is <a href="https://en.wikipedia.org/wiki/Square-free_integer" rel="nofollow">Square-Free</a> - this is what I've tried:</p>
<pre><code>def squarefree(n):
for i in range (2,n-1):
if n%(i**2)==0:
return False
else:
return True
</code></pre>
<blockquote>
<p>In mathematics, a square-free, or quadratfrei (from German), integer is an integer which is divisible by no perfect square other than 1. For example, 10 is square-free but 18 is not, as 18 is divisible by 9 = 3^2. The smallest positive square-free numbers are</p>
</blockquote>
| -6 |
2016-09-21T09:11:42Z
| 39,613,737 |
<p><strong>Your loop ends on its first run.</strong> It checks only for divisibility by the first <code>i</code>, i.e. by 4, which is caused by your <code>else</code> statement making the function return based on that single <code>i</code> alone.</p>
<p>You should wait until it ends to decide if <code>n</code> is square free:</p>
<pre><code>def squarefree(n):
    for i in range(2, int(n ** 0.5) + 1):
        if n % (i ** 2) == 0:
            return False
    return True
</code></pre>
<p>That way, if the program finds that <code>n</code> fails this condition, it returns <code>False</code> immediately. But if no square breaks this loop, the program returns <code>True</code>, as the loop ends without "interruptions".</p>
<p>Remember - not every <code>if</code> must have an <code>else</code> attached. Also, it is enough to check divisors up to <code>sqrt(n)</code>, hence the exclusive bound <code>int(n ** 0.5) + 1</code>: any larger <code>i</code> has <code>i**2 > n</code>, so it cannot divide <code>n</code>.</p>
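<p>A quick sanity check against the numbers from the question (10 is square-free; 18 and 4 are not):</p>

```python
def squarefree(n):
    # check all candidate roots up to sqrt(n), inclusive
    for i in range(2, int(n ** 0.5) + 1):
        if n % (i ** 2) == 0:
            return False
    return True

print([squarefree(n) for n in (10, 18, 4, 1)])
# [True, False, False, True]
```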
| 0 |
2016-09-21T10:17:18Z
|
[
"python"
] |
Writing pandas DataFrame to JSON in unicode
| 39,612,240 |
<p>I'm trying to write a pandas DataFrame containing unicode to json, but the built-in <code>.to_json</code> function escapes the characters. How do I fix this?</p>
<p>Some sample code:</p>
<pre><code>import pandas as pd
df=pd.DataFrame([['τ','a',1],['π','b',2]])
df.to_json('df.json')
</code></pre>
<p>gives:</p>
<pre><code>{"0":{"0":"\u03c4","1":"\u03c0"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
</code></pre>
<p>instead of what I want:</p>
<pre><code>{"0":{"0":"Ï","1":"Ï"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
</code></pre>
<p>Adding the <code>force_ascii=False</code> argument gives me the following error:
<code>UnicodeEncodeError: 'charmap' codec can't encode character '\u03c4' in position 11: character maps to <undefined></code></p>
<p>I'm using WinPython 3.4.4.2 64bit with pandas 0.18.0</p>
| 1 |
2016-09-21T09:13:29Z
| 39,612,316 |
<p>Opening a file with the encoding set to utf-8, and then passing that file to the <code>.to_json</code> function fixes the problem:</p>
<pre><code>with open('df.json', 'w', encoding='utf-8') as file:
df.to_json(file, force_ascii=False)
</code></pre>
<p>gives the correct:</p>
<pre><code>{"0":{"0":"Ï","1":"Ï"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
</code></pre>
<p>Note: it does still require the <code>force_ascii=False</code> argument.</p>
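<p>For reference, the plain <code>json</code> module exposes the analogous <code>ensure_ascii</code> switch, with the same default escaping behaviour:</p>

```python
import json

print(json.dumps({"0": "τ"}))                      # {"0": "\u03c4"}
print(json.dumps({"0": "τ"}, ensure_ascii=False))  # {"0": "τ"}
```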
| 4 |
2016-09-21T09:16:38Z
|
[
"python",
"json",
"pandas",
"unicode"
] |
How to use multiprocessing in a for loop - python
| 39,612,248 |
<p>I have a script that uses Python mechanize to brute-force an HTML form. This is a for loop that checks every password from "PasswordList" and runs until it matches the current password by checking the redirected URL. How can I implement multiprocessing here?</p>
<pre><code>for x in PasswordList:
br.form['password'] = ''.join(x)
print "Bruteforce in progress.. checking : ",br.form['password']
response=br.submit()
if response.geturl()=="http://192.168.1.106/success.html":
#url to which the page is redirected after login
print "\n Correct password is ",''.join(x)
break
</code></pre>
| -1 |
2016-09-21T09:13:50Z
| 39,612,504 |
<pre><code>from multiprocessing import Pool

def process_bruteforce(password):
    pass  # <process a single password here>

if __name__ == '__main__':
    pool = Pool(processes=4) # one process per core
    is_connected = pool.map(process_bruteforce, PasswordList)
</code></pre>
<p>I would try something like that</p>
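<p>A self-contained sketch of that pattern, with a dummy <code>check_password</code> standing in for the form submission and redirect check (note that a mechanize browser object generally cannot be shared across processes, so each worker would need to build its own):</p>

```python
from multiprocessing import Pool

def check_password(password):
    # hypothetical stand-in for br.submit() + redirect-URL check
    return password if password == "secret" else None

if __name__ == '__main__':
    password_list = ["aaaa", "bbbb", "secret", "cccc"]
    with Pool(processes=4) as pool:
        results = pool.map(check_password, password_list)
    print([p for p in results if p is not None])
    # ['secret']
```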
| 0 |
2016-09-21T09:23:55Z
|
[
"python",
"multiprocessing",
"mechanize-python"
] |
How to use multiprocessing in a for loop - python
| 39,612,248 |
<p>I have a script that uses Python mechanize to brute-force an HTML form. This is a for loop that checks every password from "PasswordList" and runs until it matches the current password by checking the redirected URL. How can I implement multiprocessing here?</p>
<pre><code>for x in PasswordList:
br.form['password'] = ''.join(x)
print "Bruteforce in progress.. checking : ",br.form['password']
response=br.submit()
if response.geturl()=="http://192.168.1.106/success.html":
#url to which the page is redirected after login
print "\n Correct password is ",''.join(x)
break
</code></pre>
| -1 |
2016-09-21T09:13:50Z
| 39,612,552 |
<p>I do hope this is not for malicious purposes.</p>
<p>I've never used python mechanize, but seeing as you have no answers I can share what I know, and you can modify it accordingly.</p>
<p>In general, it needs to be its own function, which you then call pool over. I dont know about your br object, but i would probably recommend having many of those objects to prevent any clashing. (Can try with the same br object tho, modify code accordingly)</p>
<pre><code>from multiprocessing import Pool
from multiprocessing import cpu_count

# one (browser, password) pair per attempt; note that browser objects
# usually cannot be shared across processes
list_of_br_and_passwords = [[br_obj, 'password1'], [br_obj, 'password2']]  # ...

def crackPassword(lst):
    br_obj = lst[0]
    password = lst[1]
    br_obj.form['password'] = ''.join(password)
    print "Bruteforce in progress.. checking : ", br_obj.form['password']
    response = br_obj.submit()

pool = Pool(cpu_count() * 2)
crack_password = pool.map(crackPassword, list_of_br_and_passwords)
pool.close()
</code></pre>
<p>Once again, this is not a full answer, just a general guideline on how to do multiprocessing</p>
| 0 |
2016-09-21T09:26:18Z
|
[
"python",
"multiprocessing",
"mechanize-python"
] |
How to convert a large Json file into a csv using python
| 39,612,262 |
<p>(Python 3.5)
I am trying to parse a large user review.json file (1.3 GB) into Python and convert it to a .csv file. I have tried looking for a simple converter tool online, most of which accept a maximum file size of 1 MB or are super expensive.
As I am fairly new to Python, I'll ask 2 questions.</p>
<ol>
<li><p>Is it even possible/efficient to do so, or should I be looking for another method?</p></li>
<li><p>I tried the following code; it only reads and writes the top 342 lines in my .json doc, then returns an error.</p></li>
</ol>
<blockquote>
<p>File "C:\Anaconda3\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)</p>
<p>File "C:\Anaconda3\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
JSONDecodeError: Extra data</p>
</blockquote>
<p><strong>This is the code im using</strong></p>
<pre><code>import csv
import json
infile = open("myfile.json","r")
outfile = open ("myfile.csv","w")
writer = csv.writer(outfile)
for row in json.loads(infile.read()):
writer.writerow(row)
</code></pre>
<p><strong>my .json example:</strong></p>
<p>Link To small part of <a href="http://pastebin.com/mfHr703f" rel="nofollow">Json</a></p>
<p>My thought is that it's some type of error related to my for loop with json.loads... but I do not know enough about it. Is it possible to create a dictionary{} and convert just the values "user_id", "stars", "text"? Or am I dreaming?</p>
<p>Any suggestions or criticism are appreciated.</p>
| 0 |
2016-09-21T09:14:37Z
| 39,612,324 |
<p>This is not a JSON file; this is a file containing individual lines of JSON. You should parse each line individually.</p>
<pre><code>for row in infile:
    data = json.loads(row)
    writer.writerow([data[k] for k in ("user_id", "stars", "text")])
</code></pre>
| 0 |
2016-09-21T09:17:05Z
|
[
"python",
"json",
"csv",
"dictionary"
] |
How to convert a large Json file into a csv using python
| 39,612,262 |
<p>(Python 3.5)
I am trying to parse a large user review.json file (1.3 GB) into Python and convert it to a .csv file. I have tried looking for a simple converter tool online, most of which accept a maximum file size of 1 MB or are super expensive.
As I am fairly new to Python, I'll ask 2 questions.</p>
<ol>
<li><p>Is it even possible/efficient to do so, or should I be looking for another method?</p></li>
<li><p>I tried the following code; it only reads and writes the top 342 lines in my .json doc, then returns an error.</p></li>
</ol>
<blockquote>
<p>File "C:\Anaconda3\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)</p>
<p>File "C:\Anaconda3\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
JSONDecodeError: Extra data</p>
</blockquote>
<p><strong>This is the code im using</strong></p>
<pre><code>import csv
import json
infile = open("myfile.json","r")
outfile = open ("myfile.csv","w")
writer = csv.writer(outfile)
for row in json.loads(infile.read()):
writer.writerow(row)
</code></pre>
<p><strong>my .json example:</strong></p>
<p>Link To small part of <a href="http://pastebin.com/mfHr703f" rel="nofollow">Json</a></p>
<p>My thought is that it's some type of error related to my for loop with json.loads... but I do not know enough about it. Is it possible to create a dictionary{} and convert just the values "user_id", "stars", "text"? Or am I dreaming?</p>
<p>Any suggestions or criticism are appreciated.</p>
| 0 |
2016-09-21T09:14:37Z
| 39,612,609 |
<p>Sometimes it's not as easy as having one JSON definition per line of input. A JSON definition can spread out over multiple lines, and it's not necessarily easy to determine which are the start and end braces reading line by line (for example, if there are strings containing braces, or nested structures). </p>
<p>The answer is to use the <code>raw_decode</code> method of <code>json.JSONDecoder</code> to fetch the JSON definitions from the file one at a time. This will work for any set of concatenated valid JSON definitions. It's further described in my answer here: <a href="http://stackoverflow.com/questions/36019907/importing-wrongly-concatenated-jsons-in-python/36021001#36021001">Importing wrongly concatenated JSONs in python</a></p>
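<p>A minimal sketch of that approach — repeatedly calling <code>raw_decode</code>, which returns the parsed value together with the index where parsing stopped:</p>

```python
import json

def iter_json(text):
    """Yield successive JSON values from a string of concatenated JSON definitions."""
    decoder = json.JSONDecoder()
    idx, n = 0, len(text)
    while idx < n:
        while idx < n and text[idx].isspace():  # skip whitespace between values
            idx += 1
        if idx >= n:
            break
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

blob = '{"stars": 5}\n{"stars": 2} {"stars": 4}'
print(list(iter_json(blob)))
# [{'stars': 5}, {'stars': 2}, {'stars': 4}]
```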
| 0 |
2016-09-21T09:28:27Z
|
[
"python",
"json",
"csv",
"dictionary"
] |
Pandas: Change values chosen by boolean indexing in a column without getting a warning
| 39,612,300 |
<p>I have a dataframe, I want to change only those values of a column where another column fulfills a certain condition. I'm trying to do this with <code>iloc</code> at the moment and it either does not work or I'm getting that annoying warning:</p>
<blockquote>
<p>A value is trying to be set on a copy of a slice from a DataFrame</p>
</blockquote>
<p>Example:</p>
<pre><code>import pandas as pd
DF = pd.DataFrame({'A':[1,1,2,1,2,2,1,2,1],'B':['a','a','b','c','x','t','i','x','b']})
</code></pre>
<p>Doing one of those</p>
<pre><code>DF['B'].iloc[:][DF['A'] == 1] = 'X'
DF.iloc[:]['B'][DF['A'] == 1] = 'Y'
</code></pre>
<p>works, but leads to the warning above.</p>
<p>This one also gives a warning, but does not work:</p>
<pre><code>DF.iloc[:][DF['A'] == 1]['B'] = 'Z'
</code></pre>
<p>I'm really confused about how to do boolean indexing using <code>loc</code>, <code>iloc</code>, and <code>ix</code> right, that is, how to provide row index, column index, AND boolean index in the right order and with the correct syntax. </p>
<p>Can someone clear this up for me?</p>
| 3 |
2016-09-21T09:16:03Z
| 39,612,376 |
<p>You are chaining your selectors, leading to the warning. Consolidate the selection into one.<br>
Use <code>loc</code> instead:</p>
<pre><code>DF.loc[DF['A'] == 1, 'B'] = 'X'
DF
</code></pre>
<p><a href="http://i.stack.imgur.com/HX26a.png" rel="nofollow"><img src="http://i.stack.imgur.com/HX26a.png" alt="enter image description here"></a></p>
| 3 |
2016-09-21T09:18:52Z
|
[
"python",
"python-2.7",
"pandas",
"indexing",
"condition"
] |
Pandas: Change values chosen by boolean indexing in a column without getting a warning
| 39,612,300 |
<p>I have a dataframe, I want to change only those values of a column where another column fulfills a certain condition. I'm trying to do this with <code>iloc</code> at the moment and it either does not work or I'm getting that annoying warning:</p>
<blockquote>
<p>A value is trying to be set on a copy of a slice from a DataFrame</p>
</blockquote>
<p>Example:</p>
<pre><code>import pandas as pd
DF = pd.DataFrame({'A':[1,1,2,1,2,2,1,2,1],'B':['a','a','b','c','x','t','i','x','b']})
</code></pre>
<p>Doing one of those</p>
<pre><code>DF['B'].iloc[:][DF['A'] == 1] = 'X'
DF.iloc[:]['B'][DF['A'] == 1] = 'Y'
</code></pre>
<p>works, but leads to the warning above.</p>
<p>This one also gives a warning, but does not work:</p>
<pre><code>DF.iloc[:][DF['A'] == 1]['B'] = 'Z'
</code></pre>
<p>I'm really confused about how to do boolean indexing using <code>loc</code>, <code>iloc</code>, and <code>ix</code> right, that is, how to provide row index, column index, AND boolean index in the right order and with the correct syntax. </p>
<p>Can someone clear this up for me?</p>
| 3 |
2016-09-21T09:16:03Z
| 39,612,385 |
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>import pandas as pd
DF = pd.DataFrame({'A':[1,1,2,1,2,2,1,2,1],'B':['a','a','b','c','x','t','i','x','b']})
DF.ix[DF['A'] == 1, 'B'] = 'X'
print (DF)
0 1 X
1 1 X
2 2 b
3 1 X
4 2 x
5 2 t
6 1 X
7 2 x
8 1 X
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow"><code>mask</code></a>:</p>
<pre><code>DF.B = DF.B.mask(DF['A'] == 1, 'X')
print (DF)
A B
0 1 X
1 1 X
2 2 b
3 1 X
4 2 x
5 2 t
6 1 X
7 2 x
8 1 X
</code></pre>
<p>Nice article about <a href="http://tomaugspurger.github.io/modern-1.html" rel="nofollow"><code>SettingWithCopy</code> by Tom Augspurger</a>.</p>
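<p>Another equivalent option, if you prefer a vectorised expression, is <code>numpy.where</code>:</p>

```python
import numpy as np
import pandas as pd

DF = pd.DataFrame({'A': [1, 1, 2], 'B': ['a', 'a', 'b']})
# pick 'X' where the condition holds, keep the existing value elsewhere
DF['B'] = np.where(DF['A'] == 1, 'X', DF['B'])
print(DF['B'].tolist())
# ['X', 'X', 'b']
```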
| 3 |
2016-09-21T09:19:05Z
|
[
"python",
"python-2.7",
"pandas",
"indexing",
"condition"
] |
Elasticsearch results exactly as parameter
| 39,612,301 |
<p>I'm trying to filter logs based on the domain name. For example I only want the results of domain: bh250.example.com.</p>
<p>When I use the following query:</p>
<p><a href="http://localhost:9200/_search?pretty&size=150&q=domainname=bh250.example.com" rel="nofollow">http://localhost:9200/_search?pretty&size=150&q=domainname=bh250.example.com</a></p>
<p>the first 3 results have the domain name bh250.example.com, while the 4th has bh500.example.com.</p>
<p>I have read several pieces of documentation on how to query Elasticsearch, but I seem to be missing something. I only want results that are a 100% match with the parameter.</p>
<p>UPDATE!! After question from Val</p>
<pre><code>queryFilter = Q("match", domainname="bh250.example.com")
search=Search(using=dev_client, index="logstash-2016.09.21").query("bool", filter=queryFilter)[0:20]
</code></pre>
| 0 |
2016-09-21T09:16:04Z
| 39,612,618 |
<p>You're almost there, you just need to make a small change:</p>
<pre><code>http://localhost:9200/_search?pretty&size=150&q=domainname:"bh250.example.com"
^ ^
| |
use colon instead of equal... and double quotes
</code></pre>
| 0 |
2016-09-21T09:28:56Z
|
[
"python",
"django",
"elasticsearch"
] |
Evaluating trigonometric expressions in sympy
| 39,612,356 |
<p>Using python 2.7 with PyCharm Community Edition 2016.2.3 + Anaconda distribution.</p>
<p>I have an input similar to :</p>
<pre><code>from sympy import *
x = symbols('x')
f = cos(x)
print (f.subs(x, 25))
</code></pre>
<p>The output is <code>cos(25)</code>. Is there a way to evaluate trigonometric functions such as sin/cos at a certain angle? I've tried <code>cos(degrees(x))</code>, but nothing differs. Am I missing some crucial part of the documentation, or is there really no way to do this? Ty for your help :)</p>
| 0 |
2016-09-21T09:18:16Z
| 39,612,587 |
<p>Perform a <em>numerical evaluation</em> using <a href="http://docs.sympy.org/latest/modules/evalf.html#numerical-evaluation" rel="nofollow">function <code>N</code></a>:</p>
<pre><code>>>> from sympy import N, symbols, cos
>>> x = symbols('x')
>>> f = cos(x)
>>> f.subs(x, 25)
cos(25)
>>> N(f.subs(x, 25)) # evaluate after substitution
0.991202811863474
</code></pre>
<p>To make the computation in degrees, convert the angle to radians, using <a href="http://docs.sympy.org/0.6.7/modules/mpmath/functions/trigonometric.html#radians" rel="nofollow"><code>mpmath.radians</code></a>, so the computation is performed on a <em>rad</em> value:</p>
<pre><code>>>> from sympy import mpmath
>>> f.subs(x, mpmath.radians(25))
0.906307787036650
</code></pre>
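<p>Alternatively, the degree-to-radian conversion can stay symbolic by multiplying by sympy's exact <code>pi</code>/180 before evaluating, avoiding mpmath altogether:</p>

```python
from sympy import N, symbols, cos, pi

x = symbols('x')
f = cos(x * pi / 180)   # x is now interpreted in degrees
print(N(f.subs(x, 25)))
# 0.906307787036650
```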
<hr>
<p>Importing with <code>*</code> (wildcard imports) isn't a very good idea. Imagine what happens if you equally did <code>from math import *</code>, then one of the <code>cos</code> functions from both modules will be out in the wild.</p>
<p>See the <a href="https://www.python.org/dev/peps/pep-0008/#imports" rel="nofollow">PEP 8 guideline on imports</a>.</p>
| 2 |
2016-09-21T09:27:36Z
|
[
"python",
"numpy",
"trigonometry",
"sympy"
] |
declare a variable as *not* an integer in sage/maxima solve
| 39,612,476 |
<p>I am trying to solve symbolically a simple equation for x:</p>
<pre><code>solve(x^K + d == R, x)
</code></pre>
<p>I am declaring these variables and assumptions:</p>
<pre><code>var('K, d, R')
assume(K>0)
assume(K, 'real')
assume(R>0)
assume(R<1)
assume(d<R)
assumptions()
> [K > 0, K is real, R > 0, R < 1, d < R]
</code></pre>
<p>Yet when I run the solve, I obtain the following error:</p>
<blockquote>
<p>Error in lines 1-1 </p>
<p>Traceback (most recent call last): </p>
<p>File
"/projects/sage/sage-7.3/local/lib/python2.7/site-packages/smc_sagews/sage_server.py",
line 957, in execute
exec compile(block+'\n', '', 'single') in namespace, locals </p>
<p>...</p>
<p>File "/projects/sage/sage-7.3/local/lib/python2.7/site-packages/sage/interfaces/interface.py",
line 671, in __init__
raise TypeError(x)</p>
<p>TypeError: Computation failed since Maxima requested additional constraints; using the 'assume' command before evaluation <em>may</em> help (example of legal syntax is 'assume(K>0)', see <code>assume?</code> for more details) </p>
<p><strong>Is K an integer?</strong></p>
</blockquote>
<p>Apparently, maxima is asking whether K is an integer? But I explicitly declared it 'real'!
How can I spell out to maxima that it should not assume that K is an integer?</p>
<p>I am simply expecting <code>(R-d)^(1/K)</code> or <code>exp(log(R-d)/K)</code> as answer.</p>
| 1 |
2016-09-21T09:22:38Z
| 39,616,465 |
<p>The assumption framework in both Sage and Maxima is fairly weak, though in this case it doesn't matter, since integers are real numbers, right? </p>
<p>However, you might want to try <code>assume(K, 'noninteger')</code> because apparently <a href="http://maxima.sourceforge.net/docs/manual/maxima_11.html#Item_003a-integer" rel="nofollow">Maxima does support this</a> particular assumption (I had not seen it before). I can't try this right now, unfortunately. Good luck!</p>
| 1 |
2016-09-21T12:23:53Z
|
[
"python",
"sage",
"maxima"
] |
airflow triggle_dag execution_date is the next day, why?
| 39,612,488 |
<p>Recently I have tested airflow so much that have one problem with <code>execution_date</code> when running <code>airflow trigger_dag <my-dag></code>.</p>
<p>I have learned that <code>execution_date</code> is not what we think at first time from <a href="https://cwiki.apache.org/confluence/display/AIRFLOW/Common+Pitfalls" rel="nofollow">here</a>:</p>
<blockquote>
<p>Airflow was developed as a solution for ETL needs. In the ETL world,
you typically summarize data. So, if I want to summarize data for
2016-02-19, I would do it at 2016-02-20 midnight GMT, which would be
right after all data for 2016-02-19 becomes available.</p>
</blockquote>
<pre><code>start_date = datetime.combine(datetime.today(),
datetime.min.time())
args = {
"owner": "xigua",
"start_date": start_date
}
dag = DAG(dag_id="hadoopprojects", default_args=args,
schedule_interval=timedelta(days=1))
wait_5m = ops.TimeDeltaSensor(task_id="wait_5m",
dag=dag,
delta=timedelta(minutes=5))
</code></pre>
<p>Above codes is the start part of my daily workflow, the first task is a TimeDeltaSensor that waits another 5 minutes before actual work, so this means my dag will be triggered at <code>2016-09-09T00:05:00</code>, <code>2016-09-10T00:05:00</code>... etc.</p>
<p>In Web UI, I can see something like <code>scheduled__2016-09-20T00:00:00</code>, and task is run at <code>2016-09-21T00:00:00</code>, which seems reasonable according to <code>ETL</code> model.</p>
<p>However someday my dag is not triggered for unknown reason, so I trigger it manually, if I trigger it at <code>2016-09-20T00:10:00</code>, then the TimeDeltaSensor will wait until <code>2016-09-21T00:15:00</code> before run. </p>
<p>This is not what I want, I want it to run at <code>2016-09-20T00:15:00</code> not the next day, I have tried passing <code>execution_date</code> through <code>--conf '{"execution_date": "2016-09-20"}'</code>, but it doesn't work.</p>
<p>How should I deal with this issue ?</p>
<pre><code>$ airflow version
[2016-09-21 17:26:33,654] {__init__.py:36} INFO - Using executor LocalExecutor
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
v1.7.1.3
</code></pre>
| 2 |
2016-09-21T09:23:09Z
| 39,620,901 |
<p>First, I recommend you use constants for <code>start_date</code>, because dynamic ones would act unpredictably based on when your airflow pipeline is evaluated by the scheduler.</p>
<p>More information about <code>start_date</code> is in an FAQ entry that I wrote, which sorts all this out:
<a href="http://pythonhosted.org/airflow/faq.html#what-s-the-deal-with-start-date" rel="nofollow">http://pythonhosted.org/airflow/faq.html#what-s-the-deal-with-start-date</a></p>
<p>Now, about <code>execution_date</code> and when it is triggered: this is a common gotcha for people onboarding on Airflow. Airflow sets <code>execution_date</code> based on the left bound of the schedule period it is covering, not based on when it fires (which would be the right bound of the period). When running a <code>schedule='@hourly'</code> task, for instance, a task will fire every hour. The task that fires at 2pm will have an <code>execution_date</code> of 1pm because it assumes that you are processing the 1pm to 2pm time window at 2pm. Similarly, if you run a daily job, the run with an <code>execution_date</code> of <code>2016-01-01</code> would trigger soon after midnight on <code>2016-01-02</code>.</p>
<p>This left-bound labelling makes a lot of sense when thinking in terms of ETL and differential loads, but gets confusing when thinking in terms of a simple, cron-like scheduler.</p>
| 2 |
2016-09-21T15:34:49Z
|
[
"python",
"airflow"
] |
ValueError: too many values to unpack matplotlib errorbar
| 39,612,579 |
<p>I am trying to plot a errorbar:</p>
<pre><code>plt.errorbar(np.array(x_axis), np.array(y_axis), yerr=(np.array(y_bot), np.array(y_top)), linestyle='None', marker='^')
</code></pre>
<p>But it throws an error :</p>
<pre><code>plt.errorbar(np.array(x_axis), np.array(y_axis), yerr=(np.array(y_bot), np.array(y_top)), linestyle='None', marker='^')
File "/Library/Python/2.7/site-packages/matplotlib-1.4.x-py2.7-macosx-10.9-intel.egg/matplotlib/pyplot.py", line 2747, in errorbar
errorevery=errorevery, capthick=capthick, **kwargs)
File "/Library/Python/2.7/site-packages/matplotlib-1.4.x-py2.7-macosx-10.9-intel.egg/matplotlib/axes/_axes.py", line 2792, in errorbar
barcols.append(self.vlines(xo, lo, uo, **lines_kw))
File "/Library/Python/2.7/site-packages/matplotlib-1.4.x-py2.7-macosx-10.9-intel.egg/matplotlib/axes/_axes.py", line 1067, in vlines
for thisx, (thisymin, thisymax) in zip(x, Y)]
ValueError: too many values to unpack
</code></pre>
<p>x_axis, y_axis, y_bot, y_top are 1D arrays of length 4.</p>
| -1 |
2016-09-21T09:27:23Z
| 39,612,854 |
<p>The following works fine for me:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x_axis = range(4)
y_axis = range(4)
y_bot = range(4)
y_top = range(4)
plt.errorbar(np.array(x_axis), np.array(y_axis), yerr=(np.array(y_bot), np.array(y_top)), linestyle='None', marker='^')
</code></pre>
<p>You may want to verify your arrays</p>
| 2 |
2016-09-21T09:39:10Z
|
[
"python",
"numpy",
"matplotlib"
] |
Python: Socket send returns malformed URL
| 39,612,721 |
<p>Hello, I am trying to access a file via the Python socket module, and I am getting an HTTP 400 Bad Request error, but I am not sure what mistake I have made.</p>
<pre><code>import socket
mysocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
mysocket.connect(('www.pythonlearn.com',80))
mysocket.send(b'GET http://www.pythonlearn.com/code/intro-short.txt HTTP/1.0\n\n')
while True:
data=mysocket.recv(512)
if( len(data) < 1 ):
break
print(data)
mysocket.close()
</code></pre>
<p>Error: </p>
<pre><code>=== RESTART: E:\My Data\coursera\Using_Python_to_Access_Web_Data\week3.py ===
b'HTTP/1.1 400 Bad Request\r\nDate: Wed, 21 Sep 2016 09:27:20 GMT\r\nConnection: close\r\nSet-Cookie: __cfduid=d0586f04e861e59d45b703f20ea77a17c1474450040; expires=Thu, 21-Sep-17 09:27:20 GMT; path=/; domain=.pythonlearn.com; HttpOnly\r\nServer: cloudflare-nginx\r\nCF-RAY: 2e5c7b12d01d5426-LAX\r\n\r\n'
>>>
</code></pre>
<p>I am out of ideas here , any help greatly appreciated.</p>
<p>Thank you.</p>
| 1 |
2016-09-21T09:33:04Z
| 39,613,088 |
<p>The error is in the HTTP request: the requested path and the Host header should be sent separately (and, since you are on Python 3, the request must be encoded to bytes):</p>
<pre><code>http_request = 'GET /code/intro-short.txt HTTP/1.0\n'
http_request += 'Host: www.pythonlearn.com\n'
http_request += '\n'
mysocket.send(http_request.encode())
</code></pre>
| 0 |
2016-09-21T09:48:58Z
|
[
"python",
"sockets"
] |
Python - Add a line behind a specific string with a consecutive numeric
| 39,612,737 |
<p>I have a little problem I can't solve in Python. I am not really familiar with this language's commands, which is one of the reasons this is kind of difficult for me.</p>
<p>For example, when i have a text file like this:</p>
<pre><code>Indicate somename X1
Random qwerty
Indicate somename X2
random azerty
Indicate somename X3
random qwertz
Indicate somename X4
random asdfg
Indicate somename X5
</code></pre>
<p>I would like to make a script that puts specific values behind those lines, like this:</p>
<pre><code>Indicate somename X1 value = 500
Random qwerty
Indicate somename X2 value = 500
random azerty
Indicate somename X3 value = 500
random qwertz
Indicate somename X4 value = 500
random asdfg
Indicate somename X5 value = 500
</code></pre>
<p>I already tried a script like this: </p>
<pre><code>def replace_score(file_name, line_num, text):
f = open(file_name, 'r')
contents = f.readlines()
f.close()
contents[line_num] = text+"\n"
f = open(file_name, "w")
contents = "".join(contents)
f.write(contents)
f.close()
replace_score("file_path", 10, "replacing_text")
</code></pre>
<p>But I couldn't get it working the way I wanted it to.</p>
<p>I hope someone can help me out.</p>
<p>Greetings,</p>
| 0 |
2016-09-21T09:33:57Z
| 39,613,108 |
<pre><code>with open('sample') as fp, open('sample_out', 'w') as fo:
for line in fp:
if 'Indicate' in line:
content = line.strip() + " = 500"
else:
content = line.strip()
fo.write(content + "\n")
</code></pre>
| 0 |
2016-09-21T09:49:59Z
|
[
"python",
"string"
] |
Python - Add a line behind a specific string with a consecutive numeric
| 39,612,737 |
<p>I have a little problem I can't solve in Python. I am not really familiar with this language's commands, which is one of the reasons this is kind of difficult for me.</p>
<p>For example, when i have a text file like this:</p>
<pre><code>Indicate somename X1
Random qwerty
Indicate somename X2
random azerty
Indicate somename X3
random qwertz
Indicate somename X4
random asdfg
Indicate somename X5
</code></pre>
<p>I would like to make a script that puts specific values behind those lines, like this:</p>
<pre><code>Indicate somename X1 value = 500
Random qwerty
Indicate somename X2 value = 500
random azerty
Indicate somename X3 value = 500
random qwertz
Indicate somename X4 value = 500
random asdfg
Indicate somename X5 value = 500
</code></pre>
<p>I already tried a script like this: </p>
<pre><code>def replace_score(file_name, line_num, text):
f = open(file_name, 'r')
contents = f.readlines()
f.close()
contents[line_num] = text+"\n"
f = open(file_name, "w")
contents = "".join(contents)
f.write(contents)
f.close()
replace_score("file_path", 10, "replacing_text")
</code></pre>
<p>But I couldn't get it working the way I wanted it to.</p>
<p>I hope someone can help me out.</p>
<p>Greetings,</p>
| 0 |
2016-09-21T09:33:57Z
| 39,613,273 |
<pre><code>with open('/tmp/content.txt') as f: # where: '/tmp/content.txt' is the path of file
for i, line in enumerate(f.readlines()):
line = line.strip()
if not (i % 2):
line += ' value = 500'
print line.strip()
# Output:
Indicate somename X1 value = 500
Random qwerty
Indicate somename X2 value = 500
random azerty
Indicate somename X3 value = 500
random qwertz
Indicate somename X4 value = 500
random asdfg
Indicate somename X5 value = 500
</code></pre>
| 0 |
2016-09-21T09:56:30Z
|
[
"python",
"string"
] |
Python - Add a line behind a specific string with a consecutive numeric
| 39,612,737 |
<p>I have a little problem I can't solve in Python. I am not really familiar with this language's commands, which is one of the reasons this is kind of difficult for me.</p>
<p>For example, when i have a text file like this:</p>
<pre><code>Indicate somename X1
Random qwerty
Indicate somename X2
random azerty
Indicate somename X3
random qwertz
Indicate somename X4
random asdfg
Indicate somename X5
</code></pre>
<p>I would like to make a script that puts specific values behind those lines, like this:</p>
<pre><code>Indicate somename X1 value = 500
Random qwerty
Indicate somename X2 value = 500
random azerty
Indicate somename X3 value = 500
random qwertz
Indicate somename X4 value = 500
random asdfg
Indicate somename X5 value = 500
</code></pre>
<p>I already tried a script like this: </p>
<pre><code>def replace_score(file_name, line_num, text):
f = open(file_name, 'r')
contents = f.readlines()
f.close()
contents[line_num] = text+"\n"
f = open(file_name, "w")
contents = "".join(contents)
f.write(contents)
f.close()
replace_score("file_path", 10, "replacing_text")
</code></pre>
<p>But I couldn't get it working the way I wanted it to.</p>
<p>I hope someone can help me out.</p>
<p>Greetings,</p>
| 0 |
2016-09-21T09:33:57Z
| 39,613,603 |
<p>Using the <code>re</code> module, e.g.:</p>
<pre><code>import re

if re.match(r'Indicate somename [A-Z]\d+', line):
    modified = line.strip() + ' value = XXX'
</code></pre>
<p>If you need to modify the input file in place, read the entire file into a buffer, then write the result back.</p>
| 0 |
2016-09-21T10:12:10Z
|
[
"python",
"string"
] |
Runtime.exec() in java hangs because it is waiting for input from System.in
| 39,612,862 |
<p>I have the following short python program "test.py"</p>
<pre><code>n = int(raw_input())
print n
</code></pre>
<p>I'm executing the above program from following java program "ProcessRunner.java"</p>
<pre><code>import java.util.*;
import java.io.*;
public class ProcessRunner {
public static void main(String[] args) {
try {
Scanner s = new Scanner(Runtime.getRuntime().exec("python test.py").getInputStream()).useDelimiter("\\A");
System.out.println(s.next());
}
catch(Exception e) {
System.out.println(e.getMessage());
}
}
}
</code></pre>
<p>Upon running the command,</p>
<pre><code>java ProcessRunner
</code></pre>
<p>I'm not able to pass a value for 'n' in the proper format to the Python program, and the Java run also hangs. What is the proper way to handle this situation and pass a value for 'n' dynamically to the Python program from inside the Java program?</p>
| 2 |
2016-09-21T09:39:33Z
| 39,613,049 |
<p><a href="https://docs.python.org/2/library/functions.html?highlight=raw_input#raw_input" rel="nofollow"><code>raw_input()</code></a>, or <a href="https://docs.python.org/3/library/functions.html?highlight=raw_input#input" rel="nofollow"><code>input()</code></a> in Python 3, will block waiting for newline-terminated input on standard input; however, the Java program is not sending it anything.</p>
<p>Try writing to the Python subprocess using the stream returned by <code>getOutputStream()</code>. Here's an example:</p>
<pre><code>import java.util.*;
import java.io.*;
public class ProcessRunner {
public static void main(String[] args) {
try {
Process p = Runtime.getRuntime().exec("python test.py");
Scanner s = new Scanner(p.getInputStream());
PrintWriter toChild = new PrintWriter(p.getOutputStream());
toChild.println("1234"); // write to child's stdin
toChild.close(); // or you can use toChild.flush()
System.out.println(s.next());
}
catch(Exception e) {
System.out.println(e.getMessage());
}
}
}
</code></pre>
<hr>
<p>An alternative is to pass <code>n</code> as a command line argument. This requires modification of the Python script to expect and process the command line arguments, and to the Java code to send the argument:</p>
<pre><code>import java.util.*;
import java.io.*;
public class ProcessRunner {
public static void main(String[] args) {
try {
int n = 1234;
Process p = Runtime.getRuntime().exec("python test.py " + n);
Scanner s = new Scanner(p.getInputStream());
System.out.println(s.next());
}
catch(Exception e) {
System.out.println(e.getMessage());
}
}
}
</code></pre>
<p>And the Python script, <code>test.py</code>:</p>
<pre><code>import sys
if len(sys.argv) > 1:
print int(sys.argv[1])
</code></pre>
| 1 |
2016-09-21T09:47:49Z
|
[
"java",
"python",
"multithreading",
"process",
"runtime.exec"
] |
Runtime.exec() in java hangs because it is waiting for input from System.in
| 39,612,862 |
<p>I have the following short python program "test.py"</p>
<pre><code>n = int(raw_input())
print n
</code></pre>
<p>I'm executing the above program from following java program "ProcessRunner.java"</p>
<pre><code>import java.util.*;
import java.io.*;
public class ProcessRunner {
public static void main(String[] args) {
try {
Scanner s = new Scanner(Runtime.getRuntime().exec("python test.py").getInputStream()).useDelimiter("\\A");
System.out.println(s.next());
}
catch(Exception e) {
System.out.println(e.getMessage());
}
}
}
</code></pre>
<p>Upon running the command,</p>
<pre><code>java ProcessRunner
</code></pre>
<p>I'm not able to pass a value for 'n' in the proper format to the Python program, and the Java run also hangs. What is the proper way to handle this situation and pass a value for 'n' dynamically to the Python program from inside the Java program?</p>
| 2 |
2016-09-21T09:39:33Z
| 39,613,611 |
<p>If I understand you correctly, you want your Java program to pass any output from your Python script to System.out and any input into your Java program to your Python script, right?</p>
<p>Have a look at the following program to get an idea how you could do this. </p>
<pre><code>import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class ProcessRunner {
public static void main(String[] args) {
try {
final Process process = Runtime.getRuntime().exec("/bin/sh");
try (
final InputStream inputStream = process.getInputStream();
final InputStream errorStream = process.getErrorStream();
final OutputStream outputStream = process.getOutputStream()
) {
while (process.isAlive()) {
forwardOneByte(inputStream, System.out);
forwardOneByte(errorStream, System.err);
forwardOneByte(System.in, outputStream);
}
}
} catch (IOException e) {
throw new RuntimeException(e);
}
}
private static void forwardOneByte(final InputStream inputStream,
final OutputStream outputStream)
throws IOException {
if(inputStream.available() <= 0) {
return;
}
final int b = inputStream.read();
if(b != -1) {
outputStream.write(b);
outputStream.flush();
}
}
}
</code></pre>
<p><strong>Note</strong> This code is just a concept demo. It busy-waits, so it will eat up your CPU, and it will not cope with higher throughput.</p>
| 0 |
2016-09-21T10:12:28Z
|
[
"java",
"python",
"multithreading",
"process",
"runtime.exec"
] |
Collapsable bootstrap multilevel listview populated from database for beginners
| 39,613,065 |
<p>As a beginning coder, I could use some help understanding what's involved in building a listview populated from a database with rows that can expand and collapse, providing more information, with embedded buttons for controls? I'd like to use mostly bootstrap and python or PHP and replicate this: <a href="http://preview.iatistandard.org/index.php?url=http%3A//iati.oxfam.org.uk/xml/oxfamgb-lb.xml" rel="nofollow">http://preview.iatistandard.org/index.php?url=http%3A//iati.oxfam.org.uk/xml/oxfamgb-lb.xml</a></p>
<p>Answers to similar questions are too high-level. Can someone please map out the example's basic components and functions at least and where the database scripts go and what they do? Thanks!</p>
| -1 |
2016-09-21T09:48:27Z
| 39,616,051 |
<p>Here is a demo of what you asked for:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>$('.collapse').on('shown.bs.collapse', function (e) {
//Get the id of the collapsible which is opened
var id = $(this).attr("id");
//Set the minus icon as this collapsible is now opened
$('a[href="#'+ id +'"]').find("span.glyphicon-plus").removeClass("glyphicon-plus").addClass("glyphicon-minus");
//You know now which collapsible is clicked, so get the data for this id which is in attribute data-collapisbleId
var collapsibleDataId = $(this).data("collapisbleId");
//Now we have the id of the collapsible which we can use to get the details for this id, and then we will insert that into the detail section
}).on('hide.bs.collapse', function (e) {
var id = $(this).attr("id");
$('a[href="#'+ id +'"]').find("span.glyphicon-minus").removeClass("glyphicon-minus").addClass("glyphicon-plus");
var collapsibleDataId = $(this).data("collapisbleId");
$.ajax({
//Make a request to get the details using ajax and on success insert the data into the div with id = $(this).attr("id"); which is the current div
});
});</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code> <meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<!-- Here I have just added hardwired collapsibles; you would generate these in a loop.
For example, given the list of collapsible headings, you loop to build each collapsible.
I have used ASP.NET MVC here as I am not good with PHP or Python, but the logic stays the same. Think of the divs below as created by this loop
@(for var i= 0; i> Model.Count(); i++){
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link@(i+1)" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 1
</a>
<div class="collapse" id="link@(i+1)" data-collapisbleId="@Model.Id">
//Details section
</div>
</div>
} -->
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link1" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 1
</a>
<div class="collapse" id="link1" data-collapisbleId="1">
<!--Details section-->
Link 1
</div>
</div>
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link2" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 2
</a>
<div class="collapse" id="link2" data-collapisbleId="2">
<!--Details section-->
<ul class="list-group">
<li class="list-group-item">Cras justo odio</li>
<li class="list-group-item">Dapibus ac facilisis in</li>
<li class="list-group-item">Morbi leo risus</li>
<li class="list-group-item">Porta ac consectetur ac</li>
<li class="list-group-item">Vestibulum at eros</li>
</ul>
</div>
</div></code></pre>
</div>
</div>
</p>
<pre><code>**Complete DEMO**
<!DOCTYPE html>
<html lang="en">
<head>
<title>Bootstrap Example</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script src="http://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<script>
$(document).ready(function(){
$('.collapse').on('shown.bs.collapse', function (e) {
//Get the id of the collapsible which is opened
var id = $(this).attr("id");
//Set the minus icon as this collapsible is now opened
$('a[href="#'+ id +'"]').find("span.glyphicon-plus").removeClass("glyphicon-plus").addClass("glyphicon-minus");
//You know now which collapsible is clicked, so get the data for this id which is in attribute data-collapisbleId
var collapsibleDataId = $(this).data("collapisbleId");
//Now we have the id of the collapsible which we can use to get the details for this id, and then we will insert that into the detail section
}).on('hide.bs.collapse', function (e) {
var id = $(this).attr("id");
$('a[href="#'+ id +'"]').find("span.glyphicon-minus").removeClass("glyphicon-minus").addClass("glyphicon-plus");
var collapsibleDataId = $(this).data("collapisbleId");
$.ajax({
//Make a request to get the details using ajax and on success insert the data into the div with id = $(this).attr("id"); which is the current div
});
});
});
</script>
</head>
<body>
<!-- Here I have just added hardwired collapsibles; you would generate these in a loop.
For example, given the list of collapsible headings, you loop to build each collapsible.
I have used ASP.NET MVC here as I am not good with PHP or Python, but the logic stays the same. Think of the divs below as created by this loop
@(for var i= 0; i> Model.Count(); i++){
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link@(i+1)" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 1
</a>
<div class="collapse" id="link@(i+1)" data-collapisbleId="@Model.Id">
//Details section
</div>
</div>
} -->
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link1" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 1
</a>
<div class="collapse" id="link1" data-collapisbleId="1">
<!--Details section-->
Link 1
</div>
</div>
<div>
<a class="btn btn-link" role="button" data-toggle="collapse" href="#link2" aria-expanded="false" aria-controls="collapseExample">
<span class="glyphicon glyphicon-plus"></span> &nbsp;Link 2
</a>
<div class="collapse" id="link2" data-collapisbleId="2">
<!--Details section-->
<ul class="list-group">
<li class="list-group-item">Cras justo odio</li>
<li class="list-group-item">Dapibus ac facilisis in</li>
<li class="list-group-item">Morbi leo risus</li>
<li class="list-group-item">Porta ac consectetur ac</li>
<li class="list-group-item">Vestibulum at eros</li>
</ul>
</div>
</div>
</body>
</html>
</code></pre>
| 0 |
2016-09-21T12:02:35Z
|
[
"php",
"python",
"database",
"twitter-bootstrap",
"listview"
] |
How to deal with the parameters with script
| 39,613,113 |
<p>I print the args from: </p>
<pre><code>if __name__ == '__main__':
print('The sys.argv is :\n',sys.argv)
</code></pre>
<p>and find that all the parameters are converted into strings, and I don't know how to deal with that.</p>
<p>Command <code>python3 MAX_HEAPIFY.py [1,2,3,4,5,6,7,8,9,10] 2</code></p>
<p>output: <code>['MAX_HEAPIFY.py', '[1,2,3,4,5,6,7,8,9,10]', '2']</code></p>
| -1 |
2016-09-21T09:50:07Z
| 39,613,414 |
<p>You need to understand that Python passes all command-line input to your program as strings. It is up to you to process the args in your code. </p>
<p>Ideally you would use <code>argparse</code> to make this more robust, but for this example you can run a command like this:</p>
<pre><code>python3 MAX_HEAPIFY.py "1,2,3,4,5,6,7,8,9,10" 2
</code></pre>
<p>Then you will use <code>my_list = sys.argv[1].split(",")</code> to process your first arg as a list.</p>
<p>Obviously you need to deal with typecasting and things specific to your logic, but I hope this gives you a starting point. </p>
<p>Edit:</p>
<p>If you need to convert <code>my_list</code> to a list of ints, you can use a list comprehension like this:</p>
<pre><code>my_int_list = [int(i) for i in my_list if i.isdigit()]
</code></pre>
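<p>Putting those pieces together, the top of a hypothetical <code>MAX_HEAPIFY.py</code> could look like this (a sketch assuming the list is passed comma-separated, without brackets):</p>

```python
import sys

def parse_args(argv):
    # argv entries are always strings; convert them explicitly
    numbers = [int(i) for i in argv[1].split(',')]
    index = int(argv[2])
    return numbers, index

if __name__ == '__main__':
    nums, idx = parse_args(sys.argv)
    print(nums, idx)
```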
| 0 |
2016-09-21T10:03:17Z
|
[
"python",
"python-3.x"
] |
Pandas: concatenate dataframes
| 39,613,228 |
<p>I have 2 dataframes</p>
<pre><code>category count_sec_target
3D-шутеры 0.09375
Сериалы 201.90625
GPS и ГЛОНАСС 0.015625
Hi-Tech 187.1484375
Абитуриентам 0.8125
Авиакомпании 8.40625
</code></pre>
<p>and </p>
<pre><code>category count_sec_random
3D-шутеры 0.369565217
Hi-Tech 70.42391304
АСУ ТП, промэлектроника 0.934782609
Абитуриентам 1.413043478
Авиакомпании 14.93478261
Авто 480.3369565
</code></pre>
<p>I need to concatenate them and get</p>
<pre><code>category count_sec_target count_sec_random
3D-шутеры 0.09375 0.369565217
Сериалы 201.90625 0
GPS и ГЛОНАСС 0.015625 0
Hi-Tech 187.1484375 70.42391304
Абитуриентам 0.8125 1.413043478
Авиакомпании 8.40625 14.93478261
АСУ ТП, промэлектроника 0 0.934782609
Авто 0 480.3369565
</code></pre>
<p>Next I want to divide the values in the columns: <code>(count_sec_target / count_sec_random) * 100%</code>.
But when I try to concatenate the dataframes:</p>
<pre><code>frames = [df1, df2]
df = pd.concat(frames)
</code></pre>
<p>I get</p>
<pre><code>  category  count_sec_random  count_sec_target
0 3D-шутеры 0.369565 NaN
1 Hi-Tech 70.423913 NaN
2 АСУ ТП, промэлектроника 0.934783 NaN
3 Абитуриентам 1.413043 NaN
4 Авиакомпании 14.934783 NaN
</code></pre>
<p>I also tried <code>df = df1.append(df2)</code>,
but I get the wrong result.
How can I fix that? </p>
| 2 |
2016-09-21T09:54:35Z
| 39,613,343 |
<pre><code>df3 = pd.concat([d.set_index('category') for d in frames], axis=1).fillna(0)
df3['ratio'] = df3.count_sec_random / df3.count_sec_target
df3
</code></pre>
<p><a href="http://i.stack.imgur.com/Xp8Dg.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xp8Dg.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>Setup Reference</em></strong> </p>
<pre><code>import pandas as pd
from StringIO import StringIO  # Python 2; on Python 3 use: from io import StringIO
t1 = """category;count_sec_target
3D-шутеры;0.09375
Сериалы;201.90625
GPS и ГЛОНАСС;0.015625
Hi-Tech;187.1484375
Абитуриентам;0.8125
Авиакомпании;8.40625"""
t2 = """category;count_sec_random
3D-шутеры;0.369565217
Hi-Tech;70.42391304
АСУ ТП, промэлектроника;0.934782609
Абитуриентам;1.413043478
Авиакомпании;14.93478261
Авто;480.3369565"""
df1 = pd.read_csv(StringIO(t1), sep=';')
df2 = pd.read_csv(StringIO(t2), sep=';')
frames = [df1, df2]
</code></pre>
| 5 |
2016-09-21T09:59:50Z
|
[
"python",
"pandas"
] |
Pandas: concatenate dataframes
| 39,613,228 |
<p>I have 2 dataframes</p>
<pre><code>category count_sec_target
3D-шутеры 0.09375
Сериалы 201.90625
GPS и ГЛОНАСС 0.015625
Hi-Tech 187.1484375
Абитуриентам 0.8125
Авиакомпании 8.40625
</code></pre>
<p>and </p>
<pre><code>category count_sec_random
3D-шутеры 0.369565217
Hi-Tech 70.42391304
АСУ ТП, промэлектроника 0.934782609
Абитуриентам 1.413043478
Авиакомпании 14.93478261
Авто 480.3369565
</code></pre>
<p>I need to concatenate them and get</p>
<pre><code>category count_sec_target count_sec_random
3D-шутеры 0.09375 0.369565217
Сериалы 201.90625 0
GPS и ГЛОНАСС 0.015625 0
Hi-Tech 187.1484375 70.42391304
Абитуриентам 0.8125 1.413043478
Авиакомпании 8.40625 14.93478261
АСУ ТП, промэлектроника 0 0.934782609
Авто 0 480.3369565
</code></pre>
<p>Next I want to divide the values in the columns: <code>(count_sec_target / count_sec_random) * 100%</code>.
But when I try to concatenate the dataframes:</p>
<pre><code>frames = [df1, df2]
df = pd.concat(frames)
</code></pre>
<p>I get</p>
<pre><code>  category  count_sec_random  count_sec_target
0 3D-шутеры 0.369565 NaN
1 Hi-Tech 70.423913 NaN
2 АСУ ТП, промэлектроника 0.934783 NaN
3 Абитуриентам 1.413043 NaN
4 Авиакомпании 14.934783 NaN
</code></pre>
<p>I also tried <code>df = df1.append(df2)</code>,
but I get the wrong result.
How can I fix that? </p>
| 2 |
2016-09-21T09:54:35Z
| 39,613,460 |
<p>Merge should be appropriate here:</p>
<pre><code>df_1.merge(df_2, on='category', how='outer').fillna(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/4pk3L.png" rel="nofollow"><img src="http://i.stack.imgur.com/4pk3L.png" alt="Image"></a></p>
<hr>
<p>To get the division output, simply do:</p>
<pre><code>df['division'] = df['count_sec_target'].div(df['count_sec_random']) * 100
</code></pre>
<p>where: <code>df</code> is the merged DF</p>
| 4 |
2016-09-21T10:05:23Z
|
[
"python",
"pandas"
] |
How to get the number of non-shared insertions and gaps in a sequence in Python?
| 39,613,421 |
<p>I am trying to obtain the number of insertions and gaps contained in a series of sequences with relation to a reference with which they were aligned; therefore, <strong>all sequences are now of the same length.</strong></p>
<p>For instance</p>
<pre><code>>reference
AGCAGGCAAGGCAA--GGAA-CCA
>sequence1
AAAA---AAAGCAATTGGAA-CCA
>sequence2
AGCAGGCAAAACAA--GGAAACCA
</code></pre>
<p>In this example, sequence1 has two insertions (two T) and three gaps. The last gap should not be counted since it appears both in the reference and sequence1. Sequence2 has one insertion (an A before the last triplet) and no gaps. (Again, these gaps are shared with the reference and should not enter the count.) There are also 3 polymorphisms in sequence 1 and 2 in sequence 2.</p>
<p>My current script is able to give an estimate of the differences but not the count of "relevant gaps and insertions" as described above. For example</p>
<pre><code>from Bio import SeqIO

records = list(SeqIO.parse("sequences.fasta", "fasta"))
reference = records[0] #reference is the first sequence in the file
del records[0]
for record in records:
gaps = record.seq.count("-") - reference.seq.count("-")
basesinreference = reference.seq.count("A") + reference.seq.count("C") + reference.seq.count("G") + reference.seq.count("T")
basesinsequence = record.seq.count("A") + record.seq.count("C") + record.seq.count("G") + record.seq.count("T")
print(record.id)
print(gaps)
print(basesinsequence - basesinreference)
#Gives
sequence1
1 #Which means sequence 1 has one more Gap than the reference
-1 #Which means sequence 1 has one base less than the reference
sequence2
-1 #Which means sequence 2 has one Gap less than the reference
1 #Which means sequence 2 has one more base than the reference
</code></pre>
<p>I am kind of a Python newbie and still learning the tools of this language. Is there a way to achieve this? I am thinking about splitting the sequences and iteratively comparing one position at a time to count the differences, but I am not sure if that is possible in Python (not to mention that it would be horribly slow.)</p>
| 2 |
2016-09-21T10:03:51Z
| 39,614,370 |
<p>This is a job for the <code>zip</code> function. We iterate over the reference and a test sequence in parallel, seeing if either one contains a <code>-</code> at the current position. We use the result of that test to update counts of insertions, deletions and unchanged in a dictionary.</p>
<pre><code>def kind(u, v):
if u == '-':
if v != '-':
return 'I' # insertion
else:
if v == '-':
return 'D' # deletion
return 'U' # unchanged
reference = 'AGCAGGCAAGGCAA--GGAA-CCA'
sequences = [
'AGCA---AAGGCAATTGGAA-CCA',
'AGCAGGCAAGGCAA--GGAAACCA',
]
print('Reference')
print(reference)
for seq in sequences:
print(seq)
counts = dict.fromkeys('DIU', 0)
for u, v in zip(reference, seq):
counts[kind(u, v)] += 1
print(counts)
</code></pre>
<p><strong>output</strong></p>
<pre><code>Reference
AGCAGGCAAGGCAA--GGAA-CCA
AGCA---AAGGCAATTGGAA-CCA
{'I': 2, 'D': 3, 'U': 19}
AGCAGGCAAGGCAA--GGAAACCA
{'I': 1, 'D': 0, 'U': 23}
</code></pre>
<hr>
<p>Here's an updated version that also checks for polymorphism.</p>
<pre><code>def kind(u, v):
if u == '-':
if v != '-':
return 'I' # insertion
else:
if v == '-':
return 'D' # deletion
elif v != u:
return 'P' # polymorphism
return 'U' # unchanged
reference = 'AGCAGGCAAGGCAA--GGAA-CCA'
sequences = [
'AAAA---AAAGCAATTGGAA-CCA',
'AGCAGGCAAAACAA--GGAAACCA',
]
print('Reference')
print(reference)
for seq in sequences:
print(seq)
counts = dict.fromkeys('DIPU', 0)
for u, v in zip(reference, seq):
counts[kind(u, v)] += 1
print(counts)
</code></pre>
<p><strong>output</strong></p>
<pre><code>Reference
AGCAGGCAAGGCAA--GGAA-CCA
AAAA---AAAGCAATTGGAA-CCA
{'D': 3, 'P': 3, 'I': 2, 'U': 16}
AGCAGGCAAAACAA--GGAAACCA
{'D': 0, 'P': 2, 'I': 1, 'U': 21}
</code></pre>
| 1 |
2016-09-21T10:45:33Z
|
[
"python",
"bioinformatics",
"biopython",
"fasta"
] |
How to get the number of non-shared insertions and gaps in a sequence in Python?
| 39,613,421 |
<p>I am trying to obtain the number of insertions and gaps contained in a series of sequences with relation to a reference with which they were aligned; therefore, <strong>all sequences are now of the same length.</strong></p>
<p>For instance</p>
<pre><code>>reference
AGCAGGCAAGGCAA--GGAA-CCA
>sequence1
AAAA---AAAGCAATTGGAA-CCA
>sequence2
AGCAGGCAAAACAA--GGAAACCA
</code></pre>
<p>In this example, sequence1 has two insertions (two T) and three gaps. The last gap should not be counted since it appears both in the reference and sequence1. Sequence2 has one insertion (an A before the last triplet) and no gaps. (Again, these gaps are shared with the reference and should not enter the count.) There are also 3 polymorphisms in sequence 1 and 2 in sequence 2.</p>
<p>My current script is able to give an estimate of the differences but not the count of "relevant gaps and insertions" as described above. For example</p>
<pre><code>from Bio import SeqIO

records = list(SeqIO.parse("sequences.fasta", "fasta"))
reference = records[0] #reference is the first sequence in the file
del records[0]
for record in records:
gaps = record.seq.count("-") - reference.seq.count("-")
basesinreference = reference.seq.count("A") + reference.seq.count("C") + reference.seq.count("G") + reference.seq.count("T")
basesinsequence = record.seq.count("A") + record.seq.count("C") + record.seq.count("G") + record.seq.count("T")
print(record.id)
print(gaps)
print(basesinsequence - basesinreference)
#Gives
sequence1
1 #Which means sequence 1 has one more Gap than the reference
-1 #Which means sequence 1 has one base less than the reference
sequence2
-1 #Which means sequence 2 has one Gap less than the reference
1 #Which means sequence 2 has one more base than the reference
</code></pre>
<p>I am kind of a Python newbie and still learning the tools of this language. Is there a way to achieve this? I am thinking about splitting the sequences and iteratively comparing one position at a time to count the differences, but I am not sure if that is possible in Python (not to mention that it would be horribly slow.)</p>
| 2 |
2016-09-21T10:03:51Z
| 39,627,265 |
<p>Using Biopython and numpy:</p>
<pre><code>from Bio import AlignIO
from collections import Counter
import numpy as np
alignment = AlignIO.read("alignment.fasta", "fasta")
events = []
for i in range(alignment.get_alignment_length()):
this_column = alignment[:, i]
# Mark insertions, polymorphism and deletions following PM 2Ring notation
events.append(["U" if b == this_column[0] else
"I" if this_column[0] == "-" else
"P" if b != "-" else
"D" for b in this_column])
# Apply a Counter over the columns (axis 0) of the array
print(np.apply_along_axis(Counter, 0, np.array(events)))
</code></pre>
<p>This should output an array of Counts in the same order as the alignment:</p>
<pre><code>[[Counter({'U': 23})
Counter({'U': 15, 'P': 3, 'D': 3, 'I': 2})
Counter({'U': 21, 'P': 2, 'I': 1})]]
</code></pre>
| 1 |
2016-09-21T22:04:00Z
|
[
"python",
"bioinformatics",
"biopython",
"fasta"
] |
How to handle SQLAlchemy Connections in ProcessPool?
| 39,613,476 |
<p>I have a reactor that fetches messages from a RabbitMQ broker and triggers worker methods to process these messages in a process pool, something like this:</p>
<p><a href="http://i.stack.imgur.com/eKbAK.png"><img src="http://i.stack.imgur.com/eKbAK.png" alt="Reactor"></a></p>
<p>This is implemented using python <code>asyncio</code>, <code>loop.run_in_executor()</code> and <code>concurrent.futures.ProcessPoolExecutor</code>.</p>
<p>Now I want to access the database in the worker methods using SQLAlchemy. Mostly the processing will be very straightforward and quick CRUD operations.</p>
<p>The reactor will process 10-50 messages per second in the beginning, so it is not acceptable to open a new database connection for every request. Rather I would like to maintain one persistent connection per process.</p>
<p>My questions are: How can I do this? Can I just store them in a global variable? Will the SQA connection pool handle this for me? How to clean up when the reactor stops?</p>
<p><strong>[Update]</strong></p>
<ul>
<li>The database is MySQL with InnoDB.</li>
</ul>
<p><strong>Why choosing this pattern with a process pool?</strong></p>
<p>The current implementation uses a different pattern where each consumer runs in its own thread. Somehow this does not work very well. There are already about 200 consumers each running in their own thread, and the system is growing quickly. To scale better, the idea was to separate concerns and to consume messages in an I/O loop and delegate the processing to a pool. Of course, the performance of the whole system is mainly I/O bound. However, CPU is an issue when processing large result sets.</p>
<p>The other reason was "ease of use." While the connection handling and consumption of messages is implemented asynchronously, the code in the worker can be synchronous and simple. </p>
<p>Soon it became evident that accessing remote systems through persistent network connections from within the worker is an issue. This is what the CommunicationChannels are for: Inside the worker, I can grant requests to the message bus through these channels.</p>
<p>One of my current ideas is to handle DB access in a similar way: Pass statements through a queue to the event loop where they are sent to the DB. However, I have no idea how to do this with SQLAlchemy.
Where would be the entry point?
Objects need to be <code>pickled</code> when they are passed through a queue. How do I get such an object from an SQA query?
The communication with the database has to work asynchronously in order not to block the event loop. Can I use e.g. aiomysql as a database driver for SQA? </p>
| 20 |
2016-09-21T10:06:08Z
| 39,842,259 |
<p>@roman: Nice challenge you have there.</p>
<p>I have been in a similar scenario before, so here are my <em>2 cents</em>: unless this consumer only <em>"reads"</em> and <em>"writes"</em> the message, without doing any real processing of it, you could <em>re-design</em> this consumer as a consumer/producer that will <em>consume</em> the message, process it, and then put the result in another queue. That queue (of processed messages, say) could be read by 1..N non-pooled asynchronous processes that would each hold their DB connection open for their entire life-cycle.</p>
<p>I can extend my answer, but I don't know if this approach fits your needs; if so, I can give you more detail about the extended design.</p>
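That split can be sketched with nothing but the stdlib. Everything below is illustrative, not part of the original design: the queue layout, the uppercase "processing", and the writer that merely collects results instead of talking to a database.

```python
from multiprocessing import Process, Queue

def worker(in_q, out_q):
    """Consume raw messages and do the (CPU-bound) processing."""
    for msg in iter(in_q.get, None):   # None acts as the shutdown sentinel
        out_q.put(msg.upper())         # stand-in for real processing
    out_q.put(None)                    # pass the sentinel on to the writer

def writer(out_q, done_q):
    """One long-lived, non-pooled process: in the real design this is
    where the single DB connection would live for its entire life-cycle.
    Here it just collects results instead of writing them to a database."""
    results = [r for r in iter(out_q.get, None)]
    done_q.put(results)

def run_pipeline(messages):
    in_q, out_q, done_q = Queue(), Queue(), Queue()
    procs = [Process(target=worker, args=(in_q, out_q)),
             Process(target=writer, args=(out_q, done_q))]
    for p in procs:
        p.start()
    for m in messages:
        in_q.put(m)
    in_q.put(None)                     # ask the pipeline to shut down
    results = done_q.get()
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run_pipeline(["ping", "pong"]))   # ['PING', 'PONG']
```

With a single worker the queues stay FIFO, so result order matches input order; with N workers you would key results by a message id instead.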
| 0 |
2016-10-04T00:01:37Z
|
[
"python",
"sqlalchemy",
"rabbitmq",
"python-multiprocessing",
"python-asyncio"
] |
How to handle SQLAlchemy Connections in ProcessPool?
| 39,613,476 |
<p>I have a reactor that fetches messages from a RabbitMQ broker and triggers worker methods to process these messages in a process pool, something like this:</p>
<p><a href="http://i.stack.imgur.com/eKbAK.png"><img src="http://i.stack.imgur.com/eKbAK.png" alt="Reactor"></a></p>
<p>This is implemented using python <code>asyncio</code>, <code>loop.run_in_executor()</code> and <code>concurrent.futures.ProcessPoolExecutor</code>.</p>
<p>Now I want to access the database in the worker methods using SQLAlchemy. Mostly the processing will be very straightforward and quick CRUD operations.</p>
<p>The reactor will process 10-50 messages per second in the beginning, so it is not acceptable to open a new database connection for every request. Rather I would like to maintain one persistent connection per process.</p>
<p>My questions are: How can I do this? Can I just store them in a global variable? Will the SQA connection pool handle this for me? How to clean up when the reactor stops?</p>
<p><strong>[Update]</strong></p>
<ul>
<li>The database is MySQL with InnoDB.</li>
</ul>
<p><strong>Why choosing this pattern with a process pool?</strong></p>
<p>The current implementation uses a different pattern where each consumer runs in its own thread. Somehow this does not work very well. There are already about 200 consumers each running in their own thread, and the system is growing quickly. To scale better, the idea was to separate concerns and to consume messages in an I/O loop and delegate the processing to a pool. Of course, the performance of the whole system is mainly I/O bound. However, CPU is an issue when processing large result sets.</p>
<p>The other reason was "ease of use." While the connection handling and consumption of messages is implemented asynchronously, the code in the worker can be synchronous and simple. </p>
<p>Soon it became evident that accessing remote systems through persistent network connections from within the worker is an issue. This is what the CommunicationChannels are for: Inside the worker, I can grant requests to the message bus through these channels.</p>
<p>One of my current ideas is to handle DB access in a similar way: Pass statements through a queue to the event loop where they are sent to the DB. However, I have no idea how to do this with SQLAlchemy.
Where would be the entry point?
Objects need to be <code>pickled</code> when they are passed through a queue. How do I get such an object from an SQA query?
The communication with the database has to work asynchronously in order not to block the event loop. Can I use e.g. aiomysql as a database driver for SQA? </p>
| 20 |
2016-09-21T10:06:08Z
| 39,930,596 |
<p>An approach that has served me really well is to use a webserver to handle and scale the process pool. flask-sqlalchemy, even in its default state, will keep a connection pool and not close each connection on each request/response cycle. </p>
<p>The asyncio executor can just call URL endpoints to execute your functions. The added benefit is that because all processes doing the work are behind a URL, you can trivially scale your worker pool across multiple machines, adding more processes via gunicorn or one of the other many methods to scale a simple wsgi server. Plus you get all the fault-tolerant goodness. </p>
<p>The downside is that you might be passing more information across the network. However, as you say, the problem is CPU bound and you will probably be passing much more data to and from the database. </p>
| 0 |
2016-10-08T09:04:20Z
|
[
"python",
"sqlalchemy",
"rabbitmq",
"python-multiprocessing",
"python-asyncio"
] |
How to handle SQLAlchemy Connections in ProcessPool?
| 39,613,476 |
<p>I have a reactor that fetches messages from a RabbitMQ broker and triggers worker methods to process these messages in a process pool, something like this:</p>
<p><a href="http://i.stack.imgur.com/eKbAK.png"><img src="http://i.stack.imgur.com/eKbAK.png" alt="Reactor"></a></p>
<p>This is implemented using python <code>asyncio</code>, <code>loop.run_in_executor()</code> and <code>concurrent.futures.ProcessPoolExecutor</code>.</p>
<p>Now I want to access the database in the worker methods using SQLAlchemy. Mostly the processing will be very straightforward and quick CRUD operations.</p>
<p>The reactor will process 10-50 messages per second in the beginning, so it is not acceptable to open a new database connection for every request. Rather I would like to maintain one persistent connection per process.</p>
<p>My questions are: How can I do this? Can I just store them in a global variable? Will the SQA connection pool handle this for me? How to clean up when the reactor stops?</p>
<p><strong>[Update]</strong></p>
<ul>
<li>The database is MySQL with InnoDB.</li>
</ul>
<p><strong>Why choosing this pattern with a process pool?</strong></p>
<p>The current implementation uses a different pattern where each consumer runs in its own thread. Somehow this does not work very well. There are already about 200 consumers each running in their own thread, and the system is growing quickly. To scale better, the idea was to separate concerns and to consume messages in an I/O loop and delegate the processing to a pool. Of course, the performance of the whole system is mainly I/O bound. However, CPU is an issue when processing large result sets.</p>
<p>The other reason was "ease of use." While the connection handling and consumption of messages is implemented asynchronously, the code in the worker can be synchronous and simple. </p>
<p>Soon it became evident that accessing remote systems through persistent network connections from within the worker is an issue. This is what the CommunicationChannels are for: Inside the worker, I can grant requests to the message bus through these channels.</p>
<p>One of my current ideas is to handle DB access in a similar way: Pass statements through a queue to the event loop where they are sent to the DB. However, I have no idea how to do this with SQLAlchemy.
Where would be the entry point?
Objects need to be <code>pickled</code> when they are passed through a queue. How do I get such an object from an SQA query?
The communication with the database has to work asynchronously in order not to block the event loop. Can I use e.g. aiomysql as a database driver for SQA? </p>
| 20 |
2016-09-21T10:06:08Z
| 40,060,154 |
<p>Your requirement of <strong>one database connection per process-pool process</strong> can be easily satisfied if some care is taken on how you instantiate the <code>session</code>, assuming you are working with the orm, in the worker processes.</p>
<p>A simple solution would be to have a global <a href="http://docs.sqlalchemy.org/en/latest/orm/contextual.html" rel="nofollow">session</a> which you reuse across requests:</p>
<pre><code># db.py
engine = create_engine("connection_uri", pool_size=1, max_overflow=0)
DBSession = scoped_session(sessionmaker(bind=engine))
</code></pre>
<p>And on the worker task:</p>
<pre><code># task.py
from db import engine, DBSession
def task():
DBSession.begin() # each task will get its own transaction over the global connection
...
DBSession.query(...)
...
DBSession.close() # cleanup on task end
</code></pre>
<p>Arguments <code>pool_size</code> and <code>max_overflow</code> <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#connection-pool-configuration" rel="nofollow">customize</a> the default <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.QueuePool" rel="nofollow">QueuePool</a> used by <code>create_engine</code>. <code>pool_size</code> will make sure your process only keeps 1 connection alive per process in the process pool.</p>
<p>If you want it to reconnect you can use <code>DBSession.remove()</code> which will remove the session from the registry and will make it reconnect at the next DBSession usage. You can also use the <code>recycle</code> argument of <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.Pool" rel="nofollow">Pool</a> to make the connection reconnect after the specified amount of time.</p>
<p>During development/debugging you can use <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.AssertionPool" rel="nofollow">AssertionPool</a> which will raise an exception if more than one connection is checked-out from the pool, see <a href="http://docs.sqlalchemy.org/en/latest/core/pooling.html#switching-pool-implementations" rel="nofollow">switching pool implementations</a> on how to do that.</p>
| 3 |
2016-10-15T14:20:23Z
|
[
"python",
"sqlalchemy",
"rabbitmq",
"python-multiprocessing",
"python-asyncio"
] |
Is there a way I can prevent users from entering numbers with input()
| 39,613,496 |
<p>I would like to prevent the user from entering numbers to save troubles down the line in my program. I know how to use try, except with int(input()) to prevent strings being entered when integers are required but I was wondering if a similar thing was possible with str(input()).</p>
<p>For example, if the user was asked for their name and they entered "1994", they would receive an error message for entering an integer.</p>
| 0 |
2016-09-21T10:06:55Z
| 39,613,634 |
<p>Use a <code>try-except</code> with an <code>else</code> block in which you'll raise a <code>ValueError</code> if an exception <em>didn't</em> occur during conversion to an <code>int</code> (which means the input <em>is</em> an <code>int</code>):</p>
<pre><code>v = input("> ")
try:
_ = int(v)
except:
pass
else:
raise ValueError("input supplied should be of type 'str'")
</code></pre>
<p>This will catch any <em>numbers</em> entered by raising the exception in the <code>else</code> block:</p>
<pre><code>> 1992
ValueErrorTraceback (most recent call last)
<ipython-input-35-180b61d98820> in <module>()
5 pass
6 else:
----> 7 raise ValueError("input supplied should be of type 'str'")
ValueError: input supplied should be of type 'str'
</code></pre>
<p>And allow strings by <code>pass</code>ing in the <code>except</code>:</p>
<pre><code>> jim
v
Out[37]: 'jim'
</code></pre>
<p>Alternatively, you could also do this with <code>any</code> and <code>isdigit</code>:</p>
<pre><code>v = input("> ")
if any(s.isdigit() for s in v):
raise ValueError("input supplied should be of type 'str'")
</code></pre>
<p>This checks to see if <code>any</code> characters are digits and if so raises the error. </p>
<hr>
<p>You could prevent floats too, but this starts to get ugly:</p>
<pre><code>v = input("> ")
for f in [int, float]:
try:
_ = f(v)
except:
pass
else:
raise ValueError("Numbers not allowed")
</code></pre>
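If you would rather re-prompt than raise, the same digit check drops into a small loop. This is only a sketch; the injectable <code>reader</code> argument is made up so the function can be exercised without a console:

```python
def ask_name(prompt="> ", reader=input):
    """Keep asking until the entered value contains no digits at all."""
    while True:
        v = reader(prompt)
        if not any(ch.isdigit() for ch in v):
            return v
        print("Numbers are not allowed, try again.")
```

Called as <code>ask_name()</code>, it keeps prompting on "1994" and returns as soon as something like "jim" is entered.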
| 2 |
2016-09-21T10:13:13Z
|
[
"python",
"string",
"python-3.x",
"input"
] |
External tools stdout not shown during execution
| 39,613,514 |
<p>I am calling python scripts from android studio as external tools.</p>
<p>Only after the script has exited, its stderr/stdout is displayed in the integrated Android Studio "Run" tool window. I want to be able to see the output during execution. (How) is that possible?</p>
<p><a href="http://i.stack.imgur.com/Gw1xn.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gw1xn.png" alt="edit external tools"></a></p>
| 0 |
2016-09-21T10:07:38Z
| 39,618,508 |
<p>Running Python in unbuffered mode (<code>-u</code>) fixed it. The first line of my Python script: </p>
<pre><code>#! /usr/bin/python3 -u
</code></pre>
<p>Thanks for the solution to Serge Baranov from Jetbrains support:</p>
<blockquote>
<p>Sep 21, 16:23 MSK</p>
<p>Could it be caused by the buffered output? See
<a href="http://stackoverflow.com/a/230780/104891">http://stackoverflow.com/a/230780/104891</a>.</p>
</blockquote>
| 0 |
2016-09-21T13:49:58Z
|
[
"python",
"android-studio",
"intellij-idea"
] |
Installing MySQL-python on windows 7
| 39,613,528 |
<p>Hey, I have tried installing MySQL-Python for about 5 hours now but keep getting an error. At first it was "Unable to find vcvarsall.bat" and something about me needing to have Visual C++ 2010. So after looking around I found a "solution" for my problem ... only to receive a new error when I pip install MySQL-Python.</p>
<p>I'm using Python 3.4 with PyCharm. The install is with pip on Windows 7.</p>
<p>This is what I get when I pip install </p>
<pre><code>> Collecting MySQL-python
Using cached MySQL-python-1.2.5.zip
Building wheels for collected packages: MySQL-python
Running setup.py bdist_wheel for MySQL-python: started
Running setup.py bdist_wheel for MySQL-python: finished with status 'error'
Complete output from command C:\Python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\yuval\\AppData\\Local\\Temp\\pycharm-packaging0.tmp\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d C:\Users\yuval\AppData\Local\Temp\tmp_8v3yxijpip-wheel- --python-tag cp34:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-3.4
copying _mysql_exceptions.py -> build\lib.win32-3.4
creating build\lib.win32-3.4\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\converters.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\connections.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\release.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\times.py -> build\lib.win32-3.4\MySQLdb
creating build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\CR.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\ER.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\REFRESH.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.4\MySQLdb\constants
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building '_mysql' extension
creating build\temp.win32-3.4
creating build\temp.win32-3.4\Release
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.0.2\include" -IC:\Python34\include -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.obj /Zl
_mysql.c
_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
----------------------------------------
Running setup.py clean for MySQL-python
Failed to build MySQL-python
Installing collected packages: MySQL-python
Running setup.py install for MySQL-python: started
Running setup.py install for MySQL-python: finished with status 'error'
Complete output from command C:\Python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\yuval\\AppData\\Local\\Temp\\pycharm-packaging0.tmp\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\yuval\AppData\Local\Temp\pip-hqm_a4jr-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-3.4
copying _mysql_exceptions.py -> build\lib.win32-3.4
creating build\lib.win32-3.4\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\converters.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\connections.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\release.py -> build\lib.win32-3.4\MySQLdb
copying MySQLdb\times.py -> build\lib.win32-3.4\MySQLdb
creating build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\CR.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\ER.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\REFRESH.py -> build\lib.win32-3.4\MySQLdb\constants
copying MySQLdb\constants\CLIENT.py -> build\lib.win32-3.4\MySQLdb\constants
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
building '_mysql' extension
creating build\temp.win32-3.4
creating build\temp.win32-3.4\Release
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.0.2\include" -IC:\Python34\include -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.obj /Zl
_mysql.c
_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for MySQL-python
Command "C:\Python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\yuval\\AppData\\Local\\Temp\\pycharm-packaging0.tmp\\MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\yuval\AppData\Local\Temp\pip-hqm_a4jr-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\yuval\AppData\Local\Temp\pycharm-packaging0.tmp\MySQL-python\
</code></pre>
| 0 |
2016-09-21T10:08:01Z
| 39,613,663 |
<p>you're going to want to add Python to your Path Environment Variable in this way. Go to:</p>
<ol>
<li>My Computer</li>
<li>System Properties</li>
<li>Advanced System Settings</li>
<li>Under the "Advanced" tab click the button that says "Environment Variables"</li>
<li>Then under System Variables you are going to want to add / change the following variables: <strong>PYTHONPATH</strong> and <strong>Path</strong>. Here is a paste of what my variables look like:</li>
</ol>
<p><strong>PYTHONPATH</strong></p>
<pre><code>C:\Python27;C:\Python27\Lib\site-packages;C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk;C:\Python27\Scripts
</code></pre>
<p><strong>Path</strong></p>
<pre><code>C:\Program Files\MySQL\MySQL Utilities 1.3.5\;C:\Python27;C:\Python27\Lib\site-packages;C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk;C:\Python27\Scripts
</code></pre>
<p>Your Path's might be different, so please adjust them, but this configuration works for me and you should be able to run MySQL after making these changes.</p>
| 0 |
2016-09-21T10:14:24Z
|
[
"python",
"mysql",
"python-3.x"
] |
N-grams - not in memory
| 39,613,555 |
<p>I have 3 million abstracts and I would like to extract 4-grams from them. I want to build a language model, so I need to find the frequencies of these 4-grams. </p>
<p>My problem is that I can't extract all these 4-grams in memory. How can I implement a system that can estimate all frequencies for these 4-grams? </p>
| 1 |
2016-09-21T10:09:33Z
| 39,613,813 |
<p>Sounds like you need to store the intermediate frequency counts on disk rather than in memory. Luckily most databases can do this, and python can talk to most databases.</p>
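For example, a rough sketch of that idea with the stdlib's <code>sqlite3</code> (the table layout and flush threshold are made up, and the upsert syntax needs SQLite >= 3.24): counts are buffered in a small <code>Counter</code> and spilled to disk whenever the buffer grows too large, so memory stays bounded no matter how many distinct 4-grams there are.

```python
import sqlite3
from collections import Counter

def count_ngrams(tokens, n=4, db_path=":memory:", flush_every=100000):
    """Stream tokens, keep a small in-memory Counter of n-gram counts,
    and flush it into SQLite whenever it grows too large."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS ngrams "
               "(gram TEXT PRIMARY KEY, freq INTEGER)")
    buf, window = Counter(), []

    def flush():
        # Merge the buffered partial counts into the on-disk totals.
        db.executemany(
            "INSERT INTO ngrams (gram, freq) VALUES (?, ?) "
            "ON CONFLICT(gram) DO UPDATE SET freq = freq + excluded.freq",
            ((" ".join(g), c) for g, c in buf.items()))
        db.commit()
        buf.clear()

    for tok in tokens:
        window.append(tok)
        if len(window) == n:
            buf[tuple(window)] += 1
            window.pop(0)
        if len(buf) >= flush_every:
            flush()          # bound memory: spill partial counts to disk
    flush()
    return db
```

Frequencies can then be queried with ordinary SQL, e.g. <code>SELECT freq FROM ngrams WHERE gram = ?</code>, and an on-disk <code>db_path</code> makes the counts persistent between runs.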
| 0 |
2016-09-21T10:20:25Z
|
[
"python",
"n-gram",
"language-model"
] |
Programmatically disown/nohup in Python
| 39,613,617 |
<p>I want to run a little script that pings a server once every X seconds/minutes and if it doesn't get the expected response, send an email to the specified email address to notify me the server is down.
Obviously, I want to run this script on some server with a nohup option, so that it stays alive when I disconnect. I'm using smtplib (<a href="https://docs.python.org/3.5/library/smtplib.html" rel="nofollow">https://docs.python.org/3.5/library/smtplib.html</a>) to send the email, all works great. But because I do not want to hardcode the password for my email account in the script, I want to provide it to the python script interactively. That is where I didn't get it to work in combination with the nohup option. (e.g. running <code>nohup pingServer.py -u <username> -p <password> &</code> seemed to kind of cancel out the idea of nohup/background. After specifying the password and then disconnecting the terminal, the process still seemed to have stopped.</p>
<p>So I wrote a little bash script around this to handle the passing of the username and password, and still being able to do it in a nohup/disown way. Here's my bash code:</p>
<pre><code>#!/bin/bash
read -p "Please enter sender email: " sender
stty -echo
read -p "Plese enter sender password: " passw; echo
stty echo
echo "INFO: Starting pingServer.py now."
echo
python pingServer.py -u $sender -p $passw &
disown
</code></pre>
<p>Which works great and does exactly what I want. However, when I did some sanity checks, I noticed that the output of <code>ps -ax | grep py</code> gave me:</p>
<p><code>27489 pts/4 S 0:00 python pingServer.py -u <username> -p <password></code>
where both my email and my password show up in plaintext in the terminal window. Seeing that the process runs on a server, this is definitely not something I want.</p>
<p>Does anyone have some ideas on how to get around this?
I enjoyed the exercise of writing this little script and liked the small challenge, hence did it in this way. But probably there are much better ways of achieving this small kind of notification service for when a server/service is down. </p>
<p>If anyone could give me some pointers on to how to get around this displaying of my password in plaintext and passing it to the python script in a secure way in combination with no hangup, that would be great (i.e. programmatically disowning/setting a nohup in python after the interactive prompt stuff has been done, although it may of course very well be that this is just plain impossible). Or perhaps it is possible to pass the password to python in some obfuscated/encrypted way?</p>
<p>Alternatively, any tips on how to achieve the same (e.g. getting an email when a server doesn't return the expected response) would also be welcome (although doing it yourself/getting it working yourself is much cooler of course :))</p>
| 1 |
2016-09-21T10:12:40Z
| 39,614,269 |
<p>You can give your password as encrypted using openssl.</p>
<p>For example :</p>
<pre><code>echo $(openssl passwd -crypt mypassword)
</code></pre>
<p>Output :</p>
<pre><code>eXzWQlhzBJZ9.
</code></pre>
<p>Similarly, you can give it as:</p>
<pre><code>nohup pingServer.py -u <username> -p $(openssl passwd -crypt mypassword) &
</code></pre>
<p>This way you can encrypt your password. I hope it helps.</p>
| 0 |
2016-09-21T10:40:45Z
|
[
"python"
] |
Why is this python script giving wrong output, using awk and comparing a value in an if block?
| 39,613,747 |
<p>I want to get a desktop notification whenever the load is more than five. For that I have written this python script, but it is giving the opposite of the expected result:</p>
<pre><code>#!/usr/bin/python
import commands
a=commands.getoutput("cat /proc/loadavg | awk '{print $1}'")
float (a)
print a
if (a > 5.00):
commands.getoutput('notify-send "Hello world!" ')
else:
print "load looks fine!!"
</code></pre>
<p>Can someone rectify this ?</p>
| -1 |
2016-09-21T10:17:49Z
| 39,613,843 |
<p>You need to assign the typecast float value back to <code>a</code>. A plain print to console can be deceiving since you will not be able to tell if the variable is a float or not. So you can use <code>type</code> to confirm</p>
<pre><code>#!/usr/bin/python
import commands
a = commands.getoutput("cat /proc/loadavg | awk '{print $1}'")
a = float(a) # assign back to a
print a, type(a)
if a > 5.00:
commands.getoutput('notify-send "Hello world!" ')
else:
print "load looks fine!!"
</code></pre>
| 4 |
2016-09-21T10:22:10Z
|
[
"python",
"linux",
"shell",
"if-statement"
] |
Why is this python script giving wrong output, using awk and comparing a value in an if block?
| 39,613,747 |
<p>I want to get a desktop notification whenever the load is more than five. For that I have written this python script, but it is giving the opposite of the expected result:</p>
<pre><code>#!/usr/bin/python
import commands
a=commands.getoutput("cat /proc/loadavg | awk '{print $1}'")
float (a)
print a
if (a > 5.00):
commands.getoutput('notify-send "Hello world!" ')
else:
print "load looks fine!!"
</code></pre>
<p>Can someone rectify this ?</p>
| -1 |
2016-09-21T10:17:49Z
| 39,652,599 |
<p>Getting the first token from a file is easy to do natively in Python. There is no reason to call <code>awk</code> (let alone <a href="http://www.iki.fi/era/unix/award.html" rel="nofollow"><code>cat</code> piped into <code>awk</code></a>) and wastefully create a subprocess chain for this simple thing.</p>
<pre><code>#!/usr/bin/env python
import commands
with open('/proc/loadavg') as load:
a = float(load.readline().split()[0])
if a > 5.00:
commands.getoutput('notify-send "Hello world!"')
else:
print("load looks fine!!")
</code></pre>
| 1 |
2016-09-23T04:22:37Z
|
[
"python",
"linux",
"shell",
"if-statement"
] |
Fast random to unique relabeling of numpy 2d regions (without loops)
| 39,613,884 |
<p>I have a large numpy 2d array (10000,10000) in which regions (clusters of cells with the same number) are randomly labeled. As a result, some separate regions were assigned to the same label. What I would like is to relabel the numpy 2d array so that all separate regions are assigned to a unique label (see example).</p>
<p>I know how to solve this problem with a loop. But as I am working with a large array with a lot of small regions, this process takes ages. Therefore, a vectorized approach would be more suitable.</p>
<p>Example:</p>
<p>-Two separate regions are labeled with 1<br>
-Two separate regions are labeled with 3</p>
<pre><code>## Input
random_arr=np.array([[1,1,3,3],[1,2,2,3],[2,2,1,1],[3,3,3,1]])
</code></pre>
<p><a href="http://i.stack.imgur.com/jJcuz.png" rel="nofollow"><img src="http://i.stack.imgur.com/jJcuz.png" alt="random"></a></p>
<pre><code>## Apply function
unique_arr=relabel_regions(random_arr)
## Output
>>> unique_arr
array([[1, 1, 3, 3],
[1, 2, 2, 3],
[2, 2, 4, 4],
[5, 5, 5, 4]])
</code></pre>
<p><a href="http://i.stack.imgur.com/KKifw.png" rel="nofollow"><img src="http://i.stack.imgur.com/KKifw.png" alt="unique"></a></p>
<p>Slow solution with loop:</p>
<pre><code>def relabel_regions(random_regions):
# Locate random regions index
random_labs=np.unique(random_regions)
unique_segments=np.zeros(np.shape(random_regions),dtype='uint64')
count=0
kernel=np.array([[0,1,0],[1,1,1],[0,1,0]],dtype='uint8')
# Assign unique number to each random labeled region
for i in range(len(random_labs)):
mask=np.zeros(np.shape(random_regions))
mask[np.where(random_regions==random_labs[i])]=1
labeled_mask, freq = ndimage.label(mask, structure=kernel)
labeled_mask=labeled_mask+count
unique_segments[np.where(labeled_mask>0+count)]=labeled_mask[np.where(labeled_mask>0+count)]
count+=freq
return unique_segments
</code></pre>
| 2 |
2016-09-21T10:24:09Z
| 39,614,709 |
<p>Let's cheat and just use some high-quality library (<a href="http://scikit-image.org/" rel="nofollow">scikit-image</a>) which offers exactly this.</p>
<p>You may learn from it's implementation or just use it!</p>
<pre><code>import numpy as np
from skimage.measure import label
random_arr = np.array([[1,1,3,3],[1,2,2,3],[2,2,1,1],[3,3,3,1]])
labels = label(random_arr, connectivity=1) # neighborhood-definition here!
print(labels)
</code></pre>
<h3>Output</h3>
<pre><code>[[1 1 2 2]
[1 3 3 2]
[3 3 4 4]
[5 5 5 4]]
</code></pre>
<p><strong>EDIT:</strong> Like mentioned by Jeon in the comments, scipy's <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.ndimage.measurements.label.html" rel="nofollow">scipy.ndimage.measurements.label</a> might also be a candidate if one does not want to use one more extra library! Thanks for the comment Jeon!</p>
| 3 |
2016-09-21T11:01:20Z
|
[
"python",
"arrays",
"numpy",
"scipy",
"vectorization"
] |
Antialiased text rendering in matplotlib pgf backend
| 39,614,011 |
<p>I have a question regarding the text rendering in the matplotlib <code>pgf</code> backend. I am using matplotlib to export .pdf files of my plots. In the section with the rcParameters I define that I want to use sans-serif and I want to use Helvetica as font. Therefore I disabled the option <code>text.usetex</code>. Here's a MWE:</p>
<pre><code>import matplotlib as mpl
import os
mpl.use('pgf')
pgf_with_latex = {
"pgf.texsystem": "pdflatex",
"text.usetex": False,
"font.family": "sans-serif",
"font.sans-serif": "Helvetica",
"pgf.preamble": [
r"\usepackage[utf8x]{inputenc}",
r"\usepackage[T1]{fontenc}",
r"\usepackage{textcomp}",
r"\usepackage{sfmath}",
]
}
mpl.rcParams.update(pgf_with_latex)
import matplotlib.pyplot as plt
def newfig():
plt.clf()
fig = plt.figure()
ax = fig.add_subplot(111)
return fig, ax
fig, ax = newfig()
ax.set_xlabel("Some x-label text")
ax.text(0.3, 0.5, r"This text is not antialiased! 0123456789", transform=ax.transAxes, fontsize=8)
plt.savefig(os.getcwd() + "/test.pdf")
</code></pre>
<p>The result is that the tick labels and the text are rendered in Computer Modern (-> LaTeX) instead of Helvetica, and they are not rendered as vector graphics and look pixelated. Now, when I enable <code>text.usetex</code>, the tick labels become vector graphics (I can zoom in without seeing pixels), but the text doesn't!</p>
<p>What do I have to do to get everything (tick labels, axis labels, legend, text etc.) to be vectorized Helvetica? Is that even possible? If not, how do I get text, legend, etc. to be vectorized in Computer Modern like the tick labels?</p>
<p>Edit: Python 3.4.4, matplotlib 1.5.2</p>
<p>here are the smooth tick labels vs. the ragged xlabel
<img src="https://i.imgur.com/kTZ5zSb.png" alt="Zoom"></p>
<p>Another edit: If I save my file as .eps instead of .pdf and enable <code>usextex</code> I get wonderfully vectorized fonts, but the tick labels are in serif font :< </p>
| 0 |
2016-09-21T10:29:39Z
| 39,616,713 |
<p>I think I finally found my answer after many attempts. I found it in <a href="http://stackoverflow.com/a/20709149/5528308">this SO post</a>.</p>
<p>I merely added this to the preamble:</p>
<pre><code>r'\usepackage{helvet}', # set the normal font here
r'\usepackage{sansmath}', # load up the sansmath so that math -> helvet
r'\sansmath' # <- tricky! -- gotta actually tell tex to use!
</code></pre>
<p>and set <code>"text.usetex": False</code>. Now it finally uses Helvetica everywhere and it is vectorized everywhere... well, except for axes with logarithmic scaling. There I have to manually set the axis labels using <code>ax.set_yticklabels([1, 2, 3, 4])</code>.</p>
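For reference, here is a sketch of how those three preamble lines could slot into the rcParams dict from the question. This mirrors the quoted SO answer and assumes a pdflatex installation with the `helvet` and `sansmath` LaTeX packages available (which is not verified here):

```python
# Sketch: rcParams from the question with the helvet/sansmath preamble added.
# Assumes pdflatex plus the helvet and sansmath LaTeX packages are installed.
pgf_with_latex = {
    "pgf.texsystem": "pdflatex",
    "text.usetex": False,            # keep usetex off; pgf still compiles via LaTeX
    "font.family": "sans-serif",
    "font.sans-serif": "Helvetica",
    "pgf.preamble": [
        r"\usepackage[utf8x]{inputenc}",
        r"\usepackage[T1]{fontenc}",
        r"\usepackage{helvet}",      # set the normal font here
        r"\usepackage{sansmath}",    # load sansmath so that math -> helvet
        r"\sansmath",                # actually tell TeX to use it
    ],
}

# In a real script this would be followed by:
#   import matplotlib as mpl
#   mpl.use("pgf")
#   mpl.rcParams.update(pgf_with_latex)
```

The dict itself is just configuration; the actual font rendering only happens once matplotlib hands the figure to pdflatex.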
| 0 |
2016-09-21T12:34:55Z
|
[
"python",
"matplotlib",
"fonts",
"pdf-generation",
"pgf"
] |
List available font families in `tkinter`
| 39,614,027 |
<p>In many <code>tkinter</code> examples available out there, you may see things like:</p>
<pre><code>canvas.create_text(x, y, font=('Helvetica', 12), text='foo')
</code></pre>
<p>However, this may not work when run on your computer (the result would completely ignore the font parameter). Apparently, the <code>font</code> parameter is ignored if there is any incorrect value in it.</p>
<p>In order to check if the font family is valid, how can I list all available in my system?</p>
| 0 |
2016-09-21T10:30:00Z
| 39,614,028 |
<pre><code>from tkinter import Tk, font
root = Tk()
font.families()
</code></pre>
| 1 |
2016-09-21T10:30:00Z
|
[
"python",
"tkinter"
] |
collect rows and columns based on id matching
| 39,614,059 |
<p>I use Python and the pandas library. I want to collect the rows and columns from a DataFrame according to one criterion: keep only those ids matching a pattern like 'BIKE-\d\d\d\d' in the column 'BikeID'. Given a pandas DataFrame <code>d1</code>, I tried several versions of the following: </p>
<pre><code>d2 = d1[d1["BikeID"] == re.compile(r' (BIKE-\d\d\d\d)')]
</code></pre>
<p>but I am getting an empty data-frame instead. It works when it is specific: </p>
<pre><code>d2 = d1[d1["BikeID"] == 'BIKE-0001']
</code></pre>
<p>but I want to match all ids that start with BIKE. I would appreciate it if you could show me a way of doing this task. </p>
| -1 |
2016-09-21T10:30:48Z
| 39,614,204 |
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow"><code>str.extract</code></a> to achieve this with the regex pattern <code>'(^BIKE-[\d]{4})'</code> this will look for strings that start with BIKE- and then 4 digits:</p>
<pre><code>In [167]:
s= pd.Series(['BIKE-0001', 'BIKE','BIKE-000','sdBIKE-0001'])
s
Out[167]:
0 BIKE-0001
1 BIKE
2 BIKE-000
3 sdBIKE-0001
dtype: object
In [168]:
s.str.extract(r'(^BIKE-[\d]{4})', expand=False)
Out[168]:
0 BIKE-0001
1 NaN
2 NaN
3 NaN
dtype: object
</code></pre>
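If the goal is to filter the DataFrame rows (as in the question) rather than extract values, the same kind of pattern also works as a boolean mask via <code>str.match</code>, which anchors at the start of the string. A sketch with made-up data standing in for the question's <code>d1</code>:

```python
import pandas as pd

# Hypothetical data standing in for the question's d1
d1 = pd.DataFrame({"BikeID": ["BIKE-0001", "BIKE", "sdBIKE-0001", "BIKE-0420"]})

# str.match anchors at the start of the string, so this keeps only
# ids of the form BIKE- followed by exactly four digits
mask = d1["BikeID"].str.match(r"BIKE-\d{4}$", na=False)
d2 = d1[mask]
print(d2["BikeID"].tolist())  # -> ['BIKE-0001', 'BIKE-0420']
```

`na=False` makes missing BikeID values count as non-matches instead of propagating NaN into the mask.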
| 0 |
2016-09-21T10:37:57Z
|
[
"python",
"pandas",
"dataframe"
] |
Duplicates / common elements between two lists
| 39,614,083 |
<p>I have a stupid question for people who are familiar with lists in Python. I want to get the common items in two lists. Assume I have this list:</p>
<pre><code>dates_list = ['2016-07-08 02:00:02',
'2016-07-08 02:00:17',
'2016-07-08 02:00:03',
'2016-07-08 02:00:20',
'2016-07-08 02:01:08',
'2016-07-08 02:00:09',
'2016-07-08 02:01:22',
'2016-07-08 02:01:33']
</code></pre>
<p>And a list named 'time_by_seconds', which contains lists of all seconds of a day:</p>
<pre><code>time_by_seconds = [['2016-07-08 02:00:00',
'2016-07-08 02:00:01',
'2016-07-08 02:00:02',
'2016-07-08 02:00:03',
'2016-07-08 02:00:04',
'2016-07-08 02:00:05',
'2016-07-08 02:00:06',
etc ],
['2016-07-08 02:01:00',
'2016-07-08 02:01:01',
'2016-07-08 02:01:02',
'2016-07-08 02:01:03',
'2016-07-08 02:01:04',
etc ]]
</code></pre>
<p>This is my code to print the items if they are in this list:</p>
<pre><code>for item in dates_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>This is the result :</p>
<pre><code>2016-07-08 02:00:02
2016-07-08 02:00:17
2016-07-08 02:00:03
2016-07-08 02:00:20
2016-07-08 02:01:08
2016-07-08 02:00:09
2016-07-08 02:01:22
2016-07-08 02:01:33
</code></pre>
<p>But if I use another list, of length 49, I get duplicates. Concretely, I should get 49 elements as the result, because all of those dates exist in my time_by_seconds.
This is the list:</p>
<pre><code>beginning_time_list = ['2016-07-08 02:17:42',
'2016-07-08 02:05:35',
'2016-07-08 02:03:22',
'2016-07-08 02:26:33',
'2016-07-08 02:14:54',
'2016-07-08 02:05:13',
'2016-07-08 02:15:30',
'2016-07-08 02:01:53',
'2016-07-08 02:02:31',
'2016-07-08 02:00:08',
'2016-07-08 02:04:16',
'2016-07-08 02:08:44',
'2016-07-08 02:11:17',
'2016-07-08 02:01:40',
'2016-07-08 02:04:23',
'2016-07-08 02:01:34',
'2016-07-08 02:24:31',
'2016-07-08 02:00:27',
'2016-07-08 02:14:35',
'2016-07-08 02:00:57',
'2016-07-08 02:02:24',
'2016-07-08 02:02:46',
'2016-07-08 02:05:04',
'2016-07-08 02:11:26',
'2016-07-08 02:06:24',
'2016-07-08 02:04:32',
'2016-07-08 02:08:50',
'2016-07-08 02:08:27',
'2016-07-08 02:02:30',
'2016-07-08 02:03:59',
'2016-07-08 02:01:19',
'2016-07-08 02:02:09',
'2016-07-08 02:05:47',
'2016-07-08 02:02:36',
'2016-07-08 02:01:02',
'2016-07-08 02:02:58',
'2016-07-08 02:06:19',
'2016-07-08 02:02:34',
'2016-07-08 02:00:17',
'2016-07-08 02:10:03',
'2016-07-08 02:08:20',
'2016-07-08 02:02:36',
'2016-07-08 02:17:25',
'2016-07-08 02:07:19',
'2016-07-08 02:13:07',
'2016-07-08 02:03:51',
'2016-07-08 02:03:35',
'2016-07-08 02:14:53',
'2016-07-08 02:18:36']
</code></pre>
<p>The same code :</p>
<pre><code>for item in beginning_time_list:
for one_list in time_by_seconds:
if item in one_list:
print item
</code></pre>
<p>And this is the result :</p>
<pre><code>2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:17:42
2016-07-08 02:05:35
2016-07-08 02:05:35
2016-07-08 02:03:22
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:26:33
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:14:54
2016-07-08 02:05:13
2016-07-08 02:05:13
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:15:30
2016-07-08 02:01:53
2016-07-08 02:02:31
2016-07-08 02:00:08
2016-07-08 02:04:16
2016-07-08 02:08:44
2016-07-08 02:08:44
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:11:17
2016-07-08 02:01:40
2016-07-08 02:04:23
2016-07-08 02:01:34
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:24:31
2016-07-08 02:00:27
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:14:35
2016-07-08 02:00:57
2016-07-08 02:02:24
2016-07-08 02:02:46
2016-07-08 02:05:04
2016-07-08 02:05:04
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:11:26
2016-07-08 02:06:24
2016-07-08 02:06:24
etc
</code></pre>
<p>Sorry, there are 95 items!</p>
<p>Does someone know why I have duplicates?
Thanks</p>
| 0 |
2016-09-21T10:32:09Z
| 39,614,124 |
<p>Once the item is found in one of the sublists, the search goes on with the other sublists. </p>
<p>You should consider using a <code>break</code>, to stop the search for the current date item once it is found in one of the sublists:</p>
<pre><code>for item in beginning_time_list:
for one_list in time_by_seconds:
if item in one_list:
print item
break
</code></pre>
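If the order of the output doesn't matter, another option is to flatten <code>time_by_seconds</code> into a set first, which both removes the need for <code>break</code> and makes each membership test O(1). A sketch with toy timestamps standing in for the question's data:

```python
# Toy data standing in for the question's lists
beginning_time_list = ["02:00:02", "02:00:17", "02:99:99"]
time_by_seconds = [["02:00:00", "02:00:02"], ["02:00:17", "02:00:18"]]

# Flatten the nested list into a set for O(1) membership tests,
# then keep each item from the first list at most once.
all_seconds = {t for sub in time_by_seconds for t in sub}
matches = [item for item in beginning_time_list if item in all_seconds]
print(matches)  # -> ['02:00:02', '02:00:17']
```

With 86,400 seconds in a day, the set lookup avoids scanning every sublist for every item, which the nested-loop version does even with the `break`.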
| 1 |
2016-09-21T10:34:28Z
|
[
"python",
"list",
"python-2.7",
"for-loop"
] |