Dataset columns:
- title: string (length 10 to 172)
- question_id: int64 (469 to 40.1M)
- question_body: string (length 22 to 48.2k)
- question_score: int64 (-44 to 5.52k)
- question_date: string (length 20)
- answer_id: int64 (497 to 40.1M)
- answer_body: string (length 18 to 33.9k)
- answer_score: int64 (-38 to 8.38k)
- answer_date: string (length 20)
- tags: list
How do I resolve builtins.ConnectionRefusedError error in attempting to send email using flask-mail
39,702,400
<p>I am making a simple WebApp using Flask framework in python. It will take user inputs for email and name from my website (<a href="http://www.anshulbansal.esy.es" rel="nofollow">www.anshulbansal.esy.es</a>) and will check if email exists in database (here database is supposed as dictionary for now) then it will not work further, but if it doesn't exists in database, it will send a random link to the submitted email and if the user clicks the link, then its info will be added to my database.</p> <p>It's almost done but this error in coming in my way. Check out this code: </p> <pre><code>from flask import Flask, render_template, request, redirect, url_for from flask_mail import Mail, Message import random import string def random_generator(size=6, chars=string.ascii_letters + string.digits): return ''.join(random.choice(chars) for x in range(size)) subscribers_d = {'anshul.bansal5@yahoo.com': 'Anshul Bansal', 'anshul.bansal3@yahoo.com': 'Bansal', 'anshul.bansal@yahoo.com': 'Anshul',} app = Flask(__name__) mail = Mail(app) app.config.update( MAIL_SERVER='smtp.gmail.com', MAIL_PORT=465, MAIL_USE_TLS = False, MAIL_USE_SSL=True, MAIL_USERNAME='anshul.bansal950@gmail.com', MAIL_PASSWORD="It's Secret" ) @app.route('/') def index(): return render_template("index.html") @app.route('/submit', methods=['POST']) def submit(): if request.method == "POST": v_name = request.form['vname'] v_email = request.form['vemail'] return send_mail(v_name, v_email) else: return redirect(url_for("/")) random_link_sent = random_generator(20) @app.route("/") def send_mail(v_name, v_email): if v_email in subscribers_d: return "Oh! It seems that you have already registered." else: msg = Message('Confirm Subscription', sender=['anshul.bansal950@gmail.com'], recipients=[v_email]) msg.html = "&lt;h3&gt;Confirm Subscription&lt;/h3&gt;" \ "&lt;p&gt;Hi! &lt;/p&gt;" + v_name + "&lt;p&gt; , Please click on below link to subscribe&lt;/p&gt;" \ "Link: " + ' www.anshulbansal.esy.es/' + random_link_sent mail.send(msg) return 'Check Your Inbox For Confirmation Email' @app.route("/&lt;random_link_sent&gt;") def confirm(random_link_sent): return "You have registered on " + random_link_sent subscribers_d[v_email] = v_name if __name__ == "__main__": app.run(debug=True) </code></pre> <p>But this code is giving me a builtins.ConnectionRefusedError Error. But before 2-3 attempts of sending email were successful without any error. How do I resolve it?</p> <p><a href="http://i.stack.imgur.com/cJNKS.png" rel="nofollow">Here is the screenshot of error</a></p>
-1
2016-09-26T12:08:04Z
39,714,728
<p>You should update the configuration before you initialize Mail:</p> <pre><code>app = Flask(__name__)

app.config.update(
    DEBUG = True,
    MAIL_SERVER = 'smtp.gmail.com',
    MAIL_PORT = 587,
    MAIL_USE_TLS = True,
    MAIL_USE_SSL = False,
    MAIL_USERNAME = 'your_username@gmail.com',
    MAIL_PASSWORD = 'your_password',
)

mail = Mail(app)
</code></pre>
1
2016-09-27T01:30:44Z
[ "python", "email", "flask", "flask-mail" ]
hash function that outputs integer from 0 to 255?
39,702,457
<p>I need a very simple hash function in Python that will convert a string to an integer from 0 to 255.</p> <p>For example:</p> <pre><code>&gt;&gt;&gt; hash_function("abc_123")
32
&gt;&gt;&gt; hash_function("any-string-value")
99
</code></pre> <p>It does not matter what the integer is as long as I get the same integer every time I call the function.</p> <p>I want to use the integer to generate a random subnet mask based on the name of the network.</p>
2
2016-09-26T12:11:09Z
39,702,481
<p>You could just use the modulus of the <a href="https://docs.python.org/3/library/functions.html#hash" rel="nofollow"><code>hash()</code> function</a> output:</p> <pre><code>def onebyte_hash(s):
    return hash(s) % 256
</code></pre> <p>This is what dictionaries and sets use (hash modulus the internal table size).</p> <p>Demo:</p> <pre><code>&gt;&gt;&gt; onebyte_hash('abc_123')
182
&gt;&gt;&gt; onebyte_hash('any-string-value')
12
</code></pre> <p>Caveat: On Python 3.3 and up, <em>hash randomisation</em> is enabled by default, and <em>between restarts of Python</em> you'll get different values. The hash, then, is only stable if you don't restart the Python process or set <a href="https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED" rel="nofollow"><code>PYTHONHASHSEED</code></a> to a fixed decimal number (with <code>0</code> disabling it altogether). On Python 2 and 3.0 through to 3.2 hash randomisation is either not available or only enabled if you set a seed explicitly.</p> <p>Another alternative is to use <a href="https://docs.python.org/3/library/hashlib.html" rel="nofollow"><code>hashlib.md5()</code></a> and just take the (integer value of the) first byte:</p> <pre><code>import hashlib

try:
    # Python 2; Python 3 will throw an exception here as bytes are required
    hashlib.md5('')

    def onebyte_hash(s):
        return ord(hashlib.md5(s).digest()[0])

except TypeError:
    # Python 3; encode the string first, return first byte
    def onebyte_hash(s):
        return hashlib.md5(s.encode('utf8')).digest()[0]
</code></pre> <p>MD5 is a well-established cryptographic hash; the output is stable across Python versions and independent of hash randomisation.</p> <p>The disadvantage of the latter would be that it'd be marginally slower; Python caches string hashes on the string object, so retrieving the hash later on is fast and cheap most of the time.</p>
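<p>A small usage sketch of the <code>hashlib</code>-based variant above, assuming Python 3 and made-up network names; unlike <code>hash()</code>, the result stays the same across interpreter restarts:</p> <pre><code>import hashlib

def onebyte_hash(s):
    return hashlib.md5(s.encode('utf8')).digest()[0]

for name in ('office-lan', 'guest-wifi'):    # hypothetical network names
    print(name, onebyte_hash(name))          # same values on every run
</code></pre>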
10
2016-09-26T12:12:16Z
[ "python", "hash", "integer" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,563
<p><code>your_tuple = [(x, y) for x, y in queryset.items()]</code></p> <hr> <p>EDIT</p> <p><code>your_tuple = [(x.get('x'), x.get('y')) for x in queryset]</code></p>
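<p>A quick check of the edited comprehension against the sample data from the question, assuming the keys are the strings <code>'x'</code> and <code>'y'</code>:</p> <pre><code>queryset = [{'x': '1', 'y': 'a'}, {'x': '2', 'y': 'b'}]
your_tuple = [(d.get('x'), d.get('y')) for d in queryset]
print(your_tuple)  # [('1', 'a'), ('2', 'b')]
</code></pre>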
1
2016-09-26T12:15:58Z
[ "python", "django", "performance" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,671
<p>You can use <code>dict.values()</code>:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = []
for i in queryset:
    result.append(tuple(i.values()))
</code></pre> <p>Or in one line:</p> <pre><code>result = [tuple(i.values()) for i in queryset]
</code></pre> <hr> <p>If you want them in a particular order:</p> <p><code>result = [(i[x], i[y]) for i in queryset]</code></p>
1
2016-09-26T12:21:08Z
[ "python", "django", "performance" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,688
<p>Assuming <code>queryset</code> is supposed to look like this, with <code>'x'</code> and <code>'y'</code> as string keys:</p> <pre><code>&gt;&gt;&gt; queryset = [{'x':'1', 'y':'a'}, {'x':'2', 'y':'b'}]
&gt;&gt;&gt; result = [(q['x'], q['y']) for q in queryset]
&gt;&gt;&gt; result
[('1', 'a'), ('2', 'b')]
&gt;&gt;&gt; # or if x and y are actually the correct names/vars for the keys
... result = [(q[x], q[y]) for q in queryset]
</code></pre>
1
2016-09-26T12:21:46Z
[ "python", "django", "performance" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,727
<p>Use a list comprehension and <code>dict.values()</code>:</p> <pre><code>&gt;&gt;&gt; queryset = [{'x': '1', 'y': 'a'}, {'x': '2', 'y': 'b'}]
&gt;&gt;&gt; result = [tuple(v.values()) for v in queryset]
&gt;&gt;&gt; result
[('1', 'a'), ('2', 'b')]
</code></pre> <p><strong>UPDATE</strong></p> <p>As @aneroid reasonably mentioned, since <code>dict</code> objects are not ordered, the code snippet could return the values in a different order in the <code>tuple</code>.</p> <p>Since I don't want to add a duplicate solution, there is one option, not so elegant and maybe less efficient, which is to use <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="nofollow"><code>OrderedDict</code></a>:</p> <pre><code>&gt;&gt;&gt; from collections import OrderedDict
&gt;&gt;&gt; queryset = [{'x': '1', 'y': 'a'}, {'x': '2', 'y': 'b'}]
&gt;&gt;&gt; order = ('x', 'y')
&gt;&gt;&gt; result = [tuple(OrderedDict((k, v[k]) for k in order).values()) for v in queryset]
&gt;&gt;&gt; result
[('1', 'a'), ('2', 'b')]
</code></pre> <p>But I personally think that @PadraicCunningham's <a href="http://stackoverflow.com/a/39702851/3124746">solution</a> is the most elegant here.</p>
2
2016-09-26T12:23:32Z
[ "python", "django", "performance" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,851
<p>If you can have multiple keys and you just want certain key values, you can use <em>itemgetter</em> with map, passing the keys you want to extract:</p> <pre><code>from operator import itemgetter

result = list(map(itemgetter("x", "y"), queryset))
</code></pre>
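<p>For example, with string keys (a sketch; in the question <code>x</code> and <code>y</code> may instead be variables holding the key names):</p> <pre><code>from operator import itemgetter

queryset = [{'x': '1', 'y': 'a'}, {'x': '2', 'y': 'b'}]
result = list(map(itemgetter('x', 'y'), queryset))
print(result)  # [('1', 'a'), ('2', 'b')]
</code></pre>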
2
2016-09-26T12:28:39Z
[ "python", "django", "performance" ]
Python: Converting a queryset in a list of tuples
39,702,538
<p>I want to turn a queryset into a list of tuples. "result" is what it should look like:</p> <pre><code>queryset = [{x:'1',y:'a'}, {x:'2',y:'b'}]
result = [(1,'a'),(2,'b')]
</code></pre> <p>The solution I have right now is this one, but it seems to me that there could be a way to shorten the code and make it more efficient. Is there a way?</p> <p>My current solution:</p> <pre><code>result = []
for dic in queryset:
    result.append((dic[x],dic[y]))
</code></pre>
-1
2016-09-26T12:15:01Z
39,702,885
<p>If this is a Django ORM queryset (or the result of one), you can just use the <code>values_list</code> method instead of <code>values</code>. That will give exactly what you want.</p>
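<p>A minimal sketch, assuming a Django model <code>MyModel</code> with fields <code>x</code> and <code>y</code> (all placeholder names):</p> <pre><code># values_list returns a queryset of tuples directly
result = list(MyModel.objects.values_list('x', 'y'))
# e.g. [('1', 'a'), ('2', 'b')]
</code></pre>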
2
2016-09-26T12:30:40Z
[ "python", "django", "performance" ]
How to make my function run every hour?
39,702,803
<p>With the help of some online tuts (Bucky), I've managed to write a simple web scraper that just checks if some text is on a webpage. What I would like to do however, is have the code run every hour. I assume I will need to host the code also so it does this? I've done some research but can't seem to find a proper way of running it every hour. Here is the code I've got so far:</p> <pre><code>import requests from bs4 import BeautifulSoup def odeon_spider(max_pages): page = 1 while page &lt;= max_pages: url = "http://www.odeon.co.uk/films/rogue_one_a_star_wars_story/16038/" + str(page) #stores url in variable source_code = requests.get(url) #gets url and sets it as source_code variable plain_text = source_code.text #stores plain text in plain_text variable soup = BeautifulSoup(plain_text, "lxml") #create beautifulsoup object div_content = soup.findAll("div", {"class": "textComponent"}) #finds all divs with specific class for x in div_content: find_para = str(x.find('p').text) #finds all paragraphs and stores them in variable text_to_search = "Register to be notified" #set text to search to variable if text_to_search in find_para: #checks if text is in find_para print("No tickets") else: print("Tickets") page += 1 odeon_spider(1) </code></pre> <p>Thanks!</p>
-1
2016-09-26T12:26:52Z
39,703,100
<p>The easiest way would be like this:</p> <pre><code>import time

while True:
    call_your_function()
    time.sleep(3600)
</code></pre> <p>If you wanna do this on Linux, you can just type</p> <pre><code>nohup python -u your_script_name &amp;
</code></pre> <p>and then your script will run as a process. (If you don't kill it, it just keeps running without hangup.)</p>
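<p>A sketch of how the scraper from the question could be wrapped, assuming <code>odeon_spider</code> is defined or imported in the same script; note that <code>sleep(3600)</code> waits an hour between runs rather than starting at a fixed time each hour:</p> <pre><code>import time

while True:
    odeon_spider(1)    # the function defined in the question
    time.sleep(3600)   # wait roughly an hour before the next run
</code></pre>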
4
2016-09-26T12:39:50Z
[ "python", "function", "web", "scraper" ]
glsl parser using pyparsing giving AttributeErrors
39,702,882
<p>I'm trying to update this <a href="https://github.com/rougier/glsl-parser" rel="nofollow">glsl-parser</a> which uses an old pyparsing version and python2.x to python3.x &amp; the newest pyparsing version (2.1.9 atm). </p> <p>I don't know which pyparsing version was using the original source code but it must be quite old because is still using <code>keepOriginalText</code> helper method, after reading pyparsing <a href="http://pyparsing.wikispaces.com/News" rel="nofollow">news</a> I've seen this comment <code>Removed keepOriginalText helper method, which was deprecated ages ago. Superceded by originalTextFor.</code></p> <p>Anyway, here's the first attempt of the port using python3.5.1 &amp; pyparsing==2.1.9:</p> <pre><code># -*- coding: utf-8 -*- # ----------------------------------------------------------------------------- # Copyright (c) 2014, Nicolas P. Rougier # Distributed under the (new) BSD License. See LICENSE.txt for more info. # ----------------------------------------------------------------------------- import pyparsing keywords = ("attribute const uniform varying break continue do for while" "if else" "in out inout" "float int void bool true false" "lowp mediump highp precision invariant" "discard return" "mat2 mat3 mat4" "vec2 vec3 vec4 ivec2 ivec3 ivec4 bvec2 bvec3 bvec4 sampler2D samplerCube" "struct") reserved = ("asm" "class union enum typedef template this packed" "goto switch default" "inline noinline volatile public static extern external" "interface flat long short double half fixed unsigned superp" "input output" "hvec2 hvec3 hvec4 dvec2 dvec3 dvec4 fvec2 fvec3 fvec4 sampler1D sampler3D" "sampler1DShadow sampler2DShadow" "sampler2DRect sampler3DRect sampler2DRectShadow" "sizeof cast" "namespace using") precision = "lowp mediump high" storage = "const uniform attribute varying" # Tokens # ---------------------------------- LPAREN = pyparsing.Literal("(").suppress() RPAREN = pyparsing.Literal(")").suppress() LBRACK = pyparsing.Literal("[").suppress() RBRACK = pyparsing.Literal("]").suppress() LBRACE = pyparsing.Literal("{").suppress() RBRACE = pyparsing.Literal("}").suppress() IDENTIFIER = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + '_') TYPE = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + "_") END = pyparsing.Literal(";").suppress() INT = pyparsing.Word(pyparsing.nums) FLOAT = pyparsing.Regex( '[+-]?(((\d+\.\d*)|(\d*\.\d+))([eE][-+]?\d+)?)|(\d*[eE][+-]?\d+)') STORAGE = pyparsing.Regex('|'.join(storage.split(' '))) PRECISION = pyparsing.Regex('|'.join(precision.split(' '))) STRUCT = pyparsing.Literal("struct").suppress() # ------------------------ def get_prototypes(code): """ Get all function declarations Code example ------------ mediump vec3 function_1(vec4); vec3 function_2(float a, float b); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.Optional(IDENTIFIER).setResultsName("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER).setResultsName( "arg", listAllMatches=True) PROTOTYPE = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + END) PROTOTYPE.ignore(pyparsing.cStyleComment) for (token, start, end) in PROTOTYPE.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(')') # ------------------------ def 
get_functions(code): """ Get all function definitions Code example ------------ mediump vec3 compute_normal(vec4); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.Optional(IDENTIFIER).setResultsName("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER).setResultsName( "arg", listAllMatches=True) FUNCTION = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + pyparsing.nestedExpr("{", "}").setParseAction(pyparsing.originalTextFor).setResultsName("code")) FUNCTION.ignore(pyparsing.cStyleComment) for (token, start, end) in FUNCTION.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(') { ... }') # print token.code # print code[start:end] # ------------------------ def get_version(code): """ Get shader version (if specified) Code example ------------ #version 120 """ VERSION = ( pyparsing.Literal("#") + pyparsing.Keyword("version")).suppress() + INT for (token, start, end) in VERSION.scanString(code): version = token[0] # print code[start:end] return version # ------------------------ def get_declarations(code): """ Get all declarations prefixed with a storage qualifier. Code example ------------ uniform lowp vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); """ # Callable expression EXPRESSION = pyparsing.Forward() ARG = pyparsing.Group(EXPRESSION) | IDENTIFIER | FLOAT | INT ARGS = pyparsing.delimitedList(ARG) EXPRESSION &lt;&lt; IDENTIFIER + \ pyparsing.Group(LPAREN + pyparsing.Optional(ARGS) + RPAREN) # Value VALUE = (EXPRESSION | pyparsing.Word(pyparsing.alphanums + "_()+-/*") ).setParseAction(pyparsing.originalTextFor) # Single declaration VARIABLE = (IDENTIFIER.setResultsName("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER).setResultsName("size") + RBRACK) + pyparsing.Optional(pyparsing.Literal("=").suppress() + VALUE.setResultsName("value"))) # Several declarations at once DECLARATION = (STORAGE.setResultsName("storage") + pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.delimitedList(VARIABLE.setResultsName("variable", listAllMatches=True)) + END) DECLARATION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DECLARATION.scanString(code): for token in tokens.variable: print(tokens.storage, tokens.precision, tokens.type,) print(token.name, token.size) # ------------------------ def get_definitions(code): """ Get all structure definitions and associated declarations. 
Code example ------------ uniform struct Light { vec4 position; vec3 color; } light0, light1; """ # Single declaration DECLARATION = pyparsing.Group(IDENTIFIER.setResultsName("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER).setResultsName("size") + RBRACK)) # Several declarations at once DECLARATIONS = (pyparsing.Optional(PRECISION) + TYPE + pyparsing.delimitedList(DECLARATION) + END) # Definition + declarations DEFINITION = (STRUCT + IDENTIFIER.setResultsName("name") + LBRACE + pyparsing.OneOrMore(DECLARATIONS).setResultsName('content') + RBRACE + pyparsing.Optional(pyparsing.delimitedList(DECLARATION.setResultsName("declarations", listAllMatches=True))) + END) DEFINITION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DEFINITION.scanString(code): for token in tokens.declarations: print(tokens.name, token.name) # print tokens.content # ---------------- def resolve(code): """ Expand const and preprocessor definitions in order to get constant values. Return the transformed code """ constants = {} DEFINITION = (pyparsing.Literal("#") + pyparsing.Literal("define") + IDENTIFIER.setResultsName("name") + pyparsing.restOfLine.setResultsName("value")) VALUE = pyparsing.Word(pyparsing.alphanums + "_()+-/*") DECLARATION = (pyparsing.Literal("const") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + pyparsing.Literal("=") + VALUE.setResultsName("value") + pyparsing.Literal(";")) REFERENCE = pyparsing.Forward() def process_definition(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "#define " + t.name + " " + value def process_declaration(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "const " + t.type + " " + t.name + "=" + value + ";" def process_reference(s, l, t): return constants[t[0]] REFERENCE.setParseAction(process_reference) DEFINITION.setParseAction(process_definition) DECLARATION.setParseAction(process_declaration) EXPANDER = REFERENCE | DEFINITION | DECLARATION code = EXPANDER.transformString(code) for key, val in constants.items(): constants[key] = eval(val) return code, constants # ----------------------------------------------------------------------------- if __name__ == '__main__': code = """ #version 120 #define A (1) const int B=(A+2); #define C (B+3) const int D=C+4; uniform float array[D]; struct Point { vec4 position; float size; }; uniform struct Light { vec4 position; vec3 color; } light0, light1; const float PI = 3.14159265358979323846264; const float SQRT_2 = 1.4142135623730951; uniform vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); mediump vec3 compute_normal(vec4 position, vec3 orientation); vec3 /* */ compute_light(vec4, vec3, float intensity) { vec3 hello; vec3 hello; } """ code, _ = resolve(code) print("GLSL version: %s\n" % get_version(code)) get_definitions(code) get_declarations(code) get_prototypes(code) get_functions(code) # code = """ # #if A # #if B # #if C # #endif # #endif # #endif # """ # IF = (pyparsing.Literal('#') + (pyparsing.Keyword('if') | pyparsing.Keyword('ifdef') | pyparsing.Keyword('ifndef'))) # ENDIF = (pyparsing.Literal('#') + pyparsing.Keyword('endif')) # MACRO = (IF + pyparsing.restOfLine() + # SkipTo(ENDIF, include=True)).setParseAction(pyparsing.originalTextFor) # for (tokens, start, end) in MACRO.scanString(code): # print tokens </code></pre> <p>When you try 
running the above mcve code you'll get:</p> <pre><code>GLSL version: 120 ('Light', 'light0') ('Light', 'light1') d:\virtual_envs\py2711\lib\site-packages\pyparsing.py:3536: SyntaxWarning: Cannot combine element of type &lt;type 'int'&gt; with ParserElement matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") d:\virtual_envs\py2711\lib\site-packages\pyparsing.py:3536: SyntaxWarning: Cannot combine element of type &lt;type 'NoneType'&gt; with ParserElement matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") Traceback (most recent call last): File "D:\sources\personal\python\pyqt\pyshaders\gui\glsl-parser.py", line 311, in &lt;module&gt; get_declarations(code) File "D:\sources\personal\python\pyqt\pyshaders\gui\glsl-parser.py", line 173, in get_declarations for (tokens, start, end) in DECLARATION.scanString(code): File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1258, in scanString nextLoc,tokens = parseFn( instring, preloc, callPreParse=False ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 3038, in parseImpl loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1110, in _parseNoCache tokens = fn( instring, tokensStart, retTokens ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 831, in wrapper ret = func(*args[limit[0]:]) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 3542, in originalTextFor matchExpr.setParseAction(extractText) AttributeError: 'NoneType' object has no attribute 'setParseAction' </code></pre> <p>I'm still in the process of learning pyparsing, what's the problem here?</p>
2
2016-09-26T12:30:15Z
39,704,620
<p><code>originalTextFor</code> is not a parse action, but a helper method that attaches a parse action to a defined expression. In your example it is used in two places:</p> <pre><code># Value VALUE = (EXPRESSION | pyparsing.Word(pyparsing.alphanums + "_()+-/*") ).setParseAction(pyparsing.originalTextFor) FUNCTION = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + pyparsing.nestedExpr("{", "}").setParseAction(pyparsing.originalTextFor).setResultsName("code")) </code></pre> <p>change these to:</p> <pre><code># Value VALUE = pyparsing.originalTextFor(EXPRESSION | pyparsing.Word(pyparsing.alphanums + "_()+-/*")) FUNCTION = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + pyparsing.originalTextFor(pyparsing.nestedExpr("{", "}")).setResultsName("code")) </code></pre> <p>You might find that the more recent pyparsing form of <code>setResultsName</code> to be a little cleaner-looking, but the old form still works fine:</p> <pre><code>FUNCTION = (pyparsing.Optional(PRECISION)("precision") + TYPE("type") + IDENTIFIER("name") + LPAREN + pyparsing.Optional(PARAMETERS)("args") + RPAREN + pyparsing.originalTextFor(pyparsing.nestedExpr("{", "}"))("code")) </code></pre> <p>If you make this change, those places that use <code>listAllMatches</code> are handled by adding a '*' to the name argument:</p> <pre><code>pyparsing.delimitedList(VARIABLE("variable*")) </code></pre>
1
2016-09-26T13:46:06Z
[ "python", "python-3.x", "glsl", "pyparsing" ]
glsl parser using pyparsing giving AttributeErrors
39,702,882
<p>I'm trying to update this <a href="https://github.com/rougier/glsl-parser" rel="nofollow">glsl-parser</a> which uses an old pyparsing version and python2.x to python3.x &amp; the newest pyparsing version (2.1.9 atm). </p> <p>I don't know which pyparsing version was using the original source code but it must be quite old because is still using <code>keepOriginalText</code> helper method, after reading pyparsing <a href="http://pyparsing.wikispaces.com/News" rel="nofollow">news</a> I've seen this comment <code>Removed keepOriginalText helper method, which was deprecated ages ago. Superceded by originalTextFor.</code></p> <p>Anyway, here's the first attempt of the port using python3.5.1 &amp; pyparsing==2.1.9:</p> <pre><code># -*- coding: utf-8 -*- # ----------------------------------------------------------------------------- # Copyright (c) 2014, Nicolas P. Rougier # Distributed under the (new) BSD License. See LICENSE.txt for more info. # ----------------------------------------------------------------------------- import pyparsing keywords = ("attribute const uniform varying break continue do for while" "if else" "in out inout" "float int void bool true false" "lowp mediump highp precision invariant" "discard return" "mat2 mat3 mat4" "vec2 vec3 vec4 ivec2 ivec3 ivec4 bvec2 bvec3 bvec4 sampler2D samplerCube" "struct") reserved = ("asm" "class union enum typedef template this packed" "goto switch default" "inline noinline volatile public static extern external" "interface flat long short double half fixed unsigned superp" "input output" "hvec2 hvec3 hvec4 dvec2 dvec3 dvec4 fvec2 fvec3 fvec4 sampler1D sampler3D" "sampler1DShadow sampler2DShadow" "sampler2DRect sampler3DRect sampler2DRectShadow" "sizeof cast" "namespace using") precision = "lowp mediump high" storage = "const uniform attribute varying" # Tokens # ---------------------------------- LPAREN = pyparsing.Literal("(").suppress() RPAREN = pyparsing.Literal(")").suppress() LBRACK = pyparsing.Literal("[").suppress() RBRACK = pyparsing.Literal("]").suppress() LBRACE = pyparsing.Literal("{").suppress() RBRACE = pyparsing.Literal("}").suppress() IDENTIFIER = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + '_') TYPE = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + "_") END = pyparsing.Literal(";").suppress() INT = pyparsing.Word(pyparsing.nums) FLOAT = pyparsing.Regex( '[+-]?(((\d+\.\d*)|(\d*\.\d+))([eE][-+]?\d+)?)|(\d*[eE][+-]?\d+)') STORAGE = pyparsing.Regex('|'.join(storage.split(' '))) PRECISION = pyparsing.Regex('|'.join(precision.split(' '))) STRUCT = pyparsing.Literal("struct").suppress() # ------------------------ def get_prototypes(code): """ Get all function declarations Code example ------------ mediump vec3 function_1(vec4); vec3 function_2(float a, float b); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.Optional(IDENTIFIER).setResultsName("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER).setResultsName( "arg", listAllMatches=True) PROTOTYPE = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + END) PROTOTYPE.ignore(pyparsing.cStyleComment) for (token, start, end) in PROTOTYPE.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(')') # ------------------------ def 
get_functions(code): """ Get all function definitions Code example ------------ mediump vec3 compute_normal(vec4); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.Optional(IDENTIFIER).setResultsName("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER).setResultsName( "arg", listAllMatches=True) FUNCTION = (pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + LPAREN + pyparsing.Optional(PARAMETERS).setResultsName("args") + RPAREN + pyparsing.nestedExpr("{", "}").setParseAction(pyparsing.originalTextFor).setResultsName("code")) FUNCTION.ignore(pyparsing.cStyleComment) for (token, start, end) in FUNCTION.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(') { ... }') # print token.code # print code[start:end] # ------------------------ def get_version(code): """ Get shader version (if specified) Code example ------------ #version 120 """ VERSION = ( pyparsing.Literal("#") + pyparsing.Keyword("version")).suppress() + INT for (token, start, end) in VERSION.scanString(code): version = token[0] # print code[start:end] return version # ------------------------ def get_declarations(code): """ Get all declarations prefixed with a storage qualifier. Code example ------------ uniform lowp vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); """ # Callable expression EXPRESSION = pyparsing.Forward() ARG = pyparsing.Group(EXPRESSION) | IDENTIFIER | FLOAT | INT ARGS = pyparsing.delimitedList(ARG) EXPRESSION &lt;&lt; IDENTIFIER + \ pyparsing.Group(LPAREN + pyparsing.Optional(ARGS) + RPAREN) # Value VALUE = (EXPRESSION | pyparsing.Word(pyparsing.alphanums + "_()+-/*") ).setParseAction(pyparsing.originalTextFor) # Single declaration VARIABLE = (IDENTIFIER.setResultsName("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER).setResultsName("size") + RBRACK) + pyparsing.Optional(pyparsing.Literal("=").suppress() + VALUE.setResultsName("value"))) # Several declarations at once DECLARATION = (STORAGE.setResultsName("storage") + pyparsing.Optional(PRECISION).setResultsName("precision") + TYPE.setResultsName("type") + pyparsing.delimitedList(VARIABLE.setResultsName("variable", listAllMatches=True)) + END) DECLARATION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DECLARATION.scanString(code): for token in tokens.variable: print(tokens.storage, tokens.precision, tokens.type,) print(token.name, token.size) # ------------------------ def get_definitions(code): """ Get all structure definitions and associated declarations. 
Code example ------------ uniform struct Light { vec4 position; vec3 color; } light0, light1; """ # Single declaration DECLARATION = pyparsing.Group(IDENTIFIER.setResultsName("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER).setResultsName("size") + RBRACK)) # Several declarations at once DECLARATIONS = (pyparsing.Optional(PRECISION) + TYPE + pyparsing.delimitedList(DECLARATION) + END) # Definition + declarations DEFINITION = (STRUCT + IDENTIFIER.setResultsName("name") + LBRACE + pyparsing.OneOrMore(DECLARATIONS).setResultsName('content') + RBRACE + pyparsing.Optional(pyparsing.delimitedList(DECLARATION.setResultsName("declarations", listAllMatches=True))) + END) DEFINITION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DEFINITION.scanString(code): for token in tokens.declarations: print(tokens.name, token.name) # print tokens.content # ---------------- def resolve(code): """ Expand const and preprocessor definitions in order to get constant values. Return the transformed code """ constants = {} DEFINITION = (pyparsing.Literal("#") + pyparsing.Literal("define") + IDENTIFIER.setResultsName("name") + pyparsing.restOfLine.setResultsName("value")) VALUE = pyparsing.Word(pyparsing.alphanums + "_()+-/*") DECLARATION = (pyparsing.Literal("const") + TYPE.setResultsName("type") + IDENTIFIER.setResultsName("name") + pyparsing.Literal("=") + VALUE.setResultsName("value") + pyparsing.Literal(";")) REFERENCE = pyparsing.Forward() def process_definition(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "#define " + t.name + " " + value def process_declaration(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "const " + t.type + " " + t.name + "=" + value + ";" def process_reference(s, l, t): return constants[t[0]] REFERENCE.setParseAction(process_reference) DEFINITION.setParseAction(process_definition) DECLARATION.setParseAction(process_declaration) EXPANDER = REFERENCE | DEFINITION | DECLARATION code = EXPANDER.transformString(code) for key, val in constants.items(): constants[key] = eval(val) return code, constants # ----------------------------------------------------------------------------- if __name__ == '__main__': code = """ #version 120 #define A (1) const int B=(A+2); #define C (B+3) const int D=C+4; uniform float array[D]; struct Point { vec4 position; float size; }; uniform struct Light { vec4 position; vec3 color; } light0, light1; const float PI = 3.14159265358979323846264; const float SQRT_2 = 1.4142135623730951; uniform vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); mediump vec3 compute_normal(vec4 position, vec3 orientation); vec3 /* */ compute_light(vec4, vec3, float intensity) { vec3 hello; vec3 hello; } """ code, _ = resolve(code) print("GLSL version: %s\n" % get_version(code)) get_definitions(code) get_declarations(code) get_prototypes(code) get_functions(code) # code = """ # #if A # #if B # #if C # #endif # #endif # #endif # """ # IF = (pyparsing.Literal('#') + (pyparsing.Keyword('if') | pyparsing.Keyword('ifdef') | pyparsing.Keyword('ifndef'))) # ENDIF = (pyparsing.Literal('#') + pyparsing.Keyword('endif')) # MACRO = (IF + pyparsing.restOfLine() + # SkipTo(ENDIF, include=True)).setParseAction(pyparsing.originalTextFor) # for (tokens, start, end) in MACRO.scanString(code): # print tokens </code></pre> <p>When you try 
running the above mcve code you'll get:</p> <pre><code>GLSL version: 120 ('Light', 'light0') ('Light', 'light1') d:\virtual_envs\py2711\lib\site-packages\pyparsing.py:3536: SyntaxWarning: Cannot combine element of type &lt;type 'int'&gt; with ParserElement matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") d:\virtual_envs\py2711\lib\site-packages\pyparsing.py:3536: SyntaxWarning: Cannot combine element of type &lt;type 'NoneType'&gt; with ParserElement matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") Traceback (most recent call last): File "D:\sources\personal\python\pyqt\pyshaders\gui\glsl-parser.py", line 311, in &lt;module&gt; get_declarations(code) File "D:\sources\personal\python\pyqt\pyshaders\gui\glsl-parser.py", line 173, in get_declarations for (tokens, start, end) in DECLARATION.scanString(code): File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1258, in scanString nextLoc,tokens = parseFn( instring, preloc, callPreParse=False ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 3038, in parseImpl loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1084, in _parseNoCache loc,tokens = self.parseImpl( instring, preloc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 2576, in parseImpl loc, exprtokens = e._parse( instring, loc, doActions ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 1110, in _parseNoCache tokens = fn( instring, tokensStart, retTokens ) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 831, in wrapper ret = func(*args[limit[0]:]) File "d:\virtual_envs\py2711\lib\site-packages\pyparsing.py", line 3542, in originalTextFor matchExpr.setParseAction(extractText) AttributeError: 'NoneType' object has no attribute 'setParseAction' </code></pre> <p>I'm still in the process of learning pyparsing, what's the problem here?</p>
2
2016-09-26T12:30:15Z
39,705,506
<p>Following @Paul McGuire advices, here's a working version which uses python3.5.1 and pyparsing 2.1.9:</p> <pre><code># -*- coding: utf-8 -*- # ----------------------------------------------------------------------------- # Copyright (c) 2014, Nicolas P. Rougier # Distributed under the (new) BSD License. See LICENSE.txt for more info. # ----------------------------------------------------------------------------- import pyparsing keywords = ("attribute const uniform varying break continue do for while" "if else" "in out inout" "float int void bool true false" "lowp mediump highp precision invariant" "discard return" "mat2 mat3 mat4" "vec2 vec3 vec4 ivec2 ivec3 ivec4 bvec2 bvec3 bvec4 sampler2D samplerCube" "struct") reserved = ("asm" "class union enum typedef template this packed" "goto switch default" "inline noinline volatile public static extern external" "interface flat long short double half fixed unsigned superp" "input output" "hvec2 hvec3 hvec4 dvec2 dvec3 dvec4 fvec2 fvec3 fvec4 sampler1D sampler3D" "sampler1DShadow sampler2DShadow" "sampler2DRect sampler3DRect sampler2DRectShadow" "sizeof cast" "namespace using") precision = "lowp mediump high" storage = "const uniform attribute varying" # Tokens # ---------------------------------- LPAREN = pyparsing.Literal("(").suppress() RPAREN = pyparsing.Literal(")").suppress() LBRACK = pyparsing.Literal("[").suppress() RBRACK = pyparsing.Literal("]").suppress() LBRACE = pyparsing.Literal("{").suppress() RBRACE = pyparsing.Literal("}").suppress() IDENTIFIER = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + '_') TYPE = pyparsing.Word(pyparsing.alphas + '_', pyparsing.alphanums + "_") END = pyparsing.Literal(";").suppress() INT = pyparsing.Word(pyparsing.nums) FLOAT = pyparsing.Regex( '[+-]?(((\d+\.\d*)|(\d*\.\d+))([eE][-+]?\d+)?)|(\d*[eE][+-]?\d+)') STORAGE = pyparsing.Regex('|'.join(storage.split(' '))) PRECISION = pyparsing.Regex('|'.join(precision.split(' '))) STRUCT = pyparsing.Literal("struct").suppress() # ------------------------ def get_prototypes(code): """ Get all function declarations Code example ------------ mediump vec3 function_1(vec4); vec3 function_2(float a, float b); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION)("precision") + TYPE("type") + pyparsing.Optional(IDENTIFIER)("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER)( "arg*") PROTOTYPE = (pyparsing.Optional(PRECISION)("precision") + TYPE("type") + IDENTIFIER("name") + LPAREN + pyparsing.Optional(PARAMETERS)("args") + RPAREN + END) PROTOTYPE.ignore(pyparsing.cStyleComment) for (token, start, end) in PROTOTYPE.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(')') # ------------------------ def get_functions(code): """ Get all function definitions Code example ------------ mediump vec3 compute_normal(vec4); """ PARAMETER = pyparsing.Group(pyparsing.Optional(PRECISION)("precision") + TYPE("type") + pyparsing.Optional(IDENTIFIER)("name")) PARAMETERS = pyparsing.delimitedList(PARAMETER)( "arg*") FUNCTION = (pyparsing.Optional(PRECISION)("precision") + TYPE("type") + IDENTIFIER("name") + LPAREN + pyparsing.Optional(PARAMETERS)("args") + RPAREN + pyparsing.originalTextFor(pyparsing.nestedExpr("{", "}"))("code")) FUNCTION.ignore(pyparsing.cStyleComment) for (token, start, end) in FUNCTION.scanString(code): print(token.precision, token.type, token.name, '(',) for arg in token.args: print(arg.precision, arg.type, arg.name, ',',) print(') { ... 
}') # print token.code # print code[start:end] # ------------------------ def get_version(code): """ Get shader version (if specified) Code example ------------ #version 120 """ VERSION = ( pyparsing.Literal("#") + pyparsing.Keyword("version")).suppress() + INT for (token, start, end) in VERSION.scanString(code): version = token[0] # print code[start:end] return version # ------------------------ def get_declarations(code): """ Get all declarations prefixed with a storage qualifier. Code example ------------ uniform lowp vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); """ # Callable expression EXPRESSION = pyparsing.Forward() ARG = pyparsing.Group(EXPRESSION) | IDENTIFIER | FLOAT | INT ARGS = pyparsing.delimitedList(ARG) EXPRESSION &lt;&lt; IDENTIFIER + \ pyparsing.Group(LPAREN + pyparsing.Optional(ARGS) + RPAREN) # Value VALUE = pyparsing.originalTextFor(EXPRESSION | pyparsing.Word(pyparsing.alphanums + "_()+-/*")) # Single declaration VARIABLE = (IDENTIFIER("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER)("size") + RBRACK) + pyparsing.Optional(pyparsing.Literal("=").suppress() + VALUE("value"))) # Several declarations at once DECLARATION = (STORAGE("storage") + pyparsing.Optional(PRECISION)("precision") + TYPE("type") + pyparsing.delimitedList(VARIABLE("variable*")) + END) DECLARATION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DECLARATION.scanString(code): for token in tokens.variable: print(tokens.storage, tokens.precision, tokens.type,) print(token.name, token.size) # ------------------------ def get_definitions(code): """ Get all structure definitions and associated declarations. Code example ------------ uniform struct Light { vec4 position; vec3 color; } light0, light1; """ # Single declaration DECLARATION = pyparsing.Group(IDENTIFIER("name") + pyparsing.Optional(LBRACK + (INT | IDENTIFIER)("size") + RBRACK)) # Several declarations at once DECLARATIONS = (pyparsing.Optional(PRECISION) + TYPE + pyparsing.delimitedList(DECLARATION) + END) # Definition + declarations DEFINITION = (STRUCT + IDENTIFIER("name") + LBRACE + pyparsing.OneOrMore(DECLARATIONS)('content') + RBRACE + pyparsing.Optional(pyparsing.delimitedList(DECLARATION("declarations*"))) + END) DEFINITION.ignore(pyparsing.cStyleComment) for (tokens, start, end) in DEFINITION.scanString(code): for token in tokens.declarations: print(tokens.name, token.name) # print tokens.content # ---------------- def resolve(code): """ Expand const and preprocessor definitions in order to get constant values. 
Return the transformed code """ constants = {} DEFINITION = (pyparsing.Literal("#") + pyparsing.Literal("define") + IDENTIFIER("name") + pyparsing.restOfLine("value")) VALUE = pyparsing.Word(pyparsing.alphanums + "_()+-/*") DECLARATION = (pyparsing.Literal("const") + TYPE("type") + IDENTIFIER("name") + pyparsing.Literal("=") + VALUE("value") + pyparsing.Literal(";")) REFERENCE = pyparsing.Forward() def process_definition(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "#define " + t.name + " " + value def process_declaration(s, l, t): value = REFERENCE.transformString(t.value) constants[t.name] = value REFERENCE &lt;&lt; pyparsing.MatchFirst( map(pyparsing.Keyword, constants.keys())) return "const " + t.type + " " + t.name + "=" + value + ";" def process_reference(s, l, t): return constants[t[0]] REFERENCE.setParseAction(process_reference) DEFINITION.setParseAction(process_definition) DECLARATION.setParseAction(process_declaration) EXPANDER = REFERENCE | DEFINITION | DECLARATION code = EXPANDER.transformString(code) for key, val in constants.items(): constants[key] = eval(val) return code, constants # ----------------------------------------------------------------------------- if __name__ == '__main__': code = """ #version 120 #define A (1) const int B=(A+2); #define C (B+3) const int D=C+4; uniform float array[D]; struct Point { vec4 position; float size; }; uniform struct Light { vec4 position; vec3 color; } light0, light1; const float PI = 3.14159265358979323846264; const float SQRT_2 = 1.4142135623730951; uniform vec4 fg_color = vec4(1), bg_color = vec4(vec3(0),1); mediump vec3 compute_normal(vec4 position, vec3 orientation); vec3 /* */ compute_light(vec4, vec3, float intensity) { vec3 hello; vec3 hello; } """ code, _ = resolve(code) print("GLSL version: %s\n" % get_version(code)) get_definitions(code) get_declarations(code) get_prototypes(code) get_functions(code) # code = """ # #if A # #if B # #if C # #endif # #endif # #endif # """ # IF = (pyparsing.Literal('#') + (pyparsing.Keyword('if') | pyparsing.Keyword('ifdef') | pyparsing.Keyword('ifndef'))) # ENDIF = (pyparsing.Literal('#') + pyparsing.Keyword('endif')) # MACRO = (IF + pyparsing.restOfLine() + # SkipTo(ENDIF, include=True)).setParseAction(pyparsing.originalTextFor) # for (tokens, start, end) in MACRO.scanString(code): # print tokens </code></pre>
0
2016-09-26T14:28:16Z
[ "python", "python-3.x", "glsl", "pyparsing" ]
Query a boolean property in NDB Python
39,702,889
<p>I have a Greeting model</p> <pre><code>class Greeting(ndb.Model): author = ndb.StructuredProperty(Author) content = ndb.TextProperty(indexed=False) avatar = ndb.BlobProperty() date = ndb.DateTimeProperty(auto_now_add=True) public = ndb.BooleanProperty(default=False) </code></pre> <p>wherein I use the following code to query seven Greeting posts at each page in Python Google App Engine:</p> <pre><code>posts_query = Greeting.query( ancestor=session_key(session_name)).order(-Greeting.date) curs = Cursor(urlsafe=self.request.get('cursor')) posts,next_curs, more = posts_query.fetch_page(7, start_cursor=curs) </code></pre> <p>I wanted it to show only posts to that has public modified into True so I changed it into </p> <pre><code>posts_query = Greeting.query( ancestor=session_key(session_name), Greeting.public == True).order(-Greeting.date) #line changed curs = Cursor(urlsafe=self.request.get('cursor')) posts,next_curs, more = posts_query.fetch_page(7, start_cursor=curs) </code></pre> <p>However, it is giving me an error:</p> <pre><code>File "/home/ralf/Desktop/google_projects/website/views/events.py", line 28 Greeting.public == True).order(-Greeting.date) SyntaxError: non-keyword arg after keyword arg </code></pre> <p>How can I fix this? What is the appropriate code for this kind of query? Help is greatly appreciated.</p> <p>P.S. As you can see I am also using a query cursor.</p>
1
2016-09-26T12:30:57Z
39,703,562
<p>I fixed it by using this </p> <pre><code>posts_query = Greeting.query(Greeting.public == True).order(-Greeting.date) </code></pre> <p>instead of </p> <pre><code>posts_query = Greeting.query(ancestor=session_key(session_name), Greeting.public == True).order(-Greeting.date) </code></pre>
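<p>For reference, the original error is a plain Python rule: positional arguments must come before keyword arguments. A sketch that keeps the ancestor filter (assuming that filter is still wanted) would put the filter node first:</p> <pre><code>posts_query = Greeting.query(Greeting.public == True,
                             ancestor=session_key(session_name)
                             ).order(-Greeting.date)
</code></pre>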
0
2016-09-26T12:59:31Z
[ "python", "google-app-engine", "app-engine-ndb" ]
requests - fetch data from api-based website
39,703,021
<p>I want to get all the review from <a href="https://www.traveloka.com/hotel/singapore/mandarin-orchard-singapore-10602" rel="nofollow">this site</a>.</p> <p>at first, I use this code:</p> <pre><code>import requests from bs4 import BeautifulSoup r = requests.get( "https://www.traveloka.com/hotel/singapore/mandarin-orchard-singapore-10602") data = r.content soup = BeautifulSoup(data, "html.parser") reviews = soup.find_all("div", {"class": "reviewText"}) for i in range(len(reviews)): print(reviews[i].get_text()) </code></pre> <p>But this way, I can only get the reviews from the first page only.</p> <p>Some said I could use api for this using the same <code>requests</code> module. I've found the api which is <a href="https://api.traveloka.com/v1/hotel/hotelReviewAggregate" rel="nofollow">https://api.traveloka.com/v1/hotel/hotelReviewAggregate</a> but I can't read the parameter because I don't know how to use api which use <code>request payload</code> way.</p> <p>So I'm hoping for a code to get all the review using python or the parameter of api to get the review of specific hotel in all or specific pages.</p>
0
2016-09-26T12:36:19Z
39,704,516
<p>Look at the request payload at the network tab. There is a part where <code>skip:8</code> and <code>top:8</code> and you will see those numbers increment by 8 when you click on the right arrow to get the next page of reviews. </p> <p>You can duplicate that request and scrape the results the same way</p> <p><strong>Edit:</strong></p> <p>Open your page with chrome and hit <code>f12</code>. Go to <code>Network</code> tab, scroll down at the bottom of your page where you can advance to the next batch of reviews. As soon as you hit the right arrow the network tab will be populated. Find the second <code>hotelReviewAggregate</code> and click on it. Under the headers tab you will find <code>Request Payload</code>. Open the <code>data</code> dict and find <code>skip</code> and <code>top</code>. Advance the next batch of reviews and see how those numbers change. You can simulate this behavior to get to the other pages. </p> <p>Then what you need to do is to prepare your payload where you can increment the values and make <code>GET</code> requests and use the <code>response objects</code> to scrape the data with BeautifulSoup. </p> <p>Requests <a href="http://docs.python-requests.org/en/master/user/quickstart/" rel="nofollow">here</a></p> <p>Quick Example from the tutorial:</p> <p><code>payload = {'key1': 'value1', 'key2': 'value2'} r = requests.get('http://httpbin.org/get', params=payload)</code></p> <p>I don't know why people decided to give a negative value to my answer without an explanation. But ohh well, If you find this useful and answers your question, please accept it.</p>
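<p>A rough sketch of that loop; the parameter names and response shape here are assumptions read off the browser's network tab, not a documented API, and the request may need to be a POST with a JSON body rather than a GET:</p> <pre><code>import requests

url = 'https://api.traveloka.com/v1/hotel/hotelReviewAggregate'
skip, top = 0, 8
while True:
    payload = {'skip': skip, 'top': top}   # assumed parameter names
    r = requests.post(url, json=payload)   # or requests.get(url, params=payload)
    data = r.json()
    if not data:                           # stop condition depends on the real response
        break
    # ... pull the review text out of `data` here ...
    skip += top
</code></pre>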
-1
2016-09-26T13:41:39Z
[ "python", "python-2.7", "api", "python-requests" ]
SQLAlchemy: Inconsistent behaviour after commit
39,703,044
<p>I have a SQLAlchemy ORM, whose <code>__str__()</code> function prints a table of keys and values. The following code commits it to the DB and prints it (attribute names changed for clarity):</p> <pre><code>user.some_attribute = &lt;Some integer&gt; session.add(user) session.commit() app.logger.debug("some_attribute is %s" % user.some_attribute) app.logger.debug("Created a DB row:\n%s" % user) </code></pre> <p><strong>The crazy thing here is that the second debug print only works if the first debug exists!</strong></p> <p>In other words, if the two debug lines exist, I get:</p> <pre><code>some_attribute is 5 +------------------------------------------+-------------------------------------------------+ | Field | Value | +------------------------------------------+-------------------------------------------------+ | creation_time | 2016-09-26 15:25:45.630230 | | description | Test Poly 835 | | destination_truck_ids | ['A'] | | future_event_series_id | None | | id | 1017 | ... </code></pre> <p>But if only the second one is present, I get:</p> <pre><code>+-------+-------+ | Field | Value | +-------+-------+ +-------+-------+ </code></pre> <p><strong>Why does the ORM have no attributes, unless one of its features is printed?</strong> </p>
-1
2016-09-26T12:37:24Z
39,703,194
<p>I found a workaround, even though I'm not really sure about the nature of the problem.</p> <p>Quoting <a href="http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.commit" rel="nofollow">the manual</a>:</p> <blockquote> <p>By default, the Session also expires all database loaded state on all ORM-managed attributes after transaction commit. This so that subsequent operations load the most recent data from the database. This behavior can be disabled using the expire_on_commit=False option to sessionmaker or the Session constructor.</p> </blockquote> <p>If I load the session with <code>expire_on_commit=False</code>, the problem is solved - I get the ORM even after the commit.</p>
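<p>A minimal sketch of where that option goes (the engine URL is a placeholder):</p> <pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///example.db')               # placeholder URL
Session = sessionmaker(bind=engine, expire_on_commit=False)  # keep attributes loaded after commit
session = Session()
</code></pre>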
0
2016-09-26T12:43:24Z
[ "python", "session", "orm", "sqlalchemy" ]
Join dataframes by column values pandas
39,703,165
<p>I have two data frames <code>df1</code> and <code>df2</code> taken from different databases. Each item in the dataframes is identified by an <code>id</code>.</p> <pre><code>df1 = pd.DataFrame({'id':[10,20,30,50,100,110],'cost':[100,0,300,570,400,140]})
df2 = pd.DataFrame({'id':[10,23,30,58,100,110],'name':['a','b','j','d','k','g']})
</code></pre> <p>There are some common products in both dataframes, in this case those with the ids 10, 30, 100 and 110. I want to merge this information into one single dataframe, like this one:</p> <pre><code>df3 = pd.DataFrame({'id':[10,30,100,110],'name':['a','j','k','g'],'cost':[100,300,400,140]})
</code></pre> <p>I was trying to do it with dictionaries and nested loops, but I am handling a rather big amount of data and it just takes too long to do it that way.</p>
1
2016-09-26T12:42:26Z
39,703,204
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>; the default parameter <code>how='inner'</code> is omitted:</p> <pre><code>print (pd.merge(df1,df2,on='id'))
   cost   id name
0   100   10    a
1   300   30    j
2   400  100    k
3   140  110    g
</code></pre>
2
2016-09-26T12:43:54Z
[ "python", "pandas", "dataframe", "merge", "inner-join" ]
unable to run a python script from localhost via ansible
39,703,224
<p>I have my structure as :</p> <pre><code>playbooks_only | |- read_replica_boto3.yml |- roles | |-read_replica_boto3 |-defaults |-tasks--&gt;&gt; main.yml |-files--&gt;&gt; - rds_read_replica_ops.py - sample.yml </code></pre> <p>I need to run the rds_read_replica_ops.py , i wrote the following :</p> <pre><code>- name: Create a cross-region replica using boto3 script command: python rds_read_replica_ops.py sample.yml args: chdir: '"{{ role_path }}"/files' </code></pre> <p>But this can't find the file and says:</p> <pre><code>sg: cannot change to directory '/home/blah/recovery/playbooks_only/"/home/blah/recovery/playbooks_only/roles/read_replica_boto3"/files': path does not exist FATAL: all hosts have already failed -- aborting </code></pre>
1
2016-09-26T12:45:00Z
39,703,412
<p>You have a typo in this line:</p> <pre><code> chdir: '"{{ role_path }}"/files'
</code></pre> <p>You shouldn't wrap the variable in an extra pair of double quotes inside the already-quoted string; Ansible treats those inner quotes as literal characters in the path, which is why the directory cannot be found. Instead, change the line to:</p> <pre><code> chdir: '{{ role_path }}/files'
</code></pre> <p>And that should work!</p>
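<p>Put together, the task from the question with that fix applied would look like this:</p> <pre><code>- name: Create a cross-region replica using boto3 script
  command: python rds_read_replica_ops.py sample.yml
  args:
    chdir: '{{ role_path }}/files'
</code></pre>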
2
2016-09-26T12:53:05Z
[ "python", "ansible", "ansible-playbook" ]
Read and parse a file of tokens?
39,703,241
<p><strong>EDIT</strong>: <em>I have edited the question to correct a major mistake (which unfortunately invalidates all the answers provided so far): the command lines can contain spaces between the words, so no solution based on using spaces as delimiters between the tokens and their parameters will work! I deeply apologize for this omission in my original post.</em></p> <p>I have a text file containing commands in a simple (hypothetical) command language, as follows:</p> <pre><code>$BOOLEAN_COMMAND $NUMERIC COMMAND ALPHA 1 3 6 9 10 $NUMERIC COMMAND BETA 2 7 9 10 15 25 40 900 2000 $NUMERIC COMMAND GAMMA 6 9 11 </code></pre> <p>1) Each "COMMAND" starts with a special character ('$') and may be followed by a sequence of digits (the "command parameters").</p> <p>2) Commands without parameters are considered "boolean commands" and assume by default a value of True.</p> <p>3) There can be many commands with parameters (I call them here "Alpha", "Beta", etc.), but no matter their names, all are followed by one of more lines containing parameters.</p> <p>4) There may or may not be blank lines between lines contaning commands.</p> <p>I wrote a function which reads a file containing said commands and parameters and returns only the parameters of a specific command (passed as a function parameter). Here it is:</p> <pre><code>def get_params(fname, command): fspecs = open(fname,"r") params = [] for cline in fspecs: cline = cline.strip() if not cline: continue # Blank line if cline.startswith('$'): if command in cline: params = cline.partition(command)[-1].split() #else: # Continuation of a command. # params.append(cline) fspecs.close() if len(params) == 0: # Boolean command, defaults to True ret_val = True else: ret_val = ' '.join(params) # Numeric command, gets parameters return ret_val p = get_params('command_file', '$BOOLEAN COMMAND') print p # returns True p = get_params('command_file', '$NUMERIC COMMAND ALPHA') print p # returns 1 3 6 9 10 p = get_params('command_file', '$NUMERIC COMMAND BETA') print p # should return 2 7 9 10 15, but returns True </code></pre> <p>The above code works when the parameters of a given command are in a single line (immediately after the command token), but fails when the parameters are in subsequent lines (in that case, it just returns 'True' because no parameters are found after the command token). If the 'else' clause is not commented out, it just takes all lines containing parameters of whatever tokens there are up to the end of the file. Actually running the above code will better demonstrate the problem.</p> <p>What I want is being able to read one specific token (passed to the function) and get only its parameters, no matter if they extend into several lines or how many other tokens there may be in the command file.</p>
3
2016-09-26T12:45:56Z
39,703,860
<p>As the commands may take more than one line, it's much easier to NOT split the text file by newlines. I would suggest splitting by '$' instead.</p> <p>This example code works:</p> <pre><code>def get_params(fname, desired_command):
    with open(fname,"r") as f:
        content = f.read()

    for element in content.split('$'):
        element = element.replace('\n', ' ').strip()
        if not element:
            continue

        if ' ' in element:
            command, result = element.split(' ', 1)
        else:
            command, result = element, True

        if desired_command == command or desired_command == '${}'.format(command):
            return result
</code></pre> <hr> <p>Here is my edit which works with space-containing commands:</p> <pre><code>import re

COMMAND_RE = re.compile('([A-Z_ ]+[A-Z]) ?(.+)? *')

def get_params(fname, desired_command):
    with open(fname,"r") as f:
        content = f.read()

    for element in content.split('$'):
        element = element.replace('\n', ' ').strip()
        if not element:
            continue

        command, result = COMMAND_RE.search(element).groups()
        if desired_command == command or desired_command == '${}'.format(command):
            return result or True
</code></pre>
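<p>Called against the sample file from the question (saved here as <code>command_file</code>), the regex version would be used like this; the expected results are shown as comments:</p> <pre><code>print(get_params('command_file', '$BOOLEAN_COMMAND'))        # True
print(get_params('command_file', '$NUMERIC COMMAND ALPHA'))  # 1 3 6 9 10
print(get_params('command_file', '$NUMERIC COMMAND BETA'))   # 2 7 9 10 15 25 40 900 2000
</code></pre>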
2
2016-09-26T13:12:39Z
[ "python", "python-2.7" ]
Read and parse a file of tokens?
39,703,241
<p><strong>EDIT</strong>: <em>I have edited the question to correct a major mistake (which unfortunately invalidates all the answers provided so far): the command lines can contain spaces between the words, so no solution based on using spaces as delimiters between the tokens and their parameters will work! I deeply apologize for this omission in my original post.</em></p> <p>I have a text file containing commands in a simple (hypothetical) command language, as follows:</p> <pre><code>$BOOLEAN_COMMAND $NUMERIC COMMAND ALPHA 1 3 6 9 10 $NUMERIC COMMAND BETA 2 7 9 10 15 25 40 900 2000 $NUMERIC COMMAND GAMMA 6 9 11 </code></pre> <p>1) Each "COMMAND" starts with a special character ('$') and may be followed by a sequence of digits (the "command parameters").</p> <p>2) Commands without parameters are considered "boolean commands" and assume by default a value of True.</p> <p>3) There can be many commands with parameters (I call them here "Alpha", "Beta", etc.), but no matter their names, all are followed by one of more lines containing parameters.</p> <p>4) There may or may not be blank lines between lines contaning commands.</p> <p>I wrote a function which reads a file containing said commands and parameters and returns only the parameters of a specific command (passed as a function parameter). Here it is:</p> <pre><code>def get_params(fname, command): fspecs = open(fname,"r") params = [] for cline in fspecs: cline = cline.strip() if not cline: continue # Blank line if cline.startswith('$'): if command in cline: params = cline.partition(command)[-1].split() #else: # Continuation of a command. # params.append(cline) fspecs.close() if len(params) == 0: # Boolean command, defaults to True ret_val = True else: ret_val = ' '.join(params) # Numeric command, gets parameters return ret_val p = get_params('command_file', '$BOOLEAN COMMAND') print p # returns True p = get_params('command_file', '$NUMERIC COMMAND ALPHA') print p # returns 1 3 6 9 10 p = get_params('command_file', '$NUMERIC COMMAND BETA') print p # should return 2 7 9 10 15, but returns True </code></pre> <p>The above code works when the parameters of a given command are in a single line (immediately after the command token), but fails when the parameters are in subsequent lines (in that case, it just returns 'True' because no parameters are found after the command token). If the 'else' clause is not commented out, it just takes all lines containing parameters of whatever tokens there are up to the end of the file. Actually running the above code will better demonstrate the problem.</p> <p>What I want is being able to read one specific token (passed to the function) and get only its parameters, no matter if they extend into several lines or how many other tokens there may be in the command file.</p>
3
2016-09-26T12:45:56Z
39,704,405
<p>Here is my approach: split everything based on white spaces (spaces, tabs and new lines). Then construct a dictionary with command names as keys and parameters as values. From this dictionary, you can look up parameters for any command. This approach opens and reads the file only once:</p> <pre><code>from collections import deque def parse_commands_file(filename): with open(filename) as f: tokens = deque(f.read().split()) command2parameters = dict() while tokens: command_name = tokens.popleft() # Added while tokens and tokens[0].isalpha() and not tokens[0].startswith('$'): command_name = command_name + ' ' + tokens.popleft() # end added parameters = [] while tokens and not tokens[0].startswith('$'): parameters.append(int(tokens.popleft())) command2parameters[command_name] = parameters or True return command2parameters if __name__ == '__main__': command = parse_commands_file('commands.txt') print '$BOOLEAN_COMMAND:', command.get('$BOOLEAN_COMMAND') print '$NUMERIC_COMMAND_ALPHA:', command.get('$NUMERIC_COMMAND_ALPHA') print '$NUMERIC_COMMAND_BETA:', command.get('$NUMERIC_COMMAND_BETA') </code></pre> <p>Output:</p> <pre><code>$BOOLEAN_COMMAND: True $NUMERIC_COMMAND_ALPHA: [1, 3, 6, 9, 10] $NUMERIC_COMMAND_BETA: [2, 7, 9, 10, 15, 25, 40, 900, 2000] </code></pre> <h1>Discussion</h1> <ul> <li>I use the <code>deque</code> data structure, which stands for <em>double-end queue</em>. This structure behaves like a list, but more efficient in term of insert and pop from both ends</li> <li>When parsing the parameters, I converted them to <code>int</code>, you can convert them to float or leave them be</li> <li>The expression <code>parameters or True</code> basically says: if parameters is empty, use <code>True</code>, otherwise, leave it be</li> </ul> <h1>Update</h1> <p>I have added a patch to handle commands with spaces in their names. However, this solution is just a patch, it does not work if you have multiple spaces such as:</p> <pre><code>$MY COMMAND HERE </code></pre> <p>In this case, multiple spaces got squeezed into one.</p>
1
2016-09-26T13:37:00Z
[ "python", "python-2.7" ]
Read and parse a file of tokens?
39,703,241
<p><strong>EDIT</strong>: <em>I have edited the question to correct a major mistake (which unfortunately invalidates all the answers provided so far): the command lines can contain spaces between the words, so no solution based on using spaces as delimiters between the tokens and their parameters will work! I deeply apologize for this omission in my original post.</em></p> <p>I have a text file containing commands in a simple (hypothetical) command language, as follows:</p> <pre><code>$BOOLEAN_COMMAND $NUMERIC COMMAND ALPHA 1 3 6 9 10 $NUMERIC COMMAND BETA 2 7 9 10 15 25 40 900 2000 $NUMERIC COMMAND GAMMA 6 9 11 </code></pre> <p>1) Each "COMMAND" starts with a special character ('$') and may be followed by a sequence of digits (the "command parameters").</p> <p>2) Commands without parameters are considered "boolean commands" and assume by default a value of True.</p> <p>3) There can be many commands with parameters (I call them here "Alpha", "Beta", etc.), but no matter their names, all are followed by one of more lines containing parameters.</p> <p>4) There may or may not be blank lines between lines contaning commands.</p> <p>I wrote a function which reads a file containing said commands and parameters and returns only the parameters of a specific command (passed as a function parameter). Here it is:</p> <pre><code>def get_params(fname, command): fspecs = open(fname,"r") params = [] for cline in fspecs: cline = cline.strip() if not cline: continue # Blank line if cline.startswith('$'): if command in cline: params = cline.partition(command)[-1].split() #else: # Continuation of a command. # params.append(cline) fspecs.close() if len(params) == 0: # Boolean command, defaults to True ret_val = True else: ret_val = ' '.join(params) # Numeric command, gets parameters return ret_val p = get_params('command_file', '$BOOLEAN COMMAND') print p # returns True p = get_params('command_file', '$NUMERIC COMMAND ALPHA') print p # returns 1 3 6 9 10 p = get_params('command_file', '$NUMERIC COMMAND BETA') print p # should return 2 7 9 10 15, but returns True </code></pre> <p>The above code works when the parameters of a given command are in a single line (immediately after the command token), but fails when the parameters are in subsequent lines (in that case, it just returns 'True' because no parameters are found after the command token). If the 'else' clause is not commented out, it just takes all lines containing parameters of whatever tokens there are up to the end of the file. Actually running the above code will better demonstrate the problem.</p> <p>What I want is being able to read one specific token (passed to the function) and get only its parameters, no matter if they extend into several lines or how many other tokens there may be in the command file.</p>
3
2016-09-26T12:45:56Z
39,706,835
<p>Here is another solution. This one uses regular expression and it does not squeeze multiple spaces within a command:</p> <pre><code>import re def parse_commands_file(filename): command_pattern = r""" (\$[A-Z _]+)* # The command, optional ([0-9 \n]+)* # The parameter which might span multiple lines, optional """ command_pattern = re.compile(command_pattern, flags=re.VERBOSE) with open(filename) as f: tokens = re.findall(command_pattern, f.read()) return {cmd.strip(): [int(n) for n in params.split()] for cmd, params in tokens} if __name__ == '__main__': command = parse_commands_file('commands.txt') print '$BOOLEAN_COMMAND:', command.get('$BOOLEAN_COMMAND') print '$NUMERIC COMMAND ALPHA:', command.get('$NUMERIC COMMAND ALPHA') print '$NUMERIC COMMAND BETA:', command.get('$NUMERIC COMMAND BETA') </code></pre> <h1>Discussion</h1> <p>Basically, the command pattern says each line might contain two parts the command name and the numerical parameters, both are optional. </p> <p>Note that a command might contain trailing space, that is why we strip them off using the expression <code>cmd.strip()</code>.</p> <p>Also, the parameters part returned by <code>re.findall()</code> needs to be parsed by splitting them off by white spaces, then convert to <code>int</code> with the expression <code>[int(n) for n in params.split()]</code></p>
1
2016-09-26T15:32:09Z
[ "python", "python-2.7" ]
Elementwise operation on pandas series
39,703,665
<p>I have a pandas Series <code>x</code> with values <code>1</code>, <code>2</code> or <code>3</code>.</p> <p>I want it to have values <code>monkey</code>, <code>gorilla</code>, and <code>tarzan</code> depending on the values.</p> <p>I guess I should do something like</p> <pre><code>values = ['monkey', 'gorilla', 'tarzan']
x = values[x - 1]
</code></pre> <p>but it doesn't work. I guess it's because it doesn't operate element-wise.</p>
0
2016-09-26T13:04:14Z
39,703,706
<p>Use mapping by a <code>dict</code> with the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> function.</p> <p>Sample:</p> <pre><code>s = pd.Series([1,2,3])
print (s)
0    1
1    2
2    3
dtype: int64

d = {1:'monkey',2:'gorilla',3:'tarzan'}

print (s.map(d))
0     monkey
1    gorilla
2     tarzan
dtype: object
</code></pre>
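<p>If you would rather keep the original <code>values</code> list from the question, you can build that dict from it first; a small sketch:</p> <pre><code>values = ['monkey', 'gorilla', 'tarzan']
d = {i + 1: v for i, v in enumerate(values)}  # {1: 'monkey', 2: 'gorilla', 3: 'tarzan'}
x = x.map(d)  # x is the Series from the question
</code></pre>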
2
2016-09-26T13:05:50Z
[ "python", "pandas", "sklearn-pandas" ]
Specify which Python version interprets an executable
39,703,725
<p>I developed an application with Python 3 that produces various executables. I then used <code>setuptools</code> to build and distribute this application, again all using Python 3. </p> <p>When this application is installed in a test environment, the executables are being correctly deployed to the <code>bin</code> folder and thus become invokable from anywhere in the system. However, when these executables are invoked, the system tries to use the Python 2 interpreter, leading to an exception. How can I make sure the Python interpreter is used when I invoke these executables?</p>
1
2016-09-26T13:06:42Z
39,704,446
<p>You might need to use <a href="http://stackoverflow.com/questions/25165808/should-i-use-a-shebang-with-bash-scripts">shebangs</a> on your scripts, which are small strings at the beginning that specify what binary should interpret them.</p> <p>In your case you need to add <code>#!/usr/bin/env python3</code> at the beginning of your scripts. The shell should read this and pass the script to your installed <code>python3</code> interpreter.</p> <p>Example:</p> <pre><code>#!/usr/bin/env python3

# This should work on python3 and fail on python2:
print("Hello from python3!")
</code></pre>
-2
2016-09-26T13:38:49Z
[ "python", "python-3.x" ]
Specify which Python version interprets an executable
39,703,725
<p>I developed an application with Python 3 that produces various executables. I then used <code>setuptools</code> to build and distribute this application, again all using Python 3. </p> <p>When this application is installed in a test environment, the executables are being correctly deployed to the <code>bin</code> folder and thus become invokable from anywhere in the system. However, when these executables are invoked, the system tries to use the Python 2 interpreter, leading to an exception. How can I make sure the Python interpreter is used when I invoke these executables?</p>
1
2016-09-26T13:06:42Z
39,746,982
<p>I made sure <code>install</code> was run with Python 3 and that the resulting script included the correct header. Still I kept getting exceptions from Python 2.7.</p> <p>Out of desperation I created a new Python 3 virtual environment and in it the scripts started working properly. There are previous reports of old virtual environments going haywire, particularly <a href="http://stackoverflow.com/a/26499478/2066215">during a system upgrade</a>.</p> <p>For future reference, the command I used:</p> <p><code>mkvirtualenv -p /usr/bin/python3.5 venv_p3</code></p>
0
2016-09-28T12:05:28Z
[ "python", "python-3.x" ]
input column based on conditional statement pandas
39,703,734
<p>I have two Pandas data frames and wish to insert day and week columns from DF2 values in DF1 where the dates match. e.g. for example below, day - 4 and week - 1 would be extracted from DF2 row 1 and inserted to all of the day and week columns in DF1.</p> <p><strong>DF 1</strong></p> <pre><code>montgomery year date day week 0.0 2016 04/01/2016 0.0 2016 04/01/2016 0.0 2016 04/01/2016 0.0 2016 04/01/2016 </code></pre> <p><strong>DF 2</strong></p> <pre><code>date day week 04/01/2016 4 1 05/01/2016 5 1 06/01/2016 6 1 </code></pre> <p>I have looked into using the numpy conditional statements but haven't reached a solution, thanks</p>
0
2016-09-26T13:07:09Z
39,703,840
<p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p> <pre><code>#if empty columns in DF1, remove them DF1 = DF1.drop(['day','week'], axis=1) #convert columns to datetimes DF1.date = pd.to_datetime(DF1.date) DF2.date = pd.to_datetime(DF2.date) print (pd.merge(DF1,DF2, how='left', on='date')) montgomery year date day week 0 0.0 2016 2016-04-01 4 1 1 0.0 2016 2016-04-01 4 1 2 0.0 2016 2016-04-01 4 1 3 0.0 2016 2016-04-01 4 1 </code></pre>
0
2016-09-26T13:11:46Z
[ "python", "pandas", "numpy", "dataframe" ]
Calculate a new column with Pandas
39,703,749
<p>Based on <a href="http://stackoverflow.com/q/39698097/6618225">this</a> Question, I would like to know how can I use a def() to calculate a new column with Pandas and use more than one arguments (strings and integers)?</p> <p>Concrete example:</p> <pre><code>df_joined["IVbest"] = IV(df_joined["Saison"], df_joined["Wald_Typ"], df_joined["NS_Cap"]) </code></pre> <p>"Saison", "Wald_Typ" are strings "NS_Cap" is an integer</p> <p>Now I want to run all those values through this definition and return me again an x-value:</p> <pre><code>def IV(saison, wald, ns): if saison == "Sommer": if wald == "Laubwald": x = ns * 0.1 elif wald == "Nadelwald": x = ns * 0.2 elif wald == "Mischwald": x = ns * 0.3 elif saison == "Winter": if wald == "Laubwald": x = ns * 0.01 elif wald == "Nadelwald": x = ns * 0.02 elif wald == "Mischwald": x = ns * 0.03 return x </code></pre> <p>How would I accomplish that best?</p> <p>I have tried stuff like </p> <pre><code>df_joined["IVbest"] = IV(df_joined["Saison", "Wald_Typ", "NS_Cap"]) </code></pre> <p>or </p> <pre><code>df_joined["IVbest"] = df_joined["Saison", "Wald_Typ", "NS_Cap"].apply(IV) </code></pre> <p>but nothing works :(</p>
1
2016-09-26T13:07:46Z
39,704,712
<p>I think in this case it would be better to use 6 boolean masks and use these to perform the calculations just on those rows (note that the masks and the assignments should all refer to the same frame, <code>df_joined</code>):</p> <pre><code>sommer_laub = (df_joined['Saison'] == 'Sommer') &amp; (df_joined['Wald_Typ'] == 'Laubwald')
sommer_nadel = (df_joined['Saison'] == 'Sommer') &amp; (df_joined['Wald_Typ'] == 'Nadelwald')
sommer_misch = (df_joined['Saison'] == 'Sommer') &amp; (df_joined['Wald_Typ'] == 'Mischwald')
winter_laub = (df_joined['Saison'] == 'Winter') &amp; (df_joined['Wald_Typ'] == 'Laubwald')
winter_nadel = (df_joined['Saison'] == 'Winter') &amp; (df_joined['Wald_Typ'] == 'Nadelwald')
winter_misch = (df_joined['Saison'] == 'Winter') &amp; (df_joined['Wald_Typ'] == 'Mischwald')

df_joined.loc[sommer_laub, 'IVbest'] = df_joined.loc[sommer_laub, 'NS_Cap'] * 0.1
df_joined.loc[sommer_nadel, 'IVbest'] = df_joined.loc[sommer_nadel, 'NS_Cap'] * 0.2
df_joined.loc[sommer_misch, 'IVbest'] = df_joined.loc[sommer_misch, 'NS_Cap'] * 0.3
df_joined.loc[winter_laub, 'IVbest'] = df_joined.loc[winter_laub, 'NS_Cap'] * 0.01
df_joined.loc[winter_nadel, 'IVbest'] = df_joined.loc[winter_nadel, 'NS_Cap'] * 0.02
df_joined.loc[winter_misch, 'IVbest'] = df_joined.loc[winter_misch, 'NS_Cap'] * 0.03
</code></pre>
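<p>For completeness, the row-wise <code>apply</code> approach the question was originally aiming at also works, reusing the <code>IV</code> function exactly as defined in the question; it is usually slower than the masked assignments above on large data, but it answers "how do I call my def() per row" directly:</p> <pre><code>df_joined['IVbest'] = df_joined.apply(
    lambda row: IV(row['Saison'], row['Wald_Typ'], row['NS_Cap']),
    axis=1)
</code></pre>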
0
2016-09-26T13:49:57Z
[ "python", "python-2.7", "csv", "pandas" ]
Disable focus for tkinter widgets?
39,703,766
<p>How can I make a widget that will never get the focus in tkinter? For example, a button that the focus will skip over when I press TAB.</p>
0
2016-09-26T13:08:19Z
39,704,095
<p>found some time to provide working example ;)</p> <pre><code>import Tkinter import tkMessageBox root = Tkinter.Tk() but1 = Tkinter.Button(root, text ="Button 1") but1.pack() butNoFocus = Tkinter.Button(root, text ="Button no focus", takefocus = 0) butNoFocus.pack() but2 = Tkinter.Button(root, text ="Button 2") but2.pack() root.mainloop() </code></pre> <p><code>takefocus</code> option set to 0 will disable tab focus on created button.</p>
1
2016-09-26T13:23:05Z
[ "python", "button", "tkinter", "focus" ]
Find corresponding value with maximum in a list of lists
39,703,879
<p>I have a list (created from a .csv file) with an output like this: </p> <p><code>[('25.09.2016 01:00:00', 'MQ100D1_3_1_4', '225'), ('25.09.2016 02:00:00', 'MQ100D1_3_1_4', '173'), ('25.09.2016 03:00:00', 'MQ100D1_3_1_4', '106'), ('25.09.2016 04:00:00', 'MQ100D1_3_1_4', '74'), ('25.09.2016 05:00:00', 'MQ100D1_3_1_4', '84'), ('25.09.2016 06:00:00', 'MQ100D1_3_1_4', '122'), ('25.09.2016 07:00:00', 'MQ100D1_3_1_4', '110'), ('25.09.2016 08:00:00', 'MQ100D1_3_1_4', '177'), ('25.09.2016 09:00:00', 'MQ100D1_3_1_4', '301'), ('25.09.2016 10:00:00', 'MQ100D1_3_1_4', '552'), ('25.09.2016 11:00:00', 'MQ100D1_3_1_4', '812'), ('25.09.2016 12:00:00', 'MQ100D1_3_1_4', '922'), ('25.09.2016 13:00:00', 'MQ100D1_3_1_4', '970'), ('25.09.2016 14:00:00', 'MQ100D1_3_1_4', '1264'), ('25.09.2016 15:00:00', 'MQ100D1_3_1_4', '1338'), ('25.09.2016 16:00:00', 'MQ100D1_3_1_4', '1347'), ('25.09.2016 17:00:00', 'MQ100D1_3_1_4', '1491'), ('25.09.2016 18:00:00', 'MQ100D1_3_1_4', '1637'), ('25.09.2016 19:00:00', 'MQ100D1_3_1_4', '1544'), ('25.09.2016 20:00:00', 'MQ100D1_3_1_4', '974'), ('25.09.2016 21:00:00', 'MQ100D1_3_1_4', '503'), ('25.09.2016 22:00:00', 'MQ100D1_3_1_4', '359'), ('25.09.2016 23:00:00', 'MQ100D1_3_1_4', '218'), ('25.09.2016 23:59:59', 'MQ100D1_3_1_4', '132')......</code></p> <p>The first element is the time value. The second is the measuring point. The third the amount of cars measured in that time interval.</p> <p>There are 536 items in total.</p> <p>For my program, I need the maximum of the third element in chunks of 12 (before noon, after noon). </p> <p>For the maximum I've written the code:</p> <p><code>i = 0 topHour = [] for i in range(0, len(finalList), 12): values = max([int(i[-1]) for i in finalList[i:i+12]]) topHour.append(values)</code></p> <p>This provides me with an output like this:</p> <pre><code>[922, 1637, ...] </code></pre> <p>However, how do I get the corresponding time value (first element) with it? In this very example the program should output <code>'25.09.2016 12:00:00'</code> and <code>'25.09.2016 18:00:00'</code> together with the maximum.</p>
0
2016-09-26T13:13:20Z
39,703,997
<p>Iterate on the tuples (not on the last items in each tuple) and use the <code>key</code> function in <code>max</code> to get the last item that is used to compute the maximum.</p> <p>You can do this with a <em>list comprehension</em>:</p> <pre><code>top_hour = [max(lst[i:i+12], key=lambda x: int(x[-1])) for i in range(0, len(lst), 12)] # ^^^ print(top_hour) # [('25.09.2016 12:00:00', 'MQ100D1_3_1_4', '922'), ('25.09.2016 18:00:00', 'MQ100D1_3_1_4', '1637')] </code></pre>
1
2016-09-26T13:18:35Z
[ "python", "list", "python-3.x" ]
Find corresponding value with maximum in a list of lists
39,703,879
<p>I have a list (created from a .csv file) with an output like this: </p> <p><code>[('25.09.2016 01:00:00', 'MQ100D1_3_1_4', '225'), ('25.09.2016 02:00:00', 'MQ100D1_3_1_4', '173'), ('25.09.2016 03:00:00', 'MQ100D1_3_1_4', '106'), ('25.09.2016 04:00:00', 'MQ100D1_3_1_4', '74'), ('25.09.2016 05:00:00', 'MQ100D1_3_1_4', '84'), ('25.09.2016 06:00:00', 'MQ100D1_3_1_4', '122'), ('25.09.2016 07:00:00', 'MQ100D1_3_1_4', '110'), ('25.09.2016 08:00:00', 'MQ100D1_3_1_4', '177'), ('25.09.2016 09:00:00', 'MQ100D1_3_1_4', '301'), ('25.09.2016 10:00:00', 'MQ100D1_3_1_4', '552'), ('25.09.2016 11:00:00', 'MQ100D1_3_1_4', '812'), ('25.09.2016 12:00:00', 'MQ100D1_3_1_4', '922'), ('25.09.2016 13:00:00', 'MQ100D1_3_1_4', '970'), ('25.09.2016 14:00:00', 'MQ100D1_3_1_4', '1264'), ('25.09.2016 15:00:00', 'MQ100D1_3_1_4', '1338'), ('25.09.2016 16:00:00', 'MQ100D1_3_1_4', '1347'), ('25.09.2016 17:00:00', 'MQ100D1_3_1_4', '1491'), ('25.09.2016 18:00:00', 'MQ100D1_3_1_4', '1637'), ('25.09.2016 19:00:00', 'MQ100D1_3_1_4', '1544'), ('25.09.2016 20:00:00', 'MQ100D1_3_1_4', '974'), ('25.09.2016 21:00:00', 'MQ100D1_3_1_4', '503'), ('25.09.2016 22:00:00', 'MQ100D1_3_1_4', '359'), ('25.09.2016 23:00:00', 'MQ100D1_3_1_4', '218'), ('25.09.2016 23:59:59', 'MQ100D1_3_1_4', '132')......</code></p> <p>The first element is the time value. The second is the measuring point. The third the amount of cars measured in that time interval.</p> <p>There are 536 items in total.</p> <p>For my program, I need the maximum of the third element in chunks of 12 (before noon, after noon). </p> <p>For the maximum I've written the code:</p> <p><code>i = 0 topHour = [] for i in range(0, len(finalList), 12): values = max([int(i[-1]) for i in finalList[i:i+12]]) topHour.append(values)</code></p> <p>This provides me with an output like this:</p> <pre><code>[922, 1637, ...] </code></pre> <p>However, how do I get the corresponding time value (first element) with it? In this very example the program should output <code>'25.09.2016 12:00:00'</code> and <code>'25.09.2016 18:00:00'</code> together with the maximum.</p>
0
2016-09-26T13:13:20Z
39,704,232
<p>Using a lambda as the <code>key</code> function is appropriate; you can do it like this:</p> <pre><code>max(finalList[i:i+12], key=lambda x: int(x[-1]))
</code></pre> <p>This returns the whole tuple, so you can then process the triple further...</p> <p>By the way, in your code the name <code>i</code> is ambiguous (it is both the outer loop index and the variable inside the list comprehension), so take care.</p>
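<p>Plugged into the original loop, a sketch that collects the time stamp together with the maximum for each 12-entry chunk could look like this:</p> <pre><code>topHour = []
for i in range(0, len(finalList), 12):
    best = max(finalList[i:i+12], key=lambda row: int(row[-1]))
    topHour.append((best[0], int(best[-1])))  # e.g. ('25.09.2016 12:00:00', 922)
</code></pre>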
0
2016-09-26T13:29:28Z
[ "python", "list", "python-3.x" ]
Q: How do I concatenate str + int to = an existing variable
39,704,072
<p>I want to combine a str + int to equal a existing variable that I already defined. I tried looking up on how this can be done and I've only found how to concatenate a str and int together with </p> <pre><code>print "var%d" % currentIndex </code></pre> <p>what I have is currentIndex being the index number of the selection in a PyQt4 combo box. I related the index number of the combo box selection to a large file full of var0 - var30. Each one being a list of information that I want to pull on demand when ever currentIndex is changed.</p> <pre><code>var0 = [ "a", "b", "c", "d" ] ( user picks a selection from GUI comboBox ) print "var%d" % currentIndex var0 </code></pre> <p>It's not printing the list it's just printing the var0 as a string. How do I get the two to equal var0 the variable and not var0 the str?</p>
0
2016-09-26T13:22:00Z
39,704,671
<p>You can print the list like this:</p> <pre><code>print eval("var%d" % currentIndex)
</code></pre> <p>But I would suggest using a nested list rather than 30 variables:</p> <pre><code>var = [["a", "b", "c", "d"], ["e", "f"], ...]
print var[currentIndex]
</code></pre>
0
2016-09-26T13:48:14Z
[ "python", "string", "python-2.7", "concatenation" ]
How to write a recursive function that takes a list and return the same list without vowels?
39,704,084
<p>I am supposed to write a recursive function that takes a list of strings or a list of lists of strings and return the list without vowels, if found. Here is my attempt to solve it:</p> <pre><code>def noVow(seq): keys = ['a','i','e','o','u','u'] if not seq or not isinstance(seq, list) : return else: if seq[0] in keys: del seq[0] return (noVow(seq[0:])) else: return (noVow(seq[1:])) li = ["b", "c", "d","a"] print (noVow(li)) </code></pre> <p>I am aware that the bug lies in my base case however I can't come up with the right base case. </p> <p>Note that the recursive function has to be written in pure functional programming i.e. side effects are not allowed.</p>
3
2016-09-26T13:22:43Z
39,704,387
<blockquote> <p>return the <strong>same</strong> list without vowels</p> </blockquote> <p>Eh, you're <em>slicing</em> the original list in the recursive calls, so you have a <strong>copy</strong> not the same list. </p> <p>More so, your code actually works, but since you're passing a <em>slice</em> of the list, the vowel items in the slice (not the original list) are deleted and the original remains unchanged. </p> <p>You can instead use a <em>non-slicing</em> variant that moves from <em>start</em> to <em>end</em> indices of the original list:</p> <pre><code>def no_vow(seq, index=0): keys = ['a','i','e','o','u'] if not seq or not isinstance(seq, list) or index &gt;= len(seq): return else: if seq[index] in keys: del seq[index] return no_vow(seq, index) else: return no_vow(seq, index+1) </code></pre> <p>Finally, if you're going to print your result, you shouldn't print the output of the function call (which will be <code>None</code>) but the list.</p> <hr> <p><strong>Trial</strong>:</p> <pre><code>li = ["b", "c", "e", "d", "a"] no_vow(li) # list is modified in-place print(li) # ["b", "c", "d"] </code></pre>
1
2016-09-26T13:36:27Z
[ "python", "recursion", "functional-programming" ]
How to write a recursive function that takes a list and return the same list without vowels?
39,704,084
<p>I am supposed to write a recursive function that takes a list of strings or a list of lists of strings and return the list without vowels, if found. Here is my attempt to solve it:</p> <pre><code>def noVow(seq): keys = ['a','i','e','o','u','u'] if not seq or not isinstance(seq, list) : return else: if seq[0] in keys: del seq[0] return (noVow(seq[0:])) else: return (noVow(seq[1:])) li = ["b", "c", "d","a"] print (noVow(li)) </code></pre> <p>I am aware that the bug lies in my base case however I can't come up with the right base case. </p> <p>Note that the recursive function has to be written in pure functional programming i.e. side effects are not allowed.</p>
3
2016-09-26T13:22:43Z
39,704,399
<pre><code>def no_vowel(seq):
    if not isinstance(seq, list):
        raise ValueError('Expected list, got {}'.format(type(seq)))
    if not seq:
        return []
    head, *tail = seq
    if isinstance(head, list):
        # recurse into the nested list, then keep processing the rest
        return [no_vowel(head)] + no_vowel(tail)
    else:
        if head in 'aeiou':
            return no_vowel(tail)
        else:
            return [head] + no_vowel(tail)
</code></pre> <p>The cool unpacking of the list is a Python 3 feature, and is very similar to functional programming's pattern matching. </p>
1
2016-09-26T13:36:52Z
[ "python", "recursion", "functional-programming" ]
How to write a recursive function that takes a list and return the same list without vowels?
39,704,084
<p>I am supposed to write a recursive function that takes a list of strings or a list of lists of strings and return the list without vowels, if found. Here is my attempt to solve it:</p> <pre><code>def noVow(seq): keys = ['a','i','e','o','u','u'] if not seq or not isinstance(seq, list) : return else: if seq[0] in keys: del seq[0] return (noVow(seq[0:])) else: return (noVow(seq[1:])) li = ["b", "c", "d","a"] print (noVow(li)) </code></pre> <p>I am aware that the bug lies in my base case however I can't come up with the right base case. </p> <p>Note that the recursive function has to be written in pure functional programming i.e. side effects are not allowed.</p>
3
2016-09-26T13:22:43Z
39,704,404
<p>Your base case returns None. So whenever you pass the empty list, None is sent up the stack of recursive calls. </p> <p>Moreover, you are not storing the characters which are not vowels, so your else case is wrong.</p> <p>What you can have is something like this: </p> <pre><code>&gt;&gt;&gt; def noVow(seq):
...     keys = ['a','i','e','o','u']
...     if not seq or not isinstance(seq, list):
...         return []
...     else:
...         if seq[0] in keys:
...             return noVow(seq[1:])
...         else:
...             return [seq[0]] + noVow(seq[1:])
</code></pre> <p>Also <code>seq[0:]</code> is equivalent to <code>seq</code>.</p>
1
2016-09-26T13:37:00Z
[ "python", "recursion", "functional-programming" ]
How to write a recursive function that takes a list and return the same list without vowels?
39,704,084
<p>I am supposed to write a recursive function that takes a list of strings or a list of lists of strings and return the list without vowels, if found. Here is my attempt to solve it:</p> <pre><code>def noVow(seq): keys = ['a','i','e','o','u','u'] if not seq or not isinstance(seq, list) : return else: if seq[0] in keys: del seq[0] return (noVow(seq[0:])) else: return (noVow(seq[1:])) li = ["b", "c", "d","a"] print (noVow(li)) </code></pre> <p>I am aware that the bug lies in my base case however I can't come up with the right base case. </p> <p>Note that the recursive function has to be written in pure functional programming i.e. side effects are not allowed.</p>
3
2016-09-26T13:22:43Z
39,713,114
<p>To work for a list of lists containing string and a flat list of strings, you need to iterate over the sequence and then check the type:</p> <pre><code>def noVow(seq): vowels = {'a', 'i', 'e', 'o', 'u', 'u'} for ele in seq: if isinstance(ele, list): # a list so recursively process yield [s for s in noVow(ele)] # else it has to be a string so just see if it is not a vowel elif ele not in vowels: yield ele </code></pre> <p>You use it like:</p> <pre><code>In [39]: li Out[39]: [['b', 'c'], ['d', 'a']] In [40]: li[:] = noVow(li) In [41]: print(li) [['b', 'c'], ['d']] In [42]: li = ["a","b","c","e"] In [43]: li[:] = noVow(li) In [44]: print(li) ['b', 'c'] In [10]: li = [["b", "c"], ["d", ["a"]]] In [11]: li[:] = noVow(li) In [12]: li Out[12]: [['b', 'c'], ['d', []]] # works for nested lists </code></pre> <p>If you wanted a flat list of all non-vowels and you can use python3, you can use <em>yield from</em>:</p> <pre><code>def noVow(seq): vowels = {'a', 'i', 'e', 'o', 'u', 'u'} for ele in seq: if isinstance(seq, list): yield from noVow(ele) elif ele not in vowels: yield ele </code></pre> <p>You use it the same way:</p> <pre><code>In [2]: li = [["b", "c"], ["d", "a"]] In [3]: li[:] = noVow(li) In [4]: li Out[4]: ['b', 'c', 'd'] In [5]: li = ["a","b","c","e"] In [6]: li[:] = noVow(li) In [7]: li Out[7]: ['b', 'c'] </code></pre> <p>You can do the same with python2, you just need another loop</p>
1
2016-09-26T22:02:28Z
[ "python", "recursion", "functional-programming" ]
How to write a recursive function that takes a list and return the same list without vowels?
39,704,084
<p>I am supposed to write a recursive function that takes a list of strings or a list of lists of strings and return the list without vowels, if found. Here is my attempt to solve it:</p> <pre><code>def noVow(seq): keys = ['a','i','e','o','u','u'] if not seq or not isinstance(seq, list) : return else: if seq[0] in keys: del seq[0] return (noVow(seq[0:])) else: return (noVow(seq[1:])) li = ["b", "c", "d","a"] print (noVow(li)) </code></pre> <p>I am aware that the bug lies in my base case however I can't come up with the right base case. </p> <p>Note that the recursive function has to be written in pure functional programming i.e. side effects are not allowed.</p>
3
2016-09-26T13:22:43Z
39,714,563
<p>I believe this solution correctly implements both of the criteria "a list of strings or a list of lists of strings" and "return the same list" without any external assistance:</p> <pre><code>def noVowels(sequence, index=0): if not (sequence and type(sequence) is list and index &lt; len(sequence)): return vowels = {'a','i','e','o','u'} if type(sequence[index]) is list: noVowels(sequence[index]) elif sequence[index] in vowels: del sequence[index] index -= 1 noVowels(sequence, index + 1) return sequence </code></pre> <p><strong>TEST</strong></p> <pre><code>array = [['a', 'b', 'c', 'd', 'e'], 'f', 'g', 'h', 'i', ['l', 'm', ['n', 'o', 'p'], 'q'], 'r', 's', 't', 'u'] print(array) print(id(array)) print(id(array[0])) result = noVowels(array) print(result) print(id(result)) print(id(result[0])) </code></pre> <p><strong>RESULT</strong></p> <pre><code>&gt; python3 test.py [['a', 'b', 'c', 'd', 'e'], 'f', 'g', 'h', 'i', ['l', 'm', ['n', 'o', 'p'], 'q'], 'r', 's', 't', 'u'] 4315447624 4315344520 [['b', 'c', 'd'], 'f', 'g', 'h', ['l', 'm', ['n', 'p'], 'q'], 'r', 's', 't'] 4315447624 4315344520 &gt; </code></pre> <p>Note that the same list, and inner lists, are left intact based on their id not changing. And it handles a list of lists of lists.</p> <p>I don't believe it's functional program because it works by side-effect but that contradiction is inherent in the OP's problem description.</p>
1
2016-09-27T01:01:58Z
[ "python", "recursion", "functional-programming" ]
Parse online comma delimited text file in Python 3.5
39,704,096
<p>I guess this is a combination of two questions - reading an online text file and then parsing the result into lists. I tried the following code, which can read the file, but I am not able to convert the result into a list:</p> <pre><code>import urllib.request

CFTC_URL = r"http://www.cftc.gov/dea/newcot/FinFutWk.txt"
CFTC_url = urllib.request.urlopen(CFTC_URL)
output = CFTC_url.read().decode('utf-8')
</code></pre>
-1
2016-09-26T13:23:09Z
39,704,721
<p>Rather than attempting to parse every line from the URL and put it into specific rows for a csv file, you can just push it all into a text file to clean up the formatting, and then read back from it. It may seem like a bit more work, but this is generally my approach to comma delimited information from a URL. </p> <pre><code>import requests

URL = "http://www.cftc.gov/dea/newcot/FinFutWk.txt"

r = requests.get(URL,stream=True)

with open('file.txt','w') as W:
    W.write(r.text)

with open('file.txt', 'r') as f:
    lines = f.readlines()

for line in lines:
    print(line.split(','))
</code></pre> <p>You can take what is in that for loop and swap it around to actually save the lists into an array of lists, so you can use them rather than print them.</p> <pre><code>content = []
for line in lines:
    content.append(line.split(','))
</code></pre> <p>Also note that upon splitting you will still notice that some fields have quite a large amount of white space after them. You could run through each list in the array and remove all white space, but that would ruin the first element in each list (the name contains spaces), or you could just convert the numeric values, which carry that white space and were read in as strings, into actual integers. That would be your preference. If you have any questions feel free to add a comment below.</p> <p>EDIT 1: On a side note, if you do not wish to keep the file that was saved with the content, import the os library and then, after you read the lines into the lines array, remove the file.</p> <pre><code>import os

os.remove('file.txt')
</code></pre>
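<p>A small sketch of that last suggestion, using the <code>content</code> list built above. <code>strip()</code> only trims leading/trailing white space (so the name field keeps its internal spaces), and fields that are purely digits are converted to <code>int</code>; this is an illustration only, so check it against the actual columns of the file:</p> <pre><code>cleaned = []
for row in content:
    stripped = [field.strip() for field in row]
    # turn purely numeric fields into ints, leave everything else as text
    cleaned.append([int(f) if f.lstrip('-').isdigit() else f for f in stripped])
</code></pre>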
0
2016-09-26T13:50:13Z
[ "python", "csv", "urllib", "delimited-text" ]
Parse online comma delimited text file in Python 3.5
39,704,096
<p>I guess this is a combination of two questions - reading an online text file and then parsing the result into lists. I tried the following code, which can read the file, but I am not able to convert the result into a list:</p> <pre><code>import urllib.request

CFTC_URL = r"http://www.cftc.gov/dea/newcot/FinFutWk.txt"
CFTC_url = urllib.request.urlopen(CFTC_URL)
output = CFTC_url.read().decode('utf-8')
</code></pre>
-1
2016-09-26T13:23:09Z
39,704,727
<p>Assuming you want to interpret the file as a table, you first get the rows by using <a href="http://stackoverflow.com/questions/172439/how-do-i-split-a-multi-line-string-into-multiple-lines">split</a>. Then you can get the columns by splitting each row again.</p> <pre><code>import urllib.request

CFTC_URL = r"http://www.cftc.gov/dea/newcot/FinFutWk.txt"
CFTC_url = urllib.request.urlopen(CFTC_URL)
output = CFTC_url.read().decode('utf-8')

lines = output.split("\r\n")  # split on newline
print(lines[0])  # first line "CANADIAN DOLLAR ..."

columns_0 = lines[0].split(",")  # split on ,
print(columns_0[0])  # first column of first line
</code></pre> <p>You can then iterate through the list of lines and for each entry in lines you can iterate through the columns.</p>
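<p>The full iteration that the last sentence describes could then look like this:</p> <pre><code>for line in lines:
    if not line:  # skip any empty trailing lines
        continue
    columns = line.split(",")
    print(columns)
</code></pre>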
0
2016-09-26T13:50:25Z
[ "python", "csv", "urllib", "delimited-text" ]
Parse online comma delimited text file in Python 3.5
39,704,096
<p>I guess this is a combination of two questions - reading an online text file and then parsing the result into lists. I tried the following code, which can read the file, but I am not able to convert the result into a list:</p> <pre><code>import urllib.request

CFTC_URL = r"http://www.cftc.gov/dea/newcot/FinFutWk.txt"
CFTC_url = urllib.request.urlopen(CFTC_URL)
output = CFTC_url.read().decode('utf-8')
</code></pre>
-1
2016-09-26T13:23:09Z
39,704,798
<p>You can use the standard <a href="https://docs.python.org/3.3/library/csv.html" rel="nofollow">csv</a> module with a <a href="https://docs.python.org/3/library/io.html#text-i-o" rel="nofollow">StringIO</a> wrapper for the file content (example with the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> library for getting the data):</p> <pre><code>import requests, io, csv

CFTC_URL = r"http://www.cftc.gov/dea/newcot/FinFutWk.txt"
data = io.StringIO(requests.get(CFTC_URL).text)
dialect = csv.Sniffer().sniff(data.read(1024))
data.seek(0)
reader = csv.reader(data, dialect)
for row in reader:
    print(row)
</code></pre>
2
2016-09-26T13:53:18Z
[ "python", "csv", "urllib", "delimited-text" ]
How to call django.setup() in console_script?
39,704,298
<p>The current django docs tell me this:</p> <blockquote> <p>django.setup() may only be called once.</p> <p>Therefore, avoid putting reusable application logic in standalone scripts so that you have to import from the script elsewhere in your application. If you can’t avoid that, put the call to django.setup() inside an if block:</p> </blockquote> <pre><code>if __name__ == '__main__': import django django.setup() </code></pre> <p>Source: <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage">Calling django.setup() is required for “standalone” Django usage</a></p> <p>I am using entry points in setup.py. This way I don't have <code>__name__ == '__main__'</code>.</p> <h1>Question</h1> <p>How to ensure django.setup() gets only called once if you use <a href="http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point">console_scripts</a>?</p> <p>Where should I put <code>django.setup()</code>?</p> <h1>Background</h1> <p>The actual error I have: Django hangs. Here is the reason: <a href="https://code.djangoproject.com/ticket/27176">https://code.djangoproject.com/ticket/27176</a></p> <p>I want to port my application to the current django version. Changing to a management command is not an option, since other (third party applications) rely on the existence of my console scripts.</p>
9
2016-09-26T13:32:10Z
39,791,949
<p>I had the same problem (or something similar). I solved it by doing:</p> <p>[Warning: dirty solution]</p> <pre><code>if not hasattr(django, 'apps'):
    django.setup()
</code></pre> <p>This way it'll be called only once, even if it's imported multiple times.</p>
5
2016-09-30T12:50:13Z
[ "python", "django", "python-import", "hang" ]
How to call django.setup() in console_script?
39,704,298
<p>The current django docs tell me this:</p> <blockquote> <p>django.setup() may only be called once.</p> <p>Therefore, avoid putting reusable application logic in standalone scripts so that you have to import from the script elsewhere in your application. If you can’t avoid that, put the call to django.setup() inside an if block:</p> </blockquote> <pre><code>if __name__ == '__main__': import django django.setup() </code></pre> <p>Source: <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage">Calling django.setup() is required for “standalone” Django usage</a></p> <p>I am using entry points in setup.py. This way I don't have <code>__name__ == '__main__'</code>.</p> <h1>Question</h1> <p>How to ensure django.setup() gets only called once if you use <a href="http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point">console_scripts</a>?</p> <p>Where should I put <code>django.setup()</code>?</p> <h1>Background</h1> <p>The actual error I have: Django hangs. Here is the reason: <a href="https://code.djangoproject.com/ticket/27176">https://code.djangoproject.com/ticket/27176</a></p> <p>I want to port my application to the current django version. Changing to a management command is not an option, since other (third party applications) rely on the existence of my console scripts.</p>
9
2016-09-26T13:32:10Z
39,996,838
<p>Here <a href="https://docs.djangoproject.com/en/1.10/_modules/django/#setup" rel="nofollow">https://docs.djangoproject.com/en/1.10/_modules/django/#setup</a> we can see what <code>django.setup</code> actually does. </p> <blockquote> <p>Configure the settings (this happens as a side effect of accessing the first setting), configure logging and populate the app registry. Set the thread-local urlresolvers script prefix if <code>set_prefix</code> is True.</p> </blockquote> <p>So basically, to ensure that setup was already done, we can check whether the apps are ready and the settings are configured:</p> <pre><code>import django
from django.apps import apps
from django.conf import settings

if not apps.ready and not settings.configured:
    django.setup()
</code></pre>
0
2016-10-12T10:55:47Z
[ "python", "django", "python-import", "hang" ]
How to call django.setup() in console_script?
39,704,298
<p>The current django docs tell me this:</p> <blockquote> <p>django.setup() may only be called once.</p> <p>Therefore, avoid putting reusable application logic in standalone scripts so that you have to import from the script elsewhere in your application. If you can’t avoid that, put the call to django.setup() inside an if block:</p> </blockquote> <pre><code>if __name__ == '__main__': import django django.setup() </code></pre> <p>Source: <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage">Calling django.setup() is required for “standalone” Django usage</a></p> <p>I am using entry points in setup.py. This way I don't have <code>__name__ == '__main__'</code>.</p> <h1>Question</h1> <p>How to ensure django.setup() gets only called once if you use <a href="http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point">console_scripts</a>?</p> <p>Where should I put <code>django.setup()</code>?</p> <h1>Background</h1> <p>The actual error I have: Django hangs. Here is the reason: <a href="https://code.djangoproject.com/ticket/27176">https://code.djangoproject.com/ticket/27176</a></p> <p>I want to port my application to the current django version. Changing to a management command is not an option, since other (third party applications) rely on the existence of my console scripts.</p>
9
2016-09-26T13:32:10Z
40,081,252
<p>Since I love condition-less code, I use this solution. It's like in the django docs, but the ugly <code>if __name__ == '__main__'</code> gets avoided.</p> <p>File with code:</p> <pre><code># utils/do_good_stuff.py
# This file contains the code
# No `django.setup()` needed
# This code can be used by web apps and console scripts.

from otherlib import othermethod

def say_thank_you():
    ...
</code></pre> <p>File for main:</p> <pre><code># do_good_stuff_main.py

import django
django.setup()

def say_thank_you_main():
    from myapp import do_good_stuff
    return do_good_stuff()
</code></pre> <p>setup.py:</p> <pre><code>    'console_scripts': [
        'say_thank_you=myapp.do_good_stuff_main:say_thank_you_main',
        ...
</code></pre> <p>This is my current solution. Feedback welcome. Is there something to improve?</p>
0
2016-10-17T08:01:54Z
[ "python", "django", "python-import", "hang" ]
How to call django.setup() in console_script?
39,704,298
<p>The current django docs tell me this:</p> <blockquote> <p>django.setup() may only be called once.</p> <p>Therefore, avoid putting reusable application logic in standalone scripts so that you have to import from the script elsewhere in your application. If you can’t avoid that, put the call to django.setup() inside an if block:</p> </blockquote> <pre><code>if __name__ == '__main__': import django django.setup() </code></pre> <p>Source: <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage">Calling django.setup() is required for “standalone” Django usage</a></p> <p>I am using entry points in setup.py. This way I don't have <code>__name__ == '__main__'</code>.</p> <h1>Question</h1> <p>How to ensure django.setup() gets only called once if you use <a href="http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point">console_scripts</a>?</p> <p>Where should I put <code>django.setup()</code>?</p> <h1>Background</h1> <p>The actual error I have: Django hangs. Here is the reason: <a href="https://code.djangoproject.com/ticket/27176">https://code.djangoproject.com/ticket/27176</a></p> <p>I want to port my application to the current django version. Changing to a management command is not an option, since other (third party applications) rely on the existence of my console scripts.</p>
9
2016-09-26T13:32:10Z
40,081,707
<p>I have worked on two production CLI Python packages that explicitly call <code>django.setup()</code> in <code>console_scripts</code>. </p> <p>The most important thing to note is that <code>DJANGO_SETTINGS_MODULE</code> must be present in the environment.</p> <p>You can set this value in a shell script or even load default settings in your Python script.</p> <p>Here is an example:</p> <pre><code># setup.py

entry_points={
    'console_scripts': [
        'my-cli = mypackage.cli:main',
    ]
}
</code></pre> <pre><code># cli.py

import logging
from os import environ as env

if 'DJANGO_SETTINGS_MODULE' not in env:
    from mypackage import settings
    env.setdefault('DJANGO_SETTINGS_MODULE', settings.__name__)

import django
django.setup()

# this line must come after django.setup() so logging is configured
logger = logging.getLogger('mypackage')


def main():
    # to get the configured settings
    from django.conf import settings

    # do stuff

if __name__ == '__main__':
    main()
</code></pre>
1
2016-10-17T08:29:15Z
[ "python", "django", "python-import", "hang" ]
How to call django.setup() in console_script?
39,704,298
<p>The current django docs tell me this:</p> <blockquote> <p>django.setup() may only be called once.</p> <p>Therefore, avoid putting reusable application logic in standalone scripts so that you have to import from the script elsewhere in your application. If you can’t avoid that, put the call to django.setup() inside an if block:</p> </blockquote> <pre><code>if __name__ == '__main__': import django django.setup() </code></pre> <p>Source: <a href="https://docs.djangoproject.com/en/1.10/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage">Calling django.setup() is required for “standalone” Django usage</a></p> <p>I am using entry points in setup.py. This way I don't have <code>__name__ == '__main__'</code>.</p> <h1>Question</h1> <p>How to ensure django.setup() gets only called once if you use <a href="http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point">console_scripts</a>?</p> <p>Where should I put <code>django.setup()</code>?</p> <h1>Background</h1> <p>The actual error I have: Django hangs. Here is the reason: <a href="https://code.djangoproject.com/ticket/27176">https://code.djangoproject.com/ticket/27176</a></p> <p>I want to port my application to the current django version. Changing to a management command is not an option, since other (third party applications) rely on the existence of my console scripts.</p>
9
2016-09-26T13:32:10Z
40,086,329
<p>I have some scripts that are placed inside the django folder; I just placed this at the start of the script.</p> <pre><code>import sys, os, django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "eternaltool.settings")
django.setup()
</code></pre>
-1
2016-10-17T12:24:35Z
[ "python", "django", "python-import", "hang" ]
Unable to use google-cloud in a GAE app
39,704,367
<p>The following line in my Google App Engine app (<code>webapp.py</code>) fails to import the <a href="https://googlecloudplatform.github.io/google-cloud-python/" rel="nofollow">Google Cloud</a> library:</p> <pre><code>from google.cloud import storage </code></pre> <p>With the following error:</p> <pre><code>ImportError: No module named google.cloud.storage </code></pre> <p>I did some research and found the following articles to be helpful:</p> <ul> <li><a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library</a></li> <li><a href="http://stackoverflow.com/a/34585485">http://stackoverflow.com/a/34585485</a></li> <li><a href="https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html" rel="nofollow">https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html</a></li> </ul> <p>Using a combination of the techniques suggested by the above articles, I did the following:</p> <ol> <li><p>Create a <code>requirements.txt</code> file:</p> <pre><code>google-cloud==0.19.0 </code></pre></li> <li><p>Import this library using <code>pip</code>:</p> <pre><code>pip install -t lib -r requirements.txt </code></pre></li> <li><p>Use the following code in my <code>appengine_config.py</code> file:</p> <pre><code>import os import sys import google libDir = os.path.join(os.path.dirname(__file__), "lib") google.__path__.append(os.path.join(libDir, "google")) sys.path.insert(0, libDir) </code></pre></li> </ol> <p>Can anyone shed light on what I might be missing to get this working? I'm just trying to write a Google App Engine app that can write/read from Google Cloud Storage, and I'd like to test locally before deploying.</p>
3
2016-09-26T13:35:38Z
39,710,086
<p>Your appengine_config.py only needs to contain:</p> <pre><code>from google.appengine.ext import vendor vendor.add('lib') </code></pre> <p>All the rest you have posted looks fine to me.</p>
0
2016-09-26T18:41:23Z
[ "python", "google-app-engine", "google-cloud-storage", "google-cloud-platform" ]
Unable to use google-cloud in a GAE app
39,704,367
<p>The following line in my Google App Engine app (<code>webapp.py</code>) fails to import the <a href="https://googlecloudplatform.github.io/google-cloud-python/" rel="nofollow">Google Cloud</a> library:</p> <pre><code>from google.cloud import storage </code></pre> <p>With the following error:</p> <pre><code>ImportError: No module named google.cloud.storage </code></pre> <p>I did some research and found the following articles to be helpful:</p> <ul> <li><a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library</a></li> <li><a href="http://stackoverflow.com/a/34585485">http://stackoverflow.com/a/34585485</a></li> <li><a href="https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html" rel="nofollow">https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html</a></li> </ul> <p>Using a combination of the techniques suggested by the above articles, I did the following:</p> <ol> <li><p>Create a <code>requirements.txt</code> file:</p> <pre><code>google-cloud==0.19.0 </code></pre></li> <li><p>Import this library using <code>pip</code>:</p> <pre><code>pip install -t lib -r requirements.txt </code></pre></li> <li><p>Use the following code in my <code>appengine_config.py</code> file:</p> <pre><code>import os import sys import google libDir = os.path.join(os.path.dirname(__file__), "lib") google.__path__.append(os.path.join(libDir, "google")) sys.path.insert(0, libDir) </code></pre></li> </ol> <p>Can anyone shed light on what I might be missing to get this working? I'm just trying to write a Google App Engine app that can write/read from Google Cloud Storage, and I'd like to test locally before deploying.</p>
3
2016-09-26T13:35:38Z
39,725,337
<p>The package namespace seems to have been changed as pointed out in this <a href="https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2264" rel="nofollow">github issue</a> and hasn't been fully fixed. You could install the older version (<code>pip install gcloud</code>), which uses a different namespace, and use this import statement instead:</p> <pre><code>from gcloud import storage </code></pre> <p>You should also ensure that you're importing vendor libs in your appengine_config.py as pointed out in dyeray's answer.</p> <p>The issue seems to have been fixed as of version 0.20.0 of google-cloud, so the import statement in the question should work. Just remember to run <code>pip install --upgrade google-cloud</code>.</p>
1
2016-09-27T12:58:44Z
[ "python", "google-app-engine", "google-cloud-storage", "google-cloud-platform" ]
Unable to use google-cloud in a GAE app
39,704,367
<p>The following line in my Google App Engine app (<code>webapp.py</code>) fails to import the <a href="https://googlecloudplatform.github.io/google-cloud-python/" rel="nofollow">Google Cloud</a> library:</p> <pre><code>from google.cloud import storage </code></pre> <p>With the following error:</p> <pre><code>ImportError: No module named google.cloud.storage </code></pre> <p>I did some research and found the following articles to be helpful:</p> <ul> <li><a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library</a></li> <li><a href="http://stackoverflow.com/a/34585485">http://stackoverflow.com/a/34585485</a></li> <li><a href="https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html" rel="nofollow">https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html</a></li> </ul> <p>Using a combination of the techniques suggested by the above articles, I did the following:</p> <ol> <li><p>Create a <code>requirements.txt</code> file:</p> <pre><code>google-cloud==0.19.0 </code></pre></li> <li><p>Import this library using <code>pip</code>:</p> <pre><code>pip install -t lib -r requirements.txt </code></pre></li> <li><p>Use the following code in my <code>appengine_config.py</code> file:</p> <pre><code>import os import sys import google libDir = os.path.join(os.path.dirname(__file__), "lib") google.__path__.append(os.path.join(libDir, "google")) sys.path.insert(0, libDir) </code></pre></li> </ol> <p>Can anyone shed light on what I might be missing to get this working? I'm just trying to write a Google App Engine app that can write/read from Google Cloud Storage, and I'd like to test locally before deploying.</p>
3
2016-09-26T13:35:38Z
39,814,684
<p>There is an <a href="https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/read-write-to-cloud-storage" rel="nofollow">App Engine specific Google Cloud Storage API</a> that ships with the App Engine SDK that you can use to work with Cloud Storage buckets.</p> <pre><code>import cloudstorage as gcs </code></pre> <p>Is there a reason you didn't use this built-in library, which requires no configuration to load?</p>
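<p>Usage is minimal; a short sketch with a hypothetical bucket and object name (reading and writing a text file):</p> <pre><code>import cloudstorage as gcs

# Write an object (bucket/object names here are placeholders).
with gcs.open('/my-bucket/greeting.txt', 'w', content_type='text/plain') as f:
    f.write('hello world')

# Read it back.
with gcs.open('/my-bucket/greeting.txt') as f:
    data = f.read()
</code></pre>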
2
2016-10-02T07:08:22Z
[ "python", "google-app-engine", "google-cloud-storage", "google-cloud-platform" ]
Unable to use google-cloud in a GAE app
39,704,367
<p>The following line in my Google App Engine app (<code>webapp.py</code>) fails to import the <a href="https://googlecloudplatform.github.io/google-cloud-python/" rel="nofollow">Google Cloud</a> library:</p> <pre><code>from google.cloud import storage </code></pre> <p>With the following error:</p> <pre><code>ImportError: No module named google.cloud.storage </code></pre> <p>I did some research and found the following articles to be helpful:</p> <ul> <li><a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library</a></li> <li><a href="http://stackoverflow.com/a/34585485">http://stackoverflow.com/a/34585485</a></li> <li><a href="https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html" rel="nofollow">https://www.simonmweber.com/2013/06/18/python-protobuf-on-app-engine.html</a></li> </ul> <p>Using a combination of the techniques suggested by the above articles, I did the following:</p> <ol> <li><p>Create a <code>requirements.txt</code> file:</p> <pre><code>google-cloud==0.19.0 </code></pre></li> <li><p>Import this library using <code>pip</code>:</p> <pre><code>pip install -t lib -r requirements.txt </code></pre></li> <li><p>Use the following code in my <code>appengine_config.py</code> file:</p> <pre><code>import os import sys import google libDir = os.path.join(os.path.dirname(__file__), "lib") google.__path__.append(os.path.join(libDir, "google")) sys.path.insert(0, libDir) </code></pre></li> </ol> <p>Can anyone shed light on what I might be missing to get this working? I'm just trying to write a Google App Engine app that can write/read from Google Cloud Storage, and I'd like to test locally before deploying.</p>
3
2016-09-26T13:35:38Z
39,815,014
<p>It looks like the only thing that is required is to include <code>google-cloud</code> in your project's <code>requirements.txt</code> file.</p> <p>Check if this simple sample works for you (you shouldn't get any import errors). Create the files below and run <code>pip install -r requirements.txt -t lib</code>. Nothing more is required on my side to make it work.</p> <p><strong>app.yaml</strong></p> <pre><code>application: mysample runtime: python27 api_version: 1 threadsafe: true handlers: - url: /.* script: main.app </code></pre> <p><strong>main.py</strong></p> <pre><code>import webapp2 from google.cloud import storage class MainPage(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('Hello, World!') app = webapp2.WSGIApplication([ ('/', MainPage), ], debug=True) </code></pre> <p><strong>appengine_config.py</strong></p> <pre><code>from google.appengine.ext import vendor import os # Third-party libraries are stored in "lib", vendoring will make # sure that they are importable by the application. if os.path.isdir(os.path.join(os.getcwd(), 'lib')): vendor.add('lib') </code></pre> <p><strong>requirements.txt</strong></p> <pre><code>google-cloud </code></pre>
0
2016-10-02T07:58:51Z
[ "python", "google-app-engine", "google-cloud-storage", "google-cloud-platform" ]
Unable to click the linktext using selenium execute_script function using selenium python
39,704,554
<p>Unable to click the linktext using <strong>selenium</strong> <code>execute_script</code> function</p> <p>This is what I am tring to do:</p> <pre><code>self.driver.execute_script("document.getElementByLinktext('Level 1s').click;") </code></pre>
0
2016-09-26T13:43:20Z
39,704,800
<p>You are <em>not calling</em> the <code>click()</code> method:</p> <pre><code>self.driver.execute_script("document.getElementByLinktext('Level 1s').click();") FIX HERE^ </code></pre> <p>Note that you can also locate the element with selenium and then pass it into the script:</p> <pre><code>link = self.driver.find_element_by_link_text('Level 1s') self.driver.execute_script("arguments[0].click();", link) </code></pre> <p>You can also perform the click via selenium directly if applicable:</p> <pre><code>link.click() </code></pre> <p>Also related:</p> <ul> <li><a href="http://stackoverflow.com/questions/34562061/webdriver-click-vs-javascript-click">WebDriver click() vs JavaScript click()</a></li> </ul>
1
2016-09-26T13:53:18Z
[ "python", "selenium" ]
How do I limit my users input to a single digit? (Python)
39,704,616
<p>I want my program to say 'invalid input' if 2 digits or more are entered so is there a way i can limit my users input to a single digit in python?</p> <p>This is my code:</p> <pre><code>print('List Maker') tryagain = "" while 'no' not in tryagain: list = [] for x in range(0,10): number = int(input("Enter a number: ")) list.append(number) print('The sum of the list is ',sum(list)) print('The product of the list is ',sum(list) / float(len(list))) print('The minimum value of the list is ',min(list)) print('The maximum vlaue of the list is ',max(list)) print(' ') tryagain = input('Would you like to restart? ').lower() if 'no' in tryagain: break print('Goodbye') </code></pre>
1
2016-09-26T13:46:00Z
39,704,679
<p>Use a <code>while</code> loop instead of a <code>for</code> loop, break out when you have 10 digits, and refuse to accept any number over 9:</p> <pre><code>numbers = [] while len(numbers) &lt; 10: number = int(input("Enter a number: ")) if not 1 &lt;= number &lt;= 9: print('Only numbers between 1 and 9 are accepted, try again') else: numbers.append(number) </code></pre> <p>Note that I renamed the list used to <code>numbers</code>; <code>list</code> is a built-in type and you generally want to avoid using built-in names.</p>
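<p>If what you specifically want is to reject any entry that is more than one character long (rather than checking the numeric range), you could validate the raw string before converting it. A minimal sketch:</p> <pre><code>numbers = []
while len(numbers) &lt; 10:
    entry = input("Enter a number: ")
    if len(entry) != 1 or not entry.isdigit():
        print('invalid input')
    else:
        numbers.append(int(entry))
</code></pre>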
4
2016-09-26T13:48:32Z
[ "python" ]
How do I limit my users input to a single digit? (Python)
39,704,616
<p>I want my program to say 'invalid input' if 2 digits or more are entered so is there a way i can limit my users input to a single digit in python?</p> <p>This is my code:</p> <pre><code>print('List Maker') tryagain = "" while 'no' not in tryagain: list = [] for x in range(0,10): number = int(input("Enter a number: ")) list.append(number) print('The sum of the list is ',sum(list)) print('The product of the list is ',sum(list) / float(len(list))) print('The minimum value of the list is ',min(list)) print('The maximum vlaue of the list is ',max(list)) print(' ') tryagain = input('Would you like to restart? ').lower() if 'no' in tryagain: break print('Goodbye') </code></pre>
1
2016-09-26T13:46:00Z
39,705,108
<p>There is another option for getting only one character, based on this previous <a href="http://stackoverflow.com/questions/27750536/python-input-single-character-without-enter">answer</a>:</p> <pre><code>import termios import sys, tty def getch(): fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(fd) ch = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch numbers = [] while len(numbers) &lt; 10: try: numbers.append(int(getch())) except ValueError: print('Not a digit, try again...') </code></pre> <p>Since <code>getch()</code> reads a single keystroke at a time (and relies on <code>termios</code>, so it is Unix-only), the user physically cannot enter more than one digit per number.</p>
0
2016-09-26T14:08:07Z
[ "python" ]
Page not found (404) with Django
39,704,623
<p>I'm learning Django Framework and I'm following a tutorial based on Django 1.8. I have Django 1.10 on my Mac OSX and I get an error : <code>Page not found</code></p> <p>I started a project which is named <code>"Crepes_bretonnes"</code> and I created an app : <code>"Blog"</code>.</p> <p>The blog urls.py looks like : </p> <pre><code>from django.conf.urls import url from django.contrib import admin from blog.views import home urlpatterns = [ url(r'^accueil/$', home), ] </code></pre> <p>The blog views.py looks like : </p> <pre><code>#-*- coding: utf-8 -*- from django.shortcuts import render from django.http import HttpResponse # Ce module permet de retourner une réponse (texte brute, HTML, JSON, ...) depuis une string # Create your views here. def home(request) : text = """&lt;h1&gt;Bienvenue sur mon blog ! &lt;/h1&gt; &lt;p&gt; Edité sous Django 1.10 &lt;/p&gt;""" return HttpResponse(text) </code></pre> <p>And the crepes_bretonnes urls.py looks like :</p> <pre><code>from django.conf.urls import url, include from django.contrib import admin urlpatterns = [ url(r'^blog/', include('blog.urls')) ] </code></pre> <p>However, when I'm running the server, I get this error (picture) and I don't understand why it's not working :/</p> <p><a href="http://i.stack.imgur.com/bLqJj.png" rel="nofollow"><img src="http://i.stack.imgur.com/bLqJj.png" alt="enter image description here"></a></p> <p>As I said, I'm beginning Django so if anyone can explain me the problem ?</p> <p>Thank you !</p>
0
2016-09-26T13:46:22Z
39,704,664
<p>Your crepes_bretonnes <code>urls.py</code> includes the blog urls under the <code>blog/</code> prefix, so your blog urls are only reachable from URLs starting with <code>blog/</code>.</p> <p>To get to your view you need to open this URL:</p> <p><code>/blog/accueil/</code></p>
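<p>If you would rather have the page available at <code>/accueil/</code> without the <code>blog/</code> prefix, one option (a sketch, keeping your existing blog/urls.py unchanged) is to include the blog urls at the root of the project urls.py instead:</p> <pre><code>from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^', include('blog.urls')),   # blog urls now answer at the site root
]
</code></pre>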
1
2016-09-26T13:47:59Z
[ "python", "django" ]
Python: Problems with functions of imported dll
39,704,642
<p>A DLL for a device was imported into Python using ctypes "windll.LoadLibrary" First, I wanted to use one function of the DLL which is to return the library version. There came a header file with the DLL in which the function is defined like this:</p> <pre><code> /** * \brief Returns the library version in the format 'AA.BB.CC.DD' (e.g. '1.0.0.0'). * * OUTPUT PARAMETER * \param version Version number of the library */ #ifdef __cplusplus extern "C" #endif C3_DLL_API void JC_GetDllVersion(char *version); </code></pre> <p>I called the function with <code>print (str(c3dll.JC_GetDllVersion),"\n")</code> But when I run the script I only get this output:</p> <blockquote> <p>('&lt;_FuncPtr object at 0x01A187B0>', '\n')</p> </blockquote> <p>What am I doing wrong here?</p>
0
2016-09-26T13:47:09Z
39,706,314
<p>So a colleague helped me: the function <code>JC_GetDllVersion()</code> writes the version into the parameter it is handed (an output parameter) and doesn't return a string, which is why printing the function object only shows a <code>_FuncPtr</code>. As a beginner I wasn't aware of that.</p>
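<p>A minimal sketch of how the call could look with <code>ctypes</code> (the buffer size is an assumption, not taken from the header):</p> <pre><code>import ctypes

# c3dll = ctypes.windll.LoadLibrary("...")  # loaded as before
version = ctypes.create_string_buffer(32)   # assumed large enough for 'AA.BB.CC.DD'
c3dll.JC_GetDllVersion(version)             # the DLL fills the buffer in place
print(version.value.decode())               # e.g. '1.0.0.0'
</code></pre>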
0
2016-09-26T15:07:44Z
[ "python", "dll", "import" ]
Python: For loop not working properly
39,704,752
<p>Why is the for loop not working properly? It's just returned only one element.</p> <p>1.</p> <pre><code>rx = ['abc', 'de fg', 'hg i'] def string_m(a=rx): for i in a: l = re.sub(" ","/",i) r = l.split("/") r.reverse() rx = " ".join(r) return rx a=string_m(rx) print(a) </code></pre> <p>Outputs:</p> <pre><code>abc </code></pre> <p>2.</p> <pre><code>rx = ['abc', 'de fg', 'hg i'] def string_m(a=rx): for i in a: l = re.sub(" ","/",i) r = l.split("/") r.reverse() rx = " ".join(r) return rx a=string_m(rx) print(a) </code></pre> <p>Outputs:</p> <pre><code>i hg </code></pre> <p>-- Can someone help me to see what I am doing wrong?</p>
-6
2016-09-26T13:51:38Z
39,705,062
<pre><code>import re rx = ['abc', 'de fg', 'hg i'] def string_m(a=rx): new = [] for i in a: l = re.sub(" ","/",i) r = l.split("/") #r.reverse() rx = " ".join(r) new.append(rx) new.reverse() return new a=string_m(rx) print(a) </code></pre> <p>Your biggest problem is that you return inside the for loop, so only the result of the first iteration is returned; the <code>return</code> has to come after the for loop has finished.</p> <p>Another issue is that <code>reverse()</code> is a list method, not a string method, so in your code it reverses the words of each split line rather than the string itself. If you want to reverse the order of the output instead, try the above.</p>
0
2016-09-26T14:05:44Z
[ "python", "django" ]
Unknown format from Picamera recording
39,704,846
<p>I have been using my RaspberryPi 3 model B with the Picamera to record videos. I saw in the RaspberryPi website that the default video format that you get from capturing video is h264, however when I use the camera, the file created appears as fyle type unknown.</p> <p>To capture I tried both the basic command line </p> <pre><code>raspivid -o video -t 10 #for a 10s video </code></pre> <p>or a small programm using the PiCamera.start_recording (wich i believe lets you choose the format of the output file) but I still get these not known format files.</p> <p>If it helps (doubtful) I try to read the recorded files with omxplayer and the footage is displayed at roughly twice the speed it should. </p>
0
2016-09-26T13:55:20Z
39,754,679
<p>You're not specifying the .h264 extension - without it, omxplayer doesn't know what video format the file contains.</p> <p>Try:</p> <pre><code>raspivid -o video.h264 -t 10 #for a 10s video </code></pre> <p>There's plenty of online help and examples for raspivid.</p> <p>Also, you may have to wrap your h264 file in an mp4 "box" (container) to play it.</p> <p>Here's one suggestion on how to do this wrapping using MP4Box:</p> <pre><code>sudo apt-get update sudo apt-get install -y gpac </code></pre> <p>Once installed, use the following command to wrap your H264 video in an MP4 container file. This allows most media players to play the video.</p> <pre><code>MP4Box -fps 30 -add video.h264 video.mp4 </code></pre> <p>This will give you a nice video at 30 frames per second that should play in most modern media players.</p>
1
2016-09-28T18:05:22Z
[ "python", "python-3.x", "camera", "raspberry-pi", "raspberry-pi3" ]
python reading in files for a specific column range
39,704,877
<p>I'm writing a script that reads in one file containing a list of files and performing gaussian fits on each of those files. Each of these files is made up of two columns (wv and flux in the script below). My small issue here is how do I limit the range based "wv" values? I tried using a "for" loop for this but I get errors related to the fit (which I don't get if I don't limit the "wv" range).</p> <pre><code>import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt fits = [] wvi_b = [] wvi_r = [] p = open("file_input.txt","r") for line in p: fits.append(str(line.split()[0])) wvi_b.append(float(line.split()[1])) wvi_r.append(float(line.split()[2])) p.close() for j in range(len(fits)): wv = [] flux = [] f = open("%s"%(fits[j]),"r") for line in f: wv.append(float(line.split()[0])) flux.append(float(line.split()[1])) f.close() def gauss(x,a,b,c,a1,b1,c1,d): func = a*np.exp(-((x-b)**2)/(2.0*(c)**2)) + a1*np.exp(-((x-b1)**2)/(2.0*(c1)**2))+d return func for wv in range(6450, 6575): guess=(0.8,wvi_b[j],3.0,1.0,wvi_r[j],3.0,1.0) popt,pconv=curve_fit(gauss,wv,flux,guess) print popt[1], popt[4] ymod=gauss(wv,*popt) plt.plot(wv,ymod) plt.plot(wv,flux,marker='.') plt.show() </code></pre>
0
2016-09-26T13:56:55Z
39,705,103
<p>When you call <code>for wv in range(6450, 6575)</code>, <code>wv</code> is just an integer in that range, not a member of the list. I'd try taking a look at how you're using that variable. If you want to access data from the <code>list wv</code>, you would have to update the syntax to be <code>wv[wv]</code> (which is a little confusing - it might be best to change the variable in your <code>for</code> loop to something else).</p>
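<p>If the goal is to restrict the fit to the 6450-6575 wavelength window rather than to loop over integers, one way (a sketch, assuming numpy arrays) is to mask both lists before calling <code>curve_fit</code>:</p> <pre><code>import numpy as np

wv_arr = np.array(wv)
flux_arr = np.array(flux)

# Keep only the points whose wavelength falls inside the window.
mask = (wv_arr &gt;= 6450) &amp; (wv_arr &lt;= 6575)
wv_fit, flux_fit = wv_arr[mask], flux_arr[mask]

guess = (0.8, wvi_b[j], 3.0, 1.0, wvi_r[j], 3.0, 1.0)
popt, pconv = curve_fit(gauss, wv_fit, flux_fit, guess)
</code></pre>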
0
2016-09-26T14:07:51Z
[ "python" ]
How to reconstruct a list with punctuation into a sentence? In python 3.5
39,704,913
<p>I have a list e.g. <code>['hello',', ','how','are','you','?']</code> and i want it to become <code>"hello, how are you?"</code> Is there anyway at doing this?</p> <p>I should also note that i am opening this from a text document, using the code:</p> <pre><code> with gzip.open('task3.txt.gz', 'r+') as f: reconstruct = f.readline() reconstruct2 = f.readline() reconstruct3 = f.readline() reconstruct4 = f.readline() f.close() </code></pre> <p>So my lists come out as <code>b'["hello", ",", "how", "are", "you", "?"]\n'</code> could this be linked to the fact i saved them with new lines?</p>
0
2016-09-26T13:58:35Z
39,705,291
<p>I believe you're looking for <strong>join()</strong>:</p> <pre><code>&gt;&gt;&gt; a = ['hello',', ','how','are','you','?'] &gt;&gt;&gt; " ".join(a) 'hello , how are you ?' </code></pre> <p>Note that the string you call <code>join()</code> on is inserted between every pair of items. So you need to decide whether to join with " " (which also inserts a space between "you" and "?", for example) or with "" and carry the spaces at the end of the words themselves (your list would then look like: ['hello', ', ', 'how ', 'are ', 'you', '?']).</p>
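<p>For example, joining with the empty string once the spacing is carried by the items themselves gives exactly the sentence you asked for:</p> <pre><code>&gt;&gt;&gt; a = ['hello', ', ', 'how ', 'are ', 'you', '?']
&gt;&gt;&gt; "".join(a)
'hello, how are you?'
</code></pre>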
0
2016-09-26T14:16:44Z
[ "python", "python-3.x" ]
How to reconstruct a list with punctuation into a sentence? In python 3.5
39,704,913
<p>I have a list e.g. <code>['hello',', ','how','are','you','?']</code> and i want it to become <code>"hello, how are you?"</code> Is there anyway at doing this?</p> <p>I should also note that i am opening this from a text document, using the code:</p> <pre><code> with gzip.open('task3.txt.gz', 'r+') as f: reconstruct = f.readline() reconstruct2 = f.readline() reconstruct3 = f.readline() reconstruct4 = f.readline() f.close() </code></pre> <p>So my lists come out as <code>b'["hello", ",", "how", "are", "you", "?"]\n'</code> could this be linked to the fact i saved them with new lines?</p>
0
2016-09-26T13:58:35Z
39,705,673
<p>You can actually properly <em>detokenize</em> the list of tokens back into a sentence with <a href="https://github.com/nltk/nltk/pull/1282" rel="nofollow"><code>nltk</code>'s <code>moses</code> detokenizer</a> (available in the nltk trunk at the moment - it has not been released yet):</p> <pre><code>In [1]: from nltk.tokenize.moses import MosesDetokenizer In [2]: l = ['hello', ', ', 'how', 'are', 'you', '?'] In [3]: detokenizer = MosesDetokenizer() In [4]: detokenizer.detokenize(l, return_str=True) Out[4]: u'hello, how are you?' </code></pre>
1
2016-09-26T14:35:46Z
[ "python", "python-3.x" ]
Regex Match equal amount of two characters
39,704,916
<p>I'ld like to match the parameters of any function as a string using regex. As an example lets assume the following string:</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>this may be part of a longer sequence</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>I now want to find all substrings that represent a function/predicate and its parameters (i.e. in the first example the whole string as well as the nested <code>predicate(foo(...), bar)</code>). The problem is that I cant simply match like this</p> <pre><code>predicate\(.*, bar\) </code></pre> <p>as i may then match more than the parameters of the predicate if the <code>*</code> is greedy, or less if it is lazy. Which is because such predicates() can be nested.</p> <p>I need a regex that finds the string <code>predicate(...)</code> where <code>...</code> matches any string that contains an equal amount of <code>(</code>'s and <code>)</code>'s (lazy).</p> <p>If it matters: I am using regex with the re module in python.</p>
1
2016-09-26T13:58:39Z
39,705,887
<pre><code>import re def parse(s): pattern = re.compile(r'([^(),]+)|\s*([(),])\s*') stack = [] state = 0 # 0 = before identifier, 1 = after identifier, 2 = after closing paren current = None args = [] for match in pattern.finditer(s): if match.group(1): if state != 0: raise SyntaxError("Expected identifier at {0}".format(match.start())) current = match.group(1) state = 1 elif match.group(2) == '(': if state != 1: raise SyntaxError("Unexpected open paren at {0}".format(match.start())) stack.append((args, current)) state = 0 current = None args = [] elif match.group(2) == ',': if state != 0: args.append(current) state = 0 current = None elif match.group(2) == ')': if state != 0: args.append(current) if len(stack) == 0: raise SyntaxError("Unmatched paren at {0}".format(match.start())) newargs = args args, current = stack.pop() current = (current, newargs) state = 2 if state != 0: args.append(current) if len(stack) &gt; 0: raise SyntaxError("Unclosed paren") return args </code></pre> <pre><code>&gt;&gt;&gt; from pprint import pprint &gt;&gt;&gt; pprint(parse('predicate(foo(x.bar, predicate(foo(...), bar)), bar)'), width=1) [('predicate', [('foo', ['x.bar', ('predicate', [('foo', ['...']), 'bar'])]), 'bar'])] </code></pre> <p>It returns a list of all comma-separated top-level expressions. Function calls become a tuple of name and arguments.</p>
1
2016-09-26T14:47:17Z
[ "python", "regex", "first-order-logic" ]
Regex Match equal amount of two characters
39,704,916
<p>I'ld like to match the parameters of any function as a string using regex. As an example lets assume the following string:</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>this may be part of a longer sequence</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>I now want to find all substrings that represent a function/predicate and its parameters (i.e. in the first example the whole string as well as the nested <code>predicate(foo(...), bar)</code>). The problem is that I cant simply match like this</p> <pre><code>predicate\(.*, bar\) </code></pre> <p>as i may then match more than the parameters of the predicate if the <code>*</code> is greedy, or less if it is lazy. Which is because such predicates() can be nested.</p> <p>I need a regex that finds the string <code>predicate(...)</code> where <code>...</code> matches any string that contains an equal amount of <code>(</code>'s and <code>)</code>'s (lazy).</p> <p>If it matters: I am using regex with the re module in python.</p>
1
2016-09-26T13:58:39Z
39,706,795
<p>You can create a regex to find all of the function calls within your code. Something like this:</p> <pre><code>([_a-zA-Z]+)(?=\() </code></pre> <p>Then using the <code>re</code> module, you create a data structure indexing the function calls within your code.</p> <pre><code>import re code = 'predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar)' code_cp = code regex = re.compile(r'([_a-zA-Z]+)(?=\()') matches = re.findall(regex, code) structured_matches = [] for m in matches: beg = str.index(code, m) end = beg + len(m) structured_matches.append((m, beg, end)) code = code[:beg] + '_' * len(m) + code[end:] </code></pre> <p>This will give you a data structure that looks like this:</p> <pre><code>[ ('predicate', 0, 9), ('foo', 10, 13), ('predicate', 21, 30), ('foo', 31, 34), ('predicate', 52, 61), ('foo', 62, 65), ('predicate', 73, 82), ('foo', 83, 86), ('predicate', 104, 113), ('foo', 114, 117), ('predicate', 125, 134), ('foo', 135, 138) ] </code></pre> <p>You can use this data structure in conjunction with a <code>parse</code> function to pull out the contents of each function call's parens.</p> <pre><code>def parse(string): stack = [] contents = '' opened = False for c in string: if len(stack) &gt; 0: contents += c if c == '(': opened = True stack.append('o') elif c == ')': stack.pop() if opened and len(stack) == 0: break return contents[:-1] paren_contents = [] for m in structured_matches: fn_name, beg, end = m paren_contents.append((fn_name, parse(code_cp[end:]))) </code></pre> <p>In the end, <code>paren_contents</code> should look something like this:</p> <pre><code>[ ('predicate', 'foo(x.bar, predicate(foo(...), bar)), bar'), ('foo', 'x.bar, predicate(foo(...), bar)'), ('predicate', 'foo(...), bar'), ('foo', '...'), ('predicate', 'foo(x.bar, predicate(foo(...), bar)), bar'), ('foo', 'x.bar, predicate(foo(...), bar)'), ('predicate', 'foo(...), bar'), ('foo', '...'), ('predicate', 'foo(x.bar, predicate(foo(...), bar)), bar'), ('foo', 'x.bar, predicate(foo(...), bar)'), ('predicate', 'foo(...), bar'), ('foo', '...') ] </code></pre> <p>Hopefully this points you in the right direction.</p>
1
2016-09-26T15:30:29Z
[ "python", "regex", "first-order-logic" ]
Regex Match equal amount of two characters
39,704,916
<p>I'ld like to match the parameters of any function as a string using regex. As an example lets assume the following string:</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>this may be part of a longer sequence</p> <pre><code>predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar)predicate(foo(x.bar, predicate(foo(...), bar)), bar) </code></pre> <p>I now want to find all substrings that represent a function/predicate and its parameters (i.e. in the first example the whole string as well as the nested <code>predicate(foo(...), bar)</code>). The problem is that I cant simply match like this</p> <pre><code>predicate\(.*, bar\) </code></pre> <p>as i may then match more than the parameters of the predicate if the <code>*</code> is greedy, or less if it is lazy. Which is because such predicates() can be nested.</p> <p>I need a regex that finds the string <code>predicate(...)</code> where <code>...</code> matches any string that contains an equal amount of <code>(</code>'s and <code>)</code>'s (lazy).</p> <p>If it matters: I am using regex with the re module in python.</p>
1
2016-09-26T13:58:39Z
39,731,679
<p>Adding the <a href="https://pypi.python.org/pypi/regex" rel="nofollow">PyPI package regex</a>, as @Tim Pietzcker suggested, you can use <a href="http://www.regular-expressions.info/recurse.html" rel="nofollow">recursive regexes</a>.</p> <pre><code>&gt;&gt;&gt; import regex &gt;&gt;&gt; s = 'predicate(foo(x.bar, predicate(foo(...), bar)), bar)' &gt;&gt;&gt; pattern = regex.compile(r'(\w+)(?=\(((?:\w+\((?2)\)|[^()])*)\))') &gt;&gt;&gt; pattern.findall(s) [('predicate', 'foo(x.bar, predicate(foo(...), bar)), bar'), ('foo', 'x.bar, predicate(foo(...), bar)'), ('predicate', 'foo(...), bar'), ('foo', '...')] </code></pre> <p>You could also constrain it to look for just "predicate":</p> <pre><code>&gt;&gt;&gt; pattern = regex.compile(r'(predicate)(?=\(((?:\w+\((?2)\)|[^()])*)\))') &gt;&gt;&gt; pattern.findall(s) [('predicate', 'foo(x.bar, predicate(foo(...), bar)), bar'), ('predicate', 'foo(...), bar')] </code></pre>
1
2016-09-27T18:16:57Z
[ "python", "regex", "first-order-logic" ]
Disable SSL certificate check Twisted Agents
39,704,932
<p>I am using Twisted (16.3) and Treq (15.1) to make async requests in Python (2.7). </p> <p>I am having issues with some requests over HTTPS.</p> <p>Some sites have an invalid certificate and thus when making a request to them, I get this:</p> <pre><code>twisted.python.failure.Failure OpenSSL.SSL.Error </code></pre> <p>I want my client to trust any server, including those without certificates or with self signed certificates.</p> <p>How can I disable certificate checks on my client?</p> <p>This is a question essentially identical to mine: <a href="http://stackoverflow.com/questions/34357439/ssl-options-for-twisted-agents">SSL Options for Twisted Agents</a></p> <p>Thanks!</p>
0
2016-09-26T13:59:49Z
39,709,525
<p>I've been trying to do this over the past few days as well. With all the effort I put into circumventing certificate verification, I could've easily just created a pair of keys and been on my merry way :D. I found <a href="https://github.com/twisted/treq/issues/65#issuecomment-249034981" rel="nofollow">this comment</a> on the <code>treq</code> issues board that monkeypatches the issue:</p> <pre><code>from twisted.internet import _sslverify _sslverify.platformTrust = lambda : None </code></pre> <p>I'm sure there's a convoluted way of doing it "correctly" but it wouldn't be worth the effort in my opinion. I did a patch such that it wouldn't override <code>platformTrust()</code> and I'll try to get it merged but I wouldn't hold my breath. From the tone of some of the bug comments I've seen pertaining to trust roots, ssl, and certificates, I don't think it will get merged. Hope this helps though.</p>
0
2016-09-26T18:07:41Z
[ "python", "ssl", "twisted", "twisted.web", "twisted.internet" ]
xlsxwriter conditional format: find the exactly value
39,704,954
<p>this is my code:</p> <pre><code> import xlsxwriter for val in arr: sheet.conditional_format('B2:B2000', {'type': 'text', 'criteria': 'containingwith', 'value': val, 'format': formatOk}) </code></pre> <p>and my question is: how i get the exactly value. for example:<br> if my values in excel are: aaa,aab,aa. and my val is:aa, so the result will be only aa. and other question: how i get the number of my column in this part: B2:B2000, and no writing a random value. thanks.</p>
1
2016-09-26T14:00:45Z
39,707,439
<p>Something like the following should work:</p> <pre><code>worksheet1.conditional_format('B2:K12', {'type': 'cell', 'criteria': '==', 'value': '"aa"', 'format': format1}) </code></pre> <p><strong>Update</strong>: Working example:</p> <pre><code>import xlsxwriter workbook = xlsxwriter.Workbook('conditional_format.xlsx') worksheet = workbook.add_worksheet() # Add a format. my_format = workbook.add_format({'bg_color': '#FFC7CE', 'font_color': '#9C0006'}) # Add some data to the sheet. data = [ ['Foo', 'Mood', 'Fool', 'Foo'], ['Boo', 'Mood', 'Bool', 'Boo'], ['Goo', 'Good', 'Gool', 'Foo'], ['Foo', 'Food', 'Fool', 'Zoo'], ] for row, row_data in enumerate(data): worksheet.write_row(row, 0, row_data) # Add a conditional format. worksheet.conditional_format('A1:D4', {'type': 'cell', 'criteria': '==', 'value': '"Foo"', 'format': my_format}) # Close the file. workbook.close() </code></pre> <p>Output:</p> <p><a href="http://i.stack.imgur.com/zlx5N.png" rel="nofollow"><img src="http://i.stack.imgur.com/zlx5N.png" alt="enter image description here"></a></p>
0
2016-09-26T16:03:30Z
[ "python" ]
Dictionary Print Out All Word Of What I Type In
39,704,969
<p>I need to create a program which prints out all the word in the dictionary that I wrote, So let's say I write house, It could come out like</p> <blockquote> <p>home, house, high, hollow, out, over, overrated, on, united, untitled, urgent, song, silent, silence, sorry, emotion, elf, egg, end</p> </blockquote> <p>So it gets all the word out of my dictionary with the word I wrote in, here is the code.</p> <pre><code>#User types in a word word = input("Type in a word: ") #initiate the dictionary struct. dictionary = {} #run through the dictionary file, one line at a time. with open("dict.txt") as dict_filehandle: for cword in dict_filehandle: #here a single line has been placed into the variable cword. #no newlines are allowed in the words. cword=cword.replace("\n",""); #give the given word the value 1. dictionary[cword]=1; #here we have a dictionary struct. #here we push through all the words in the dictionary print(dictionary["house"]) </code></pre> <p>In the code I have above, I made it search "house" and it shows that it's true by getting the number 1, I don't know how to do the thing I said above but I need to be able to do it since it's a school project and my teacher recommended for us to look on the internet so I decided to just ask people.</p>
-3
2016-09-26T14:01:24Z
39,706,673
<p>Maybe not a crisp solution, but an approach and an explanation that is too long for a comment:</p> <p>Before you can get all the words that match a particular word someone gives to your program, you need to classify the words in your file into categories and establish mappings. That is, it should be established somewhere, preferably in your data file, which words an entry can map to. This, however, is a bit of a complex task. As you can see, the relationship between a given word and the other words you want it to refer to needs to be modeled through some logic, and building this logic is the entirety of your problem. If what you need is standard synonyms for a given word, I can suggest something like the <a href="https://www.wordsapi.com/" rel="nofollow">words api</a>. I am not really aware of how their api works or what the usage limits are. If you do a bit of searching, you might be able to find an open source solution as well. Good luck!</p>
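<p>If, on the other hand, the mapping is as simple as your example output suggests (every dictionary word that starts with one of the letters of the typed word), no external service is needed and a plain pass over your dictionary struct would do. A minimal sketch of that interpretation:</p> <pre><code>word = input("Type in a word: ")

# Keep every dictionary word whose first letter occurs in the typed word.
matches = [w for w in dictionary if w and w[0] in word]
print(", ".join(sorted(matches)))
</code></pre>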
1
2016-09-26T15:24:42Z
[ "python", "project" ]
Looping over pandas data frame applying formula to each value
39,705,022
<p>I have the following pandas data frame:</p> <pre><code> PC1 PC2 PC3 PC4 PC5 PC6 PC7 ind NA06984 -0.0082 -0.0594 -0.0148 -0.0569 -0.1128 -0.0276 -0.0217 NA06986 -0.0131 -0.0659 -0.0426 0.0654 0.0473 0.0603 -0.0454 NA06989 -0.0073 -0.0551 -0.0457 0.0971 -0.0051 -0.0123 0.0035 NA06994 -0.0051 -0.0599 -0.0239 0.0930 0.0765 0.0321 0.0392 NA07000 -0.0046 -0.0362 0.0006 -0.0639 -0.0197 -0.0132 0.0631 NA07037 -0.0132 -0.0600 -0.0252 -0.0381 -0.0091 0.0005 0.0235 NA07048 -0.0128 -0.0653 -0.0234 -0.0417 0.0233 0.1034 0.0180 NA07051 -0.0028 -0.0591 -0.0117 -0.0791 -0.0387 0.0102 -0.0840 NA07056 -0.0121 -0.0389 0.0113 -0.0754 0.0226 -0.0304 -0.0490 NA07347 -0.0192 -0.0441 -0.0588 0.1099 -0.0414 0.0505 0.0295 NA07357 -0.0100 -0.0360 -0.0268 -0.0621 -0.0737 -0.0090 0.0379 </code></pre> <p>and I would like to standardize the distributions of each column, i.e. applying the formula </p> <p>column_i[row_j] - column_i.mean()) / column_i.std() </p> <p>for every value in every column, and substitute the original data frame with these values.</p> <p>So far I have come up with </p> <pre><code>for index, row in evec_pandas.iterrows(): new_row = None evec_pandas.loc[index,'PC1'] = (row['PC1'] - evec_pandas['PC1'].mean()) / evec_pandas['PC1'].std() print evec_pandas </code></pre> <p>but the results are</p> <pre><code> PC1 PC2 PC3 PC4 PC5 PC6 PC7 ind NA06984 0.343471 -0.0594 -0.0148 -0.0569 -0.1128 -0.0276 -0.0217 NA06986 -0.330077 -0.0659 -0.0426 0.0654 0.0473 0.0603 -0.0454 NA06989 -0.003975 -0.0551 -0.0457 0.0971 -0.0051 -0.0123 0.0035 NA06994 0.008607 -0.0599 -0.0239 0.0930 0.0765 0.0321 0.0392 NA07000 0.003659 -0.0362 0.0006 -0.0639 -0.0197 -0.0132 0.0631 NA07037 -0.058300 -0.0600 -0.0252 -0.0381 -0.0091 0.0005 0.0235 NA07048 -0.028319 -0.0653 -0.0234 -0.0417 0.0233 0.1034 0.0180 NA07051 0.046818 -0.0591 -0.0117 -0.0791 -0.0387 0.0102 -0.0840 NA07056 -0.043817 -0.0389 0.0113 -0.0754 0.0226 -0.0304 -0.0490 NA07347 -0.071195 -0.0441 -0.0588 0.1099 -0.0414 0.0505 0.0295 NA07357 0.019495 -0.0360 -0.0268 -0.0621 -0.0737 -0.0090 0.0379 </code></pre> <p>The first value is correct (0.343471), but the rest of the values in the PC1 column are not, and of course the rest of columns have no changes. If I use:</p> <pre><code>for index, row in evec_pandas.iterrows(): new_row = None new_row = (row['PC1'] - evec_pandas['PC1'].mean()) / evec_pandas['PC1'].std() print new_row </code></pre> <p>I do obtain the PC1 column as it should be, but as an independent object, not inside the data frame:</p> <pre><code>0.343471311655 -0.673732188246 0.530304607555 0.987008219756 1.09080449526 -0.694491443346 -0.611454422946 1.46447108706 -0.466139637246 -1.94004674935 -0.0301952801455 </code></pre> <p>So I need to substitute PC1 with these values, and then do the same for each column; I had thought of something like</p> <pre><code>for index, column in evec_pandas.iteritems(): for index, row in evec_pandas.iterrows(): new_row = None evec_pandas.loc[index,column] = (row[column] - evec_pandas[column].mean()) / evec_pandas[column].std() </code></pre> <p>But I understand it won't work like this. 
Any ideas?</p> <p>The desired output would be:</p> <pre><code> PC1 PC2 PC3 PC4 PC5 PC6 PC7 NA06984 0.34347131 -0.5760881 0.439607045 -0.6710009 -1.8594019 -1.0130591 -0.50633142 NA06986 -0.67373219 -1.1365003 -0.929352573 0.9013689 1.0906816 1.0794999 -1.02745500 NA06989 0.53030461 -0.2053539 -1.082006343 1.3089251 0.1251327 -0.6488253 0.04777466 NA06994 0.98700822 -0.6191967 -0.008505635 1.2562128 1.6287356 0.4081670 0.83275827 NA07000 1.09080450 1.4241525 1.197951582 -0.7609975 -0.1438943 -0.6702508 1.35827952 NA07037 -0.69449144 -0.6278185 -0.072521733 -0.4292956 0.0514267 -0.3441068 0.48754139 NA07048 -0.61145442 -1.0847700 0.016115941 -0.4755796 0.6484455 2.1055441 0.36660554 NA07051 1.46447109 -0.5502229 0.592260816 -0.9564188 -0.4939979 -0.1131873 -1.87620479 NA07056 -0.46613964 1.1913658 1.724853306 -0.9088491 0.6355469 -1.0797163 -1.10661301 NA07347 -1.94004675 0.7430361 -1.727091631 1.4734904 -0.5437494 0.8461998 0.61947141 NA07357 -0.03019528 1.4413959 -0.151310775 -0.7378555 -1.1389255 -0.5702651 0.80417343 </code></pre>
2
2016-09-26T14:03:33Z
39,705,072
<p>You can just do the following:</p> <pre><code>In [19]: (df - df.mean())/df.std() Out[19]: PC1 PC2 PC3 PC4 PC5 PC6 PC7 ind NA06984 0.343471 -0.576088 0.439607 -0.671001 -1.859402 -1.013059 -0.506331 NA06986 -0.673732 -1.136500 -0.929353 0.901369 1.090682 1.079500 -1.027455 NA06989 0.530305 -0.205354 -1.082006 1.308925 0.125133 -0.648825 0.047775 NA06994 0.987008 -0.619197 -0.008506 1.256213 1.628736 0.408167 0.832758 NA07000 1.090804 1.424152 1.197952 -0.760998 -0.143894 -0.670251 1.358280 NA07037 -0.694491 -0.627818 -0.072522 -0.429296 0.051427 -0.344107 0.487541 NA07048 -0.611454 -1.084770 0.016116 -0.475580 0.648445 2.105544 0.366606 NA07051 1.464471 -0.550223 0.592261 -0.956419 -0.493998 -0.113187 -1.876205 NA07056 -0.466140 1.191366 1.724853 -0.908849 0.635547 -1.079716 -1.106613 NA07347 -1.940047 0.743036 -1.727092 1.473490 -0.543749 0.846200 0.619471 NA07357 -0.030195 1.441396 -0.151311 -0.737856 -1.138926 -0.570265 0.804173 </code></pre> <p>This will operate on the whole df so there is no need to iterate over rows/columns</p>
3
2016-09-26T14:06:29Z
[ "python", "loops", "pandas", "dataframe" ]
Combine json file with CSV- similar to vlookup
39,705,088
<p>In plain terms, I'm trying to combine two sets of data. I'm open to use grep/bash or python.</p> <ol> <li><p>Read directory /mediaid</p></li> <li><p>Read the .json files' filename</p></li> <li><p>if the .json file name matches a row in the .csv, copy the contents of the json file in that row (if not, just skip)</p></li> </ol> <p>INPUT DATA</p> <p><strong>File1.csv</strong></p> <pre><code>testentry, 1234 testentry1, 6789 </code></pre> <p>INPUT DATA (the filename is the MEDIAID to check)</p> <p><strong>1234.json</strong></p> <pre><code>[ {"id":"1", "text":"Nice man!"}, {"id":"2", "text":"Good job"} ] </code></pre> <p><strong>6789.json</strong></p> <pre><code>[ {"id":"1", "text":"Test1"}, {"id":"2", "text":"Test2"} ] </code></pre> <p><strong>DESIRED OUTPUT DATA .csv</strong></p> <pre><code>testentry, 1234, Nice man!, Good job testentry1, 6789, Test1, Test2 </code></pre> <p>My attempt using GREP is in progress, but I can't get the json file names to be checked and pass the data from them. </p> <pre><code>#!/usr/bin/env bash indir="$HOME/indir" outdir="$HOME/outdir" cd "$indir" || exit mkdir -p "$outdir" || exit for f in *.csv; do [[ -f $f ]] || continue lines=() while IFS=, read -ra cols; do if (( ${#cols[@]} != 2 )); then echo "Sorry buddy, you'll have to use a real CSV parser to handle: $f" &gt;&amp;2 exit 1 fi # Does the basename match the contents of the first column? if [[ ${cols[0]} == "${f%.*}" ]]; then echo "Match found in $f" fi lines+=("${cols[0]},${cols[1]}") done &lt;"$f" # something with JQ to read the json filename, and pass its data into the row printf '%s\n' "${lines[@]}" &gt; "$outdir/$f" || exit done </code></pre> <p>A failed, but slightly better attempt in Python:</p> <pre><code>import csv import json path_to_json = 'somedir/' json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')] print json_files # with open(json_files) as lookuplist: # IT NEEDS to match the mediaID from the json FILENAME with open('file1.csv', "r") as csvinput: with open('VlookupOut','w') as output: reader = csv.reader(lookuplist) reader2 = csv.reader(csvinput) writer = csv.writer(output) d = {} for xl in reader2: d[xl[2]] = xl[3:] for i in reader: if i[4] in d: i.append(d[i[4]]) writer.writerow(i) </code></pre>
1
2016-09-26T14:07:16Z
39,705,777
<p>This provides your required output:</p> <pre class="lang-bsh prettyprint-override"><code>for file in /mediaid/*; do while read -r entry fileid; do jsonfile="$fileid.json" if [[ -f "$jsonfile" ]]; then text=$(jq -r 'map(.text) | join(", ")' "$jsonfile") echo "$entry $fileid, $text" fi done &lt; "$file" done &gt; output.csv </code></pre> <p>Uses <a href="/questions/tagged/jq" class="post-tag" title="show questions tagged &#39;jq&#39;" rel="tag">jq</a> to parse the JSON files</p>
1
2016-09-26T14:41:06Z
[ "python", "json", "bash", "csv", "grep" ]
Adjusting nested dictionary depending on provided key values
39,705,093
<p>I would like to create the following dictionary based on various parameters given by <code>params</code>. Now I do not want to add keys which provide no value. So for instance if <code>params['ean']</code> is blank or does not exist I want to exclude the entire line <code>"EAN": params['ean']</code>. Any suggestions are highly welcome.</p> <pre><code>myitem = { "Item": { "Title": params['title'], "Description": "Some Text", "PrimaryCategory": {"CategoryID": params['category_id']}, "ISBN": params['isbn'], "EAN": params['ean'] } } </code></pre> <p>Please note that that params will not have the same keys like myitem.</p>
0
2016-09-26T14:07:30Z
39,705,260
<p>I don't think I understand the question perfectly, but how about something like this:</p> <pre><code>myitem = {} myitem['Item'] = {"Description": "Some Text"} for keys in ['title', 'category_id', 'isbn', 'ean']: if params.get(keys): myitem['Item'][keys.upper()] = params[keys] </code></pre> <p>which produces:</p> <pre><code>myitem = {"Item": {"TITLE": params['title'], "Description": "Some Text", "CATEGORY_ID": params['category_id'], "ISBN": params['isbn'], "EAN": params['ean'] } } </code></pre> <p>(Using <code>params.get(keys)</code> instead of <code>params[keys]</code> means missing keys are skipped as well as empty ones, and a plain key assignment is used because dicts have no <code>append</code> method.) Notice that at this stage the category id does not match your specifications exactly, but that can be easily <em>fixed</em> using the following:</p> <pre><code>myitem["Item"]["PrimaryCategory"] = {"CategoryID": myitem["Item"].pop("CATEGORY_ID")} </code></pre> <p>which takes advantage of <code>pop</code>'s return value: it removes the <code>"CATEGORY_ID": params['category_id']</code> key value pair and re-adds it nested under <code>"PrimaryCategory"</code>, leaving you with the desired result (hopefully).</p>
1
2016-09-26T14:15:20Z
[ "python", "dictionary", "conditional" ]
Adjusting nested dictionary depending on provided key values
39,705,093
<p>I would like to create the following dictionary based on various parameters given by <code>params</code>. Now I do not want to add keys which provide no value. So for instance if <code>params['ean']</code> is blank or does not exist I want to exclude the entire line <code>"EAN": params['ean']</code>. Any suggestions are highly welcome.</p> <pre><code>myitem = { "Item": { "Title": params['title'], "Description": "Some Text", "PrimaryCategory": {"CategoryID": params['category_id']}, "ISBN": params['isbn'], "EAN": params['ean'] } } </code></pre> <p>Please note that that params will not have the same keys like myitem.</p>
0
2016-09-26T14:07:30Z
39,705,334
<p><strong>If you can change the way you create the dict</strong> just use a dict comprehension:</p> <pre><code>myitem = {k: v for k, v in params.items() if v} </code></pre> <p>If you already have the dict in the first place and you <strong>can't change the initialization</strong> of it, you can delete the empty values like this (the <code>list()</code> copy is needed so the dict isn't modified while it is being iterated):</p> <pre><code>for k, v in list(myitem.items()): if not v: del myitem[k] </code></pre>
1
2016-09-26T14:18:28Z
[ "python", "dictionary", "conditional" ]
Adjusting nested dictionary depending on provided key values
39,705,093
<p>I would like to create the following dictionary based on various parameters given by <code>params</code>. Now I do not want to add keys which provide no value. So for instance if <code>params['ean']</code> is blank or does not exist I want to exclude the entire line <code>"EAN": params['ean']</code>. Any suggestions are highly welcome.</p> <pre><code>myitem = { "Item": { "Title": params['title'], "Description": "Some Text", "PrimaryCategory": {"CategoryID": params['category_id']}, "ISBN": params['isbn'], "EAN": params['ean'] } } </code></pre> <p>Please note that that params will not have the same keys like myitem.</p>
0
2016-09-26T14:07:30Z
39,705,516
<p>I think the following code does what you are looking for.</p> <pre><code>d = {} def addItem(name, description, params): d[name] = {} if 'title' in params: d[name]['Title'] = params['title'] if 'category_id' in params: d[name]['PrimaryCategory'] = {"CategoryID" : params['category_id']} if 'isbn' in params: d[name]['ISBN'] = params['isbn'] if 'ean' in params: d[name]['EAN'] = params['ean'] d[name]['Description'] = description params1 = {'title' : 'Boh'} addItem('item1', 'item1 description', params1) params2 = {'title' : 'Boh 2', 'isbn' : '12345'} addItem('item2', 'item2 description', params2) print d </code></pre>
1
2016-09-26T14:28:39Z
[ "python", "dictionary", "conditional" ]
Python - Merge data from multiple thread instances
39,705,131
<p>I am currently working on a project that involves connecting two devices to a python script, retrieving data from them and outputting the data. </p> <p>Code outline:</p> <p>• Scans for paired devices</p> <p>• Paired device found creates thread instance (Two devices connected = two thread instances )</p> <p>• Data is printed within the thread i.e. each instance has a separate bundle of data</p> <p>Basically when two devices are connected two instances of my thread class is created. Each thread instance returns a different bundle of data.</p> <p>My question is: Is there a way I can combine the two bundles of data into one bundle of data?</p> <p>Any help on this is appreciated :)</p>
0
2016-09-26T14:09:12Z
39,705,375
<p>My general advice: avoid using any of these things:</p> <ul> <li>avoid threads</li> <li>avoid the multiprocessing module in Python</li> <li>avoid the futures module in Python.</li> </ul> <p>Use a tool like <a href="http://python-rq.org/" rel="nofollow">http://python-rq.org/</a></p> <p>Benefits:</p> <ul> <li>You need to define the input and output data well, since only serializable data can be passed around</li> <li>You have distinct interpreters.</li> <li>No deadlocks</li> <li>Easier to debug.</li> </ul>
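<p>A minimal sketch of how that could look with RQ (assuming a local Redis server; <code>read_device</code> and the device ids are placeholders, not part of your code):</p> <pre><code>from redis import Redis
from rq import Queue

from mydevices import read_device   # placeholder: your own function returning serializable data

q = Queue(connection=Redis())

# One job per paired device; RQ workers run them in separate interpreters.
jobs = [q.enqueue(read_device, device_id) for device_id in ('dev-1', 'dev-2')]

# Later, once the jobs have finished, merge the two bundles:
combined = []
for job in jobs:
    combined.extend(job.result)
</code></pre>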
-1
2016-09-26T14:20:27Z
[ "python", "multithreading" ]
Python - Merge data from multiple thread instances
39,705,131
<p>I am currently working on a project that involves connecting two devices to a python script, retrieving data from them and outputting the data. </p> <p>Code outline:</p> <p>• Scans for paired devices</p> <p>• Paired device found creates thread instance (Two devices connected = two thread instances )</p> <p>• Data is printed within the thread i.e. each instance has a separate bundle of data</p> <p>Basically when two devices are connected two instances of my thread class is created. Each thread instance returns a different bundle of data.</p> <p>My question is: Is there a way I can combine the two bundles of data into one bundle of data?</p> <p>Any help on this is appreciated :)</p>
0
2016-09-26T14:09:12Z
39,705,451
<p>I assume you are using the <code>threading</code> module.</p> <h2>Threading in Python</h2> <p>Python is not multithreaded for CPU activity. The interpreter still uses a GIL (Global Interpreter Lock) for most operations, which effectively serializes the operations in a Python script. Threading is good for I/O, however, as other threads can be woken up while one thread waits for I/O.</p> <h2>Idea</h2> <p>Because of the GIL we can just use a standard list to combine our data. The idea is to pass the same list or dictionary to every Thread we create using the <code>args</code> parameter. See <a href="https://docs.python.org/2/library/threading.html" rel="nofollow">pydoc for threading</a>.</p> <p>Our simple implementation uses two Threads to show how it can be done. In real-world applications you would probably use a Thread group or something similar.</p> <h2>Implementation</h2> <pre><code>from threading import Thread def worker(data): # retrieve data from device data.append(1) data.append(2) l = [] # Let's pass our list to the target via args. a = Thread(target=worker, args=(l,)) b = Thread(target=worker, args=(l,)) # Start our threads a.start() b.start() # Join them and print result a.join() b.join() print(l) </code></pre> <h2>Further thoughts</h2> <p>If you want to be 100% correct and not rely on the GIL to linearize access to your list, you can use a simple mutex to lock and unlock, or use the <a href="https://docs.python.org/2/library/queue.html#module-Queue" rel="nofollow">Queue module</a> which implements correct locking.</p> <p>Depending on the nature of the data a dictionary might be more convenient to join data by certain keys.</p> <h2>Other considerations</h2> <p>Threads should be carefully considered. Alternatives such as <code>asyncio</code> might be better suited.</p>
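<p>For completeness, the same idea with the Queue module (which does its own locking); this is a sketch using the Python 3 module name <code>queue</code> (it is <code>Queue</code> in Python 2):</p> <pre><code>from queue import Queue
from threading import Thread

def worker(out_q):
    # Put the bundle retrieved from one device onto the shared queue.
    out_q.put([1, 2])

q = Queue()
threads = [Thread(target=worker, args=(q,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

combined = []
while not q.empty():
    combined.extend(q.get())
print(combined)
</code></pre>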
3
2016-09-26T14:24:46Z
[ "python", "multithreading" ]
loss function as min of several points, custom loss function and gradient
39,705,175
<p>I am trying to predict quality of metal coil. I have the metal coil with width 10 meters and length from 1 to 6 kilometers. As training data I have ~600 parameters measured each 10 meters, and final quality control mark - good/bad (for whole coil). Bad means there is at least 1 place there is coil is bad, there is no data where is exactly. I have data for approx 10000 coils.</p> <p>Lets imagine we want to train logistic regression for this data(with 2 factors).</p> <pre><code>X = [[0, 0], ... [0, 0], [1, 1], # coil is actually broken here, but we don't know it yet. [0, 0], ... [0, 0]] Y = ????? </code></pre> <p>I can't just put all "bad" in Y and run classifier, because I will be confusing for classifier. I can't put all "good" and one "bad" in Y becuase I don't know where is the bad position. </p> <p>The solution I have in mind is the following, I could define loss function as <strong>sum( (Y-min(F(x1,x2)))^2 )</strong> (min calculated by all F belonging to one coil) not <strong>sum( (Y-F(x1,x2))^2 )</strong>. In this case probably I get F trained correctly to point bad place. I need gradient for that, it there is impossible to calculate it in all points, the min is not differentiable in all places, but I could use weak gradient instead(using values of functions which is minimal in coil in every place). </p> <p>I more or less know how to implement it myself, the question is what is the simplest way to do it in python with scikit-learn. Ideally it should be same (or easily adaptable) with several learning method(a lot of methods based on loss function and gradient), is where possible to make some wrapper for learning methods which works this way? </p> <p><strong>update</strong>: looking at gradient_boosting.py - there is internal abstract class LossFunction with ability to calculate loss and gradient, looks perspective. Looks like there is no common solution.</p>
2
2016-09-26T14:11:10Z
39,712,223
<p>What you are considering here is known in the machine learning community as <strong>superset learning</strong>, meaning that instead of the typical supervised setting, where you have a training set of the form {(x_i, y_i)}, you have {({x_1, ..., x_N}, y_1)}, such that you know that at least one element from the set has property y_1. This is not a very common setting, but it does exist and there is some research available; google for papers in the domain.</p> <p>In terms of your own loss functions, scikit-learn is a no-go. Scikit-learn is about simplicity; it provides you with a small set of ready-to-use tools with very little flexibility. It is not a research tool, and your problem is researchy. What can you use instead? I suggest you go for any symbolic-differentiation solution, for example <a href="https://github.com/HIPS/autograd" rel="nofollow">autograd</a>, which gives you the ability to <strong>differentiate through python code</strong>; simply apply <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow">scipy.optimize.minimize</a> on top of it and you are done! Any custom loss function will work just fine.</p> <p>As a side note, the minimum operator is not differentiable, so the model might have a hard time figuring out what is going on. You could instead try <code>sum((Y - prod_x F(x_1, x_2) )^2)</code> since multiplication is nicely differentiable, and you will still get a similar effect - if at least one element is predicted to be 0 it will remove any "1" answer from the remaining ones. You can even go one step further to make it more numerically stable and do:</p> <pre><code>if Y==0 then loss = sum_x log(F(x_1, x_2 ) ) if Y==1 then loss = sum_x log(1-F(x_1, x_2)) </code></pre> <p>which translates to</p> <pre><code>Y * sum_x log(1-F(x_1, x_2)) + (1-Y) * sum_x log( F(x_1, x_2) ) </code></pre> <p>You can notice the similarity with the cross-entropy cost, which makes perfect sense since your problem is indeed a <strong>classification</strong>. And now you have a proper probabilistic loss - you are attaching to each segment a probability of being "bad" or "good", so that the probability of the whole object being bad is either high (if Y==0) or low (if Y==1).</p>
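<p>A minimal sketch of what the autograd + scipy.optimize.minimize route could look like for a per-segment logistic model (variable names and the sign convention are my own assumptions; adjust them to your data and to whichever of the two conventions above you settle on):</p> <pre><code>import autograd.numpy as np
from autograd import grad
from scipy.optimize import minimize

# coils: list of (X, y) pairs, X of shape (n_segments, n_features), y in {0, 1} per coil
def objective(w, coils):
    total = 0.0
    for X, y in coils:
        p = 1.0 / (1.0 + np.exp(-np.dot(X, w)))          # per-segment probability F(x)
        total += y * np.sum(np.log(1.0 - p)) + (1.0 - y) * np.sum(np.log(p))
    return -total   # assumed: maximize the expression above by minimizing its negative

# w0 = np.zeros(n_features)
# res = minimize(objective, w0, args=(coils,), jac=grad(objective), method='L-BFGS-B')
</code></pre>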
1
2016-09-26T20:55:14Z
[ "python", "machine-learning", "scikit-learn", "gradient-descent" ]
Add annotation in an interactive plot
39,705,211
<p>I'm trying to add an annotation in the middle of my interactive plot. I would like to see the <em>i</em> value of the loop that generates my <em>test</em> list with all my data. For each imshow plot I would like to see my <em>i</em> value; I added an ax.annotate but it doesn't work.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig = plt.figure() # make figure ax = fig.add_subplot(111) test = [] mask2 = np.random.randint(255, size=(20, 20)) for i in range(1,5,3): kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(i,i)) res = (cv2.morphologyEx(mask2.astype(uint8),cv2.MORPH_OPEN,kernel)) #plt.imshow(res,cmap=plt.cm.gray,alpha=1);plt.show() test.append(res) # make axesimage object # the vmin and vmax here are very important to get the color map correct im = ax.imshow(test[0], cmap=plt.get_cmap('hot'), vmin=0, vmax=255) im2 = ax.annotate('This is awesome!', xy=(76, -10.75), xycoords='data', textcoords='offset points', arrowprops=dict(arrowstyle="-&gt;")) plt.show() # function to update figure def updatefig(j): # set the data in the axesimage object im.set_array(test[j]) # return the artists set return im, # kick off the animation ani = animation.FuncAnimation(fig, updatefig, frames=range(20), interval=50, blit=True) plt.show() </code></pre>
0
2016-09-26T14:12:32Z
39,752,300
<p>I have found a way out. I added a "set_text" inside my update function and i return picture and text : </p> <pre><code> test = [] test2 = [] for i in range(3,27,3): kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(i,i)) res = (cv2.morphologyEx(mask2,cv2.MORPH_OPEN,kernel)) #plt.imshow(res,cmap=plt.cm.gray,alpha=1);plt.show() test.append(res) test2.append(i) fig = plt.figure() # make figure ax = fig.add_subplot(111) # make axesimage object # the vmin and vmax here are very important to get the color map correct im = ax.imshow(test[0], cmap=plt.get_cmap('hot'), vmin=0, vmax=255) time_template = 'Diffusion - Kernel size : %2.2d' # prints running simulation time txt = ax.text(500, 80, '', fontsize=15,color='red') #plt.show() # function to update figure def updatefig(j): # set the data in the axesimage object im.set_array(test[j]) txt.set_text(time_template%(float(np.asarray(test2[j])))) return im,txt ani = animation.FuncAnimation(fig, updatefig, frames=range(len(test)), interval=100, blit=False,repeat=True) plt.show() </code></pre>
0
2016-09-28T15:53:39Z
[ "python", "numpy", "matplotlib", "annotations", "interactive" ]
Django REST framework without model
39,705,370
<p>I want to use Django REST framework to create an API to call different methods. I read the guide of [django-rest-framework][1] to work with this framework, but I still have some questions.</p> <p>I have no model I get my data from an external database. I want to try first something simple:</p> <ul> <li>Get all list of project</li> <li>Get the data from one project</li> </ul> <p>For that I create new app I include in the setting file and in my view.py I include that for the fist case</p> <pre><code>def connect_database(): db = MySQLdb.connect(host='...', port=, user='...', passwd='...', db='...') try: cursor = db.cursor() cursor.execute('SELECT * FROM proj_cpus') columns = [column[0] for column in cursor.description] # all_rows = cursor.fetchall() all_rows = [] for row in iter_row(cursor): all_rows.append(dict(zip(columns, row))) finally: db.close() return all_rows def iter_row(cursor, size= 1000): while True: results = cursor.fetchmany(size) if not results: break for item_result in results: yield item_result class cpuProjectsViewSet(viewsets.ViewSet): serializer_class = serializers.cpuProjectsSerializer def list(self, request): all_rows = connect_database() name_project = [] for item_row in all_rows: name_project.append(item_row['project']) name_project = list(sorted(set(name_project))) serializer = serializers.cpuProjectsSerializer(instance=name_project, many=False) return Response(serializer.data) </code></pre> <p>my serializers file I have this</p> <pre><code>class cpuProjectsSerializer(serializers.Serializer): project = serializers.CharField(max_length=256) def update(self, instance, validated_data): instance.project = validated_data.get('project', instance.project) return instance </code></pre> <p>Now when I execute this <code>http://127.0.0.1:8000/hpcAPI</code></p> <p>I obtain this error </p> <pre><code>Got AttributeError when attempting to get a value for field `project` on serializer `cpuProjectsSerializer`. The serializer field might be named incorrectly and not match any attribute or key on the `list` instance. Original exception text was: 'list' object has no attribute 'project'. </code></pre> <p>I look for in google and I change this </p> <p><code>serializers.cpuProjectsSerializer(instance=name_project, many=False)</code> for</p> <pre><code>serializers.cpuProjectsListSerializer(instance=name_project, many=False) </code></pre> <p>but I obtain the same error! </p> <p>any idea about that!</p> <p>Thanks in adavances</p>
-1
2016-09-26T14:20:09Z
39,708,239
<p>From docs <a href="http://www.django-rest-framework.org/tutorial/1-serialization/#creating-a-serializer-class" rel="nofollow">here</a>.You don't have to have a model for create a Serializer class.You can define some serializers field then use them. You should not import CPUProjectsViewSet and also define it in below</p> <pre><code>from mysite.hpcAPI.serializers import CPUProjectsViewSet class CPUProjectsViewSet(viewsets.ViewSet): """ return all project name """ all_rows = connect_database() </code></pre>
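<p>For completeness, here is a rough sketch (assumed names only) of what "serializer fields without a model" can look like for this case. Note that the traceback in the question comes from handing a plain Python list of strings to the serializer; wrapping each name in a dict and using <code>many=True</code> avoids it:</p>
<pre><code>from rest_framework import serializers, viewsets
from rest_framework.response import Response

class CPUProjectSerializer(serializers.Serializer):
    # plain field, no model behind it
    project = serializers.CharField(max_length=256)

class CPUProjectsViewSet(viewsets.ViewSet):
    def list(self, request):
        rows = connect_database()                 # helper from the question
        names = sorted({row['project'] for row in rows})
        data = [{'project': name} for name in names]
        serializer = CPUProjectSerializer(data, many=True)   # many=True for a list
        return Response(serializer.data)
</code></pre>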
0
2016-09-26T16:49:35Z
[ "python", "django", "django-rest-framework" ]
Searching two of the three characters in python
39,705,378
<pre><code> info=('x','y','z') info2=('x','Bob','y') match=False if any(all x in info for x in info2): match=True print("True") else: print("False") </code></pre> <p>Is there a way I can make it work so that it only prints <code>True</code> when <code>x</code> and either <code>y</code> or <code>z</code> are in <code>info2</code>?</p>
0
2016-09-26T14:20:31Z
39,705,467
<p>The way I read this you want the first element in <code>info</code> (<code>info[0]</code>), and at least one other element in <code>info</code> to be in <code>info2</code></p> <pre><code> if info[0] in info2 and any(i in info2 for i in info[1:]): # do stuff </code></pre>
2
2016-09-26T14:25:39Z
[ "python", "python-3.4" ]
python urllib2.urlopen returns error 500 while Chrome loads the page
39,705,579
<p>I have a web page (<a href="http://rating.chgk.info/api/tournaments/3506/" rel="nofollow">http://rating.chgk.info/api/tournaments/3506/</a>) I want to open in Python 2 via urllib2. It opens perfectly well in my browser, but when I do this:</p> <pre><code>import urllib2 url = 'http://rating.chgk.info/api/tournaments/3506/' urllib2.urlopen(url) </code></pre> <p>I get HTTP Error 500.</p> <p>I tried tweaking the User-Agent and Accept headers but nothing worked. What else could be the problem?</p>
0
2016-09-26T14:31:48Z
39,705,891
<p>You need to first visit the page on the site to get the session cookie set:</p> <pre><code>In [7]: import requests In [8]: requests.get("http://rating.chgk.info/api/tournaments/3506") Out[8]: &lt;Response [500]&gt; In [9]: with requests.Session() as session: ...: session.get("http://rating.chgk.info/index.php/api") ...: response = session.get("http://rating.chgk.info/api/tournaments/3506") ...: print(response.status_code) ...: 200 </code></pre>
1
2016-09-26T14:47:25Z
[ "python", "web-scraping", "urllib2" ]
How to get __lt__() to sort
39,705,660
<pre><code>class City: def __init__(self, string): self._string = string.split(',') self._name = self._string[0] self._state = self._string[1] self._latitude = self._string[2] self._longitude = self._string[3] self._location = [self._latitude, self._longitude] def name(self): return self._name def location(self): return self._location self.hand.sort(key=lambda x: x.longitude) def __lt__(self, other): if self._longitude &lt; other._longitude: return True if self._longitude &gt; other._longitude: return False if self._longitude == other._longitude: if self._latitude &lt; other._latitude: return True if self._latitude &gt; other._latitude: return False citystrings = ["Charleston,WV,38.35,81.63", "Charlotte,NC,35.23,80.83", "Cheyenne,WY,41.15,104.87", "Chicago,IL,41.83,87.62", "Cincinnati,OH,39.13,84.50", "Cleveland,OH,41.47,81.62", "Columbia,SC,34.00,81.03", "Columbus,OH,40.00,83.02", "Dallas,TX,32.77,96.77", "Denver,CO,39.75,105.00"] westtoeastnames = [ "Denver", "Cheyenne", "Dallas", "Chicago", "Cincinnati", "Columbus", "Charleston", "Cleveland", "Columbia", "Charlotte", ] cities = [City(s) for s in citystrings] cities.sort() sortednames = [c.name() for c in cities] print(sortednames) print(westtoeastnames) ['Cheyenne', 'Denver', 'Charlotte', 'Columbia', 'Cleveland', 'Charleston', 'Columbus', 'Cincinnati', 'Chicago', 'Dallas'] ['Denver', 'Cheyenne', 'Dallas', 'Chicago', 'Cincinnati', 'Columbus', 'Charleston', 'Cleveland', 'Columbia', 'Charlotte'] </code></pre> <p>This code tries to use <code>__lt__()</code> to sort the cities by how far they are west and the longitudes are based west of the prime meridian. I wrote an <code>__lt__()</code> method in the class but the <code>citystrings</code> won't sort to the correct order.</p>
0
2016-09-26T14:35:11Z
39,705,829
<p>You are comparing your longitude and latitude as <em>strings</em>, not numbers. They are therefor compared lexicographically, not numerically, so <code>'104'</code> will sort <em>before</em> <code>'80'</code> because <code>'1'</code> comes before <code>'8'</code> in the ASCII table (it doesn't matter what other characters follow).</p> <p>Convert your values to floating point numbers:</p> <pre><code>self._latitude = float(self._string[2]) self._longitude = float(self._string[3]) </code></pre> <p>Your comparison has a small bug; if both longitude and latitude match, you return <code>None</code> instead of <code>False</code>. You <em>may</em> want to test for equality and apply the <a href="https://docs.python.org/3/library/functools.html#functools.total_ordering" rel="nofollow"><code>@functools.total_ordering()</code> decorator</a> rather than assume that only <code>__lt__()</code> is called.</p> <p>Cleaning up the code a little (and removing the <code>name()</code> and <code>location()</code> methods, just use <code>name</code> and <code>location</code> attributes):</p> <pre><code>from functools import total_ordering @total_ordering class City: def __init__(self, string): self.name, self.state, lat, long = string.split(',') self.location = (self._latitude, self._longitude) = float(lat), float(long) def __lt__(self, other): if not isinstance(other, City): return NotImplemented # tuples defer ordering to the contents; compare them # in (longitude, latitude) order so that if longitude is # equal, the outcome is based on latitude. return self.location[::-1] &lt; other.location[::-1] def __eq__(self, other): if not isinstance(other, City): return NotImplemented return self.location == other.location </code></pre> <p>Note that <code>__lt__()</code> really only needs to compare <code>self.location</code>; tuple ordering takes care of the rest:</p> <pre><code>sortednames = [c.name for c in sorted(map(City, citystrings), reverse=True)] </code></pre> <p>Note the <code>reverse=True</code>; you want the larger values (further west from Greenwich) to be listed first.</p>
6
2016-09-26T14:43:33Z
[ "python", "python-3.x", "sorting" ]
Yielding multiple starting point requests with scrapy
39,705,692
<p>I am working with the following code ( simplified):</p> <pre><code>def parse(self, response): print('hello') for x in xrange(8): print x random_form_page = session.query(.... PR = Request( 'htp://my-api', headers=self.headers, meta={'newrequest': Request(random_form_page, headers=self.headers)}, callback=self.parse_PR ) yield PR </code></pre> <p>I want to loop through a db table and grab the starting page for each scrape (random_form_page), then yield a request for each start page. In my code I can see that although it loops through 8 times it only yields a request for the first start page. What am I doing wrong?</p>
0
2016-09-26T14:36:44Z
39,705,732
<p>You should be using <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.start_requests" rel="nofollow"><code>start_requests()</code> method</a> instead of <code>parse()</code>:</p> <pre><code>def start_requests(self): for x in xrange(8): random_form_page = session.query(.... PR = Request( 'htp://my-api', headers=self.headers, meta={'newrequest': Request(random_form_page, headers=self.headers)}, callback=self.parse_PR ) yield PR </code></pre> <p>You should also omit <code>start_urls</code> if it is set.</p>
2
2016-09-26T14:38:56Z
[ "python", "scrapy" ]
Checking user group membership in permission_required
39,705,707
<p>How would I check if a user is a part of a group inside the <code>permission_required</code> decorator?</p> <p>This is what I have currently but it doesn't seem to check it..</p> <pre><code>@permission_required(['user.is_super_user', "'NormalUser' in user.groups.all"], raise_exception=True) </code></pre> <p>This is supposed to check whether the user is a super user OR the user is part of the group <code>NormalUser</code> but when I try to access the site it just give me a 403 error when the user is part of the <code>NormalUser</code> group.</p> <p>Is there a way I can get this done? I only want to use <code>permission_required</code> decorator, nothing else :S</p>
0
2016-09-26T14:37:24Z
39,705,953
<p>You should use <a href="https://docs.djangoproject.com/en/1.10/topics/auth/default/#django.contrib.auth.decorators.user_passes_test" rel="nofollow"><code>user_passes_test</code></a> for this. The <code>permission_required</code> decorator checks permissions. It doesn't make sense to use it here.</p> <p>First, you need to define a test function that returns True if the user is a superuser or is in the <code>NormalUser</code> group:</p> <pre><code>def superuser_or_normaluser(user): return user.is_superuser or user.groups.filter(name='NormalUser').exists() </code></pre> <p>Then you can use that test function with the <code>user_passes_test</code> decorator.</p> <pre><code>@user_passes_test(superuser_or_normaluser, raise_exception=True) </code></pre>
1
2016-09-26T14:50:04Z
[ "python", "django", "python-2.7", "django-admin", "django-views" ]
Checking user group membership in permission_required
39,705,707
<p>How would I check if a user is a part of a group inside the <code>permission_required</code> decorator?</p> <p>This is what I have currently but it doesn't seem to check it..</p> <pre><code>@permission_required(['user.is_super_user', "'NormalUser' in user.groups.all"], raise_exception=True) </code></pre> <p>This is supposed to check whether the user is a super user OR the user is part of the group <code>NormalUser</code> but when I try to access the site it just give me a 403 error when the user is part of the <code>NormalUser</code> group.</p> <p>Is there a way I can get this done? I only want to use <code>permission_required</code> decorator, nothing else :S</p>
0
2016-09-26T14:37:24Z
39,706,004
<p>I don't believe you can pass a perm check like the latter in your list; that's essentially trying to execute some Python code, which won't work.</p> <p>It would be handy to understand what your use case actually is, but if you strictly only want to use the permission_required decorator (instead of very simply creating your own decorator that does precisely what you want, or using the 'user_passes_test' decorator which seems more appropriate), then I would suggest you <a href="https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#custom-permissions" rel="nofollow">add a custom permission</a> somewhere in your system, add that as a permission of the 'NormalUser' group that you have created, and then just test (using permission_required) for the presence of that permission.</p> <p>Your use case will determine where you create this new permission. Perhaps it's on a UserProfile-type object ...</p>
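<p>A rough sketch of that idea, using only <code>permission_required</code> as requested (all names here are made up; superusers pass <code>has_perm</code> checks automatically, so the "superuser OR NormalUser" requirement is still covered):</p>
<pre><code>from django.contrib.auth.decorators import permission_required
from django.contrib.auth.models import Group, Permission
from django.contrib.contenttypes.models import ContentType

# one-off setup (e.g. in a data migration): create the permission and give it to the group
content_type = ContentType.objects.get_for_model(Group)   # any model can anchor the permission
perm, _ = Permission.objects.get_or_create(
    codename='can_view_site',
    name='Can view the site',
    content_type=content_type,
)
Group.objects.get(name='NormalUser').permissions.add(perm)

# then the view only needs the decorator
@permission_required('auth.can_view_site', raise_exception=True)
def my_view(request):
    ...
</code></pre>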
1
2016-09-26T14:52:49Z
[ "python", "django", "python-2.7", "django-admin", "django-views" ]
Using third-party Python module that blocks Flask application
39,705,729
<p>My application that uses websockets also makes use of several third-party Python modules that appear to be written in a way that blocks the rest of the application when called. For example, I use <b>xlrd</b> to parse Excel files a user has uploaded.</p> <p>I've monkey-patched the builtins like this in the first lines of the application:</p> <pre><code>import os import eventlet if os.name == 'nt': eventlet.monkey_patch(os=False) else: eventlet.monkey_patch() </code></pre> <p>Then I use the following to start the task that contains calls to <b>xlrd</b>:</p> <pre><code>socketio.start_background_task(my_background_task) </code></pre> <p>What is the appropriate way to now call these other modules so that my application runs smoothly? Is using the <b>multiprocessing</b> module to start another process within the greened thread the right way?</p>
1
2016-09-26T14:38:46Z
39,723,635
<ul> <li>First you should try the thread pool [1].</li> <li>If that doesn't work as well as you want, please submit an issue [2] and go with multiprocessing as a workaround.</li> </ul> <p><code>eventlet.tpool.execute(xlrd_read, file_path, other=arg)</code></p> <blockquote> <p>Execute meth in a Python thread, blocking the current coroutine/ greenthread until the method completes.</p> <p>The primary use case for this is to wrap an object or module that is not amenable to monkeypatching or any of the other tricks that Eventlet uses to achieve cooperative yielding. With tpool, you can force such objects to cooperate with green threads by sticking them in native threads, at the cost of some overhead.</p> </blockquote> <p>[1] <a href="http://eventlet.net/doc/threading.html" rel="nofollow">http://eventlet.net/doc/threading.html</a></p> <p>[2] <a href="https://github.com/eventlet/eventlet/issues" rel="nofollow">https://github.com/eventlet/eventlet/issues</a></p>
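<p>As a rough sketch (file handling and the per-row callback are placeholders), the xlrd parsing from the question could be pushed into a native thread like this:</p>
<pre><code>import xlrd
from eventlet import tpool

def my_background_task(file_path):
    # xlrd.open_workbook does the heavy blocking work, so run it via tpool
    book = tpool.execute(xlrd.open_workbook, file_path)
    sheet = book.sheet_by_index(0)
    for row_idx in range(sheet.nrows):
        handle_row(sheet.row_values(row_idx))   # hypothetical per-row handler
</code></pre>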
0
2016-09-27T11:36:02Z
[ "python", "flask", "eventlet", "flask-socketio" ]
Iterate over Python list and convert Python date to human readable string
39,705,756
<p>I have a python list that stores IP addresses, hostnames and dates.</p> <p>I need to iterate over this list and convert the python dates to human readable dates.</p> <p>How can I iterate over the list and convert the dates to human readable strings?</p> <p>List:</p> <pre><code>{'_id': '192.168.1.5', '192.168.1.5', 'u_updated_timestamp': datetime.datetime(2016, 9, 1, 20, 27, 38, 364000), 'u_hostname': 'test.example.com'}, {'_id': '192.168.1.3', 'u_ipv4': '192.168.1.3', 'u_updated_timestamp': datetime.datetime(2016, 9, 2, 9, 40, 5, 347000), 'u_hostname': test.test.com}, {'_id': '192.168.1.8', 'u_ipv4': '192.168.1.8', 'u_updated_timestamp': datetime.datetime(2016, 9, 2, 13, 13, 5, 403000), 'u_hostname': hosttest.example.com} </code></pre>
-1
2016-09-26T14:40:24Z
39,705,821
<p>Simply iterate over the list and add a human-readable version of each datetime:</p> <pre><code>for row_id, row in enumerate(your_list): your_list[row_id]['human_readable_date'] = row['u_updated_timestamp'].strftime("specify-your-format") </code></pre> <p>You can find datetime formats <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">here</a>.</p>
1
2016-09-26T14:43:05Z
[ "python" ]
Iterate over Python list and convert Python date to human readable string
39,705,756
<p>I have a python list that stores IP addresses, hostnames and dates.</p> <p>I need to iterate over this list and convert the python dates to human readable dates.</p> <p>How can I iterate over the list and convert the dates to human readable strings?</p> <p>List:</p> <pre><code>{'_id': '192.168.1.5', '192.168.1.5', 'u_updated_timestamp': datetime.datetime(2016, 9, 1, 20, 27, 38, 364000), 'u_hostname': 'test.example.com'}, {'_id': '192.168.1.3', 'u_ipv4': '192.168.1.3', 'u_updated_timestamp': datetime.datetime(2016, 9, 2, 9, 40, 5, 347000), 'u_hostname': test.test.com}, {'_id': '192.168.1.8', 'u_ipv4': '192.168.1.8', 'u_updated_timestamp': datetime.datetime(2016, 9, 2, 13, 13, 5, 403000), 'u_hostname': hosttest.example.com} </code></pre>
-1
2016-09-26T14:40:24Z
39,705,861
<p>If those are real Python datetime.date objects you can simply call</p> <pre><code>date.isoformat() </code></pre> <p>This should return '2016-09-26' (today's date in ISO format). Your 'list' does not look like a valid list, though...</p>
0
2016-09-26T14:45:42Z
[ "python" ]
Scrapy - running spider from a python script
39,705,892
<p>I am trying to run scrapy from a <code>python</code> script according to documentation <a href="http://scrapy.readthedocs.io/en/0.16/topics/practices.html" rel="nofollow">http://scrapy.readthedocs.io/en/0.16/topics/practices.html</a></p> <pre><code>def CrawlTest(): spider = PitchforkSpider(domain='"pitchfork.com"') crawler = Crawler(Settings()) crawler.configure() crawler.crawl(spider) crawler.start() log.start() reactor.run() # the script will block here </code></pre> <p>but when I run it, I get the following error:</p> <pre><code>AttributeError: 'Settings' object has no attribute 'update_settings' </code></pre> <p>has something been deprecated? what is wrong here?</p> <p>my version is <code>Scrapy 1.1.2</code></p>
0
2016-09-26T14:47:26Z
39,705,942
<p>You are looking at Scrapy 0.16 docs but using Scrapy 1.1.2. </p> <p>Here is the <a href="http://scrapy.readthedocs.io/en/latest/topics/practices.html#run-scrapy-from-a-script" rel="nofollow">correct documentation page</a>.</p> <p>FYI, you should now be using <code>CrawlerProcess</code> or <code>CrawlerRunner</code>.</p>
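<p>For example, a minimal Scrapy 1.1-style version of the script in the question might look like this (the spider class and its <code>domain</code> argument are taken from the question; the rest is a sketch):</p>
<pre><code>from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl(PitchforkSpider, domain='pitchfork.com')
process.start()   # blocks here until the crawl is finished
</code></pre>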
0
2016-09-26T14:49:38Z
[ "python", "web-scraping", "scrapy" ]
CrawlerProcess vs CrawlerRunner
39,706,005
<p><a href="http://scrapy.readthedocs.io/en/latest/topics/practices.html#run-scrapy-from-a-script" rel="nofollow">Scrapy 1.x documentation</a> explains that there are two ways to <em>run a Scrapy spider from a script</em>:</p> <ul> <li>using <a href="http://scrapy.readthedocs.io/en/latest/topics/api.html#scrapy.crawler.CrawlerProcess" rel="nofollow"><code>CrawlerProcess</code></a></li> <li>using <a href="http://scrapy.readthedocs.io/en/latest/topics/api.html#scrapy.crawler.CrawlerRunner" rel="nofollow"><code>CrawlerRunner</code></a></li> </ul> <p>What is the difference between the two? When should I use "process" and when "runner"?</p>
3
2016-09-26T14:52:58Z
39,706,311
<p>CrawlerRunner:</p> <blockquote> <p>This class shouldn’t be needed (since Scrapy is responsible of using it accordingly) unless writing scripts that manually handle the crawling process. See Run Scrapy from a script for an example.</p> </blockquote> <p>CrawlerProcess:</p> <blockquote> <p>This utility should be a better fit than CrawlerRunner if you aren’t running another Twisted reactor within your application.</p> </blockquote> <p>It sounds like the CrawlerProcess is what you want unless you're adding your crawlers to an existing Twisted application.</p>
0
2016-09-26T15:07:40Z
[ "python", "web-scraping", "scrapy" ]
CrawlerProcess vs CrawlerRunner
39,706,005
<p><a href="http://scrapy.readthedocs.io/en/latest/topics/practices.html#run-scrapy-from-a-script" rel="nofollow">Scrapy 1.x documentation</a> explains that there are two ways to <em>run a Scrapy spider from a script</em>:</p> <ul> <li>using <a href="http://scrapy.readthedocs.io/en/latest/topics/api.html#scrapy.crawler.CrawlerProcess" rel="nofollow"><code>CrawlerProcess</code></a></li> <li>using <a href="http://scrapy.readthedocs.io/en/latest/topics/api.html#scrapy.crawler.CrawlerRunner" rel="nofollow"><code>CrawlerRunner</code></a></li> </ul> <p>What is the difference between the two? When should I use "process" and when "runner"?</p>
3
2016-09-26T14:52:58Z
39,708,960
<p>Scrapy's documentation does a pretty bad job at giving examples on real applications of both.</p> <p><code>CrawlerProcess</code> assumes that scrapy is the only thing that is going to use twisted's reactor. If you are using threads in python to run other code this isn't always true. Let's take this as an example.</p> <pre><code>from scrapy.crawler import CrawlerProcess import scrapy def notThreadSafe(x): """do something that isn't thread-safe""" # ... class MySpider1(scrapy.Spider): # Your first spider definition ... class MySpider2(scrapy.Spider): # Your second spider definition ... process = CrawlerProcess() process.crawl(MySpider1) process.crawl(MySpider2) process.start() # the script will block here until all crawling jobs are finished notThreadSafe(3) # it will get executed when the crawlers stop </code></pre> <p>Now, as you can see, the function will only get executed when the crawlers stop, what if I want the function to be executed while the crawlers crawl in the same reactor?</p> <pre><code>from twisted.internet import reactor from scrapy.crawler import CrawlerRunner import scrapy def notThreadSafe(x): """do something that isn't thread-safe""" # ... class MySpider1(scrapy.Spider): # Your first spider definition ... class MySpider2(scrapy.Spider): # Your second spider definition ... runner = CrawlerRunner() runner.crawl(MySpider1) runner.crawl(MySpider2) d = runner.join() d.addBoth(lambda _: reactor.stop()) reactor.callFromThread(notThreadSafe, 3) reactor.run() #it will run both crawlers and code inside the function </code></pre> <p>The Runner class is not limited to this functionality, you may want some custom settings on your reactor (defer, threads, getPage, custom error reporting, etc)</p>
2
2016-09-26T17:31:26Z
[ "python", "web-scraping", "scrapy" ]
input output floating number issue python
39,706,011
<p>Python seems to do funny things to floating point numbers, it produces different floating point numbers from the input i give it, i would like the floating numbers to stay the same as the input. </p> <p>here i have a small test dataset:</p> <pre><code>import pandas as pd df = {'ID': ['H1','H2','H3','H4','H5','H6'], 'Length': [72, 72, '', 72, 72,'' ], 'AA1': ['C','C','C','C','C','C'], 'AA2': ['W','W','W','W','W','W'], 'Freq': [0.14532872, 0.141868512,0.138408304, 0.14532872,0.138408304, 0.138408304 ], 'M': [-282.0570386,-279.1090993,-276.16116,-282.0570386,-274.7748657,-274.6160337]} df = pd.DataFrame(df) </code></pre> <p>it is <strong>supposed</strong> to look like : </p> <pre><code> df Out[2]: AA1 AA2 Freq ID Length M 0 C W 0.14532872 H1 72 -282.0570386 1 C W 0.141868512 H2 72 -279.1090993 2 C W 0.138408304 H3 -276.16116 3 C W 0.14532872 H4 72 -282.0570386 4 C W 0.138408304 H5 72 -274.7748657 5 C W 0.138408304 H6 -274.6160337 </code></pre> <p>but it actually looks like this, <strong>notice the difference in floating numbers in columns 'Freq' and 'M':</strong></p> <pre><code>df Out[2]: AA1 AA2 Freq ID Length M 0 C W 0.145329 H1 72 -282.057039 1 C W 0.141869 H2 72 -279.109099 2 C W 0.138408 H3 -276.161160 3 C W 0.145329 H4 72 -282.057039 4 C W 0.138408 H5 72 -274.774866 5 C W 0.138408 H6 -274.616034 </code></pre> <p>And when i run my script to just simply filter out the rows i don't want:</p> <pre><code>import pandas as pd df = pd.read_csv('test.txt', sep='\t' ) df2 = df[(df['Length'] != 0 ) &amp; (df['AA1'] == 'C')&amp; (df['AA2']== 'C')] df2.to_csv('results.txt', sep = '\t', index=False) </code></pre> <p>the 'results.txt' file contains weird floating numbers that's not the same as the input, there must be a way to output the floating numbers as the input, but i couldn't find related topics online. </p>
1
2016-09-26T14:53:22Z
39,706,083
<p>Floats are weird: <a href="https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf" rel="nofollow">https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf</a></p> <p>It's not uncommon to see peculiar looking behavior out of them. If you're not doing any calculations on them, I suggest converting them to strings first, so they're stored in the format you want. </p>
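<p>One way to follow that advice with pandas, as a sketch based on the script in the question: read the two numeric columns as strings so they round-trip unchanged (alternatively, <code>to_csv</code> has a <code>float_format</code> argument if you do need them as numbers):</p>
<pre><code>import pandas as pd

# read Freq and M as plain strings so they are written back exactly as they appear
df = pd.read_csv('test.txt', sep='\t', dtype={'Freq': str, 'M': str})
df2 = df[(df['Length'] != 0) &amp; (df['AA1'] == 'C') &amp; (df['AA2'] == 'C')]
df2.to_csv('results.txt', sep='\t', index=False)
</code></pre>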
0
2016-09-26T14:57:01Z
[ "python", "floating-point" ]
Database query count
39,706,074
<p>I am trying to create a program that will iterate through a database to match a query for "_75" and set the count to 0 when that match is made. For every record in the database that does not match the query I would like it to accumulate a negative 1 count. No matter how I write the code, I get either the total record count or the total records that match the query. </p> <pre><code>import sqlite3 conn = sqlite3.connect('p34.db') c = conn.cursor() q = "SELECT * FROM 'Pick 3'" z = "SELECT * FROM 'Pick 3' WHERE Number LIKE '_75' ORDER BY Draw DESC;" c.execute (q) rez = c.fetchall() count = 0 for row in rez: if row == rez: count = 0 else: count = count -1 print (count) conn.close() </code></pre> <p>My example of the desired results for the query _75:</p> <pre><code>record 1 075 count 0 record 2 223 count -1 record 3 228 count -2 record 4 323 count -3 record 5 275 count 0 record 6 888 count -1 </code></pre>
-1
2016-09-26T14:56:43Z
39,706,677
<p>Change your query to return a count rather than counting in the Python code. For example:</p> <pre><code>SELECT COUNT(CustomerID) AS OrdersFromCustomerID7 FROM Orders WHERE CustomerID=7; </code></pre> <p><a href="http://www.w3schools.com/sql/sql_func_count.asp" rel="nofollow">Source</a></p> <p>Run one query to count the number of rows where the field is LIKE '_75' and another where it is NOT LIKE '_75', then take one number from the other. Or just count how many entries are in the table.</p>
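<p>Applied to the table from the question, that approach might look like this sketch (letting SQLite do the counting):</p>
<pre><code>import sqlite3

conn = sqlite3.connect('p34.db')
c = conn.cursor()
c.execute("SELECT COUNT(*) FROM 'Pick 3' WHERE Number LIKE '_75'")
matches = c.fetchone()[0]
c.execute("SELECT COUNT(*) FROM 'Pick 3'")
total = c.fetchone()[0]
print(matches, total - matches)   # matching rows vs. everything else
conn.close()
</code></pre>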
0
2016-09-26T15:24:58Z
[ "python", "python-3.x", "sqlite3" ]
Return 400 when login to a website using python requests
39,706,134
<p>I am trying to write a python script to login to a website using the requests library. This is the login form.</p> <pre><code>&lt;form action="/login" method="POST"&gt;&lt;input type="hidden" name="post_key" value="b762c617d52cf987fdb40d74c6a04e07"&gt;&lt;input type="hidden" name="return_to" value="http://www.pixiv.net/"&gt;&lt;input type="hidden" name="lang" value="en"&gt;&lt;input type="hidden" name="source" value="pc"&gt;&lt;div class="input-field-group"&gt;&lt;div class="input-field"&gt;&lt;input type="text" name="pixiv_id" placeholder="E-mail address / pixiv ID" autocapitalize="off"&gt;&lt;/div&gt;&lt;div class="input-field"&gt;&lt;input type="password" name="password" placeholder="Password" autocapitalize="off"&gt; </code></pre> <p>This is my code.</p> <pre><code>import requests url = "https://accounts.pixiv.net/login" # set requests headers headers = { 'Connection':'keep-alive', 'User-agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36', 'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8' } # get user id and password pixiv_id = raw_input("Your pixiv id : ") password = raw_input("Your pixiv password: ") payload = { 'action' : '/login', 'return_to' : 'http://www.pixiv.net' } payload['pixiv_id']=pixiv_id payload['password']=password with requests.Session() as s: r = s.post(url, data=payload, headers=headers) response = s.get("http://www.pixiv.net") print r.status_code print response.text </code></pre> <p>My question is, should I fill in all the hidden value in the form? Also, I have run it for many times however it always return 400. Could anyone help me figuring out the problem of my code? </p>
1
2016-09-26T14:58:51Z
39,706,266
<p>When I log in and look into the browser developer tools I see many more POST request parameters being sent after clicking "log in":</p> <p><a href="http://i.stack.imgur.com/dmaNZm.png" rel="nofollow"><img src="http://i.stack.imgur.com/dmaNZm.png" alt="enter image description here"></a> </p> <p><code>requests</code> would send only what you explicitly tell it to send - meaning, you should also send all the hidden form parameters. It might involve HTML parsing - you can use, for example, <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow"><code>BeautifulSoup</code></a> for it.</p> <p>Or, you can use tools like <a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow"><code>mechanize</code></a>, <a href="https://github.com/hickford/MechanicalSoup" rel="nofollow"><code>mechanicalsoup</code></a> or <a href="https://github.com/jmcarp/robobrowser" rel="nofollow"><code>robobrowser</code></a> which would auto-discover and send the hidden attributes of a form.</p>
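<p>A hedged sketch of the "parse the hidden fields first, then post them back" approach with <code>requests</code> + <code>BeautifulSoup</code> (the selector and the post URL are assumptions and may need adjusting for the real login page):</p>
<pre><code>import requests
from bs4 import BeautifulSoup

login_url = 'https://accounts.pixiv.net/login'

with requests.Session() as s:
    soup = BeautifulSoup(s.get(login_url).text, 'html.parser')
    # collect every hidden input of the login form, e.g. post_key, return_to, lang, source
    payload = {inp['name']: inp.get('value', '')
               for inp in soup.select('form input[type=hidden]')}
    payload['pixiv_id'] = 'your-id'
    payload['password'] = 'your-password'
    r = s.post(login_url, data=payload)
    print(r.status_code)
</code></pre>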
1
2016-09-26T15:05:41Z
[ "python", "http", "login", "python-requests" ]
I can send emails, but not replies with smtplib and GMail
39,706,158
<p>I'm making a program that replies automatically to emails on my GMail account. I can send mails just fine, but I can't seem to reply to them. I'm using <code>smtplib</code>.</p> <p>This is the code I use for sending plain mails (suppose that <code>foobar@gmail.com</code> is my personal email):</p> <pre><code># params contains the header data of the original email. print smtpserver.sendmail( "Name Surname &lt;foobar@gmail.com&gt;", str(params["From"]), msg ) </code></pre> <p>This is what I use to send replies:</p> <pre><code>print smtpserver.sendmail( "Giul Mus &lt;giul.mus@gmail.com&gt;", str(params["From"]), msg, { "In-Reply-To": params["Message-ID"], "Message-ID": email.utils.make_msgid(), "References": params["Message-ID"], "Subject": "Re: " + params["Subject"] } ) </code></pre> <p>The former works correctly, and I can see the mail it sent in my mailbox; however, the latter fails with this stack trace:</p> <pre><code>Traceback (most recent call last): File "imap.py", line 65, in &lt;module&gt; imapprocess(imapdata[0].split(" ")) File "imap.py", line 55, in imapprocess raise e smtplib.SMTPSenderRefused: (555, '5.5.2 Syntax error. o2sm22774327wjo.3 - gsmtp', 'Name Surname &lt;foobar@gmail.com&gt;') </code></pre> <p>Why does this happen? I saw <a href="http://stackoverflow.com/questions/4321346/555-5-5-2-syntax-error-gmails-smtp">this</a>, question, but it wasn't of any help (I tried sending it from <code>"Foo Bar &lt;foobar@gmail.com&gt;"</code>, <code>"&lt;foobar@gmail.com&gt;"</code>, or to <code>"&lt;hardcoded-address@gmail.com&gt;"</code>, but none of these worked).</p>
0
2016-09-26T15:00:31Z
39,707,698
<p>The options are not to be passed as an argument, but for whatever reason they actually belong in the message. Here is an example:</p> <pre><code>msg = MIMEMultipart("mixed") body = MIMEMultipart("alternative") body.attach(MIMEText(text, "plain")) body.attach(MIMEText("&lt;html&gt;" + text + "&lt;/html&gt;", "html")) msg.attach(body) msg["In-Reply-To"] = params["Message-ID"] msg["Message-ID"] = email.utils.make_msgid() msg["References"] = params["Message-ID"] msg["Subject"] = "Re: " + params["Subject"] destination = msg["To"] = params["Reply-To"] or params["From"] smtpserver.sendmail( "&lt;foobar@gmail.com&gt;", destination, msg.as_string() ) </code></pre>
0
2016-09-26T16:18:27Z
[ "python", "email", "smtp", "gmail" ]
i'm trying to make a 2 players racing game with pygame and event.key don't work
39,706,164
<p>I'm trying to make a 2-player racing game, and when I try to add the <code>event.key == pygame.K_LEFT</code> it says <code>AttributeError: 'Event' object has no attribute 'key'</code>.</p> <p>I have tried many things as adding <code>()</code>, but nothing fixed it and I have no clue about it.</p> <p>Code:</p> <pre><code>import pygame pygame.init() display_width = 1280 display_height = 720 black = (0, 0, 0) white = (255, 255, 255) red = (255, 0, 0) green = (0, 255, 0) blue = (0, 0, 255) gameDisplay = pygame.display.set_mode((display_width, display_height)) pygame.display.set_caption('U-race multiplayer') clock = pygame.time.Clock() car1 = pygame.image.load('car1.png') def carone(xone, yone): gameDisplay.blit(car1,(xone, yone)) xone = (display_width * 0.48) yone = (display_height * 0.8) xone_change = 0 crashed = False while not crashed: for event in pygame.event.get(): if event.type == pygame.QUIT: crashed = True if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: xone_change = -5 elif event.key == pygame.K_RIGHT: xone_change = 5 if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT: xone_change = 0 </code></pre> <p>error message:</p> <pre><code> RESTART: C:\Users\Osamas\Desktop\U-racing multiplayer\U-racing multiplayer.py Traceback (most recent call last): File "C:\Users\Osamas\Desktop\U-racing multiplayer\U-racing multiplayer.py", line 44, in &lt;module&gt; elif event.key == pygame.K_RIGHT: AttributeError: 'Event' object has no attribute 'key' </code></pre>
3
2016-09-26T15:00:42Z
39,706,264
<pre><code>if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: xone_change = -5 elif event.key == pygame.K_RIGHT: #indented to be part of the keydown events xone_change = 5 </code></pre> <p>You want this. Your indentation was wrong. </p> <p>You had <code>elif event.key == pygame.K_RIGHT:</code> checking with your event.type which doesn't make sense if you think about it.</p>
1
2016-09-26T15:05:35Z
[ "python", "pygame" ]
i'm trying to make a 2 players racing game with pygame and event.key don't work
39,706,164
<p>I'm trying to make a 2-player racing game, and when I try to add the <code>event.key == pygame.K_LEFT</code> it says <code>AttributeError: 'Event' object has no attribute 'key'</code>.</p> <p>I have tried many things as adding <code>()</code>, but nothing fixed it and I have no clue about it.</p> <p>Code:</p> <pre><code>import pygame pygame.init() display_width = 1280 display_height = 720 black = (0, 0, 0) white = (255, 255, 255) red = (255, 0, 0) green = (0, 255, 0) blue = (0, 0, 255) gameDisplay = pygame.display.set_mode((display_width, display_height)) pygame.display.set_caption('U-race multiplayer') clock = pygame.time.Clock() car1 = pygame.image.load('car1.png') def carone(xone, yone): gameDisplay.blit(car1,(xone, yone)) xone = (display_width * 0.48) yone = (display_height * 0.8) xone_change = 0 crashed = False while not crashed: for event in pygame.event.get(): if event.type == pygame.QUIT: crashed = True if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: xone_change = -5 elif event.key == pygame.K_RIGHT: xone_change = 5 if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT: xone_change = 0 </code></pre> <p>error message:</p> <pre><code> RESTART: C:\Users\Osamas\Desktop\U-racing multiplayer\U-racing multiplayer.py Traceback (most recent call last): File "C:\Users\Osamas\Desktop\U-racing multiplayer\U-racing multiplayer.py", line 44, in &lt;module&gt; elif event.key == pygame.K_RIGHT: AttributeError: 'Event' object has no attribute 'key' </code></pre>
3
2016-09-26T15:00:42Z
39,712,446
<p>The reason why you're getting an error is because different events have different attributes. You can check which event has which attributes at the <a class='doc-link' href="http://stackoverflow.com/documentation/pygame/5110/event-handling/18046/event-loop#t=201609262052092301037">Stackoverflow pygame documentation</a> or at the official <a href="http://www.pygame.org/docs/ref/event.html" rel="nofollow">pygame docs</a>.</p> <p>Let's take an example between two types of events:</p> <ol> <li>QUIT event types has no attributes.</li> <li>KEYDOWN event types has attributes <em>unicode</em>, <em>key</em> and <em>mod</em>.</li> </ol> <p>This means that we cannot check for the <em>key</em> attribute in an event until we've made sure that the event is of type <code>KEYDOWN</code> (or <code>KEYUP</code>).</p> <p>In your code you have:</p> <pre><code>for event in pygame.event.get(): if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: xone_change = -5 elif event.key == pygame.K_RIGHT: xone_change = 5 </code></pre> <p>If the event type isn't <code>KEYDOWN</code> then you're checking with the event's <em>key</em> attribute, which will raise an attribute error if the event is of type <code>QUIT</code>, <code>MOUSEBUTTONDOWN</code> or any other event that isn't <code>KEYDOWN</code> or <code>KEYUP</code>.</p> <p>To correct this you'll do as MooingRawr answered and make sure that you've indented the two last lines inside the first if-statement. Just thought I'd give an answer to why it's wrong.</p>
1
2016-09-26T21:10:26Z
[ "python", "pygame" ]
Update a null values in countrycode column in a data frame by matching substring of country name using python
39,706,194
<p>I have two data frames: Disaster and CountryInfo. Disaster has a country code column which has some null values, for example:</p> <p>Disaster:</p> <pre><code> 1.**Country** - **Country_code** 2.India - Null 3.Afghanistan (the) - AFD 4.India - IND 5.United States of America - Null </code></pre> <p>CountryInfo:</p> <pre><code>0.**CountryName** - **ISO** 1.India - IND 2.Afganistan - AFD 3.United States - US </code></pre> <p>Expected result:</p> <pre><code> Country Country_code 0 India IND 1 Afghanistan AFD 2 India IND 3 United States US </code></pre> <p>I need to fill in the country code by matching a substring of the country name. Can anyone suggest a solution for this?</p>
0
2016-09-26T15:02:17Z
39,709,167
<p>This should do it. You need to change the column names with <code>rename</code> so that both <code>dataframes</code> have the same column names. Then, the <code>difflib</code> module and its <code>get_close_matches</code> method can be used to do a fuzzy match and replace of <code>Country</code> names. Then it is a simple matter of merging the <code>dataframes</code></p> <pre><code>import pandas as pd import numpy as np import difflib df1 = pd.DataFrame({'Country' : ['India', 'Afghanistan', 'India', 'United States of America'], 'Country_code' : ['Null', 'AFD', 'IND', 'Null']}) df1 Country Country_code 0 India Null 1 Afghanistan AFD 2 India IND 3 United States of America Null df2 = pd.DataFrame({'Country' : ['India', 'Afghanistan', 'India', 'United States'], 'ISO' : ['IND', 'AFD', 'IND', 'USA']}) df2 Country ISO 0 India IND 1 Afghanistan AFD 2 India IND 3 United States USA df2.rename(columns={'ISO' : 'Country_code'}, inplace=True) df2 Country Country_code 0 India IND 1 Afghanistan AFD 2 India IND 3 United States USA </code></pre> <p>The following code will change the <code>Country</code> column in <code>df2</code> with the names in the <code>Country</code> column in <code>df1</code> that provide the closest match. This is a way of performing a kind of "fuzzy join" on the substrings.</p> <pre><code>df1['Country'] = df1.Country.map(lambda x: difflib.get_close_matches(x, df2.Country)[0]) df1 Country Country_code 0 India Null 1 Afghanistan AFD 2 India IND 3 United States Null </code></pre> <p>Now you can simply <code>merge</code> the <code>dataframes</code>, which will update missing <code>Country_code</code> rows in <code>df1</code>.</p> <pre><code>df1.merge(df2, how='right', on=['Country', 'Country_code']) Country Country_code 0 Afghanistan AFD 1 India IND 2 India IND 3 United States USA </code></pre>
0
2016-09-26T17:44:01Z
[ "python", "python-2.7", "pandas" ]
Unable to create disk from snapshot using google cloud engine API
39,706,233
<p>I'm trying to create a disk from a snapshot using the Google Cloud Python API:</p> <pre><code>def createDisk(compute, project, zone): config = { 'name': disk_name } return compute.disks().insert( project=project, zone=zone, sourceSnapshot='global/snapshots/' + snap_name, body=config).execute() </code></pre> <p>But it throws:</p> <pre><code>TypeError: Got an unexpected keyword argument "sourceSnapshot" </code></pre> <p>According to the <a href="https://cloud.google.com/compute/docs/reference/latest/disks/insert" rel="nofollow">docs</a> it should be possible:</p> <blockquote> <p>Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property.</p> </blockquote> <p>I need it to automate image creation, which I want to base on a 'dummy' instance. The image should then be used to create a disk, which in turn will be used in the instance template for auto scaling.</p> <p>Any tips on that one? Is it possible? If not, isn't the <code>sourceSnapshot</code> reference in the docs really misleading?</p> <p>Thank you in advance.</p>
0
2016-09-26T15:04:25Z
39,851,976
<p>It turned out that <code>sourceSnapshot</code> should be part of the request body, not a keyword argument. So this works:</p> <pre><code>def createDisk(compute, project, zone): config = { 'name': disk_name, 'sourceSnapshot': 'global/snapshots/' + snap_name, } return compute.disks().insert( project=project, zone=zone, body=config).execute() </code></pre>
0
2016-10-04T12:11:15Z
[ "python", "google-cloud-platform" ]
P value for Normality test very small despite normal histogram
39,706,242
<p>I've looked over the normality tests in scipy stats for both <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.normaltest.html" rel="nofollow">scipy.stats.mstats.normaltest</a> as well as <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html" rel="nofollow">scipy.stats.shapiro</a> and it looks like they both assume the null hypothesis is that the data they're given is normal.</p> <p>Ie, a p value less than .05 would indicate that they're not normal. </p> <p>I'm doing a regression with LassoCV in SKLearn, and in order to give myself better results I log transformed the answers, which gives a histogram that looks like this:</p> <p><a href="http://i.stack.imgur.com/poo2E.png" rel="nofollow"><img src="http://i.stack.imgur.com/poo2E.png" alt="Histogram of data"></a></p> <p>Looks normal to me. </p> <p>However, when I run the data through either of the two tests mentioned above I get very small p values that would indicate the data is not normal, and in a big way.</p> <p>This is what I get when I use scipy.stats.shapiro</p> <pre><code>scipy.stats.shapiro(y) Out[69]: (0.9919402003288269, 3.8889791653673456e-07) </code></pre> <p>And I get this when I run scipy.stats.mstats.normaltest:</p> <pre><code>scipy.stats.mstats.normaltest(y) NormaltestResult(statistic=25.755128535282189, pvalue=2.5547293546709236e-06) </code></pre> <p>It seems implausible to me that my data would test out as being so far from normality with the histogram it has.</p> <p>Is there something causing this discrepancy, or am I not interpreting the results correctly?</p>
2
2016-09-26T15:04:45Z
39,717,961
<p>If the numbers on the vertical axis are the number of observations for the respective class, then the sample size is about 1500. For such a large sample size, goodness-of-fit tests are rarely useful. But is it really necessary that your data is perfectly normally distributed? If you want to analyze the data with a statistical method, is this method maybe robust under ("small") deviations from the normal distribution assumption? In practice the question is usually "Is the normal distribution assumption acceptable for my statistical analysis?". A perfect normal distribution is very rarely available. An additional comment on histograms: one has to be careful when interpreting data from histograms, because whether the data "looks normal" may depend on the width of the histogram classes. Histograms are only hints which should be treated with caution.</p>
0
2016-09-27T06:55:31Z
[ "python", "numpy", "scipy", "statistics" ]
Custom list of lists into dictionary
39,706,256
<p>I have list of lists and I wish to create a dictionary with length of each element as values. I tried the following:</p> <pre><code>tmp = [['A', 'B', 'E'], ['B', 'E', 'F'], ['A', 'G']] tab = [] for line in tmp: tab.append(dict((k, len(tmp)) for k in line)) </code></pre> <p>But it gives the output as:</p> <pre><code>[{'A': 3, 'B': 3, 'E': 3}, {'B': 3, 'E': 3, 'F': 3}, {'A': 3, 'G': 3}] </code></pre> <p>What is the modification that I should make to get the output:</p> <pre><code>{['A', 'B', 'E']:3, ['B', 'E', 'F']:3, ['A', 'G']:2} </code></pre> <p>Thanks in advance.</p> <p>AP</p>
0
2016-09-26T15:05:11Z
39,706,326
<p>You can't use <code>list</code> objects as dictionary keys, they are mutable and unhashable. You can convert them to tuple. Also note that you are looping over each sub list. You can use a generator expression by only looping over the main list:</p> <pre><code>In [3]: dict((tuple(sub), len(sub)) for sub in tmp) Out[3]: {('A', 'B', 'E'): 3, ('A', 'G'): 2, ('B', 'E', 'F'): 3} </code></pre>
3
2016-09-26T15:08:15Z
[ "python", "python-2.7", "dictionary" ]
Custom list of lists into dictionary
39,706,256
<p>I have list of lists and I wish to create a dictionary with length of each element as values. I tried the following:</p> <pre><code>tmp = [['A', 'B', 'E'], ['B', 'E', 'F'], ['A', 'G']] tab = [] for line in tmp: tab.append(dict((k, len(tmp)) for k in line)) </code></pre> <p>But it gives the output as:</p> <pre><code>[{'A': 3, 'B': 3, 'E': 3}, {'B': 3, 'E': 3, 'F': 3}, {'A': 3, 'G': 3}] </code></pre> <p>What is the modification that I should make to get the output:</p> <pre><code>{['A', 'B', 'E']:3, ['B', 'E', 'F']:3, ['A', 'G']:2} </code></pre> <p>Thanks in advance.</p> <p>AP</p>
0
2016-09-26T15:05:11Z
39,706,482
<pre><code>{tuple(t):len(t) for t in tmp} </code></pre> <p>Input:</p> <pre><code>[['A', 'B', 'E'], ['B', 'E', 'F'], ['A', 'G']] </code></pre> <p>Output:</p> <pre><code>{('A', 'G'): 2, ('A', 'B', 'E'): 3, ('B', 'E', 'F'): 3} </code></pre> <p>A dictionary does not accept a list as a key, but it does accept a tuple.</p>
1
2016-09-26T15:15:04Z
[ "python", "python-2.7", "dictionary" ]
Numpy: Why is difference of a (2,1) array and a vertical matrix slice not a (2,1) array
39,706,277
<p>Consider the following code:</p> <pre><code>&gt;&gt;x=np.array([1,3]).reshape(2,1) array([[1], [3]]) &gt;&gt;M=np.array([[1,2],[3,4]]) array([[1, 2], [3, 4]]) &gt;&gt;y=M[:,0] &gt;&gt;x-y array([[ 0, 2], [-2, 0]]) </code></pre> <p>I would intuitively feel this should give a (2,1) vector of zeros.</p> <p>I am not saying, however, that this is how it should be done and everything else is stupid. I would simply love if someone could offer some logic that I can remember so things like this don't keep producing bugs in my code.</p> <p>Note that I am not asking how I can achieve what I want (I could reshape y), but I am hoping to get some deeper understanding of why Python/Numpy works as it does. Maybe I am doing something conceptually wrong?</p>
1
2016-09-26T15:06:06Z
39,706,689
<p>Look at the shape of <code>y</code>. It is <code>(2,)</code>; 1d. The source array is (2,2), but you are selecting one column. <code>M[:,0]</code> not only selects the column, but removes that singleton dimension.</p> <p>So we have for the 2 operations, this change in shape:</p> <pre><code>M[:,0]: (2,2) =&gt; (2,) x - y: (2,1) (2,) =&gt; (2,1), (1,2) =&gt; (2,2) </code></pre> <p>There are various ways of ensuring that <code>y</code> has the shape (2,1). Index with a list/vector, <code>M[:,[0]]</code>; index with a slice, <code>M[:,:1]</code>. Add a dimension, <code>M[:,0,None]</code>.</p> <p>Think also what happens when <code>M[0,:]</code> or <code>M[0,0]</code>.</p>
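<p>A quick interactive check of those options with the arrays from the question (expected output shown in comments):</p>
<pre><code>import numpy as np

x = np.array([1, 3]).reshape(2, 1)
M = np.array([[1, 2], [3, 4]])

print(M[:, 0].shape)        # (2,)  broadcasts against (2, 1) to give a (2, 2) result
print(M[:, [0]].shape)      # (2, 1)
print(M[:, :1].shape)       # (2, 1)
print(M[:, 0, None].shape)  # (2, 1)
print(x - M[:, [0]])        # [[0] [0]]
</code></pre>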
1
2016-09-26T15:25:42Z
[ "python", "arrays", "numpy" ]
Numpy: Why is difference of a (2,1) array and a vertical matrix slice not a (2,1) array
39,706,277
<p>Consider the following code:</p> <pre><code>&gt;&gt;x=np.array([1,3]).reshape(2,1) array([[1], [3]]) &gt;&gt;M=np.array([[1,2],[3,4]]) array([[1, 2], [3, 4]]) &gt;&gt;y=M[:,0] &gt;&gt;x-y array([[ 0, 2], [-2, 0]]) </code></pre> <p>I would intuitively feel this should give a (2,1) vector of zeros.</p> <p>I am not saying, however, that this is how it should be done and everything else is stupid. I would simply love if someone could offer some logic that I can remember so things like this don't keep producing bugs in my code.</p> <p>Note that I am not asking how I can achieve what I want (I could reshape y), but I am hoping to get some deeper understanding of why Python/Numpy works as it does. Maybe I am doing something conceptually wrong?</p>
1
2016-09-26T15:06:06Z
39,707,276
<p><code>numpy.array</code> indexes such that a single value in any position collapses that dimension, while slicing retains it, even if the slice is only one element wide. This is completely consistent, for any number of dimensions:</p> <pre><code>&gt;&gt; A = numpy.arange(27).reshape(3, 3, 3) &gt;&gt; A[0, 0, 0].shape () &gt;&gt; A[:, 0, 0].shape (3,) &gt;&gt; A[:, :, 0].shape (3, 3) &gt;&gt; A[:1, :1, :1].shape (1, 1, 1) </code></pre> <p>Notice that every time a single number is used, that dimension is dropped.</p> <p>You can obtain the semantics you expect by using <code>numpy.matrix</code>, where two single indexes return a order 0 array and all other types of indexing return matrices</p> <pre><code>&gt;&gt; M = numpy.asmatrix(numpy.arange(9).reshape(3, 3)) &gt;&gt; M[0, 0].shape () &gt;&gt; M[:, 0].shape # This is different from the array (3, 1) &gt;&gt; M[:1, :1].shape (1, 1) </code></pre> <p>Your example works as you expect when you use <code>numpy.matrix</code>:</p> <pre><code>&gt;&gt; x = numpy.matrix([[1],[3]]) &gt;&gt; M = numpy.matrix([[1,2],[3,4]]) &gt;&gt; y = M[:, 0] &gt;&gt; x - y matrix([[0], [0]]) </code></pre>
1
2016-09-26T15:54:32Z
[ "python", "arrays", "numpy" ]
Python MySQLdb connection to online database
39,706,278
<p>While I have no problem connecting to my localhost database this way:</p> <pre><code>import MySQLdb localdb = MySQLdb.connect( host="127.0.0.1", user="root", passwd="password", db="events") </code></pre> <p>I couldn't connect to my online database.</p> <p>Usually I access this database manually with <em>phpmyadmin</em>, and the address is something like <strong>212.227.000.000/phpmyadmin</strong>.</p> <p>So I tried something like </p> <pre><code>onlinedb = MySQLdb.connect( host="212.227.000.000" ... </code></pre> <p>or</p> <pre><code>onlinedb = MySQLdb.connect( host="212.227.000.000/phpmyadmin" ... </code></pre> <p>But I get an error such as:</p> <pre><code>OperationalError: (2003, "Can't connect to MySQL server on '212.227.000.000' (10061)") </code></pre>
0
2016-09-26T15:06:07Z
39,706,480
<p>It sounds like <code>212.227.000.000/phpmyadmin</code> is the URL of PHPMyAdmin (the thing you open in the browser). If so, the database may not be hosted on the machine with IP 212.227.000.000. You should check how PHPMyAdmin connects to the database. If PHPMyAdmin connects to 127.0.0.1, that probably means the database doesn't listen on the external IP address, and can't be reached over the network.</p> <p>If you have ssh access to 212.227.000.000 you can check that with the <code>netstat</code> command:</p> <pre><code>$ netstat -pant | grep LISTEN | grep 3306 tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN - </code></pre> <p>The 0.0.0.0 above indicates that MySQL is listening on all IPs, and barring any firewalls, you should be able to connect to the database. </p> <p>Otherwise, if it says <code>127.0.0.1:3306</code>, the database can only be accessed from the machine itself and not over the network. In that case you can use an SSH tunnel.</p>
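<p>If it turns out the server only listens on 127.0.0.1, a common workaround is an SSH tunnel, e.g. <code>ssh -N -L 3307:127.0.0.1:3306 user@212.227.000.000</code>, and then connecting through the forwarded local port. A sketch (the port number and credentials here are just the ones from the localhost example in the question):</p>
<pre><code>import MySQLdb

onlinedb = MySQLdb.connect(
    host="127.0.0.1",
    port=3307,          # the local end of the SSH tunnel
    user="root",
    passwd="password",
    db="events")
</code></pre>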
1
2016-09-26T15:15:02Z
[ "python", "mysql" ]
Django-haystack search static content
39,706,302
<p>My Django 1.10 app provides search functionality using Haystack + Elasticsearch. It works great for model data, but I need to make it work for static content too (basically HTML files). </p> <p>I was thinking of scraping the content from the HTML files (BeautifulSoup?) and saving it to the database; this way the template content could be indexed.</p> <p>I found this module that does exactly what I need, but it seems deprecated:</p> <p><a href="https://github.com/trapeze/haystack-static-pages" rel="nofollow">https://github.com/trapeze/haystack-static-pages</a></p> <p>So, what's the best way to allow haystack to find the content included in HTML pages?</p>
1
2016-09-26T15:07:14Z
39,753,461
<p>I forked the module haystack-static-pages and adapted it to my needs. It is now compatible with Django 1.10 + Haystack 2.5 and supports logging in to scrape pages behind authentication :)</p> <p>Updated version: <a href="https://github.com/pisapapiros/haystack-static-pages" rel="nofollow">https://github.com/pisapapiros/haystack-static-pages</a></p>
0
2016-09-28T16:54:40Z
[ "python", "django", "django-haystack", "static-files", "html-content-extraction" ]
How to delete record from table where field is equal to header?
39,706,344
<p>I have a sqlite table which contains duplicate headers which I would like to remove. This is my current statement.</p> <pre><code>DELETE FROM table WHERE FIELD = "FIELD" </code></pre> <p>This statement, when executed, deletes the entire table.</p>
0
2016-09-26T15:08:54Z
39,706,365
<p>Don't use double quotes for strings!</p> <pre><code>delete from table where field = 'field'; </code></pre> <p>The SQL standard for string delimiters is single quotes. Sometimes double quotes are used, but they are also used as escape characters for column names. So your code was just <code>where field = field</code> and that deletes all non-NULL values.</p>
5
2016-09-26T15:09:29Z
[ "python", "sql", "sqlite", "sql-delete" ]
Open a file in Sublime Text and wait until it is closed while Python script is running
39,706,394
<p>I want to open a file in Sublime Text while my Python script waits until I have <em>closed</em> the editor. I have tried <code>subprocess.check_call</code> and <code>subprocess.Popen</code>. However, the call ends after the file is opened, rather than waiting for the file to close. How can I wait until the file is closed in Sublime Text?</p> <pre><code>p = subprocess.Popen(['subl', 'parameters.py']) p.communicate() </code></pre> <pre><code>p = subprocess.check_call(['subl', 'parameters.py']) </code></pre>
3
2016-09-26T15:10:59Z
39,710,409
<p>When you just call <code>subl parameters.py</code> it does not block the thread, but you can use the <code>-w</code> flag to do so. I.e. just call</p> <pre class="lang-py prettyprint-override"><code>subprocess.Popen(["subl", "-w", "parameters.py"]).wait() </code></pre> <p>and it should work as requested.</p>
3
2016-09-26T19:01:53Z
[ "python", "subprocess", "sublimetext", "popen" ]