title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
tf.contrib.learn.RunConfig(save_checkpoints_secs=1)) throws unexpected keyword TypeError | 39,827,664 | <p>As per the instructions at <a href="https://www.tensorflow.org/versions/r0.10/tutorials/monitors/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/tutorials/monitors/index.html</a> I have added a monitor to the training script as such:</p>
<pre><code>tf.contrib.learn.DNNClassifier(model_dir=model_dir,
                               feature_columns=deep_columns,
                               hidden_units=[50, 100, 50],
                               config=tf.contrib.learn.RunConfig(
                                   save_checkpoints_secs=1))
</code></pre>
<p>However, this throws the following TypeError:</p>
<blockquote>
<p>TypeError: __init__() got an unexpected keyword argument 'save_checkpoints_secs'</p>
</blockquote>
<p>There are additional lines of code related to the logging and monitoring; however, they seem irrelevant to this problem... maybe... maybe not...</p>
| 0 | 2016-10-03T08:48:38Z | 39,827,799 | <p>Apparently, you're running a version of tensorflow that is < <code>0.10</code>. Versions below <code>0.10</code> do not take a <code>save_checkpoints_secs</code> keyword for <code>RunConfig</code> initialization.</p>
<p>You should upgrade your <code>tensorflow</code> installation:</p>
<p><a href="http://stackoverflow.com/questions/34239537/how-to-update-tensorflow-from-source">How to Update Tensorflow from source</a></p>
<p><a href="https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html" rel="nofollow">Download and Setup</a></p>
| 0 | 2016-10-03T08:55:49Z | [
"python",
"tensorflow"
]
|
Cryptography with ASCII in python 3 | 39,827,719 | <p>Hello there and thanks in advance.</p>
<p>I'm trying to make a cryptography program I have to do for school. I'm not an advanced Python expert, so I'm sorry if this is a silly question. When I run this program and insert, for example, abc with shift 2, it returns cde, which is good. But when I insert xyz with shift 3, instead of wrapping correctly to abc, it returns aaa. This also happens if I use shift 2: it returns zaa. How can I adjust my program to correctly start from the beginning of the alphabet when it runs past the end in the ASCII table?</p>
<pre><code>alf = input("Please insert the text you want to shift: ")  # assumed; this line was missing from the snippet
shift = int(input("Please insert a number you want to shift the characters with: "))
end = ""
for x in alf:
    ascii = ord(x)
    if ascii >= 97 and ascii <= 122:
        res = ascii + shift
        if res > 122:
            res = 0 + 97
        min = res + shift
        end = end + chr(min)
print (end)
</code></pre>
 | 0 | 2016-10-03T08:51:18Z | 39,827,937 | <p>It is because your logic expression is wrong. Here is an example that allows any positive integer as a right shift, wrapping around to a again. It can be optimized a lot (hint: use the modulo operator), but this keeps close to your code, with the numeric values written out explicitly.</p>
<pre><code>for x in alf:
    ascii = ord(x)
    if ascii >= 97 and ascii <= 122:
        res = ascii + shift
        while res > 122:
            res = res - (122 - 97) - 1
        end = end + chr(res)
</code></pre>
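<p>A compact variant of the same idea using the modulo operator hinted at above (a sketch; <code>shift_text</code> is a made-up helper name):</p>

```python
# Sketch of the modulo approach: wrap within the 26 lowercase letters
def shift_text(text, shift):
    result = ""
    for ch in text:
        if 'a' <= ch <= 'z':
            # map to 0-25, add the shift, wrap with % 26, map back to ASCII
            result += chr((ord(ch) - 97 + shift) % 26 + 97)
        else:
            result += ch
    return result

print(shift_text("abc", 2))  # -> cde
print(shift_text("xyz", 3))  # -> abc
```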
| 0 | 2016-10-03T09:03:40Z | [
"python",
"python-3.x",
"loops",
"ascii"
]
|
global and local variables misconception | 39,827,768 | <p>I have a question regarding the output from the following code:</p>
<pre><code>def f():
    global s
    print(s)
    s = "That's clear."
    print(s)
    s = "Python is great!"
    f()
    print(s)
</code></pre>
<p>The output is this:</p>
<blockquote>
<p>Python is great!</p>
<p>That's clear.</p>
<p>That's clear.</p>
</blockquote>
<p>My question is: why is the very last output (i.e. the third output) also "That's clear."?<br>
How come the third output is not "Python is great!"?
I thought the very last statement in the code (i.e. the print(s) statement) is outside the function f(). So shouldn't print(s) there look at the s variable that is defined globally? In that case, the globally defined variable s would refer to the value "Python is great!", wouldn't it? Sorry, there must be some concept I have misunderstood. I am a newbie to Python. Could someone kindly explain this simple concept?</p>
 | -2 | 2016-10-03T08:54:15Z | 39,828,091 | <p>Assuming you have declared the variable <code>s = "Python is great!"</code> globally, you may not have indented the code properly. The code below:</p>
<pre><code>def f():
    global s
    print(s)
    s = "That's clear."
    print(s)

s = "Python is great!"
f()
print(s)
</code></pre>
<p>Will give you the output</p>
<blockquote>
<p>Python is great!</p>
<p>That's clear.</p>
<p>Python is great!</p>
</blockquote>
<p>The code you have written should essentially be giving an infinite recursion and should reach the max recursion depth.</p>
<p>Indentation is very important in python, when you indent the function call <code>f()</code> and the statement <code>print(s)</code>, python considers both of these statements to be a part of the function <code>f()</code>. When the statement <code>f()</code> is reached during the first function call, python will automatically call the function again and repeats the whole process. In reality, you will never be able to reach the third print statement.</p>
| -1 | 2016-10-03T09:12:50Z | [
"python",
"global-variables"
]
|
global and local variables misconception | 39,827,768 | <p>I have a question regarding the output from the following code:</p>
<pre><code>def f():
    global s
    print(s)
    s = "That's clear."
    print(s)
    s = "Python is great!"
    f()
    print(s)
</code></pre>
<p>The output is this:</p>
<blockquote>
<p>Python is great!</p>
<p>That's clear.</p>
<p>That's clear.</p>
</blockquote>
<p>My question is: why is the very last output (i.e. the third output) also "That's clear."?<br>
How come the third output is not "Python is great!"?
I thought the very last statement in the code (i.e. the print(s) statement) is outside the function f(). So shouldn't print(s) there look at the s variable that is defined globally? In that case, the globally defined variable s would refer to the value "Python is great!", wouldn't it? Sorry, there must be some concept I have misunderstood. I am a newbie to Python. Could someone kindly explain this simple concept?</p>
 | -2 | 2016-10-03T08:54:15Z | 39,828,216 | <p>To see the output you do, the structure of your code has to be:</p>
<pre><code>def f():
    global s
    print(s)             # s outside the function
    s = "That's clear."  # new global s created
    print(s)             # print the new s

s = "Python is great!"   # s before you call the function/get to s = "That's clear."
f()                      # f gets called and new global s is created
print(s)                 # you see the new global s created in the function
</code></pre>
<p>Making <em>s</em> global means it is visible outside the scope of <code>f</code>. You have already executed the function by the time you reach the last print, so <em>s</em> now points to <code>That's clear.</code></p>
<p>If you wanted to get the output you expected, you would pass s into the function and not use the global keyword so the <em>s</em> created in <code>f</code> would be accessible only in the scope of the function itself.</p>
<pre><code>def f(s):
    print(s)
    s = "That's clear."
    print(s)

s = "Python is great!"
f(s)
print(s)
</code></pre>
<p>This should be a good lesson on why using <em>global</em> is rarely a good idea.</p>
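<p>If you do want the function's new value to propagate without <code>global</code>, returning it and rebinding at the call site works; a sketch:</p>

```python
def f(s):
    print(s)
    return "That's clear."  # hand the new value back instead of mutating a global

s = "Python is great!"
s = f(s)   # prints: Python is great!
print(s)   # prints: That's clear.
```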
| 0 | 2016-10-03T09:19:28Z | [
"python",
"global-variables"
]
|
How to run Spark Java code in Airflow? | 39,827,804 | <p>Hello people of the Earth!
I'm using Airflow to schedule and run Spark tasks.
All I have found so far is Python DAGs that Airflow can manage.
<br> DAG example:</p>
<pre><code># spark_count_lines.py
import logging
from airflow import DAG
from airflow.operators import PythonOperator
from datetime import datetime

args = {
    'owner': 'airflow'
    , 'start_date': datetime(2016, 4, 17)
    , 'provide_context': True
}

dag = DAG(
    'spark_count_lines'
    , start_date = datetime(2016, 4, 17)
    , schedule_interval = '@hourly'
    , default_args = args
)

def run_spark(**kwargs):
    import pyspark
    sc = pyspark.SparkContext()
    df = sc.textFile('file:///opt/spark/current/examples/src/main/resources/people.txt')
    logging.info('Number of lines in people.txt = {0}'.format(df.count()))
    sc.stop()

t_main = PythonOperator(
    task_id = 'call_spark'
    , dag = dag
    , python_callable = run_spark
)
</code></pre>
<p>The problem is I'm not good with Python and have some tasks written in Java. My question is: how do I run a Spark Java jar in a Python DAG? Or maybe there is another way to do it? I found spark-submit: <a href="http://spark.apache.org/docs/latest/submitting-applications.html" rel="nofollow">http://spark.apache.org/docs/latest/submitting-applications.html</a>
<br>But I don't know how to connect everything together. Maybe someone has used it before and has a working example. Thank you for your time!</p>
| 1 | 2016-10-03T08:56:09Z | 39,833,103 | <p>You should be able to use <code>BashOperator</code>. Keeping the rest of your code as is, import required class and system packages:</p>
<pre><code>from airflow.operators.bash_operator import BashOperator
import os
import sys
</code></pre>
<p>set required paths:</p>
<pre><code>os.environ['SPARK_HOME'] = '/path/to/spark/root'
sys.path.append(os.path.join(os.environ['SPARK_HOME'], 'bin'))
</code></pre>
<p>and add operator:</p>
<pre><code>spark_task = BashOperator(
task_id='spark_java',
bash_command='spark-submit --class {{ params.class }} {{ params.jar }} ',
params={'class': 'MainClassName', 'jar': '/path/to/your.jar'},
dag=dag
)
</code></pre>
<p>You can easily extend this to provide additional arguments using Jinja templates.</p>
| 0 | 2016-10-03T13:41:08Z | [
"java",
"python",
"apache-spark",
"directed-acyclic-graphs",
"airflow"
]
|
python json object array not serializable | 39,827,862 | <p>I have a class and array in a loop. I'm filling the object and appending it to the array. How do I pass array data to web side through JSON? I have an error that says JSON is not serializable. How can I serialize to a JSON?</p>
<pre><code>class myclass(object):
    def __init__(self, vehicle, mng01):
        self.vehicle = vehicle
        self.mng01 = mng01

#--Main function--
@subApp.route('/jsontry', method=['POST'])
def data():
    for x in list:
        vehicle_sum.append(myclass(x, str(mng01)))
    return json.dumps({'success':'1','vehicle':vehicle_sum})
</code></pre>
| 1 | 2016-10-03T09:00:03Z | 39,828,172 | <p>Does it say myclass object is not JSON serializable? This is because there is no way for <code>json.dumps(..)</code> to know how to JSON-serialize your class. You will need to write your <a href="https://docs.python.org/2/library/json.html#json.JSONEncoder" rel="nofollow">custom encoder to do that</a>.</p>
<p>Below is a sample implementation. I am sure you can modify it for your use case.</p>
<pre><code>import json

class Temp(object):
    def __init__(self, num):
        self.num = num

class CustomEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Temp):
            return {"num": obj.num}
        # Let the base class handle the problem.
        return json.JSONEncoder.default(self, obj)

obj = [Temp(42), Temp(42)]
print(json.dumps(obj, cls=CustomEncoder))
# Output: [{"num": 42}, {"num": 42}]
</code></pre>
<hr>
<p>If you don't want to over-complicate stuff, here's what you can do:</p>
<pre><code>all_vehicles = [{"vehicle": x.vehicle} for x in vehicle_sum]
json.dumps({'success': '1', 'vehicle': all_vehicles})
</code></pre>
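<p>If every attribute of the object should be serialized, another lightweight option is passing the built-in <code>vars</code> (which returns the instance <code>__dict__</code>) as the <code>default</code> hook:</p>

```python
import json

class Temp(object):
    def __init__(self, num):
        self.num = num

# default= is called for any object json.dumps cannot serialize on its own
print(json.dumps([Temp(1), Temp(2)], default=vars))  # -> [{"num": 1}, {"num": 2}]
```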
| 2 | 2016-10-03T09:16:45Z | [
"python",
"json",
"ajax",
"serialization"
]
|
python json object array not serializable | 39,827,862 | <p>I have a class and array in a loop. I'm filling the object and appending it to the array. How do I pass array data to web side through JSON? I have an error that says JSON is not serializable. How can I serialize to a JSON?</p>
<pre><code>class myclass(object):
    def __init__(self, vehicle, mng01):
        self.vehicle = vehicle
        self.mng01 = mng01

#--Main function--
@subApp.route('/jsontry', method=['POST'])
def data():
    for x in list:
        vehicle_sum.append(myclass(x, str(mng01)))
    return json.dumps({'success':'1','vehicle':vehicle_sum})
</code></pre>
 | 1 | 2016-10-03T09:00:03Z | 39,828,178 | <p>It doesn't say that JSON is not serializable; it says that your instances of <code>myclass</code> are not JSON serializable. I.e., they cannot be represented as JSON, because it is not clear what you expect as output.</p>
<p>To find out how to make a class JSON serializable, check this question: <a href="http://stackoverflow.com/questions/3768895/how-to-make-a-class-json-serializable">How to make a class JSON serializable</a></p>
<p>In this trivial case, depending on what exactly you're trying to achieve, you might be good with inserting <code>[x.vehicle for x in vehicle_sum]</code> in your JSON. If you really need to insert your instances data in JSON in a direct way, you'll have to write an encoder.</p>
| 0 | 2016-10-03T09:16:54Z | [
"python",
"json",
"ajax",
"serialization"
]
|
Get dataframe columns from a list using isin | 39,827,897 | <p>I have a dataframe <code>df1</code>, and I have a list which contains names of several columns of <code>df1</code>.</p>
<pre><code>df1:
User_id month day Age year CVI ZIP sex wgt
0 1 7 16 1977 2 NA M NaN
1 2 7 16 1977 3 NA M NaN
2 3 7 16 1977 2 DM F NaN
3 4 7 16 1977 7 DM M NaN
4 5 7 16 1977 3 DM M NaN
... ... ... ... ... ... ... ... ...
35544 35545 12 31 2002 15 AH NaN NaN
35545 35546 12 31 2002 15 AH NaN NaN
35546 35547 12 31 2002 10 RM F 14
35547 35548 12 31 2002 7 DO M 51
35548 35549 12 31 2002 5 NaN NaN NaN
list= [u"User_id", u"day", u"ZIP", u"sex"]
</code></pre>
<p>I want to make a new dataframe <code>df2</code> which will contain only those columns that are in the list, and a dataframe <code>df3</code> which will contain the columns that are not in the list.</p>
<p><a href="http://stackoverflow.com/questions/16804476/python-pandas-isin-on-a-list">Here</a> I found that I need to do:</p>
<pre><code>df2=df1[df1[df1.columns[1]].isin(list)]
</code></pre>
<p>But as a result I get:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
[0 rows x 9 columns]
</code></pre>
<p>What am I doing wrong and how can I get the needed result? Why "9 columns" if it is supposed to be 4?</p>
 | 1 | 2016-10-03T09:01:30Z | 39,828,023 | <p>You can try:</p>
<pre><code>df2 = df1[list] # it does a projection on the columns contained in the list
df3 = df1[[col for col in df1.columns if col not in list]]
</code></pre>
| 1 | 2016-10-03T09:09:04Z | [
"python",
"list",
"pandas",
"condition",
"multiple-columns"
]
|
Get dataframe columns from a list using isin | 39,827,897 | <p>I have a dataframe <code>df1</code>, and I have a list which contains names of several columns of <code>df1</code>.</p>
<pre><code>df1:
User_id month day Age year CVI ZIP sex wgt
0 1 7 16 1977 2 NA M NaN
1 2 7 16 1977 3 NA M NaN
2 3 7 16 1977 2 DM F NaN
3 4 7 16 1977 7 DM M NaN
4 5 7 16 1977 3 DM M NaN
... ... ... ... ... ... ... ... ...
35544 35545 12 31 2002 15 AH NaN NaN
35545 35546 12 31 2002 15 AH NaN NaN
35546 35547 12 31 2002 10 RM F 14
35547 35548 12 31 2002 7 DO M 51
35548 35549 12 31 2002 5 NaN NaN NaN
list= [u"User_id", u"day", u"ZIP", u"sex"]
</code></pre>
<p>I want to make a new dataframe <code>df2</code> which will contain only those columns that are in the list, and a dataframe <code>df3</code> which will contain the columns that are not in the list.</p>
<p><a href="http://stackoverflow.com/questions/16804476/python-pandas-isin-on-a-list">Here</a> I found that I need to do:</p>
<pre><code>df2=df1[df1[df1.columns[1]].isin(list)]
</code></pre>
<p>But as a result I get:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
[0 rows x 9 columns]
</code></pre>
<p>What am I doing wrong and how can I get the needed result? Why "9 columns" if it is supposed to be 4?</p>
| 1 | 2016-10-03T09:01:30Z | 39,828,114 | <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow"><code>Index.difference</code></a>:</p>
<pre><code>L = [u"User_id", u"day", u"ZIP", u"sex"]
df2 = df1[L]
df3 = df1[df1.columns.difference(df2.columns)]
print (df2)
User_id day ZIP sex
0 0 7 NaN M
1 1 7 NaN M
2 2 7 DM F
3 3 7 DM M
4 4 7 DM M
print (df3)
Age CVI month wgt year
0 16 2 1 NaN 1977
1 16 3 2 NaN 1977
2 16 2 3 NaN 1977
3 16 7 4 NaN 1977
4 16 3 5 NaN 1977
</code></pre>
<p>Or:</p>
<pre><code>df2 = df1[L]
df3 = df1[df1.columns.difference(pd.Index(L))]
print (df2)
User_id day ZIP sex
0 0 7 NaN M
1 1 7 NaN M
2 2 7 DM F
3 3 7 DM M
4 4 7 DM M
print (df3)
Age CVI month wgt year
0 16 2 1 NaN 1977
1 16 3 2 NaN 1977
2 16 2 3 NaN 1977
3 16 7 4 NaN 1977
4 16 3 5 NaN 1977
</code></pre>
| 1 | 2016-10-03T09:13:44Z | [
"python",
"list",
"pandas",
"condition",
"multiple-columns"
]
|
Get dataframe columns from a list using isin | 39,827,897 | <p>I have a dataframe <code>df1</code>, and I have a list which contains names of several columns of <code>df1</code>.</p>
<pre><code>df1:
User_id month day Age year CVI ZIP sex wgt
0 1 7 16 1977 2 NA M NaN
1 2 7 16 1977 3 NA M NaN
2 3 7 16 1977 2 DM F NaN
3 4 7 16 1977 7 DM M NaN
4 5 7 16 1977 3 DM M NaN
... ... ... ... ... ... ... ... ...
35544 35545 12 31 2002 15 AH NaN NaN
35545 35546 12 31 2002 15 AH NaN NaN
35546 35547 12 31 2002 10 RM F 14
35547 35548 12 31 2002 7 DO M 51
35548 35549 12 31 2002 5 NaN NaN NaN
list= [u"User_id", u"day", u"ZIP", u"sex"]
</code></pre>
<p>I want to make a new dataframe <code>df2</code> which will contain only those columns that are in the list, and a dataframe <code>df3</code> which will contain the columns that are not in the list.</p>
<p><a href="http://stackoverflow.com/questions/16804476/python-pandas-isin-on-a-list">Here</a> I found that I need to do:</p>
<pre><code>df2=df1[df1[df1.columns[1]].isin(list)]
</code></pre>
<p>But as a result I get:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
[0 rows x 9 columns]
</code></pre>
<p>What am I doing wrong and how can I get the needed result? Why "9 columns" if it is supposed to be 4?</p>
| 1 | 2016-10-03T09:01:30Z | 39,828,564 | <p>never name a list as "list"</p>
<pre><code>my_list= [u"User_id", u"day", u"ZIP", u"sex"]
df2 = df1[df1.keys()[df1.keys().isin(my_list)]]
</code></pre>
| 1 | 2016-10-03T09:36:19Z | [
"python",
"list",
"pandas",
"condition",
"multiple-columns"
]
|
Get dataframe columns from a list using isin | 39,827,897 | <p>I have a dataframe <code>df1</code>, and I have a list which contains names of several columns of <code>df1</code>.</p>
<pre><code>df1:
User_id month day Age year CVI ZIP sex wgt
0 1 7 16 1977 2 NA M NaN
1 2 7 16 1977 3 NA M NaN
2 3 7 16 1977 2 DM F NaN
3 4 7 16 1977 7 DM M NaN
4 5 7 16 1977 3 DM M NaN
... ... ... ... ... ... ... ... ...
35544 35545 12 31 2002 15 AH NaN NaN
35545 35546 12 31 2002 15 AH NaN NaN
35546 35547 12 31 2002 10 RM F 14
35547 35548 12 31 2002 7 DO M 51
35548 35549 12 31 2002 5 NaN NaN NaN
list= [u"User_id", u"day", u"ZIP", u"sex"]
</code></pre>
<p>I want to make a new dataframe <code>df2</code> which will contain only those columns that are in the list, and a dataframe <code>df3</code> which will contain the columns that are not in the list.</p>
<p><a href="http://stackoverflow.com/questions/16804476/python-pandas-isin-on-a-list">Here</a> I found that I need to do:</p>
<pre><code>df2=df1[df1[df1.columns[1]].isin(list)]
</code></pre>
<p>But as a result I get:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
[0 rows x 9 columns]
</code></pre>
<p>What am I doing wrong and how can I get the needed result? Why "9 columns" if it is supposed to be 4?</p>
| 1 | 2016-10-03T09:01:30Z | 39,828,659 | <p>never name a list as "list"</p>
<pre><code>my_list= [u"User_id", u"day", u"ZIP", u"sex"]
df2 = df1[df1.keys()[df1.keys().isin(my_list)]]
</code></pre>
<p>or</p>
<pre><code>df2 = df1[df1.columns[df1.columns.isin(my_list)]]
</code></pre>
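<p>An equivalent split, assuming every name in the list exists as a column, uses <code>drop</code>; a sketch with a small stand-in frame:</p>

```python
import pandas as pd

# stand-in frame with a few of the columns from the question
df1 = pd.DataFrame({
    "User_id": [0, 1], "day": [7, 7], "ZIP": ["NA", "DM"],
    "sex": ["M", "F"], "Age": [16, 16],
})
my_list = ["User_id", "day", "ZIP", "sex"]

df2 = df1[my_list]                # columns in the list
df3 = df1.drop(my_list, axis=1)   # columns not in the list
print(list(df2.columns))  # -> ['User_id', 'day', 'ZIP', 'sex']
print(list(df3.columns))  # -> ['Age']
```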
| 1 | 2016-10-03T09:41:45Z | [
"python",
"list",
"pandas",
"condition",
"multiple-columns"
]
|
using python to read a column vector from excel | 39,827,924 | <p>OK, I think this must be a super simple thing to do, but I keep getting index error messages no matter how I try to format this. My professor is making us multiply a 1x3 row vector by a 3x1 column vector, and I can't get Python to read the column vector. The row vector is from cells A1-C1, and the column vector is from cells A3-A5 in my Excel spreadsheet. I am using the right "format" for how he wants us to do it (if I do something that works but don't format it the way he likes, I don't get credit). The row vector is reading properly in the variable explorer, but I am only getting a 2x2 column vector (with the first column being the 0th column and being all zeros, again how he wants it). I haven't even gotten to the multiplication part of the code because I can't get Python to read the column vector correctly. Here is the code:</p>
<pre><code>import xlwings as xw
import numpy as np

filename = 'C:\\python\\homework4.xlsm'
wb = xw.Workbook(filename)

#initialize vectors
a = np.zeros((1+1,3+1))
b = np.zeros((3+1,1+1))
n = 3

#Read a and b vectors from excel
for i in range(1,n+1):
    for j in range(1,n+1):
        a[i,j] = xw.Range((i,j)).value
    'end j'
    b[i,j] = xw.Range((i+2,j)).value
'end i'
</code></pre>
 | 0 | 2016-10-03T09:02:40Z | 39,828,590 | <p>Something like this should work. The way you iterate over <code>i</code> and <code>j</code> is wrong (plus the initialization of <code>a</code> and <code>b</code>):</p>
<pre><code>#initialize vectors
a = np.zeros((1,3))
b = np.zeros((3,1))
n = 3

#Read a and b vectors from excel
for i in range(0,n):
    a[0,i] = xw.Range((1,i+1)).value
for i in range(0,n):
    b[i,0] = xw.Range((3+i,1)).value
</code></pre>
| 0 | 2016-10-03T09:38:00Z | [
"python",
"excel",
"vector"
]
|
using python to read a column vector from excel | 39,827,924 | <p>OK, I think this must be a super simple thing to do, but I keep getting index error messages no matter how I try to format this. My professor is making us multiply a 1x3 row vector by a 3x1 column vector, and I can't get Python to read the column vector. The row vector is from cells A1-C1, and the column vector is from cells A3-A5 in my Excel spreadsheet. I am using the right "format" for how he wants us to do it (if I do something that works but don't format it the way he likes, I don't get credit). The row vector is reading properly in the variable explorer, but I am only getting a 2x2 column vector (with the first column being the 0th column and being all zeros, again how he wants it). I haven't even gotten to the multiplication part of the code because I can't get Python to read the column vector correctly. Here is the code:</p>
<pre><code>import xlwings as xw
import numpy as np

filename = 'C:\\python\\homework4.xlsm'
wb = xw.Workbook(filename)

#initialize vectors
a = np.zeros((1+1,3+1))
b = np.zeros((3+1,1+1))
n = 3

#Read a and b vectors from excel
for i in range(1,n+1):
    for j in range(1,n+1):
        a[i,j] = xw.Range((i,j)).value
    'end j'
    b[i,j] = xw.Range((i+2,j)).value
'end i'
</code></pre>
 | 0 | 2016-10-03T09:02:40Z | 39,828,597 | <p>Remember, Python uses 0-based indexing and Excel uses 1-based indexing. </p>
<p>This code will read out the vectors properly, and then you can check on numpy "scalar product" to produce the multiplication. You can also assign the whole vectors immediately without loop.</p>
<pre><code>import xlwings as xw
import numpy as np

filename = 'C:\\Temp\\Book2.xlsx'
wb = xw.Book(filename).sheets[0]
n = 3

#initialize vectors
a = np.zeros((1,n))
b = np.zeros((n,1))

#Read a and b vectors from excel
for j in range(1,n+1):
    a[0, j-1] = wb.range((1, j)).value
    b[j-1, 0] = wb.range((j+3-1, 1)).value

#Without loop
a = wb.range((1, 1),(1, 3)).value
b = wb.range((3, 1),(5, 1)).value
</code></pre>
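<p>Once both vectors are read, the 1x3-by-3x1 multiplication itself can be done with numpy; a sketch with literal values standing in for the spreadsheet cells:</p>

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0]])      # 1x3 row vector (stand-in for A1:C1)
b = np.array([[4.0], [5.0], [6.0]])  # 3x1 column vector (stand-in for A3:A5)

product = np.dot(a, b)  # (1x3) dot (3x1) gives a 1x1 result: 1*4 + 2*5 + 3*6 = 32
print(product[0, 0])  # -> 32.0
```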
| 0 | 2016-10-03T09:38:13Z | [
"python",
"excel",
"vector"
]
|
Python program to match string based on user input | 39,828,202 | <p>I need to write a Python program that allows the user to enter input
like <code>Apple,Ball</code> and, if it matches a line in the file, prints it.
So far I have this:</p>
<pre><code>import re
import sys

print('Enter the values')
value1 = input()
try:
    filenm1 = "D:\\names.txt"
    t = open(filenm1, 'r')
    for line in t:
        regexp = re.search(value1, line)
        if regexp:
            print(line)
except IOError:
    print('File not opened')
    sys.exit(0)
</code></pre>
<p>Sample Input File </p>
<pre><code>Apple
Ball
Stackoverflow
Call
Doll
User input : App
Output : Apple
</code></pre>
<p>Now I want to modify this program to search by
user input <code>App,Doll</code>
and output:</p>
<pre><code>Apple
Doll
</code></pre>
| 0 | 2016-10-03T09:18:31Z | 39,830,847 | <p>You can change your loop into this:</p>
<pre><code>import sys

print('Enter the values')
value1 = input()
value1 = value1.split(',')
try:
    filenm1 = "D:\\names.txt"
    t = open(filenm1, 'r')
    for line in t:
        alreadyPrinted = False
        for value in value1:
            if value in line:
                if not alreadyPrinted: # this bit prevents the line being printed twice
                    print(line)
                    alreadyPrinted = True
except IOError:
    print('File not opened')
    sys.exit(0)
| 0 | 2016-10-03T11:46:19Z | [
"python",
"python-3.x"
]
|
How to get For loops to communicate with a While loop | 39,828,257 | <p>I need to write a program that takes the last number from the for loop ( if it exceeds a certain number) and places it in the while loop. I have left the unfinished code below.</p>
<pre><code>t=0
num=int(input("How many presents do you want to buy?: "))
for i in range(num):
    t=t+int(input("Please enter the price of a present"))
while t>'200':
    print("Limit Exceeded.")
    print("You need to get rid of the ") #price from the previous loop
</code></pre>
 | -1 | 2016-10-03T09:21:40Z | 39,828,586 | <p>You need to use a list.</p>
<p>Ex</p>
<pre><code>t = 0
num = int(input("How many presents do you want to buy?: "))
prices = []
for i in range(num):
    prices.append(int(input("Please enter the price of a present")))
    t += prices[-1]
    while t > 200:
        print("Limit Exceeded.")
        print("You need to get rid of the: {}".format(prices[-1]))
        t -= prices.pop()
</code></pre>
<p>AND
you are comparing wrongly:</p>
<pre><code>if t > '200'
</code></pre>
<p><code>t</code> is an 'int' type and you are comparing it with a string.
Please correct it.</p>
<p>Hope this helps.</p>
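<p>The same logic can be exercised without interactive input by factoring it into a function (a sketch; <code>trim_to_budget</code> is a made-up helper name):</p>

```python
def trim_to_budget(prices, limit=200):
    # drop the most recently entered prices until the total fits the limit
    prices = list(prices)
    while sum(prices) > limit:
        removed = prices.pop()
        print("Limit Exceeded. You need to get rid of the: {}".format(removed))
    return prices

print(trim_to_budget([120, 90, 50]))  # -> [120]
```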
| 0 | 2016-10-03T09:37:48Z | [
"python",
"for-loop",
"while-loop"
]
|
Using grep to compare two lists of Python packages | 39,828,366 | <p>I want to generate the list of packages installed in Python 3, the list of all packages in Python 2.7, and find all entries in the 2.7 list <em>not</em> in the Python 3 list.</p>
<p>Generating the list is easy: <code>pip freeze</code> or <code>pip3.4 freeze</code>.</p>
<p>Searching for a package in the list is equally trivial <code>pip freeze | grep "wheel"</code> for example</p>
<p>However, if I want to search for intersections between the lists, or in this instance <em>non</em>-intersections, I would expect to use something like this: <code>pip freeze | grep -n pip3.4 freeze</code></p>
<p>However it tells me that, obviously the parameter for grep <code>...is not a file or directory</code>. My shell scripting is rusty and I vaguely remember there should be a simple way of doing this other than piping both lists to files?</p>
 | 3 | 2016-10-03T09:26:31Z | 39,829,092 | <p>You can also use the <code>comm</code> command (note that <code>comm</code> expects its inputs sorted):</p>
<pre><code>comm -12 <(pip freeze | sort) <(pip3.4 freeze | sort)
</code></pre>
<p>Alternatively, to search for intersections with <code>grep</code>:</p>
<pre><code>grep -f <(pip freeze) <(pip3.4 freeze)
</code></pre>
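<p>For the non-intersection the question actually asks for (entries in the 2.7 list that are not in the Python 3 list), <code>grep -v</code> with fixed-string whole-line matching works; a sketch using literal lists in place of the <code>pip freeze</code> output:</p>

```shell
# with pip it would read: grep -vxFf <(pip3.4 freeze) <(pip freeze)
# -v invert match, -x whole line, -F fixed strings, -f read patterns from file
grep -vxFf <(printf 'a==1\nb==2\n') <(printf 'a==1\nb==2\nc==3\n')
# -> c==3
```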
| 1 | 2016-10-03T10:06:09Z | [
"python",
"bash",
"list",
"shell",
"grep"
]
|
SLURM with python multiprocessing give inconsistent results | 39,828,425 | <p>We have a small HPC with 4*64 cores, and it has SLURM installed in it.</p>
<p>The nodes are:</p>
<pre><code>sinfo -N -l
Mon Oct 3 08:58:12 2016
NODELIST NODES PARTITION STATE CPUS S:C:T MEMORY TMP_DISK WEIGHT FEATURES REASON
dlab-node1 1 dlab* idle 64 2:16:2 257847 0 1 (null) none
dlab-node2 1 dlab* idle 64 2:16:2 257847 0 1 (null) none
dlab-node3 1 dlab* idle 64 2:16:2 257847 0 1 (null) none
dlab-node4 1 dlab* idle 64 2:16:2 257847 0 1 (null) none
</code></pre>
<p>To test SLURM, I wrote a little script in Python with multiprocessing:</p>
<pre><code>import multiprocessing
import os

def func(i):
    print(n_procs)

n_procs = int(os.environ['SLURM_JOB_CPUS_PER_NODE'].split('(')[0]) * int(os.environ['SLURM_JOB_NUM_NODES'])

p = multiprocessing.Pool(n_procs)
list(p.imap_unordered(func, [i for i in range(n_procs*2)]))
</code></pre>
<p>I use the following batch <code>sh</code> script to run it with SLURM</p>
<pre><code>#!/bin/bash
#
#SBATCH -p dlab # partition (queue)
#SBATCH -N 2 # number of nodes
#SBATCH -n 64 # number of cores
#SBATCH --mem 250 # memory pool for all cores
#SBATCH -t 0-2:00 # time (D-HH:MM)
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -e slurm.%N.%j.err # STDERR
python3 asd.py
</code></pre>
<p>As I would expect, this should print <code>128</code> <code>256</code> times to the STDOUT file.</p>
<p>However, if I run this multiple times I get a very different number of lines (they all contain <code>128</code>, which is correct).</p>
<p>For the first run I got 144 lines, the second time I got 256 (which is correct) and the third time I got 184.</p>
<p>What is the problem? Should I investigate something inside the configuration of SLURM, or is there something wrong within Python <code>multiprocessing</code>?</p>
| 0 | 2016-10-03T09:29:22Z | 39,834,957 | <p>From sbatch man page:</p>
<blockquote>
<p>SLURM_JOB_CPUS_PER_NODE</p>
<p>Count of processors available to the job <strong>on this node</strong>. Note the select/linear plugin allocates entire nodes to jobs, so the value indicates the total count of CPUs on the node. The select/cons_res plugin allocates individual processors to jobs, so this number indicates the number of processors on this node allocated to the job</p>
</blockquote>
<p>As highlighted, the variable will only return the number of CPUs allocated on the node where the script is running. If you want a homogeneous allocation you should specify <code>--ntasks-per-node=32</code>.</p>
<p>Also, bear in mind that multiprocessing will not spawn processes on more than one node. If you want to span multiple nodes there is nice documentation <a href="https://rcc.uchicago.edu/docs/tutorials/kicp-tutorials/running-jobs.html#multiprocessing" rel="nofollow">here</a>.</p>
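<p>Under that advice, the batch header might look like this (a sketch; the exact counts depend on your cluster):</p>

```shell
#!/bin/bash
#SBATCH -p dlab
#SBATCH -N 2                   # two nodes
#SBATCH --ntasks-per-node=32   # 32 tasks on each node, 64 in total
#SBATCH --mem 250
#SBATCH -t 0-2:00

python3 asd.py
```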
| 2 | 2016-10-03T15:17:39Z | [
"python",
"python-3.x",
"multiprocessing",
"slurm"
]
|
how to get ALL your channel-IDs - youtube api v3 | 39,828,449 | <p>I was wondering if it was possible to gain your other channel-IDs, this code under is only getting one channelID (the one channel i clicked on during the authorization).</p>
<pre><code>def gaining_channelID(youtube):
    channels_list_response = youtube.channels().list(
        part="id",
        mine=True
    ).execute()
    channelid = ""
    for list in channels_list_response["items"]:
        channelid = list["id"]
        print(channelid)
    return channelid
</code></pre>
<p>And can I upload videos to my other channels? Or do I HAVE to go through the authorization again and choose a different channel I want to upload to?</p>
<p>Right now it seems like that's the only way.</p>
<p>I've been watching this and it said so, but I was wondering if it had changed (since the question was asked 2 years ago):</p>
<p><a href="http://stackoverflow.com/questions/20072799/youtube-api-v3-get-all-channels-associated-with-a-logged-in-user">YouTube API v3 get all channels associated with a logged in user</a></p>
 | 1 | 2016-10-03T09:30:31Z | 39,828,790 | <p>When you authenticate to the YouTube API you pick a channel. In my case I have 6 channels, I think. If I authenticate to the first channel I will only have access to the first channel's data. </p>
<p>Yes you need to authenticate for each of the channels. The access you get will be channel specific. </p>
| 2 | 2016-10-03T09:48:54Z | [
"python",
"api",
"youtube",
"youtube-api",
"youtube-data-api"
]
|
Finding dimensions of a keras intermediate expression | 39,828,691 | <p>I am implementing a custom keras layer.
The call method in my class is as follows.</p>
<pre><code> def call(self, inputs, mask=None):
if type(inputs) is not list or len(inputs) <= 1:
raise Exception('Merge must be called on a list of tensors '
'(at least 2). Got: ' + str(inputs))
e1 = inputs[0]
e2 = inputs[1]
f = K.transpose((K.batch_dot(e1, K.dot(e2, self.W), axes=1))) #Removing K.transpose also works, why?
return f
</code></pre>
<p>I verfied and the code works but I am trying to find ways to better debug when implementing a custom layer in keras.
Assuming e1 and e2 are (batch_size * d) and W is (d*d)
How can I find the dimensions of each subpart of my expression?
For eg. K.dot(e2, self.W), the result of batch_dot etc.</p>
| 0 | 2016-10-03T09:43:23Z | 39,863,014 | <p>If you are using the theano backend, you can define Theano functions. (<a href="https://github.com/fchollet/keras/issues/41" rel="nofollow">like François suggested</a>)</p>
<p>E.g. </p>
<pre class="lang-py prettyprint-override"><code>import theano
from keras import layers
input = layers.Input(params)
layer = YourLayer(params)
output = layer(input)
debug_fn = theano.function([input], output)
print(debug_fn(numpy_array))
</code></pre>
<p>If you want intermediate results I usually just return them temporarily, like this for example:</p>
<pre class="lang-py prettyprint-override"><code> def call(self, inputs, mask=None):
if type(inputs) is not list or len(inputs) <= 1:
raise Exception('Merge must be called on a list of tensors '
'(at least 2). Got: ' + str(inputs))
e1 = inputs[0]
e2 = inputs[1]
f = K.transpose((K.batch_dot(e1, K.dot(e2, self.W), axes=1))) #Removing K.transpose also works, why?
return f, e1
import theano
from keras import layers
input = layers.Input(params)
layer = YourLayer(params)
output, e1 = layer(input)
debug_fn = theano.function([input], e1)
print(debug_fn(numpy_array))
</code></pre>
<p>I do not know if there are better practices, but it works quite well for me.</p>
| 1 | 2016-10-04T22:58:08Z | [
"python",
"keras",
"keras-layer"
]
|
Visual studio code interactive python console | 39,828,744 | <p>I'm using visual studio code with DonJayamanne python extension. It's working fine but I want to have an interactive session just like the one in Matlab, where after code execution every definition and computational result remains and accessible in the console. </p>
<p>For example after running this code: </p>
<pre><code>a = 1
</code></pre>
<p>the python session is terminated and I cannot type in the console something like:</p>
<pre><code>b = a + 1
print(b)
</code></pre>
<p>I'm aware that the python session can stay alive with a "-i" flag. But this simply doesn't work. </p>
<p>Also every time I run a code file, a new python process is spawned. Is there a way to run consecutive runs in just one console? Again like Matlab?</p>
<p>This sounds really essential and trivial to me. Am I missing something big here that I can't find a solution for this?</p>
| 0 | 2016-10-03T09:45:53Z | 39,846,499 | <p>I'm the author of the extension.
There are two options:</p>
<ol>
<li><p>Use the integrated terminal window (I guess you already knew this)<br>
Launch the terminal window and type in <code>python</code>.<br>
Every statement executed in the REPL is within the same session. </p></li>
<li><p>Next version will add support for Jupyter.<br>
Please have a look here for some samples of what is yet to come: </p>
<ul>
<li><a href="https://github.com/DonJayamanne/pythonVSCode/issues/303" rel="nofollow">#303</a> </li>
<li>Screen <a href="https://github.com/DonJayamanne/pythonVSCodeDocs/blob/master/images/jupyter/examples.gif" rel="nofollow">samples</a> and <a href="https://github.com/DonJayamanne/pythonVSCodeDocs/tree/master/images/jupyter" rel="nofollow">more</a></li>
</ul></li>
</ol>
| 1 | 2016-10-04T07:25:51Z | [
"python",
"ipython",
"vscode"
]
|
Copy number of rows for n number of times using Python and write them in other file | 39,828,791 | <p>Hi I'm writing a simple script to copy a set of rows from a <strong>csv</strong> file and paste them for N number of times in other file. </p>
<p>I'm not able to write the result into other file.</p>
<p>Please find the code below:</p>
<pre><code>import csv
for i in range(2):
with open('C:\\Python\\CopyPaste\\result2.csv', 'r') as fp:
data = fp.readlines()
fp.close()
with open('C:\\Python\\CopyPaste\\mydata.csv', 'w') as mycsvfile:
thedatawriter = csv.writer(mycsvfile)
for row in data:
thedatawriter.writerow(row)
</code></pre>
<p><img src="http://i.stack.imgur.com/ssulP.jpg" alt="enter image description here"></p>
| -1 | 2016-10-03T09:48:55Z | 39,829,026 | <p>I guess your question is: read a .csv file and then write the data to another .csv file N times?</p>
<p>If my understanding is right, my suggestion would be to use the pandas library, which is very convenient.</p>
<p>Something like:</p>
<pre><code>import pandas as pd
df = pd.read_csv('origin.csv')
df.to_csv('output.csv')
</code></pre>
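<p>The snippet above copies the file only once; to paste the rows N times with pandas, one option (a sketch, assuming the data fits in memory) is <code>pd.concat</code>:</p>

```python
import pandas as pd

# Hypothetical stand-in for pd.read_csv('origin.csv'), so the sketch is self-contained
df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

N = 3
repeated = pd.concat([df] * N, ignore_index=True)  # stack the same rows N times
repeated.to_csv("output.csv", index=False)         # index=False avoids an extra index column
print(len(repeated))  # 6
```

<p>Note that <code>index=False</code> keeps the output file free of the extra index column that <code>to_csv</code> writes by default.</p>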
| 0 | 2016-10-03T10:02:04Z | [
"python",
"csv"
]
|
Copy number of rows for n number of times using Python and write them in other file | 39,828,791 | <p>Hi I'm writing a simple script to copy a set of rows from a <strong>csv</strong> file and paste them for N number of times in other file. </p>
<p>I'm not able to write the result into other file.</p>
<p>Please find the code below:</p>
<pre><code>import csv
for i in range(2):
with open('C:\\Python\\CopyPaste\\result2.csv', 'r') as fp:
data = fp.readlines()
fp.close()
with open('C:\\Python\\CopyPaste\\mydata.csv', 'w') as mycsvfile:
thedatawriter = csv.writer(mycsvfile)
for row in data:
thedatawriter.writerow(row)
</code></pre>
<p><img src="http://i.stack.imgur.com/ssulP.jpg" alt="enter image description here"></p>
| -1 | 2016-10-03T09:48:55Z | 39,829,207 | <p>Assuming that the format of the input and output CSV files is the same, just read the input file into a string and then write it to an output file <code>N</code> times:</p>
<pre><code>N = 3
with open('C:\\Python\\CopyPaste\\result2.csv', 'r') as infile,\
open('C:\\Python\\CopyPaste\\mydata.csv', 'w') as outfile:
data = fp.read() # read entire contents of input file into data
for i in range(N):
outfile.write(data)
</code></pre>
<p>The above answers the question literally, however, it will replicate the header row <code>N</code> times, probably not what you want. You can do this instead:</p>
<pre><code>import csv
N = 3
with open('C:\\Python\\CopyPaste\\result2.csv', 'r') as infile,\
open('C:\\Python\\CopyPaste\\mydata.csv', 'w') as outfile:
reader = csv.reader(infile)
writer = csv.writer(outfile)
writer.writerow(next(reader)) # reads header line and writes it to output file
data = [row for row in reader] # reads the rest of the input file
for i in range(N):
writer.writerows(data)
</code></pre>
<p>This code reads the first row from the input file as the header, and writes it <em>once</em> to the output CSV file. Then the remaining rows are read from the input file into the <code>data</code> list, and replicated <code>N</code> times in the output file.</p>
| 1 | 2016-10-03T10:12:50Z | [
"python",
"csv"
]
|
How can I correct my ORM statement to show all friends not associated with a user in Django? | 39,828,805 | <p>In my Django application, I've got two models, one Users and one Friendships. There is a Many to Many relationship between the two, as Users can have many Friends, and Friends can have many other Friends that are Users.</p>
<p>How can I return all friends (first and last name) whom are NOT friends with the user with the first_name='Daniel'?</p>
<p>Models.py:</p>
<pre><code>class Friendships(models.Model):
user = models.ForeignKey('Users', models.DO_NOTHING, related_name="usersfriend")
friend = models.ForeignKey('Users', models.DO_NOTHING, related_name ="friendsfriend")
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'friendships'
class Users(models.Model):
first_name = models.CharField(max_length=45, blank=True, null=True)
last_name = models.CharField(max_length=45, blank=True, null=True)
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'users'
</code></pre>
<p>So far, here's what I've tried in my controller (views.py) -- please note, I understand controllers should be skinny but still learning so apologies. What I tried in the snippet below (after many failed attempts at a cleaner method) was to try and first grab friends of daniels (populating them into a list and then removing any duplicate ids), and then filter them out by their id.</p>
<pre><code># show first and last name of all friends who daniel is not friends with:
def index(req):
friends_of_daniel = Friendships.objects.filter(user__first_name='Daniel')
daniels_friends = []
for friend_of_daniel in friends_of_daniel:
daniels_friends.append(friend_of_daniel.friend.id)
daniels_friends = list(set(daniels_friends))
not_daniels_friends = Friendships.objects.exclude(id__in=daniels_friends)
context = {
'not_daniels_friends':not_daniels_friends,
}
return render(req, "friendapp/index.html",context)
</code></pre>
<p>However, when I try the following in my views (templates) file, I still see individuals whom are friends of Daniels. Any idea what I'm doing wrong?</p>
<pre><code><ul>
{% for not_daniel_friend in not_daniels_friends %}
<li>{{ not_daniel_friend.user.first_name }} {{ not_daniel_friend.user.last_name }}</li>
{% endfor %}
</ul>
</code></pre>
| 0 | 2016-10-03T09:49:41Z | 39,830,356 | <p>Try this. Instead of excluding from <code>Friendships</code>, you should exclude the results from the <code>Users</code> model.</p>
<p>Something like this :</p>
<pre><code> def index(req):
friends_of_daniel = Friendships.objects.filter(user__first_name='Daniel')
daniels_friends = []
for friend_of_daniel in friends_of_daniel:
daniels_friends.append(friend_of_daniel.friend.id)
daniels_friends = list(set(daniels_friends))
not_daniels_friends = Users.objects.exclude(id__in=daniels_friends)
context = {
'not_daniels_friends':not_daniels_friends,
}
return render(req, "friendapp/index.html",context)
</code></pre>
<p>Thanks.</p>
| 1 | 2016-10-03T11:18:40Z | [
"python",
"django",
"orm",
"many-to-many"
]
|
How can I correct my ORM statement to show all friends not associated with a user in Django? | 39,828,805 | <p>In my Django application, I've got two models, one Users and one Friendships. There is a Many to Many relationship between the two, as Users can have many Friends, and Friends can have many other Friends that are Users.</p>
<p>How can I return all friends (first and last name) whom are NOT friends with the user with the first_name='Daniel'?</p>
<p>Models.py:</p>
<pre><code>class Friendships(models.Model):
user = models.ForeignKey('Users', models.DO_NOTHING, related_name="usersfriend")
friend = models.ForeignKey('Users', models.DO_NOTHING, related_name ="friendsfriend")
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'friendships'
class Users(models.Model):
first_name = models.CharField(max_length=45, blank=True, null=True)
last_name = models.CharField(max_length=45, blank=True, null=True)
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'users'
</code></pre>
<p>So far, here's what I've tried in my controller (views.py) -- please note, I understand controllers should be skinny but still learning so apologies. What I tried in the snippet below (after many failed attempts at a cleaner method) was to try and first grab friends of daniels (populating them into a list and then removing any duplicate ids), and then filter them out by their id.</p>
<pre><code># show first and last name of all friends who daniel is not friends with:
def index(req):
friends_of_daniel = Friendships.objects.filter(user__first_name='Daniel')
daniels_friends = []
for friend_of_daniel in friends_of_daniel:
daniels_friends.append(friend_of_daniel.friend.id)
daniels_friends = list(set(daniels_friends))
not_daniels_friends = Friendships.objects.exclude(id__in=daniels_friends)
context = {
'not_daniels_friends':not_daniels_friends,
}
return render(req, "friendapp/index.html",context)
</code></pre>
<p>However, when I try the following in my views (templates) file, I still see individuals whom are friends of Daniels. Any idea what I'm doing wrong?</p>
<pre><code><ul>
{% for not_daniel_friend in not_daniels_friends %}
<li>{{ not_daniel_friend.user.first_name }} {{ not_daniel_friend.user.last_name }}</li>
{% endfor %}
</ul>
</code></pre>
| 0 | 2016-10-03T09:49:41Z | 39,830,397 | <p>I guess something like this will do. Then just take the list <code>users</code> and get the first and last names of those users.</p>
<pre><code>daniels = Users.objects.filter(first_name="Daniel") # There may be more than one Daniel
users = Friendships.objects.exclude(friend__in=daniels)
</code></pre>
<p>Note that since <code>Friendships.friend</code> is a <code>ForeignKey</code> to <code>Users</code>, you can pass <code>Users</code> instances (i.e. the <code>daniels</code> queryset) to <code>friend__in</code> to exclude those users.</p>
| 1 | 2016-10-03T11:20:44Z | [
"python",
"django",
"orm",
"many-to-many"
]
|
How can I correct my ORM statement to show all friends not associated with a user in Django? | 39,828,805 | <p>In my Django application, I've got two models, one Users and one Friendships. There is a Many to Many relationship between the two, as Users can have many Friends, and Friends can have many other Friends that are Users.</p>
<p>How can I return all friends (first and last name) whom are NOT friends with the user with the first_name='Daniel'?</p>
<p>Models.py:</p>
<pre><code>class Friendships(models.Model):
user = models.ForeignKey('Users', models.DO_NOTHING, related_name="usersfriend")
friend = models.ForeignKey('Users', models.DO_NOTHING, related_name ="friendsfriend")
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'friendships'
class Users(models.Model):
first_name = models.CharField(max_length=45, blank=True, null=True)
last_name = models.CharField(max_length=45, blank=True, null=True)
created_at = models.DateTimeField(blank=True, null=True)
updated_at = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'users'
</code></pre>
<p>So far, here's what I've tried in my controller (views.py) -- please note, I understand controllers should be skinny but still learning so apologies. What I tried in the snippet below (after many failed attempts at a cleaner method) was to try and first grab friends of daniels (populating them into a list and then removing any duplicate ids), and then filter them out by their id.</p>
<pre><code># show first and last name of all friends who daniel is not friends with:
def index(req):
friends_of_daniel = Friendships.objects.filter(user__first_name='Daniel')
daniels_friends = []
for friend_of_daniel in friends_of_daniel:
daniels_friends.append(friend_of_daniel.friend.id)
daniels_friends = list(set(daniels_friends))
not_daniels_friends = Friendships.objects.exclude(id__in=daniels_friends)
context = {
'not_daniels_friends':not_daniels_friends,
}
return render(req, "friendapp/index.html",context)
</code></pre>
<p>However, when I try the following in my views (templates) file, I still see individuals whom are friends of Daniels. Any idea what I'm doing wrong?</p>
<pre><code><ul>
{% for not_daniel_friend in not_daniels_friends %}
<li>{{ not_daniel_friend.user.first_name }} {{ not_daniel_friend.user.last_name }}</li>
{% endfor %}
</ul>
</code></pre>
| 0 | 2016-10-03T09:49:41Z | 39,837,106 | <p>Firstly as a general comment: a cleaner way of populating a list of ids is using the <a href="https://docs.djangoproject.com/es/1.10/ref/models/querysets/#django.db.models.query.QuerySet.values_list" rel="nofollow">.value_list() method from django</a> (part of the .values() method in previous versions of Django). It has a "flat" flag that creates the list you want.</p>
<p>So, instead of:</p>
<pre><code>friends_of_daniel = Friendships.objects.filter(user__first_name='Daniel')
daniels_friends = []
for friend_of_daniel in friends_of_daniel:
daniels_friends.append(friend_of_daniel.friend.id)
daniels_friends = list(set(daniels_friends))
</code></pre>
<p>You could do, in one line:</p>
<pre><code>daniels_friends = Friendships.objects \
.filter(user__first_name='Daniel') \
.distinct('friend') \
.values_list('friend', flat=True)
</code></pre>
<p>distinct does the same job as your list()/set() round trip (it makes sure that your list has no repeated elements), and values_list with flat=True can be pointed at any field in the related "user" table: .values_list('friend__id', flat=True) or .values_list('friend__first_name', flat=True) to get a list of the first names of Daniel's friends.</p>
<p>Coming back to your general question, you can do the whole query directly in one line using your related_names. As I am not really sure what you want (a user instance, a Friendship instance, or just a list of first and last names), I will give you several options:</p>
<ul>
<li><p>If you want a Friendship instance (what you are trying in your sample
code):</p>
<pre><code>friendships_not_friends_with_daniel = Friendships.objects\
.exclude(friend__first_name="Daniel")
</code></pre>
<p>This is equivalent to what @Rafael proposes in his answer:</p>
<blockquote>
<p>daniels = Users.objects.filter(first_name="Daniel") # There may be
more than one Daniel users =
Friendships.objects.exclude(friend__in=daniels)</p>
</blockquote>
<p>Here I am embedding his first query in the exclude by referencing the
field in the related table with a double underscore (which is a very
powerful standard in Django).</p></li>
<li><p>If you want an User instance:</p>
<pre><code>users_with_no_friendship_with_daniel = Users.objects\
.exclude(usersfriend__friend__first_name="Daniel")
</code></pre>
<p>Here you are using the related name of your model to access from the
users table to the friendships table, and then check if the friend of this user is called Daniel. This way of querying is a bit complex to understand but as soon as you get used to it becomes really powerful because it is very similar to the spoken language: you want all users, but excluding the ones that have a friendship, whose friend's first name is Daniel. Depending on how many friends an user hat or how many users are called Daniel, you might to add some distinct() methods or split the query in two.</p>
<p>As a piece of advice, maybe you could improve the related names in your model, because they are what you would use if you have a user instance and want to get the related friendships: user_instance.friendships instead of user_instance.usersfriend and user_instance.friendsfriendships instead of user_instance.friendsfriend.... I don't know, it is always difficult for me to choose good related names...</p></li>
<li><p>If you want a list of tuples of users first and last names:</p>
<pre><code>names_of_users_with_no_friendship_with_daniel = Users.objects\
.exclude(usersfriend__friend__first_name="Daniel")\
.values_list('first_name', 'last_name')
</code></pre></li>
</ul>
<p>I am sorry if something is not clear; please ask and I will try to explain better. (I am quite new to Stack Overflow.)</p>
| 1 | 2016-10-03T17:25:13Z | [
"python",
"django",
"orm",
"many-to-many"
]
|
Python: three levels of nesting with list comprehension | 39,828,865 | <p>I am trying to achieve a nesting of three levels because I need to group some data.</p>
<p>I have a list of matches, and each of these matches belong to particular rounds. I want to regroup these matches into separate nested lists for each round, except I don't want to store the whole match in these lists, but only the scores.</p>
<p>To clarify, we have this:</p>
<pre><code>all_matches = [final_match, semifinal1_match, semifinal2_match]
</code></pre>
<p>These matches have properties like <code>round</code>, <code>home_score</code> and <code>away_score</code>. What I am trying to do is group them in the following fashion:</p>
<pre><code>[
[[1, 3], [2, 0]], # semifinal
[[1, 0]] # final round
]
</code></pre>
<p>I managed to group matches into nested lists by the round they belong to:</p>
<pre><code>[list(matches) for round, matches in groupby(all_matches, key=attrgetter('round'))]
</code></pre>
<p>And this is the result:</p>
<pre><code>[[semifinal1_match, semifinal2_match], [final_match]]
</code></pre>
<p>This is not quite what I am after. I am having trouble trying to figure out how the list comprehension syntax would be to extract only the scores (in a list) for each match into its respective round list, instead of having the whole match in there.</p>
| 0 | 2016-10-03T09:54:10Z | 39,828,977 | <p>You can expand each match, extracting the required attributes from your current result, using a nested <em>list comprehension</em>: </p>
<pre><code>[[[m.home_score, m.away_score] for m in matches]
for _, matches in groupby(all_matches, key=attrgetter('round'))]
</code></pre>
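<p>With a minimal stand-in for the match objects (a namedtuple here; any object with the same attributes works), the whole pattern can be run end to end. Note that the input must already be ordered by round, since <code>groupby</code> only groups consecutive items:</p>

```python
from collections import namedtuple
from itertools import groupby
from operator import attrgetter

Match = namedtuple("Match", "round home_score away_score")

# Already sorted by round; groupby only merges consecutive items
all_matches = [
    Match("semifinal", 1, 3),
    Match("semifinal", 2, 0),
    Match("final", 1, 0),
]

grouped = [
    [[m.home_score, m.away_score] for m in matches]
    for _, matches in groupby(all_matches, key=attrgetter("round"))
]
print(grouped)  # [[[1, 3], [2, 0]], [[1, 0]]]
```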
| 1 | 2016-10-03T09:59:30Z | [
"python",
"python-2.7"
]
|
apache failed to start with no error logs | 39,828,871 | <p>this day i got an issue</p>
<p>my apache has failed to restart my django app, i dont know why.
and the worst is , it doesnt out with any error log , just blank!</p>
<p>this problem make me confuse ,need help.</p>
<blockquote>
<p>alif@alif-VirtualBox:/var/www/mywebsite/website$ service apache2 restart<br>
* Restarting web server apache2 [fail]</p>
</blockquote>
<p>/etc/apache2/sites-available/000-default.conf</p>
<pre><code><VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin admin@musicplayer.vhost
ServerName www.musicplayer.vhost
ServerAlias musicplayer.vhost
DocumentRoot /var/www/mywebsite
Alias /static /var/www/mywebsite/website/static
<Directory /var/www/mywebsite/website/static>
Require all granted
</Directory>
<Directory /var/www/mywebsite/website/website>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess mywebsite python-path=/var/www/mywebsite:/var/www/mywebsite/env/lib/python2.7/site-packages
WSGIProcessGroup mywebsite
WSGIScriptAlias / /var/www/mywebsite/website/website/wsgi.py
# Available loglevels: trace8, ..., trace1, debug, info,notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</code></pre>
<p></p>
<p>wsgi.py on mywebsite</p>
<pre><code>import os
from django.core.wsgi import get_wsgi_application
path = '/var/www/mywebsite/website'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "website.settings")
application = get_wsgi_application()
</code></pre>
| 0 | 2016-10-03T09:54:38Z | 39,833,672 | <p>Check to make sure nothing is listening on port 80. I'm not sure what the <code>website.settings</code> contains but it is possible that either this apache instance is trying to use the same port as another, or something else is running on port 80. (this is the default port). You can check if something might be using the port with <code>netstat -tulpn | grep :80</code></p>
<p>You can alter the port of this server to avoid the collision, or stop the other process.</p>
| 0 | 2016-10-03T14:10:39Z | [
"python",
"django",
"apache",
"wsgi",
"error-log"
]
|
Surprising behaviour of enumerate function | 39,828,904 | <p>I wrote some Python code using the enumerate function. </p>
<pre><code>A = [2,3,5,7]
for i, x in enumerate(A):
# calculate product with each element to the right
for j, y in enumerate(A, start=i+1):
print(x*y)
</code></pre>
<p>I expected it to calculate 6 products: 2*3, 2*5, 2*7, 3*5, 3*7, 5*7</p>
<p>Instead, it calculated all possible 16 products. What's going on?</p>
| 0 | 2016-10-03T09:56:25Z | 39,829,021 | <p>The <code>start</code> parameter of <code>enumerate</code> solely influences the first value of the yielded tuple (i.e. <code>i</code> and <code>j</code>); it does not influence at which index the enumeration starts. As <a href="https://docs.python.org/3/library/functions.html#enumerate">the manual</a> puts it, <code>enumerate</code> is equivalent to this:</p>
<pre><code>def enumerate(sequence, start=0):
n = start
for elem in sequence:
yield n, elem
n += 1
</code></pre>
<p>What you want is this:</p>
<pre><code>for i, x in enumerate(A):
for y in A[i + 1:]:
print(x * y)
</code></pre>
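<p>For the list in the question, the corrected loop produces exactly the six expected products:</p>

```python
A = [2, 3, 5, 7]
products = [x * y for i, x in enumerate(A) for y in A[i + 1:]]
print(products)  # [6, 10, 14, 15, 21, 35]
```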
| 7 | 2016-10-03T10:01:42Z | [
"python"
]
|
Surprising behaviour of enumerate function | 39,828,904 | <p>I wrote some Python code using the enumerate function. </p>
<pre><code>A = [2,3,5,7]
for i, x in enumerate(A):
# calculate product with each element to the right
for j, y in enumerate(A, start=i+1):
print(x*y)
</code></pre>
<p>I expected it to calculate 6 products: 2*3, 2*5, 2*7, 3*5, 3*7, 5*7</p>
<p>Instead, it calculated all possible 16 products. What's going on?</p>
| 0 | 2016-10-03T09:56:25Z | 39,829,065 | <p>The question here is firstly what enumerate did, and secondly why you're using it. The base function of enumerate is to convert an iterable of the form <code>(a,b,c)</code> to an iterable of the form <code>((start,a), (start+1,b), (start+2,c))</code>. It adds a new column which is typically used as an index; in your code, this is <code>i</code> and <code>j</code>. It doesn't change the entries contained in the sequence. </p>
<p>I believe the operation you were intending is a slice, extracting only part of the list:</p>
<pre><code>for i, x in enumerate(A):
for y in A[i+1:]:
print(x*y)
</code></pre>
<p>If it is important not to copy the list (it rarely is), you can replace <code>A[i+1:]</code> with <code>itertools.islice(A, i+1, len(A))</code>. </p>
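<p>A quick sketch of the <code>islice</code> variant; it yields the same six products without building a new list for each tail:</p>

```python
from itertools import islice

A = [2, 3, 5, 7]
products = []
for i, x in enumerate(A):
    for y in islice(A, i + 1, len(A)):  # iterates over A[i+1:] without copying it
        products.append(x * y)
print(products)  # [6, 10, 14, 15, 21, 35]
```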
<p>A side note is that the <code>start</code> argument may be useful in the outer loop in this code. We're only using <code>i+1</code>, not <code>i</code> so we may as well use that value as our index:</p>
<pre><code>for nextindex, x in enumerate(A, 1):
for y in A[nextindex:]:
print(x*y)
</code></pre>
| 2 | 2016-10-03T10:04:51Z | [
"python"
]
|
How to find QButton created with a loop? | 39,829,223 | <p>in maya one creates a button with:</p>
<pre><code> cmds.button('buttonname', label='click me')
</code></pre>
<p>where buttonname is the name of
the button object. At a later stage i can edit the button simply by calling:</p>
<pre><code> cmds.button('buttonname', e=1, label='click me again')
</code></pre>
<p>Now the problem:
i created a bunch of buttons in qt using a loop:</p>
<pre><code> for s in Collection:
file = os.path.splitext(s)[0]
# Main widget
widgetItem = QtWidgets.QWidget()
layoutItem = QtWidgets.QVBoxLayout()
widgetItem.setLayout(layoutItem)
# Button
button = QtGui.QPushButton()
button.setObjectName(file)
layoutItem.addWidget(button)
</code></pre>
<p>How can i call/edit one of them using the button name?</p>
<p>Thanks in advance</p>
| 1 | 2016-10-03T10:13:42Z | 39,829,404 | <p>Assuming you already have access to their parent widget, you can find them with the <code>findChild</code> method.</p>
<p>In C++ syntax, it would be something like this:</p>
<pre><code>QPushButton *button = parentWidget->findChild<QPushButton *>("button1");
</code></pre>
<p>where <code>button1</code> is the name of that button.</p>
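<p>In PySide the same lookup would read roughly as follows (a sketch; <code>parentWidget</code> stands for whichever widget owns the buttons created in the loop):</p>

```python
# PySide sketch: findChild(type, name) looks a child up by its objectName
button = parentWidget.findChild(QtGui.QPushButton, "buttonname")
if button is not None:            # findChild returns None when nothing matches
    button.setText("click me again")
```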
| 4 | 2016-10-03T10:23:41Z | [
"python",
"qt",
"button",
"pyside",
"maya"
]
|
How do I perform error handling with two files? | 39,829,233 | <p>So , I am having two files , so to checks its validity I am performing <code>try</code> and <code>except</code> two times . But I don't thinks this is a good method, can you suggest a better way?</p>
<p>Here is my code:</p>
<pre><code>def form_density_dictionary(self,word_file,fp_exclude):
self.freq_dictionary={}
try:
with open(fp_exclude,'r')as fp2:
words_excluded=fp2.read().split() #words to be excluded stored in a list
print("**Read file successfully :" + fp_exclude + "**")
words_excluded=[words.lower() for words in words_excluded] # converted to lowercase
except IOError:
print("**Could not read file:", fp_exclude, " :Please check file name**")
sys.exit()
try:
with open(word_file,'r') as file:
print("**Read file successfully :" + word_file + "**")
words_list=file.read()
if not words_list:
print("**No data in file:",word_file +":**")
sys.exit()
words_list=words_list.split()
words_list=[words.lower() for words in words_list] # lowercasing entire list
unique_words=list((set(words_list)-set(words_excluded)))
self.freq_dictionary= {word:("%6.2f"%(float((words_list.count(word))/len(words_list))*100)) for word in unique_words}
#print((len(self.freq_dictionary)))
except IOError:
print("**Could not read file:", word_file, " :Please check file name**")
sys.exit()
</code></pre>
<p>Any other suggestion is also welcomed to make it more pythonic.</p>
| 0 | 2016-10-03T10:14:02Z | 39,829,443 | <p>Exceptions thrown that involve a file system path have a <code>filename</code> attribute that can be used instead of the explicit variables <code>word_file</code> and <code>fp_exclude</code> that you use. </p>
<p>This means you can wrap these IO operations in the same <code>try-except</code> and use the <code>exception_instance.filename</code> which will indicate in which file the operation couldn't be performed.</p>
<p>For example:</p>
<pre><code>try:
with open('unknown_file1.py') as f1, open('known_file.py') as f2:
f1.read()
f2.read()
except IOError as e:
print("No such file: {0.filename}".format(e))
</code></pre>
<p>Eventually prints out:</p>
<pre><code>No such file: unknown_file1.py
</code></pre>
<p>While the opposite:</p>
<pre><code>try:
with open('known_file.py') as f1, open('unknown_file2.py') as f2:
f1.read()
f2.read()
except IOError as e:
print("No such file: {0.filename}".format(e))
</code></pre>
<p>Prints out:</p>
<pre><code>No such file: unknown_file2.py
</code></pre>
| 1 | 2016-10-03T10:26:06Z | [
"python",
"python-3.x",
"error-handling"
]
|
How do I perform error handling with two files? | 39,829,233 | <p>So , I am having two files , so to checks its validity I am performing <code>try</code> and <code>except</code> two times . But I don't thinks this is a good method, can you suggest a better way?</p>
<p>Here is my code:</p>
<pre><code>def form_density_dictionary(self,word_file,fp_exclude):
self.freq_dictionary={}
try:
with open(fp_exclude,'r')as fp2:
words_excluded=fp2.read().split() #words to be excluded stored in a list
print("**Read file successfully :" + fp_exclude + "**")
words_excluded=[words.lower() for words in words_excluded] # converted to lowercase
except IOError:
print("**Could not read file:", fp_exclude, " :Please check file name**")
sys.exit()
try:
with open(word_file,'r') as file:
print("**Read file successfully :" + word_file + "**")
words_list=file.read()
if not words_list:
print("**No data in file:",word_file +":**")
sys.exit()
words_list=words_list.split()
words_list=[words.lower() for words in words_list] # lowercasing entire list
unique_words=list((set(words_list)-set(words_excluded)))
self.freq_dictionary= {word:("%6.2f"%(float((words_list.count(word))/len(words_list))*100)) for word in unique_words}
#print((len(self.freq_dictionary)))
except IOError:
print("**Could not read file:", word_file, " :Please check file name**")
sys.exit()
</code></pre>
<p>Any other suggestion is also welcomed to make it more pythonic.</p>
 | 0 | 2016-10-03T10:14:02Z | 39,829,548 | <p>To be more 'pythonic' you could use something called Counter, from the collections library.</p>
<pre><code>import sys
from collections import Counter

def form_density_dictionary(self, word_file, fp_exclude):
    success_msg = '**Read file successfully: {filename}**'
    fail_msg = '**Could not read file: {filename}: Please check file name**'
    empty_file_msg = '**No data in file: {filename}**'
    exclude_read = self._open_file(fp_exclude, success_msg, fail_msg, '')
    excluded = Counter([word.lower() for word in exclude_read.split()])
    word_file_read = self._open_file(word_file, success_msg, fail_msg, empty_file_msg)
    words = Counter([word.lower() for word in word_file_read.split()])
    unique_words = words - excluded
    total = sum(words.values())
    self.freq_dictionary = {word: '{:6.2f}'.format(count / total * 100)
                            for word, count in unique_words.items()}
</code></pre>
<p>Also, it would be better to pull the file opening out into its own method, like:</p>
<pre><code>def _open_file(self, filename, success_msg, fail_msg, empty_file_msg):
    try:
        with open(filename, 'r') as file:
            if success_msg:
                print(success_msg.format(filename=filename))
            data = file.read()
            if empty_file_msg and not data:
                print(empty_file_msg.format(filename=filename))
            return data
    except IOError:
        if fail_msg:
            print(fail_msg.format(filename=filename))
        sys.exit()
</code></pre>
| 1 | 2016-10-03T10:32:28Z | [
"python",
"python-3.x",
"error-handling"
]
|
How do I perform error handling with two files? | 39,829,233 | <p>So, I have two files, and to check their validity I am performing <code>try</code> and <code>except</code> two times. But I don't think this is a good method; can you suggest a better way?</p>
<p>Here is my code:</p>
<pre><code>def form_density_dictionary(self,word_file,fp_exclude):
self.freq_dictionary={}
try:
with open(fp_exclude,'r')as fp2:
words_excluded=fp2.read().split() #words to be excluded stored in a list
print("**Read file successfully :" + fp_exclude + "**")
words_excluded=[words.lower() for words in words_excluded] # converted to lowercase
except IOError:
print("**Could not read file:", fp_exclude, " :Please check file name**")
sys.exit()
try:
with open(word_file,'r') as file:
print("**Read file successfully :" + word_file + "**")
words_list=file.read()
if not words_list:
print("**No data in file:",word_file +":**")
sys.exit()
words_list=words_list.split()
words_list=[words.lower() for words in words_list] # lowercasing entire list
unique_words=list((set(words_list)-set(words_excluded)))
self.freq_dictionary= {word:("%6.2f"%(float((words_list.count(word))/len(words_list))*100)) for word in unique_words}
#print((len(self.freq_dictionary)))
except IOError:
print("**Could not read file:", word_file, " :Please check file name**")
sys.exit()
</code></pre>
<p>Any other suggestion is also welcomed to make it more pythonic.</p>
| 0 | 2016-10-03T10:14:02Z | 39,830,065 | <p>The first thing that jumps out is the lack of consistency and readability: in some lines you indent with 4 spaces, on others you only use two; in some places you put a space after a comma, in others you don't, in most places you don't have spaces around the assignment operator (<code>=</code>)...</p>
<p>Be consistent and make your code readable. The most commonly used formatting is to use four spaces for indenting and to always have a space after a comma but even more important than that is to be consistent, meaning that whatever you choose, stick with it throughout your code. It makes it much easier to read for everyone, including yourself.</p>
<p>Here are a few other things I think you could improve:</p>
<ul>
<li><p>Have a single exception handling block instead of two.</p></li>
<li><p>You can also open both files in a single line.</p></li>
<li><p>Even better, combine both previous suggestions and have a separate method to read data from the files, thus eliminating code repetition and making the main method easier to read.</p></li>
<li><p>For string formatting it's preferred to use <code>.format()</code> instead of <code>%</code>. Check this out: <a href="https://pyformat.info/" rel="nofollow">https://pyformat.info/</a></p></li>
<li><p>Overall try to avoid repetition in your code. If there's something you're doing more than once, extract it to a separate function or method and use that instead.</p></li>
</ul>
<p>Here's your code quickly modified to how I'd probably write it, and taking these things into account:</p>
<pre><code>import sys
class AtifImam:
def __init__(self):
self.freq_dictionary = {}
def form_density_dictionary(self, word_file, exclude_file):
words_excluded = self.read_words_list(exclude_file)
words_excluded = self.lowercase(words_excluded)
words_list = self.read_words_list(word_file)
if len(words_list) == 0:
print("** No data in file: {} **".format(word_file))
sys.exit()
words_list = self.lowercase(words_list)
unique_words = list((set(words_list) - set(words_excluded)))
self.freq_dictionary = {
word: ("{:6.2f}".format(
float((words_list.count(word)) / len(words_list)) * 100))
for word in unique_words
}
@staticmethod
def read_words_list(file_name):
try:
with open(file_name, 'r') as file:
data = file.read()
print("** Read file successfully: {} **".format(file_name))
return data.split()
except IOError as e:
print("** Could not read file: {0.filename} **".format(e))
sys.exit()
@staticmethod
def lowercase(word_list):
return [word.lower() for word in word_list]
</code></pre>
| 1 | 2016-10-03T11:02:19Z | [
"python",
"python-3.x",
"error-handling"
]
|
Messenger account linking authentication flow in Django | 39,829,354 | <p>How can I complete the authentication flow of the account linking in Django?</p>
<p>I send the login template to the user. When the user clicks on it, she is redirected to <a href="https://example.ngork.io/authenticate" rel="nofollow">https://example.ngork.io/authenticate</a> with the parameters account_linking_token and redirect_uri. </p>
<p>Now, when I perform the redirection I have this error:</p>
<p>Page not found (404)</p>
<p>Request URL: <a href="http://example.ngrok.io/[redirect_uri]" rel="nofollow">http://example.ngrok.io/[redirect_uri]</a></p>
<ol>
<li>^admin/</li>
<li>^$ [name='index']</li>
<li>^messengerhook [name='messengerhook']</li>
<li>^authenticate [name='authenticate']</li>
</ol>
<p>The current URL didn't match any of these.</p>
<p>EDIT:</p>
<p>The URL of the server is generated by ngrok, since I run it locally:</p>
<pre><code>https://a0505537.ngrok.io
</code></pre>
<p>The redirect uri is the one provided by facebook for linking account flow:</p>
<pre><code>https://www.facebook.com/messenger_platform/account_linking
?account_linking_token=ACCOUNT_LINKING_TOKEN
&authorization_code=AUTHORIZATION_CODE
</code></pre>
<p>About the views: the URLs shown above in the question are the ones written in my urls.py file.</p>
<p>Basically, the authenticate view is a login button and when the user is logged I run <code>window.location.replace(["redirect_uri"])</code></p>
<p>RESOLVED:</p>
<p>The URL wasn't decoded, so I added:</p>
<pre><code>var url = decodeURIComponent(params["redirect_uri"]);
</code></pre>
| 0 | 2016-10-03T10:20:12Z | 39,854,245 | <p>I resolved the issue. Details are in the question after the tag RESOLVED</p>
| 0 | 2016-10-04T13:57:16Z | [
"python",
"django",
"facebook",
"bots",
"facebook-messenger"
]
|
SimpleBool, Python Package | 39,829,359 | <p>I have been trying to use SimpleBool, which is written in Python. I downloaded the Python scripts to use SimpleBool. When I tried to execute the Python file BoolMutation.py, I got the following error:</p>
<pre><code>%run "/home/JPJ/Priya_Ph.D/simple_bool/simplebool/SimpleBool-master /BoolMutation.py"
</code></pre>
<hr>
<pre><code>IOError Traceback (most recent call last)
/home/JPJ/Priya_Ph.D/simple_bool/simplebool/SimpleBool-master /BoolMutation.py in <module>()
383 para=ParaParser(sys.argv[1])
384 except:
--> 385 para=ParaParser('mutation.in')
386 simu_mutation(para)
/home/JPJ/Priya_Ph.D/simple_bool/simplebool/SimpleBool-master/BoolMutation.py in ParaParser(ParaFile)
254 } # define parameters
255
--> 256 for each_line in open(ParaFile).readlines():
257 para_name = each_line.split('=')[0].strip()
258 para_value = each_line.split('=')[1].strip()
IOError: [Errno 2] No such file or directory: 'mutation.in'
</code></pre>
<p>I have pasted a portion of the script below:</p>
<pre><code>for each_line in open(ParaFile).readlines():
para_name = each_line.split('=')[0].strip()
para_value = each_line.split('=')[1].strip()
if para_name in INPUT.keys():
INPUT[para_name] = para_value
else:
print "Error: Unknown Parameters: %s" % para_name
# formalize parameters
</code></pre>
<p>Should I formalize the parameters here? I am learning Python, so please help me understand the problem here.
Thank you
Regards
Priya</p>
 | 0 | 2016-10-03T10:20:35Z | 39,858,989 | <p>In Python in general and in Canopy in particular, you cannot assume that your current directory is the same as the directory where the script that you are running is located. But without looking at this package, it seems likely that it does make such an assumption. If so, you can make your current directory match the script's directory with the "Keep Directory Synced to Editor" command, described in the User Guide at <a href="http://docs.enthought.com/canopy/quick-start/code_editor.html#change-directory" rel="nofollow">http://docs.enthought.com/canopy/quick-start/code_editor.html#change-directory</a></p>
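<p>If you prefer a fix in code (independent of Canopy), build the path to mutation.in from the script's own location instead of relying on the current working directory. A minimal sketch — <code>locate_beside</code> is a helper name invented for this example:</p>

```python
import os

def locate_beside(script_path, data_name):
    """Absolute path to data_name in the same folder as script_path,
    found no matter what the current working directory happens to be."""
    return os.path.join(os.path.dirname(os.path.abspath(script_path)), data_name)

# Inside BoolMutation.py one would then call, e.g.:
# para = ParaParser(locate_beside(__file__, 'mutation.in'))
```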
| 0 | 2016-10-04T18:06:23Z | [
"python",
"python-2.7",
"boolean",
"enthought",
"canopy"
]
|
Is it safe to remove google's "unused_argv" from forked python projects? | 39,829,431 | <p>On projects that Google shares on the web, they use:</p>
<p>def main(unused_argv):</p>
<p><a href="https://github.com/tensorflow/models/blob/master/im2txt/im2txt/train.py" rel="nofollow">(see example)</a>
, where <code>unused_argv</code> is never used. I couldn't find out why they do it, and it annoys me to have warnings in my code.</p>
<p>Is it safe to remove this parameter?</p>
<p>Thanks</p>
 | 0 | 2016-10-03T10:25:22Z | 39,830,034 | <p>In that case it's presumably because the tf.app.run function expects the main function to have a positional argument.</p>
<p>This is the code that will call your main function: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/platform/app.py" rel="nofollow">Source</a></p>
<pre><code>from tensorflow.python.platform import flags
def run(main=None):
f = flags.FLAGS
flags_passthrough = f._parse_flags()
main = main or sys.modules['__main__'].main
sys.exit(main(sys.argv[:1] + flags_passthrough))
</code></pre>
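<p>A stripped-down illustration of why the parameter cannot simply be deleted — this is only a sketch of the calling convention, not the real <code>tf.app.run</code>:</p>

```python
import sys

def run(main=None):
    # Simplified mimic of tf.app.run(): the framework always calls your
    # main with an argv list, so main must accept one positional argument.
    main = main or sys.modules['__main__'].main
    return main(sys.argv[:1])

def main(unused_argv):  # drop the parameter and run() raises a TypeError
    return 0
```

<p>So the parameter is required by the caller; renaming it to <code>_</code> is a common way to silence the unused-variable warning without breaking <code>tf.app.run</code>.</p>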
| 1 | 2016-10-03T11:00:46Z | [
"python"
]
|
cryptography AssertionError: sorry, but this version only supports 100 named groups | 39,829,473 | <p>I'm installing several Python packages via <code>pip install</code> on Travis:</p>
<pre><code>language: python
python:
- '2.7'
install:
- pip install -r requirements/env.txt
</code></pre>
<p>Everything worked fine, but today I started getting the following error:</p>
<pre><code> Running setup.py install for cryptography
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-hKwMR3/cryptography/setup.py", line 334, in <module>
**keywords_with_side_effects(sys.argv)
File "/opt/python/2.7.9/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/opt/python/2.7.9/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 325, in finalize_options
ep.load()(self, ep.name, value)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 181, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 48, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 24, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 81, in <module>
extra_link_args=extra_link_args(compiler_type()),
File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 61, in build_ffi_for_binding
extra_link_args=extra_link_args,
File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 70, in build_ffi
ffi.cdef(cdef_source)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 105, in cdef
self._cdef(csource, override=override, packed=packed)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 119, in _cdef
self._parser.parse(csource, override=override, **options)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 299, in parse
self._internal_parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 304, in _internal_parse
ast, macros, csource = self._parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 260, in _parse
ast = _get_parser().parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 40, in _get_parser
_parser_cache = pycparser.CParser()
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_parser.py", line 87, in __init__
outputdir=taboutputdir)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_lexer.py", line 66, in build
self.lexer = lex.lex(object=self, **kwargs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 911, in lex
lexobj.readtab(lextab, ldict)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 233, in readtab
titem.append((re.compile(pat, lextab._lexreflags | re.VERBOSE), _names_to_funcs(func_name, fdict)))
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 194, in compile
return _compile(pattern, flags)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 249, in _compile
p = sre_compile.compile(pattern, flags)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/sre_compile.py", line 583, in compile
"sorry, but this version only supports 100 named groups"
AssertionError: sorry, but this version only supports 100 named groups
</code></pre>
<p>Solutions?</p>
| 13 | 2016-10-03T10:28:14Z | 39,830,224 | <p>There is a bug with PyCParser - See <a href="https://github.com/pyca/cryptography/issues/3187">https://github.com/pyca/cryptography/issues/3187</a></p>
<p>The workaround is to use another version or to not use the binary distribution.</p>
<pre><code>pip install git+https://github.com/eliben/pycparser@release_v2.14
</code></pre>
<p>or</p>
<pre><code>pip install pycparser --no-binary pycparser
</code></pre>
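<p>If you would rather keep the fix in the Travis build itself, the same workaround can be expressed in the requirements file the build already installs from — a sketch, assuming your pip version honours <code>--no-binary</code> lines in requirements files:</p>

```text
# requirements/env.txt -- build pycparser from source until the broken
# wheel is fixed upstream (pyca/cryptography issue 3187)
--no-binary pycparser
pycparser
```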
| 22 | 2016-10-03T11:11:24Z | [
"python",
"python-2.7",
"travis-ci",
"python-cryptography"
]
|
Iterate over two lists, execute function and return values | 39,829,490 | <p>I am trying to iterate over two lists of the same length, and for the pair of entries per index, execute a function. The function aims to cluster the entries
according to some requirement X on the value the function returns.</p>
<p>The lists in questions are:</p>
<pre><code>e_list = [-0.619489,-0.465505, 0.124281, -0.498212, -0.51]
p_list = [-1.7836,-1.14238, 1.73884, 1.94904, 1.84]
</code></pre>
<p>and the function takes 4 entries, every combination of l1 and l2.
The function is defined as</p>
<pre><code>def deltaR(e1, p1, e2, p2):
de = e1 - e2
dp = p1 - p2
return de*de + dp*dp
</code></pre>
<p>I have so far been able to loop over the lists simultaneously as:</p>
<pre><code>for index, (eta, phi) in enumerate(zip(e_list, p_list)):
for index2, (eta2, phi2) in enumerate(zip(e_list, p_list)):
if index == index2: continue # to avoid same indices
if deltaR(eta, phi, eta2, phi2) < X:
print (index, index2) , deltaR(eta, phi, eta2, phi2)
</code></pre>
<p>This loop executes the function on every combination, except those with the same index, i.e. (0, 0) or (1, 1), etc.</p>
<p>The output of the code returns:</p>
<pre><code>(0, 1) 0.659449892453
(1, 0) 0.659449892453
(2, 3) 0.657024790285
(2, 4) 0.642297230697
(3, 2) 0.657024790285
(3, 4) 0.109675332432
(4, 2) 0.642297230697
(4, 3) 0.109675332432
</code></pre>
<p>I am trying to return the number of indices that are all matched following the condition above. In other words, to rearrange the output to:</p>
<pre><code>output = [No. matched entries]
</code></pre>
<p>i.e.</p>
<pre><code>output = [2, 3]
</code></pre>
<p>2 coming from the fact that indices 0 and 1 are matched</p>
<p>3 coming from the fact that indices 2, 3, and 4 are all matched</p>
<p>A possible way I have thought of is to append to a list, all the indices used such that I return</p>
<pre><code>output_list = [0, 1, 1, 0, 2, 3, 4, 3, 2, 4, 4, 2, 3]
</code></pre>
<p>Then, I use defaultdict to count the occurrences:</p>
<pre><code>for index in output_list:
hits[index] += 1
</code></pre>
<p>From the dict I can manipulate it to return [2, 3], but is there a more Pythonic way of achieving this?</p>
| 1 | 2016-10-03T10:29:17Z | 39,833,406 | <p>This is finding connected components of a graph, which is very easy and well documented, once you revisit the problem from that view.</p>
<p>The data being in two lists is a distraction. I am going to consider the data to be zip(e_list, p_list). Consider this as a graph, which in this case has 5 nodes (but could have many more on a different data set). Construct the graph using these nodes, and connect them with an edge if they pass your distance test. </p>
<p>From there, you only need to determine the connected components of an undirected graph, which is covered in many places. Here is a basic depth first search on this site: <a href="http://stackoverflow.com/questions/21078445/find-connected-components-in-a-graph">Find connected components in a graph</a></p>
<p>You loop through the nodes once, performing a DFS to find all connected nodes. Once you look at a node, mark it visited, so it does not get counted again. To get the answer in the format you want, simply count the number of unvisited nodes found from each unvisited starting point, and append that to a list.</p>
<p>------------------------ graph theory ----------------------</p>
<p>You have data points that you want to break down into related groups. This is a topic in both mathematics and computer science known as graph theory. see: <a href="https://en.wikipedia.org/wiki/Graph_theory" rel="nofollow">https://en.wikipedia.org/wiki/Graph_theory</a></p>
<p>You have data points. Imagine drawing them in eta phi space as rectangular coordinates, and then draw lines between the points that are close to each other. You now have a "graph" with vertices and edges.</p>
<p>To determine which of these dots have lines between them is finding connected components. Obviously it's easy to see, but if you have thousands of points, and you want a computer to find the connected components quickly, you use graph theory.</p>
<p>Suppose I make a list of all the eta phi points with zip(e_list, p_list), and each entry in the list is a vertex. If you store the graph in "adjacency list" format, then each vertex will also have a list of the outgoing edges which connect it to another vertex.</p>
<p>Finding a connected component is literally as easy as looking at each vertex, putting a checkmark by it, and then following every line to the next vertex and putting a checkmark there, until you can't find anything else connected. Now find the next vertex without a checkmark, and repeat for the next connected component.</p>
<p>As a programmer, you know that writing your own data structures for common problems is a bad idea when you can use published and reviewed code to handle the task. Google "python graph module". One example mentioned in comments is "pip install networkx". If you build the graph in networkx, you can get the connected components as a list of lists, then take the len of each to get the format you want: [len(_) for _ in nx.connected_components(G)]</p>
<p>---------------- code -------------------</p>
<p>But if you don't understand the math, then you might not understand a module for graphs, nor a base python implementation, but it's pretty easy if you just look at some of those links. Basically dots and lines, but pretty useful when you apply the concepts, as you can see with your problem being nothing but a very simple graph theory problem in disguise.</p>
<p>My graph is a basic list here, so the vertices don't actually have names. They are identified by their list index.</p>
<pre><code>e_list = [-0.619489,-0.465505, 0.124281, -0.498212, -0.51]
p_list = [-1.7836,-1.14238, 1.73884, 1.94904, 1.84]
def deltaR(e1, p1, e2, p2):
de = e1 - e2
dp = p1 - p2
return de*de + dp*dp
X = 1 # you never actually said, but this works
def these_two_particles_are_going_the_same_direction(p1, p2):
return deltaR(p1.eta, p1.phi, p2.eta, p2.phi) < X
class Vertex(object):
def __init__(self, eta, phi):
self.eta = eta
self.phi = phi
self.connected = []
self.visited = False
class Graph(object):
def __init__(self, e_list, p_list):
self.vertices = []
for eta, phi in zip(e_list, p_list):
self.add_node(eta, phi)
def add_node(self, eta, phi):
# add this data point at the next available index
n = len(self.vertices)
a = Vertex(eta, phi)
for i, b in enumerate(self.vertices):
if these_two_particles_are_going_the_same_direction(a,b):
b.connected.append(n)
a.connected.append(i)
self.vertices.append(a)
def reset_visited(self):
        for v in self.vertices:
v.visited = False
def DFS(self, n):
#perform depth first search from node n, return count of connected vertices
count = 0
v = self.vertices[n]
if not v.visited:
v.visited = True
count += 1
for i in v.connected:
count += self.DFS(i)
return count
def connected_components(self):
self.reset_visited()
components = []
for i, v in enumerate(self.vertices):
if not v.visited:
components.append(self.DFS(i))
return components
g = Graph(e_list, p_list)
print g.connected_components()
</code></pre>
| 2 | 2016-10-03T13:56:45Z | [
"python"
]
|
Using IVI-COM drivers with python via comtypes | 39,829,502 | <p>I am trying to get my IVI drivers working using comtypes. So far I have been successful in initializing the instrument thanks to <a href="http://stackoverflow.com/questions/13840997/python-instrument-drivers">Python instrument drivers</a>
more specifically Jorenko's post, as he is using the same instrument as me (I'm hoping he sees this as he seems to work for the company that makes the instrument).</p>
<p>So far I have:</p>
<pre><code>from comtypes import client
dmm = client.CreateObject('VTEXDmm.VTEXDmm')
dmm.Initialize('TCPIP::10.20.30.40::INSTR', True, True)
dmm.Initiate()
dmm.Measurement.Read(1000)
#dmm.Measurement.Fetch(1000)
</code></pre>
<p>This works fine for taking readings from the default state, which is DC Volts, but I can't figure out how to set other functions.
I've tried</p>
<pre><code>dmm.Function = VTEXDmmFunctionACVolts
</code></pre>
<p>and had no joy with it.</p>
<p>It's worth noting that I have very little experience with IVI drivers.</p>
<p>Can someone please point me in the right direction?</p>
<p>Thanks</p>
| 0 | 2016-10-03T10:29:51Z | 39,852,314 | <p>Answered my own question (after much trial and error)</p>
<p>For anyone interested, I had a bit of success with the following</p>
<pre><code>import comtypes
from comtypes import client
dmm = client.CreateObject('VTEXDmm.VTEXDmm')
dmm.Initialize('TCPIP::10.20.30.40::INSTR', True, True)
dmm.Configure(Function=comtypes.gen.VTEXDmmLib.VTEXDmmFunctionACVolts, Range=1.0, Resolution=0.0001)
dmm.Initiate()
dmm.Measurement.Read(1000)
</code></pre>
| 0 | 2016-10-04T12:25:59Z | [
"python",
"com",
"driver",
"visa"
]
|
Seeking help to improve a crawler | 39,829,574 | <p>I'm a beginner with <strong>Scrapy/Python</strong>. I developed a crawler that can find <strong>expired domains</strong> and scan each one with an <strong>SEO API</strong>.<br />
My crawler works fine, but I'm pretty sure it isn't 100% optimized for the job.<br/><br/>
Could you suggest some tricks to improve the crawler, please?</p>
<p><strong>expired.py :</strong></p>
<pre><code>class HttpbinSpider(CrawlSpider):
name = "expired"
rules = (
Rule(LxmlLinkExtractor(allow=('.com', '.fr', '.net', '.org', '.info', '.casino', '.eu'),
deny=('facebook', 'amazon', 'wordpress', 'blogspot', 'free', 'reddit'),
callback='parse_obj',
process_request='add_errback',
follow=True),
)
def __init__(self, domains=None, **kwargs):
self.start_urls = json.loads(domains)
super(HttpbinSpider, self).__init__()
def add_errback(self, request):
return request.replace(errback=self.errback_httpbin)
def errback_httpbin(self, failure):
if failure.check(DNSLookupError):
request = failure.request
ext = tldextract.extract(request.url)
domain = ext.registered_domain
if domain != '':
domain = domain.replace("%20", "")
self.check_domain(domain)
def check_domain(self, domain):
if self.is_available(domain) == 'AVAILABLE':
self.logger.info('## Domain Expired : %s', domain)
url = 'http://api.majestic.com/api/json?app_api_key=API&cmd=GetIndexItemInfo&items=1&item0=' + domain + '&datasource=fresh'
response = urllib.urlopen(url)
data = json.loads(response.read())
response.close()
TrustFlow = data['DataTables']['Results']['Data'][0]['TrustFlow']
CitationFlow = data['DataTables']['Results']['Data'][0]['CitationFlow']
RefDomains = data['DataTables']['Results']['Data'][0]['RefDomains']
ExtBackLinks = data['DataTables']['Results']['Data'][0]['ExtBackLinks']
if (RefDomains > 20) and (TrustFlow > 4) and (CitationFlow > 4):
insert_table(domain, TrustFlow, CitationFlow, RefDomains, ExtBackLinks)
def is_available(self, domain):
url = 'https://api.internet.bs/Domain/Check?ApiKey=KEY&Password=PSWD&responseformat=json&domain' + domain
response = urllib.urlopen(url)
data = json.loads(response.read())
response.close()
return data['status']
</code></pre>
<p>Thanks a lot.</p>
 | -1 | 2016-10-03T10:33:17Z | 39,829,861 | <p>The biggest issue in your code is the urllib requests, which block the whole async scrapy routine. You can easily replace those with a scrapy request chain by yielding a <code>scrapy.Request</code>.</p>
<p>Something like this:</p>
<pre><code># module-level imports needed by the snippets below
import json
import logging
from scrapy import Request

def errback_httpbin(self, failure):
if not failure.check(DNSLookupError):
return
request = failure.request
ext = tldextract.extract(request.url)
domain = ext.registered_domain
if domain == '':
logging.debug('no domain: {}'.format(request.url))
return
domain = domain.replace("%20", "")
url = 'https://api.internet.bs/Domain/Check?ApiKey=KEY&Password=PSWD&responseformat=json&domain=' + domain
return Request(url, self.parse_checkdomain)
def parse_checkdomain(self, response):
    """check whether domain is available"""
    data = json.loads(response.text)
    if data['status'] == 'AVAILABLE':
        self.logger.info('Domain Expired : {}'.format(data['domain']))
        url = 'http://api.majestic.com/api/json?app_api_key=API&cmd=GetIndexItemInfo&items=1&item0=' + data['domain'] + '&datasource=fresh'
        # pass the domain along in meta so parse_claim can still use it
        return Request(url, self.parse_claim, meta={'domain': data['domain']})
def parse_claim(self, response):
    """save available domain's details"""
    data = json.loads(response.text)
    domain = response.meta['domain']
    # eliminate redundancy
    results = data['DataTables']['Results']['Data'][0]
# snake case is more pythonic
trust_flow = results['TrustFlow']
citation_flow = results['CitationFlow']
ref_domains = results['RefDomains']
ext_back_links = results['ExtBackLinks']
# don't need to wrap everything in ()
if ref_domains > 20 and trust_flow > 4 and citation_flow > 4:
insert_table(domain, trust_flow, citation_flow, ref_domains, ext_back_links)
</code></pre>
<p>This way your code is not being blocked and is fully asynchronous. Generally you don't want to use anything but scrapy requests when dealing with http in your scrapy spider.</p>
| 1 | 2016-10-03T10:50:43Z | [
"python",
"python-2.7",
"web-scraping",
"scrapy"
]
|
Video Stitching from multiple cameras | 39,829,582 | <p>I wanted to stitch a video from multiple cameras. While stitching I wanted to switch view from one camera to another. Is it possible to do it in OpenCv? </p>
<p>For example, I have 3 video paths (videos of the same duration) and want to create a single video summary by switching between the videos. To start with, I have created 3 video capture objects as shown below.</p>
<pre><code>cap0=cv2.VideoCapture(path1)
cap1=cv2.VideoCapture(path2)
cap2=cv2.VideoCapture(path3)
</code></pre>
<p>Similarly, I also created</p>
<pre><code>ret,frame0=cap0.read()
ret,frame1=cap1.read()
ret,frame3=cap2.read()
</code></pre>
<p>Initially I will have frames that are read by cap0 and passed into the VideoWriter object. After some time I want to insert frames read from path2, starting from the time where I switched away from path1. For example, if I wrote frames up to the 3rd second from path1, I want to insert frames of path2 from the 4th second until the 6th second.</p>
<p>Now if I switch back to path1, I want to insert its frames from the 7th second onward, skipping path1's frames from the 4th to the 6th second.</p>
<p>Is there any way of doing this, maybe by skipping frames, or any other alternative?</p>
 | 1 | 2016-10-03T10:33:30Z | 39,831,576 | <p>Yes, you can do this by first finding the fps of your videos using</p>
<pre><code>int fps = (int) cvGetCaptureProperty(capture1, CV_CAP_PROP_FPS);
</code></pre>
<p>Now calculate the number of frames you want to capture:</p>
<pre><code>numberOfFrames = fps*time
</code></pre>
<p>Here, time is the duration for which you want to capture one video. Thus, calculate the starting and ending frame for each video stream. Then capture these frames as images in a Mat and use VideoWriter to write them.</p>
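<p>The frame arithmetic above can be sketched in plain Python — the schedule and fps values here are invented for illustration; in practice fps comes from the capture's CAP_PROP_FPS property:</p>

```python
def frame_windows(schedule, fps):
    """Turn a (source_index, start_sec, end_sec) schedule into
    (source_index, start_frame, end_frame) ranges (numberOfFrames = fps * time)."""
    return [(src, int(start * fps), int(end * fps)) for src, start, end in schedule]

# e.g. path1 for seconds 0-3, path2 for seconds 4-6, back to path1 for 7-9, at 30 fps
windows = frame_windows([(0, 0, 3), (1, 4, 6), (0, 7, 9)], fps=30)
```

<p>While writing the summary, read and discard a source's frames until its start_frame, then write them up to its end_frame; frames outside a source's windows are simply skipped.</p>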
| 0 | 2016-10-03T12:23:43Z | [
"python",
"opencv",
"video",
"video-capture"
]
|
Video Stitching from multiple cameras | 39,829,582 | <p>I wanted to stitch a video from multiple cameras. While stitching I wanted to switch view from one camera to another. Is it possible to do it in OpenCv? </p>
<p>For example, I have 3 video paths (videos of the same duration) and want to create a single video summary by switching between the videos. To start with, I have created 3 video capture objects as shown below.</p>
<pre><code>cap0=cv2.VideoCapture(path1)
cap1=cv2.VideoCapture(path2)
cap2=cv2.VideoCapture(path3)
</code></pre>
<p>Similarly, I also created</p>
<pre><code>ret,frame0=cap0.read()
ret,frame1=cap1.read()
ret,frame3=cap2.read()
</code></pre>
<p>Initially I will have frames that are read by cap0 and passed into the VideoWriter object. After some time I want to insert frames read from path2, starting from the time where I switched away from path1. For example, if I wrote frames up to the 3rd second from path1, I want to insert frames of path2 from the 4th second until the 6th second.</p>
<p>Now if I switch back to path1, I want to insert its frames from the 7th second onward, skipping path1's frames from the 4th to the 6th second.</p>
<p>Is there any way of doing this, maybe by skipping frames, or any other alternative?</p>
 | 1 | 2016-10-03T10:33:30Z | 39,831,855 | <p>Since you wrote "while stitching", I assume you want to make a sort of GUI and see the results of switching at different times; otherwise Garvita Tiwari's answer seems to be correct.</p>
<p>One way of doing this is to use createTrackbar with a variable (say vid_no) whose value ranges from 0 to num_Cams - 1. Now you can simply use if/else on the value of vid_no to capture frames from the desired video.
Format for using a trackbar in OpenCV:</p>
<pre><code>createTrackbar("select_Video", "control", &vid_no, num_Cams - 1);
</code></pre>
| 0 | 2016-10-03T12:36:47Z | [
"python",
"opencv",
"video",
"video-capture"
]
|
Verbose_name and helptext lost when using django autocomplete light | 39,829,584 | <p>I have the model below, which includes a field called boxnumber.
When I don't use DAL, the verbose_name and help_text appear and get translated when needed.</p>
<p>But when adding DAL (see modelform below), it only shows the name, not translated and with no help text.</p>
<p>Any suggestions?</p>
<p>control/models.py:</p>
<pre><code>from django.utils.translation import ugettext_lazy as _
class Command(models.Model):
....
boxnumber = models.ForeignKey(SmartBox, models.SET_NULL, blank=True, null=True,
help_text=_("the Smart Box # on this client"),
verbose_name=_('Box-Number')
)
class CommandForm(ModelForm):
class Meta:
model = Command
fields = [...,
'boxnumber',
... ]
boxnumber = forms.ModelChoiceField(
queryset=SmartBox.objects.all(),
widget=autocomplete.ModelSelect2(url='control/boxnumber-autocomplete',
forward=['group'])
) # adding this removes help_text and verbose_name
</code></pre>
<p>Info:
DAL 3.1.8
Django 1.10.1
Python 3.4</p>
| 0 | 2016-10-03T10:33:53Z | 40,127,486 | <p>It's not specific to DAL. You're re-instantiating a new form field and widget, so you need to copy help_text and verbose_name yourself, for example by passing help_text=... and label=... to the forms.ModelChoiceField you declare.</p>
| 0 | 2016-10-19T09:32:17Z | [
"python",
"django",
"django-autocomplete-light"
]
|
How to call from function to another function | 39,829,660 | <p>I am making a minesweeper game within python with pygame.</p>
<pre><code>import pygame, math, sys
def bomb_check():
if check in BOMBS:
print("You hit a bomb!")
sys.exit
def handle_mouse(mousepos):
x, y = mousepos
x, y = math.ceil(x / 40), math.ceil(y / 40)
check = print("("+"{0}, {1}".format(x,y)+")")
</code></pre>
<p>I want to pass "check" to "bomb_check".
Any other solution to this problem is welcome; I am but a rookie at Python.</p>
| -1 | 2016-10-03T10:39:31Z | 39,829,738 | <p>Just use it as an argument:</p>
<pre><code>import pygame, math, sys
def bomb_check(check):
if check in BOMBS:
print("You hit a bomb!")
        sys.exit()  # note the parentheses; a bare sys.exit does nothing
def handle_mouse(mousepos):
x, y = mousepos
x, y = math.ceil(x / 40), math.ceil(y / 40)
check = x, y
print(check)
bomb_check(check)
</code></pre>
<p>That will work only if you are storing tuples of 2 items (integers) in your BOMBS. Of course it also requires BOMBS to be in global scope.</p>
| 0 | 2016-10-03T10:44:10Z | [
"python"
]
|
Attain a tally of a column of a 2d array | 39,829,712 | <p>I have a 2d array data. And would like to attain a tally every time the jth iteration is a 1.
Where i = rows and j = columns.
How do I go about doing this without a for loop? </p>
<p>Conceptually something like this:</p>
<pre><code>for r in range(row):
if(data[r][j] == 1)
amount += 1
</code></pre>
| 1 | 2016-10-03T10:42:24Z | 39,830,113 | <p>I interpret this question as wanting to iterate over both rows and columns, and add 1 to <code>amount</code> for each entry in data that is 1. This can be done without looping as follows.</p>
<pre><code>import numpy as np
data = np.ones((6,8))
amount = data[data == 1].sum()
print(amount)
</code></pre>
<p>If instead, you fix one column <code>j</code> and only want the amount in this column:</p>
<pre><code>import numpy as np
j=7
data = np.ones((6,8))
amount = data[:,j][data[:,j]==1].sum()
print(amount)
</code></pre>
| 0 | 2016-10-03T11:04:53Z | [
"python",
"arrays",
"numpy",
"multidimensional-array",
"scipy"
]
|
Attain a tally of a column of a 2d array | 39,829,712 | <p>I have a 2d array data. And would like to attain a tally every time the jth iteration is a 1.
Where i = rows and j = columns.
How do I go about doing this without a for loop? </p>
<p>Conceptually something like this:</p>
<pre><code>for r in range(row):
if(data[r][j] == 1)
amount += 1
</code></pre>
| 1 | 2016-10-03T10:42:24Z | 39,830,251 | <p>You can do as follows:</p>
<pre><code>import numpy as np
a = np.array([[0, 1], [1, 1]])
j = 1
np.sum(a[:, j] == 1)
</code></pre>
<p>will give you 2 as a result, while <code>np.sum(a[:, 0] == 1)</code> will give 1.</p>
<p>If as mentioned in your comment you want to use a condition on multiple arrays, you can use <code>np.logical_and(condition1, condition2)</code>:</p>
<pre><code>np.sum(np.logical_and(a[:, 0] == 1, b[:, 0] == 2))
</code></pre>
| 1 | 2016-10-03T11:12:51Z | [
"python",
"arrays",
"numpy",
"multidimensional-array",
"scipy"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,829,901 | <p>You can store the string to be printed in a variable inside the loop, and then, after the loop ends, remove the trailing <code>' + '</code> by slicing, e.g. <code>to_print[:-3]</code> (slicing off just one character with <code>to_print[:len(to_print)-1]</code> would leave part of the separator behind).
Then print to_print.
Here, to_print is the text you accumulate in the loop instead of printing it immediately; you print it once at the end, after slicing it as shown above.</p>
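A minimal sketch of this accumulate-then-strip approach (the exponent list is an assumed example):

```python
# Accumulate the text in a variable, then strip the trailing " + "
# before printing it once.
exponents = [10, 7, 6, 4, 1]  # assumed example values

to_print = ""
for n in exponents:
    to_print += "2**" + str(n) + " + "

to_print = to_print[:-3]  # drop the final " + " (three characters)
print(to_print)
# 2**10 + 2**7 + 2**6 + 2**4 + 2**1
```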
| 0 | 2016-10-03T10:53:05Z | [
"python",
"python-3.x",
"printing"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,829,904 | <p>Either check if you're at the last element and use a different print statement (without the final <code>'+'</code>), or construct your output in a list first and join the list before printing.</p>
<pre><code>output = []
for n in exponents:  # whatever sequence your loop iterates over
    output.append('2**%i' % n)
print(' + '.join(output))
</code></pre>
| 0 | 2016-10-03T10:53:14Z | [
"python",
"python-3.x",
"printing"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,829,920 | <p>It is a pretty common 'problem', and it is often solved by using the str.join method. I assume that you have a list of integers, so all you need to do is:</p>
<pre><code>powers = [10, 7, 6, 4]
print(' + '.join(['2 ** {n}'.format(n= n) for n in powers]))
</code></pre>
| 1 | 2016-10-03T10:54:03Z | [
"python",
"python-3.x",
"printing"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,829,972 | <p>If you separate the exponents, as you have probably done, you can use <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow"><code>str.join()</code></a>:</p>
<pre><code>>>> exponents = (10, 7, 6, 4, 1)
>>> print(' + '.join('2**{}'.format(n) for n in exponents))
2**10 + 2**7 + 2**6 + 2**4 + 2**1
</code></pre>
<p>That will work in both Python 2 & 3. You can also use the <code>print()</code> function with the <code>sep</code> argument:</p>
<pre><code>>>> print(*('2**{}'.format(n) for n in exponents), sep=' + ')
2**10 + 2**7 + 2**6 + 2**4 + 2**1
</code></pre>
| 2 | 2016-10-03T10:57:14Z | [
"python",
"python-3.x",
"printing"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,829,991 | <p>You could do the following, adapting it to your loop length and to however you produce each exponent <code>n</code>:</p>
<pre><code>exponents = [10, 7, 6, 4, 1]
for i, n in enumerate(exponents):
    if i < len(exponents) - 1:
        print('2**', n, ' + ', sep='', end='')
    else:
        print('2**', n, sep='', end='')
</code></pre>
| 0 | 2016-10-03T10:58:05Z | [
"python",
"python-3.x",
"printing"
]
|
How to get rid of extra + sign in print statement, Python | 39,829,827 | <pre><code>print('2**', n, ' + ', sep='', end='')
</code></pre>
<p>Hi the print statement above is in a loop so the output ends up being </p>
<pre><code>2 ** 10 + 2 ** 7 + 2 ** 6 + 2 ** 4 + 2 ** 1 +
</code></pre>
<p>I need to get rid of the last plus in the statement but have no idea how to go about doing so.</p>
| 1 | 2016-10-03T10:48:37Z | 39,830,120 | <p><a href="http://www.tutorialspoint.com/python/string_join.htm" rel="nofollow">join()</a> helps a lot in this case:</p>
<pre><code>exponents = [10, 7, 6, 4, 1]
out = []
for n in exponents:
out.append('2 ** %d' % n)
print ' + '.join(out)
</code></pre>
| 0 | 2016-10-03T11:05:04Z | [
"python",
"python-3.x",
"printing"
]
|
Python - unittests: compare some objects using their attributes instead if they are really the same object? | 39,829,946 | <p>I have a method that sets data into a class attribute.</p>
<p>So let say I run:</p>
<pre><code>self._set_data(some_data)
print self._data
</code></pre>
<p>It prints me this information:</p>
<pre><code>{'c2': {
'column': 1,
'style': <xlwt.Style.XFStyle object at 0x7f4668a18dd0>,
'value': u'Argentina', 'row': 2},
'c1': {
'column': 0,
'style': <xlwt.Style.XFStyle object at 0x7f4668a18dd0>,
'value': 'C is not Python', 'row': 0}}
</code></pre>
<p>So every key except <code>style</code> has simple data, so there is no problem checking what is expected when running unittests. But I see a problem with the <code>style</code> key: it returns an instantiated <code>xlwt</code> style object. Even if I created the "same" style by passing the same values to <code>__init__</code>, the unittest would still fail, because it would compare object identity, and it would be a different object. Does Python's standard unittest suite have something for this? Or do I need to extend the unittest suite so it compares that specific object differently somehow?</p>
| 0 | 2016-10-03T10:55:31Z | 39,830,190 | <p>In your tests you can create a mock style object which is initialized with the expected values, then compare its <code>__dict__</code> attribute to the <code>__dict__</code> attribute of the tested style object.</p>
<pre><code>if mock_style.__dict__ == tested_style.__dict__:
print('The styles are set correctly.')
</code></pre>
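As a self-contained sketch of the idea (a plain class stands in for xlwt.Style.XFStyle here):

```python
class Style:
    """Stand-in for xlwt.Style.XFStyle in this sketch."""
    def __init__(self, font, borders):
        self.font = font
        self.borders = borders

mock_style = Style("Arial", 1)
tested_style = Style("Arial", 1)

print(mock_style is tested_style)                    # False: different objects
print(mock_style.__dict__ == tested_style.__dict__)  # True: same attributes
```

In a unittest.TestCase this would typically be written as self.assertEqual(mock_style.__dict__, tested_style.__dict__).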
| 0 | 2016-10-03T11:09:30Z | [
"python",
"python-2.7",
"object",
"python-unittest",
"xlwt"
]
|
set value from filepicker to dialog that called it | 39,830,040 | <p>I have an action button that calls a dialog that I've made:</p>
<pre><code>class MainPanelManager(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
self.actionLocation.triggered.connect(self.editsettings)
def editsettings(self):
dialog = QDialog()
dialog.ui = Ui_Dialog()
dialog.ui.setupUi(dialog)
dialog.ui.pushButton.clicked.connect(self.openfile)
dialog.exec_()
def openfile(self):
folder = QFileDialog.getExistingDirectory(self, 'Select Folder', 'C:/')
# folder value must be set to dialog textedit
</code></pre>
<p>The dialog works and opens the file picker when the button is pressed. How do I set the value once a folder is selected? I need to put the value in <code>textedit</code>.</p>
| 0 | 2016-10-03T11:01:05Z | 39,830,169 | <p>A simple solution would be declaring the <code>dialog</code> variable an attribute of the <code>self</code> object. You can then use it class-wide in all methods.</p>
<pre><code>class MainPanelManager(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
self.actionLocation.triggered.connect(self.editsettings)
def editsettings(self):
self.dialog = QDialog()
self.dialog.ui = Ui_Dialog()
        self.dialog.ui.setupUi(self.dialog)
self.dialog.ui.pushButton.clicked.connect(self.openfile)
self.dialog.exec_()
def openfile(self):
folder = QFileDialog.getExistingDirectory(self, 'Select Folder', 'C:/')
# folder value must be set to dialog textedit
        self.dialog.ui.textedit.setText(folder)
</code></pre>
| 0 | 2016-10-03T11:08:01Z | [
"python",
"qt",
"python-3.x",
"pyqt",
"qt-creator"
]
|
cPickle: UnpicklingError: invalid load key, 'A' | 39,830,198 | <p>I have a pickle file which upon unpickling throws an <code>UnpicklingError: invalid load key, 'A'.</code> exception. The exception gets thrown regardless of whether I try to analyse it on the Ubuntu 14.04 machine on which the file was generated or on my Windows machine. It contains 26 data points and the exception gets thrown after data point 11. I suspect I must have somehow accidentally edited the file though I don't know when or how. I know there are several other discussions on this sort of error but so far I haven't found a post yet telling me if and how I could go about recovering the values after the faulty entry (I suspect one of the values is just irretrievably lost). Is there any way I could skip it and carry on unpickling the next one? Can one e.g. unpickle in the reverse direction, i.e. last element first? Then I could work backwards till I hit the faulty entry and thus get the other values. (I could regenerate the data but it would take a day or two so I would rather avoid having to do that if I can.)</p>
<p>This is the code for pickling:</p>
<pre><code>with open('hist_vs_years2.pkl', 'ab') as hist_pkl:
pickle.dump(hist, hist_pkl, -1)
</code></pre>
<p>And this is the code for unpickling:</p>
<pre><code>hist_vs_samples2 = []
more_values = True
with open('hist_vs_years2.pkl', 'rb') as hist_vs_samples_pkl:
while more_values == True:
try:
hist_vs_samples2.append(pickle.load(hist_vs_samples_pkl))
except EOFError:
more_values = False
</code></pre>
<p>I should add that I am using cPickle. If I try to unpickle using pickle I get the following error:</p>
<pre><code> File "C:\Anaconda2\lib\pickle.py", line 1384, in load
return Unpickler(file).load()
File "C:\Anaconda2\lib\pickle.py", line 864, in load
dispatch[key](self)
KeyError: 'A'
</code></pre>
| 1 | 2016-10-03T11:09:52Z | 39,835,073 | <p>When storing multiple objects (by repeated <code>dump</code>, not from containers) Pickle will store objects sequentially in pickle files, so if an object is broken it can be removed without corrupting the others.</p>
<p>In principle, the pickle format is pseudo-documented in <code>pickle.py</code>. For most cases, the opcodes at the beginning of the module are sufficient to piece together what is happening. Basically, pickle files are an instruction on how to build objects.</p>
<p>How readable a pickle file is depends on its pickle format - 0 is doable, everything above is <strong>difficult</strong>. Whether you can fix or must delete depends entirely on this. What's consistent is that each individual pickle ends with a dot (<code>.</code>). For example, <code>b'Va\np0\n.'</code> and <code>b'\x80\x04\x95\x05\x00\x00\x00\x00\x00\x00\x00\x8c\x01a\x94.'</code> both are the character '"a"', but in protocol 0 and 4.</p>
<p>The simplest form of recovery is to count the number of objects you can load:</p>
<pre><code>import pickle

with open('/my/pickle.pkl', 'rb') as pkl_source:
    idx = 1
    while True:
        try:
            pickle.load(pkl_source)
        except Exception:
            break  # stops at the corrupt object (or at end of file)
        print(idx)
        idx += 1
</code></pre>
<p>Then open the pickle file, skip as many objects and remove everything up to the next <code>.</code>.</p>
| 1 | 2016-10-03T15:23:04Z | [
"python",
"python-2.7",
"pickle"
]
|
Celery workers wait | 39,830,235 | <p>I am writing an application with using Celery framework. Some of my tasks are pretty heavyweight and can execute for a long time.</p>
<p>I've noticed that when I run 5-6 workers and then put in 10-20 tasks, they may be distributed among the workers randomly, and sometimes when one worker runs out of tasks it does not start the remaining ones; they will be handled by the other workers only once those complete their current tasks (maybe in hours). If I start one more worker at this time, it does nothing, but it can accept new tasks.</p>
<p>Is it a bug or a feature, and how do I solve my problem? It does not make sense to wait hours while we have free workers and unstarted tasks.</p>
| 6 | 2016-10-03T11:12:07Z | 39,830,304 | <p>It is not a bug or a feature (more likely a feature), it is just misconfiguration. </p>
<p>As the <a href="http://docs.celeryproject.org/en/latest/userguide/optimizing.html#prefetch-limits">documentation</a> says, the worker can reserve some tasks for himself to hasten the processing messages. But this makes sense only for small and fast tasks - it does not ask the broker for the new message but immediately starts reserved one.</p>
<p>But for the long tasks this may lead to the case described in your question.</p>
<blockquote>
<p>If you have many tasks with a long duration you want the multiplier value to be 1, which means it will only reserve one task per worker process at a time.</p>
<p>If you have a combination of long- and short-running tasks, the best option is to use two worker nodes that are configured separately, and route the tasks according to the run-time.</p>
</blockquote>
<p>So, you need to set <code>CELERYD_PREFETCH_MULTIPLIER = 1</code> in the celery's settings.</p>
<p>But,</p>
<blockquote>
<p>When using early acknowledgement (default), a prefetch multiplier of 1 means the worker will reserve at most one extra task for every active worker process.</p>
<p>When users ask if it's possible to disable "prefetching of tasks", often what they really want is to have a worker only reserve as many tasks as there are child processes.</p>
</blockquote>
<p>I may also recommend setting <code>CELERY_ACKS_LATE = True</code> to send the ACK command only after the task has completed. This way the worker won't reserve any additional tasks at all; only the currently executing task will be marked as reserved.</p>
<p>Although this has a side effect - if the worker crashes or is terminated in the middle of executing your task, the task will be marked again as not-started and another worker may start it again from the beginning. So make sure you have <code>idempotent</code> tasks. See <a href="http://docs.celeryproject.org/en/latest/userguide/optimizing.html#reserve-one-task-at-a-time">docs</a> again about this.</p>
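Put together, a minimal settings fragment (using the Celery 3.x-era setting names mentioned above) might look like:

```python
# Celery settings fragment (3.x-style setting names)
CELERYD_PREFETCH_MULTIPLIER = 1  # reserve at most one extra task per process
CELERY_ACKS_LATE = True          # acknowledge only after the task completes
```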
| 5 | 2016-10-03T11:16:01Z | [
"python",
"celery"
]
|
How to perform two functions in a single for loop in python | 39,830,241 | <p>I wanna split a sentence and also replace quotes from it. I did:</p>
<pre><code>sentences = read_data.split('\n')
sentences_no_quotes = [sentence.replace('"', '') for sentence in sentences]
splited_sentences = [sentence.split(',') for sentence in sentences_no_quotes]
</code></pre>
<p>How can I do it in a single line? Any suggestions? Thanks for the help.</p>
| -3 | 2016-10-03T11:12:29Z | 39,831,171 | <p>Just do it, it's straightforward.</p>
<pre><code>splited_sentences = [sentence.split(',') for sentence in
[sentence.replace('"', '') for sentence in
read_data.split('\n')]]
</code></pre>
<p>Probably you'd better use generators instead of lists but that's beyond your question.</p>
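For illustration, a generator-based variant of the same pipeline (the sample input string is an assumed example):

```python
# Same pipeline, but the intermediate step is a lazy generator instead
# of a list, so the data is only walked once.
read_data = 'He said "hi", again\nShe said "bye", later'  # assumed sample

no_quotes = (line.replace('"', '') for line in read_data.split('\n'))
splited_sentences = [sentence.split(',') for sentence in no_quotes]

print(splited_sentences)
# [['He said hi', ' again'], ['She said bye', ' later']]
```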
| 1 | 2016-10-03T12:04:34Z | [
"python",
"python-3.x"
]
|
Ubuntu Server Python Pandas âSVD did not convergeâ | 39,830,257 | <p>I have the following code.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
# cadf.py
import datetime
import MySQLdb as mdb
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd
import pprint
import statsmodels.tsa.stattools as ts
from pandas.stats.api import ols
if __name__ == "__main__":
# Connect to the MySQL instance
db_host = '192.168.200.128'
db_user = 'sec_master'
db_pass = 'pass'
db_name = 'GBP-USD'
con = mdb.connect(db_host, db_user, db_pass, db_name)
sql = """SELECT TIME,`BID-CLOSE`
FROM `GBP-USD`.`tbl_GBP-USD_1-Day`
WHERE TIME >= '2007-01-01' AND TIME <= '2016-09-20'
ORDER BY TIME ASC;"""
# Create a pandas dataframe from the SQL query
GBPUSD = pd.read_sql_query(sql, con=con, index_col='TIME')
if __name__ == "__main__":
# Connect to the MySQL instance
db_host2 = '192.168.200.128'
db_user2 = 'sec_master'
db_pass2 = 'pass'
db_name2 = 'EUR-USD'
con = mdb.connect(db_host2, db_user2, db_pass2, db_name2)
sql2 = """SELECT TIME,`BID-CLOSE`
FROM `EUR-USD`.`tbl_EUR-USD_1-Day`
WHERE TIME >= '2007-01-01' AND TIME <= '2016-09-20'
ORDER BY TIME ASC;"""
# Create a pandas dataframe from the SQL query
EURUSD = pd.read_sql_query(sql2, con=con, index_col='TIME')
def plot_price_series(df, ts1, ts2):
months = mdates.MonthLocator() # every month
fig, ax = plt.subplots()
ax.plot(df.index, df[ts1], label=ts1)
ax.plot(df.index, df[ts2], label=ts2)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
ax.set_xlim(datetime.datetime(2007, 1, 1), datetime.datetime(2016, 9, 20))
ax.grid(True)
fig.autofmt_xdate()
plt.xlabel('Month/Year')
plt.ylabel('Price ($)')
plt.title('%s and %s Daily Prices' % (ts1, ts2))
plt.legend()
plt.show()
def plot_scatter_series(df, ts1, ts2):
plt.xlabel('%s Price ($)' % ts1)
plt.ylabel('%s Price ($)' % ts2)
plt.title('%s and %s Price Scatterplot' % (ts1, ts2))
plt.scatter(df[ts1], df[ts2])
plt.show()
def plot_residuals(df):
months = mdates.MonthLocator() # every month
fig, ax = plt.subplots()
ax.plot(df.index, df["res"], label="Residuals")
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
ax.set_xlim(datetime.datetime(2007, 1, 1), datetime.datetime(2016, 9, 20))
ax.grid(True)
fig.autofmt_xdate()
plt.xlabel('Month/Year')
plt.ylabel('Price ($)')
plt.title('Residual Plot')
plt.legend()
plt.plot(df["res"])
plt.show()
if __name__ == "__main__":
df = pd.DataFrame(index=GBPUSD.index)
df["GBPUSD"] = GBPUSD["BID-CLOSE"]
df["EURUSD"] = EURUSD["BID-CLOSE"]
# Plot the two time series
plot_price_series(df, "GBPUSD", "EURUSD")
# Display a scatter plot of the two time series
plot_scatter_series(df, "GBPUSD", "EURUSD")
# Calculate optimal hedge ratio "beta"
res = ols(y=df['EURUSD'], x=df["GBPUSD"])
beta_hr = res.beta.x
# Calculate the residuals of the linear combination
df["res"] = df["EURUSD"] - beta_hr*df["GBPUSD"]
# Plot the residuals
plot_residuals(df)
# Calculate and output the CADF test on the residuals
cadf = ts.adfuller(df["res"])
pprint.pprint(cadf)
</code></pre>
<p>I run the code on Ubuntu Server and receive the following error.</p>
<blockquote>
<p>james@GEN-U-DAE-01:~/Desktop$ sudo python cadf_gbpusdAndEurusd2.py</p>
</blockquote>
<pre><code>sys:1: FutureWarning: The pandas.stats.ols module is deprecated and will be removed in a future version. We refer to external packages like statsmodels, see some examples here: http://statsmodels.sourceforge.net/stable/regression.html
Traceback (most recent call last):
File "cadf_gbpusdAndEurusd2.py", line 110, in <module>
cadf = ts.adfuller(df["res"])
File "/usr/local/lib/python2.7/dist-packages/statsmodels/tsa/stattools.py", line 231, in adfuller
maxlag, autolag)
File "/usr/local/lib/python2.7/dist-packages/statsmodels/tsa/stattools.py", line 73, in _autolag
results[lag] = mod_instance.fit()
File "/usr/local/lib/python2.7/dist-packages/statsmodels/regression/linear_model.py", line 174, in fit
self.pinv_wexog, singular_values = pinv_extended(self.wexog)
File "/usr/local/lib/python2.7/dist-packages/statsmodels/tools/tools.py", line 392, in pinv_extended
u, s, vt = np.linalg.svd(X, 0)
File "/usr/local/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 1359, in svd
u, s, vt = gufunc(a, signature=signature, extobj=extobj)
File "/usr/local/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 99, in _raise_linalgerror_svd_nonconvergence
raise LinAlgError("SVD did not converge")
numpy.linalg.linalg.LinAlgError: SVD did not converge
</code></pre>
<p>Everything else such as SQL connection and charts seem to be working fine.</p>
<p>I found the exact error point in the data but still I am unsure why this is happening. If I run the dates from '2006-12-31' to '2016-09-20', I get the error. However if I run the program from '2007-01-01' to '2016-09-20' the program completes without error. </p>
<p>Below is a snippet of the data.</p>
<p>GBPUSD</p>
<pre><code>TIME, BID-OPEN, BID-HIGH, BID-LOW, BID-CLOSE, ASK-OPEN, ASK-HIGH, ASK-LOW, ASK-CLOSE, VOLUME
'2007-01-03 22:00:00', '1.9513', '1.9531', '1.9397', '1.9428', '1.9511', '1.9529', '1.9395', '1.9426', '11882'
'2007-01-02 22:00:00', '1.9736', '1.975', '1.9481', '1.9513', '1.9734', '1.9748', '1.9479', '1.9511', '12260'
'2007-01-01 22:00:00', '1.9581', '1.9741', '1.958', '1.9736', '1.9579', '1.9739', '1.9578', '1.9734', '8113'
'2006-12-31 22:00:00', '1.958', '1.9581', '1.958', '1.9581', '1.9583', '1.9583', '1.9579', '1.9579', '1'
'2006-12-28 22:00:00', '1.9631', '1.967', '1.9566', '1.958', '1.9634', '1.9673', '1.9569', '1.9583', '9684'
'2006-12-27 22:00:00', '1.9561', '1.9678', '1.9554', '1.9631', '1.9564', '1.9681', '1.9557', '1.9634', '10025'
'2006-12-26 22:00:00', '1.9536', '1.9633', '1.9527', '1.9561', '1.9539', '1.9636', '1.953', '1.9564', '10049'
</code></pre>
<p>EURUSD</p>
<pre><code>TIME, BID-OPEN, BID-HIGH, BID-LOW, BID-CLOSE, ASK-OPEN, ASK-HIGH, ASK-LOW, ASK-CLOSE, VOLUME
'2007-01-03 22:00:00', '1.31701', '1.31791', '1.30761', '1.30831', '1.31679', '1.31769', '1.30739', '1.30809', '10377'
'2007-01-02 22:00:00', '1.32731', '1.32911', '1.31461', '1.31701', '1.32709', '1.32889', '1.31439', '1.31679', '9935'
'2007-01-01 22:00:00', '1.32021', '1.32971', '1.32001', '1.32731', '1.32029', '1.32949', '1.31979', '1.32709', '6711'
'2006-12-28 22:00:00', '1.31491', '1.32051', '1.31381', '1.32021', '1.31499', '1.32059', '1.31389', '1.32029', '8010'
'2006-12-27 22:00:00', '1.31131', '1.32021', '1.31061', '1.31491', '1.31139', '1.32029', '1.31069', '1.31499', '8313'
'2006-12-26 22:00:00', '1.30971', '1.31781', '1.30951', '1.31131', '1.30979', '1.31789', '1.30959', '1.31139', '8242'
</code></pre>
<p>How do I fix this error message, please could someone explain to me what the issue is?</p>
<p>Thank you</p>
| 1 | 2016-10-03T11:13:10Z | 39,870,212 | <p>After some testing with other data, my first thought was the quantity of rows being tested. This was not the case. </p>
<p>The problem was a missing data point, which can be seen in the snippet above. Row '2006-12-31 22:00:00' was missing from the 'EUR-USD' sample, causing the calculation to produce NaN; the calculation could only utilise one complete side, 'GBP-USD'. Hope that makes sense.</p>
<p>Problem solved by inserting a new row, albeit a fictitious one using the SQL code below.</p>
<pre><code>INSERT INTO `EUR-USD`.`tbl_EUR-USD_1-Day` (`TIME`, `BID-OPEN`, `BID-HIGH`, `BID-LOW`, `BID-CLOSE`, `ASK-OPEN`, `ASK-HIGH`, `ASK-LOW`, `ASK-CLOSE`, `VOLUME`)
VALUES ('2006-12-31 22:00:00', '1.32021', '1.32971', '1.32001', '1.32731', '1.32029', '1.32949', '1.31979', '1.32709', '6711');
</code></pre>
<p>Moving forward I will write a script to spot missing data and calculate an average value of before and after the missing data point.</p>
<p>UPDATE</p>
<p>After further reading I suggest not updating the records in the database, it is best to keep the RAW data 'as is' and manipulate it with Pandas.</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow">Pandas</a> has some excellent options; below is just one of them.</p>
<p>FFILL is a crude way of filling in missing data. I will be testing other methods and update this post accordingly.</p>
<pre><code>idx = pd.date_range('1997-12-31 22:00:00', '2016-09-18 22:00:00', freq= 'D')
EURUSD = EURUSD.reindex(idx, method='ffill')
GBPUSD = GBPUSD.reindex(idx, method='ffill')
</code></pre>
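As a stdlib-only sketch of "spotting missing data" in a daily series (the dates here are assumed sample values, not the real table):

```python
from datetime import date, timedelta

# Dates present in the table (assumed sample); one day is missing.
have = {date(2006, 12, 28), date(2006, 12, 29), date(2006, 12, 31)}
start, end = date(2006, 12, 28), date(2006, 12, 31)

missing = []
d = start
while d <= end:
    if d not in have:
        missing.append(d)
    d += timedelta(days=1)

print(missing)
# [datetime.date(2006, 12, 30)]
```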
| -1 | 2016-10-05T09:27:03Z | [
"python",
"ubuntu",
"pandas",
"quantitative-finance"
]
|
Parse args to json elements in python | 39,830,277 | <p>I have a data structure that I want to fill with cmd args:</p>
<pre><code>params = {
...
...
"blockX": "{\"nameX\":["\Arg1\","\Arg2\",\"Arg3\"........"\ArgX\"]}",
...
...
}
</code></pre>
<p>Then i run a post request with the params as data:</p>
<pre><code>r = requests.post("https://url", data=params)
</code></pre>
<p>I only know how to do it with one element</p>
<pre><code>"BlockX": "{\"nameX\": \"%s\"}" %ArgX,
</code></pre>
<p>Regards</p>
| 0 | 2016-10-03T11:14:30Z | 39,830,424 | <p>If you have your arguments in <code>args</code> list, then you can try this:</p>
<pre><code>import json
params['blockX'] = json.dumps({'nameX': args})
</code></pre>
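For example (the argument list is an assumed sample):

```python
import json

args = ["Arg1", "Arg2", "Arg3"]  # assumed parsed command-line arguments

params = {}
params['blockX'] = json.dumps({'nameX': args})

print(params['blockX'])
# {"nameX": ["Arg1", "Arg2", "Arg3"]}
```

This builds the same JSON string shown in the question, with all the quoting handled for you.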
| 0 | 2016-10-03T11:22:33Z | [
"python",
"json",
"args"
]
|
Parse args to json elements in python | 39,830,277 | <p>I have a data structure that I want to fill with cmd args:</p>
<pre><code>params = {
...
...
"blockX": "{\"nameX\":["\Arg1\","\Arg2\",\"Arg3\"........"\ArgX\"]}",
...
...
}
</code></pre>
<p>Then i run a post request with the params as data:</p>
<pre><code>r = requests.post("https://url", data=params)
</code></pre>
<p>I only know how to do it with one element</p>
<pre><code>"BlockX": "{\"nameX\": \"%s\"}" %ArgX,
</code></pre>
<p>Regards</p>
| 0 | 2016-10-03T11:14:30Z | 39,834,055 | <p>Your example suggests you want to <code>join</code> the arguments. You can convert any list into a single string by merging with a separator:</p>
<pre><code>args = ['foo', 'bar', 'hold the mustard']
print('", "'.join(args))
</code></pre>
<p>So to get your list <code>'"Arg1", "Arg2"'</code> etc, join them with the '", "' separator and insert <em>that</em> into your template:</p>
<pre><code>'{"nameX": ["%s"]}' % '", "'.join(args)
</code></pre>
<p>Note how <code>'</code> allows you to skip escaping each <code>"</code>. In your example, this would look like:</p>
<pre><code>params = {
#...
"blockX": '{"nameX": ["%s"]}' % '", "'.join(args),
# ...
}
</code></pre>
<p>Please keep in mind that you are putting a string representation of a JSON into a JSON! This is... perhaps not the right thing to do. A proper API would expect just a single JSON, in which case you can dump <code>args</code> directly.</p>
<pre><code>params = {
#...
"blockX": {
"nameX": args
},
# ...
}
</code></pre>
| 0 | 2016-10-03T14:29:16Z | [
"python",
"json",
"args"
]
|
For Loop doesn't spit out needed results | 39,830,446 | <p>I got this piece of code to spit out the unique "area number" in the URL. However, the loop doesn't work. It spits out the same number, please see below:</p>
<pre><code>import urllib3
from bs4 import BeautifulSoup
http = urllib3.PoolManager()
url = open('MS Type 1 URL.txt',encoding='utf-8-sig')
links = []
for link in url:
y = link.strip()
links.append(y)
url.close()
print('Amount of Links: ', len(links))
for x in links:
j = (x.find("=") + 1)
g = (x.find('&housing'))
print(link[j:g])
</code></pre>
<p>Results are:</p>
<p><a href="http://millersamuel.com/aggy-data/home/query_report?area=38&housing_type=3&measure=4&query_type=quarterly&region=1&year_end=2020&year_start=1980" rel="nofollow">http://millersamuel.com/aggy-data/home/query_report?area=38&housing_type=3&measure=4&query_type=quarterly&region=1&year_end=2020&year_start=1980</a>
23</p>
<p><a href="http://millersamuel.com/aggy-data/home/query_report?area=23&housing_type=1&measure=4&query_type=annual&region=1&year_end=2020&year_start=1980" rel="nofollow">http://millersamuel.com/aggy-data/home/query_report?area=23&housing_type=1&measure=4&query_type=annual&region=1&year_end=2020&year_start=1980</a>
23</p>
<p>As you can see, it spits out the area number '23', which appears in only one of these URLs, but not the '38' of the other URL.</p>
| 0 | 2016-10-03T11:23:41Z | 39,830,592 | <p>There's a typo in your code. You iterate over <code>links</code> list and bind its elements to <code>x</code> variable, but print a slice of <code>link</code> variable, so you get the same string printed on each loop iteration. So you can change <code>print(link[j:g])</code> to <code>print(x[j:g])</code>, but it's better to call your variables with more descriptive names, so here's the fixed version of your loop:</p>
<pre><code>for link in links:
j = link.find('=') + 1
g = link.find('&housing')
print(link[j:g])
</code></pre>
<p>And I also want to show you a proper way to extract <code>area</code> value from URLs:</p>
<pre><code>from urllib.parse import urlparse, parse_qs
url = 'http://millersamuel.com/aggy-data/home/query_report?area=38&housing_type=3&measure=4&query_type=quarterly&region=1&year_end=2020&year_start=1980'
area = parse_qs(urlparse(url).query)['area'][0]
</code></pre>
<p>So instead of using <code>str.find</code> method, you can write this:</p>
<pre><code>for url in urls:
parsed_qs = parse_qs(urlparse(url).query)
if 'area' in parsed_qs:
area = parsed_qs['area'][0]
print(area)
</code></pre>
<p>Used functions:</p>
<ul>
<li><a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlparse" rel="nofollow"><code>urllib.urlparse</code></a></li>
<li><a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.parse_qs" rel="nofollow"><code>urllib.parse_qs</code></a></li>
</ul>
| 1 | 2016-10-03T11:32:03Z | [
"python",
"python-3.x",
"loops",
"beautifulsoup",
"urllib3"
]
|
For Loop doesn't spit out needed results | 39,830,446 | <p>I got this piece of code to spit out the unique "area number" in the URL. However, the loop doesn't work. It spits out the same number, please see below:</p>
<pre><code>import urllib3
from bs4 import BeautifulSoup
http = urllib3.PoolManager()
url = open('MS Type 1 URL.txt',encoding='utf-8-sig')
links = []
for link in url:
y = link.strip()
links.append(y)
url.close()
print('Amount of Links: ', len(links))
for x in links:
j = (x.find("=") + 1)
g = (x.find('&housing'))
print(link[j:g])
</code></pre>
<p>Results are:</p>
<p><a href="http://millersamuel.com/aggy-data/home/query_report?area=38&housing_type=3&measure=4&query_type=quarterly&region=1&year_end=2020&year_start=1980" rel="nofollow">http://millersamuel.com/aggy-data/home/query_report?area=38&housing_type=3&measure=4&query_type=quarterly&region=1&year_end=2020&year_start=1980</a>
23</p>
<p><a href="http://millersamuel.com/aggy-data/home/query_report?area=23&housing_type=1&measure=4&query_type=annual&region=1&year_end=2020&year_start=1980" rel="nofollow">http://millersamuel.com/aggy-data/home/query_report?area=23&housing_type=1&measure=4&query_type=annual&region=1&year_end=2020&year_start=1980</a>
23</p>
<p>As you can see it spits out the area number '23' which is only in one of this URL but not the '38' of the other URL.</p>
| 0 | 2016-10-03T11:23:41Z | 39,830,862 | <blockquote>
<p>You need to change: </p>
</blockquote>
<pre><code>print(link[j:g]) to print(x[j:g])
</code></pre>
| 0 | 2016-10-03T11:47:21Z | [
"python",
"python-3.x",
"loops",
"beautifulsoup",
"urllib3"
]
|
how to return data from python to php and display it on php page | 39,830,522 | <p>My python code gets data from php it takes value in text_content variable and then execute its script. But i dont understand to how to return back the results to php. help me please.</p>
<pre><code><body>
<?php
// define variables and set to empty values
$text_content="";
$hello="";
if ($_SERVER["REQUEST_METHOD"] == "POST") {
$hello = $_POST['text_content'];
$command="\Users\jonii\AppData\Local\Programs\Python\Python35\python splitter.py $hello ";
exec($command , $out,$ret );
//echo $out;
echo $ret;
}
?>
<form method="post" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>">
<textarea name="text_content" value="<?php echo $text_content;?>" cols="40" rows="4"> </textarea>
<input type="submit" name="submit" value="Submit">
</form>
</body>
</html>
</code></pre>
<p>my python file is:</p>
<pre><code>#!/Users/jonii/AppData/Local/Programs/Python/Python35/python
# Import modules for CGI handling
import cgi, cgitb
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
import sys
print("Content-type:text/html\n")
print("hello")
text_content = ''
for word in sys.argv[1:]:
text_content += word + ' '
print(text_content)
def sentence_split( text_content ):
# Add both the parameters and return them."
print(sent_tokenize(text_content))
return
# Now you can call sentence_split function
sentence_split( text_content );
</code></pre>
| 0 | 2016-10-03T11:28:22Z | 39,830,821 | <p>I think you have got the data, but you tried to print it as a string, and not as an array. Try:</p>
<pre><code>$command="\Users\jonii\AppData\Local\Programs\Python\Python35\python splitter.py $hello ";
exec($command , $out,$ret );
//echo $out;
/*Loop through each line of data returned*/
foreach ($out as $line){
print "$line\n";
}
echo $ret;
</code></pre>
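For reference, each line the Python script prints becomes one element of <code>$out</code> — PHP's <code>exec()</code> splits the captured stdout on newlines. A tiny Python stand-in for that behaviour (the sample output string is illustrative only):

```python
# What the Python side might print, captured as one stdout string
captured_stdout = "Content-type:text/html\n\nhello\nsome text\n"

# exec() in PHP hands this back as an array of lines
out = captured_stdout.rstrip("\n").split("\n")
print(out)  # ['Content-type:text/html', '', 'hello', 'some text']
```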
| 0 | 2016-10-03T11:44:52Z | [
"php",
"python",
"nltk"
]
|
How to rebuild project after SWIG files changed? | 39,830,598 | <p>Given the below makefile:</p>
<pre><code>TARGET = _example.pyd
OFILES = example.obj example_wrap.obj
HFILES =
CC = cl
CXX = cl
LINK = link
CPPFLAGS = -DNDEBUG -DUNICODE -DWIN32 -I. -Id:\virtual_envs\py351\include
CFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -O2 -MD -W3 -w44456 -w44457 -w44458
CXXFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -D_HAS_EXCEPTIONS=0 -O2 -MD -W3 -w34100 -w34189 -w44996 -w44456 -w44457 -w44458 -wd4577
LFLAGS = /LIBPATH:. /NOLOGO /DYNAMICBASE /NXCOMPAT /DLL /MANIFEST /MANIFESTFILE:$(TARGET).manifest /SUBSYSTEM:WINDOWS /INCREMENTAL:NO
LIBS = /LIBPATH:d:\virtual_envs\py351\libs python35.lib
.SUFFIXES: .c .cpp .cc .cxx .C
{.}.cpp{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.cc{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.cxx{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.C{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.c{}.obj::
$(CC) -c $(CFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
all: $(TARGET)
$(OFILES): $(HFILES)
$(TARGET): $(OFILES)
$(LINK) $(LFLAGS) /OUT:$(TARGET) @<<
$(OFILES) $(LIBS)
<<
mt -nologo -manifest $(TARGET).manifest -outputresource:$(TARGET);2
install: $(TARGET)
@if not exist d:\virtual_envs\py351\Lib\site-packages mkdir d:\virtual_envs\py351\Lib\site-packages
copy /y $(TARGET) d:\virtual_envs\py351\Lib\site-packages\$(TARGET)
clean:
-del $(TARGET)
-del *.obj
-del *.exp
-del *.lib
-del $(TARGET).manifest
test:
python runme.py
</code></pre>
<p>I'd like to improve a couple of things here:</p>
<ul>
<li>I'd like to consider swig files (*.i) in the makefile. For example, every time some swig file has been changed a new wrap file should be generated (ie: swig -python -c++ file_has_changed.cpp) and then rebuild the project</li>
<li>I'd like to avoid having hardcoded object files. For instance, I'd like to use all cpp files using wildcards somehow</li>
</ul>
<p>I've read a little bit of the docs talking about <a href="https://msdn.microsoft.com/en-us/library/yz1tske6.aspx" rel="nofollow">Makefiles</a> but I'm still pretty much confused. How could I achieve this?</p>
<p>Right now I'm using a <strong>hacky</strong> solution like <code>swig -python -c++ whatever_file.i && nmake</code>, that's of course it's not ideal at all</p>
<p>REFERENCES</p>
<p>Achieving this inside visual studio IDE is quite easy following <a href="http://stackoverflow.com/questions/5969173/how-to-swig-in-vs2010/6117641#6117641">these steps</a> but I'd like to use this makefile inside SublimeText, that's why I'm quite interested on knowing how to have a proper Makefile</p>
| 4 | 2016-10-03T11:32:23Z | 39,876,573 | <p>I have solved this using CMake and this translates directly to using <code>autoconf</code> and <code>automake</code> and thereby makefiles.</p>
<p>The idea is to introduce the following variable</p>
<pre><code>DEPENDENCIES = `swig -M -python -c++ -I. example.i | sed 's/\//g'`
</code></pre>
<p>and make your target depend on this. The above generates a list of dependencies of all headers and <code>.i</code> files your SWIG interface file may include.</p>
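The shell pipeline can be mimicked in plain Python for illustration: stripping the backslash line-continuations that <code>swig -M</code> emits collapses the make-style dependency list into one flat list (the sample output below is hypothetical, not a real <code>swig</code> run):

```python
# Hypothetical make-style dependency output in the shape `swig -M` produces,
# with backslash line continuations.
raw = "example_wrap.cxx: \\\n  example.i \\\n  example.h\n"

# Remove the continuations and split on whitespace
deps = raw.replace("\\", "").split()
print(deps)  # ['example_wrap.cxx:', 'example.i', 'example.h']
```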
| 1 | 2016-10-05T14:22:15Z | [
"python",
"c++",
"makefile",
"visual-studio-2015",
"swig"
]
|
How to rebuild project after SWIG files changed? | 39,830,598 | <p>Given the below makefile:</p>
<pre><code>TARGET = _example.pyd
OFILES = example.obj example_wrap.obj
HFILES =
CC = cl
CXX = cl
LINK = link
CPPFLAGS = -DNDEBUG -DUNICODE -DWIN32 -I. -Id:\virtual_envs\py351\include
CFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -O2 -MD -W3 -w44456 -w44457 -w44458
CXXFLAGS = -nologo -Zm200 -Zc:wchar_t- -FS -Zc:strictStrings -D_HAS_EXCEPTIONS=0 -O2 -MD -W3 -w34100 -w34189 -w44996 -w44456 -w44457 -w44458 -wd4577
LFLAGS = /LIBPATH:. /NOLOGO /DYNAMICBASE /NXCOMPAT /DLL /MANIFEST /MANIFESTFILE:$(TARGET).manifest /SUBSYSTEM:WINDOWS /INCREMENTAL:NO
LIBS = /LIBPATH:d:\virtual_envs\py351\libs python35.lib
.SUFFIXES: .c .cpp .cc .cxx .C
{.}.cpp{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.cc{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.cxx{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.C{}.obj::
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
{.}.c{}.obj::
$(CC) -c $(CFLAGS) $(CPPFLAGS) -Fo @<<
$<
<<
all: $(TARGET)
$(OFILES): $(HFILES)
$(TARGET): $(OFILES)
$(LINK) $(LFLAGS) /OUT:$(TARGET) @<<
$(OFILES) $(LIBS)
<<
mt -nologo -manifest $(TARGET).manifest -outputresource:$(TARGET);2
install: $(TARGET)
@if not exist d:\virtual_envs\py351\Lib\site-packages mkdir d:\virtual_envs\py351\Lib\site-packages
copy /y $(TARGET) d:\virtual_envs\py351\Lib\site-packages\$(TARGET)
clean:
-del $(TARGET)
-del *.obj
-del *.exp
-del *.lib
-del $(TARGET).manifest
test:
python runme.py
</code></pre>
<p>I'd like to improve a couple of things here:</p>
<ul>
<li>I'd like to consider swig files (*.i) in the makefile. For example, every time some swig file has been changed a new wrap file should be generated (ie: swig -python -c++ file_has_changed.cpp) and then rebuild the project</li>
<li>I'd like to avoid having hardcoded object files. For instance, I'd like to use all cpp files using wildcards somehow</li>
</ul>
<p>I've read a little bit of the docs talking about <a href="https://msdn.microsoft.com/en-us/library/yz1tske6.aspx" rel="nofollow">Makefiles</a> but I'm still pretty much confused. How could I achieve this?</p>
<p>Right now I'm using a <strong>hacky</strong> solution like <code>swig -python -c++ whatever_file.i && nmake</code>, that's of course it's not ideal at all</p>
<p>REFERENCES</p>
<p>Achieving this inside visual studio IDE is quite easy following <a href="http://stackoverflow.com/questions/5969173/how-to-swig-in-vs2010/6117641#6117641">these steps</a> but I'd like to use this makefile inside SublimeText, that's why I'm quite interested on knowing how to have a proper Makefile</p>
| 4 | 2016-10-03T11:32:23Z | 39,877,121 | <p>Producing any kind of target from any kind of source, that's the essence of a makefile:</p>
<pre><code>.i.cpp:
swig -python -c++ $<
</code></pre>
<p>This elegance will, however, break with <code>nmake</code> (<a href="https://www.gnu.org/software/make/manual/html_node/Chained-Rules.html" rel="nofollow">as opposed to GNU <code>make</code></a>) if the <code>.cpp</code> file is missing because <a href="http://stackoverflow.com/questions/4808674/nmake-inference-rules-limited-to-depth-of-1"><code>nmake</code> doesn't try to chain inference rules through a missing link</a>.<br>
Moreover, it will break silently and "build" from stale versions of the files that are later in the build chain (which includes the resulting executable) if they are present.</p>
<p>Possible <s>kludges</s> workarounds here (save for ditching <code>nmake</code>, of course) are:</p>
<ul>
<li><p>invoke <code>nmake</code> multiple times, first, to generate all files that are an intermediate steps between two <a href="https://msdn.microsoft.com/en-us/library/968fkazs.aspx" rel="nofollow">inference rules</a> (which can in turn require multiple invocations if they are generated from one another), and then for the final targets</p>
<ul>
<li><p>This requires an external script which can very well be another makefile. E.g.:
move the current <code>Makefile</code> to <code>main_makefile</code> and create a new <code>Makefile</code> with commands for the main target like this:</p>
<pre><code>python -c "import os,os.path,subprocess;
subprocess.check_call(['nmake', '/F', 'main_makefile']
+[os.path.splitext(f)[0]+'.cpp'
for f in os.listdir('.') if os.path.isfile(f)
and f.endswith('.i')])"
nmake /F main_makefile
</code></pre></li>
</ul></li>
<li><p>do not rely solely on inference rules but have an explicit rule for each <code>.cpp</code> to be produced (that's what CMake does btw)</p>
<ul>
<li><p>this asks for the relevant part of Makefile to be autogenerated. That part can be <code>!INCLUDE</code>'d, but still, external code is needed to do the generation before <code>nmake</code> gets to work on the result. Example code (again, in Python):</p>
<pre class="lang-python prettyprint-override"><code>import os,os.path,subprocess
for f in os.listdir('.'):
    if os.path.isfile(f) and f.endswith('.i'):
        print '"%s": "%s"'%(os.path.splitext(f)[0]+'.cxx',f)
#quotes are to allow for special characters,
# see https://msdn.microsoft.com/en-us/library/956d3677.aspx
#command is not needed, it will be added from the inferred rule I gave
# in the beginning, see http://www.darkblue.ch/programming/Namke.pdf, p.41 (567)
</code></pre></li>
</ul></li>
</ul>
| 3 | 2016-10-05T14:44:42Z | [
"python",
"c++",
"makefile",
"visual-studio-2015",
"swig"
]
|
How to apply multiprocessing technique in python for-loop? | 39,830,676 | <p>I have a long list of user(about 200,000) and a corresponding data frame <code>df</code> with their attributes. Now I'd like to write a for loop to measure pair-wise similarity of the users. The code is following:</p>
<pre><code>df2record = pd.DataFrame(columns=['u1', 'u2', 'sim'])
for u1 in reversed(user_list):
for u2 in reversed(list(range(1, u1))):
        sim = measure_sim(df[u1], df[u2])
if sim < 0.6:
continue
else:
df2record = df2record.append(pd.Series([u1, u2, sim], index=['u1', 'u2', 'sim']), ignore_index=True)
</code></pre>
<p>Now I want to run this for loop with multiprocessing, and I have read some tutorials, but I still have no idea how to handle it properly. It seems that I should set a reasonable number of processes first, like <code>6</code>, and then feed each loop into one process. But the problem is: how can I know that the task in a certain process has been done, so that a new loop can begin? Could you help me with this? Thank you in advance!</p>
| 1 | 2016-10-03T11:37:31Z | 39,831,594 | <p>First of all, I would not recommend using multiprocessing on such small data, especially when you are working with a data frame: the data frame has a lot of functionality of its own that can help you in many ways. You just need to write a proper loop.</p>
<p>Use: multiprocessing.Pool</p>
<p>Just pass the list of users as the iterable to <code>pool.map()</code>; you only need to build that iterable with a little tweak.</p>
<pre><code>from multiprocessing import Pool
with Pool(processes=6) as pool:
    pool.map(function, iterator)
</code></pre>
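A runnable sketch of the same pattern — here using <code>multiprocessing.dummy</code> (a thread-based <code>Pool</code> with the identical API) and a toy function, purely for illustration:

```python
from multiprocessing.dummy import Pool  # thread-based Pool, same interface

def square(x):
    return x * x

# map distributes the iterable's items over the pool's workers
with Pool(4) as pool:
    results = pool.map(square, range(5))

print(results)  # [0, 1, 4, 9, 16]
```

Swapping the import back to <code>from multiprocessing import Pool</code> gives real processes with the same code.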
| 0 | 2016-10-03T12:24:28Z | [
"python",
"multiprocessing"
]
|
How to apply multiprocessing technique in python for-loop? | 39,830,676 | <p>I have a long list of user(about 200,000) and a corresponding data frame <code>df</code> with their attributes. Now I'd like to write a for loop to measure pair-wise similarity of the users. The code is following:</p>
<pre><code>df2record = pd.DataFrame(columns=['u1', 'u2', 'sim'])
for u1 in reversed(user_list):
for u2 in reversed(list(range(1, u1))):
        sim = measure_sim(df[u1], df[u2])
if sim < 0.6:
continue
else:
df2record = df2record.append(pd.Series([u1, u2, sim], index=['u1', 'u2', 'sim']), ignore_index=True)
</code></pre>
<p>Now I want to run this for loop with multiprocessing, and I have read some tutorials, but I still have no idea how to handle it properly. It seems that I should set a reasonable number of processes first, like <code>6</code>, and then feed each loop into one process. But the problem is: how can I know that the task in a certain process has been done, so that a new loop can begin? Could you help me with this? Thank you in advance!</p>
| 1 | 2016-10-03T11:37:31Z | 39,831,601 | <p>You can use <a href="https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.pool.Pool" rel="nofollow">multiprocessing.Pool</a>, which provides a <code>map</code> method that maps a pool of processes over a given iterable. Here's some example code:</p>
<pre><code>import multiprocessing

def pairGen():
for u1 in reversed(user_list):
for u2 in reversed(list(range(1, u1))):
yield (u1, u2)
def processFun(pair):
u1, u2 = pair
    sim = measure_sim(df[u1], df[u2])
if sim < 0.6:
return None
else:
return pd.Series([u1, u2, sim], index=['u1', 'u2', 'sim'])
def main():
with multiprocessing.Pool(processes=6) as pool:
vals = pool.map(processFun, pairGen())
df2record = pd.DataFrame(columns=['u1', 'u2', 'sim'])
for v in vals:
        if v is not None:
df2record = df2record.append(v, ignore_index=True)
</code></pre>
| 1 | 2016-10-03T12:24:56Z | [
"python",
"multiprocessing"
]
|
Python: NoneType object has no attribute __getitem__. But its not nonetype | 39,830,681 | <p>So I have this piece of code:</p>
<pre><code> key = val_cfg['src_model']
print "key: ", key
print "objects dict: ", objects
print "before Nonetype"
print "accessing objects dict: ", objects[key] # line 122
print "after"
method = self._handle_object(
val_cfg, objects[val_cfg['src_model']])
</code></pre>
<p>So the output I get is this:</p>
<pre><code>key: ir.model.data
objects dict: {'res.partner': res.partner(22,), 'ir.model.data': ir.model.data()}
before Nonetype
accessing objects dict: ir.model.data()
after
</code></pre>
<p>And then I get error:</p>
<pre><code>_report.tests.test_vat_report: ` File "/home/user/addons/account_vat_report/models/vat_report.py", line 122, in _handle_method
2016-10-03 11:32:44,863 31650 ERROR vat_reports openerp.addons.account_vat_report.tests.test_vat_report: ` print "accessing objects dict: ", objects[key]
2016-10-03 11:32:44,863 31650 ERROR vat_reports openerp.addons.account_vat_report.tests.test_vat_report: ` TypeError: 'NoneType' object has no attribute '__getitem__'
</code></pre>
<p>Well, this does not make sense. I print the results of line 122, but the test fails, saying it's a NoneType object. How so? I'm probably missing something here. Does someone see what's wrong here?</p>
<p><strong>Update</strong>.
definition of <code>_handle_object</code>:</p>
<pre><code>@api.model
def _handle_object(self, val_cfg, obj):
"""Method to get value from object."""
# check if value is list or tuple of strings.
if not isinstance(val_cfg['value'], basestring):
for val in val_cfg['value']:
value = self._get_attribute(obj, val_cfg['value'])
# return first value that was retrieved.
if value:
return value
# if we do not get any "True" value, just return last
# one.
else:
return value
else:
return self._get_attribute(obj, val_cfg['value'])
</code></pre>
| 1 | 2016-10-03T11:37:44Z | 39,831,994 | <p>Well I made stupid mistake, though that error got me confused, because of traceback. So I was looking in a wrong direction.</p>
<p>The problem was in another method, that is used to update objects dictionary. Its purpose is to update value for specific key, if some current iter item (from iteration) needs to be assigned if specified in configuration.</p>
<p>Anyway, the problem was this (in another method called <code>_set_dynamic_rows_data</code>):</p>
<pre><code>objects = self._update_objects_dict(
val_cfg, objects, iter_)
</code></pre>
<p>I accidentally assigned the return value of this method to the <code>objects</code> dict, and because the method does not return anything, <code>objects</code> would be set to <code>None</code>. This was disguised by that error, because on the first iteration everything would be OK, and on the second iteration, after <code>objects</code> had been replaced, it would start failing as described in the question.</p>
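The pitfall in miniature (illustrative names only): a function that mutates its argument implicitly returns <code>None</code>, so rebinding the name to its return value silently destroys the dict:

```python
def update_objects_dict(objects):
    objects['key'] = 'value'   # mutates in place, implicitly returns None

objects = {}
objects = update_objects_dict(objects)  # the mistake: rebinds objects to None

print(objects)  # None -> a later objects[key] raises TypeError
```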
| 1 | 2016-10-03T12:45:04Z | [
"python",
"python-2.7",
"dictionary",
"nonetype"
]
|
Python: Reading text file and splitting into different lists | 39,830,781 | <p>I have a text file which contains two lines of text. Each line contains student names separated by a comma.</p>
<p>I'm trying to write a program which will read each line and convert it to a list. My solution seems to make two lists but I don't know how to differentiate between the two as both lists are called "filelist". For example I may need to append to the second list. How would I differentiate between the two?</p>
<p>Or is it possible for it to create completely separate lists with different names? I want the program to be able to handle many lines in the text file ideally.</p>
<p>My code is:</p>
<pre><code> filelist=[]
with open("students.txt") as students:
for line in students:
filelist.append(line.strip().split(","))
print(filelist)
</code></pre>
| 0 | 2016-10-03T11:42:36Z | 39,830,871 | <p>You will have to create a multi-dimensional array like this:</p>
<pre><code>text = open("file.txt").read()
lines = text.split("\n")
entries = []
for line in lines:
entries.append(line.split(","))
</code></pre>
<p>If your file is</p>
<pre><code>John,Doe
John,Smith
</code></pre>
<p>then entries will be:</p>
<pre><code>[["John", "Doe"], ["John", "Smith"]]
</code></pre>
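The same parsing can also be done with the <code>csv</code> module, which additionally copes with quoted commas — a small self-contained sketch using an in-memory file in place of <code>file.txt</code>:

```python
import csv
import io

# In-memory stand-in for the text file
data = io.StringIO("John,Doe\nJohn,Smith\n")

entries = list(csv.reader(data))
print(entries)  # [['John', 'Doe'], ['John', 'Smith']]
```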
| 2 | 2016-10-03T11:47:53Z | [
"python",
"list",
"file",
"text"
]
|
Python: Reading text file and splitting into different lists | 39,830,781 | <p>I have a text file which contains two lines of text. Each line contains student names separated by a comma.</p>
<p>I'm trying to write a program which will read each line and convert it to a list. My solution seems to make two lists but I don't know how to differentiate between the two as both lists are called "filelist". For example I may need to append to the second list. How would I differentiate between the two?</p>
<p>Or is it possible for it to create completely separate lists with different names? I want the program to be able to handle many lines in the text file ideally.</p>
<p>My code is:</p>
<pre><code> filelist=[]
with open("students.txt") as students:
for line in students:
filelist.append(line.strip().split(","))
print(filelist)
</code></pre>
| 0 | 2016-10-03T11:42:36Z | 39,830,873 | <p>In your code, <code>filelist</code> will be a list of lists, because you append a whole list to it each time, not individual elements. Consider doing <code>filelist += line.strip().split(",")</code>, which will concatenate the lists instead.</p>
| 0 | 2016-10-03T11:48:04Z | [
"python",
"list",
"file",
"text"
]
|
Python: Reading text file and splitting into different lists | 39,830,781 | <p>I have a text file which contains two lines of text. Each line contains student names separated by a comma.</p>
<p>I'm trying to write a program which will read each line and convert it to a list. My solution seems to make two lists but I don't know how to differentiate between the two as both lists are called "filelist". For example I may need to append to the second list. How would I differentiate between the two?</p>
<p>Or is it possible for it to create completely separate lists with different names? I want the program to be able to handle many lines in the text file ideally.</p>
<p>My code is:</p>
<pre><code> filelist=[]
with open("students.txt") as students:
for line in students:
filelist.append(line.strip().split(","))
print(filelist)
</code></pre>
| 0 | 2016-10-03T11:42:36Z | 39,831,510 | <p>Two lines will do it:</p>
<pre><code>with open("file.txt") as f:
qlist = map(lambda x:x.strip().split(","), f.readlines())
</code></pre>
<p>or </p>
<pre><code>with open("file.txt") as f:
qlist = [i.strip().split(",") for i in f.readlines()]
</code></pre>
| 0 | 2016-10-03T12:20:21Z | [
"python",
"list",
"file",
"text"
]
|
Python: Reading text file and splitting into different lists | 39,830,781 | <p>I have a text file which contains two lines of text. Each line contains student names separated by a comma.</p>
<p>I'm trying to write a program which will read each line and convert it to a list. My solution seems to make two lists but I don't know how to differentiate between the two as both lists are called "filelist". For example I may need to append to the second list. How would I differentiate between the two?</p>
<p>Or is it possible for it to create completely separate lists with different names? I want the program to be able to handle many lines in the text file ideally.</p>
<p>My code is:</p>
<pre><code> filelist=[]
with open("students.txt") as students:
for line in students:
filelist.append(line.strip().split(","))
print(filelist)
</code></pre>
| 0 | 2016-10-03T11:42:36Z | 39,901,969 | <p>If you are looking to have a list for each line, you can use a dictionary where the key is the line number and the value of the dictionary entry is the list of names for that line. Something like this</p>
<pre><code>with open('my_file.txt', 'r') as fin:
students = {k:line[:-1].split(',') for k,line in enumerate(fin)}
print students
</code></pre>
<p>The line[:-1] is to get rid of the newline at the end of each line</p>
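For illustration, the same comprehension run against an in-memory file (names are made up):

```python
import io

# Stand-in for open('my_file.txt', 'r')
fin = io.StringIO("Alice,Bob\nCarol,Dave\n")

students = {k: line[:-1].split(',') for k, line in enumerate(fin)}
print(students)  # {0: ['Alice', 'Bob'], 1: ['Carol', 'Dave']}
```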
| 0 | 2016-10-06T17:11:50Z | [
"python",
"list",
"file",
"text"
]
|
Django/Python: How to get the latest value from a model queryset? | 39,830,810 | <p>I have created a model 'VehicleDetails' in which a user can fill the details of a vehicle and another model 'TripStatus' in which he can update the vehicle location. I wanted to get the latest location for which I did as in my below code. But when i running the server, it returns all the values not the latest value. I would appreciate helping me solve this or suggesting a new approach.</p>
<p>models.py:</p>
<pre><code>class VehicleDetails(models.Model):
Vehicle_No = models.CharField(max_length=20)
class TripStatus(models.Model):
vehicledetails = models.ForeignKey(VehicleDetails, related_name='tripstatuss')
CHOICES = (('Yet to start', 'Yet to start'),('Trip starts', 'Trip starts'), ('Chennai','Chennai'), ('Vizag', 'Vizag'), ('Kolkata', 'Kolkata'))
Vehicle_Status = models.CharField(choices=CHOICES, default="Yet to start", max_length=20)
statustime = models.DateTimeField(auto_now=False, auto_now_add=True)
</code></pre>
<p>views:</p>
<pre><code>def status(request):
tripstatus = TripStatus.objects.all().latest('statustime')
context = {
"tripstatus": tripstatus,
}
return render(request, 'loggedin_load/active_deals.html', context)
</code></pre>
<p>template:</p>
<pre><code>{% for status in vehicledetails.tripstatuss.all %}
{{status.Vehicle_Status}}
{% endfor %}
</code></pre>
| -1 | 2016-10-03T11:44:08Z | 39,830,867 | <p>Should just have to remove the .all():</p>
<pre><code>tripstatus = TripStatus.objects.latest('statustime')
</code></pre>
<p>Or maybe:</p>
<pre><code>tripstatus = TripStatus.objects.order_by('-statustime').first()
</code></pre>
| 0 | 2016-10-03T11:47:50Z | [
"python",
"django",
"django-models",
"django-templates",
"django-views"
]
|
Change DNA sequences in fasta file using Biopython | 39,830,826 | <p>I have a file in fasta format with several DNA sequences. I want to change the content of each sequence for another smaller sequence, keeping the same sequence id.
The new sequences are in a list.</p>
<pre><code>with open("outfile.fa", "w") as f:
for seq_record in SeqIO.parse("ma-all-mito.fa", "fasta"):
for i in range(len(newSequences_ok)):
f.write(str(seq_record.id[i]) + "\n")
f.write(str(newSequences_ok[i]) + "\n")
</code></pre>
<p>But I get:</p>
<pre><code>IndexError: string index out of range
</code></pre>
<p>How could I change the code so that it works? I think the problem is that I need to iterate both through the original fasta file and through the list with the new sequences.</p>
<p>The original fasta file looks like this:</p>
<pre><code>>Sequence1
ATGATGCATGG
>Sequence2
TTTTGGGAATC
>Sequence3
GGGCTAACTAC
>Sequence4
ATCTCAGGAA
</code></pre>
<p>And the list with the new sequences is similar to this one:</p>
<pre><code>newSequences_ok=[ATGG,TTTC,GGTA,CTCG]
</code></pre>
<p>The output that I would like to get is:</p>
<pre><code>>Sequence1
ATGG
>Sequence2
TTTC
>Sequence3
GGTA
>Sequence4
CTCG
</code></pre>
| 1 | 2016-10-03T11:45:04Z | 39,851,540 | <p>I think that this <em>might</em> work:</p>
<pre><code>from Bio import SeqIO

records = SeqIO.parse("ma-all-mito.fa", "fasta")
with open("outfile.fa", "w") as f:
    for r, s in zip(records, newSequences_ok):
        f.write('>' + r.id + '\n')
f.write(s + '\n')
</code></pre>
<p>If not (and even if it does) -- you really need to read up on how <a href="http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc32" rel="nofollow">Biopython</a> works. You were treating <code>SeqIO.parse</code> as something which directly returns the lines of the file. Instead, it returns <code>SeqRecord</code> objects, each of which has an <code>id</code> attribute (the header without the leading <code>&gt;</code>) and a <code>seq</code> attribute holding a <code>Seq</code> object with the sequence itself. You should concentrate on being able to extract the information that you are interested in before you try to modify it.</p>
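Since only the record IDs and the new sequences are needed for the output, the FASTA text itself can even be assembled without Biopython — a plain-Python sketch using the data from the question:

```python
ids = ['Sequence1', 'Sequence2', 'Sequence3', 'Sequence4']
new_sequences = ['ATGG', 'TTTC', 'GGTA', 'CTCG']

# One '>header\nsequence\n' record per pair
fasta = ''.join('>%s\n%s\n' % (i, s) for i, s in zip(ids, new_sequences))
print(fasta)
```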
| 1 | 2016-10-04T11:49:51Z | [
"python",
"biopython",
"fasta"
]
|
Use LeaveOneGroupOut strategy on cross_val_score in sklearn | 39,830,848 | <p>I'd like to use <code>LeaveOneGroupOut</code> strategy to evaluate my model. According to <a href="http://scikit-learn.org/stable/modules/cross_validation.html#computing-cross-validated-metrics" rel="nofollow">sklearn's document</a>, <code>cross_val_score</code> seems convenient. </p>
<p>However, the following code does not work.</p>
<pre><code>import sklearn
from sklearn import datasets
iris = datasets.load_iris()
from sklearn.model_selection import cross_val_score
clf = sklearn.svm.SVC(kernel='linear', C=1)
# cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0) # => this works
cv = LeaveOneGroupOut # => this does not work
scores = cross_val_score(clf, iris.data, iris.target, cv=cv)
</code></pre>
<p>The error message is:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-40-435a3a7fa16c> in <module>()
4 from sklearn.model_selection import cross_val_score
5 clf = sklearn.svm.SVC(kernel='linear', C=1)
----> 6 scores = cross_val_score(clf, iris.data, iris.target, cv=LeaveOneGroupOut())
7 scores
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/model_selection/_validation.pyc in cross_val_score(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch)
138 train, test, verbose, None,
139 fit_params)
--> 140 for train, test in cv.split(X, y, groups))
141 return np.array(scores)[:, 0]
142
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
756 # was dispatched. In particular this covers the edge
757 # case of Parallel used with an exhausted iterator.
--> 758 while self.dispatch_one_batch(iterator):
759 self._iterating = True
760 else:
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch_one_batch(self, iterator)
601
602 with self._lock:
--> 603 tasks = BatchedCalls(itertools.islice(iterator, batch_size))
604 if len(tasks) == 0:
605 # No more tasks available in the iterator: tell caller to stop.
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __init__(self, iterator_slice)
125
126 def __init__(self, iterator_slice):
--> 127 self.items = list(iterator_slice)
128 self._size = len(self.items)
129
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/model_selection/_validation.pyc in <genexpr>(***failed resolving arguments***)
135 parallel = Parallel(n_jobs=n_jobs, verbose=verbose,
136 pre_dispatch=pre_dispatch)
--> 137 scores = parallel(delayed(_fit_and_score)(clone(estimator), X, y, scorer,
138 train, test, verbose, None,
139 fit_params)
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/model_selection/_split.pyc in split(self, X, y, groups)
88 X, y, groups = indexable(X, y, groups)
89 indices = np.arange(_num_samples(X))
---> 90 for test_index in self._iter_test_masks(X, y, groups):
91 train_index = indices[np.logical_not(test_index)]
92 test_index = indices[test_index]
/Users/xxx/.pyenv/versions/anaconda-2.0.1/lib/python2.7/site-packages/sklearn/model_selection/_split.pyc in _iter_test_masks(self, X, y, groups)
770 def _iter_test_masks(self, X, y, groups):
771 if groups is None:
--> 772 raise ValueError("The groups parameter should not be None")
773 # We make a copy of groups to avoid side-effects during iteration
774 groups = np.array(groups, copy=True)
ValueError: The groups parameter should not be None
scores
</code></pre>
| 0 | 2016-10-03T11:46:31Z | 39,831,546 | <p>You did not define your <strong>groups</strong> parameter, the array of group labels according to which your data will be split.</p>
<p>The error comes from <strong>cross_val_score</strong>, which takes this parameter as an argument: in your case it is <strong>None</strong>, so <code>LeaveOneGroupOut</code> has nothing to split on.</p>
<p>Try to follow the example below:</p>
<pre><code>import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([1, 2, 1, 2])
groups = np.array([1, 1, 2, 2])
lol = LeaveOneGroupOut()
</code></pre>
<p>You have:</p>
<pre><code>[In] lol.get_n_splits(X, y, groups)
[Out] 2
</code></pre>
<p>Then you will be able to use:</p>
<pre><code>lol.split(X, y, groups)
</code></pre>
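<p>Once the arrays are defined, the fix for the original error is to pass <code>groups</code> through to <code>cross_val_score</code> itself. A minimal, self-contained sketch reusing the toy arrays above (the <code>LogisticRegression</code> estimator is illustrative, not from the question):</p>

```python
# Passing groups= to cross_val_score forwards it to LeaveOneGroupOut.split,
# so the "groups parameter should not be None" error disappears.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([1, 2, 1, 2])
groups = np.array([1, 1, 2, 2])

scores = cross_val_score(LogisticRegression(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(len(scores))  # one score per left-out group
```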
| 1 | 2016-10-03T12:22:21Z | [
"python",
"machine-learning",
"scikit-learn"
]
|
Can I create a Google Drive folder using service account with Python? | 39,830,956 | <p>I can create a folder using account Owner but cannot share it with service account.</p>
<p>Is there a way for My Python script to create a folder using service account and then upload a file?</p>
| 0 | 2016-10-03T11:53:23Z | 39,848,117 | <p>You can create a folder and upload file, same process flow as a real user when you using a service account.</p>
<h3>BUT</h3>
<p>As stated in this <a href="http://www.riskcompletefailure.com/2015/03/understanding-service-accounts.html" rel="nofollow">blog</a>, you have to remember that:</p>
<blockquote>
<p>A service account is a sort of virtual user or robot account associated with a given project. It has an email address, and a cryptographic key pair.</p>
</blockquote>
<p>Also, there is a statement from a related <a href="http://stackoverflow.com/a/22247317/5995040">SO question</a> that:</p>
<blockquote>
<p>There is no UI for Service Accounts. The files are only accessible to your app.</p>
</blockquote>
<p>I suggest that you share a folder from a real account to your service account so that accessing the files would be easier.</p>
<p>Hope this helps!</p>
| 0 | 2016-10-04T08:56:52Z | [
"python",
"google-drive-sdk"
]
|
combine two id into a new table? | 39,831,016 | <p>i had a task about text processing and i don't know how to combine some columns from separate tables into one table</p>
<p>so here is the case:
i have a table named <code>list</code> with <code>id_doc</code>, and <code>title</code> columns
then i create a new table named <code>term_list</code> which contains a list of result terms when i do some text processing to titles from <code>list</code>.</p>
<p>the <code>term_list</code> table have <code>id_term</code>, <code>term</code>, <code>df</code>, and <code>idf</code> column. Lastly, i want to have a table named <code>term_freq</code> which has columns <code>id</code>, <code>id_term</code>, <code>id_doc</code>, <code>tf</code>, and <code>normalized_tf</code></p>
<p>example :
table <code>list</code> is like this:</p>
<pre><code>id_doc titles
11 information retrieval system
12 operating system
13 business information
</code></pre>
<p>table <code>term_list</code> is below this:</p>
<pre><code>id_term term df idf
21 information 2 --
22 retrieval 1 --
23 system 2 --
24 operating 1 --
25 business 1 --
</code></pre>
<p>I want to ask how to create a table <code>term_freq</code> so that the table becomes like this?</p>
<pre><code>id id_term id_doc tf normalized_tf
31 21 11 1 --
32 22 11 1 --
33 23 11 1 --
34 24 12 1 --
35 23 12 1 --
36 25 13 1 --
37 21 13 1 --
</code></pre>
<p>the main problem is i have to join <code>id_term</code> and <code>id_doc</code> into one table that one <code>id_doc</code> has relation to several <code>id_term</code> but i don't know how to correlate because <code>list</code> and <code>term_list</code>doesn't have any similar column.</p>
<p>Please help :(</p>
| 1 | 2016-10-03T11:56:20Z | 39,832,416 | <p>You can iterate over rows in <code>term_list</code>:</p>
<pre><code>SELECT id_term, term FROM term_list
</code></pre>
<p>for each <code>term</code> make:</p>
<pre><code>SELECT id_doc FROM list WHERE titles LIKE '%term%'
</code></pre>
<p>and save each <code>id_term</code>/<code>id_doc</code> pair into the table <code>term_freq</code>.</p>
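<p>A hypothetical Python sketch of that loop using the standard-library <code>sqlite3</code> module (the question's tables are recreated in memory; matching on <code>split()</code> counts whole-word term frequencies instead of substring hits):</p>

```python
# Hypothetical sketch: rebuild the question's tables in memory and fill
# term_freq by counting each term in each title (whole-word match).
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE list (id_doc INTEGER, titles TEXT)")
c.execute("CREATE TABLE term_list (id_term INTEGER, term TEXT)")
c.execute("CREATE TABLE term_freq (id INTEGER PRIMARY KEY, "
          "id_term INTEGER, id_doc INTEGER, tf INTEGER)")
c.executemany("INSERT INTO list VALUES (?, ?)", [
    (11, 'information retrieval system'),
    (12, 'operating system'),
    (13, 'business information')])
c.executemany("INSERT INTO term_list VALUES (?, ?)", [
    (21, 'information'), (22, 'retrieval'), (23, 'system'),
    (24, 'operating'), (25, 'business')])

for id_term, term in c.execute("SELECT id_term, term FROM term_list").fetchall():
    for id_doc, titles in conn.execute("SELECT id_doc, titles FROM list"):
        tf = titles.split().count(term)   # term frequency in this title
        if tf:
            c.execute("INSERT INTO term_freq (id_term, id_doc, tf) "
                      "VALUES (?, ?, ?)", (id_term, id_doc, tf))

print(c.execute("SELECT COUNT(*) FROM term_freq").fetchone()[0])  # 7
```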
| 0 | 2016-10-03T13:06:13Z | [
"python",
"mysql",
"database",
"text-processing",
"information-retrieval"
]
|
Binary to Hexadecimal converter | 39,831,062 | <p>I have written a code to try and convert a 4-bit binary number into Hexadecimal. Only thing is, when I type a value that begins with a '1' it comes up with the conversion, whereas if I type a value that starts with a '0' it doesn't work. Any help?</p>
<pre><code>print ("""Here's how the program will work. You need to get your binary number ready.
if it is 8 bit, split it into four, because that's how hexadecimal works. Make sure all your conversions
are split into '4' bits like this:
01001101 will turn into:
0100 ,and then 1101""")
time.sleep(6)
print ("""So, for example, the program will ask you for your binary number. Like this:
Enter your binary number here:
Then you put in your number, like this:
Enter your binary number here: 0100
Lastly, the program will give you your hexadecimal number, then ask you if you would
like to do another conversion in this area, or end program.""")
time.sleep(6)
HEXADECIMAL = int(input("Please enter your binary number here: "))
if HEXADECIMAL == 0000:
print ("Your hexadecimal value is 0")
if HEXADECIMAL == 0001:
print ("Your hexadecimal value is 1")
if HEXADECIMAL == 0010:
print ("Your hexadecimal value is 2")
if HEXADECIMAL == 0011:
print ("Your hexadecimal value is 3")
if HEXADECIMAL == 0100:
print ("Your hexadecimal value is 4")
if HEXADECIMAL == 0101:
print ("Your hexadecimal value is 5")
if HEXADECIMAL == 0110:
print ("Your hexadecimal value is 6")
if HEXADECIMAL == 0111:
print ("Your hexadecimal value is 7")
if HEXADECIMAL == 1000:
print ("Your hexadecimal value is 8")
if HEXADECIMAL == 1001:
print ("Your hexadecimal value is 9")
if HEXADECIMAL == 1010:
print ("Your hexadecimal value is A")
if HEXADECIMAL == 1011:
print ("Your hexadecimal value is B")
if HEXADECIMAL == 1100:
print ("Your hexadecimal value is C")
if HEXADECIMAL == 1101:
print ("Your hexadecimal value is D")
if HEXADECIMAL == 1110:
print ("Your hexadecimal value is E")
if HEXADECIMAL == 1111:
print ("Your hexadecimal value is F")
</code></pre>
<p>You can try to run this code and start with, for example, 0110, but it will not convert. Any help?</p>
| 0 | 2016-10-03T11:59:11Z | 39,831,184 | <p>When an integer literal starts with a leading 0, Python 2 interprets it as octal (base 8), and Python 3 rejects it as a syntax error:</p>
<pre><code>a = 0110
print a
</code></pre>
<p>will output</p>
<pre><code>72
</code></pre>
<p>Therefore, drop the leading zeros from the values, or strip them before calling <code>int()</code> on the input.</p>
| 0 | 2016-10-03T12:05:17Z | [
"python",
"binary",
"hex"
]
|
Binary to Hexadecimal converter | 39,831,062 | <p>I have written a code to try and convert a 4-bit binary number into Hexadecimal. Only thing is, when I type a value that begins with a '1' it comes up with the conversion, whereas if I type a value that starts with a '0' it doesn't work. Any help?</p>
<pre><code>print ("""Here's how the program will work. You need to get your binary number ready.
if it is 8 bit, split it into four, because that's how hexadecimal works. Make sure all your conversions
are split into '4' bits like this:
01001101 will turn into:
0100 ,and then 1101""")
time.sleep(6)
print ("""So, for example, the program will ask you for your binary number. Like this:
Enter your binary number here:
Then you put in your number, like this:
Enter your binary number here: 0100
Lastly, the program will give you your hexadecimal number, then ask you if you would
like to do another conversion in this area, or end program.""")
time.sleep(6)
HEXADECIMAL = int(input("Please enter your binary number here: "))
if HEXADECIMAL == 0000:
print ("Your hexadecimal value is 0")
if HEXADECIMAL == 0001:
print ("Your hexadecimal value is 1")
if HEXADECIMAL == 0010:
print ("Your hexadecimal value is 2")
if HEXADECIMAL == 0011:
print ("Your hexadecimal value is 3")
if HEXADECIMAL == 0100:
print ("Your hexadecimal value is 4")
if HEXADECIMAL == 0101:
print ("Your hexadecimal value is 5")
if HEXADECIMAL == 0110:
print ("Your hexadecimal value is 6")
if HEXADECIMAL == 0111:
print ("Your hexadecimal value is 7")
if HEXADECIMAL == 1000:
print ("Your hexadecimal value is 8")
if HEXADECIMAL == 1001:
print ("Your hexadecimal value is 9")
if HEXADECIMAL == 1010:
print ("Your hexadecimal value is A")
if HEXADECIMAL == 1011:
print ("Your hexadecimal value is B")
if HEXADECIMAL == 1100:
print ("Your hexadecimal value is C")
if HEXADECIMAL == 1101:
print ("Your hexadecimal value is D")
if HEXADECIMAL == 1110:
print ("Your hexadecimal value is E")
if HEXADECIMAL == 1111:
print ("Your hexadecimal value is F")
</code></pre>
<p>You can try to run this code and start with, for example, 0110, but it will not convert. Any help?</p>
| 0 | 2016-10-03T11:59:11Z | 39,831,194 | <p>Instead of just coercing a string to integer, you need to pass the second parameter to <a href="https://docs.python.org/3/library/functions.html#int" rel="nofollow"><code>int</code> constructor</a> called <code>base</code>:</p>
<pre><code>BINARY = int(input('Please enter your binary number here: '), 2)
if BINARY == 0b0100:
print('Your hexadecimal value is 4')
</code></pre>
<p>Also you need to prefix your binary <a href="https://en.wikipedia.org/wiki/Nibble" rel="nofollow">nibbles</a> with <code>0b</code>, so Python knows that it's a binary representation.</p>
<p>If you just want to compare strings, then you don't need to coerce at all:</p>
<pre><code>BINARY = input('Please enter your binary number here: ')
if BINARY == '0100':
print('Your hexadecimal value is 4')
</code></pre>
| -1 | 2016-10-03T12:05:39Z | [
"python",
"binary",
"hex"
]
|
Binary to Hexadecimal converter | 39,831,062 | <p>I have written a code to try and convert a 4-bit binary number into Hexadecimal. Only thing is, when I type a value that begins with a '1' it comes up with the conversion, whereas if I type a value that starts with a '0' it doesn't work. Any help?</p>
<pre><code>print ("""Here's how the program will work. You need to get your binary number ready.
if it is 8 bit, split it into four, because that's how hexadecimal works. Make sure all your conversions
are split into '4' bits like this:
01001101 will turn into:
0100 ,and then 1101""")
time.sleep(6)
print ("""So, for example, the program will ask you for your binary number. Like this:
Enter your binary number here:
Then you put in your number, like this:
Enter your binary number here: 0100
Lastly, the program will give you your hexadecimal number, then ask you if you would
like to do another conversion in this area, or end program.""")
time.sleep(6)
HEXADECIMAL = int(input("Please enter your binary number here: "))
if HEXADECIMAL == 0000:
print ("Your hexadecimal value is 0")
if HEXADECIMAL == 0001:
print ("Your hexadecimal value is 1")
if HEXADECIMAL == 0010:
print ("Your hexadecimal value is 2")
if HEXADECIMAL == 0011:
print ("Your hexadecimal value is 3")
if HEXADECIMAL == 0100:
print ("Your hexadecimal value is 4")
if HEXADECIMAL == 0101:
print ("Your hexadecimal value is 5")
if HEXADECIMAL == 0110:
print ("Your hexadecimal value is 6")
if HEXADECIMAL == 0111:
print ("Your hexadecimal value is 7")
if HEXADECIMAL == 1000:
print ("Your hexadecimal value is 8")
if HEXADECIMAL == 1001:
print ("Your hexadecimal value is 9")
if HEXADECIMAL == 1010:
print ("Your hexadecimal value is A")
if HEXADECIMAL == 1011:
print ("Your hexadecimal value is B")
if HEXADECIMAL == 1100:
print ("Your hexadecimal value is C")
if HEXADECIMAL == 1101:
print ("Your hexadecimal value is D")
if HEXADECIMAL == 1110:
print ("Your hexadecimal value is E")
if HEXADECIMAL == 1111:
print ("Your hexadecimal value is F")
</code></pre>
<p>You can try to run this code and start with, for example, 0110, but it will not convert. Any help?</p>
| 0 | 2016-10-03T11:59:11Z | 39,831,200 | <p>replace,</p>
<pre><code>HEXADECIMAL = int(input("Please enter your binary number here: "))
</code></pre>
<p>with,</p>
<pre><code>temp = input("Please enter your binary number here: ")
HEXADECIMAL = int (temp)
</code></pre>
<p>Also, remove the <code>time.sleep</code> calls if they cause an error.</p>
| -1 | 2016-10-03T12:05:52Z | [
"python",
"binary",
"hex"
]
|
Binary to Hexadecimal converter | 39,831,062 | <p>I have written a code to try and convert a 4-bit binary number into Hexadecimal. Only thing is, when I type a value that begins with a '1' it comes up with the conversion, whereas if I type a value that starts with a '0' it doesn't work. Any help?</p>
<pre><code>print ("""Here's how the program will work. You need to get your binary number ready.
if it is 8 bit, split it into four, because that's how hexadecimal works. Make sure all your conversions
are split into '4' bits like this:
01001101 will turn into:
0100 ,and then 1101""")
time.sleep(6)
print ("""So, for example, the program will ask you for your binary number. Like this:
Enter your binary number here:
Then you put in your number, like this:
Enter your binary number here: 0100
Lastly, the program will give you your hexadecimal number, then ask you if you would
like to do another conversion in this area, or end program.""")
time.sleep(6)
HEXADECIMAL = int(input("Please enter your binary number here: "))
if HEXADECIMAL == 0000:
print ("Your hexadecimal value is 0")
if HEXADECIMAL == 0001:
print ("Your hexadecimal value is 1")
if HEXADECIMAL == 0010:
print ("Your hexadecimal value is 2")
if HEXADECIMAL == 0011:
print ("Your hexadecimal value is 3")
if HEXADECIMAL == 0100:
print ("Your hexadecimal value is 4")
if HEXADECIMAL == 0101:
print ("Your hexadecimal value is 5")
if HEXADECIMAL == 0110:
print ("Your hexadecimal value is 6")
if HEXADECIMAL == 0111:
print ("Your hexadecimal value is 7")
if HEXADECIMAL == 1000:
print ("Your hexadecimal value is 8")
if HEXADECIMAL == 1001:
print ("Your hexadecimal value is 9")
if HEXADECIMAL == 1010:
print ("Your hexadecimal value is A")
if HEXADECIMAL == 1011:
print ("Your hexadecimal value is B")
if HEXADECIMAL == 1100:
print ("Your hexadecimal value is C")
if HEXADECIMAL == 1101:
print ("Your hexadecimal value is D")
if HEXADECIMAL == 1110:
print ("Your hexadecimal value is E")
if HEXADECIMAL == 1111:
print ("Your hexadecimal value is F")
</code></pre>
<p>You can try to run this code and start with, for example, 0110, but it will not convert. Any help?</p>
| 0 | 2016-10-03T11:59:11Z | 39,831,373 | <p>You dont have to use the <code>if/else</code> statements evertime. There is a cool way you can use <code>int()</code> in python, if you using a string, <code>int()</code> function can directly convert it to any base for you.</p>
<pre><code>HEXADECIMAL = str(input())
base = 16 # for Hex and 2 for binary
msg = "Your hexadecimal value is"
print (msg, int(HEXADECIMAL, base))
</code></pre>
<p>This will do it for you. But if you want to stick with your method than you should probably add a <code>0b</code> as in <code>0b0010</code> before the binary and do stuff, leading zeroes are illegal, <code>0010</code> is illegal.</p>
| 1 | 2016-10-03T12:13:04Z | [
"python",
"binary",
"hex"
]
|
Python: write list to csv, transposing the first row into a column | 39,831,083 | <p>Using python, I have data stored in a list:</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
</code></pre>
<p>I want to write this list to a csv that looks like the following:</p>
<pre><code>a, 1, 2, 3, 4
b, 5, 6, 7, 8
c, 9, 10, 11, 12
</code></pre>
<p>This is the code I came up with after reading lots of other transposing problems:</p>
<pre><code>length = len(a[0])
with open('test.csv', 'w') as test_file:
file_writer = csv.writer(test_file)
for i in range(length):
file_writer.writerow([x[i] for x in a])
</code></pre>
<p>Which gives me:</p>
<pre><code>a,1,5,9
b,2,6,10
c,3,7,11
</code></pre>
<p>So it transposes the entire list (not to mention that some values even get lose), but as shown above, I only want the first row to be transposed. I just don't know where to even get my hand on.</p>
<p>Thanks to Nf4r, I came up with the following -- may look awkward, but works :-)</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
f = open('test.csv', 'a')
for header, data in zip(a[0], a[1:]):
result = "{header}, {data}".format(header = header,
data = ', '.join((str(n) for n in data)))
f.write(result)
f.write('\n')
f.close()
</code></pre>
| 2 | 2016-10-03T12:00:37Z | 39,831,300 | <p>Use the pandas library for working with dataframes and writing to CSV. It will be quite easy.</p>
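<p>For example, a hedged sketch assuming pandas is an option for this task: put the first sublist into the index and write without a header row:</p>

```python
# The first sublist becomes the row index; the remaining sublists become
# the rows, so writing without a header yields the desired layout.
import io
import pandas as pd

a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
df = pd.DataFrame(a[1:], index=a[0])
buf = io.StringIO()            # swap in a filename to write test.csv instead
df.to_csv(buf, header=False)
print(buf.getvalue())
```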
| -2 | 2016-10-03T12:10:17Z | [
"python",
"csv",
"transpose"
]
|
Python: write list to csv, transposing the first row into a column | 39,831,083 | <p>Using python, I have data stored in a list:</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
</code></pre>
<p>I want to write this list to a csv that looks like the following:</p>
<pre><code>a, 1, 2, 3, 4
b, 5, 6, 7, 8
c, 9, 10, 11, 12
</code></pre>
<p>This is the code I came up with after reading lots of other transposing problems:</p>
<pre><code>length = len(a[0])
with open('test.csv', 'w') as test_file:
file_writer = csv.writer(test_file)
for i in range(length):
file_writer.writerow([x[i] for x in a])
</code></pre>
<p>Which gives me:</p>
<pre><code>a,1,5,9
b,2,6,10
c,3,7,11
</code></pre>
<p>So it transposes the entire list (not to mention that some values even get lose), but as shown above, I only want the first row to be transposed. I just don't know where to even get my hand on.</p>
<p>Thanks to Nf4r, I came up with the following -- may look awkward, but works :-)</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
f = open('test.csv', 'a')
for header, data in zip(a[0], a[1:]):
result = "{header}, {data}".format(header = header,
data = ', '.join((str(n) for n in data)))
f.write(result)
f.write('\n')
f.close()
</code></pre>
| 2 | 2016-10-03T12:00:37Z | 39,831,324 | <p>Maybe something like:</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
for header, data in zip(a[0], a[1:]):
result = "{header}, {data}".format(header= header,
data= ', '.join((str(n) for n in data)))
print(result)
a, 1, 2, 3, 4
b, 5, 6, 7, 8
c, 9, 10, 11, 12
<do what u want with it>
</code></pre>
| 0 | 2016-10-03T12:10:58Z | [
"python",
"csv",
"transpose"
]
|
Python: write list to csv, transposing the first row into a column | 39,831,083 | <p>Using python, I have data stored in a list:</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
</code></pre>
<p>I want to write this list to a csv that looks like the following:</p>
<pre><code>a, 1, 2, 3, 4
b, 5, 6, 7, 8
c, 9, 10, 11, 12
</code></pre>
<p>This is the code I came up with after reading lots of other transposing problems:</p>
<pre><code>length = len(a[0])
with open('test.csv', 'w') as test_file:
file_writer = csv.writer(test_file)
for i in range(length):
file_writer.writerow([x[i] for x in a])
</code></pre>
<p>Which gives me:</p>
<pre><code>a,1,5,9
b,2,6,10
c,3,7,11
</code></pre>
<p>So it transposes the entire list (not to mention that some values even get lose), but as shown above, I only want the first row to be transposed. I just don't know where to even get my hand on.</p>
<p>Thanks to Nf4r, I came up with the following -- may look awkward, but works :-)</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
f = open('test.csv', 'a')
for header, data in zip(a[0], a[1:]):
result = "{header}, {data}".format(header = header,
data = ', '.join((str(n) for n in data)))
f.write(result)
f.write('\n')
f.close()
</code></pre>
| 2 | 2016-10-03T12:00:37Z | 39,832,101 | <p>You just need to pull the first <em>sublist</em> and <em>zip</em> with the remainder then use <em>writerows</em> after combining into a single list:</p>
<pre><code>a = [['a', 'b', 'c'], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
import csv
with open('test.csv', 'w') as test_file:
file_writer = csv.writer(test_file)
it = iter(a)
col_1 = next(it)
    file_writer.writerows([h] + row for h, row in zip(col_1, it))
</code></pre>
| 0 | 2016-10-03T12:50:08Z | [
"python",
"csv",
"transpose"
]
|
How do I avoid cycles across two columns within a DataFrame to render a Sankey diagram with Google Charts? | 39,831,103 | <p>I have a DataFrame with source and destination IPs, and bytes,</p>
<pre><code>[ 1.2.3.4, 8.8.8.8, 123456 ]
...
[ 8.8.8.8, 1.2.3.4, 1234 ]
</code></pre>
<p>My problem is that I am using this DataFrame for a <a href="https://developers.google.com/chart/interactive/docs/gallery/sankey" rel="nofollow">JS visualization</a> which breaks if there is such a cycle. Like between 1.2.3.4 and 8.8.8.8.</p>
<blockquote>
<p>Note: Avoid cycles in your data: if A links to itself, or links to B
which links to C which links to A, your chart will not render</p>
</blockquote>
<p>Is there a way to make sure that I remain uni-directional in the relationship? So in case 8.8.8.8 links back to 1.2.3.4 - and creates a cycle - I'd either skip it or swap the values. I am not sure if swapping is an option with Pandas.</p>
<p>I got myself a quick workaround by limiting it to the top 10 talkers, where a cycle is less likely. But it's incomplete of course.</p>
<pre><code>vis_data += str(netflow_df.groupby(("dip","sip","bytes"), as_index=False).sum()
.sort_values(by="bytes", ascending=False)
.head(10)[["sip", "dip", "bytes"]]
.values.tolist()
)
</code></pre>
| 2 | 2016-10-03T12:01:40Z | 39,831,925 | <p>Without any extra information, would something like this be sufficient? Keep the first direction seen for each IP pair and drop any row whose reversed pair was already kept:</p>
<pre><code>seen = set()
mask = []
for row in netflow_df.itertuples(index=False):
    # drop this row if the reversed (dip, sip) pair was already kept
    mask.append((row.dip, row.sip) not in seen)
    seen.add((row.sip, row.dip))
netflow_df = netflow_df[mask]
</code></pre>
| 1 | 2016-10-03T12:41:14Z | [
"python",
"pandas",
"google-visualization"
]
|
Opening non-utf-8 csv file Python 3 | 39,831,197 | <p>I have a csv file that is not <code>utf-8</code> encoded. And it seems impossible to open it in Python 3. I've tried all kinds of <code>.encode()</code> <code>Windows-1252</code>, <code>ISO-8859-1</code>, <code>latin-1</code> - every time I get</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 279: invalid start byte
</code></pre>
<p>The <code>0xfc</code> byte is the German <code>ü</code></p>
<p>I concede that my judgment is impaired since I was fighting with this issue for a long time now. What am I missing? I've always had problems with unicode in Python, but this one just seems especially stubborn.</p>
<p>This is the first time I try to work with Python 3 and as far as I understand there is no <code>.decode()</code> anymore, which could have solved the issue in the second.</p>
<p>EDIT:
code to open file:</p>
<pre><code>import unicodecsv as csv
csv.reader(open('myFile.csv', 'r'), delimiter = ';')
</code></pre>
| 1 | 2016-10-03T12:05:45Z | 39,831,317 | <p>Simply specify encoding when <strong>opening</strong> the file:</p>
<pre><code>with open("xxx.csv", encoding="latin-1") as fd:
rd = csv.reader(fd)
...
</code></pre>
<p>or with your own code:</p>
<pre><code>csv.reader(open('myFile.csv', 'r', encoding='latin1'), delimiter = ';')
</code></pre>
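<p>A small self-contained sketch of why this works: the <code>0xfc</code> byte from the error is <code>ü</code> in Latin-1 but an invalid start byte in UTF-8 (the sample bytes below are made up):</p>

```python
# Decoding the raw bytes as latin-1 succeeds where utf-8 raised
# UnicodeDecodeError, because every byte value is valid in latin-1.
import csv
import io

raw = b'name;city\nM\xfcller;Berlin\n'   # hypothetical Latin-1 data
rows = list(csv.reader(io.StringIO(raw.decode('latin-1')), delimiter=';'))
print(rows[1])  # ['Müller', 'Berlin']
```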
| 3 | 2016-10-03T12:10:51Z | [
"python",
"csv",
"utf-8"
]
|
How to decode a numpy array of dtype=numpy.string_? | 39,831,230 | <p>I need to decode, with Python 3, a string that was encoded the following way:</p>
<pre><code>>>> s = numpy.asarray(numpy.string_("hello\nworld"))
>>> s
array(b'hello\nworld',
dtype='|S11')
</code></pre>
<p>I tried:</p>
<pre><code>>>> str(s)
"b'hello\\nworld'"
>>> s.decode()
AttributeError Traceback (most recent call last)
<ipython-input-31-7f8dd6e0676b> in <module>()
----> 1 s.decode()
AttributeError: 'numpy.ndarray' object has no attribute 'decode'
>>> s[0].decode()
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-34-fae1dad6938f> in <module>()
----> 1 s[0].decode()
IndexError: 0-d arrays can't be indexed
</code></pre>
| 2 | 2016-10-03T12:06:49Z | 39,831,375 | <p>If my understanding is correct, you can do this with <code>astype</code> which, if <code>copy = False</code> will return the array with the contents in the corresponding type:</p>
<pre><code>>>> s = numpy.asarray(numpy.string_("hello\nworld"))
>>> r = s.astype(str, copy=False)
>>> r
array('hello\nworld',
dtype='<U11')
</code></pre>
| 1 | 2016-10-03T12:13:13Z | [
"python",
"string",
"python-3.x",
"numpy"
]
|
How to decode a numpy array of dtype=numpy.string_? | 39,831,230 | <p>I need to decode, with Python 3, a string that was encoded the following way:</p>
<pre><code>>>> s = numpy.asarray(numpy.string_("hello\nworld"))
>>> s
array(b'hello\nworld',
dtype='|S11')
</code></pre>
<p>I tried:</p>
<pre><code>>>> str(s)
"b'hello\\nworld'"
>>> s.decode()
AttributeError Traceback (most recent call last)
<ipython-input-31-7f8dd6e0676b> in <module>()
----> 1 s.decode()
AttributeError: 'numpy.ndarray' object has no attribute 'decode'
>>> s[0].decode()
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-34-fae1dad6938f> in <module>()
----> 1 s[0].decode()
IndexError: 0-d arrays can't be indexed
</code></pre>
| 2 | 2016-10-03T12:06:49Z | 39,831,542 | <p>In Python 3, there are two types that represent sequences of characters: <code>bytes</code> and <code>str</code> (which contains Unicode characters). When you use <code>string_</code> as your type, numpy will return <code>bytes</code>. If you want the regular <code>str</code>, you should use numpy's <code>unicode_</code> type:</p>
<pre><code>>>> s = numpy.asarray(numpy.unicode_("hello\nworld"))
>>> s
array('hello\nworld',
dtype='<U11')
>>> str(s)
'hello\nworld'
</code></pre>
<p>But note that if you don't specify a type for your string (<code>string_</code> or <code>unicode_</code>), numpy will use the default <code>str</code> type (which in Python 3.x contains Unicode characters).</p>
<pre><code>>>> s = numpy.asarray("hello\nworld")
>>> str(s)
'hello\nworld'
</code></pre>
| 0 | 2016-10-03T12:22:08Z | [
"python",
"string",
"python-3.x",
"numpy"
]
|
How to decode a numpy array of dtype=numpy.string_? | 39,831,230 | <p>I need to decode, with Python 3, a string that was encoded the following way:</p>
<pre><code>>>> s = numpy.asarray(numpy.string_("hello\nworld"))
>>> s
array(b'hello\nworld',
dtype='|S11')
</code></pre>
<p>I tried:</p>
<pre><code>>>> str(s)
"b'hello\\nworld'"
>>> s.decode()
AttributeError Traceback (most recent call last)
<ipython-input-31-7f8dd6e0676b> in <module>()
----> 1 s.decode()
AttributeError: 'numpy.ndarray' object has no attribute 'decode'
>>> s[0].decode()
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-34-fae1dad6938f> in <module>()
----> 1 s[0].decode()
IndexError: 0-d arrays can't be indexed
</code></pre>
| 2 | 2016-10-03T12:06:49Z | 39,836,043 | <p>Another option is the <code>np.char</code> collection of string operations.</p>
<pre><code>In [255]: np.char.decode(s)
Out[255]:
array('hello\nworld',
dtype='<U11')
</code></pre>
<p>It accepts the <code>encoding</code> keyword if needed. But <code>.astype</code> is probably better if you don't need this.</p>
<p>This <code>s</code> is 0d (shape ()), so needs to be indexed with <code>s[()]</code>.</p>
<pre><code>In [268]: s[()]
Out[268]: b'hello\nworld'
In [269]: s[()].decode()
Out[269]: 'hello\nworld'
</code></pre>
<p><code>s.item()</code> also works.</p>
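<p>A compact, self-contained sketch combining these options (a plain <code>bytes</code> literal produces the same 0-d <code>|S11</code> array as <code>np.string_</code>):</p>

```python
# .item() pulls the bytes scalar out of the 0-d array, after which the
# ordinary bytes.decode() applies; astype(str) converts the array itself.
import numpy as np

s = np.asarray(b"hello\nworld")   # same 0-d '|S11' array as in the question
print(s.item().decode())
print(s.astype(str).item())
```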
| 1 | 2016-10-03T16:18:01Z | [
"python",
"string",
"python-3.x",
"numpy"
]
|
I'm confused by pip installations | 39,831,263 | <p>I have recently begun having troubles using pip to install python packages. I have always used pip but never really understood how it actually works, my experience with it is basically limited to "pip install pkg". </p>
<p>Recently when trying to install openCV on my machine, I followed a few guides that involved changing paths etc. Since making these changes I have been having trouble using pip to install packages correctly. </p>
<p>Now when I run "pip3 install pkg", the install runs fine without any errors. When I try to import the module in python however, python cannot find the package. If I run "pip3 list" in the terminal I get a list of modules that is different to running help('modules') within python. </p>
<p>I think pip is installing the packages to a different location than my version of python is referencing when importing modules? </p>
<p>Is there a way I can change where pip installs to? What did it mean to change paths and how can I avoid this in the future?</p>
<p>Thanks in advance. </p>
<p>EDIT: I should mention that running "python3 -m pip install pkg" installs the packages correctly. </p>
| 0 | 2016-10-03T12:08:32Z | 39,847,573 | <p>Because you have 2 versions of python installed, the best solution is to install and use <a href="https://pypi.python.org/pypi/virtualenv" rel="nofollow" title="virtualenv">virtualenv</a>.</p>
<p>A Virtual Environment is a tool to keep all dependencies required by different projects and python versions in separate places. It solves the problem you mentioned and keeps your site-packages directory manageable.</p>
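<p>A hypothetical workflow (the <code>myenv</code> name is illustrative), creating an environment so that <code>pip</code> and <code>python</code> are guaranteed to share one <code>site-packages</code>:</p>

```shell
# The env's pip installs into the same site-packages its python imports
# from, so "pip install" and "import" can no longer disagree.
python3 -m venv myenv
myenv/bin/python -m pip --version
```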
| 0 | 2016-10-04T08:28:27Z | [
"python",
"terminal",
"pip"
]
|
Multi-variable for loop with a tuple and a array | 39,831,267 | <pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
for i,c,t in Index, Cols, Temp:
print(i,c,t)
</code></pre>
<p>I wish to have i as a tuple, c as an integer and t as a string. When i try the above I keep getting a: </p>
<p><strong>ValueError: too many values to unpack (expected 3)</strong></p>
<p>So I tried following as suggested by other solutions:</p>
<pre><code>x = [[(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)], [1,2,3,4,5,6], ['RT','85C','125C','175C','220C','260C']]
for c,i,t in x:
print(i,c,t)
</code></pre>
<p>But got the same error as well. Here is what I wish to obtain is:</p>
<pre><code>First loop:
i = 1
c = (10,11)
t = 'RT'
Second loop:
i = 2
c = (8,9)
t = '85C'
.
.
.
.
</code></pre>
| 0 | 2016-10-03T12:08:40Z | 39,831,310 | <p>You need to use the <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow"><code>zip</code></a> function to iterate through your lists element-wise</p>
<pre><code>for i,c,t in zip(Index, Cols, Temp):
print(i,c,t)
</code></pre>
<p>Output</p>
<pre><code>1 (10, 11) RT
2 (8, 9) 85C
3 (6, 7) 125C
4 (4, 5) 175C
5 (2, 3) 220C
6 (0, 1) 260C
</code></pre>
| 3 | 2016-10-03T12:10:33Z | [
"python",
"python-3.x",
"jupyter-notebook"
]
|
Multi-variable for loop with a tuple and a array | 39,831,267 | <pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
for i,c,t in Index, Cols, Temp:
print(i,c,t)
</code></pre>
<p>I wish to have i as a tuple, c as an integer and t as a string. When i try the above I keep getting a: </p>
<p><strong>ValueError: too many values to unpack (expected 3)</strong></p>
<p>So I tried following as suggested by other solutions:</p>
<pre><code>x = [[(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)], [1,2,3,4,5,6], ['RT','85C','125C','175C','220C','260C']]
for c,i,t in x:
print(i,c,t)
</code></pre>
<p>But got the same error as well. Here is what I wish to obtain is:</p>
<pre><code>First loop:
i = 1
c = (10,11)
t = 'RT'
Second loop:
i = 2
c = (8,9)
t = '85C'
.
.
.
.
</code></pre>
| 0 | 2016-10-03T12:08:40Z | 39,831,331 | <p>All you need is a <code>zip()</code></p>
<pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
for i,c,t in zip(Index, Cols, Temp):
print(i,c,t)
</code></pre>
| 2 | 2016-10-03T12:11:19Z | [
"python",
"python-3.x",
"jupyter-notebook"
]
|
Multi-variable for loop with a tuple and an array | 39,831,267 | <pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
for i,c,t in Index, Cols, Temp:
print(i,c,t)
</code></pre>
<p>I wish to have i as an integer, c as a tuple and t as a string. When I try the above I keep getting a: </p>
<p><strong>ValueError: too many values to unpack (expected 3)</strong></p>
<p>So I tried following as suggested by other solutions:</p>
<pre><code>x = [[(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)], [1,2,3,4,5,6], ['RT','85C','125C','175C','220C','260C']]
for c,i,t in x:
print(i,c,t)
</code></pre>
<p>But got the same error as well. Here is what I wish to obtain:</p>
<pre><code>First loop:
i = 1
c = (10,11)
t = 'RT'
Second loop:
i = 2
c = (8,9)
t = '85C'
.
.
.
.
</code></pre>
| 0 | 2016-10-03T12:08:40Z | 39,831,561 | <blockquote>
<p>You can do it like this:</p>
</blockquote>
<pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
loops_value = ['First','Second','Third','Fourth','Fifth','Sixth']
for j, i, c, t in zip(loops_value, Index, Cols, Temp):
    print("%s loop" % j)
    print('i = ', i)
    print('c = ', c)
    print('t = ', t)
</code></pre>
| 2 | 2016-10-03T12:23:14Z | [
"python",
"python-3.x",
"jupyter-notebook"
]
|
Multi-variable for loop with a tuple and an array | 39,831,267 | <pre><code>Cols = [(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)]
Index = [1,2,3,4,5,6]
Temp = ['RT','85C','125C','175C','220C','260C']
for i,c,t in Index, Cols, Temp:
print(i,c,t)
</code></pre>
<p>I wish to have i as an integer, c as a tuple and t as a string. When I try the above I keep getting a: </p>
<p><strong>ValueError: too many values to unpack (expected 3)</strong></p>
<p>So I tried following as suggested by other solutions:</p>
<pre><code>x = [[(10,11),(8,9),(6,7),(4,5),(2,3),(0,1)], [1,2,3,4,5,6], ['RT','85C','125C','175C','220C','260C']]
for c,i,t in x:
print(i,c,t)
</code></pre>
<p>But got the same error as well. Here is what I wish to obtain:</p>
<pre><code>First loop:
i = 1
c = (10,11)
t = 'RT'
Second loop:
i = 2
c = (8,9)
t = '85C'
.
.
.
.
</code></pre>
| 0 | 2016-10-03T12:08:40Z | 39,833,053 | <p>Basically the for statement iterates over the contents, one by one, so in each iteration, one value is available.</p>
<p>When using</p>
<pre><code> for i,c,t in Index, Cols, Temp:
</code></pre>
<p>you are trying to unpack each six-element list into only three variables; that's why you are getting <strong>ValueError: too many values to unpack</strong>. Since you seem to want to use three different lists in a single iteration, the functions zip(), izip() and izip_longest() can be used for this purpose.</p>
<p>zip() returns a list of tuples where the i-th tuple contains the i-th element from each of the lists passed as arguments to zip().
The returned list is truncated to the length of the shortest list passed.</p>
<pre><code>for i, c, t in zip(Index, Cols, Temp):
    print(i, c, t)
</code></pre>
<p>izip() (from itertools, Python 2 only) works the same as zip(), but it returns an iterator that can be traversed once; for a single traversal it is faster than Python 2's zip(). In Python 3, zip() itself already returns a lazy iterator, so izip() no longer exists.</p>
<pre><code>from itertools import izip  # Python 2 only

for i, c, t in izip(Index, Cols, Temp):
    print i, c, t  # Python 2 print statement
</code></pre>
<p>izip_longest() (renamed zip_longest() in Python 3) works like zip() and izip(), but it is helpful when the iterables are of uneven length: missing values are None by default and can be changed as per requirement.</p>
<pre><code>from itertools import izip_longest  # zip_longest in Python 3

for i, c, t in izip_longest(Index, Cols, Temp, fillvalue=0):
    print i, c, t  # Python 2 print statement
</code></pre>
<p>The fillvalue argument supplies the value (here 0) used for the missing entries when the iterables are of uneven length.</p>
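<p>As a quick illustration of that padding behaviour (a sketch using the Python 3 name <code>zip_longest</code> and deliberately shortened lists from the question):</p>

```python
from itertools import zip_longest  # named izip_longest in Python 2

Index = [1, 2, 3, 4]
Temp = ['RT', '85C']  # shorter on purpose

# Missing Temp entries are padded with the fillvalue
pairs = list(zip_longest(Index, Temp, fillvalue='N/A'))
print(pairs)  # [(1, 'RT'), (2, '85C'), (3, 'N/A'), (4, 'N/A')]
```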
<p>Hope it helps. :) </p>
| 1 | 2016-10-03T13:38:28Z | [
"python",
"python-3.x",
"jupyter-notebook"
]
|
Python pandas check if the last element of a list in a cell contains specific string | 39,831,410 | <pre><code>my dataframe df:
index url
1 [{'url': 'http://bhandarkarscollegekdp.org/'}]
2 [{'url': 'http://cateringinyourhome.com/'}]
3 NaN
4 [{'url': 'http://muddyjunction.com/'}]
5 [{'url': 'http://ecskouhou.jp/'}]
6 [{'url': 'http://andersrice.com/'}]
7 [{'url': 'http://durager.cz/'}, {'url': 'http:andersrice.com'}]
8 [{'url': 'http://milenijum-osiguranje.rs/'}]
9 [{'url': 'http://form-kind.org/'}, {'url': 'https://osiguranje'},{'url': 'http://beseka.com.tr'}]
</code></pre>
<p>I would like to select the rows if the last item in the list of the row of url column contains 'https', while skipping missing values.</p>
<p>My current script </p>
<pre><code>df[df['url'].str[-1].str.contains('https',na=False)]
</code></pre>
<p>returns False values for all the rows while some of them actually contain https. </p>
<p>Can anybody help with it?</p>
| 1 | 2016-10-03T12:14:59Z | 39,831,687 | <p>I think you can first replace <code>NaN</code> with an <code>empty url</code> and then use <code>apply</code>:</p>
<pre><code>df = pd.DataFrame({'url':[[{'url': 'http://bhandarkarscollegekdp.org/'}],
np.nan,
[{'url': 'http://cateringinyourhome.com/'}],
[{'url': 'http://durager.cz/'}, {'url': 'https:andersrice.com'}]]},
index=[1,2,3,4])
print (df)
url
1 [{'url': 'http://bhandarkarscollegekdp.org/'}]
2 NaN
3 [{'url': 'http://cateringinyourhome.com/'}]
4 [{'url': 'http://durager.cz/'}, {'url': 'https...
</code></pre>
<hr>
<pre><code>df.loc[df.url.isnull(), 'url'] = [[{'url':''}]]
print (df)
url
1 [{'url': 'http://bhandarkarscollegekdp.org/'}]
2 [{'url': ''}]
3 [{'url': 'http://cateringinyourhome.com/'}]
4 [{'url': 'http://durager.cz/'}, {'url': 'https...
print (df.url.apply(lambda x: 'https' in x[-1]['url']))
1 False
2 False
3 False
4 True
Name: url, dtype: bool
</code></pre>
<p>First solution:</p>
<pre><code>df.loc[df.url.notnull(), 'a'] = \
    df.loc[df.url.notnull(), 'url'].apply(lambda x: 'https' in x[-1]['url'])
df.a.fillna(False, inplace=True)
print (df)
url a
1 [{'url': 'http://bhandarkarscollegekdp.org/'}] False
2 NaN False
3 [{'url': 'http://cateringinyourhome.com/'}] False
4 [{'url': 'http://durager.cz/'}, {'url': 'https... True
</code></pre>
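<p>To then actually select the matching rows (the question's goal), the boolean column can be used as a mask. A self-contained sketch that instead skips the missing values with an <code>isinstance</code> check, using the same toy data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'url': [
    [{'url': 'http://bhandarkarscollegekdp.org/'}],
    np.nan,
    [{'url': 'http://cateringinyourhome.com/'}],
    [{'url': 'http://durager.cz/'}, {'url': 'https:andersrice.com'}],
]}, index=[1, 2, 3, 4])

# True only for list-valued cells whose last dict's url contains 'https'
mask = df['url'].apply(
    lambda x: isinstance(x, list) and 'https' in x[-1]['url'])
selected = df[mask]
print(selected.index.tolist())  # only row 4 survives the filter
```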
| 1 | 2016-10-03T12:29:14Z | [
"python",
"loops",
"pandas",
"contain"
]
|
Python pandas check if the last element of a list in a cell contains specific string | 39,831,410 | <pre><code>my dataframe df:
index url
1 [{'url': 'http://bhandarkarscollegekdp.org/'}]
2 [{'url': 'http://cateringinyourhome.com/'}]
3 NaN
4 [{'url': 'http://muddyjunction.com/'}]
5 [{'url': 'http://ecskouhou.jp/'}]
6 [{'url': 'http://andersrice.com/'}]
7 [{'url': 'http://durager.cz/'}, {'url': 'http:andersrice.com'}]
8 [{'url': 'http://milenijum-osiguranje.rs/'}]
9 [{'url': 'http://form-kind.org/'}, {'url': 'https://osiguranje'},{'url': 'http://beseka.com.tr'}]
</code></pre>
<p>I would like to select the rows if the last item in the list of the row of url column contains 'https', while skipping missing values.</p>
<p>My current script </p>
<pre><code>df[df['url'].str[-1].str.contains('https',na=False)]
</code></pre>
<p>returns False values for all the rows while some of them actually contain https. </p>
<p>Can anybody help with it?</p>
| 1 | 2016-10-03T12:14:59Z | 39,831,789 | <p>Not sure whether url is a str or some other type.</p>
<p>You can do it like this:</p>
<pre><code>"https" in str(df.url[len(df)-1])
</code></pre>
<p>or </p>
<pre><code>str(df.ix[len(df)-1].url).__contains__("https")
</code></pre>
| 0 | 2016-10-03T12:34:01Z | [
"python",
"loops",
"pandas",
"contain"
]
|
Python - GUI (TKinter), getting variables from other functions without running the whole function again | 39,831,418 | <p>I have a problem I can't really get solved. I have two separate buttons. With one button I want to load a selected file, and with the other one I want to do some searching.
However, I can't get the variables from one function to the other one without running the whole function again. That means the find button is useless now.</p>
<pre><code>from __future__ import print_function
from Tkinter import *
from Tkinter import Tk
from tkFileDialog import askopenfilename
def openFile():
Tk().withdraw()
txtFile = askopenfilename(defaultextension=".txt", filetypes=(("something", "*.txt"),("All Files", "*.*") ))
print(txtFile)
return txtFile
def Function():
txtFile = openFile()
with open(txtFile) as fp, open(('c:/map/test.txt'), 'w') as fo:
for line in fp:
if ('Hello') in line:
content = line.strip() + " Hello detected "
else:
content = line.strip()
fo.write(content + "\n")
def presentGUI():
root = Tk()
root.title("simulation")
# Buttons
button1 = Button(root, text="Select .txt file", command=openFile)
button2 = Button(root, text="Run !", width=28, command=Function)
# grid
button1.grid(row=1, column=1)
button2.grid(row=3, columnspan=3)
root.mainloop()
presentGUI()
</code></pre>
| -1 | 2016-10-03T12:15:31Z | 39,835,190 | <p>Two methods</p>
<ol>
<li>Use a global variable for txtFile</li>
<li>Use OO and create the functions as part of a class so that they can share variables. </li>
</ol>
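<p>For completeness, option 1 boils down to a module-level variable that both callbacks read and write via the <code>global</code> statement. A minimal sketch of that idea, stripped of the Tk widgets so it stands alone (the function names mirror the question; the file path is made up):</p>

```python
txtFile = None  # module-level state shared by both callbacks

def openFile():
    global txtFile
    # in the real program this would be askopenfilename(...)
    txtFile = "c:/map/input.txt"

def Function():
    if txtFile is None:
        openFile()  # fall back to prompting for a file first
    return txtFile

openFile()
print(Function())  # c:/map/input.txt
```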
<p>I've given you an example of option 2, since it is my preferred method of working.</p>
<p>If you click "Run !" and haven't clicked on "Select .txt file" first it will prompt you to select a file anyway.
You can access the self.txtFile variable from any method inside the 'Application' class.</p>
<pre><code>from __future__ import print_function
from Tkinter import *
from Tkinter import Tk
from tkFileDialog import askopenfilename
class Application(Frame):
def __init__(self,parent,**kw):
Frame.__init__(self,parent,**kw)
self.txtFile = None
self.presentGUI()
def openFile(self):
##Tk().withdraw()
self.txtFile = askopenfilename(defaultextension=".txt", filetypes=(("something", "*.txt"),("All Files", "*.*") ))
print(self.txtFile)
def Function(self):
if self.txtFile == None:
self.openFile()
with open(self.txtFile) as fp, open(('c:/map/test.txt'), 'w') as fo:
for line in fp:
if ('Hello') in line:
content = line.strip() + " Hello detected "
else:
content = line.strip()
fo.write(content + "\n")
def presentGUI(self):
# Buttons
self.button1 = Button(self, text="Select .txt file", command=self.openFile)
self.button2 = Button(self, text="Run !", width=28, command=self.Function)
# grid
self.button1.grid(row=1, column=1)
self.button2.grid(row=3, columnspan=3)
if __name__ == '__main__':
root = Tk()
root.title("simulation")
app = Application(root)
app.grid()
root.mainloop()
</code></pre>
| 1 | 2016-10-03T15:29:03Z | [
"python",
"function",
"variables",
"tkinter"
]
|
snimpy.snmp.SNMPNoSuchObject: No such object was found | 39,831,446 | <p>I'm having an exception raised, but I don't get why</p>
<pre><code>snimpy.snmp.SNMPNoSuchObject: No such object was found
</code></pre>
<h3>Code</h3>
<pre><code>from snimpy import manager as snimpy
def snmp(hostname, oids, mibs):
logger.debug(hostname)
logger.debug(oids)
logger.debug(mibs)
for mib in mibs:
snimpy.load(mib)
session = snimpy.snmp.Session(hostname, "public", 1)
details = session.get(*oids)
return [{
'oid': '.' + '.'.join(repr(node) for node in oid[0]),
'value': oid[1]
} for oid in details]
oids = ['.1.3.6.1.2.1.25.3.2.1.3.1', '.1.3.6.1.2.1.43.10.2.1.4.1.1.1.3.6.1.2.1.1.4.0', '.1.3.6.1.2.1.1.1.0', '.1.3.6.1.2.1.1.5.0', '.1.3.6.1.2.1.1.3.0']
hostname = '192.168.2.250'
mibs = ['DISMAN-EVENT-MIB', 'HOST-RESOURCES-MIB', 'SNMPv2-MIB', 'SNMPv2-SMI']
snmp(hostname, oids, mibs)
</code></pre>
<h3>Error</h3>
<pre><code>>>> scanner.get_device_infos('192.168.2.250')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "~/project/daemon/api/scanner.py", line 62, in get_device_infos
infos = self.network_tools.snmp(hostname, oids, mibs)
File "~/project/daemon/api/network_tools.py", line 26, in snmp
logger.debug(type(oids))
File "~/project/env/lib/python3.5/site-packages/snimpy/snmp.py", line 286, in get
return self._op(self._cmdgen.getCmd, *oids)
File "~/project/env/lib/python3.5/site-packages/snimpy/snmp.py", line 278, in _op
return tuple([(oid, self._convert(val)) for oid, val in results])
File "~/project/env/lib/python3.5/site-packages/snimpy/snmp.py", line 278, in <listcomp>
return tuple([(oid, self._convert(val)) for oid, val in results])
File "~/project/env/lib/python3.5/site-packages/snimpy/snmp.py", line 249, in _convert
self._check_exception(value)
File "~/project/env/lib/python3.5/site-packages/snimpy/snmp.py", line 217, in _check_exception
raise SNMPNoSuchObject("No such object was found") # nopep8
snimpy.snmp.SNMPNoSuchObject: No such object was found
</code></pre>
<h3>Doing it in bash works</h3>
<p>Reaching the equipment using <code>bash</code> and <code>snmpget</code> command works fine:</p>
<pre><code>declare -a oids=(
'.1.3.6.1.2.1.25.3.2.1.3.1' # HOST-RESOURCES-MIB::hrDeviceDescr.1
'.1.3.6.1.2.1.43.10.2.1.4.1.1' # SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 page count
'.1.3.6.1.2.1.1.4.0' # SNMPv2-MIB::sysContact.0
'.1.3.6.1.2.1.1.1.0' # SNMPv2-MIB::sysDescr.0
'.1.3.6.1.2.1.1.5.0' # SNMPv2-MIB::sysName.0
'.1.3.6.1.2.1.1.3.0' # DISMAN-EVENT-MIB::sysUpTimeInstance
)
for oid in ${oids[@]}; do
echo "$oid"
snmpget -v 1 -t .3 -r 2 -c public 192.168.2.250 -m +SNMPv2-MIB "$oid"
echo
done
</code></pre>
<p><strong>output:</strong></p>
<pre><code>.1.3.6.1.2.1.25.3.2.1.3.1
HOST-RESOURCES-MIB::hrDeviceDescr.1 = STRING: Brother HL-5250DN series
.1.3.6.1.2.1.43.10.2.1.4.1.1
SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 = Counter32: 22629
.1.3.6.1.2.1.1.4.0
SNMPv2-MIB::sysContact.0 = STRING:
.1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysDescr.0 = STRING: Brother NC-6400h, Firmware Ver.1.01 (05.08.31),MID 84UZ92
.1.3.6.1.2.1.1.5.0
SNMPv2-MIB::sysName.0 = STRING: BRN_7D3B43
.1.3.6.1.2.1.1.3.0
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (168019770) 19 days, 10:43:17.70
</code></pre>
<h3>Question</h3>
<p>What's the matter here?</p>
| 0 | 2016-10-03T12:16:43Z | 39,832,038 | <p>One of the <code>oid</code>s seems to trigger the error (cf. <code>'.1.3.6.1.2.1.43.10.2.1.4.1.1' # SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 page count</code>); moving it last fixes the error, but this is a dubious solution.</p>
<pre><code>oids = [
'.1.3.6.1.2.1.25.3.2.1.3.1', # HOST-RESOURCES-MIB::hrDeviceDescr.1
'.1.3.6.1.2.1.1.4.0', # SNMPv2-MIB::sysContact.0
'.1.3.6.1.2.1.1.1.0', # SNMPv2-MIB::sysDescr.0
'.1.3.6.1.2.1.1.5.0', # SNMPv2-MIB::sysName.0
'.1.3.6.1.2.1.1.3.0', # DISMAN-EVENT-MIB::sysUpTimeInstance
# ugly duckling
'.1.3.6.1.2.1.43.10.2.1.4.1.1' # SNMPv2-SMI::mib-2.43.10.2.1.4.1.1 page count
]
</code></pre>
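<p>A more systematic way to pinpoint the offending OID is to request them one at a time and collect the failures. The sketch below is self-contained, so a tiny fake session stands in for snimpy; real code would call <code>snimpy.snmp.Session(...).get(oid)</code> and catch <code>snimpy.snmp.SNMPNoSuchObject</code> instead of <code>ValueError</code>:</p>

```python
class FakeSession(object):
    """Stand-in for snimpy.snmp.Session, for illustration only."""
    def get(self, *oids):
        for oid in oids:
            if oid == '.1.3.6.1.2.1.43.10.2.1.4.1.1':
                raise ValueError("No such object was found")
        return [(oid, 'ok') for oid in oids]

session = FakeSession()
oids = ['.1.3.6.1.2.1.1.5.0', '.1.3.6.1.2.1.43.10.2.1.4.1.1',
        '.1.3.6.1.2.1.1.3.0']

bad_oids = []
for oid in oids:
    try:
        session.get(oid)  # one OID per request isolates the failure
    except ValueError:
        bad_oids.append(oid)

print(bad_oids)  # ['.1.3.6.1.2.1.43.10.2.1.4.1.1']
```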
| 0 | 2016-10-03T12:47:15Z | [
"python",
"snmp",
"snimpy"
]
|
Dynamic parameters in JSON serializable class | 39,831,794 | <p>I am trying to create a JSON object with <code>json</code> library from Python 2.7. I am creating a class with needed parameters to serialize like:</p>
<pre><code>class DataMessage:
channelID = 0
messageID = 0
timestamp = 0
voltageRMS = 0
currentRMS = 0
voltageDC = []
currentDC = []
</code></pre>
<p><br>But when serializing it to JSON I need the names of the parameters to change depending on channelID. For example, when channelID=1 the data should be serialized like: </p>
<pre><code>{
"messageID" = id,
"timestamp" = 32432,
"voltageRMS1" = 548,
"currentRMS1" = 5548,
"voltageDC1_1" = 43,
"voltageDC1_2" = 44,
"voltageDC1_3" = 45,
# ....
"currentDC1_1" = 32,
# ....
}
</code></pre>
<p>I didn't find any functionality in this library that will exclude some of the serialized parameters (<code>channelID</code>) or dynamically create an array of <code>params(voltageDC[])</code>.</p>
<p>So, the details for the JSON serialised message:</p>
<ol>
<li><code>voltageRMS1</code> - refers to the fact that <code>channelID = 1</code></li>
<li><code>voltageDC[]</code> array will expand for each value in <code>voltageDC_1 = ...</code></li>
<li><code>channelID</code> will not be serialized, just taken into consideration for parameters names.</li>
</ol>
| 0 | 2016-10-03T12:34:14Z | 39,832,051 | <p>You will have to implement a custom <a href="https://docs.python.org/2/library/json.html#json.JSONEncoder" rel="nofollow"><code>JSONEncoder</code></a> for your class that unpacks each array:</p>
<pre><code>from json import JSONEncoder
class MyEncoder(JSONEncoder):
def default(self, o):
result = {
'messageID': o.messageID,
...
}
for n, item in enumerate(o.voltageDC):
result['voltageDC{}_{}'.format(o.channelID, n)] = item
# and so on...
return result
</code></pre>
<p>You can then call <code>json.dumps()</code> with your custom encoder class to get the JSON output:</p>
<pre><code>dm = DataMessage()
...
json.dumps(dm, cls=MyEncoder)
</code></pre>
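<p>A self-contained sketch of the same pattern, with a simplified field set and made-up values, showing how the channel number ends up only in the key names (note that the encoder never emits <code>channelID</code> itself; it runs under both Python 2.7 and 3):</p>

```python
import json
from json import JSONEncoder

class DataMessage(object):
    def __init__(self, channelID, messageID, voltageDC):
        self.channelID = channelID
        self.messageID = messageID
        self.voltageDC = voltageDC

class MyEncoder(JSONEncoder):
    def default(self, o):
        result = {'messageID': o.messageID}
        # channelID is only used to build the key names, not serialized itself
        for n, item in enumerate(o.voltageDC, start=1):
            result['voltageDC%d_%d' % (o.channelID, n)] = item
        return result

dm = DataMessage(channelID=1, messageID=7, voltageDC=[43, 44, 45])
print(json.dumps(dm, cls=MyEncoder, sort_keys=True))
# {"messageID": 7, "voltageDC1_1": 43, "voltageDC1_2": 44, "voltageDC1_3": 45}
```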
| 1 | 2016-10-03T12:47:42Z | [
"python",
"json",
"serialization"
]
|
json2html, python: json data not converted to html | 39,831,894 | <p>I'm trying to format json data to html using json2html.
The json data looks like this:</p>
<pre><code>json_obj = [{"Agent Status": "status1", "Last backup": "", "hostId": 1234567, "hostfqdn": "test1.example.com", "username": "user1"}, {"Agent Status": "status2", "Last backup": "", "hostId": 2345678, "hostfqdn": "test2.example.com", "username": "user2"}]
</code></pre>
<p>As already reported in post "<a href="http://stackoverflow.com/questions/38633629/json2html-not-a-valid-json-list-python">json2html not a valid json list python</a>", to make the code work, the json parameter must be a dictionary and not a list, so I'm calling it this way:</p>
<pre><code>json_obj_in_html = json2html.convert(json = { "data" : json_obj })
</code></pre>
<p>However, it does not format the json data to html, only the first level of the dictionary { "data" : json_obj }:</p>
<pre><code>print json_obj_in_html
<table border="1"><tr><th>data</th><td>[{"Agent Status": "status1", "Last backup": "", "hostId": 1234567, "hostfqdn": "test1.example.com", "username": "user1"}, {"Agent Status": "status2", "Last backup": "", "hostId": 2345678, "hostfqdn": "test2.example.com", "username": "user2"}]</td></tr></table>
</code></pre>
<p>Note that the <strong>online</strong> convert tool provides the right output: <a href="http://json2html.varunmalhotra.xyz/" rel="nofollow">http://json2html.varunmalhotra.xyz/</a></p>
<pre><code><table border="1"><tr><th>data</th><td><ul><table border="1"><tr><th>Agent Status</th><td>status1</td></tr><tr><th>Last backup</th><td></td></tr><tr><th>hostId</th><td>1234567</td></tr><tr><th>hostfqdn</th><td>test1.example.com</td></tr><tr><th>username</th><td>user1</td></tr></table><table border="1"><tr><th>Agent Status</th><td>status2</td></tr><tr><th>Last backup</th><td></td></tr><tr><th>hostId</th><td>2345678</td></tr><tr><th>hostfqdn</th><td>test2.example.com</td></tr><tr><th>username</th><td>user2</td></tr></table></ul></td></tr></table>
</code></pre>
<p>Any help would be very welcome.</p>
| 0 | 2016-10-03T12:39:22Z | 39,832,966 | <p>Make sure that <code>json_obj</code> is an array of objects and not a string (<code>str</code>).</p>
<p>I put your code into a complete sample:</p>
<pre><code>from json2html import *
json_obj = [{"Agent Status": "status1", "Last backup": "", "hostId": 1234567, "hostfqdn": "test1.example.com", "username": "user1"}, {"Agent Status": "status2", "Last backup": "", "hostId": 2345678, "hostfqdn": "test2.example.com", "username": "user2"}]
json_obj_in_html = json2html.convert(json = { "data" : json_obj })
print json_obj_in_html
</code></pre>
<p>With <strong>Python 2.7</strong> and <strong>json2html 1.0.1</strong> this leads to this result:
<a href="http://i.stack.imgur.com/Vv5Dy.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vv5Dy.png" alt="enter image description here"></a></p>
<p>If you receive a result like</p>
<pre><code><table border="1"><tr><th>data</th><td>[{"Agent Status": "sta...
</code></pre>
<p>it is very likely that <code>json_obj</code> is a <code>str</code> and not an array of objects. You can check this by inserting a statement like</p>
<pre><code>print type(json_obj)
</code></pre>
<p>before <code>json2html.convert</code>. I assume that <code>type(json_obj)</code> returns a <code><type 'str'></code> in your case and that is why the JSON-like string appears in your html. To get it right you have to modify your code so that <code>type(json_obj)</code> returns <code><type 'list'></code>.</p>
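<p>If <code>json_obj</code> arrived as a string (for example read from a file or an HTTP response), the standard <code>json</code> module converts it into the proper list of dicts. A small sketch of that fix (the sample string is shortened):</p>

```python
import json

json_str = '[{"Agent Status": "status1"}, {"Agent Status": "status2"}]'
json_obj = json.loads(json_str)  # parse the JSON text into Python objects

print(type(json_obj))  # now a list, not a str
print(json_obj[0])
```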
| 0 | 2016-10-03T13:34:45Z | [
"python",
"html",
"json",
"json2html"
]
|