title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
How can I print the Truth value of a variable?
| 39,604,780 |
<p>In Python, variables have truthy values based on their content. For example:</p>
<pre><code>>>> def a(x):
... if x:
... print (True)
...
>>> a('')
>>> a(0)
>>> a('a')
True
>>>
>>> a([])
>>> a([1])
True
>>> a([None])
True
>>> a([0])
True
</code></pre>
<p>I also know I can print the truth value of a comparison without an <code>if</code> statement at all:</p>
<pre><code>>>> print (1==1)
True
>>> print (1<5)
True
>>> print (5<1)
False
</code></pre>
<p>But how can I print the <code>True</code> / <code>False</code> value of a variable? Currently, I'm doing this:</p>
<pre><code>print (not not a)
</code></pre>
<p>but that looks a little inelegant. Is there a preferred way?</p>
| 2 |
2016-09-20T22:36:06Z
| 39,604,788 |
<p>Use the builtin <code>bool</code> type.</p>
<pre><code>print(bool(a))
</code></pre>
<p>Some examples from the REPL:</p>
<pre><code>>>> print(bool(''))
False
>>> print(bool('a'))
True
>>> print(bool([]))
False
</code></pre>
| 4 |
2016-09-20T22:37:09Z
|
[
"python",
"python-3.x",
"boolean",
"output"
] |
How to compress a file with bzip2 in Python?
| 39,604,843 |
<p>Here is what I have:</p>
<pre><code>import bz2
compressionLevel = 9
source_file = '/foo/bar.txt' #this file can be in a different format, like .csv or others...
destination_file = '/foo/bar.bz2'
tarbz2contents = bz2.compress(source_file, compressionLevel)
fh = open(destination_file, "wb")
fh.write(tarbz2contents)
fh.close()
</code></pre>
<p><strong>I know the first parameter of bz2.compress is data, but this is the simplest way I found to show what I need.</strong></p>
<p>I also know about BZ2File, but I cannot find any good example of how to use it.</p>
| 0 |
2016-09-20T22:43:29Z
| 39,604,890 |
<p>The <a href="https://docs.python.org/2/library/bz2.html#bz2.compress" rel="nofollow">documentation for bz2.compress</a> says it takes data, not a file name.<br>
Try replacing that line with:</p>
<pre><code>tarbz2contents = bz2.compress(open(source_file, 'rb').read(), compressionLevel)
</code></pre>
<p>...or, a bit more tidily:</p>
<pre><code>with open(source_file, 'rb') as data:
    tarbz2contents = bz2.compress(data.read(), compressionLevel)
</code></pre>
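<p>Since the question mentions <code>BZ2File</code>: a minimal sketch (reusing the <code>source_file</code> / <code>destination_file</code> names from the question) that streams the data through <code>bz2.BZ2File</code>, so the whole file never has to sit in memory:</p>
<pre><code>import bz2
import shutil

# stream-copy the source into a bzip2-compressed file
with open(source_file, 'rb') as src, bz2.BZ2File(destination_file, 'wb', compresslevel=9) as dst:
    shutil.copyfileobj(src, dst)
</code></pre>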
| 1 |
2016-09-20T22:48:03Z
|
[
"python",
"compression",
"bzip2"
] |
Qthread is still working when i close gui on python pyqt
| 39,604,903 |
<p>My code has a thread, but when I close the GUI it keeps working in the background. How can I stop the thread? Is there something like stop() or close()?
I don't use signals and slots; must I use them?</p>
<pre><code>from PyQt4 import QtGui, QtCore
import sys
import time
import threading
class Main(QtGui.QMainWindow):
def __init__(self, parent=None):
super(Main, self).__init__(parent)
self.kac_ders=QtGui.QComboBox()
self.bilgi_cek=QtGui.QPushButton("Save")
self.text=QtGui.QLineEdit()
self.widgetlayout=QtGui.QFormLayout()
self.widgetlar=QtGui.QWidget()
self.widgetlar.setLayout(self.widgetlayout)
self.bilgiler=QtGui.QTextBrowser()
self.bilgi_cek.clicked.connect(self.on_testLoop)
self.scrollArea = QtGui.QScrollArea()
self.scrollArea.setWidgetResizable(True)
self.scrollArea.setWidget(self.widgetlar)
self.analayout=QtGui.QVBoxLayout()
self.analayout.addWidget(self.text)
self.analayout.addWidget(self.bilgi_cek)
self.analayout.addWidget(self.bilgiler)
self.centralWidget=QtGui.QWidget()
self.centralWidget.setLayout(self.analayout)
self.setCentralWidget(self.centralWidget)
def on_testLoop(self):
self.c_thread=threading.Thread(target=self.kontenjan_ara)
self.c_thread.start()
def kontenjan_ara(self):
while(1):
self.bilgiler.append(self.text.text())
time.sleep(10)
app = QtGui.QApplication(sys.argv)
myWidget = Main()
myWidget.show()
app.exec_()
</code></pre>
| 3 |
2016-09-20T22:49:30Z
| 39,605,384 |
<p>I chose to rewrite this answer a bit, because I had failed to properly look at the problem's context. As the other answers and comments point out, your code lacks thread-safety. </p>
<p>The best way to fix this is to try to really think "in threads", restricting yourself to only using objects living in the same thread, or functions that are known to be threadsafe.</p>
<p>Throwing in some signals and slots will help, but maybe you want to think back a bit to your original problem. In your current code, each time the button is pressed, <strong>a new thread is launched</strong> that will, every 10 seconds, do 2 things:
- Read some text from <code>self.text</code>
- Append it to <code>self.bilgiler</code></p>
<p>Both of these operations are non-threadsafe, and <strong>must</strong> be called from the thread that owns these objects (the main thread). You want to make the worker threads "schedule & wait for" the read & append operations, instead of simply "executing" them.</p>
<p>I recommend using the other answer (the thread halting problem is automatically fixed by using proper QThreads that integrate well with Qt's event loop), which would make you use a cleaner approach, more integrated with Qt.</p>
<p>You may also want to rethink your problem, because maybe there is a simpler approach, for example: not spawning a thread each time <code>bilgi_cek</code> is clicked, or using <code>Queue</code> objects so that your worker is completely agnostic of your GUI and only interacts with it through threadsafe objects (sketched below).</p>
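<p>For illustration, a minimal sketch of that <code>Queue</code>-based idea, assuming Python 3's <code>queue</code> module and the widget names from the question: the worker thread only touches the thread-safe queue, and a <code>QTimer</code> running in the main thread drains it into the GUI. This is a sketch of the relevant methods, not a complete program.</p>
<pre><code>import queue, threading, time
from PyQt4 import QtGui, QtCore

class Main(QtGui.QMainWindow):
    def __init__(self, parent=None):
        ...  # widgets as in the question, plus:
        self.results = queue.Queue()
        self.poll_timer = QtCore.QTimer(self)
        self.poll_timer.timeout.connect(self.drain_queue)
        self.poll_timer.start(500)              # main thread checks the queue twice a second

    def on_testLoop(self):
        text = self.text.text()                 # read the GUI value while still in the main thread
        # daemon=True lets the interpreter exit even if the loop is still sleeping
        self.c_thread = threading.Thread(target=self.kontenjan_ara,
                                         args=(text,), daemon=True)
        self.c_thread.start()

    def kontenjan_ara(self, text):
        while True:
            self.results.put(text)              # thread-safe; no GUI access in the worker
            time.sleep(10)

    def drain_queue(self):
        while not self.results.empty():
            self.bilgiler.append(self.results.get())   # GUI touched only in the main thread
</code></pre>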
<p>Good luck, sorry if I caused any confusion. My original answer is still available <a href="http://stackoverflow.com/revisions/39605384/2">here</a>. I think it would be wise to mark the other answer as the valid answer for this question.</p>
| 2 |
2016-09-20T23:44:30Z
|
[
"python",
"python-3.x",
"pyqt",
"pyqt4",
"qthread"
] |
Qthread is still working when i close gui on python pyqt
| 39,604,903 |
<p>My code has a thread, but when I close the GUI it keeps working in the background. How can I stop the thread? Is there something like stop() or close()?
I don't use signals and slots; must I use them?</p>
<pre><code>from PyQt4 import QtGui, QtCore
import sys
import time
import threading
class Main(QtGui.QMainWindow):
def __init__(self, parent=None):
super(Main, self).__init__(parent)
self.kac_ders=QtGui.QComboBox()
self.bilgi_cek=QtGui.QPushButton("Save")
self.text=QtGui.QLineEdit()
self.widgetlayout=QtGui.QFormLayout()
self.widgetlar=QtGui.QWidget()
self.widgetlar.setLayout(self.widgetlayout)
self.bilgiler=QtGui.QTextBrowser()
self.bilgi_cek.clicked.connect(self.on_testLoop)
self.scrollArea = QtGui.QScrollArea()
self.scrollArea.setWidgetResizable(True)
self.scrollArea.setWidget(self.widgetlar)
self.analayout=QtGui.QVBoxLayout()
self.analayout.addWidget(self.text)
self.analayout.addWidget(self.bilgi_cek)
self.analayout.addWidget(self.bilgiler)
self.centralWidget=QtGui.QWidget()
self.centralWidget.setLayout(self.analayout)
self.setCentralWidget(self.centralWidget)
def on_testLoop(self):
self.c_thread=threading.Thread(target=self.kontenjan_ara)
self.c_thread.start()
def kontenjan_ara(self):
while(1):
self.bilgiler.append(self.text.text())
time.sleep(10)
app = QtGui.QApplication(sys.argv)
myWidget = Main()
myWidget.show()
app.exec_()
</code></pre>
| 3 |
2016-09-20T22:49:30Z
| 39,609,677 |
<p>A few things:</p>
<ol>
<li><p>You shouldn't be calling GUI code from outside the main thread. GUI elements are not thread-safe. <code>self.kontenjan_ara</code> updates and reads from GUI elements, so it shouldn't be the target of your <code>thread</code>.</p></li>
<li><p>In almost all cases, you should use <code>QThreads</code> instead of python threads. They integrate nicely with the event and signal system in Qt.</p></li>
</ol>
<p>If you just want to run something every few seconds, you can use a <code>QTimer</code></p>
<pre><code>def __init__(self, parent=None):
...
self.timer = QTimer(self)
self.timer.timeout.connect(self.kontenjan_ara)
self.timer.start(10000)
def kontenjan_ara(self):
self.bilgiler.append(self.text.text())
</code></pre>
<p>If your thread operations are more computationally complex you can create a worker thread and pass data between the worker thread and the main GUI thread using signals. </p>
<pre><code>class Worker(QObject):
work_finished = QtCore.pyqtSignal(object)
@QtCore.pyqtSlot()
def do_work(self):
data = 'Text'
while True:
# Do something with data and pass back to main thread
data = data + 'text'
self.work_finished.emit(data)
time.sleep(10)
class MyWidget(QtGui.QWidget):
def __init__(self, ...)
...
self.worker = Worker()
self.thread = QtCore.QThread(self)
self.worker.work_finished.connect(self.on_finished)
self.worker.moveToThread(self.thread)
self.thread.started.connect(self.worker.do_work)
self.thread.start()
@QtCore.pyqtSlot(object)
def on_finished(self, data):
self.bilgiler.append(data)
...
</code></pre>
<p>Qt will automatically kill all the subthreads when the main thread exits the event loop.</p>
| 3 |
2016-09-21T07:09:37Z
|
[
"python",
"python-3.x",
"pyqt",
"pyqt4",
"qthread"
] |
How to convert JSON into multiple dictionaries using Python
| 39,604,911 |
<p>I'd like to take the following JSON and convert it into multiple dicts so I can access each setting under the top level nodes only for that environment. This is a config file that will maintain settings for different environments, I'd like to be able to grab a top level node/environment and then use all the underlying nodes/settings just for that environment.</p>
<p>My example JSON</p>
<blockquote>
<p>{ "default": </p>
<p>{</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"dev": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"qa": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>}
}</p>
</blockquote>
<p>I tried doing this by pulling out the top level keys but couldn't see how to break them up into multiple dictionaries using Python so I could collect each environment's settings and use them without duplication. Checking the underlying nodes I could see doing by checking the len of the node, to see if there are any more nodes underneath, but from the top level and splitting each one to its own dict I couldn't work out.</p>
<p>Or perhaps there is a better way to do this, than I am not aware of. The length underneath could vary, but that should be irrelevant if I can split these up.</p>
| -1 |
2016-09-20T22:50:32Z
| 39,605,043 |
<p>Your question is really confusing, but I'll try guessing a little bit. Let's assume you've got a JSON file (I'll emulate that using Python 3's <a href="https://docs.python.org/3/library/io.html" rel="nofollow">io.StringIO</a>).</p>
<p>I assume you want to know how to load that file into a Python dictionary; to do that, use the <a href="https://docs.python.org/2/library/json.html#json.load" rel="nofollow">json.load</a> method and you're pretty much done. Here's an MCVE example showing how to load your hypothetical file and then process it with the different configurations:</p>
<pre><code>import json
import io
f = io.StringIO("""{
"default": {
"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
},
"dev": {
"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
},
"qa": {
"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
}
}""")
def processor(config, stage):
print('----Processing stuff for {0} server----'.format(stage))
print(config[stage])
print('----End----')
config = json.load(f)
for stage in ['default', 'dev', 'qa']:
processor(config, stage)
</code></pre>
| 0 |
2016-09-20T23:04:47Z
|
[
"python",
"json",
"dictionary",
"configuration"
] |
How to convert JSON into multiple dictionaries using Python
| 39,604,911 |
<p>I'd like to take the following JSON and convert it into multiple dicts so I can access each setting under the top level nodes only for that environment. This is a config file that will maintain settings for different environments, I'd like to be able to grab a top level node/environment and then use all the underlying nodes/settings just for that environment.</p>
<p>My example JSON</p>
<blockquote>
<p>{ "default": </p>
<p>{</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"dev": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"qa": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>}
}</p>
</blockquote>
<p>I tried doing this by pulling out the top level keys but couldn't see how to break them up into multiple dictionaries using Python so I could collect each environment's settings and use them without duplication. Checking the underlying nodes I could see doing by checking the len of the node, to see if there are any more nodes underneath, but from the top level and splitting each one to its own dict I couldn't work out.</p>
<p>Or perhaps there is a better way to do this, than I am not aware of. The length underneath could vary, but that should be irrelevant if I can split these up.</p>
| -1 |
2016-09-20T22:50:32Z
| 39,605,051 |
<p>There's a standard-library module capable of doing this:</p>
<pre><code>import json
my_dct = json.loads(json_string)
</code></pre>
<p>for more, see <a href="https://docs.python.org/3.5/library/json.html#json.loads" rel="nofollow">https://docs.python.org/3.5/library/json.html#json.loads</a></p>
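<p>For example, if the JSON above is saved in a file (say a hypothetical <code>config.json</code>), a small sketch of loading it and pulling out one environment's settings:</p>
<pre><code>import json

with open('config.json') as f:       # hypothetical path to the config shown above
    config = json.load(f)            # json.load parses straight from a file object

dev = config['dev']                  # one top-level environment as its own dict
print(dev['build']['buildKey'])      # -> YOU
print(dev['headers']['json'])        # -> application/json
</code></pre>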
| 0 |
2016-09-20T23:05:20Z
|
[
"python",
"json",
"dictionary",
"configuration"
] |
How to convert JSON into multiple dictionaries using Python
| 39,604,911 |
<p>I'd like to take the following JSON and convert it into multiple dicts so I can access each setting under the top level nodes only for that environment. This is a config file that will maintain settings for different environments, I'd like to be able to grab a top level node/environment and then use all the underlying nodes/settings just for that environment.</p>
<p>My example JSON</p>
<blockquote>
<p>{ "default": </p>
<p>{</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"dev": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"qa": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>}
}</p>
</blockquote>
<p>I tried doing this by pulling out the top level keys but couldn't see how to break them up into multiple dictionaries using Python so I could collect each environment's settings and use them without duplication. Checking the underlying nodes I could see doing by checking the len of the node, to see if there are any more nodes underneath, but from the top level and splitting each one to its own dict I couldn't work out.</p>
<p>Or perhaps there is a better way to do this, than I am not aware of. The length underneath could vary, but that should be irrelevant if I can split these up.</p>
| -1 |
2016-09-20T22:50:32Z
| 39,605,129 |
<p>Store the above data in a variable as shown below and convert it to a dict using json.loads.
It converts the text to a dictionary; then you can loop over the dictionary by environment to get the properties (see the sketch after the code).</p>
<pre><code>text='''
{ "default":
{
"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
},
"dev": {
"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
},
"qa": {
"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
} }'''
import json
config = json.loads(text)  # 'config' rather than 'dict', to avoid shadowing the built-in
</code></pre>
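<p>A short sketch of that loop over the environments, using the dictionary just built:</p>
<pre><code>for env, settings in config.items():
    print(env, settings['build']['buildKey'], settings['headers']['xml'])
</code></pre>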
| 0 |
2016-09-20T23:13:37Z
|
[
"python",
"json",
"dictionary",
"configuration"
] |
How to convert JSON into multiple dictionaries using Python
| 39,604,911 |
<p>I'd like to take the following JSON and convert it into multiple dicts so I can access each setting under the top level nodes only for that environment. This is a config file that will maintain settings for different environments, I'd like to be able to grab a top level node/environment and then use all the underlying nodes/settings just for that environment.</p>
<p>My example JSON</p>
<blockquote>
<p>{ "default": </p>
<p>{</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "ME"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"dev": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "YOU"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>},</p>
<p>"qa": {</p>
<pre><code>"build": {
"projectKey": "TEST",
"buildKey": "THEM"
},
"headers": {
"json": "application/json",
"xml": "application/xml"
}
</code></pre>
<p>}
}</p>
</blockquote>
<p>I tried doing this by pulling out the top level keys but couldn't see how to break them up into multiple dictionaries using Python so I could collect each environment's settings and use them without duplication. Checking the underlying nodes I could see doing by checking the len of the node, to see if there are any more nodes underneath, but from the top level and splitting each one to its own dict I couldn't work out.</p>
<p>Or perhaps there is a better way to do this, than I am not aware of. The length underneath could vary, but that should be irrelevant if I can split these up.</p>
| -1 |
2016-09-20T22:50:32Z
| 39,605,445 |
<p>You need <code>json.loads()</code> to convert your <code>string</code> to a Python object. Below is the code to open the file and load the JSON as a <code>dict</code> or <code>list</code>, based on the structure of the JSON (in your case a <code>dict</code>):</p>
<pre><code>import json
with open('/path/to/config_file') as json_file:
    json_str = json_file.read()
json_data = json.loads(json_str)
</code></pre>
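<p>As a side note, when reading from a file object anyway, <code>json.load()</code> does the read-and-parse in a single step:</p>
<pre><code>import json

with open('/path/to/config_file') as json_file:
    json_data = json.load(json_file)
</code></pre>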
| 0 |
2016-09-20T23:52:06Z
|
[
"python",
"json",
"dictionary",
"configuration"
] |
What does (n,) mean in the context of numpy and vectors?
| 39,604,918 |
<p>I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's <em>Neural Networks and Deep Learning</em>, where he says "It is assumed that the input <code>a</code> is an <code>(n, 1) Numpy ndarray</code>, not a <code>(n,) vector</code>."</p>
<p>At first I thought <code>(n,)</code> referred to the orientation of the array - so it might refer to a one-column vector as opposed to a vector with only one row. But then I don't see why we need <code>(n,)</code> and <code>(n, 1)</code> both - they seem to say the same thing. I know I'm misunderstanding something but am unsure.</p>
<p>For reference <code>a</code> refers to a vector of activations that will be input to a given layer of a neural network, before being transformed by the weights and biases to produce the output vector of activations for the next layer.</p>
<p>EDIT: This question equivocates between a "one-column vector" (there's no such thing) and a "one-column matrix" (does actually exist). Same for "one-row vector" and "one-row matrix". </p>
<p>A vector is only a list of numbers, or (equivalently) a list of scalar transformations on the basis vectors of a vector space. A vector might <em>look</em> like a matrix when we write it out, if it only has one row (or one column). Confusingly, we will sometimes refer to a "vector of activations" but actually mean "a single-row matrix of activation values transposed so that it is a single-column."</p>
<p>Be aware that in neither case are we discussing a one-dimensional vector, which would be a vector defined by only one number (unless, trivially, n==1, in which case the concept of a "column" or "row" distinction would be meaningless).</p>
| 2 |
2016-09-20T22:51:21Z
| 39,605,142 |
<p><code>(n,)</code> is a tuple of length 1, whose only element is <code>n</code>. (The syntax isn't <code>(n)</code> because that's just <code>n</code> instead of making a tuple.)</p>
<p>If an array has shape <code>(n,)</code>, that means it's a 1-dimensional array with a length of <code>n</code> along its only dimension. It's not a row vector or a column vector; it doesn't have rows or columns. It's just a vector.</p>
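<p>A quick interpreter session illustrating the difference (just a sketch):</p>
<pre><code>>>> import numpy as np
>>> a = np.array([1, 2, 3])   # shape (3,): a plain 1-dimensional array
>>> a.shape
(3,)
>>> b = a.reshape(-1, 1)      # shape (3, 1): a 2-dimensional array with one column
>>> b.shape
(3, 1)
>>> a.T.shape                 # transposing a 1-d array changes nothing
(3,)
>>> b.T.shape                 # transposing the (3, 1) array gives a (1, 3) row
(1, 3)
</code></pre>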
| 3 |
2016-09-20T23:14:27Z
|
[
"python",
"numpy",
"machine-learning",
"neural-network"
] |
What does (n,) mean in the context of numpy and vectors?
| 39,604,918 |
<p>I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's <em>Neural Networks and Deep Learning</em>, where he says "It is assumed that the input <code>a</code> is an <code>(n, 1) Numpy ndarray</code>, not a <code>(n,) vector</code>."</p>
<p>At first I thought <code>(n,)</code> referred to the orientation of the array - so it might refer to a one-column vector as opposed to a vector with only one row. But then I don't see why we need <code>(n,)</code> and <code>(n, 1)</code> both - they seem to say the same thing. I know I'm misunderstanding something but am unsure.</p>
<p>For reference <code>a</code> refers to a vector of activations that will be input to a given layer of a neural network, before being transformed by the weights and biases to produce the output vector of activations for the next layer.</p>
<p>EDIT: This question equivocates between a "one-column vector" (there's no such thing) and a "one-column matrix" (does actually exist). Same for "one-row vector" and "one-row matrix". </p>
<p>A vector is only a list of numbers, or (equivalently) a list of scalar transformations on the basis vectors of a vector space. A vector might <em>look</em> like a matrix when we write it out, if it only has one row (or one column). Confusingly, we will sometimes refer to a "vector of activations" but actually mean "a single-row matrix of activation values transposed so that it is a single-column."</p>
<p>Be aware that in neither case are we discussing a one-dimensional vector, which would be a vector defined by only one number (unless, trivially, n==1, in which case the concept of a "column" or "row" distinction would be meaningless).</p>
| 2 |
2016-09-20T22:51:21Z
| 39,605,320 |
<p>In <code>numpy</code> an array can have a number of different dimensions, 0, 1, 2 etc.</p>
<p>The typical 2d array has dimension <code>(n,m)</code> (this is a Python tuple). We tend to describe this as having n rows, m columns. So a <code>(n,1)</code> array has just 1 column, and a <code>(1,m)</code> has 1 row.</p>
<p>But because an array may have just 1 dimension, it is possible to have a shape <code>(n,)</code> (Python notation for a 1 element tuple: see <a href="https://wiki.python.org/moin/TupleSyntax" rel="nofollow">here</a> for more).</p>
<p>For many purposes <code>(n,)</code>, <code>(1,n)</code>, <code>(n,1)</code> arrays are equivalent (also <code>(1,n,1,1)</code> (4d)). They all have <code>n</code> terms, and can be reshaped to each other.</p>
<p>But sometimes that extra <code>1</code> dimension matters. A (1,m) array can multiply a (n,1) array to produce a (n,m) array. A (n,1) array can be indexed like a (n,m), with 2 indices, <code>x[:,0]</code>, whereas a (n,) only accepts <code>x[0]</code>.</p>
<p>MATLAB matrices are always 2d (or higher). So people transferring ideas from MATLAB tend to expect 2 dimensions. There is a <code>np.matrix</code> subclass that is supposed to imitate that.</p>
<p>For numpy programmers the distinctions between vector, row vector, column vector, and matrix are loose and relatively unimportant. Or the usage is derived from the application rather than from <code>numpy</code> itself. I think that's what's happening with this network book - the notation and expectations come from outside of <code>numpy</code>.</p>
<p>See also this answer for how to interpret the shapes with respect to the data stored in <code>ndarrays</code>. It also provides insight on how to use <code>.reshape</code>: <a href="http://stackoverflow.com/a/22074424/3277902">http://stackoverflow.com/a/22074424/3277902</a> </p>
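<p>A small sketch of the broadcasting and indexing points above:</p>
<pre><code>>>> import numpy as np
>>> col = np.ones((3, 1))     # (n, 1)
>>> row = np.ones((1, 4))     # (1, m)
>>> (row * col).shape         # broadcasting produces an (n, m) result
(3, 4)
>>> col[:, 0]                 # the (n, 1) array takes two indices
array([ 1.,  1.,  1.])
>>> flat = np.ones(3)         # (n,)
>>> flat[0]                   # only a single index applies here
1.0
</code></pre>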
| 2 |
2016-09-20T23:36:09Z
|
[
"python",
"numpy",
"machine-learning",
"neural-network"
] |
Python: passing parameters over functions
| 39,604,958 |
<p>Python Experts,</p>
<p>I have been trying to implement BST using Python and here is my code for the insert function:</p>
<p><strong>Draft 1:</strong></p>
<pre><code>def insert(self, val):
newNode = self._Node(val)
if (self._root is None):
self._root = newNode
else:
self._insert(self._root,val)
def _insert(self, node, val):
if node is None:
node = self._Node(val)
elif val >= node._val:
self._insert(node._right, val)
else:
self._insert(node._left, val)
</code></pre>
<p>But, I'm unable to construct the tree except the root. I'm guessing I messed up somewhere with the parameters passing over the two functions because when I modify the code as below, I get it alright:</p>
<p><strong>Draft 2:</strong></p>
<pre><code>def insert(self, val):
newNode = self._Node(val)
if (self._root is None):
self._root = newNode
else:
self._insert(self._root,val)
def _insert(self, node, val):
if val >= node._val:
if node._right is None:
node._right = self._Node(val)
else:
self._insert(node._right, val)
else:
if node._left is None:
node._left = self._Node(val)
else:
self._insert(node._left, val)
</code></pre>
<p>I'm trying hard to understand why the draft 2 works but draft 1 doesn't. Any help here? Thanks in advance!</p>
| 2 |
2016-09-20T22:55:39Z
| 39,605,307 |
<p>The fundamental misunderstanding you have is how variable assignment works and interacts with Python's evaluation strategy: <a href="https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing" rel="nofollow">call-by-sharing.</a></p>
<p>Essentially, in your first draft, when you do the following:</p>
<pre><code>def _insert(self, node, val):
if node is None:
node = self._Node(val)
...
</code></pre>
<p>You are simply assigning to the name (variable) <code>node</code> the value of <code>self._Node(val)</code> but then when you leave the scope, the new object is destroyed! Even though <code>node</code> used to refer to the value that was passed in by the method call, simple assignment doesn't mutate the object that is referenced by the name, rather, it <em>reassigns the name.</em></p>
<p>In your second draft, however:</p>
<pre><code>def _insert(self, node, val):
if val >= node._val:
if node._right is None:
node._right = self._Node(val)
else:
self._insert(node._right, val)
</code></pre>
<p>You <em>are mutating an object</em>, i.e.: <code>node._right = self._Node(val)</code>.</p>
<p>Here is a simple example that is hopefully illuminating:</p>
<pre><code>>>> def only_assign(object):
... x = 3
... object = x
...
>>> def mutate(object):
... object.attribute = 3
...
>>> class A:
... pass
...
>>> a = A()
>>> a
<__main__.A object at 0x7f54c3e256a0>
>>> only_assign(a)
>>> a
<__main__.A object at 0x7f54c3e256a0>
>>> mutate(a)
>>> a.attribute
3
</code></pre>
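<p>For completeness, a common way to keep the recursive style of draft 1 is to have <code>_insert</code> return the (possibly new) node and assign that return value back onto the parent's attribute, so the rebinding happens on <code>node._left</code>/<code>node._right</code> instead of on a local name. A sketch reusing the question's <code>_Node</code>:</p>
<pre><code>def insert(self, val):
    self._root = self._insert(self._root, val)

def _insert(self, node, val):
    if node is None:
        return self._Node(val)        # the new leaf is handed back to the caller
    if val >= node._val:
        node._right = self._insert(node._right, val)
    else:
        node._left = self._insert(node._left, val)
    return node
</code></pre>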
| 2 |
2016-09-20T23:34:41Z
|
[
"python"
] |
Python: passing parameters over functions
| 39,604,958 |
<p>Python Experts,</p>
<p>I have been trying to implement BST using Python and here is my code for the insert function:</p>
<p><strong>Draft 1:</strong></p>
<pre><code>def insert(self, val):
newNode = self._Node(val)
if (self._root is None):
self._root = newNode
else:
self._insert(self._root,val)
def _insert(self, node, val):
if node is None:
node = self._Node(val)
elif val >= node._val:
self._insert(node._right, val)
else:
self._insert(node._left, val)
</code></pre>
<p>But, I'm unable to construct the tree except the root. I'm guessing I messed up somewhere with the parameters passing over the two functions because when I modify the code as below, I get it alright:</p>
<p><strong>Draft 2:</strong></p>
<pre><code>def insert(self, val):
newNode = self._Node(val)
if (self._root is None):
self._root = newNode
else:
self._insert(self._root,val)
def _insert(self, node, val):
if val >= node._val:
if node._right is None:
node._right = self._Node(val)
else:
self._insert(node._right, val)
else:
if node._left is None:
node._left = self._Node(val)
else:
self._insert(node._left, val)
</code></pre>
<p>I'm trying hard to understand why the draft 2 works but draft 1 doesn't. Any help here? Thanks in advance!</p>
| 2 |
2016-09-20T22:55:39Z
| 39,605,422 |
<p>I believe this is due to the fact that by doing :</p>
<pre><code>node = self._Node(val)
</code></pre>
<p>in the _insert function you are not changing the value of the left/right node but rebinding the name <code>node</code> to a new _Node object, thus leaving the left/right node as None. </p>
<p>In your second draft you are effectively assigning a new object to the left/right node.</p>
<p>Here's a simple example to illustrate what happens on your code.</p>
<p>Guess what the print(test) will display?</p>
<pre><code>test = [5, 5, 5]
def function(list):
list[0] = 10
list = range(1, 3)
function(test)
print(test)
</code></pre>
<p>If you think it will display [1, 2] you're wrong... it will actually display [10, 5, 5], because when doing <code>list = range(1, 3)</code> we are binding the name <code>list</code> to another object, not changing the first object it was bound to (the one <code>test</code> is actually bound to).</p>
| 1 |
2016-09-20T23:48:48Z
|
[
"python"
] |
How do I add numbers in a list that do not equal parameters?
| 39,605,064 |
<p>I am trying to write a function that will add all of the numbers in a list that do not equal the parameters. The code I have, that is not working, is:</p>
<pre><code>def suminout(nums,a,b):
total=0
for i in range(len(nums)):
if nums[i]!=a or nums[i]!=b:
total=total+nums[i]
return total
</code></pre>
<p>It appears to be summing everything in the list.</p>
<p>For example, if I called:
suminout([1,2,3,4],1,2)
it should return 7. However, I am getting 10.</p>
<p>Any thoughts?</p>
| -2 |
2016-09-20T23:06:36Z
| 39,605,111 |
<p>As Kasramvd duly noted, you need a conjunction, not a disjunction.</p>
<p>Here is a list comprehension doing the same thing.</p>
<pre><code>def suminout(nums, a, b):
    return sum([x for x in nums if (x != a and x != b)])
</code></pre>
| 1 |
2016-09-20T23:11:51Z
|
[
"python",
"list",
"parameters",
"sum"
] |
How do I add numbers in a list that do not equal parameters?
| 39,605,064 |
<p>I am trying to write a function that will add all of the numbers in a list that do not equal the parameters. The code I have, that is not working, is:</p>
<pre><code>def suminout(nums,a,b):
total=0
for i in range(len(nums)):
if nums[i]!=a or nums[i]!=b:
total=total+nums[i]
return total
</code></pre>
<p>It appears to be summing everything in the list.</p>
<p>For example, if I called:
suminout([1,2,3,4],1,2)
it should return 7. However, I am getting 10.</p>
<p>Any thoughts?</p>
| -2 |
2016-09-20T23:06:36Z
| 39,605,259 |
<p>You need to use <code>and</code> instead of <code>or</code>.</p>
<pre><code>def suminout(nums,a,b):
total=0
for i in range(len(nums)):
if nums[i]!=a and nums[i]!=b:
total=total+nums[i]
return total
</code></pre>
<p>Your <code>for</code> logic could be further simplified (without using <code>len()</code> and <code>range()</code>) as:</p>
<pre><code>for num in nums:
if num not in [a, b]: # same as: num != a and num != b
total += num # same as: total = total + num
</code></pre>
<p>A better way to achieve it is using a <code>list comprehension</code> with <code>sum()</code>, as mentioned by Sean. Or you may use <a href="https://docs.python.org/2/library/functions.html#filter" rel="nofollow"><code>filter()</code></a> instead of a list comprehension:</p>
<pre><code>>>> my_list = [1, 2, 3, 4]
>>> sum(filter(lambda x: x !=1 and x!=4, my_list))
5
</code></pre>
| 0 |
2016-09-20T23:29:48Z
|
[
"python",
"list",
"parameters",
"sum"
] |
How do I add numbers in a list that do not equal parameters?
| 39,605,064 |
<p>I am trying to write a function that will add all of the numbers in a list that do not equal the parameters. The code I have, that is not working, is:</p>
<pre><code>def suminout(nums,a,b):
total=0
for i in range(len(nums)):
if nums[i]!=a or nums[i]!=b:
total=total+nums[i]
return total
</code></pre>
<p>It appears to be summing everything in the list.</p>
<p>For example, if I called:
suminout([1,2,3,4],1,2)
it should return 7. However, I am getting 10.</p>
<p>Any thoughts?</p>
| -2 |
2016-09-20T23:06:36Z
| 39,605,573 |
<p>Or:</p>
<pre><code>def suminout(nums, a, b):
    return sum([x for x in nums if x not in (a, b)])
</code></pre>
| 0 |
2016-09-21T00:08:52Z
|
[
"python",
"list",
"parameters",
"sum"
] |
Image class keeps selecting different images instead of identical, how to fix? Sikuli
| 39,605,067 |
<p>I'm using the random 5 card shuffler to get the hang of some basic scripting, but the script will select A A 3 or 10 9 8 instead of just ignoring them. How could I get it to run faster?</p>
<pre><code>running = True
def runHotkey(event):
global running
running = False
Env.addHotkey(Key.F1, KeyModifier.CTRL, runHotkey)
while exists("1474199877323.png")and running:
click(Pattern("1474369588947.png").similar(0.80))
wait(2)
click("1474138615993.png")
click("1474138629993.png")
wait(1)
imageCount=0
images = []
# find all images and store them in a list to prevent additional search
if exists(Pattern("1474368132347.png").similar(0.90)):
for image in findAll(Pattern("1474368132347.png").similar(0.89)):
wait(1)
images.append(image)
#check list length and act accordingly
if len(images) >= 2:
wait(1)
for image in images:
wait(1)
image.click()
click(Pattern("1474409820809.png").similar(0.93))
if exists(Pattern("1474409495397.png").similar(0.91)):
for image1 in findAll(Pattern("1474409495397.png").similar(0.91)):
wait(1)
images.append(image1)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image1 in images:
wait(1)
image1.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474410728933.png").similar(0.95)):
for image2 in findAll(Pattern("1474410728933.png").similar(0.95)):
wait(1)
images.append(image2)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image2 in images:
wait(1)
image2.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411088984.png").similar(0.91)):
for image3 in findAll(Pattern("1474411088984.png").similar(0.91)):
wait(1)
images.append(image3)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image3 in images:
wait(1)
image3.click()
click(Pattern("1474409820809.png").similar(0.93))
if exists(Pattern("1474411136494.png").similar(0.93)):
for image4 in findAll(Pattern("1474411136494.png").similar(0.93)):
wait(1)
images.append(image4)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image4 in images:
wait(1)
image4.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411200166.png").similar(0.94)):
for image5 in findAll(Pattern("1474411200166.png").similar(0.94)):
wait(1)
images.append(image5)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image5 in images:
wait(1)
image5.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411297233.png").similar(0.94)):
for image6 in findAll(Pattern("1474411297233.png").similar(0.94)):
wait(1)
images.append(image6)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image6 in images:
wait(1)
image6.click()
click(Pattern("1474409820809.png").similar(0.93))
if exists(Pattern("1474411373675.png").similar(0.94)):
for image7 in findAll(Pattern("1474411373675.png").similar(0.94)):
wait(1)
images.append(image7)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image7 in images:
wait(1)
image7.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411438209.png").similar(0.92)):
for image8 in findAll(Pattern("1474411438209.png").similar(0.92)):
wait(1)
images.append(image8)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image8 in images:
wait(1)
image8.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411516981.png").similar(0.95)):
for image9 in findAll(Pattern("1474411516981.png").similar(0.95)):
wait(1)
images.append(image9)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for image9 in images:
wait(1)
image9.click()
click(Pattern("1474409820809.png").similar(0.93))
if exists(Pattern("1474411592794.png").similar(0.92)):
for imagea in findAll(Pattern("1474411592794.png").similar(0.92)):
wait(1)
images.append(imagea)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for imagea in images:
wait(1)
imagea.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411644943.png").similar(0.90)):
for imageb in findAll(Pattern("1474411644943.png").similar(0.90)):
wait(1)
images.append(imageb)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for imageb in images:
wait(1)
imageb.click()
click(Pattern("1474369529687.png").similar(0.90))
if exists(Pattern("1474411713586.png").similar(0.90)):
for imagec in findAll(Pattern("1474411713586.png").similar(0.90)):
wait(1)
images.append(imagec)
#check list length and act accordingly
if len(images) >= 3:
wait(1)
for imagec in images:
wait(1)
imagec.click()
click(Pattern("1474369529687.png").similar(0.90))
else:
wait(0)
</code></pre>
| 0 |
2016-09-20T23:06:43Z
| 40,041,254 |
<p>If there is more than one match for a given image at a given similarity, Sikuli clicks only one of them (randomly). So if you want to click similar images more reliably, you need to develop code using the findAll function, then sort the Match results of findAll and click one of them. You can sort the matches by the coordinates of the Match objects returned by findAll and click on one image after sorting.</p>
<p>If you want to process (click) every found Match, simply iterate through all the results of findAll and perform the action; no sorting is needed in that case (see the sketch below).</p>
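<p>A rough sketch of that idea in Sikuli script (the pattern is just a placeholder image name taken from the question; Match objects expose <code>x</code>/<code>y</code>, so they can be sorted by position before clicking):</p>
<pre><code>if exists(Pattern("1474368132347.png").similar(0.90)):
    matches = list(findAll(Pattern("1474368132347.png").similar(0.90)))
    # sort top-to-bottom, then left-to-right, so the choice is deterministic
    matches.sort(key=lambda m: (m.y, m.x))
    click(matches[0])         # click one specific match...
    # ...or, to act on every match instead, just iterate:
    # for m in matches:
    #     click(m)
</code></pre>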
| 0 |
2016-10-14T10:44:22Z
|
[
"python",
"jython",
"sikuli"
] |
Assigning custom method to on_touch_down etc. in Kivy
| 39,605,079 |
<p>I'm writing what is ultimately to be a mobile game app using Kivy. Given the capabilities of the framework - being able to separate form and function - I'm attempting to do most, if not all, of the design of my GUI within a .kv file using the Kivy language. This works great as far as crafting a layout, but getting touch event handlers to work correctly is proving quite challenging. What I'm attempting to do is this:</p>
<p>Python:</p>
<pre><code>from kivy import require
from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.image import Image
from kivy.uix.boxlayout import BoxLayout
require("1.9.1")
class gameScreen(FloatLayout):
def move_scatter(self, sender):
self.ids.moveArea.x = sender.text
def pc_move(self, touch, *args, **kwargs):
print('Goodbye')
# self.ids.protagonist.pos = (self.x + )
class GameApp(App):
def __init__(self, **kwargs):
super(GameApp, self).__init__(**kwargs)
def build(self):
return gameScreen()
class MoveBox(BoxLayout):
pass
class Pc(Image):
pass
if __name__ == '__main__':
GameApp().run()
</code></pre>
<p>Kivy Code:</p>
<pre><code><gameScreen>:
orientation: 'vertical'
padding: 20
canvas.before:
Rectangle:
size: (root.width, root.height)
source: 'bliss.jpg'
Pc:
id: protagonist
TextInput:
id: debugOut
size_hint: None, None
size: 200, 50
text: 'Hello'
BoxLayout:
id: moveArea
size_hint: None, None
size: 200, 200
on_touch_down: root.pc_move()
canvas:
Color:
rgba: .2, .2, .2, .5
Rectangle:
pos: (self.x + root.width - self.width, self.y)
size: self.size
<Pc>
source:'voolf.png'
pos_hint: {'top': .9}
size_hint: None, None
size: 300, 300
</code></pre>
<p>When I try this, I get:</p>
<pre><code>TypeError: pc_move() takes at least 2 arguments (1 given)
</code></pre>
<p>This obviously makes sense, as I am calling the pc_move() method without passing an argument. I know that the <em>easiest</em> way around this issue is just to create the BoxLayout in my Python code and define the on_touch_down method there, but as stated, I'm trying to keep my GUI and functionality separate.</p>
<p>The question is, how do I get the 'touch' parameter to pass as it would if I were to create the widget in Python code? Alternatively, am I just chasing a white whale? Does the event handling have to be done in a widget created in the Python code?</p>
| 0 |
2016-09-20T23:08:29Z
| 39,605,322 |
<p>I confess I've never used Kivy, but the <a href="https://kivy.org/docs/api-kivy.uix.widget.html#kivy.uix.widget.Widget.on_touch_down" rel="nofollow">documentation for on_touch_down</a> indicates it receives a <code>touch</code> parameter.</p>
<p>The <a href="https://kivy.org/docs/api-kivy.lang.html#value-expressions-on-property-expressions-ids-and-reserved-keywords" rel="nofollow">docs also mention</a> that <code>args</code> keyword <em>is available in on_ callbacks</em>.</p>
<p>Putting these two together, you should be able to pass your touch parameter to python via:</p>
<pre><code>on_touch_down: root.pc_move(args[1])
</code></pre>
<p>[I'm not positive that it will be the #1 element in <code>args[]</code>, but some of the examples seem to indicate that]</p>
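<p>On the Python side, the existing <code>pc_move(self, touch, *args, **kwargs)</code> signature would then receive the event directly once <code>args[1]</code> is passed from the kv file; for instance (a sketch; <code>touch.pos</code> is the event's (x, y) position):</p>
<pre><code>def pc_move(self, touch, *args, **kwargs):
    print('Goodbye', touch.pos)   # position of the touch that triggered the event
</code></pre>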
| 1 |
2016-09-20T23:36:11Z
|
[
"python",
"kivy",
"kivy-language"
] |
Hollow diamond in python with an asterisk outline
| 39,605,093 |
<p>I have to build a hollow diamond like this one:</p>
<pre><code> ******
** **
* *
* *
** **
******
</code></pre>
<p>Here's what I have so far:</p>
<pre><code>def hollow_diamond(w):
h=int(w/2)
while 0<h:
print('*'*h)
h=h-1
i=1
while i<(w/2+1):
print(i*'*')
i=i+1
</code></pre>
<p>However, using the code that I have, I only get half of the diamond.</p>
<pre><code>***
**
*
*
**
***
</code></pre>
<p>Should I be using <strong>for</strong> loops instead of while to be able to complete the diamond?</p>
| -3 |
2016-09-20T23:09:38Z
| 39,605,216 |
<p>Building a hollow diamond means, like you said, probably the following:</p>
<ol>
<li>A line with full asterisks (<code>0</code> spaces in the middle)</li>
<li>A line with <code>2</code> spaces in the middle</li>
<li>A line with <code>4</code> spaces in the middle</li>
<li>...</li>
<li>A line with <code>l-2</code> spaces in the middle</li>
<li>A line with <code>l-2</code> spaces in the middle</li>
<li>A line with <code>l-4</code> spaces in the middle</li>
<li>A line with <code>l-6</code> spaces in the middle</li>
<li>...</li>
<li>A line with full asterisks (<code>l-l</code> spaces in the middle)</li>
</ol>
<p>The step is how many asterisks you "lose" in each iteration (here, one per side, so the gap in the middle grows by 2 spaces per line). <code>l</code> is the size of your square.</p>
<p>So, your algorithm is composed of two parts: the increasing spaces and the decreasing spaces.</p>
<p>So, your algorithm should be something like this</p>
<pre><code>for (spaces = 0; spaces < size/2 ; spaces = spaces + 1 )
for (asterisk = 0; asterisk < size/2 - spaces; asterisk = asterisk + 1)
print '*'
for (space = 0; space < spaces*2; space = space + 1)
print ' '
for (asterisk = 0; asterisk < size/2 - spaces; asterisk = asterisk + 1)
print '*'
for (spaces = size/2 - 1; spaces >= 0; spaces = spaces - 1)
# The same inner code as above
</code></pre>
<p>I purposely didn't put the Python code there, so you can do your homework properly ;), but once you understand the algorithm, that should be pretty easy.</p>
| 0 |
2016-09-20T23:24:27Z
|
[
"python",
"python-3.x"
] |
Hollow diamond in python with an asterisk outline
| 39,605,093 |
<p>I have to build a hollow diamond like this one:</p>
<pre><code> ******
** **
* *
* *
** **
******
</code></pre>
<p>Here's what I have so far:</p>
<pre><code>def hollow_diamond(w):
h=int(w/2)
while 0<h:
print('*'*h)
h=h-1
i=1
while i<(w/2+1):
print(i*'*')
i=i+1
</code></pre>
<p>However, using the code that I have, I only get half of the diamond.</p>
<pre><code>***
**
*
*
**
***
</code></pre>
<p>Should I be using <strong>for</strong> loops instead of while to be able to complete the diamond?</p>
| -3 |
2016-09-20T23:09:38Z
| 39,605,294 |
<p>I won't steal from you the joy of fixing your homework, but this exercise was quite fun, so I'll give you another possible version to give you a few ideas:</p>
<pre><code>def cool_diamond(w):
r = []
for y in range(w):
s = '*' * (w - y)
        r.append("{0}{1}{0}".format(s, '-' * (2 * y)))
return '\n'.join(r + r[::-1])
for i in range(3, 6):
    print(cool_diamond(i))
    print('-' * 80)
</code></pre>
<p>I'd strongly recommend you take your time first to fix yours! Otherwise you won't learn nothing from the exercise. </p>
<p>Once you've fixed yours you'll feel pretty satisfied for the effort paying off, and then... just then, you can take think whether you can improve YOUR version or refactoring.</p>
<p>Happy coding!</p>
<pre><code>******
**--**
*----*
*----*
**--**
******
--------------------------------------------------------------------------------
********
***--***
**----**
*------*
*------*
**----**
***--***
********
--------------------------------------------------------------------------------
**********
****--****
***----***
**------**
*--------*
*--------*
**------**
***----***
****--****
**********
--------------------------------------------------------------------------------
</code></pre>
| 0 |
2016-09-20T23:33:26Z
|
[
"python",
"python-3.x"
] |
Hollow diamond in python with an asterisk outline
| 39,605,093 |
<p>I have to build a hollow diamond like this one:</p>
<pre><code> ******
** **
* *
* *
** **
******
</code></pre>
<p>Here's what I have so far:</p>
<pre><code>def hollow_diamond(w):
h=int(w/2)
while 0<h:
print('*'*h)
h=h-1
i=1
while i<(w/2+1):
print(i*'*')
i=i+1
</code></pre>
<p>However, using the code that I have, I only get half of the diamond.</p>
<pre><code>***
**
*
*
**
***
</code></pre>
<p>Should I be using <strong>for</strong> loops instead of while to be able to complete the diamond?</p>
| -3 |
2016-09-20T23:09:38Z
| 39,605,446 |
<p>You've already figured out how to print the first set of asterisks for each line; good job so far. Now, you need to figure out how many spaces to print. Let's take the first loop, where you're printing <strong>h</strong> asterisks in a grid of <strong>w</strong> lines.</p>
<p>You need <strong>h</strong> asterisks on the left and <strong>h</strong> more on the right; that's <strong>2*h</strong> asterisks total. This leaves <strong>s = w - 2*h</strong> spaces in the middle.</p>
<p>So, for each line, you need to print ...</p>
<ul>
<li><strong>h</strong> asterisks</li>
<li><strong>s</strong> spaces</li>
<li><strong>h</strong> more asterisks</li>
</ul>
<p>Does that move you toward a useful update of your current code?</p>
| 1 |
2016-09-20T23:52:06Z
|
[
"python",
"python-3.x"
] |
Convert DateTime to sequential day of the year
| 39,605,156 |
<p>I have a dataframe that contains a series of dates, e.g.:</p>
<pre><code>0 2014-06-17
1 2014-05-05
2 2014-01-07
3 2014-06-29
4 2014-03-15
5 2014-06-06
7 2014-01-29
</code></pre>
<p>Now, I need a way to convert these dates to the sequential day of the year, e.g.</p>
<pre><code>0 168
1 125
2 7
3 180
4 74
5 157
7 29
</code></pre>
<p>All the values are within the same year.
The opposite problem has been asked many times - converting the sequential day of the year to a date, but I need to do the opposite. Is there an easy way to do this with Pandas? Thanks!</p>
<p>EDIT: Answered by piRSquared. Thank you!</p>
| 2 |
2016-09-20T23:16:28Z
| 39,605,166 |
<p><strong><em>Option 1</em></strong><br>
<a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-date-components" rel="nofollow"><strong><em>time-date-components</em></strong></a><br>
Link from @root<br>
Use <code>dt.dayofyear</code></p>
<pre><code>df.iloc[:, 0].dt.dayofyear
0 168
1 125
2 7
3 180
4 74
5 157
7 29
Name: 1, dtype: int64
</code></pre>
<hr>
<p><strong><em>Option 2</em></strong><br>
<a href="http://strftime.org/" rel="nofollow"><strong><em>strftime.org</em></strong></a><br>
Use <code>dt.strftime('%-j')</code></p>
<pre><code>df.iloc[:, 0].dt.strftime('%-j')
0 168
1 125
2 7
3 180
4 74
5 157
7 29
Name: 1, dtype: object
</code></pre>
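<p>Note that Option 2 yields strings (dtype <code>object</code>); if integers are needed, a small cast should do it (same column access as above):</p>
<pre><code>df.iloc[:, 0].dt.strftime('%-j').astype(int)
</code></pre>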
| 4 |
2016-09-20T23:17:57Z
|
[
"python",
"pandas"
] |
Convert DateTime to sequential day of the year
| 39,605,156 |
<p>I have a dataframe that contains a series of dates, e.g.:</p>
<pre><code>0 2014-06-17
1 2014-05-05
2 2014-01-07
3 2014-06-29
4 2014-03-15
5 2014-06-06
7 2014-01-29
</code></pre>
<p>Now, I need a way to convert these dates to the sequential day of the year, e.g.</p>
<pre><code>0 168
1 125
2 7
3 180
4 74
5 157
7 29
</code></pre>
<p>All the values are within the same year.
The opposite problem has been asked many times - converting the sequential day of the year to a date, but I need to do the opposite. Is there an easy way to do this with Pandas? Thanks!</p>
<p>EDIT: Answered by piRSquared. Thank you!</p>
| 2 |
2016-09-20T23:16:28Z
| 39,605,588 |
<p>Yes, I did it. Please refer to my code below. </p>
<p><strong><em><a href="http://i.stack.imgur.com/WYI7k.png" rel="nofollow">Screen shot of my RESULT</a></em></strong></p>
<p>Step 1: Make an array of String type and add the dates in string format. </p>
<p>Step 2: Use a for loop because there are multiple dates.</p>
<p>Step 3: Convert the String to Date format.</p>
<p>Step 4: Convert the Date to LocalDate format.</p>
<p>Step 5: Use the built-in getDayOfYear() function to get the sequential day of the year. </p>
<pre><code>package test;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.*;
public class ShubhamDateClass{
public static void main(String[] args) {
SimpleDateFormat formatter = new SimpleDateFormat("dd-MMM-yyyy");
String[] dateForTest = {"17-JUN-2014","05-MAY-2014","07-JAN-2017","20-JUN-2014","15-MAR-2014", "6-JUN-2014", "29-JAN-2014"};
for(int i=0;i<dateForTest.length;i++){
try {
Date date = formatter.parse(dateForTest[i]);
System.out.print(formatter.format(date)+" --- ");
LocalDate dateTest = Instant.ofEpochMilli(date.getTime()).atZone(ZoneId.systemDefault()).toLocalDate();
int dayOfYear2 = dateTest.getDayOfYear();
System.out.println(dayOfYear2);
} catch (ParseException e) {
e.printStackTrace();
}
}
}
}
</code></pre>
| -1 |
2016-09-21T00:11:45Z
|
[
"python",
"pandas"
] |
How to print specific value from key in a dictionary?
| 39,605,252 |
<p>Our teacher set us a challenge to make a program that will allow users to input a symbol of an element and the program should output some info about the element.</p>
<p>To do this I have to use dictionaries. Currently I have this:</p>
<pre><code>elements = {"Li": "Lithium" " 12" " Alkali Metal"}
element = input("Enter an elemental symbol: ")
print (elements[element])
</code></pre>
<p>This prints everything that is related to Li.</p>
<p>I was wondering how I would be able to only output, say Alkali Metal, rather than everything associated with Li? (Yes I know 12 isn't Lithium's atomic number)</p>
| 0 |
2016-09-20T23:28:18Z
| 39,605,283 |
<p>You currently have one string as a value so there is not much you can do reliably. You would need to store separate values which you could do with a sub-dict:</p>
<pre><code>elements = {"Li": {"full_name":"Lithium", "num":"12", "type":"Alkali Metal"}}
</code></pre>
<p>Then just access the nested dict using the key of what particular value you want to get:</p>
<pre><code>In [1]: elements = {"Li": {"full_name":"Lithium", "num":"12", "type":"Alkali Metal"}}
In [2]: elements["Li"]["num"]
Out[2]: '12'
In [3]: elements["Li"]["full_name"]
Out[3]: 'Lithium'
In [4]: elements["Li"]["type"]
Out[4]: 'Alkali Metal'
</code></pre>
<p>If you write string literals with no comma separating them, Python will concatenate them into a single string:</p>
<pre><code>In [5]: "Lithium" " 12" " Alkali Metal"
Out[5]: 'Lithium 12 Alkali Metal'
In [6]: "Lithium","12","Alkali Metal"
Out[6]: ('Lithium', '12', 'Alkali Metal') # now its a three element tuple
</code></pre>
| 3 |
2016-09-20T23:31:40Z
|
[
"python",
"python-3.x"
] |
Tensorflow: Numpy equivalent of tf batches and reshape
| 39,605,399 |
<p>I am trying to classify some images using Tensorflow using the LSTM method in image classification with one-hot encoding output and a softmax classifier at the last LSTM output. My dataset is CSV and had to research a lot in Numpy and Tensorflow on how to do some modifications. I'm still getting an error:</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'next_batch'
</code></pre>
<p>which if you will see, i can't use <code>next_batch(batch_size)</code> along with my dataset and also the next <code>tf.reshape</code> needs to be replaced with its Numpy equivalent.</p>
<p>My question: How should I correct these 2 issues?</p>
<pre><code>'''
Tensorflow LSTM classification of 16x30 images.
'''
from __future__ import print_function
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell
import numpy as np
from numpy import genfromtxt
from sklearn.cross_validation import train_test_split
import pandas as pd
'''
a Tensorflow LSTM that will sequentially input several lines from each single image
i.e. The Tensorflow graph will take a flat (1,480) features image as it was done in Multi-layer
perceptron MNIST Tensorflow tutorial, but then reshape it in a sequential manner with 16 features each and 30 time_steps.
'''
blaine = genfromtxt('./Desktop/Blaine_CSV_lstm.csv',delimiter=',') # CSV transform to array
target = [row[0] for row in blaine] # 1st column in CSV as the targets
data = blaine[:, 1:480] #flat feature vectors
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.05, random_state=42)
f=open('cs-training.csv','w') #1st split for training
for i,j in enumerate(X_train):
k=np.append(np.array(y_train[i]),j )
f.write(",".join([str(s) for s in k]) + '\n')
f.close()
f=open('cs-testing.csv','w') #2nd split for test
for i,j in enumerate(X_test):
k=np.append(np.array(y_test[i]),j )
f.write(",".join([str(s) for s in k]) + '\n')
f.close()
ss = pd.Series(y_train) #indexing series needed for later Pandas Dummies one-hot vectors
gg = pd.Series(y_test)
new_data = genfromtxt('cs-training.csv',delimiter=',') # Training data
new_test_data = genfromtxt('cs-testing.csv',delimiter=',') # Test data
x_train=np.array([ i[1::] for i in new_data])
y_train_onehot = pd.get_dummies(ss)
x_test=np.array([ i[1::] for i in new_test_data])
y_test_onehot = pd.get_dummies(gg)
# General Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10
# Tensorflow LSTM Network Parameters
n_input = 16 # MNIST data input (img shape: 28*28)
n_steps = 30 # timesteps
n_hidden = 128 # hidden layer num of features
n_classes = 20 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
# Define weights
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
def RNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, n_steps, n_input)
# Required shape: 'n_steps' tensors list of shape (batch_size, n_input)
# Permuting batch_size and n_steps
x = tf.transpose(x, [1, 0, 2])
# Reshaping to (n_steps*batch_size, n_input)
x = tf.reshape(x, [-1, n_input])
# Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
x = tf.split(0, n_steps, x)
# Define a lstm cell with tensorflow
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = RNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
x_train, y_train = new_data.next_batch(batch_size)
# Reshape data to get 30 seq of 16 elements
x_train = x_train.reshape((batch_size, n_steps, n_input))
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: x_train, y: y_train})
if step % display_step == 0:
# Calculate batch accuracy
acc = sess.run(accuracy, feed_dict={x: x_train, y: y_train})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: x_train, y: y_train})
print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print("Optimization Finished!")
</code></pre>
| 1 |
2016-09-20T23:46:24Z
| 39,605,486 |
<p>You can write your own next-batch function that, given a NumPy array and a pair of indices, returns that slice of the array for you.</p>
<pre><code>def nextbatch(x,i,j):
return x[i:j,...]
</code></pre>
<p>You could also pass in the step you are on and use modulo arithmetic to wrap around, but this is the basic version that will get it working.</p>
<p>As for the reshape, use:</p>
<pre><code>x_train = np.reshape(x_train,(batch_size, n_steps, n_input))
</code></pre>
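<p>For illustration, here is a rough sketch of how that could be wired into your training loop, reusing the variables already defined in your script (<code>x_train</code>, <code>y_train_onehot</code>, <code>batch_size</code>, <code>n_steps</code>, <code>n_input</code>, <code>sess</code>, <code>optimizer</code>) and using modulo to wrap around the dataset; treat it as a starting point, not a drop-in. Note also that <code>blaine[:, 1:480]</code> yields 479 columns, so you probably want <code>1:481</code> if each row is really meant to hold 30*16 = 480 features.</p>
<pre><code>def nextbatch(x, i, j):
    # return rows i..j-1 of the array
    return x[i:j, ...]

num_examples = x_train.shape[0]
step = 1
while step * batch_size < training_iters:
    start = (step * batch_size) % num_examples  # wrap around with modulo
    end = min(start + batch_size, num_examples)
    batch_x = nextbatch(x_train, start, end)
    batch_y = nextbatch(y_train_onehot.values, start, end)
    # NumPy equivalent of tf.reshape: flat (batch, 480) rows into (batch, 30, 16)
    batch_x = np.reshape(batch_x, (-1, n_steps, n_input))
    sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
    step += 1
</code></pre>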
| 2 |
2016-09-20T23:57:24Z
|
[
"python",
"numpy",
"machine-learning",
"tensorflow"
] |
D3 Line with JSON Data, Not Rendering
| 39,605,488 |
<p>I have a Flask webserver that takes a dict of Python data and uses the jsonify function to return a JSON object when a GET is called on <code>/data</code>. The JSON object is not a nested list (see sample below) like most other examples on here.</p>
<p>I've been attempting to take that JSON data and pass it into my <code>d3.svg.line()</code> function, but it appears that something is wrong with the data I'm passing into it. My webpage renders the axes, but no line appears.</p>
<p>Inspecting the elements shows that the x and y axes are populated with my data, (<code><path class="domain" d="M0,6V0H890V6"></path></code>) but my <code><path class="line"></path></code> is empty.</p>
<p>I'm running a map function to convert my dates and values into their correct formats and returning them as an array. Running <code>console.log</code> on this function shows an output of a valid JSON object.</p>
<p>Can anybody help me out with where I'm going wrong here? Should I reformat my jsonified object to be a nested list instead and then populate my data object with <code>forEach</code>?</p>
<p>Below is my code and JSON sample:</p>
<pre><code><!DOCTYPE html>
<meta charset="utf-8">
<style>
body { font: 12px Arial; }
path {
stroke: steelblue;
stroke-width: 2;
fill: none;
}
.axis path, .axis line {
fill: none;
stroke: grey;
stroke-width: 1;
shape-rendering: crispEdges;
}
</style>
<body>
<script src="//d3js.org/d3.v3.min.js"></script>
<script>
var margin = {top: 20, right: 20, bottom: 30, left: 50},
width = 960 - margin.left - margin.right,
height = 500 - margin.top - margin.bottom;
var formatDate = d3.time.format("%Y-%m-%d %H:%M:%S").parse;
var x = d3.time.scale().range([0,width]);
var y = d3.scale.linear().range([height,0]);
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom");
var yAxis = d3.svg.axis()
.scale(y)
.orient("left");
var valueline = d3.svg.line()
.x(function(d) {return x(d.timeStamps); })
.y(function(d) {return y(d.outTemps); });
var svg = d3.select("body")
.append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform",
"translate (" + margin.left + "," + margin.top + ")");
d3.json("/data",function(error, data) {
function type(d) {
d.timeStamps =
d.timeStamps.map(function (time) {return formatDate(time) } );
d.outTemps = d.outTemps.map(function (temp) {return parseFloat(temp)});
};
x.domain(d3.extent(data, function (d) {return d.timeStamps}));
y.domain(d3.extent(data, function(d) { return d.outTemps; }));
svg.append("path")
.datum(data)
.append("path")
.attr("class", "line")
.attr("d", valueline(data));
svg.append("path")
.attr("class", "line")
.attr("d", valueline(data));
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(0," + height + ")")
.call(xAxis);
svg.append("g")
.attr("class", "y axis")
.call(yAxis);
});
</script>
</body>
{
"outTemps": [
"79.7",
"79.7",
"79.8",
],
"timeStamps": [
"2016-09-20 19:15:07",
"2016-09-20 19:10:07",
"2016-09-20 19:05:11",
]
</code></pre>
| 1 |
2016-09-20T23:57:34Z
| 39,605,814 |
<p>Your data is in a strange format for <code>d3</code>ing. <code>d3</code> prefers arrays of objects where each property of the object represents an <code>x</code> or a <code>y</code>. So your data properly formatted should look something like this:</p>
<pre><code>[{
"timeStamps": "2016-09-20T23:15:07.000Z",
"outTemps": 79.7
}, {
"timeStamps": "2016-09-20T23:10:07.000Z",
"outTemps": 79.7
}, {
"timeStamps": "2016-09-20T23:05:11.000Z",
"outTemps": 79.8
}]
</code></pre>
<p>Further, my guess is that your <code>type</code> function is an attempt at re-formatting the data but you aren't even calling it...</p>
<p>Finally, if you want to just re-format your data with JavaScript, it would look something like this:</p>
<pre><code> data = data.timeStamps.map(function(d, i) {
return {
timeStamps: formatDate(d),
outTemps: parseFloat(data.outTemps[i])
}
});
</code></pre>
<p>With that correction, <a href="http://plnkr.co/edit/yLWhz6nHoNFFUw1L99Iv?p=preview" rel="nofollow">here's your running code and plot</a>.</p>
| 0 |
2016-09-21T00:41:49Z
|
[
"javascript",
"python",
"json",
"d3.js",
"svg"
] |
twist dataframe by rank
| 39,605,512 |
<p>consider the dataframe <code>df</code></p>
<pre><code>np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(4, 5), columns=list('ABCDE'))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/VuXYh.png" rel="nofollow"><img src="http://i.stack.imgur.com/VuXYh.png" alt="enter image description here"></a></p>
<hr>
<p>I want a dataframe where the columns are ranks and each row is <code>['A', 'B', 'C', 'D', 'E']</code> in rank order.</p>
<p><strong><em>ranks</em></strong> </p>
<pre><code>df.rank(1).astype(int)
</code></pre>
<p><a href="http://i.stack.imgur.com/VkC62.png" rel="nofollow"><img src="http://i.stack.imgur.com/VkC62.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>expected results</em></strong></p>
<p><a href="http://i.stack.imgur.com/UtYAi.png" rel="nofollow"><img src="http://i.stack.imgur.com/UtYAi.png" alt="enter image description here"></a></p>
| 1 |
2016-09-21T00:00:11Z
| 39,605,942 |
<p>Here's one way:</p>
<pre><code>In [90]: df
Out[90]:
A B C D E
0 0.444939 0.407554 0.460148 0.465239 0.462691
1 0.016545 0.850445 0.817744 0.777962 0.757983
2 0.934829 0.831104 0.879891 0.926879 0.721535
3 0.117642 0.145906 0.199844 0.437564 0.100702
In [91]: df2 = df.apply(lambda row: df.columns[np.argsort(row)], axis=1)
In [92]: df2
Out[92]:
A B C D E
0 B A C E D
1 A E D C B
2 E B C D A
3 E A B C D
</code></pre>
<p>The new DataFrame has the same column index as <code>df</code>, but that can be fixed:</p>
<pre><code>In [93]: df2.columns = range(1, 1 + df2.shape[1])
In [94]: df2
Out[94]:
1 2 3 4 5
0 B A C E D
1 A E D C B
2 E B C D A
3 E A B C D
</code></pre>
<hr>
<p>Here's another way. This one converts the DataFrame to a numpy array, applies <code>argsort</code> on axis 1, uses that to index <code>df.columns</code>, and puts the result back into a DataFrame.</p>
<pre><code>In [110]: pd.DataFrame(df.columns[np.array(df).argsort(axis=1)], columns=range(1, 1 + df.shape[1]))
Out[110]:
1 2 3 4 5
0 B A C E D
1 A E D C B
2 E B C D A
3 E A B C D
</code></pre>
| 3 |
2016-09-21T00:57:18Z
|
[
"python",
"pandas",
"numpy"
] |
twist dataframe by rank
| 39,605,512 |
<p>consider the dataframe <code>df</code></p>
<pre><code>np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(4, 5), columns=list('ABCDE'))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/VuXYh.png" rel="nofollow"><img src="http://i.stack.imgur.com/VuXYh.png" alt="enter image description here"></a></p>
<hr>
<p>I want a dataframe where the columns are ranks and each row is <code>['A', 'B', 'C', 'D', 'E']</code> in rank order.</p>
<p><strong><em>ranks</em></strong> </p>
<pre><code>df.rank(1).astype(int)
</code></pre>
<p><a href="http://i.stack.imgur.com/VkC62.png" rel="nofollow"><img src="http://i.stack.imgur.com/VkC62.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>expected results</em></strong></p>
<p><a href="http://i.stack.imgur.com/UtYAi.png" rel="nofollow"><img src="http://i.stack.imgur.com/UtYAi.png" alt="enter image description here"></a></p>
| 1 |
2016-09-21T00:00:11Z
| 39,608,310 |
<p>Use <code>stack</code>, <code>reset_index</code>, and <code>pivot</code></p>
<pre><code>df.rank(1).astype(int).stack().reset_index() \
.pivot('level_0', 0, 'level_1').rename_axis(None)
</code></pre>
<p><a href="http://i.stack.imgur.com/plUw1.png" rel="nofollow"><img src="http://i.stack.imgur.com/plUw1.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>Timing</em></strong></p>
<p><a href="http://i.stack.imgur.com/l3FFp.png" rel="nofollow"><img src="http://i.stack.imgur.com/l3FFp.png" alt="enter image description here"></a></p>
| 1 |
2016-09-21T05:39:08Z
|
[
"python",
"pandas",
"numpy"
] |
twist dataframe by rank
| 39,605,512 |
<p>consider the dataframe <code>df</code></p>
<pre><code>np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(4, 5), columns=list('ABCDE'))
df
</code></pre>
<p><a href="http://i.stack.imgur.com/VuXYh.png" rel="nofollow"><img src="http://i.stack.imgur.com/VuXYh.png" alt="enter image description here"></a></p>
<hr>
<p>I want a dataframe where the columns are ranks and each row is <code>['A', 'B', 'C', 'D', 'E']</code> in rank order.</p>
<p><strong><em>ranks</em></strong> </p>
<pre><code>df.rank(1).astype(int)
</code></pre>
<p><a href="http://i.stack.imgur.com/VkC62.png" rel="nofollow"><img src="http://i.stack.imgur.com/VkC62.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>expected results</em></strong></p>
<p><a href="http://i.stack.imgur.com/UtYAi.png" rel="nofollow"><img src="http://i.stack.imgur.com/UtYAi.png" alt="enter image description here"></a></p>
| 1 |
2016-09-21T00:00:11Z
| 39,610,093 |
<p>Here's another way.</p>
<pre><code>In [5]: df1 = df.rank(1).astype(int)
In [6]: df3 = df1.replace({rank: name for rank, name in enumerate(df1.columns, 1)})
In [7]: df3.columns = range(1, 1 + df3.shape[1])
In [8]: df3
Out[8]:
1 2 3 4 5
0 B A C E D
1 A E D C B
2 E B C D A
3 B C D E A
</code></pre>
<p>Yet another way.</p>
<pre><code>In [6]: ranks = df.rank(axis=1).astype(int)-1
In [7]: new_values = df.columns.values.take(ranks)
In [8]: pd.DataFrame(new_values)
Out[8]:
0 1 2 3 4
0 B A C E D
1 A E D C B
2 E B C D A
3 B C D E A
</code></pre>
| 1 |
2016-09-21T07:28:48Z
|
[
"python",
"pandas",
"numpy"
] |
ImportError: No module named mako.util when running airflow
| 39,605,604 |
<p>I'm trying to follow the tutorial on here: <a href="http://pythonhosted.org/airflow/tutorial.html" rel="nofollow">http://pythonhosted.org/airflow/tutorial.html</a></p>
<p>but I'm using a Mac, so I had to install Python via <code>brew</code>, which then comes with <code>pip</code>, which I used to install <code>airflow</code>. However, that didn't quite work either, so I then tried to create a <code>virtualenv</code> in which to install <code>airflow</code>, and it is still giving me this <code>ImportError: No module named mako.util</code>.</p>
<p>not sure if it matters, but here's my setup:</p>
<pre><code>(airflow) [davidtian: airflow]$ python --version
Python 2.7.12
(airflow) [davidtian: airflow]$ pip --version
pip 8.1.2 from /Users/someone/Desktop/blah/airflow/airflow/lib/python2.7/site-packages (python 2.7)
(airflow) [davidtian: airflow]$
</code></pre>
<p>How do I install this <code>mako.util</code> module?</p>
| 0 |
2016-09-21T00:14:21Z
| 39,608,524 |
<p>Finally figured it out after trying a bunch of things. For one, the Python that comes with the Mac apparently doesn't work very well here; you have to <code>brew install python</code> instead. That Python comes with <code>pip</code> by default.</p>
<p>I had to actually <code>sudo pip uninstall airflow</code> and then install it again.</p>
| 0 |
2016-09-21T05:55:51Z
|
[
"python",
"python-2.7",
"mako",
"airflow"
] |
Access a global variable to track status in multiprocessing
| 39,605,634 |
<p>I am writing a multiprocessing process that I want to monitor the status of. How can I access my_var from that context?</p>
<pre><code>from multiprocessing import Process
import time
my_var = list()
def alter_my_var():
global my_var
for x in range(10):
my_var.append(x)
time.sleep(1)
p = Process(target=alter_my_var)
p.start()
while p.is_alive():
print "Length of my_var is %i" % len(my_var)
time.sleep(1)
p.join()
print "Done - final length of my_var is %s" % len(my_var)
</code></pre>
<p>Thanks</p>
| 1 |
2016-09-21T00:18:53Z
| 39,605,728 |
<p>Use a <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.sharedctypes.multiprocessing.Manager" rel="nofollow"><em>Manager</em></a>: each process gets its own copy of a plain list, so you are not sharing one object; with <em>Manager().list()</em> you are:</p>
<pre><code>from multiprocessing import Process, Manager
import time
my_var = Manager().list()
def alter_my_var(my_var):
for x in range(10):
my_var.append(x)
time.sleep(1)
p = Process(target=alter_my_var, args=(my_var, ))
p.start()
while p.is_alive():
print "Length of my_var is %i" % len(my_var)
time.sleep(1)
p.join()
print "Done - final length of my_var is %s" % len(my_var)
</code></pre>
<p>If we paste the code into Ipython, you can see the output:</p>
<pre><code>## -- End pasted text --
Length of my_var is 0
Length of my_var is 1
Length of my_var is 3
Length of my_var is 4
Length of my_var is 5
Length of my_var is 6
Length of my_var is 7
Length of my_var is 8
Length of my_var is 9
Length of my_var is 10
Done - final length of my_var is 10
</code></pre>
| 1 |
2016-09-21T00:31:01Z
|
[
"python"
] |
Access a global variable to track status in multiprocessing
| 39,605,634 |
<p>I am writing a multiprocessing process that I want to monitor the status of. How can I access my_var from that context?</p>
<pre><code>from multiprocessing import Process
import time
my_var = list()
def alter_my_var():
global my_var
for x in range(10):
my_var.append(x)
time.sleep(1)
p = Process(target=alter_my_var)
p.start()
while p.is_alive():
print "Length of my_var is %i" % len(my_var)
time.sleep(1)
p.join()
print "Done - final length of my_var is %s" % len(my_var)
</code></pre>
<p>Thanks</p>
| 1 |
2016-09-21T00:18:53Z
| 39,605,730 |
<p>Try this:</p>
<pre><code>from multiprocessing import Process, Pipe
import time
my_var = list()
def alter_my_var(my_var, conn):
for x in range(10):
my_var.append(x)
conn.send(len(my_var))
time.sleep(1)
global my_var
parent_conn, child_conn = Pipe()
p = Process(target=alter_my_var, args=(my_var,child_conn,))
p.start()
while p.is_alive():
myvar_len = parent_conn.recv()
print "Length of my_var is %i" % myvar_len
time.sleep(1)
p.join()
print "Done - final length of my_var is %s" % myvar_len
</code></pre>
| 0 |
2016-09-21T00:31:03Z
|
[
"python"
] |
Access a global variable to track status in multiprocessing
| 39,605,634 |
<p>I am writing a multiprocessing process that I want to monitor the status of. How can I access my_var from that context?</p>
<pre><code>from multiprocessing import Process
import time
my_var = list()
def alter_my_var():
global my_var
for x in range(10):
my_var.append(x)
time.sleep(1)
p = Process(target=alter_my_var)
p.start()
while p.is_alive():
print "Length of my_var is %i" % len(my_var)
time.sleep(1)
p.join()
print "Done - final length of my_var is %s" % len(my_var)
</code></pre>
<p>Thanks</p>
| 1 |
2016-09-21T00:18:53Z
| 39,605,736 |
<p>You are using <code>multiprocessing</code>, not <code>threading</code>. Processes run in a different memory space; variables get copied to the child process and no longer point to the same memory address. Therefore, what you modify in the child process cannot be read by the main process. </p>
<p>If you don't care whether it's a process or a thread, try changing <code>Process</code> to <code>Thread</code> and it will work, as sketched below.</p>
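<p>A minimal sketch of that swap (only the import and the constructor line change relative to the question's Python 2 code):</p>
<pre><code>from threading import Thread
import time

my_var = list()

def alter_my_var():
    for x in range(10):
        my_var.append(x)
        time.sleep(1)

p = Thread(target=alter_my_var)  # a thread shares memory with the main program
p.start()

while p.is_alive():
    print "Length of my_var is %i" % len(my_var)
    time.sleep(1)

p.join()
print "Done - final length of my_var is %s" % len(my_var)
</code></pre>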
<p><strong>Output</strong>:</p>
<pre><code>Length of my_var is 0
Length of my_var is 1
Length of my_var is 2
Length of my_var is 3
Length of my_var is 4
Length of my_var is 6
Length of my_var is 7
Length of my_var is 8
Length of my_var is 9
Length of my_var is 10
Done - final length of my_var is 10
</code></pre>
<p><strong>Edit</strong></p>
<p>As other people say, if you want to keep using a process you have to build some kind of IPC (inter-process communication). You can use queues, pipes, the Manager, etc. for that.</p>
| 2 |
2016-09-21T00:32:51Z
|
[
"python"
] |
How do I pull a recurring key from a JSON?
| 39,605,640 |
<p>I'm new to python (and coding in general), I've gotten this far but I'm having trouble. I'm querying against a web service that returns a json file with information on every employee. I would like to pull just a couple of attributes for each employee, but I'm having some trouble.</p>
<p>I have this script so far:</p>
<pre><code>import json
import urllib2
req = urllib2.Request('http://server.company.com/api')
response = urllib2.urlopen(req)
the_page = response.read()
j = json.loads(the_page)
print j[1]['name']
</code></pre>
<p>The JSON that it returns looks like this...</p>
<pre><code>{
"name": bill jones,
"address": "123 something st",
"city": "somewhere",
"state": "somestate",
"zip": "12345",
"phone_number": "800-555-1234",
},
{
"name": jane doe,
"address": "456 another ave",
"city": "metropolis",
"state": "ny",
"zip": "10001",
"phone_number": "555-555-5554",
},
</code></pre>
<p>You can see that with the script I can return the name of employee in index 1. But I would like to have something more along the lines of: <code>print j[**0 through len(j)**]['name']</code> so it will print out the name (and preferably the phone number too) of every employee in the json list.</p>
<p>I'm fairly sure I'm approaching something wrong, but I need some feedback and direction.</p>
| 4 |
2016-09-21T00:19:21Z
| 39,605,651 |
<p>Your JSON is a <code>list</code> of <code>dict</code> objects. By doing <code>j[1]</code>, you are accessing the item in the list at index <code>1</code>. In order to get all the records, you need to iterate over all the elements of the list:</p>
<pre><code>for item in j:
print item['name']
</code></pre>
<p>where <code>j</code> is the result of <code>j = json.loads(the_page)</code>, as in your question.</p>
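<p>Since you mentioned wanting the phone number as well, the same loop can print both fields (assuming every record has a <code>phone_number</code> key, as in your sample):</p>
<pre><code>import json
import urllib2

req = urllib2.Request('http://server.company.com/api')
the_page = urllib2.urlopen(req).read()
j = json.loads(the_page)

for item in j:
    print item['name'], item['phone_number']
</code></pre>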
| 6 |
2016-09-21T00:21:43Z
|
[
"python",
"json"
] |
How do I pull a recurring key from a JSON?
| 39,605,640 |
<p>I'm new to python (and coding in general), I've gotten this far but I'm having trouble. I'm querying against a web service that returns a json file with information on every employee. I would like to pull just a couple of attributes for each employee, but I'm having some trouble.</p>
<p>I have this script so far:</p>
<pre><code>import json
import urllib2
req = urllib2.Request('http://server.company.com/api')
response = urllib2.urlopen(req)
the_page = response.read()
j = json.loads(the_page)
print j[1]['name']
</code></pre>
<p>The JSON that it returns looks like this...</p>
<pre><code>{
"name": bill jones,
"address": "123 something st",
"city": "somewhere",
"state": "somestate",
"zip": "12345",
"phone_number": "800-555-1234",
},
{
"name": jane doe,
"address": "456 another ave",
"city": "metropolis",
"state": "ny",
"zip": "10001",
"phone_number": "555-555-5554",
},
</code></pre>
<p>You can see that with the script I can return the name of employee in index 1. But I would like to have something more along the lines of: <code>print j[**0 through len(j)**]['name']</code> so it will print out the name (and preferably the phone number too) of every employee in the json list.</p>
<p>I'm fairly sure I'm approaching something wrong, but I need some feedback and direction.</p>
| 4 |
2016-09-21T00:19:21Z
| 39,605,815 |
<p>Slightly nicer for mass-conversions than repeated <code>dict</code> lookup is using <a href="https://docs.python.org/3/library/operator.html#operator.itemgetter" rel="nofollow"><code>operator.itemgetter</code></a>:</p>
<pre><code>from future_builtins import map # Only on Py2, to get lazy, generator based map
from operator import itemgetter
for name, phone_number in map(itemgetter('name', 'phone_number'), j):
print name, phone_number
</code></pre>
<p>If you needed to look up individual things as needed (so you didn't always need <code>name</code> or <code>phone_number</code>), then regular <code>dict</code> lookups would make sense, this just optimizes the case where you're always retrieving the same set of items by pushing work to builtin functions (which, on the CPython reference interpreter, are implemented in C, so they run a bit faster than hand-rolled code). Using a generator based <code>map</code> isn't strictly necessary, but it avoids making (potentially large) temporary <code>list</code>s when you're just going to iterate the result anyway.</p>
<p>It's basically just a faster version of:</p>
<pre><code>for emp in j:
name, phone_number = emp['name'], emp['phone_number']
print name, phone_number
</code></pre>
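<p>With the two sample records from the question, either version would print:</p>
<pre><code>bill jones 800-555-1234
jane doe 555-555-5554
</code></pre>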
| 2 |
2016-09-21T00:41:57Z
|
[
"python",
"json"
] |
Searching for and manipulating the content of a keyword in a huge file
| 39,605,704 |
<p>I have a huge HTML file that I have converted to text file. (The file is Facebook home page's source). Assume the text file has a specific keyword in some places of it. For example: "some_keyword: [bla bla]". How would I print all the different bla blas that are followed by some_keyword?</p>
<pre><code>{id:"1126830890",name:"Hillary Clinton",firstName:"Hillary"}
</code></pre>
<p>Imagine there are 50 different names with this format in the page. How would I print all the names followed by "name:", considering the text is very large and crashes when you read() it or try to search through its lines. </p>
<p>Sample File: </p>
<p><code>shortProfiles:{"100000094503825":{id:"100000094503825",name:"Bla blah",firstName:"Blah",vanity:"blah",thumbSrc:"https://scontent-lax3-1.xx.fbcdn.net/v/t1.0-1/c19.0.64.64/p64x64/10354686_10150004552801856_220367501106153455_n.jpg?oh=3b26bb13129d4f9a482d9c4115b9eeb2&oe=5883062B",uri:"https://www.facebook.com/blah",gender:2,i18nGender:16777216,type:"friend",is_friend:true,mThumbSrcSmall:null,mThumbSrcLarge:null,dir:null,searchTokens:["Bla"],alternateName:"",is_nonfriend_messenger_contact:false},"1347968857":</code></p>
| 0 |
2016-09-21T00:28:27Z
| 39,605,735 |
<p>Based on your comment: since you are the person responsible for writing the data to the file, write the data in JSON format and read it back from the file using <a href="https://docs.python.org/2/library/json.html#json.loads" rel="nofollow"><code>json.loads()</code></a> as:</p>
<pre><code>import json
json_file = open('/path/to/your_file')
json_str = json_file.read()
json_data = json.loads(json_str)
for item in json_data:
print item['name']
</code></pre>
<p><strong>Explanation:</strong></p>
<p>Let's say <code>data</code> is the variable storing </p>
<pre><code>{id:"1126830890",name:"Hillary Clinton",firstName:"Hillary"}
</code></pre>
<p>which changes dynamically within the part of your code where you perform the write operation on the file. Instead, append it to a list as:</p>
<pre><code>a = []
for item in page_content:
# data = some xy logic on HTML file
a.append(data)
</code></pre>
<p>Now write this list to the file using <a href="https://docs.python.org/2/library/json.html#json.dump" rel="nofollow"><code>json.dump()</code></a>, for example:</p>
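<p>A minimal sketch of that last step (the file path is just a placeholder):</p>
<pre><code>import json

with open('/path/to/your_file', 'w') as f:
    json.dump(a, f)  # a is the list built in the loop above
</code></pre>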
| 0 |
2016-09-21T00:32:34Z
|
[
"python"
] |
Searching for and manipulating the content of a keyword in a huge file
| 39,605,704 |
<p>I have a huge HTML file that I have converted to text file. (The file is Facebook home page's source). Assume the text file has a specific keyword in some places of it. For example: "some_keyword: [bla bla]". How would I print all the different bla blas that are followed by some_keyword?</p>
<pre><code>{id:"1126830890",name:"Hillary Clinton",firstName:"Hillary"}
</code></pre>
<p>Imagine there are 50 different names with this format in the page. How would I print all the names followed by "name:", considering the text is very large and crashes when you read() it or try to search through its lines. </p>
<p>Sample File: </p>
<p><code>shortProfiles:{"100000094503825":{id:"100000094503825",name:"Bla blah",firstName:"Blah",vanity:"blah",thumbSrc:"https://scontent-lax3-1.xx.fbcdn.net/v/t1.0-1/c19.0.64.64/p64x64/10354686_10150004552801856_220367501106153455_n.jpg?oh=3b26bb13129d4f9a482d9c4115b9eeb2&oe=5883062B",uri:"https://www.facebook.com/blah",gender:2,i18nGender:16777216,type:"friend",is_friend:true,mThumbSrcSmall:null,mThumbSrcLarge:null,dir:null,searchTokens:["Bla"],alternateName:"",is_nonfriend_messenger_contact:false},"1347968857":</code></p>
| 0 |
2016-09-21T00:28:27Z
| 39,625,346 |
<p>I just wanted to throw this out there, even though I agree with all the comments about dealing with the HTML directly or using Facebook's API (probably the safest way): open file objects in Python can be used as generators yielding lines without reading the entire file into memory, and the <code>re</code> module can be used to extract information from text.</p>
<p>This can be done like so:</p>
<pre><code>import re
regex = re.compile(r"(?:some_keyword:\s\[)(.*?)\]")
with open("filename.txt", "r") as fp:
for line in fp:
for match in regex.findall(line):
print(match)
</code></pre>
<p>Of course this only works if the file is in a "line-based" format, but the end effect is that only the line you are on is loaded into memory at any one time. </p>
<p><a href="https://docs.python.org/2/library/re.html" rel="nofollow">here</a> is the Python 2 docs for the <code>re</code> module </p>
<p><a href="https://docs.python.org/3.5/library/re.html" rel="nofollow">here</a> is the Python 3 docs for the <code>re</code> module</p>
<p>I cannot find documentation which details the generator capabilities of file objects in Python, it seems to be one of those well-known secrets...Please feel free to edit and remove this paragraph if you know where in the Python docs this is detailed. </p>
| 0 |
2016-09-21T19:48:45Z
|
[
"python"
] |
Is there a version of __file__ that when used in a function, will get the name of the file that uses the library?
| 39,605,747 |
<p>So, as a joke I wrote a version of goto for python that I wanted to use as a library. The function I wrote for it is as follows.</p>
<pre><code>def goto(loc):
exec(open(__file__).read().split("# "+str(loc))[1],globals())
quit()
</code></pre>
<p>This works for cases where it is in the file where goto is used, such as this:</p>
<pre><code>def goto(loc):
exec(open(__file__).read().split("# "+str(loc))[1],globals())
quit()
# hi
print("test")
goto("hi")
</code></pre>
<p>However, if I import goto in another file, it doesn't work as</p>
<pre><code>__file__
</code></pre>
<p>will always return the file with the function in it, not the one it is used in. Is there an equivalent of file that will allow me to import a file with the goto function in it and have it work?</p>
| 1 |
2016-09-21T00:33:54Z
| 39,605,823 |
<p>Yes, you can if you inspect the call stack:</p>
<pre><code>import inspect
def goto():
try:
frame = inspect.currentframe()
print(frame.f_back.f_globals['__file__'])
finally:
# break reference cycles
# https://docs.python.org/3.6/library/inspect.html#the-interpreter-stack
del frame
goto()
</code></pre>
<p>Note that the call stack is actually python implementation dependent -- So for some python interpreters, this might not actually work...</p>
<p>Of course, I've also shown you how to get the caller's globals (<code>locals</code> can be found via <code>frame.f_back.f_locals</code> for all of your <code>exec</code>ing needs).</p>
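<p>Putting that together with the goto from the question, a rough sketch (with the same caveats as the original joke implementation) might look like this:</p>
<pre><code>import inspect

def goto(loc):
    caller = inspect.currentframe().f_back
    caller_file = caller.f_globals['__file__']   # the file that called goto
    source = open(caller_file).read()
    exec(source.split("# " + str(loc))[1], caller.f_globals)
    quit()
</code></pre>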
<p>Also note that <a href="https://docs.python.org/3.6/reference/datamodel.html#frame-objects" rel="nofollow">it looks like you can implement a jump command</a> by writing to the frame's <code>f_lineno</code> -- which might be a better way to implement your <code>goto</code>. I've never done this though so I can't really advise you on how to actually code it up :-)</p>
| 0 |
2016-09-21T00:42:45Z
|
[
"python",
"python-3.x",
"goto"
] |
Double loop takes time
| 39,605,839 |
<p>I have a script which takes a lot of time and can't finish so far after 2 days...
I parsed 1 file into 2 dictionaries as the following:</p>
<pre><code>gfftree = {'chr1':[(gene_id, gstart, gend),...], 'chr2':[(gene_id, gstart, gend),...],...}
TElocation = {'chr1':[(TE_id, TEstart, TEend),...], 'chr2':[(TE_id, TEstart, TEend),...],...}
</code></pre>
<p>.</p>
<p>--The aim is to find TE_id whose TEstart or TEend or both are located between gene_id' gstart and gend in each chr(key).</p>
<h1>The above should be changed to "find TE_id whose range(TEstart, TEend) overlaps with any gene_id's range(gstart,gend)"</h1>
<p>Here is my code:</p>
<pre><code>TE_in_TSS = []
for TErange in TElocation[chromosome]:
TE_id, TEstart, TEend = TErange
for item in gfftree[chromosome]:
gene, gstart, gend = item
if len(list(set(range(int(gstart),int(gend)+1)) & set(range(int(TEstart),int(TEend)+1)))) > 0:
TE_in_TSS.append((gene, TE_id, TEstart, TEend))
else:
pass
</code></pre>
<p>So far I'm sure this loop is fine with small data, but when it comes to bigger one like 800,000 TE_id and 4,000 gene_id, it takes time...and I don't know if it could finish...</p>
| 6 |
2016-09-21T00:43:56Z
| 39,605,897 |
<p>The OP approach is <code>O(n*m)</code>, where <code>n</code> is the number of genes and <code>m</code> is the number of TEs. Rather than test each gene against each TE as in the OP, this approach leverages the ordered nature of the genes and TEs, and the specified rules of matching, to look at each gene and TE only once, except for the gene look-ahead described in <code>3.</code> below. This approach is <code>O(n + m)</code> provided that the average gene look-ahead is small relative to <code>n</code>. The sequence in which each gene and TE is visited is described by: </p>
<ol>
<li>After we finish testing the current TE against the current gene, we
get the next TE.</li>
<li>When the current TE's start position is past the current gene's end
position, we get the next gene until it's not.</li>
<li>If we find a matching TE/gene pair, we test each successive gene
against the current TE until there is no match, leaving the current
gene unchanged.</li>
</ol>
<hr>
<pre><code>def get_TE_in_TSS(genes, TEs):
TE_in_TSS = []
gene_pos, TE_pos = 0, 0
gene_count, TE_count = len(genes), len(TEs)
while gene_pos < gene_count:
while (TE_pos < TE_count) and (TEs[TE_pos][1] <= genes[gene_pos][2]):
match_gene_pos = gene_pos
while (match_gene_pos < gene_count) and (TEs[TE_pos][2] >= genes[match_gene_pos][1]):
TE_in_TSS.append((genes[match_gene_pos][0], TEs[TE_pos][0],
TEs[TE_pos][1], TEs[TE_pos][2]))
match_gene_pos += 1 # look ahead to see if this TE matches the next gene
TE_pos += 1
gene_pos += 1
return TE_in_TSS
</code></pre>
<hr>
<p><em>performance</em>, as reported by OP:</p>
<pre><code>1 second (compared to 2 days + for OP code) for 801,948 TEs, 6,007 genes
</code></pre>
<hr>
<p><em>test data:</em></p>
<pre><code>genes = (('HTR3A', 7, 9), ('ADAMTSL4', 10,100), ('THSD4',2000, 2800), ('PAPLN', 2850, 3000))
TEs = (('a', 10, 11), ('b', 13, 17), ('c', 50, 2500), ('d', 2550, 2700),
('e', 2800, 2900), ('f', 9999, 9999))
TE_in_TSS = get_TE_in_TSS(genes, TEs)
print(TE_in_TSS)
</code></pre>
<p><em>Output:</em></p>
<pre><code>[('ADAMTSL4', 'a', 10, 11), ('ADAMTSL4', 'b', 13, 17), ('ADAMTSL4', 'c', 50, 2500),
('THSD4', 'c', 50, 2500), ('THSD4', 'd', 2550, 2700), ('THSD4', 'e', 2800, 2900),
('PAPLN', 'e', 2800, 2900)]
</code></pre>
<hr>
<p>Note that the first 9 comments on this post refer to a more efficient <code>O(n * m)</code> approach that became outdated by clarified specs. </p>
| 3 |
2016-09-21T00:51:34Z
|
[
"python",
"for-loop"
] |
Double loop takes time
| 39,605,839 |
<p>I have a script which takes a lot of time and can't finish so far after 2 days...
I parsed 1 file into 2 dictionaries as the following:</p>
<pre><code>gfftree = {'chr1':[(gene_id, gstart, gend),...], 'chr2':[(gene_id, gstart, gend),...],...}
TElocation = {'chr1':[(TE_id, TEstart, TEend),...], 'chr2':[(TE_id, TEstart, TEend),...],...}
</code></pre>
<p>.</p>
<p>--The aim is to find TE_id whose TEstart or TEend or both are located between gene_id' gstart and gend in each chr(key).</p>
<h1>The above should be changed to "find TE_id whose range(TEstart, TEend) overlaps with any gene_id's range(gstart,gend)"</h1>
<p>Here is my code:</p>
<pre><code>TE_in_TSS = []
for TErange in TElocation[chromosome]:
TE_id, TEstart, TEend = TErange
for item in gfftree[chromosome]:
gene, gstart, gend = item
if len(list(set(range(int(gstart),int(gend)+1)) & set(range(int(TEstart),int(TEend)+1)))) > 0:
TE_in_TSS.append((gene, TE_id, TEstart, TEend))
else:
pass
</code></pre>
<p>So far I'm sure this loop is fine with small data, but when it comes to bigger one like 800,000 TE_id and 4,000 gene_id, it takes time...and I don't know if it could finish...</p>
| 6 |
2016-09-21T00:43:56Z
| 39,606,699 |
<p>Here is a solution using multiprocessing, compared against the nested-loop approaches.</p>
<p>I created two CSVs, one with 8k rows and one with 800 rows of (int, float1, float2) randomly generated numbers, and import them as below:</p>
<pre><code>import time
import itertools
start = time.time()
def f((TE_id, TEstart, TEend)):
a=[]
for gene, gstart, gend in gfftree['chr1']:
if (gstart <= TEstart <=gend) or (gstart<=TEend <=gend):
a.append((gene,TE_id,TEstart,TEend))
return a
'''
#slow
TEinTSS = []
for TE_id, TEstart, TEend in TElocation['chr1']:
for gene, gstart, gend in gfftree['chr1']:
if (gstart <= TEstart <=gend) or (gstart<=TEend <=gend):
TEinTSS.append((gene,TE_id,TEstart,TEend))
print len(TEinTSS)
print time.time()-start
#faster
TEinTSS = []
for things in TElocation['chr1']:
TEinTSS.extend(f(things))
print len(TEinTSS)
print time.time()-start
'''
#fastest (especially with multi-core, multithreading)
from multiprocessing import Pool
if __name__ == '__main__':
p=Pool()
TEinTSS = list(itertools.chain.from_iterable(p.imap_unordered(f, b)))
print len(TEinTSS)
print time.time() - start
</code></pre>
| 1 |
2016-09-21T02:43:43Z
|
[
"python",
"for-loop"
] |
Double loop takes time
| 39,605,839 |
<p>I have a script which takes a lot of time and can't finish so far after 2 days...
I parsed 1 file into 2 dictionaries as the following:</p>
<pre><code>gfftree = {'chr1':[(gene_id, gstart, gend),...], 'chr2':[(gene_id, gstart, gend),...],...}
TElocation = {'chr1':[(TE_id, TEstart, TEend),...], 'chr2':[(TE_id, TEstart, TEend),...],...}
</code></pre>
<p>.</p>
<p>--The aim is to find TE_id whose TEstart or TEend or both are located between gene_id' gstart and gend in each chr(key).</p>
<h1>The above should be changed to "find TE_id whose range(TEstart, TEend) overlaps with any gene_id's range(gstart,gend)"</h1>
<p>Here is my code:</p>
<pre><code>TE_in_TSS = []
for TErange in TElocation[chromosome]:
TE_id, TEstart, TEend = TErange
for item in gfftree[chromosome]:
gene, gstart, gend = item
if len(list(set(range(int(gstart),int(gend)+1)) & set(range(int(TEstart),int(TEend)+1)))) > 0:
TE_in_TSS.append((gene, TE_id, TEstart, TEend))
else:
pass
</code></pre>
<p>So far I'm sure this loop is fine with small data, but when it comes to bigger one like 800,000 TE_id and 4,000 gene_id, it takes time...and I don't know if it could finish...</p>
| 6 |
2016-09-21T00:43:56Z
| 39,622,265 |
<p>If the aim of the process is purely to find the gene IDs falling inside a specific start range and <em>you're not too worried about how you achieve this but are simply looking for the fastest solution</em>, then you may want to consider dropping the concept of a loop altogether and looking at a pre-existing solution mechanism.</p>
<p>Assuming your data is in CSV format, the following would suit your requirements, returning a dataframe containing the IDs, gene names and associated chromasones, grouped by chromasone.</p>
<p><strong>File: genometest.py</strong></p>
<pre><code>import pandas as pd
columns = ['id', 'chromasone', 'start', 'end', 'gene_name']
te_locations = pd.read_csv('Sequences/te.bed', delimiter='\t', header=None, names=columns)
gene_locations = pd.read_csv('Sequences/gene.bed', delimiter='\t', header=None, names=columns)
dataframe = pd.merge(te_locations, gene_locations, on=['gene_name', 'chromasone'], how='outer', suffixes=('_te', '_ge'))
dataset = dataframe.query('start_te >= start_ge & start_te <= end_ge')[['id_te', 'gene_name', 'chromasone']]
dataset.groupby('chromasone')
</code></pre>
<p><strong>Input sizes</strong></p>
<ul>
<li>TE_Locations dataset size = 337848</li>
<li>Gene_Locations dataset size = 50307</li>
</ul>
<p><strong>Output size</strong></p>
<ul>
<li>dataset size = 7085</li>
</ul>
<p><strong>Performance</strong></p>
<pre><code>$ python3 -m timeit 'import genometest'
10 loops, best of 3: 0.391 usec per loop
</code></pre>
| 1 |
2016-09-21T16:49:20Z
|
[
"python",
"for-loop"
] |
How to check two POS tags are in the same category in NLTK?
| 39,605,864 |
<p>Like the title says, how can I check two POS tags are in the same category?</p>
<p>For example,</p>
<pre><code>go -> VB
goes -> VBZ
</code></pre>
<p>These two words are both verbs. Or,</p>
<pre><code>bag -> NN
bags -> NNS
</code></pre>
<p>These two are both nouns.
So my question is that whether there exists any function in NLTK to check if two given tags are in the same category?</p>
| 1 |
2016-09-21T00:46:26Z
| 39,606,287 |
<p>Not sure if this is what you are looking for, but you can tag with a <a href="http://www.nltk.org/book/ch05.html#reading-tagged-corpora" rel="nofollow"><code>universal</code> tagset</a>:</p>
<pre><code>from pprint import pprint
from collections import defaultdict
from nltk import pos_tag
from nltk.tokenize import sent_tokenize, word_tokenize
s = "I go. He goes. This bag is brown. These bags are brown."
d = defaultdict(list)
for sent in sent_tokenize(s):
text = word_tokenize(sent)
for value, tag in pos_tag(text, tagset='universal'):
d[tag].append(value)
pprint(dict(d))
</code></pre>
<p>Prints:</p>
<pre><code>{'.': ['.', '.', '.', '.'],
'ADJ': ['brown'],
'DET': ['This', 'These'],
'NOUN': ['bag', 'bags'],
'PRON': ['I', 'He'],
'VERB': ['go', 'goes', 'is', 'brown', 'are']}
</code></pre>
<p>Note how <code>bag</code> and <code>bags</code> fall into <code>NOUN</code> category and <code>go</code> and <code>goes</code> fall into <code>VERB</code>.</p>
| 0 |
2016-09-21T01:48:57Z
|
[
"python",
"nltk",
"pos-tagging"
] |
How to check two POS tags are in the same category in NLTK?
| 39,605,864 |
<p>Like the title says, how can I check two POS tags are in the same category?</p>
<p>For example,</p>
<pre><code>go -> VB
goes -> VBZ
</code></pre>
<p>These two words are both verbs. Or,</p>
<pre><code>bag -> NN
bags -> NNS
</code></pre>
<p>These two are both nouns.
So my question is that whether there exists any function in NLTK to check if two given tags are in the same category?</p>
| 1 |
2016-09-21T00:46:26Z
| 39,613,600 |
<p>Let's take the simple case first: Your corpus is tagged with the Brown tagset (that's what it looks like), and you'd be happy with the simple tags defined in the nltk's <a href="http://www.nltk.org/book/ch05.html#tab-universal-tagset" rel="nofollow">"universal" tagset</a>: <code>., ADJ, ADP, ADV, CONJ, DET, NOUN, NUM, PRON, PRT, VERB, X</code>, where the dot stands for "punctuation". In this case, simply load the nltk's map and use it with your data:</p>
<pre><code>tagmap = nltk.tag.mapping.tagset_mapping("en-brown", "universal")
if tagmap[tag1] == tagmap[tag2]:
print("The two words have the same part of speech")
</code></pre>
<p>If that's not your use case, you'll need to <em>manually</em> decide on a mapping from each individual tag to the simplified category you want to assign it to. If you are working with the Brown corpus tagset, you can see the tags and their meanings <a href="http://www.scs.leeds.ac.uk/amalgam/tagsets/brown.html" rel="nofollow">here,</a> or from within python like this:</p>
<pre><code>print(nltk.help.brown_tagset())
</code></pre>
<p>Study your tags and define a dictionary that maps each POS tag to your chosen category; people sometimes find it useful to just group Brown corpus tags by their first two letters, putting together "NN", "NN$", "NNS-HL", etc. You could create this particular mapping automatically like this:</p>
<pre><code>from nltk.corpus import brown
alltags = set(t for w, t in brown.tagged_words())
tagmap = {t: t[:2] for t in alltags}
</code></pre>
<p>Then you can customize this map according to your needs; e.g., to put all punctuation tags together in the category ".":</p>
<pre><code>for tag in tagmap:
if not tag.isalpha():
tagmap[tag] = "."
</code></pre>
<p>Once your <code>tagmap</code> is to your liking, use it like the one I imported from the <code>nltk</code>.</p>
<p>Finally, you might find it convenient to retag your entire corpus in one go, so that you can simply compare the assigned tags. If <code>corpus</code> is a list of tagged sentences in the format of the nltk's <code><corpus>.tagged_sents()</code> command (so <em>not</em> a corpus reader object), you can retag everything like this:</p>
<pre><code>newcorpus = []
for sent in corpus:
newcorpus.append( [ (w, tagmap[t]) for w, t in sent ] )
</code></pre>
| 1 |
2016-09-21T10:12:08Z
|
[
"python",
"nltk",
"pos-tagging"
] |
small program stops - why?
| 39,605,896 |
<p>Whats wrong with my code?
After the second "input" the program stops...</p>
<pre><code>convr = 0
x = input("Inform value: ")
y = input("Inform if is Dolar (D) or Euro (E): ")
convt = x * convr
if y == "D":
convr = 1/0.895
print (convt)
elif y == "E":
convr = 0.895
print (convt)
else:
print ("NOT ALLOWED!")
</code></pre>
| -1 |
2016-09-21T00:51:06Z
| 39,605,968 |
<p>The program doesn't stop; your value is just empty.
You can see what I mean by changing your print() statements to:</p>
<pre><code>print ('RESULT: ' + convt)
</code></pre>
<p>Now, instead of a blank line, you will get "Result: " printed to the screen.</p>
<p>The reason you don't get a value is because of this line:</p>
<pre><code>convt = x * convr
</code></pre>
<p>When this line runs, convr is still equal to zero, and anything multiplied by zero gives you nothing (here x is the string returned by input(), and a string times zero is the empty string). :)</p>
| 0 |
2016-09-21T01:01:16Z
|
[
"python",
"python-3.x",
"input"
] |
small program stops - why?
| 39,605,896 |
<p>Whats wrong with my code?
After the second "input" the program stops...</p>
<pre><code>convr = 0
x = input("Inform value: ")
y = input("Inform if is Dolar (D) or Euro (E): ")
convt = x * convr
if y == "D":
convr = 1/0.895
print (convt)
elif y == "E":
convr = 0.895
print (convt)
else:
print ("NOT ALLOWED!")
</code></pre>
| -1 |
2016-09-21T00:51:06Z
| 39,605,998 |
<p>Because your <code>x</code> variable is a string, you need to transform it into a number, e.g. <code>float</code> or <code>int</code>.</p>
<pre><code>x = float(input("Inform value: "))
y = input("Inform if is Dolar (D) or Euro (E): ")
if y == "D":
convr = 1/0.895
convt = x * convr
print (convt)
elif y == "E":
convr = 0.895
convt = x * convr
print (convt)
else:
print ("NOT ALLOWED!")
---------
# Inform value: 2345
# Inform if is Dolar (D) or Euro (E): D
# 2620.1117318435754
</code></pre>
| 3 |
2016-09-21T01:04:53Z
|
[
"python",
"python-3.x",
"input"
] |
How do I create a randomised quiz using a dictionary?
| 39,605,919 |
<p>I have been attempting to make a randomised quiz but can't make the dictionary work, please help! This is my code so far. </p>
<pre><code>Import random
questions_answers = {"Which is the comparison operator for Not Equal to? Enter the number of the correct answer, 1. = 2. == 3. != ":"3",
"How many paths through a program does an IF statement allow? 1. One path 2. Two paths 3. Three paths":"2",
"What is this curly bracket called { } ? ": "Braces",
"What does the statement 'print' do? 1. Output a hard copy of a program to a printer 2. Output a message on the screen 3. Print a hard copy of a flowchart to a printer":"2.",
"How many statements are there in this line of code: print('If I am 17, I can drive a car')? 1. There are two statements - 'print' and 'if' 2. There are no statements 3. There is one statement - 'print'":"1",
"What is a variable? 1. A variable is a number 2. A variable is a message input by the user 3. A variable is a location in memory that we use to store data": "3",
"In programming what name is given to the data type which represents decimal numbers?": "Float",
"In programming, what is iteration? 1. The repetition of steps within a program 2. The order in which instructions are carried out 3. A decision point in a program": "1",
"What is the name given to a loop that continues indefinitely? 1. An infinite loop 2. A forever loop 3.A continuous loop": "1",
"What values does a Boolean expression have? 1. Yes or No 2.True or False 3.Right or Wrong": "2",
"How many elements would the array 'student_marks(10)' hold? ": "10",
"In the array 'colours('Purple', 'Blue', 'Red', 'Green', 'Yellow')', what is the value of the element colours(3)?": "Green",
"What is the difference between an array and a list? 1. An array holds numbers, whereas a list holds strings 2. An array can only hold data of the same data type, whereas lists can hold values of different data types 3.An array holds both numbers and strings, whereas a list only holds numbers": "2",
"Why are functions used? 1. To make code more readable 2. To reduce repetition of code and to calculate a value 3. To make decisions": "2",
"What is a bug? 1. An error in a program 2. An error in an algorithm 3. A way of halting a program that is running": "1",
"What is a syntax error? 1. A mistake in the logic of the program 2. A spelling or grammatical mistake in the program 3. A way of halting a program that is running": "2",
"Which of the following contains a syntax error? 1. print('We love computing') 2. print('We love computing') 3. print(answer)": "2",
"Why are comments included in code? 1. So that the program's title is known 2. To make it clear who wrote the program 3. To help programmers understand how the program works": "3",
"If a syntax error is present, what will happen when the error is encountered? 1. The program will crash when the error is encountered 2. The program will behave unexpectedly 3. The program will not start": "1",
"What is an array? 1. A list of variables 2. A list of strings 3. A series of memory locations, each with the same name, that hold related data": "3"}
def ask_questions():
print ("You may quit this game at any time by pressing the letter Q")
while True:
rand_num == questions_asked
if input==answer_list subposition:
print ("Correct!")
score +=1
else:
print ("Incorrect Answer, the correct answer is" + answer_list(rand_num)
break
</code></pre>
| -5 |
2016-09-21T00:53:42Z
| 39,606,594 |
<p>Ok, I decided to do this because my python is a little rusty and I thought the question was interesting.</p>
<pre><code>import sys
import random

def ask_questions():
    score = 0
    print ("You may quit this game at any time by pressing the letter Q")
    while True:
        rand_q = random.choice(list(questions_answers.keys()))
        rand_q_answer = questions_answers[rand_q]
        user_input = input(rand_q + " ")  # Python 3: input() returns a string
        if user_input.strip().upper() == "Q":
            sys.exit()
        if user_input.strip().lower() == rand_q_answer.lower():
            print ("Correct!")
            score += 1
        else:
            print ("Incorrect Answer, the correct answer is", rand_q_answer)
            sys.exit()

ask_questions()
</code></pre>
<p>This code seems to work fine, although I'm sure it can be improved.</p>
| -1 |
2016-09-21T02:30:53Z
|
[
"python",
"dictionary"
] |
lambda function of another function but force fixed argument
| 39,605,985 |
<p>I just switched to Python from Matlab, and I want to use lambda function to map function <code>f1(x,y)</code> with multiple arguments to one argument function <code>f2(x)</code> for optimization.
I want that when I map the function <code>f2(x) <- f1(x,y=y1)</code> then <code>y</code> will stay constant no matter what <code>y1</code> changes, in Matlab this is true by default but if I try in Python, it keeps changing as the following examples</p>
<pre><code>>>> def f1(x,y):
>>> return (x+y)
>>> y1 = 2
>>> f2 = lambda x: f1(x,y1)
>>> f2(1)
3
</code></pre>
<p>I expect <code>f2(1)</code> stays <code>3</code> even if I change <code>y1</code>, however if I change <code>y1</code>, the whole <code>f1(1)</code> also changes as follows</p>
<pre><code>>>> y1 = 5
>>> f2(1)
6
</code></pre>
<p>I wonder is there a way that when I declare <code>f2 = lambda x: f1(x,y1)</code> then <code>f1</code> will take the value of <code>y1</code> at that time and fix it to <code>f2</code>. The reason for this because I want to dynamically create different functions for different scenarios then sum them all.
I'm still new to Python, please help, much appreciate.</p>
| 0 |
2016-09-21T01:03:11Z
| 39,606,008 |
<p>Try:</p>
<pre><code>f2 = lambda x, y=y1: f1(x,y)
</code></pre>
<p>Your issue has to do with <a href="http://docs.python-guide.org/en/latest/writing/gotchas/#late-binding-closures" rel="nofollow">how closures work in Python</a></p>
<p>Your version of the lambda function will use the current version of <code>y1</code>. You need to <em>capture</em> the value of <code>y1</code> on the line where you've defined the lambda function. To do that, you can define it as the default value of a parameter (the <code>y=y1</code> part).</p>
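<p>To see the difference with the question's example (default argument values are evaluated once, when the lambda is defined):</p>
<pre><code>def f1(x, y):
    return (x + y)

y1 = 2
f2 = lambda x, y=y1: f1(x, y)
print(f2(1))  # 3

y1 = 5
print(f2(1))  # still 3, because y=y1 was captured at definition time
</code></pre>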
| 2 |
2016-09-21T01:06:02Z
|
[
"python",
"function",
"lambda"
] |
lambda function of another function but force fixed argument
| 39,605,985 |
<p>I just switched to Python from Matlab, and I want to use lambda function to map function <code>f1(x,y)</code> with multiple arguments to one argument function <code>f2(x)</code> for optimization.
I want that when I map the function <code>f2(x) <- f1(x,y=y1)</code> then <code>y</code> will stay constant no matter what <code>y1</code> changes, in Matlab this is true by default but if I try in Python, it keeps changing as the following examples</p>
<pre><code>>>> def f1(x,y):
>>> return (x+y)
>>> y1 = 2
>>> f2 = lambda x: f1(x,y1)
>>> f2(1)
3
</code></pre>
<p>I expect <code>f2(1)</code> stays <code>3</code> even if I change <code>y1</code>, however if I change <code>y1</code>, the whole <code>f1(1)</code> also changes as follows</p>
<pre><code>>>> y1 = 5
>>> f2(1)
6
</code></pre>
<p>I wonder is there a way that when I declare <code>f2 = lambda x: f1(x,y1)</code> then <code>f1</code> will take the value of <code>y1</code> at that time and fix it to <code>f2</code>. The reason for this because I want to dynamically create different functions for different scenarios then sum them all.
I'm still new to Python, please help, much appreciate.</p>
| 0 |
2016-09-21T01:03:11Z
| 39,606,040 |
<p>As already pointed out, your issue comes down to how closures work. However, you really shouldn't be using a lambda for this - lambdas are for anonymous functions. Make a higher-order function with <code>def</code> statements instead:</p>
<pre><code>>>> def f1(x,y):
... return x + y
...
>>> def f1_factory(y):
... def f1_y(x):
... return f1(x,y)
... return f1_y
...
>>> f1_factory(6)(4)
10
>>> f1_factory(5)(4)
9
</code></pre>
<p>It also avoids the problem you encountered:</p>
<pre><code>>>> y = 3
>>> newfunc = f1_factory(y)
>>> newfunc(1)
4
>>> y = 20
>>> newfunc(1)
4
>>>
</code></pre>
<p>From <a href="https://www.python.org/dev/peps/pep-0008/#programming-recommendations" rel="nofollow">PEP8</a>:</p>
<blockquote>
<p>Always use a def statement instead of an assignment statement that
binds a lambda expression directly to an identifier.</p>
<p>Yes:</p>
<p>def f(x): return 2*x </p>
<p>No:</p>
<p>f = lambda x: 2*x </p>
<p>The first form means that the name of the resulting
function object is specifically 'f' instead of the generic <code><lambda></code>.
This is more useful for tracebacks and string representations in
general. The use of the assignment statement eliminates the sole
benefit a lambda expression can offer over an explicit def statement
(i.e. that it can be embedded inside a larger expression)</p>
</blockquote>
| 1 |
2016-09-21T01:10:56Z
|
[
"python",
"function",
"lambda"
] |
tkinter using two keys at the same time
| 39,606,019 |
<p>So tkinter can only use one key at a time. I am unable to, say, move to the left and up at the same time with this example. How would I go about doing it if I wanted to?</p>
<pre><code>import tkinter
root = tkinter.Tk()
root.title('test')
c= tkinter.Canvas(root, height=300, width=400)
c.pack()
body = c.create_oval(100, 150, 300, 250, fill='green')
def key(event):
OnKeyDown(event.char)
print(event.char)
def MoveLeft(event):
c.move(body, -10, 0)
def MoveRight(event):
c.move(body, 10, 0)
def MoveUp(event):
c.move(body, 0, 10)
def MoveDown(event):
c.move(body, 0, -10)
root.bind('<KeyPress-Left>', MoveLeft)
root.bind('<KeyPress-Right>', MoveRight)
root.bind('<KeyPress-Up>', MoveUp)
root.bind('<KeyPress-Down>', MoveDown)
</code></pre>
<p>Personally I would also prefer not to have to "bind" my keys to functions, as I would also like to use the keys to perform other actions (i.e. make it move faster if I hold Shift and Up at the same time). Can tkinter recognize when you pre-assign two keys or hold two keys at the same time?</p>
| 0 |
2016-09-21T01:08:31Z
| 39,609,454 |
<p>Like this :</p>
<pre><code>from Tkinter import *
root = Tk()
var = StringVar()
a_label = Label(root,textvariable = var ).pack()
history = []
def keyup(e):
print e.keycode
if e.keycode in history :
history.pop(history.index(e.keycode))
var.set(str(history))
def keydown(e):
if not e.keycode in history :
history.append(e.keycode)
var.set(str(history))
frame = Frame(root, width=200, height=200)
frame.bind("<KeyPress>", keydown)
frame.bind("<KeyRelease>", keyup)
frame.pack()
frame.focus_set()
root.mainloop()
</code></pre>
<h2>Don't forget about <code>toggle keys</code> (like Caps Lock); they can leave the tracked key status in a mixed-up state.</h2>
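<p>Applied to the question's canvas, a rough Python 3 sketch of the same idea (poll a set of currently pressed keysyms on a timer; exact key names and speeds are guesses, and key auto-repeat behaves slightly differently per platform):</p>
<pre><code>import tkinter

root = tkinter.Tk()
c = tkinter.Canvas(root, height=300, width=400)
c.pack()
body = c.create_oval(100, 150, 300, 250, fill='green')

pressed = set()

def keydown(e):
    pressed.add(e.keysym)

def keyup(e):
    pressed.discard(e.keysym)

def tick():
    dx = dy = 0
    if 'Left' in pressed:
        dx -= 10
    if 'Right' in pressed:
        dx += 10
    if 'Up' in pressed:
        dy -= 10  # canvas y grows downwards
    if 'Down' in pressed:
        dy += 10
    if 'Shift_L' in pressed or 'Shift_R' in pressed:
        dx, dy = dx * 2, dy * 2  # hold Shift to move faster
    c.move(body, dx, dy)
    root.after(50, tick)  # poll again in 50 ms

root.bind('<KeyPress>', keydown)
root.bind('<KeyRelease>', keyup)
tick()
root.mainloop()
</code></pre>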
| 1 |
2016-09-21T06:57:15Z
|
[
"python",
"tkinter",
"keypress"
] |
Selecting a random tuple from a dictionary
| 39,606,049 |
<p>Say I have this:</p>
<pre><code>d={'a':[(1,2),(3,4)],'b':[(9,2),(5,4)],'c':[(2,2),(7,7)]}
</code></pre>
<p>where d is a dictionary in python. I'd like to get random tuples out of this corresponding to a particular key using the <code>random.choice()</code> method.</p>
<p>This is what I'm doing and it's not working:</p>
<pre><code>random.choice(d['a'].values())
</code></pre>
| 0 |
2016-09-21T01:12:48Z
| 39,606,086 |
<p><code>d['a']</code> is already a list, so you don't need to call <code>.values()</code> on it.</p>
<pre><code>import random
d = {
'a': [(1, 2), (3, 4)],
'b': [(9, 2), (5, 4)],
'c': [(2, 2), (7, 7)],
}
print(random.choice(d['a']))
</code></pre>
| 2 |
2016-09-21T01:19:45Z
|
[
"python",
"dictionary",
"tuples"
] |
Selecting a random tuple from a dictionary
| 39,606,049 |
<p>Say I have this:</p>
<pre><code>d={'a':[(1,2),(3,4)],'b':[(9,2),(5,4)],'c':[(2,2),(7,7)]}
</code></pre>
<p>where d is a dictionary in python. I'd like to get random tuples out of this corresponding to a particular key using the <code>random.choice()</code> method.</p>
<p>This is what I'm doing and it's not working:</p>
<pre><code>random.choice(d['a'].values())
</code></pre>
| 0 |
2016-09-21T01:12:48Z
| 39,606,103 |
<p>If you're just trying to get a random tuple out of key you pick, you've written too much:</p>
<pre><code>random.choice(d['a'])
</code></pre>
<p>(Also NB: You'll need quotes around the keys in the dictionary. Right now you're using, e.g., the undefined variable <code>a</code> instead of the string <code>'a'</code>.)</p>
| 0 |
2016-09-21T01:21:43Z
|
[
"python",
"dictionary",
"tuples"
] |
why does my program return None in for loop?
| 39,606,059 |
<p>I have a function that should print the squares in the given interval:</p>
<pre><code>class Squares:
def __init__(self, min, max):
self.min = min
self.max = max
def __iter__(self):
return self
def __next__(self):
a_list = []
for i in range((self.max)+1):
a_list += [i**2]
if self.min <= self.max:
if self.min in a_list:
result = self.min
self.min += 1
return result
else:
self.min += 1
else:
raise StopIteration
import math
for i in Squares(5, 50):
print(i)
</code></pre>
<p>It should print 9, 16, 25, 49, but the output was:</p>
<pre><code>None
None
None
None
9
None
None
None
None
None
None
16
None
None
None
None
None
None
None
None
25
None
None
None
None
None
None
None
None
None
None
36
None
None
None
None
None
None
None
None
None
None
None
None
49
None
</code></pre>
<p>Why is this?</p>
| 0 |
2016-09-21T01:15:20Z
| 39,606,214 |
<p>The reason <em>None</em> is returned every time the variable result is not a perfect square is that <code>__next__()</code> returns <em>None</em> by default when no return statement is reached.</p>
<p>If you <em>must</em> use an iterator for this project, you have to structure your code so that a value is returned each pass.</p>
<p>Also, notice that each time next() is called, an entirely new array named a_list is generated, which is pretty inefficient. It would be much better to initialize that array once.</p>
<p>Check out the differences in this example.</p>
<pre><code>class Squares:
    def __init__(self, min, max):
        self.min = min
        self.max = max
    def __iter__(self):
        # build the list of squares once, instead of on every next() call
        self.a_list = []
        for i in range((self.max)+1):
            self.a_list += [i**2]
        self.iter_index = -1
        return self
    def __next__(self):                 # name this method next() on Python 2
        self.iter_index += 1
        # skip the squares that are below the lower bound
        while self.a_list[self.iter_index] < self.min:
            self.iter_index += 1
        if self.a_list[self.iter_index] > self.max:
            raise StopIteration
        else:
            return self.a_list[self.iter_index]

for i in Squares(5, 50):
    print(i)
</code></pre>
| 0 |
2016-09-21T01:37:05Z
|
[
"python",
"class",
"iterator",
"next"
] |
How to check for alphanumeric characters in a binary file
| 39,606,112 |
<p>I'm a beginner trying to write a program that will read in .exe files, .class files, or .pyc files and get the percentage of alphanumeric characters (a-z,A-Z,0-9). Here's what I have right now (I'm just trying to see if I can identify anything at the moment, not looking to count stuff yet):</p>
<pre><code>chars_total = 0
chars_alphnum = 0
iterate = 1
with open("pythonfile.pyc", "rb") as f:
byte = f.read(iterate)
while byte != b"":
chars_total += 1
print (byte)
iterate +=1
byte = f.read(iterate)
</code></pre>
<p>This code prints out various bytes such as </p>
<pre><code>b'\xe1WQ\x00'
b'\x00\x00c\x00\x00'
</code></pre>
<p>but I'm having trouble with translating the bytes themselves.</p>
<p>I've also tried <code>print (binascii.hexlify(byte))</code> after importing binascii which converts everything into alphanumeric characters, which seems to not quite be what I'm looking for. So am I just getting something severely mistaken or am I at least on the right track?</p>
<p>Full disclaimer, this is related in small part to a homework assignment, but we have permission to use this site because neither the in class material nor the reading covers any coding at all. And yes, I have been trying to figure this out before I came on here.</p>
| -3 |
2016-09-21T01:22:20Z
| 39,606,172 |
<p>Assuming you are reading from an arbitrary binary for which it might not be possible to decode it to ASCII/UTF-8, you could try something like the following</p>
<pre><code>import string
# create a set of the ascii code points for alphanumerics
alphanumeric_codes = {ord(c) for c in string.ascii_letters + string.digits}
file_bytes = b'...'
# on Python 3, iterating over bytes yields integer code points
alphanumerics = [b for b in file_bytes if b in alphanumeric_codes]
percent_alphanumerics = 100.0 * len(alphanumerics) / len(file_bytes)
</code></pre>
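<p>To apply this to a whole file, read its raw bytes first (a short sketch reusing the file name from the question):</p>
<pre><code>with open("pythonfile.pyc", "rb") as f:
    file_bytes = f.read()
</code></pre>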
| 0 |
2016-09-21T01:31:06Z
|
[
"python"
] |
How to check for alphanumeric characters in a binary file
| 39,606,112 |
<p>I'm a beginner trying to write a program that will read in .exe files, .class files, or .pyc files and get the percentage of alphanumeric characters (a-z,A-Z,0-9). Here's what I have right now (I'm just trying to see if I can identify anything at the moment, not looking to count stuff yet):</p>
<pre><code>chars_total = 0
chars_alphnum = 0
iterate = 1
with open("pythonfile.pyc", "rb") as f:
byte = f.read(iterate)
while byte != b"":
chars_total += 1
print (byte)
iterate +=1
byte = f.read(iterate)
</code></pre>
<p>This code prints out various bytes such as </p>
<pre><code>b'\xe1WQ\x00'
b'\x00\x00c\x00\x00'
</code></pre>
<p>but I'm having trouble with translating the bytes themselves.</p>
<p>I've also tried <code>print (binascii.hexlify(byte))</code> after importing binascii which converts everything into alphanumeric characters, which seems to not quite be what I'm looking for. So am I just getting something severely mistaken or am I at least on the right track?</p>
<p>Full disclaimer, this is related in small part to a homework assignment, but we have permission to use this site because neither the in class material nor the reading covers any coding at all. And yes, I have been trying to figure this out before I came on here.</p>
| -3 |
2016-09-21T01:22:20Z
| 39,606,216 |
<p>On Windows, you could use a simple PowerShell script to get the hexdump (take a look here <a href="http://windowsitpro.com/powershell/get-hex-dumps-files-powershell" rel="nofollow">http://windowsitpro.com/powershell/get-hex-dumps-files-powershell</a>) and then decode it to whatever encoding you want (ASCII, Unicode) with Python (take a look here <a href="https://docs.python.org/2/library/functions.html#chr" rel="nofollow">https://docs.python.org/2/library/functions.html#chr</a>), keeping only the alphanumeric characters.</p>
<p>On Linux, run <em>man hexdump</em> in a terminal.</p>
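<p>For example, once you have a byte value as an integer, a check along these lines tells you whether it is alphanumeric (the value 65 is just an illustration):</p>
<pre><code>value = 65                      # one byte from the dump, as an integer
print(chr(value).isalnum())     # True, since 65 is 'A'
</code></pre>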
| 0 |
2016-09-21T01:37:41Z
|
[
"python"
] |
Filter Multi-Dimensional Array
| 39,606,158 |
<p>I have an array (lists) which is NxK. However, I want to "filter" it after inputting some constraints based on values in Columns 4 and 6. This is the code I have so far.</p>
<pre><code>minmag = 5
maxmag = 7
mindist = 25
maxdist = 64
filter = np.zeros((1, 7), dtype='object')
add = np.zeros((1, 7), dtype='object')
k = 0
for i in range(0,len(lists)):
if lists[i, 4]>= minmag and lists [i, 4] <= maxmag and lists [i, 6]>=mindist and lists [i, 6]<= maxdist:
if k == 0:
for x in range(0,16):
filter[0, x] = lists[i, x]
k = 1
else:
for x in range(0, 16):
add[0, x] = lists[i, x]
filter = np.append(filter, add, axis=0)
</code></pre>
<p>It works, however it is not so neat. Just wondering if anyone has a better solution.</p>
| 0 |
2016-09-21T01:29:04Z
| 39,606,247 |
<p>Simplifying the most repetitive parts:</p>
<pre><code>if k == 0:
    # copy the whole row at once instead of element by element
    # (assumes `filter` has one slot per column being copied from `lists`)
    filter[0, :] = lists[i, :]
    k = 1
else:
    add[0, :] = lists[i, :]
    filter = np.append(filter, add, axis=0)
</code></pre>
<p>You could also combine your nested <code>if</code>s into a single one with the 4 conditions combined with <code>and</code>s.</p>
<p>I also believe (not seeing how <code>lists</code> is defined, I'm not sure) you can replace the outer loop with</p>
<pre><code>for row in lists:
</code></pre>
<p>and then use <code>row[x]</code> in place of <code>lists[i,x]</code></p>
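<p>If <code>lists</code> is a NumPy array, a boolean mask can replace the explicit loop entirely. This is only a sketch; it assumes the magnitude and distance really are in columns 4 and 6, as in the question:</p>
<pre><code>mask = ((lists[:, 4] >= minmag) & (lists[:, 4] <= maxmag) &
        (lists[:, 6] >= mindist) & (lists[:, 6] <= maxdist))
filtered = lists[mask]
</code></pre>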
| 0 |
2016-09-21T01:42:44Z
|
[
"python",
"arrays",
"filter"
] |
Python class: Employee management system
| 39,606,186 |
<p>This exercise assumes that you have created the Employee class for Programming Exercise 4.
Create a program that stores Employee objects in a dictionary. Use the employee ID number
as the key. The program should present a menu that lets the user perform the following actions:
⢠Look up an employee in the dictionary
⢠Add a new employee to the dictionary
⢠Change an existing employeeâs name, department, and job title in the dictionary
⢠Delete an employee from the dictionary
⢠Quit the program
When the program ends, it should pickle the dictionary and save it to a file. Each time the
program starts, it should try to load the pickled dictionary from the file. If the file does not
exist, the program should start with an empty dictionary.</p>
<p>Okay so here is my solution:-</p>
<pre><code>class Employee:
'ID, number, department, job title'
def __init__(self,ID,number,department,job_title):
self.ID = ID
self.number = number
self.department = department
self.job_title = job_title
# Mutators
def set_ID(self,ID):
self.ID = ID
def set_number(self,number):
self.number = number
def set_department(self,department):
self.department = department
def job_title(self,job_title):
self.job_title = job_title
#Accessor Methods
def get_ID(self):
return self.ID
def get_number(self):
return self.number
def get_department(self):
return self.department
def get_job_title(self):
return self.job_title
def get_data(self):
print self.ID, self.number,self.department,self.job_title
</code></pre>
<p>I saved the above as Employee.py in a folder. Then i started a new file and saved that as Employee Management System.py
Here is the code in that file</p>
<pre><code>import Employee
import pickle
filename = 'contacts.dat'
input_file = open(filename,'rb')
unpickle_input_file = pickle.load(input_file)
def test_system():
user = input('Press 1 to look up employee,\nPress 2 to add employee'
'\n3Press 3 to change an existing employee name, department and job title'
'\n4 Delete an employee from the dictionary'
'\n5 Quit the program'
'\nMake your choice ')
if user == 2:
ID = raw_input('Enter the name ')
number = input('Enter the number')
deparment = raw_input('Enter the department ')
job_title = raw_input('Enter the job title ')
entry = module from Employee??.class name(id,number,department,job_title)??
empty_dictionary = {}
empty_dictionary[number] = entry
input_file.close()
</code></pre>
<p>My first problem is that I am trying to use the class created in Employee.py, specifically its <code>__init__</code>, and add an entry with it. I know the above code is not in the most logical form, but I am trying to first see if I can add data and then pickle the file. Everything else would be easy to figure out later if I can understand how to do those two things.</p>
<p>It kind of reminds me of</p>
<pre><code>import math
x = math.pi(3.14)
x = module.function(3.14)
</code></pre>
<p>But i can't just seem to made the connection between the two examples.
Thank you</p>
| 0 |
2016-09-21T01:33:38Z
| 39,606,307 |
<p>What you're trying to do is to instantiate an <code>Employee</code> object with the given parameters. To do this, you just call the class name as if it were a function and pass in those parameters. In your case, the class name is <code>Employee</code> within the <code>Employee</code> module, so you would do this:</p>
<pre><code> entry = Employee.Employee(id, number, department, job_title)
</code></pre>
<p>This will create a new Employee object and call its <code>__init__</code> method with the parameters you passed in.</p>
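<p>For the second part of the exercise, you can then store that object in a dictionary keyed by the employee ID and pickle the dictionary when the program ends. A minimal sketch (the file name is just a placeholder):</p>
<pre><code>import pickle

employees = {}
employees[entry.get_ID()] = entry          # key the dictionary by the employee ID

with open('employees.dat', 'wb') as output_file:
    pickle.dump(employees, output_file)
</code></pre>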
| 1 |
2016-09-21T01:52:00Z
|
[
"python",
"class",
"object",
"pickle"
] |
Sending a invite into google calendar
| 39,606,198 |
<p>I want to develop an app or a simple script which will parse through lines, create an event out of them, and add an invite in Google Calendar. Is there some API which can speak with Google Calendar to add events to a calendar? I would prefer to write the code in Python.</p>
| -1 |
2016-09-21T01:35:16Z
| 39,606,281 |
<p>You can use the Google Calendar API:</p>
<p><a href="https://developers.google.com/google-apps/calendar/quickstart/python" rel="nofollow">https://developers.google.com/google-apps/calendar/quickstart/python</a></p>
<p><a href="https://developers.google.com/google-apps/calendar/create-events" rel="nofollow">https://developers.google.com/google-apps/calendar/create-events</a></p>
| 0 |
2016-09-21T01:48:14Z
|
[
"python",
"automation",
"google-calendar"
] |
How to use blist module without installing it?
| 39,606,225 |
<p>Without installing <code>blist</code>, I am trying to use blist_1.3.6 module by setting the environment variable <code>PYTHONPATH</code>. However I am still getting the error below. Is there any way to use this <code>_blist</code> without installing it? I can see <code>_blist.c</code> is C language file.</p>
<pre class="lang-none prettyprint-override"><code>File "/path/blist-1.3.6/blist/__init__.py", line 2, in <module>
from blist._blist import *
ImportError: No module named _blist
</code></pre>
| 0 |
2016-09-21T01:39:22Z
| 39,606,276 |
<p><code>_blist</code> is the module implemented by the object that results from compiling <code>_blist.c</code> and creating a shared library from it. You can't simply import <code>_blist.c</code> directly.</p>
| 1 |
2016-09-21T01:48:01Z
|
[
"python"
] |
PyCharm not respond to my change in JavaScript file
| 39,606,308 |
<p>I am developing a simple web application integrated with a MySQL database. I am using PyCharm to write Python, HTML, JavaScript, CSS. After I make a change to my JavaScript and run my application in Chrome, the Chrome console suggests that the change did not apply. I have already invalidated the PyCharm caches and restarted PyCharm, and it still doesn't work. Does anyone have an idea about this?</p>
<p>PS: if I rename the JavaScript file, it will work. But what is the reason for this problem? And how can I solve it without renaming?</p>
<p>Thanks in advance!</p>
| 0 |
2016-09-21T01:52:22Z
| 39,629,226 |
<p>Open the Chrome Developer Tools settings and disable the cache (note that this only takes effect while DevTools is open).</p>
<p>credit to @All is Vanity</p>
| 0 |
2016-09-22T02:18:45Z
|
[
"javascript",
"python",
"pycharm"
] |
What is the python equivalent of c++ const-reference copy-constructor arguments
| 39,606,315 |
<p>One of the great things of C++ is the usage of <code>const</code>-reference arguments - using this type of argument, you're pretty much guaranteed objects won't be accidentally modified, and there won't be side effects.</p>
<p>Question is, what'd be the Python equivalent to such arguments?</p>
<p>For instance, let's say you have this C++ method:</p>
<pre><code>void Foo::setPosition(const QPoint &position) {
m_position = position;
}
</code></pre>
<p>And you want to "translate" it to Python like this:</p>
<pre><code>def set_position(self, position):
self.position = position
</code></pre>
<p>Doing this will potentially yield a lot of trouble, and many subtle bugs could appear as well. Question is, what's the "Python equivalent" way of C++ methods which use const-reference arguments (copy constructor)?</p>
<p>Last time I caught a bug because I had a bad "C++ -> Python translation"; I fixed this with something like:</p>
<pre><code>my_python_instance.set_position(QPoint(pos))
</code></pre>
<p>⦠said otherwise, my other choice was to clone the object from the caller⦠which Iâm pretty much sure is not the right way to go.</p>
| 1 |
2016-09-21T01:53:19Z
| 39,606,527 |
<p>There is no direct equivalent. The closest is to use the <a href="https://docs.python.org/2/library/copy.html" rel="nofollow"><code>copy</code> module</a> and define the <code>__copy__()</code> and/or <code>__deepcopy__()</code> methods on your classes.</p>
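<p>For illustration, a minimal sketch (the <code>Point</code> class is just a stand-in):</p>
<pre><code>import copy

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __copy__(self):
        # called by copy.copy(); return an independent object
        return Point(self.x, self.y)

p = Point(1, 2)
q = copy.copy(p)     # q is a separate Point holding the same values
</code></pre>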
| 1 |
2016-09-21T02:22:39Z
|
[
"python",
"c++",
"language-lawyer",
"copy-constructor",
"const-correctness"
] |
What is the python equivalent of c++ const-reference copy-constructor arguments
| 39,606,315 |
<p>One of the great things of C++ is the usage of <code>const</code>-reference arguments - using this type of argument, you're pretty much guaranteed objects won't be accidentally modified, and there won't be side effects.</p>
<p>Question is, what'd be the Python equivalent to such arguments?</p>
<p>For instance, let's say you have this C++ method:</p>
<pre><code>void Foo::setPosition(const QPoint &position) {
m_position = position;
}
</code></pre>
<p>And you want to "translate" it to Python like this:</p>
<pre><code>def set_position(self, position):
self.position = position
</code></pre>
<p>Doing this will potentially yield a lot of trouble, and many subtle bugs could appear as well. Question is, what's the "Python equivalent" way of C++ methods which use const-reference arguments (copy constructor)?</p>
<p>Last time I caught a bug because I had a bad "C++ -> Python translation"; I fixed this with something like:</p>
<pre><code>my_python_instance.set_position(QPoint(pos))
</code></pre>
<p>⦠said otherwise, my other choice was to clone the object from the caller⦠which Iâm pretty much sure is not the right way to go.</p>
| 1 |
2016-09-21T01:53:19Z
| 39,606,678 |
<p>I hope I understood correctly.</p>
<p>In short, there is no direct equivalent. You are after two things you do not come across in Python very often: <code>const</code>ness and copy constructors.</p>
<p>It's a design decision; Python is a different language. Arguments are always passed by reference.</p>
<h3><code>const</code> correctness</h3>
<p>It's up to the user not to mess up the object, and in practice that doesn't happen very often. Personally I like const correctness in C++, but I have never caught myself missing it in Python. It is a dynamic scripting language, so there is no point in chasing the micro-optimizations that could be made under an argument-constness guarantee.</p>
<h3>Copying objects</h3>
<p>... is something you don't do too much in Python. In my opinion it's a design decision to offload that onto the user, because it's hard to come up with one good standard way, e.g. shallow copies vs. deep copies. I guess the assumption is that if you rarely need it, there is no point in providing a way for every object (as C++ does), but only for those which really do need it.</p>
<p>Therefore, there is no single unified pythonic way. There are at least a few ways to do it in the standard library:</p>
<ol>
<li>Slicing a list: <code>copied = original[:]</code>.</li>
<li>Some objects provide a <code>copy</code> method, like <code>dict</code>.</li>
<li>Some objects explicitly provide a copying constructor (like in C++); <code>dict</code> and <code>list</code> do, so you can write <code>copied = list(original)</code>.</li>
<li>There is a module <a href="https://docs.python.org/2/library/copy.html#module-copy" rel="nofollow"><code>copy</code></a>, for which you can provide the custom methods <code>__copy__</code> and <code>__deepcopy__</code>. It has the added advantage that another part of the standard library, <code>pickle</code> (serialization), works with it.</li>
</ol>
<p>The most C++-like way is option 3: implement the constructor so that, when called with an argument of the same type, it returns a copy of that argument. It may need a slightly crafty implementation, because you cannot overload on type.</p>
<p>Option 2 is the same idea, just refactored into a method.</p>
<p>So probably the best way is to provide explicit <code>__copy__</code>/<code>copy</code> methods, and, if you're nice, also support a constructor invocation that calls them.</p>
<p>You, as the developer of the object, can then ensure const correctness and give users an easy, explicit way to request a copy.</p>
| 2 |
2016-09-21T02:41:06Z
|
[
"python",
"c++",
"language-lawyer",
"copy-constructor",
"const-correctness"
] |
What is the python equivalent of c++ const-reference copy-constructor arguments
| 39,606,315 |
<p>One of the great things of C++ is the usage of <code>const</code>-reference arguments - using this type of argument, you're pretty much guaranteed objects won't be accidentally modified, and there won't be side effects.</p>
<p>Question is, what'd be the Python equivalent to such arguments?</p>
<p>For instance, let's say you have this C++ method:</p>
<pre><code>void Foo::setPosition(const QPoint &position) {
m_position = position;
}
</code></pre>
<p>And you want to "translate" it to Python like this:</p>
<pre><code>def set_position(self, position):
self.position = position
</code></pre>
<p>Doing this will potentially yield a lot of trouble, and many subtle bugs could appear as well. Question is, what's the "Python equivalent" way of C++ methods which use const-reference arguments (copy constructor)?</p>
<p>Last time I caught a bug because I had a bad "C++ -> Python translation"; I fixed this with something like:</p>
<pre><code>my_python_instance.set_position(QPoint(pos))
</code></pre>
<p>⦠said otherwise, my other choice was to clone the object from the caller⦠which Iâm pretty much sure is not the right way to go.</p>
| 1 |
2016-09-21T01:53:19Z
| 39,606,883 |
<p>Write a decorator on the function. Have it serialize the data on the way in, serialize it on the way out, and confirm the two are equal. If they are not, generate an error.</p>
<p>Python type checking is mostly runtime checks that arguments satisfy some predicate. <code>const</code> in C++ is a type check on the behaviour of the function: so you have to do a runtime check on the behaviour of the function to be equivalent.</p>
<p>You could also only do such checks when unit testing or in a debug build, "prove" it correct, then remove the checks in "release" mode.</p>
<p>Alternatively, you could write a static analyzer that checks for const violations using the inspect module, and decorating the immutability of arguments you lack source for, I suppose. It would probably be just as easy to write your own language variant that supports <code>const</code>, though. As in: nigh impossible.</p>
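<p>A minimal sketch of the decorator described above, using <code>pickle</code> as the serializer (the decorator name and the choice of serializer are just illustrative):</p>
<pre><code>import pickle
import functools

def assert_args_unmodified(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        before = pickle.dumps(args)          # snapshot of the arguments on the way in
        result = func(*args, **kwargs)
        if pickle.dumps(args) != before:     # snapshot on the way out
            raise AssertionError("%s modified one of its arguments" % func.__name__)
        return result
    return wrapper
</code></pre>
<p>This only works for picklable arguments, and equal objects can in principle pickle to different byte strings, so treat it as a debug-time heuristic rather than a guarantee.</p>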
| 0 |
2016-09-21T03:06:05Z
|
[
"python",
"c++",
"language-lawyer",
"copy-constructor",
"const-correctness"
] |
What is the python equivalent of c++ const-reference copy-constructor arguments
| 39,606,315 |
<p>One of the great things of C++ is the usage of <code>const</code>-reference arguments - using this type of argument, you're pretty much guaranteed objects won't be accidentally modified, and there won't be side effects.</p>
<p>Question is, what'd be the Python equivalent to such arguments?</p>
<p>For instance, let's say you have this C++ method:</p>
<pre><code>void Foo::setPosition(const QPoint &position) {
m_position = position;
}
</code></pre>
<p>And you want to "translate" it to Python like this:</p>
<pre><code>def set_position(self, position):
self.position = position
</code></pre>
<p>Doing this will potentially yield a lot of trouble, and many subtle bugs could appear as well. Question is, what's the "Python equivalent" way of C++ methods which use const-reference arguments (copy constructor)?</p>
<p>Last time I caught a bug because I had a bad "C++ -> Python translation"; I fixed this with something like:</p>
<pre><code>my_python_instance.set_position(QPoint(pos))
</code></pre>
<p>⦠said otherwise, my other choice was to clone the object from the caller⦠which Iâm pretty much sure is not the right way to go.</p>
| 1 |
2016-09-21T01:53:19Z
| 39,678,223 |
<p>What kind of object is <code>position</code>? There is very likely a way to copy it, so that <code>self</code> has a private copy of the position. For instance, if <code>position</code> is a <code>Point2D</code>, then either <code>self.position = Point2D(position)</code> or <code>self.position = Point2D( position.x, position.y )</code> is likely to work.</p>
<p>By the way, your C++ code might not be as safe as you think it is. If <code>m_position</code> is a <code>QPoint&</code>, then you are still vulnerable to somebody in the outside world modifying the <code>QPoint</code> that was passed in, after your function returns. (Passing a parameter as const does not guarantee that the referred-to object is const from the caller's point of view.)</p>
| 0 |
2016-09-24T15:53:15Z
|
[
"python",
"c++",
"language-lawyer",
"copy-constructor",
"const-correctness"
] |
How can I download just thumbnails using youtube-dl?
| 39,606,419 |
<p>I've been trying to download the thumbnails of a list of URL's (youtube videos) I have.</p>
<p>I've been using youtube-dl and I've worked it out to this so far:</p>
<pre><code> import os
with open('results.txt') as f:
for line in f:
os.system("youtube-dl " + "--write-thumbnail " + line)
</code></pre>
<p>Like this I'm able to download the thumbnails but I'm forced to downloading the youtube videos as well.</p>
<p>How can I just download the thumbnail?</p>
| 0 |
2016-09-21T02:07:30Z
| 39,607,541 |
<p>It looks like passing --list-thumbnails will return the url to the thumbnail images, but it will just output to the screen when calling os.system().</p>
<p>The following isn't the prettiest, but it's a quick working example of getting the output of youtube-dl into a string using subprocess, parsing it to get the url, and downloading with requests:</p>
<pre><code>import re
import requests
import subprocess
with open('results.txt') as f:
for line in f:
proc = subprocess.Popen(['youtube-dl', '--list-thumbnails', line], stdout=subprocess.PIPE)
youtubedl_output, err = proc.communicate()
imgurl = re.search("(?P<url>https?://[^\s]+)", youtubedl_output).group('url')
r = requests.get(imgurl)
if r.status_code == 200:
with open(imgurl.split('/')[4] + '.jpg', 'wb') as file:
for chunk in r.iter_content(1024):
file.write(chunk)
</code></pre>
<p>Hope this helped!</p>
| 0 |
2016-09-21T04:25:51Z
|
[
"python",
"youtube",
"youtube-dl"
] |
boto3 'str' object has no attribute 'get
| 39,606,503 |
<p>I am following the example at <a href="https://devcenter.heroku.com/articles/s3-upload-python" rel="nofollow">https://devcenter.heroku.com/articles/s3-upload-python</a> for uploading files directly to s3 from the client and am coming up with errors</p>
<p>views.api.sign_s3:</p>
<pre><code>def sign_s3(request):
S3_BUCKET = os.environ.get('S3_BUCKET')
file_name = request.GET.get('file_name',False)
file_type = request.GET.get('file_type',False)
s3 = boto3.client('s3')
presigned_post = s3.generate_presigned_post(
Bucket = S3_BUCKET,
Key = file_name,
Fields = {"acl": "public-read", "Content-Type": file_type},
Conditions = [
{"acl": "public-read"},
{"Content-Type": file_type}
],
ExpiresIn = 3600
)
return json.dumps({
'data': presigned_post,
'url': 'https://%s.s3.amazonaws.com/%s' % (S3_BUCKET, file_name)
})
</code></pre>
<p>settings.py:</p>
<pre><code>os.environ['S3_BUCKET'] = 'mybucketname'
os.environ['AWS_ACCESS_KEY_ID'] = 'myaccesskey'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'mysecretaccesskey'
</code></pre>
<p>html file</p>
<pre><code><input id="file_input" name="video_files" type="file">
<!-- other html omitted -->
<script>
(function() {
document.getElementById("file_input").onchange = function(){
var files = document.getElementById("file_input").files;
var file = files[0];
if(!file){
return alert("No file selected.");
}
$(files).each(function(i,file){
getSignedRequest(file);
});
};
})();
function getSignedRequest(file){
var xhr = new XMLHttpRequest();
console.log(file);
xhr.open("GET", "/api/sign_s3?file_name="+file.name+"&file_type="+file.type);
xhr.onreadystatechange = function(){
if(xhr.readyState === 4){
if(xhr.status === 200){
var response = JSON.parse(xhr.responseText);
uploadFile(file, response.data, response.url);
}else{
alert("Could not get signed URL.");
}
}
};
xhr.send();
}
function uploadFile(file, s3Data, url){
var xhr = new XMLHttpRequest();
xhr.open("POST", s3Data.url);
var postData = new FormData();
for(key in s3Data.fields){
postData.append(key, s3Data.fields[key]);
}
postData.append('file', file);
xhr.onreadystatechange = function() {
if(xhr.readyState === 4){
if(xhr.status === 200 || xhr.status === 204){
document.getElementById("preview").src = url;
document.getElementById("avatar-url").value = url;
}else{
alert("Could not upload file.");
}
}
};
xhr.send(postData);
}
</script>
</code></pre>
<p>Traceback:</p>
<pre><code>AttributeError at /api/sign_s3/
'str' object has no attribute 'get'
Request Method: GET
Request URL: http://127.0.0.1:1337/api/sign_s3/? file_name=jam13.mp4&file_type=video/mp4
Django Version: 1.8.5
Exception Type: AttributeError
Exception Value:
'str' object has no attribute 'get'
Exception Location: c:\Python27\lib\site-packages\django\middleware\clickjacking.py in process_response, line 31
Python Executable: c:\Python27\python.exe
Python Version: 2.7.3
</code></pre>
<p>...</p>
<pre><code>c:\Python27\lib\site-packages\django\middleware\clickjacking.py in process_response
clickjacking protection techniques should be used if protection in those
browsers is required.
https://en.wikipedia.org/wiki/Clickjacking#Server_and_client
"""
def process_response(self, request, response):
# Don't set it if it's already in the response
if response.get('X-Frame-Options', None) is not None: ...
return response
# Don't set it if they used @xframe_options_exempt
if getattr(response, 'xframe_options_exempt', False):
return response
</code></pre>
<p>anyone have any idea what's going wrong?</p>
| 0 |
2016-09-21T02:18:47Z
| 39,609,721 |
<p>I think the way you are returning the response from your view is causing the trouble. Django expects a view to return an <code>HttpResponse</code> object; returning the plain string produced by <code>json.dumps()</code> is what makes the middleware fail with <code>'str' object has no attribute 'get'</code>.</p>
<p>Try something like this - </p>
<pre><code>import json
from django.http import HttpResponse, JsonResponse
def sign_s3(request):
#Your View Code Here...
#Finally The Response (Using JsonResponse)...
json_object = {
'data': presigned_post,
'url': 'https://%s.s3.amazonaws.com/%s' % (S3_BUCKET, file_name)
}
return JsonResponse(json_object)
#Another Response Option (Using HttpResponse)
data = {
'data': presigned_post,
'url': 'https://%s.s3.amazonaws.com/%s' % (S3_BUCKET, file_name)
}
return HttpResponse(json.dumps(data), content_type = "application/json")
</code></pre>
| 0 |
2016-09-21T07:11:48Z
|
[
"python",
"django",
"heroku",
"amazon-s3",
"boto3"
] |
Using a generator to iterate over a large collection in Mongo
| 39,606,506 |
<p>I have a collection with 500K+ documents which is stored on a single node mongo. Every now and then my pymongo cursor.find() fails as it times out.</p>
<p>While I could set the <code>find</code> to ignore timeout, I do not like that approach. Instead, I tried a generator (adapted from <a href="http://stackoverflow.com/questions/9786736/how-to-read-through-collection-in-chunks-by-1000">this</a> answer and <a href="http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/" rel="nofollow">this</a> link):</p>
<pre><code>def mongo_iterator(self, cursor, limit=1000):
skip = 0
while True:
results = cursor.find({}).sort("signature", 1).skip(skip).limit(limit)
try:
results.next()
except StopIteration:
break
for result in results:
yield result
skip += limit
</code></pre>
<p>I then call this method using:</p>
<pre><code>ref_results_iter = self.mongo_iterator(cursor=latest_rents_refs, limit=50000)
for ref in ref_results_iter:
results_latest1.append(ref)
</code></pre>
<p>The problem:
My iterator does not return the same number of results. The issue is that next() advances the cursor. So for every call I lose one element...</p>
<p>The question:
Is there a way to adapt this code so that I can check if next exists? Pymongo 3x does not provide hasNext() and 'alive' check <a href="http://api.mongodb.com/python/current/api/pymongo/cursor.html" rel="nofollow">is not guaranteed</a> to return false.</p>
| 2 |
2016-09-21T02:19:13Z
| 39,606,547 |
<p>Why not use </p>
<pre><code>for result in results:
yield result
</code></pre>
<p>The for loop should handle <code>StopIteration</code> for you.</p>
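<p>Applied to the generator from the question, that could look like this (a sketch that keeps the original names and still pages with skip/limit):</p>
<pre><code>def mongo_iterator(self, cursor, limit=1000):
    skip = 0
    while True:
        results = cursor.find({}).sort("signature", 1).skip(skip).limit(limit)
        got_any = False
        for result in results:
            got_any = True
            yield result            # nothing gets consumed by a separate next() call
        if not got_any:
            break
        skip += limit
</code></pre>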
| 0 |
2016-09-21T02:25:38Z
|
[
"python",
"mongodb",
"pymongo",
"pymongo-3.x"
] |
Using a generator to iterate over a large collection in Mongo
| 39,606,506 |
<p>I have a collection with 500K+ documents which is stored on a single node mongo. Every now and then my pymongo cursor.find() fails as it times out.</p>
<p>While I could set the <code>find</code> to ignore timeout, I do not like that approach. Instead, I tried a generator (adapted from <a href="http://stackoverflow.com/questions/9786736/how-to-read-through-collection-in-chunks-by-1000">this</a> answer and <a href="http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/" rel="nofollow">this</a> link):</p>
<pre><code>def mongo_iterator(self, cursor, limit=1000):
skip = 0
while True:
results = cursor.find({}).sort("signature", 1).skip(skip).limit(limit)
try:
results.next()
except StopIteration:
break
for result in results:
yield result
skip += limit
</code></pre>
<p>I then call this method using:</p>
<pre><code>ref_results_iter = self.mongo_iterator(cursor=latest_rents_refs, limit=50000)
for ref in ref_results_iter:
results_latest1.append(ref)
</code></pre>
<p>The problem:
My iterator does not return the same number of results. The issue is that next() advances the cursor. So for every call I lose one element...</p>
<p>The question:
Is there a way to adapt this code so that I can check if next exists? Pymongo 3x does not provide hasNext() and 'alive' check <a href="http://api.mongodb.com/python/current/api/pymongo/cursor.html" rel="nofollow">is not guaranteed</a> to return false.</p>
| 2 |
2016-09-21T02:19:13Z
| 39,610,669 |
<p>The <code>.find()</code> method takes additional keyword arguments. One of them is <code>no_cursor_timeout</code> which you need to set to <code>True</code></p>
<pre><code>cursor = collection.find({}, no_cursor_timeout=True)
</code></pre>
<p>You don't need to write your own generator function. The <code>find()</code> method already returns a lazily evaluated cursor that you can iterate over directly. Just remember to close the cursor (or iterate it to exhaustion) when you are done, since with <code>no_cursor_timeout=True</code> the server will no longer time it out for you.</p>
| 0 |
2016-09-21T07:58:19Z
|
[
"python",
"mongodb",
"pymongo",
"pymongo-3.x"
] |
sqlalchemy / pandas - How do I create an sqlalchemy `selectable` to pass to pd.read_sql?
| 39,606,534 |
<p>I'm completely new to sqlalchemy, and I've been trying to better understand how <code>pd.read_sql</code> can be used.</p>
<p>I've successfully run the following:</p>
<pre><code>import sqlalchemy as sa
import pandas as pd
df = pd.DataFrame( index=range(10,30), data=np.random.rand(20, 10) )
eng = sa.create_engine('sqlite:///test.db')
df.reset_index().to_sql('test_table', eng, index=False)
df2 = pd.read_sql( 'test_table' , eng ) # Don't understand this function
</code></pre>
<p>I've figured out I can just load whatever I stored in the table by passing the table name as the first argument of <code>pd.read_sql</code> but what if I wanted to load only the elements for which column <code>index</code> is greater than some number.</p>
<p><strong>Question</strong></p>
<p>How do I create an sqlalchemy <code>selectable</code> for the first argument of <code>pd.read_sql</code> to only load a subset of the database/table?</p>
<p><em>Comment:</em>
In this case I know that this is trivial to do after I've loaded df2 but if the size of the db is very large I'd like to avoid having to load the entire db in memory first.</p>
| 1 |
2016-09-21T02:23:49Z
| 39,614,914 |
<p>Getting the table in 3 different ways. (Not sure what the differences are).</p>
<p>then using the <code>.select()</code> method on that table and the <code>.where</code> on that result gets what I want.</p>
<pre><code>import sqlalchemy as sa
import pandas as pd
eng = sa.create_engine('sqlite:///test.db')
# First way to load table
m = sa.MetaData()
m.reflect(bind=eng)
t1 = m.tables['test_table']
# Second way
m2 = sa.MetaData(bind=eng)
t2 = sa.Table('test_table', m2, autoload=True)
# Third way
t3 = sa.Table('test_table', sa.MetaData(), autoload_with=eng)
# I can then use either of the t's to do the following
df3 = pd.read_sql(t1.select().where(t1.c.index > 15), eng)
</code></pre>
| 0 |
2016-09-21T11:10:39Z
|
[
"python",
"pandas",
"sqlalchemy"
] |
PySpark DataFrame - Join on multiple columns dynamically
| 39,606,589 |
<p>let's say I have two DataFrames on Spark</p>
<pre><code>firstdf = sqlContext.createDataFrame([{'firstdf-id':1,'firstdf-column1':2,'firstdf-column2':3,'firstdf-column3':4}, \
{'firstdf-id':2,'firstdf-column1':3,'firstdf-column2':4,'firstdf-column3':5}])
seconddf = sqlContext.createDataFrame([{'seconddf-id':1,'seconddf-column1':2,'seconddf-column2':4,'seconddf-column3':5}, \
{'seconddf-id':2,'seconddf-column1':6,'seconddf-column2':7,'seconddf-column3':8}])
</code></pre>
<p>Now I want to join them by multiple columns (any number bigger than one)</p>
<p>What I have is an array of columns of the first DataFrame and an array of columns of the second DataFrame, these arrays have the same size, and I want to join by the columns specified in these arrays. For example:</p>
<pre><code>columnsFirstDf = ['firstdf-id', 'firstdf-column1']
columnsSecondDf = ['seconddf-id', 'seconddf-column1']
</code></pre>
<p>Since these arrays have variable sizes I can't use this kind of approach:</p>
<pre><code>from pyspark.sql.functions import *
firstdf.join(seconddf, \
(col(columnsFirstDf[0]) == col(columnsSecondDf[0])) &
(col(columnsFirstDf[1]) == col(columnsSecondDf[1])), \
'inner'
)
</code></pre>
<p>Is there any way that I can join on multiple columns dynamically?</p>
| 1 |
2016-09-21T02:29:55Z
| 39,615,339 |
<p>Why not use a simple comprehension:</p>
<pre><code>firstdf.join(
seconddf,
[col(f) == col(s) for (f, s) in zip(columnsFirstDf, columnsSecondDf)],
"inner"
)
</code></pre>
<p>Since the conditions are combined with a logical AND, it is enough to provide a list of conditions without the <code>&</code> operator.</p>
| 1 |
2016-09-21T11:28:54Z
|
[
"python",
"apache-spark",
"pyspark",
"apache-spark-sql",
"spark-dataframe"
] |
Python: Append a list to another list and Clear the first list
| 39,606,601 |
<p>So this just blew my mind. I am working on a python code where I create a list, append it to a master list and clear the first list to add some more elements to it. When I clear the first list, even the master list gets cleared. I worked on a lot of list appends and clears but never observed this.</p>
<pre><code>list1 = []
list2 = []
list1 = [1,2,3]
list2.append(list1)
list1
[1, 2, 3]
list2
[[1, 2, 3]]
del list1[:]
list1
[]
list2
[[]]
</code></pre>
<p>I know this happens with appending dictionaries but I did not know how to deal with lists. Can someone please elaborate?</p>
<p>I am using Python2.7</p>
| 1 |
2016-09-21T02:31:36Z
| 39,606,650 |
<p>Passing a <code>list</code> to a method like <code>append</code> is just passing a <em>reference</em> to the same <code>list</code> referred to by <code>list1</code>, so that's what gets appended to <code>list2</code>. They're still the same <code>list</code>, just referenced from two different places.</p>
<p>If you want to cut the tie between them, either:</p>
<ol>
<li>Insert a copy of <code>list1</code>, not <code>list1</code> itself, e.g. <code>list2.append(list1[:])</code>, or</li>
<li>Replace <code>list1</code> with a fresh <code>list</code> after <code>append</code>ing instead of clearing in place, changing <code>del list1[:]</code> to <code>list1 = []</code></li>
</ol>
<p>Note: It's a little unclear, but if you want the <em>contents</em> of <code>list1</code> to be added to <code>list2</code> (so <code>list2</code> should become <code>[1, 2, 3]</code> not <code>[[1, 2, 3]]</code> with the values in the nested <code>list</code>), you'd want to call <code>list2.extend(list1)</code>, not <code>append</code>, and in that case, no shallow copies are needed; the values from <code>list1</code> at that time would be individually <code>append</code>ed, and no further tie would exist between <code>list1</code> and <code>list2</code> (since the values are immutable <code>int</code>s; if they were mutable, say, nested <code>list</code>s, <code>dict</code>s, etc., you'd need to copy them to completely sever the tie, e.g. with <a href="https://docs.python.org/3/library/copy.html#copy.deepcopy" rel="nofollow"><code>copy.deepcopy</code></a> for complex nested structure).</p>
| 4 |
2016-09-21T02:36:47Z
|
[
"python",
"list"
] |
Python: Append a list to another list and Clear the first list
| 39,606,601 |
<p>So this just blew my mind. I am working on a python code where I create a list, append it to a master list and clear the first list to add some more elements to it. When I clear the first list, even the master list gets cleared. I worked on a lot of list appends and clears but never observed this.</p>
<pre><code>list1 = []
list2 = []
list1 = [1,2,3]
list2.append(list1)
list1
[1, 2, 3]
list2
[[1, 2, 3]]
del list1[:]
list1
[]
list2
[[]]
</code></pre>
<p>I know this happens with appending dictionaries but I did not know how to deal with lists. Can someone please elaborate?</p>
<p>I am using Python2.7</p>
| 1 |
2016-09-21T02:31:36Z
| 39,606,851 |
<p>So basically here is what the code is doing:</p>
<h3>Before delete</h3>
<p><a href="http://i.stack.imgur.com/T41Sv.png" rel="nofollow"><img src="http://i.stack.imgur.com/T41Sv.png" alt="enter image description here"></a></p>
<h3>After deleting</h3>
<p><a href="http://i.stack.imgur.com/uuiDI.png" rel="nofollow"><img src="http://i.stack.imgur.com/uuiDI.png" alt="enter image description here"></a></p>
<p>In short, both lists names are pointing to the same list object.</p>
<p><a href="http://www.pythontutor.com/visualize.html#code=list1%20%3D%20%5B%5D%0Alist2%20%3D%20%5B%5D%0Alist1%20%3D%20%5B1,2,3%5D%0Alist2.append(list1%29%0Adel%20list1%5B%3A%5D&cumulative=false&curInstr=4&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=2&rawInputLstJSON=%5B%5D&textReferences=false" rel="nofollow">code visualization source</a></p>
| 2 |
2016-09-21T03:02:26Z
|
[
"python",
"list"
] |
More succinct initialization for SQLAlchemy instance
| 39,606,690 |
<p>It's my first attempt at sqlalchemy. I have a json file with my usr information and I would like to put them in a sqlite3 database file. It works but I find the instance initialization verbose since there are many columns in the table, as you can see below.</p>
<p>Is it possible to use a dictionary as input to initialize <code>User()</code>? Something like <code>a = User(usr)</code>?</p>
<pre><code>import json
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite:///tutorial.db', echo=True)
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
bbs_id = Column(String)
name = Column(String)
sex = Column(String, nullable=False)
city = Column(String)
state = Column(String)
class_type = Column(String, nullable=False)
class_id = Column(String, nullable=False)
latitude = Column(Float)
longitude = Column(Float)
def __repr__(self):
return "<User(bbs_id='%s', name='%s'>" % (self.bbs_id, self.name)
Base.metadata.create_all(engine)
with open('mydata.json') as fin:
usrs = json.load(fin)
usr = usrs[0]
a = User(id=usr['id'], bbs_id=usr['bbs_id'], name=usr['name'])
</code></pre>
| 1 |
2016-09-21T02:42:44Z
| 39,606,812 |
<p>If you know the property names in the JSON object match the column names of the Python model, you can just change:</p>
<pre><code>a = User(id=usr['id'], bbs_id=usr['bbs_id'], name=usr['name'])
</code></pre>
<p>to:</p>
<pre><code>a = User(**usr)
</code></pre>
<p><a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow">Double-star/<code>dict</code> unpacking</a> passes each key/value pair of the <code>dict</code> as if it were an argument being passed by keyword. Since you didn't override <code>__init__</code> for your model, it already allows and expects the arguments to be optional and passed by keyword, so this lines up perfectly.</p>
| 2 |
2016-09-21T02:57:57Z
|
[
"python",
"sqlalchemy"
] |
How to convert video to spatio-temporal volumes in python
| 39,606,892 |
<p>I am doing my project in video analytics. I have to densely sample video. Sampling means converting video to spatio-temporal video volumes. I am using the python language. How can I do that in python? Is that option available in opencv or any other package? Input video sequence and desired output is shown <img src="http://i.stack.imgur.com/BXJxR.jpg" alt="here"></p>
| 0 |
2016-09-21T03:07:30Z
| 39,607,997 |
<p>Read the video file using</p>
<pre><code>cap = cv2.VideoCapture(fileName)
</code></pre>
<p>Go over each frame:</p>
<pre><code>while(cap.isOpened()):
    # Read frame
    ret, frame = cap.read()
    if not ret:
        break   # stop once the video ends
</code></pre>
<p>If you want you can just insert every frame to a 3D matrix to get the spatio-temporal matrix that you wanted</p>
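<p>One way to build that volume, assuming every frame has the same size (a sketch):</p>
<pre><code>import cv2
import numpy as np

cap = cv2.VideoCapture(fileName)
frames = []
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()

# spatio-temporal volume with shape (time, height, width, channels)
volume = np.stack(frames)
</code></pre>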
| 1 |
2016-09-21T05:09:37Z
|
[
"python",
"opencv",
"video",
"image-processing",
"video-processing"
] |
Exception in making a REST call
| 39,606,976 |
<p>I am currently working on a REST call using the Flask framework, and I have run into some errors along the way which I am unable to figure out (still trying though). The error is as shown below:</p>
<pre><code>[2016-09-20 18:53:26,486] ERROR in app: Exception on /Recommend [GET]
Traceback (most recent call last):
File "/anaconda/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/anaconda/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/anaconda/lib/python2.7/site-packages/flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/anaconda/lib/python2.7/site-packages/flask_api/app.py", line 97, in handle_user_exception
for typecheck, handler in chain(blueprint_handlers.items(), app_handlers.items()):
AttributeError: 'tuple' object has no attribute 'items'
127.0.0.1 - - [20/Sep/2016 18:53:26] "GET /Recommend HTTP/1.1" 500 -
</code></pre>
<p>here is the code that I have built:</p>
<pre><code>from flask import request, jsonify
from flask_api import FlaskAPI
from flask_cors import CORS
from sets import Set
from collections import defaultdict
import itertools
import copy
app = FlaskAPI(__name__)
CORS(app)
content = None
class Apriori:
def __init__(self):
self.no_of_transactions = None
self.min_support = 0.5
self.min_confidence = 0.75
self.transactions = {}
self.set_of_items = set()
self.frequencies = {}
self.frequent_itemsets_of_order_n = {}
self.association_rules = {}
def createPowerSet(self,s):
powerset = set()
for i in xrange(2**len(s)):
subset = tuple([x for j,x in enumerate(s) if (i >> j) & 1])
if len(subset) == 0:
pass
elif len(subset) == 1:
powerset.add(subset[0])
else:
powerset.add(subset)
return powerset
def createFrequentItemSets(self,set_of_items,len):
frequent_itemsets = set(itertools.combinations(set_of_items, len))
for i in list(frequent_itemsets):
tempset = set(i)
self.frequencies[i] = 0
for k, v in self.transactions.iteritems():
if tempset.issubset(set(v)):
self.frequencies[i] += 1
if float(self.frequencies[i])/self.no_of_transactions < self.min_support:
frequent_itemsets.discard(i)
return frequent_itemsets
def mineAssociationRules(self,frequent_itemset):
s = set(frequent_itemset)
subs = list(self.createPowerSet(s))
for each in subs:
if sorted(tuple(set(each))) == sorted(tuple(s)):
continue
if len(set(each))==1:
antecedent = list(set(each))[0]
elif len(set(each))>1:
antecedent = tuple(set(each))
if len(s.difference(set(each)))==1:
consequent = list(s.difference(set(each)))[0]
elif len(s.difference(set(each)))>1:
consequent = tuple(s.difference(set(each)))
AuC = tuple(s)
if float(self.frequencies[AuC])/self.frequencies[antecedent] >= self.min_confidence:
if antecedent in self.association_rules:
pass
else:
if type(antecedent) is tuple:
antecedent = (",").join(antecedent)
if type(consequent) is tuple:
consequent = (",").join(consequent)
self.association_rules[antecedent] = consequent
def implement(self,transactions):
#for i in range(0,self.no_of_transactions):
for i in range(0,len(transactions)):
self.transactions["T"+str(i)] = defaultdict(list)
self.transactions["T"+str(i)] = transactions[i].split(',')
self.set_of_items = self.set_of_items.union(Set(self.transactions["T"+str(i)]))
for i in list(self.set_of_items):
self.frequencies[i] = 0
for k, v in self.transactions.iteritems():
if i in v:
self.frequencies[i] = self.frequencies[i] + 1
if float(self.frequencies[i])/self.no_of_transactions < self.min_support:
self.set_of_items.discard(i)
self.frequent_itemsets_of_order_n[1] = self.set_of_items
l = 1
reps = copy.deepcopy(self.set_of_items)
while True:
l += 1
result = self.createFrequentItemSets(self.set_of_items, l)
if len(result) == 0:
break
self.frequent_itemsets_of_order_n[l] = result
reps = copy.deepcopy(self.frequent_itemsets_of_order_n[l])
l = l-1
while l>2:
for each in self.frequent_itemsets_of_order_n[l]:
self.mineAssociationRules(each)
l = l-1
@app.route('/Recommend')
def FindAssociations():
transactions = ["A,C,D,F,G","A,B,C,D,F","C,D,E","A,D,F","A,C,D,E,F","B,C,D,E,F,G"]
apr = Apriori()
apr.implement(transactions)
return jsonify(rules=apr.association_rules)
if __name__ == "__main__":
app.run(port=5000)
</code></pre>
<p>I did run some sample code found on the web, and built the above script based on those scripts. They worked out well. The class I built was based on another python program that I had built earlier which worked well. Should I have imported the class from another script instead of building it here?</p>
| -1 |
2016-09-21T03:18:52Z
| 39,608,108 |
<p>There are a number of errors in your <code>Apriori</code> which need to fixed. There are a few attempts to divide by <code>self.no_of_transactions</code> (e.g. line 87) which is initialised to <code>None</code> and never changed. Dividing by <code>None</code> raises an exception:</p>
<pre><code>>>> 1/None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /: 'int' and 'NoneType'
</code></pre>
<p>This exception is then handled in Flask-API's <code>handle_user_exception()</code> method, which also appears to have a bug as shown in the exception it raises.</p>
<p>The way to fix it is to correct your code so that it does not divide by <code>None</code>.</p>
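<p>For example, one straightforward fix (a sketch) is to set the count at the start of <code>implement()</code>, since everything else in that method already works from the <code>transactions</code> list:</p>
<pre><code>def implement(self, transactions):
    self.no_of_transactions = len(transactions)   # was never set, so the divisions hit None
    # ... rest of the method unchanged ...
</code></pre>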
| 0 |
2016-09-21T05:19:54Z
|
[
"python",
"flask"
] |
Embed multiple matplotlib figures in wxPython
| 39,607,157 |
<p>I have a 2x2 <code>FlexGridSizer</code> in a panel and I want to insert four different matplotlib figures with their own toolbars at the same time.</p>
<p>I have seen many links related and working examples embedding one figure, but as I am a begginer with wxPython and OOP I get quite confuse when testing some codes and trying to merge them with mine.</p>
<p>Here is a piece of the page class of a <code>wx.Notebook</code> where I want to put the figures</p>
<pre><code>class Pagina(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.boton_descargar = wx.Button(self, -1, label=u"Descargar")
self.boton_desconectar = wx.Button(self, -1, label=u"Desconectar")
sizer = wx.BoxSizer(wx.VERTICAL)
subsizer1 = wx.BoxSizer(wx.HORIZONTAL)
subsizer2 = wx.FlexGridSizer(rows=2, cols=2)
sizer.Add(subsizer1, 0, wx.EXPAND | wx.ALL, 0)
sizer.Add(subsizer2, 1, wx.EXPAND | wx.ALL, 0)
sizer.Add(self.boton_desconectar, 0, wx.ALIGN_RIGHT | wx.RIGHT, 10)
subsizer1.Add(self.boton_descargar, 0, wx.ALL, 10)
self.Bind(wx.EVT_BUTTON, self.onClick_descargar, self.boton_descargar)
self.Bind(wx.EVT_BUTTON, self.onClick_desconectar, self.boton_desconectar)
self.SetSizer(sizer)
def onClick_descargar(self, event):
HiloDescarga()
def onClick_desconectar(self, event):
pass
</code></pre>
<p><code>HiloDescarga</code> is actually a thread launched to download some text lines, process the data and plot it this way (the fourth figure is the same thing):</p>
<pre><code>import matplotlib.pyplot as plt
line, = plt.plot(range(len(x)), x, '-', linewidth=1)
line, = plt.plot(range(len(x)), f, 'y-', linewidth=1)
plt.xlabel('x')
plt.ylabel('y')
plt.title(r'title1')
plt.grid()
plt.figure()
line, = plt.plot(range(len(y)), y, 'r-', linewidth=1)
line, = plt.plot(range(len(y)), g, 'y-', linewidth=1)
plt.xlabel('x')
plt.ylabel('y')
plt.title(r'title2')
plt.grid()
plt.figure()
line, = plt.plot(range(len(z)), z, 'g-', linewidth=1)
line, = plt.plot(range(len(z)), h, 'y-', linewidth=1)
plt.xlabel('x')
plt.ylabel('y')
plt.title(r'title3')
plt.grid()
plt.show()
</code></pre>
<p>so the figures are just popping up in separate windows. If you could give me a snippet or at least some orientation, perhaps a few changes to the plotting code, I don't know. Any help is welcome.</p>
| 0 |
2016-09-21T03:41:25Z
| 39,613,369 |
<p>You just have to follow the examples that embed one Figure, but instantiate one member per figure you want to create.
Here is a quick example:</p>
<pre><code>import matplotlib
matplotlib.use('WXAgg')
from matplotlib.figure import Figure
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
import wx
class MyFrame(wx.Frame):
def __init__(self, parent, nbFigures=4):
super(MyFrame, self).__init__(parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.figs = [Figure(figsize=(2,1)) for _ in range(nbFigures)]
self.axes = [fig.add_subplot(111) for fig in self.figs]
self.canvases = [FigureCanvas(self, -1, fig) for fig in self.figs]
for canvas in self.canvases:
sizer.Add(canvas, 0, wx.ALL, 5)
self.SetSizer(sizer)
app = wx.App()
frm = MyFrame(None)
frm.Show()
app.MainLoop()
</code></pre>
<p><a href="http://i.stack.imgur.com/Z3v0L.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z3v0L.png" alt="enter image description here"></a></p>
<p>I've saved the instances of each figure in the list <code>self.figs</code> and the axes instances of each figure in <code>self.axes</code></p>
<p>Hope that puts you on the right track</p>
| 1 |
2016-09-21T10:01:24Z
|
[
"python",
"matplotlib",
"wxpython"
] |
Python subprocess.Popen poll seems to hang but communicate works
| 39,607,172 |
<pre><code>child = subprocess.Popen(command,
shell=True,
env=environment,
close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
stdin=sys.stdin,
preexec_fn=os.setsid
)
child_interrupted = False
while child.poll() is None:
if Signal.isInterrupted():
child_interrupted = True
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
break
time.sleep(0.1)
subout = child.communicate()[0]
logging.info(subout)
</code></pre>
<p>the above works for most commands it executes (90%) but for some commands it hangs</p>
<p>for those commands that repeatedly hang, if i get rid of the below, it works fine: </p>
<pre><code> child_interrupted = False
while child.poll() is None:
if Signal.isInterrupted():
child_interrupted = True
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
break
time.sleep(0.1)
</code></pre>
<p>im assuming for those hanging commands, <code>child.poll() is None</code> even though the job is finished??</p>
<p>communicate() can tell the process is finished but poll() cant?</p>
<p>i've executed <code>ps -ef</code> on those processes<br>
and they are defunct only when <code>child.poll()</code> code is in place<br>
any idea why?</p>
<p>it looks like defunct means "That's a zombie process, it's finished but the parent hasn't wait()ed for it yet."
well, im polling to see if i can call wait/communicate...</p>
| 0 |
2016-09-21T03:42:50Z
| 39,607,358 |
<p>You've set the <code>Popen</code> object to receive the subprocess's <code>stdout</code> via pipe. Problem is, you're not reading from that pipe until the process exits. If the process produces enough output to fill the OS level pipe buffers, and you don't drain the pipe, then you're deadlocked; the subprocess wants you to read the output its writing so it can continue to write, then exit, while you're waiting for it to exit before you'll read the output.</p>
<p>If your explicit poll and interrupt checking is necessary, the easiest solution to this deadlock is probably to launch a thread that drains the pipe:</p>
<pre><code>... launch the thread just after Popen called ...
import threading   # needed for the drainer thread below

draineddata = []
# Trivial thread just reads lines from stdout into the list
drainerthread = threading.Thread(target=draineddata.extend, args=(child.stdout,))
drainerthread.daemon = True
drainerthread.start()
... then where you had been doing communicate, change it to: ...
child.wait()
drainerthread.join()
subout = b''.join(draineddata) # Combine the data read back to a single output
</code></pre>
| 1 |
2016-09-21T04:03:49Z
|
[
"python",
"subprocess",
"popen"
] |
Conditional Help in Python
| 39,607,266 |
<p>I needed some help in understanding the following code:</p>
<pre><code>nextState = ((nextx, nexty), self.corners) if self.currentPosition not in self.corners else ((nextx, nexty), tuple([i for i in self.corners if i != self.currentPosition]))
successors.append((nextState, action, 1))
</code></pre>
<p>Can someone show me what this conditional statement would look like in multiple lines rather than just a one line if statement?</p>
| 0 |
2016-09-21T03:54:23Z
| 39,607,396 |
<p>As an <code>if</code> statement, this would be:</p>
<pre><code>if self.currentPosition not in self.corners:
nextState = ((nextx, nexty), self.corners)
else:
nextState = ((nextx, nexty), tuple([i for i in self.corners if i != self.currentPosition]))
successors.append((nextState, action, 1))
</code></pre>
<p>This is essentially Python's form of a <a href="https://docs.python.org/3/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator" rel="nofollow">ternary operator</a>.</p>
| 0 |
2016-09-21T04:07:22Z
|
[
"python"
] |
Stuck in a django migration IntegrityError loop: can I delete those migrations that aren't yet in the db?
| 39,607,359 |
<p>So, I committed and pushed all my code, and then deployed my web application successfully. Then, I added a new model to my 'home' app, which (for a reason I now understand, but doesn't matter here), created an <code>IntegrityError</code> (<code>django.db.utils.IntegrityError: insert or update on table "foo" violates foreign key constraint "bar"</code>). I ran <code>python manage.py makemigrations</code>, <code>python manage.py migrate</code>, which causes the the <code>IntegrityError</code>.</p>
<p>However, even if I remove all of my new model code (so that git status comes up with nothing), the IntegrityError still happens. If I connect to my db via a different python instance and run <code>select * from django_migrations;</code>, the latest db migration there, <strong>0020</strong>, is eight migrations behind my latest local <code>home/migrations</code> migration file, <strong>0028</strong>.</p>
<p>--> My question is: is it safe for me to delete my local <strong>0021-0028</strong> migration files? Will this fix my problem?</p>
| 0 |
2016-09-21T04:03:49Z
| 39,607,668 |
<p>OK, so I crossed my fingers, backed up my local 0021-0028 migration files, and then deleted them. <strong>It worked</strong>. I think the key is that the migration files were not yet in the database, but I'm not 100% sure. +1 if anyone can answer further for clarification.</p>
| 0 |
2016-09-21T04:39:03Z
|
[
"python",
"django"
] |
Stuck in a django migration IntegrityError loop: can I delete those migrations that aren't yet in the db?
| 39,607,359 |
<p>So, I committed and pushed all my code, and then deployed my web application successfully. Then, I added a new model to my 'home' app, which (for a reason I now understand, but which doesn't matter here) created an <code>IntegrityError</code> (<code>django.db.utils.IntegrityError: insert or update on table "foo" violates foreign key constraint "bar"</code>). I ran <code>python manage.py makemigrations</code> and <code>python manage.py migrate</code>, which causes the <code>IntegrityError</code>.</p>
<p>However, even if I remove all of my new model code (so that git status comes up with nothing), the IntegrityError still happens. If I connect to my db via a different python instance and run <code>select * from django_migrations;</code>, the latest db migration there, <strong>0020</strong>, is eight migrations behind my latest local <code>home/migrations</code> migration file, <strong>0028</strong>.</p>
<p>--> My question is: is it safe for me to delete my local <strong>0021-0028</strong> migration files? Will this fix my problem?</p>
| 0 |
2016-09-21T04:03:49Z
| 39,608,279 |
<p>If you haven't applied your migrations to the db, it is safe to delete them and recreate them. </p>
<p>Possible reasons why you run into this error are:</p>
<ol>
<li>You deleted your model code, but when you run <code>migrate</code> it reads your migration files (which still contain information about your deleted model) and tries to apply their operations. If you didn't run the <code>makemigrations</code> command after you deleted your model, the migration system won't be able to detect your changes and will think that your model is still there. </li>
<li>Even if you've run <code>makemigrations</code> after you deleted your model, there'll be dependency issues in your migration files, because the new migration files will depend on the old ones (with which you had problems).</li>
</ol>
<p>That's why we can say that it is safe to delete them if they haven't been applied, but at the same time you should be careful with your migration dependencies.</p>
<p>This <a href="https://docs.djangoproject.com/en/1.10/topics/migrations/#history-consistency" rel="nofollow">documentation</a> may be useful.</p>
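<p>One way to double-check before deleting anything (assuming a Django version that ships the <code>showmigrations</code> command) is to compare what the database has applied with what exists locally:</p>
<pre><code>python manage.py showmigrations home   # [X] marks applied migrations, [ ] marks unapplied ones
# after deleting the unapplied 0021-0028 files and fixing the models:
python manage.py makemigrations home
python manage.py migrate home
</code></pre>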
| 1 |
2016-09-21T05:36:02Z
|
[
"python",
"django"
] |
Python method to request attribute component that formed by list
| 39,607,437 |
<p>I need to access an item from an attribute that is formed as a list. This is my code:</p>
<pre><code>class Foo:
    def __init__(self):
        self.__words = []

    @property
    def word(self, index):
        return self.__words[index]

    @word.setter
    def word(self, word):
        self.__words.append(str(word))
</code></pre>
<p>When I tested it in the terminal, the result is as follows:</p>
<pre><code>import coba
a = coba.Foo()
a.word = 'ok'
a.word(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: word() missing 1 required positional argument: 'index'
</code></pre>
<p>Why can't I access a list item using that method? Thank you for helping.</p>
| 0 |
2016-09-21T04:12:34Z
| 39,607,556 |
<p>I believe you cannot add arbitrary arguments to a method decorated by <code>@property</code>. So, if you really want to do this, you can return a method that takes an index instead of trying to get the value itself:</p>
<pre><code>class Foo:
    def __init__(self):
        self.__words = []

    @property
    def word(self):
        return lambda index: self.__words[index]

    @word.setter
    def word(self, word):
        self.__words.append(str(word))
        print(self.__words)


a = Foo()
a.word = 'ok'
print(a.word(0))  # 'ok'
</code></pre>
<p>Note that this is generally not a very Pythonic class. Python takes a "trust the caller" approach, so it's rare to see read-only or restricted write implementations like this. As others have said, it's much more common to just use the internal list and trust the caller to not mess with it in unintended ways.</p>
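<p>For comparison, one more conventional sketch (just an alternative of mine, not the only way) drops the property entirely and exposes indexing through <code>__getitem__</code> and an explicit <code>add_word</code> method:</p>
<pre><code>class Foo:
    def __init__(self):
        self._words = []

    def add_word(self, word):
        self._words.append(str(word))

    def __getitem__(self, index):
        return self._words[index]

a = Foo()
a.add_word('ok')
print(a[0])   # 'ok'
</code></pre>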
| 0 |
2016-09-21T04:27:27Z
|
[
"python",
"list",
"python-3.x"
] |
What is wrong with my code to remove duplicates from Linked List?
| 39,607,536 |
<p>New to Python, and my code is passing all test cases other than the input 1-->0, for which it returns nothing instead of 1->0. Does this have something to do with the value of None?</p>
<pre><code>def RemoveDuplicates(head):
    if head == None or head.next == None:
        return
    else:
        temp = head
        while(temp.next != None):
            if temp.data == temp.next.data:
                temp.next = temp.next.next
            else:
                temp = temp.next
    return head
</code></pre>
| 0 |
2016-09-21T04:24:57Z
| 39,607,595 |
<p>Try <code>is None</code> instead of <code>== None</code>. Comparisons to singletons should be done using <code>is</code> or <code>is not</code>.</p>
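<p>Applied to the code in the question, only the comparisons change, for example:</p>
<pre><code>if head is None or head.next is None:
    return
...
while temp.next is not None:
    ...
</code></pre>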
| 0 |
2016-09-21T04:32:01Z
|
[
"python"
] |
What is wrong with my code to remove duplicates from Linked List?
| 39,607,536 |
<p>New to Python, and my code is passing all test cases other than the input 1-->0, for which it returns nothing instead of 1->0. Does this have something to do with the value of None?</p>
<pre><code>def RemoveDuplicates(head):
    if head == None or head.next == None:
        return
    else:
        temp = head
        while(temp.next != None):
            if temp.data == temp.next.data:
                temp.next = temp.next.next
            else:
                temp = temp.next
    return head
</code></pre>
| 0 |
2016-09-21T04:24:57Z
| 39,608,112 |
<p>We can't see the class of head and what operations it has defined. Is the <code>__eq__</code> magic method defined on that class?</p>
<p>Is it possible that <code>head.next == None</code> evaluates to <code>head.next.data == None</code> for the class? In which case <code>0 == None</code> evaluates to True.</p>
<p>As others have mentioned this is fixed by using <code>head.next is None</code></p>
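<p>For illustration, a hypothetical node class (an assumption on my part, not the asker's actual code) shows how a custom <code>__eq__</code> can make <code>==</code> comparisons against <code>None</code> behave unexpectedly, which an <code>is None</code> check sidesteps:</p>
<pre><code>class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

    def __eq__(self, other):
        # compares by data only
        return self.data == getattr(other, 'data', other)

print(Node(0) == None)      # False with this definition, but depends entirely on how __eq__ is written
print(Node(None) == None)   # True -- could wrongly trigger an "empty list" base case
print(Node(0) is None)      # False, always; identity checks are not affected by __eq__
</code></pre>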
<p>I suspect you're supposed to <code>return head</code> in the first case regardless, but it doesn't seem like it should be triggering in this case unless the evaluations I mention above are happening. </p>
| 0 |
2016-09-21T05:20:26Z
|
[
"python"
] |
Count the number of Occurrence of Values based on another column
| 39,607,540 |
<p>I have a question regarding creating a pandas dataframe according to the sum of another column.</p>
<p>For example, I have this dataframe</p>
<pre><code> Country | Accident
England Car
England Car
England Car
USA Car
USA Bike
USA Plane
Germany Car
Thailand Plane
</code></pre>
<p>I want to make another dataframe based on the total count of all accidents per country. We will disregard the type of the accident, summing them all based on the country.</p>
<p>My desired dataframe would look like this</p>
<pre><code> Country | Sum of Accidents
England 3
USA 3
Germany 1
Thailand 1
</code></pre>
| 3 |
2016-09-21T04:25:44Z
| 39,607,916 |
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby</code></a> method.</p>
<p>Example - </p>
<pre><code>In [36]: df.groupby(["country"]).count().sort_values(["accident"], ascending=False).rename(columns={"accident" : "Sum of accidents"}).reset_index()
Out[36]:
country Sum of accidents
0 England 3
1 USA 3
2 Germany 1
3 Thailand 1
</code></pre>
<p>Explanation - </p>
<pre><code>df.groupby(["country"]). # Group by country
count(). # Aggregation function which counts the number of occurences of country
sort_values( # Sorting it
["accident"],
ascending=False).
rename(columns={"accident" : "Sum of accidents"}). # Renaming the columns
reset_index() # Resetting the index, it takes the country as the index if you don't do this.
</code></pre>
| 3 |
2016-09-21T05:02:10Z
|
[
"python",
"pandas"
] |
Count the number of Occurrence of Values based on another column
| 39,607,540 |
<p>I have a question regarding creating a pandas dataframe according to the sum of another column.</p>
<p>For example, I have this dataframe</p>
<pre><code> Country | Accident
England Car
England Car
England Car
USA Car
USA Bike
USA Plane
Germany Car
Thailand Plane
</code></pre>
<p>I want to make another dataframe based on the total count of all accidents per country. We will disregard the type of the accident, summing them all based on the country.</p>
<p>My desired dataframe would look like this</p>
<pre><code> Country | Sum of Accidents
England 3
USA 3
Germany 1
Thailand 1
</code></pre>
| 3 |
2016-09-21T04:25:44Z
| 39,608,197 |
<p><strong><em>Option 1</em></strong><br>
Use <code>value_counts</code></p>
<pre><code>df.Country.value_counts().reset_index(name='Sum of Accidents')
</code></pre>
<p><a href="http://i.stack.imgur.com/G0Gii.png"><img src="http://i.stack.imgur.com/G0Gii.png" alt="enter image description here"></a></p>
<p><strong><em>Option 2</em></strong><br>
Use <code>groupby</code> then <code>size</code></p>
<pre><code>df.groupby('Country').size().sort_values(ascending=False) \
.reset_index(name='Sum of Accidents')
</code></pre>
<p><a href="http://i.stack.imgur.com/K7RDt.png"><img src="http://i.stack.imgur.com/K7RDt.png" alt="enter image description here"></a></p>
| 5 |
2016-09-21T05:28:59Z
|
[
"python",
"pandas"
] |
Reset tensorflow Optimizer
| 39,607,566 |
<p>I am loading from a saved model and I would like to be able to reset a tensorflow optimizer such as an Adam Optimizer. Ideally something like:</p>
<pre><code>sess.run([tf.initialize_variables(Adamopt)])
</code></pre>
<p>or</p>
<pre><code>sess.run([Adamopt.reset])
</code></pre>
<p>I have tried looking for an answer but have yet to find any way to do it. Here's what I've found which don't address the issue:
<a href="https://github.com/tensorflow/tensorflow/issues/634" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/634</a></p>
<p><a href="http://stackoverflow.com/questions/35164529/in-tensorflow-is-there-any-way-to-just-initialize-uninitialised-variables">In TensorFlow is there any way to just initialize uninitialised variables?</a></p>
<p><a href="http://stackoverflow.com/questions/33788989/tensorflow-using-adam-optimizer?noredirect=1&lq=1">Tensorflow: Using Adam optimizer</a> </p>
<p>I basically just want a way to reset the "slot" variables in the Adam Optimizer.</p>
<p>Thanks</p>
| 0 |
2016-09-21T04:28:22Z
| 39,680,248 |
<p>The simplest way I found was to give the optimizer its own variable scope and then run </p>
<pre><code>optimizer_scope = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
"scope/prefix/for/optimizer")
sess.run(tf.initialize_variables(optimizer_scope))
</code></pre>
<p>idea from <a href="http://stackoverflow.com/questions/35298326/freeze-some-variables-scopes-in-tensorflow-stop-gradient-vs-passing-variables">freeze weights</a></p>
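<p>A minimal sketch of the same idea for a TF 1.x-style graph (the scope name and the exact variable-gathering approach are assumptions on my part; note that Adam's slot variables are not trainable, so if the trainable-variables collection comes back empty, diffing the global variable list before and after building the optimizer op is a more version-robust way to grab them):</p>
<pre><code>import tensorflow as tf

before = set(tf.global_variables())
with tf.variable_scope("adam_opt"):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)   # `loss` assumed to exist
adam_vars = [v for v in tf.global_variables() if v not in before]   # slots + beta power accumulators

reset_adam_op = tf.variables_initializer(adam_vars)
# later, whenever the optimizer state should be reset:
sess.run(reset_adam_op)
</code></pre>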
| 0 |
2016-09-24T19:38:36Z
|
[
"python",
"optimization",
"tensorflow"
] |
Number of shortest paths of a knight in chessboard from A to B
| 39,607,721 |
<p>Here is the problem:</p>
<p>Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)</p>
<p>Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).</p>
<p>My current ideas:</p>
<p>Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.</p>
<p>Floyd-Warshall shortest path algorithm.</p>
<p>Create an adjacency list and perform BFS on that (but I think this would be inefficient).</p>
<p>To be honest though I don't really have a solid grasp on the logic.</p>
<p>Can anyone help me with psuedocode, python code, or even just a logical walk-through of the problem?</p>
| 0 |
2016-09-21T04:42:57Z
| 39,607,825 |
<p>Try something. Draw boards of the following sizes: 1x1, 2x2, 3x3, 4x4, and a few odd ones like 2x4 and 3x4. Starting with the smallest board and working to the largest, start at the bottom left corner and write a 0, then find all moves from zero and write a 1, find all moves from 1 and write a 2, etc. Do this until there are no more possible moves.</p>
<p>After doing this for all 6 boards, you should have noticed a pattern: Some squares couldn't be moved to until you got a larger board, but once a square was "discovered" (ie could be reached), the number of minimum moves to that square was constant for all boards not smaller than the board on which it was first discovered. (Smaller means less than n OR less than x, not less than (n * x) )</p>
<p>This tells something powerful, anecdotally. All squares have a number associated with them that must be discovered. This number is a property of the square, NOT the board, and is NOT dependent on size/shape of the board. It is always true. However, if the square cannot be reached, then obviously the number is not applicable.</p>
<p>So you need to find the number of every square on a 200x200 board, and you need a way to see if a board is a subset of another board to determine if a square is reachable.</p>
<p>Remember, in these programming challenges, some questions that are really hard can be solved in O(1) time by using lookup tables. I'm not saying this one can, but keep that trick in mind. For this one, pre-calculating the 200x200 board numbers and saving them in an array could save a lot of time, whether it is done only once on first run or run before submission and then the results are hard coded in.</p>
<p>If the problem needs move sequences rather than number of moves, the idea is the same: save move sequences with the numbers.</p>
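<p>A rough sketch of computing those per-square numbers for one board with a breadth-first search (the board dimensions and the knight move list come from the problem; this is only an outline of the idea, not a full solution):</p>
<pre><code>from collections import deque

MOVES = [(1, 2), (2, 1), (-1, -2), (-2, -1), (1, -2), (-1, 2), (-2, 1), (2, -1)]

def knight_distances(n, x):
    """dist[i][j] = minimum knight moves from (0, 0) on an n-by-x board, or -1 if unreachable."""
    dist = [[-1] * x for _ in range(n)]
    dist[0][0] = 0
    q = deque([(0, 0)])
    while q:
        i, j = q.popleft()
        for di, dj in MOVES:
            p, r = i + di, j + dj
            if 0 &lt;= p &lt; n and 0 &lt;= r &lt; x and dist[p][r] == -1:
                dist[p][r] = dist[i][j] + 1
                q.append((p, r))
    return dist

print(knight_distances(4, 5)[3][4])   # minimum moves from bottom-left to top-right of a 4x5 board
</code></pre>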
| 0 |
2016-09-21T04:54:39Z
|
[
"python",
"algorithm",
"chess"
] |
Number of shortest paths of a knight in chessboard from A to B
| 39,607,721 |
<p>Here is the problem:</p>
<p>Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)</p>
<p>Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).</p>
<p>My current ideas:</p>
<p>Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.</p>
<p>Floyd-Warshall shortest path algorithm.</p>
<p>Create an adjacency list and perform BFS on that (but I think this would be inefficient).</p>
<p>To be honest though I don't really have a solid grasp on the logic.</p>
<p>Can anyone help me with psuedocode, python code, or even just a logical walk-through of the problem?</p>
| 0 |
2016-09-21T04:42:57Z
| 39,608,395 |
<p>My approach to this question would be backtracking as the number of squares in the x-axis and y-axis are different. </p>
<p><strong>Note: Backtracking algorithms can be slow for certain cases and fast for the other</strong> </p>
<p>Create a 2-d Array for the chess-board. You know the staring index and the final index. To reach to the final index u need to keep close to the diagonal that's joining the two indexes. </p>
<p>From the starting index see all the indexes that the knight can travel to, choose the index which is closest to the diagonal indexes and keep on traversing, if there is no way to travel any further backtrack one step and move to the next location available from there.</p>
<p><strong>PS : This is a bit similar to a well known problem, the Knight's Tour, in which, choosing any starting point, you have to find the path in which the knight would cover all squares. I have coded this as a Java GUI application; I can send you the link if you want any help</strong></p>
<p>Hope this helps!!</p>
| 0 |
2016-09-21T05:45:38Z
|
[
"python",
"algorithm",
"chess"
] |
Number of shortest paths of a knight in chessboard from A to B
| 39,607,721 |
<p>Here is the problem:</p>
<p>Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)</p>
<p>Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).</p>
<p>My current ideas:</p>
<p>Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.</p>
<p>Floyd-Warshall shortest path algorithm.</p>
<p>Create an adjacency list and perform BFS on that (but I think this would be inefficient).</p>
<p>To be honest though I don't really have a solid grasp on the logic.</p>
<p>Can anyone help me with psuedocode, python code, or even just a logical walk-through of the problem?</p>
| 0 |
2016-09-21T04:42:57Z
| 39,612,988 |
<p>BFS is efficient enough for this problem as it's complexity is O(n*x) since you explore each cell only one time. For keeping the number of shortest paths, you just have to keep an auxiliary array to save them.</p>
<p>You can also use A* to solve this faster but it's not necessary in this case because it is a programming contest problem.</p>
<pre><code>dist = {}
ways = {}

def bfs():
    start = 1,1
    goal = 6,6
    queue = [start]
    dist[start] = 0
    ways[start] = 1
    while len(queue):
        cur = queue[0]
        queue.pop(0)
        if cur == goal:
            print "reached goal in %d moves and %d ways"%(dist[cur],ways[cur])
            return
        for move in [ (1,2),(2,1),(-1,-2),(-2,-1),(1,-2),(-1,2),(-2,1),(2,-1) ]:
            next_pos = cur[0]+move[0], cur[1]+move[1]
            if next_pos[0] > goal[0] or next_pos[1] > goal[1] or next_pos[0] < 1 or next_pos[1] < 1:
                continue
            if next_pos in dist and dist[next_pos] == dist[cur]+1:
                ways[next_pos] += ways[cur]
            if next_pos not in dist:
                dist[next_pos] = dist[cur]+1
                ways[next_pos] = ways[cur]
                queue.append(next_pos)

bfs()
</code></pre>
<p>Output</p>
<pre><code>reached goal in 4 moves and 4 ways
</code></pre>
<p><em>Note that the number of ways to reach the goal can get exponentially big</em></p>
| 1 |
2016-09-21T09:45:23Z
|
[
"python",
"algorithm",
"chess"
] |
Number of shortest paths of a knight in chessboard from A to B
| 39,607,721 |
<p>Here is the problem:</p>
<p>Given the input n = 4 x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200 x = 200)</p>
<p>Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).</p>
<p>My current ideas:</p>
<p>Use a 2d array to store all the possible moves, perform breadth-first
search(BFS) on the 2d array to find the shortest path.</p>
<p>Floyd-Warshall shortest path algorithm.</p>
<p>Create an adjacency list and perform BFS on that (but I think this would be inefficient).</p>
<p>To be honest though I don't really have a solid grasp on the logic.</p>
<p>Can anyone help me with psuedocode, python code, or even just a logical walk-through of the problem?</p>
| 0 |
2016-09-21T04:42:57Z
| 39,622,042 |
<p>I suggest:</p>
<ol>
<li>Use BFS backwards from the target location to calculate (in just O(nx) total time) the minimum distance to the target (x, n) in knight's moves <em>from each other square</em>. For each starting square (i, j), store this distance in d[i][j].</li>
<li>Calculate c[i][j], the number of minimum-length paths starting at (i, j) and ending at the target (x, n), recursively as follows:
<ul>
<li>c[x][n] = 1</li>
<li>c[i][j] = the sum of c[p][q] over all (p, q) such that both
<ul>
<li>(p, q) is a knight's-move-neighbour of (i, j), and</li>
<li>d[p][q] = d[i][j]-1.</li>
</ul></li>
</ul></li>
</ol>
<p>Use <a href="https://en.wikipedia.org/wiki/Memoization" rel="nofollow">memoisation</a> in step 2 to keep the recursion from taking exponential time. Alternatively, you can compute c[][] bottom-up with a slightly modified second BFS (also backwards) as follows:</p>
<pre><code>c = x by n array with each entry initially 0;
seen = x by n array with each entry initially 0;
s = createQueue();
push(s, (x, n));
while (notEmpty(s)) {
    (i, j) = pop(s);
    for (each location (p, q) that is a knight's-move-neighbour of (i, j)) {
        if (d[p][q] == d[i][j] + 1) {
            c[p][q] = c[p][q] + c[i][j];
            if (seen[p][q] == 0) {
                push(s, (p, q));
                seen[p][q] = 1;
            }
        }
    }
}
</code></pre>
<p>The idea here is to always compute c[][] values for all positions having some given distance from the target before computing any c[][] value for a position having a larger distance, as the latter depend on the former.</p>
<p>The length of a shortest path will be d[1][1], and the number of such shortest paths will be c[1][1]. <strong>Total computation time is O(nx)</strong>, which is clearly best-possible in an asymptotic sense.</p>
| 1 |
2016-09-21T16:36:41Z
|
[
"python",
"algorithm",
"chess"
] |
how to find running script in raspberry pi 3
| 39,607,860 |
<p>I have a Raspberry Pi 3 and I'm running a Python script at startup using /etc/rc.local. I wanted to find which Python script is running in the background, so I logged in using PuTTY and ran the following command:</p>
<pre><code>ps aux|grep mail
</code></pre>
<p>It displays two processes running, like the following:</p>
<pre><code>root 593 0.0 0.3 7808 3076 ? S 04:42 0:00 sudo python /usr/BCMaster/mail_captureimage_final.py
root 678 1.4 0.9 13220 9356 ? S 04:42 0:07 python /usr/BCMaster/mail_captureimage_final.py
pi 1434 0.0 0.2 4280 1944 pts/0 S+ 04:50 0:00 grep --color=auto mail
</code></pre>
<p>where mail_captureimage_final.py is my Python script which I run at startup of the Raspberry Pi. So my question is: why is it displayed two times? Please help me. Thanks in advance.</p>
| 0 |
2016-09-21T04:57:44Z
| 39,647,804 |
<p>Use <code>ps -alx</code> and look at the PID and PPID fields; you will find that the 'sudo python' process is the parent of the 'python' process. Sudo is an executable program that runs its arguments as a sub-process.</p>
| 0 |
2016-09-22T19:53:57Z
|
[
"python",
"windows",
"putty",
"raspberry-pi3"
] |
Redirect to another page in blueprints
| 39,607,902 |
<p>I'm looking for a way to redirect to another page while using Flask blueprints</p>
<pre><code>from flask import Blueprint, request, render_template, redirect, url_for
import json

user = Blueprint("user", __name__, template_folder='templates')

@user.route("/index")
def index():
    return render_template("index.html")

@user.route("/signup")
def signup():
    return render_template("signup.html")

@user.route("/login")
def login():
    return render_template("login.html")

from models.user_model import User
from app import db

@user.route('/saveuser/', methods=['GET'])
def saveuser():
    username = request.args["username"]
    emailid = request.args["email"]
    password = request.args["password"]
    try:
        u = User(str(username), str(emailid), password=str(password))
        db.session.add(u)
        db.session.commit()
    except Exception, excp:
        return excp
    # return redirect('/index')
    return render_template("index.html")
</code></pre>
<p>In saveuser() I want to redirect to the index page if I'm able to insert successfully</p>
| 0 |
2016-09-21T05:01:08Z
| 39,608,106 |
<p>Use <code>redirect</code> and <code>url_for</code>:</p>
<pre><code>return redirect(url_for('user.index'))
</code></pre>
| 2 |
2016-09-21T05:19:43Z
|
[
"python",
"flask",
"blueprint"
] |
Redirect to another page in blueprints
| 39,607,902 |
<p>I'm looking for a way to redirect to another page while using Flask blueprints</p>
<pre><code>from flask import Blueprint, request, render_template, redirect, url_for
import json

user = Blueprint("user", __name__, template_folder='templates')

@user.route("/index")
def index():
    return render_template("index.html")

@user.route("/signup")
def signup():
    return render_template("signup.html")

@user.route("/login")
def login():
    return render_template("login.html")

from models.user_model import User
from app import db

@user.route('/saveuser/', methods=['GET'])
def saveuser():
    username = request.args["username"]
    emailid = request.args["email"]
    password = request.args["password"]
    try:
        u = User(str(username), str(emailid), password=str(password))
        db.session.add(u)
        db.session.commit()
    except Exception, excp:
        return excp
    # return redirect('/index')
    return render_template("index.html")
</code></pre>
<p>In saveuser() I want to redirect to the index page if I'm able to insert successfully</p>
| 0 |
2016-09-21T05:01:08Z
| 39,614,262 |
<p>Use the following working example and adjust it to fit your needs:</p>
<pre><code>from flask import Flask, Blueprint, render_template, redirect, url_for

user = Blueprint("user", __name__, template_folder='templates')

@user.route("/index")
def index():
    return render_template("index.html")

@user.route('/saveuser/', methods=['GET'])
def saveuser():
    # save your user here
    return redirect(url_for('user.index'))

app = Flask(__name__)
app.register_blueprint(user)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
</code></pre>
| 0 |
2016-09-21T10:40:30Z
|
[
"python",
"flask",
"blueprint"
] |
Opencv webcam script endlessly turns webcams off and on
| 39,608,123 |
<p>I wrote a script to display a depth map from my webcams:</p>
<pre><code>cam_a = int(sys.argv[1])
cam_b = int(sys.argv[2])

while True:
    imgl = cv2.VideoCapture(cam_a).read()[1]
    imgL = cv2.cvtColor(imgl, cv2.COLOR_BGR2GRAY)
    imgr = cv2.VideoCapture(cam_b).read()[1]
    imgR = cv2.cvtColor(imgr, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=16, SADWindowSize=15)
    disparity = stereo.compute(imgL, imgR)
    cv2.imshow('Disparity', disparity)
</code></pre>
<p>And while it doesn't give me an error, it does flash both of my webcams off and on, endlessly. I'm worried this might break my webcams, how can I stop this?</p>
<p><strong>EDIT</strong></p>
<p>So, I changed it so that it only shows one camera as a normal video:</p>
<pre><code>while True:
    imgl = cv2.VideoCapture(cam_a).read()[1]
    imgL = cv2.cvtColor(imgl, cv2.COLOR_BGR2GRAY)
    #imgr = cv2.VideoCapture(cam_b).read()[1]
    #imgR = cv2.cvtColor(imgr, cv2.COLOR_BGR2GRAY)
    #stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=16, SADWindowSize=15)
    #disparity = stereo.compute(imgL, imgR)
    cv2.imshow('Disparity', imgL)
    cv2.waitKey(10)
</code></pre>
<p>And it <em>still</em> just flashes the camera on and off. I'm not sure what to change here.</p>
| 0 |
2016-09-21T05:21:34Z
| 39,608,521 |
<p>All you need is to add a delay after <code>imshow</code>:</p>
<pre><code>cv2.waitKey(10)
</code></pre>
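<p>In context, the delay goes inside the loop, right after the frame is shown (a sketch; the rest of the loop is unchanged):</p>
<pre><code>while True:
    ...
    cv2.imshow('Disparity', disparity)
    cv2.waitKey(10)   # gives the HighGUI event loop time to actually draw the frame
</code></pre>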
| 0 |
2016-09-21T05:55:42Z
|
[
"python",
"opencv",
"video",
"webcam",
"depth"
] |
Opencv webcam script endlessly turns webcams off and on
| 39,608,123 |
<p>I wrote a script to display a depth map from my webcams:</p>
<pre><code>cam_a = int(sys.argv[1])
cam_b = int(sys.argv[2])

while True:
    imgl = cv2.VideoCapture(cam_a).read()[1]
    imgL = cv2.cvtColor(imgl, cv2.COLOR_BGR2GRAY)
    imgr = cv2.VideoCapture(cam_b).read()[1]
    imgR = cv2.cvtColor(imgr, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=16, SADWindowSize=15)
    disparity = stereo.compute(imgL, imgR)
    cv2.imshow('Disparity', disparity)
</code></pre>
<p>And while it doesn't give me an error, it does flash both of my webcams off and on, endlessly. I'm worried this might break my webcams, how can I stop this?</p>
<p><strong>EDIT</strong></p>
<p>So, I changed it so that it only shows one camera as a normal video:</p>
<pre><code>while True:
    imgl = cv2.VideoCapture(cam_a).read()[1]
    imgL = cv2.cvtColor(imgl, cv2.COLOR_BGR2GRAY)
    #imgr = cv2.VideoCapture(cam_b).read()[1]
    #imgR = cv2.cvtColor(imgr, cv2.COLOR_BGR2GRAY)
    #stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=16, SADWindowSize=15)
    #disparity = stereo.compute(imgL, imgR)
    cv2.imshow('Disparity', imgL)
    cv2.waitKey(10)
</code></pre>
<p>And it <em>still</em> just flashes the camera on and off. I'm not sure what to change here.</p>
| 0 |
2016-09-21T05:21:34Z
| 39,672,403 |
<p>Got the problem. You are continuously creating a new VideoCapture object inside the while loop. You should use one instance, created before the while loop, and read frames from that instance. See this example and change your code accordingly; hopefully it will fix your issue:</p>
<pre><code>import cv2

camera = cv2.VideoCapture(0)
while True:
    return_value, image = camera.read()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cv2.imshow('image', gray)
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.imwrite('test.jpg', image)
        break
camera.release()
cv2.destroyAllWindows()
</code></pre>
| 2 |
2016-09-24T04:10:49Z
|
[
"python",
"opencv",
"video",
"webcam",
"depth"
] |
match names in csv file to filename in folder
| 39,608,212 |
<p>I have got a list of about 7000 names in a csv file that is arranged by surname, name, date of birth etc. I also have a folder of about 7000+ scanned documents (enrolment forms) which have the name of each person as a filename. </p>
<p>Now the filenames may not exactly match the names in the csv, e.g. John Doe in the csv while the filename is John-Michael Doe, etc.</p>
<p>How would I go about writing a program that looks through the csv and sees which filenames are missing from the scanned folder?</p>
<p>I am a complete novice in programming and any advice is appreciated. </p>
| 0 |
2016-09-21T05:30:18Z
| 39,609,154 |
<p>The first thing you want to do is read the CSV into memory. You can do this with the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow"><code>csv</code> module</a>. The most useful tool there is <code>csv.DictReader</code>, which takes the first line of the file as keys in a dictionary, and reads the remainder:</p>
<pre><code>import csv

with open('/path/to/yourfile.csv', 'r') as f:
    rows = list(csv.DictReader(f))

from pprint import pprint
pprint(rows[:100])
</code></pre>
<p>In windows, the path would look different, and would be something like <code>c:/some folder/some other folder/</code> (note the forward-slashes instead of backslashes).</p>
<p>This will show the first 100 rows from the file. For example if you have columns named "First Name", "Last Name", "Date of Birth", this will look like:</p>
<pre><code>[{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'},
{'Date of Birth': 'Jan 1, 1970', 'First Name': 'John', 'Last Name': 'Doe'}
...]
</code></pre>
<p>Next you want to get a list of all the 7000 files, using <code>os.listdir</code>:</p>
<pre><code>import os

images_directory = '/path/to/images/'
image_paths = [
    os.path.join(images_directory, filename)
    for filename in os.listdir(images_directory)]
</code></pre>
<p>Now you'll need some way to extract the names from the files. This depends crucially on the way the files are structured. The tricky-to-use but very powerful tool for this task is called a <em>regular expression</em>, but probably something simple will suffice. For example, if the files are named like "first-name last-name.pdf", you could write a simple parsing method like:</p>
<pre><code>def parse_filename(filename):
    name, extension = filename.split('.')
    first_name, last_name = name.split(' ')
    return first_name.replace('-', ' '), last_name.replace('-', ' ')
</code></pre>
<p>The exact implementation will depend on how the files are named, but the key things to get you started are <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow"><code>str.split</code></a>, <a href="https://docs.python.org/3/library/stdtypes.html#str.strip" rel="nofollow"><code>str.strip</code></a> and a few others in that same class. You might also take a look at the <a href="https://docs.python.org/3/library/re.html" rel="nofollow"><code>re</code> module for handling regular expressions</a>. As I said, that's a more advanced/powerful technique, so it may not be worth worrying about right now.</p>
<p>A simple algorithm to do the matching would be something like the following:</p>
<pre><code>name_to_filename = {parse_filename(filename.lower()): filename for filename in filenames}

matched_rows = []
unmatched_files = []
for row in rows:
    name_key = (row['First Name'].lower(), row['Last Name'].lower())
    matching_file = name_to_filename.get(name_key)  # This sees if we have a matching file name,
                                                    # and returns None otherwise.
    new_row = row.copy()
    if matching_file:
        new_row['File'] = matching_file
        print('Matched "%s" to %s' % (' '.join(name_key), matching_file))
    else:
        new_row['File'] = ''
        print('No match for "%s"' % (' '.join(name_key)))
    matched_rows.append(new_row)

with open('/path/to/output.csv', 'w') as f:
    writer = csv.DictWriter(f, ['First Name', 'Last Name', 'Date of Birth', 'File'])
    writer.writeheader()
    writer.writerows(matched_rows)
</code></pre>
<p>This should give you an output spreadsheet with whatever rows you could match automatically matched up, and the remaining ones blank. Depending on how clean your data is, you might be able to just match the remaining few entries by hand. There's only 7000, and the "dumb" heuristic will probably catch most of them. If you need more advanced heuristics, you might look at <a href="https://en.wikipedia.org/wiki/Jaccard_index" rel="nofollow">Jaccard similarity</a> of the "words" in the name, and the <a href="https://docs.python.org/3/library/difflib.html" rel="nofollow">difflib</a> module for approximate string matching.</p>
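<p>For the approximate-matching fallback, a hedged illustration with the standard library (the cutoff value is just a starting guess to tune):</p>
<pre><code>import difflib

known_names = [' '.join(key) for key in name_to_filename]   # "first last" strings
query = 'jon doe'
print(difflib.get_close_matches(query, known_names, n=3, cutoff=0.8))
# e.g. ['john doe'] -- candidates worth reviewing by hand
</code></pre>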
<p>Of course most of this code won't <em>quite</em> work on your problem, but hopefully it's enough to get you started.</p>
| 0 |
2016-09-21T06:41:19Z
|
[
"python"
] |
Pandas error trying to convert string into integer
| 39,608,282 |
<p>Requirement : </p>
<p>One particular column in a DataFrame is 'Mixed' Type. It can have values like <code>"123456"</code> or <code>"ABC12345"</code>.</p>
<p>This dataframe is being written into an Excel using xlsxwriter .</p>
<p>For values like <code>"123456"</code>, down the line Pandas converts it into <code>123456.0</code> (making it look like a float).</p>
<p>We need to put it into xlsx as 123456 (i.e as +integer) in case value is FULLY numeric.</p>
<p>Effort :</p>
<p>Code Snippet shown below</p>
<pre><code>import pandas as pd
import numpy as np
import xlsxwriter
import os
import datetime
import sys
excel_name = str(input("Please Enter Spreadsheet Name :\n").strip())
print("excel entered : " , excel_name)
df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
first_pass_drop_duplicate = df_m_d.drop_duplicates(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep=False)
# We need to keep integer AltID as is
first_pass_drop_duplicate.loc[first_pass_drop_duplicate['AltID']] = first_pass_drop_duplicate['AltID'].apply(lambda x : str(int(x)) if str(x).isdigit() == True else x)
</code></pre>
<p>I have tried :</p>
<pre><code>1. using `dataframe.astype(int).astype(str)` # works as long as value is not alphanumeric
2.importing re and using pure python `re.compile()` and `replace()` -- does not work
3.reading DF row by row in a for loop !!! Kills the machine as dataframe can have 300k+ records
</code></pre>
<p>Each time, error I get:</p>
<blockquote>
<p>raise KeyError('%s not in index' % objarr[mask])<br>
KeyError: '[ 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 2124. 2124. 2124. 2124. 2124. 2124.\n 2124. 2124. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.] not in index'</p>
</blockquote>
<p>I am a newbie in Python/pandas; any help or solution is much appreciated.</p>
| 3 |
2016-09-21T05:36:20Z
| 39,608,394 |
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a>:</p>
<pre><code>df = pd.DataFrame({'AltID':['123456','ABC12345','123456'],
'B':[4,5,6]})
print (df)
AltID B
0 123456 4
1 ABC12345 5
2 123456 6
df.ix[df.AltID.str.isdigit(), 'AltID'] = pd.to_numeric(df.AltID, errors='coerce')
print (df)
AltID B
0 123456 4
1 ABC12345 5
2 123456 6
print (df['AltID'].apply(type))
0 <class 'float'>
1 <class 'str'>
2 <class 'float'>
Name: AltID, dtype: object
</code></pre>
| 2 |
2016-09-21T05:45:34Z
|
[
"python",
"string",
"pandas",
"casting",
"int"
] |
Pandas error trying to convert string into integer
| 39,608,282 |
<p>Requirement : </p>
<p>One particular column in a DataFrame is 'Mixed' Type. It can have values like <code>"123456"</code> or <code>"ABC12345"</code>.</p>
<p>This dataframe is being written into an Excel using xlsxwriter .</p>
<p>For values like <code>"123456"</code>, down the line Pandas converts it into <code>123456.0</code> (making it look like a float).</p>
<p>We need to put it into xlsx as 123456 (i.e as +integer) in case value is FULLY numeric.</p>
<p>Effort :</p>
<p>Code Snippet shown below</p>
<pre><code>import pandas as pd
import numpy as np
import xlsxwriter
import os
import datetime
import sys
excel_name = str(input("Please Enter Spreadsheet Name :\n").strip())
print("excel entered : " , excel_name)
df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
first_pass_drop_duplicate = df_m_d.drop_duplicates(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep=False)
# We need to keep integer AltID as is
first_pass_drop_duplicate.loc[first_pass_drop_duplicate['AltID']] = first_pass_drop_duplicate['AltID'].apply(lambda x : str(int(x)) if str(x).isdigit() == True else x)
</code></pre>
<p>I have tried :</p>
<pre><code>1. using `dataframe.astype(int).astype(str)` # works as long as value is not alphanumeric
2.importing re and using pure python `re.compile()` and `replace()` -- does not work
3.reading DF row by row in a for loop !!! Kills the machine as dataframe can have 300k+ records
</code></pre>
<p>Each time, error I get:</p>
<blockquote>
<p>raise KeyError('%s not in index' % objarr[mask])<br>
KeyError: '[ 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 2124. 2124. 2124. 2124. 2124. 2124.\n 2124. 2124. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.] not in index'</p>
</blockquote>
<p>I am a newbie in Python/pandas; any help or solution is much appreciated.</p>
| 3 |
2016-09-21T05:36:20Z
| 39,608,396 |
<p>Use <code>apply</code> and <code>pd.to_numeric</code> with parameter <code>errors='ignore'</code></p>
<p>consider the <code>pd.Series</code> <code>s</code></p>
<pre><code>s = pd.Series(['12345', 'abc12', '456', '65hg', 54, '12-31-2001'])
s.apply(pd.to_numeric, errors='ignore')
0 12345
1 abc12
2 456
3 65hg
4 54
5 12-31-2001
dtype: object
</code></pre>
<hr>
<p>Notice the types</p>
<pre><code>s.apply(pd.to_numeric, errors='ignore').apply(type)
0 <type 'numpy.int64'>
1 <type 'str'>
2 <type 'numpy.int64'>
3 <type 'str'>
4 <type 'int'>
5 <type 'str'>
dtype: object
</code></pre>
| 2 |
2016-09-21T05:45:41Z
|
[
"python",
"string",
"pandas",
"casting",
"int"
] |
Pandas error trying to convert string into integer
| 39,608,282 |
<p>Requirement : </p>
<p>One particular column in a DataFrame is 'Mixed' Type. It can have values like <code>"123456"</code> or <code>"ABC12345"</code>.</p>
<p>This dataframe is being written into an Excel using xlsxwriter .</p>
<p>For values like <code>"123456"</code>, down the line Pandas converts it into <code>123456.0</code> (making it look like a float).</p>
<p>We need to put it into xlsx as 123456 (i.e as +integer) in case value is FULLY numeric.</p>
<p>Effort :</p>
<p>Code Snippet shown below</p>
<pre><code>import pandas as pd
import numpy as np
import xlsxwriter
import os
import datetime
import sys
excel_name = str(input("Please Enter Spreadsheet Name :\n").strip())
print("excel entered : " , excel_name)
df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
first_pass_drop_duplicate = df_m_d.drop_duplicates(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep=False)
# We need to keep integer AltID as is
first_pass_drop_duplicate.loc[first_pass_drop_duplicate['AltID']] = first_pass_drop_duplicate['AltID'].apply(lambda x : str(int(x)) if str(x).isdigit() == True else x)
</code></pre>
<p>I have tried :</p>
<pre><code>1. using `dataframe.astype(int).astype(str)` # works as long as value is not alphanumeric
2.importing re and using pure python `re.compile()` and `replace()` -- does not work
3.reading DF row by row in a for loop !!! Kills the machine as dataframe can have 300k+ records
</code></pre>
<p>Each time, error I get:</p>
<blockquote>
<p>raise KeyError('%s not in index' % objarr[mask])<br>
KeyError: '[ 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 102711. 102711. 102711. 102711. 102711. 102711. 102711. 102711.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 5337. 5337. 5337. 5337. 5337. 5337.\n 5337. 5337. 2124. 2124. 2124. 2124. 2124. 2124.\n 2124. 2124. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.\n 6643. 6643. 6643. 6643. 6643. 6643. 6643. 6643.] not in index'</p>
</blockquote>
<p>I am a newbie in Python/pandas; any help or solution is much appreciated.</p>
| 3 |
2016-09-21T05:36:20Z
| 39,626,721 |
<p>Finally it worked by using the 'converters' option of pandas read_excel, as follows:</p>
<pre><code>df_w02 = pd.read_excel(excel_name, names = df_header,converters = {'AltID':str,'RatingReason' : str}).fillna("")
</code></pre>
<p>converters can 'cast' a type as defined by my function/value and keep an integer stored as a string without adding a decimal point.</p>
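<p>For illustration, a converter can also be any callable, e.g. a hypothetical one that strips whitespace while still keeping the value a string:</p>
<pre><code>df_w02 = pd.read_excel(excel_name, names=df_header,
                       converters={'AltID': lambda v: str(v).strip()}).fillna("")
</code></pre>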
| 0 |
2016-09-21T21:16:33Z
|
[
"python",
"string",
"pandas",
"casting",
"int"
] |
Using Python dateutil, how to judge a timezone string is "valid" or not?
| 39,608,283 |
<p>I am using the following code to do the UTC time to local time conversion:</p>
<pre><code>def UTC_to_local(timezone_str, datetime_UTC):
    """
    convert UTC datetime to local datetime. Input datetime is naive
    """
    try:
        from_zone = dateutil.tz.gettz('UTC')
        to_zone = dateutil.tz.gettz(timezone_str)
        datetime_UTC = datetime_UTC.replace(tzinfo=from_zone)
        # Convert time zone
        datetime_local = datetime_UTC.astimezone(to_zone)
    except Exception as e:
        raise
    return datetime_local
</code></pre>
<p>If I give a correct timezone_str (e.g., 'America/Chicago'), it works as expected.
But even if I give an unexpected timezone_str (e.g., 'America/Chicago1' or 'Americaerror/Chicago'), there is still no exception and it just returns different numbers! I think it's more reasonable to get an exception for an unexpected timezone string than to just "make the best guess".</p>
<p>Furthermore, I have found (using IPython):</p>
<pre><code>In [171]: tz.gettz("America/Chicago")
Out[171]: tzfile('/usr/share/zoneinfo/America/Chicago')
In [172]: tz.gettz("America/Chicago1")
Out[172]: tzstr('America/Chicago1')
In [173]: tz.gettz("Americaerror/Chicago")
(None)
</code></pre>
| 0 |
2016-09-21T05:36:33Z
| 39,608,989 |
<p><strong>Solution #1:</strong> If you can use <a href="https://github.com/newvem/pytz" rel="nofollow">pytz</a></p>
<pre><code>import pytz
if timezone_str in pytz.all_timezones:
    ...
else:
    raise ValueError('Invalid timezone string!')
</code></pre>
<p><strong>Solution #2:</strong></p>
<pre><code>import os
import tarfile
import dateutil.zoneinfo
zi_path = os.path.abspath(os.path.dirname(dateutil.zoneinfo.__file__))
zonesfile = tarfile.TarFile.open(os.path.join(zi_path, 'dateutil-zoneinfo.tar.gz'))
zonenames = zonesfile.getnames()
if timezone_str in zonenames:
    ...
else:
    raise ValueError('Invalid timezone string!')
</code></pre>
| 1 |
2016-09-21T06:31:25Z
|
[
"python",
"timezone"
] |
Google FooBar unexpected failed valuation
| 39,608,290 |
<p>I am working on a Google FooBar challenge and the test case doesn't seem to be correct; below is the highlight.</p>
<pre><code>return the product of non-empty subset of those numbers. Example [2, -3, 1, 0, -5],
would be: xs[0] = 2, xs[1] = -3, xs[4] = -5,
giving the product 2*(-3)*(-5) = 30.
So answer([2,-3,1,0,-5]) will be "30".
</code></pre>
<p>Given the following:</p>
<p>Case 1:</p>
<pre><code>Inputs:
(int list) xs = [2, 0, 2, 2, 0]
Output:
(string) "8"
</code></pre>
<p>Case 2:</p>
<pre><code>Inputs:
(int list) xs = [-2, -3, 4, -5]
Output:
(string) "60"
</code></pre>
<p>The "expected result" of 60 confuses me, shouldn't the expected result be 120? When I submit the following code: </p>
<pre><code>def answer(xs):
    runningTotal = ""
    for i in range(0, len(x)):
        if x[i] != 0:
            runningTotal = runningTotal + "(" + str(x[i]) + ")" + " * "
    answer = runningTotal.replace("-", "")[:-3]
    return str(eval(answer))
</code></pre>
<p>It passes Test 1 but fails Test 2 (and 3, 4, and 5, for which I am not given the test conditions). Is there something I am missing, or is it possible this is an error in Google's expected results? Below is the entire instruction set.</p>
<h1>Power Hungry</h1>
<p>Commander Lambda's space station is HUGE. And huge space stations take a LOT of power. Huge space stations with doomsday devices take even more power. To help meet the station's power
needs, Commander Lambda has installed solar panels on the station's outer surface. But the station sits in the middle of a quasar quantum flux field, which wreaks havoc on the solar
panels. You and your team of henchmen has been assigned to repair the solar panels, but you can't take them all down at once without shutting down the space station (and all those pesky
life support systems!). </p>
<p>You need to figure out which sets of panels in any given array you can take offline to repair while still maintaining the maximum amount of power output per array, and to do THAT, you'll
first need to figure out what the maximum output of each array actually is. Write a function answer(xs) that takes a list of integers representing the power output levels of each panel in an
array, and returns the maximum product of some non-empty subset of those numbers. So for example, if an array contained panels with power output levels of [2, -3, 1, 0, -5], then the maximum
product would be found by taking the subset: xs[0] = 2, xs[1] = -3, xs[4] = -5, giving the product 2*(-3)*(-5) = 30. So answer([2,-3,1,0,-5]) will be "30".</p>
<p>Each array of solar panels contains at least 1 and no more than 50 panels, and each panel will have a power output level whose absolute value is no greater than 1000 (some panels are
malfunctioning so badly that they're draining energy, but you know a trick with the panels' wave stabilizer that lets you combine two negative-output panels to produce the positive
output of the multiple of their power values). The final products may be very large, so give the answer as a string representation of the number.</p>
<h1>Languages</h1>
<p>To provide a Python solution, edit solution.py
To provide a Java solution, edit solution.java</p>
<h1>Test cases</h1>
<p>Inputs:
(int list) xs = [2, 0, 2, 2, 0]
Output:
(string) "8"</p>
<p>Inputs:
(int list) xs = [-2, -3, 4, -5]
Output:
(string) "60"</p>
<p>Use verify [file] to test your solution and see how it does. When you are finished editing your code, use submit [file] to submit your answer. If your solution passes the test cases, it will
be removed from your home folder.</p>
<p>[EDIT]</p>
<p>I adjusted <strike> my code as follows </strike> and all of my test cases return as expected, but now all but the first test fails (even though I have access to case 1 and 2 which both return the expected results on my local computer); for a few minutes the test cases were absent from the readme.txt file but after logging out and back in it was back to the original file. Am I missing some subtly or could this be an error on their system (I only ask because I would expect to pass their first two tests as I pass the first (meaning my return type is correct) and fail the second even though their "output" and mine match. **Please do not give code examples, as noted by Fayaz this is a challenge and I not phishing for someone to do this on my behalf (what fun would that be?!).</p>
<p>[EDIT]
After rebooting the computer, I noticed I had zoomed in slightly and it was cutting off the text (that's embarrassing). I also noticed a few moments later that my code made the assumption of 0 being the default if the value was lower than 0, which clearly didn't work. After that minor adjustment everything passed and I live to code another day. Thank you all so much for your help/support!</p>
| 1 |
2016-09-21T05:37:08Z
| 39,608,727 |
<pre><code>[-2, -3, 4, -5] = -120
</code></pre>
<p>so the subset which has a highest product is</p>
<pre><code> [-3,4,-5] = 60
</code></pre>
<p>, -2 should be excluded from subset to get the maximum product.</p>
| 1 |
2016-09-21T06:13:40Z
|
[
"java",
"python",
"list"
] |
Google FooBar unexpected failed valuation
| 39,608,290 |
<p>I am working on a Google FooBar challenge and the test case doesn't seem to be correct; below is the highlight.</p>
<pre><code>return the product of non-empty subset of those numbers. Example [2, -3, 1, 0, -5],
would be: xs[0] = 2, xs[1] = -3, xs[4] = -5,
giving the product 2*(-3)*(-5) = 30.
So answer([2,-3,1,0,-5]) will be "30".
</code></pre>
<p>Given the following:</p>
<p>Case 1:</p>
<pre><code>Inputs:
(int list) xs = [2, 0, 2, 2, 0]
Output:
(string) "8"
</code></pre>
<p>Case 2:</p>
<pre><code>Inputs:
(int list) xs = [-2, -3, 4, -5]
Output:
(string) "60"
</code></pre>
<p>The "expected result" of 60 confuses me, shouldn't the expected result be 120? When I submit the following code: </p>
<pre><code>def answer(xs):
    runningTotal = ""
    for i in range(0, len(x)):
        if x[i] != 0:
            runningTotal = runningTotal + "(" + str(x[i]) + ")" + " * "
    answer = runningTotal.replace("-", "")[:-3]
    return str(eval(answer))
</code></pre>
<p>It passes Test 1 but fails Test 2 (and 3, 4, and 5, for which I am not given the test conditions). Is there something I am missing, or is it possible this is an error in Google's expected results? Below is the entire instruction set.</p>
<h1>Power Hungry</h1>
<p>Commander Lambda's space station is HUGE. And huge space stations take a LOT of power. Huge space stations with doomsday devices take even more power. To help meet the station's power
needs, Commander Lambda has installed solar panels on the station's outer surface. But the station sits in the middle of a quasar quantum flux field, which wreaks havoc on the solar
panels. You and your team of henchmen has been assigned to repair the solar panels, but you can't take them all down at once without shutting down the space station (and all those pesky
life support systems!). </p>
<p>You need to figure out which sets of panels in any given array you can take offline to repair while still maintaining the maximum amount of power output per array, and to do THAT, you'll
first need to figure out what the maximum output of each array actually is. Write a function answer(xs) that takes a list of integers representing the power output levels of each panel in an
array, and returns the maximum product of some non-empty subset of those numbers. So for example, if an array contained panels with power output levels of [2, -3, 1, 0, -5], then the maximum
product would be found by taking the subset: xs[0] = 2, xs[1] = -3, xs[4] = -5, giving the product 2*(-3)*(-5) = 30. So answer([2,-3,1,0,-5]) will be "30".</p>
<p>Each array of solar panels contains at least 1 and no more than 50 panels, and each panel will have a power output level whose absolute value is no greater than 1000 (some panels are
malfunctioning so badly that they're draining energy, but you know a trick with the panels' wave stabilizer that lets you combine two negative-output panels to produce the positive
output of the multiple of their power values). The final products may be very large, so give the answer as a string representation of the number.</p>
<h1>Languages</h1>
<p>To provide a Python solution, edit solution.py
To provide a Java solution, edit solution.java</p>
<h1>Test cases</h1>
<p>Inputs:
(int list) xs = [2, 0, 2, 2, 0]
Output:
(string) "8"</p>
<p>Inputs:
(int list) xs = [-2, -3, 4, -5]
Output:
(string) "60"</p>
<p>Use verify [file] to test your solution and see how it does. When you are finished editing your code, use submit [file] to submit your answer. If your solution passes the test cases, it will
be removed from your home folder.</p>
<p>[EDIT]</p>
<p>I adjusted <strike> my code as follows </strike> and all of my test cases return as expected, but now all but the first test fails (even though I have access to case 1 and 2 which both return the expected results on my local computer); for a few minutes the test cases were absent from the readme.txt file but after logging out and back in it was back to the original file. Am I missing some subtly or could this be an error on their system (I only ask because I would expect to pass their first two tests as I pass the first (meaning my return type is correct) and fail the second even though their "output" and mine match. **Please do not give code examples, as noted by Fayaz this is a challenge and I not phishing for someone to do this on my behalf (what fun would that be?!).</p>
<p>[EDIT]
After rebooting the computer, I noticed I had zoomed in slightly and it was cutting off the text (that's embarrassing). I also noticed a few moments later that my code made the assumption of 0 being the default if the value was lower than 0, which clearly didn't work. After that minor adjustment everything passed and I live to code another day. Thank you all so much for your help/support!</p>
| 1 |
2016-09-21T05:37:08Z
| 39,608,767 |
<p>The output for input = [-2, -3, 4, -5] should be 60.
I will tell you why.
I think you get 120 as (-2)*(-3)*4*(-5).
However, the result of this operation is -120, which is the least product possible for this input.
The output should be 60, from the subset (-3)*4*(-5).</p>
<p>If you are privileged to be invited for the foobar challenge, I think you should be able to make the changes to your code to accommodate this.
All the Best!</p>
| 1 |
2016-09-21T06:15:57Z
|
[
"java",
"python",
"list"
] |