title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Plotting networkx graph: node labels adjacent to nodes? | 39,876,123 | <p>I am trying to plot association rules and am having a difficult time getting the node labels below to "follow" the nodes. That is, I would like each label to automatically be near its respective node without having to hard-code any values. The output from below doesn't even include some of the node labels. How can I make these labels dynamically follow the nodes?</p>
<pre><code>import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
df = pd.DataFrame({'node1': ['candy', 'cookie', 'beach', 'mark', 'black'],
'node2': ['beach', 'beach', 'cookie', 'beach', 'mark'],
'weight': [10, 5, 3, 4, 20]})
G = nx.Graph()
for idx in df.index:
node1 = df.loc[idx, 'node1']
node2 = df.loc[idx, 'node2']
weight = df.loc[idx, 'weight']
G.add_edge(node1, node2, weight = weight)
nx.draw(G, node_size = 100)
pos = nx.spring_layout(G)
nx.draw_networkx_labels(G, pos = pos, font_size = 14, with_labels = True)
plt.draw()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/r8011.png" rel="nofollow"><img src="http://i.stack.imgur.com/r8011.png" alt="enter image description here"></a></p>
| 0 | 2016-10-05T14:03:57Z | 39,880,971 | <p>When you call</p>
<pre><code>nx.draw(G, node_size = 100)
</code></pre>
<p>and then</p>
<pre><code>pos = nx.spring_layout(G)
</code></pre>
<p>you are creating two independent sets of positions. The solution is to compute the positions first, and then use them for both the nodes and the labels.</p>
<pre><code>pos = nx.spring_layout(G)
nx.draw(G, pos = pos, node_size = 100)
# do stuff to pos if you want offsets
nx.draw_networkx_labels(G, pos = pos, font_size = 14)
</code></pre>
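<p>For example, a minimal sketch of the offset idea (the 0.05 vertical shift is an arbitrary value to tune for your layout, not from the original answer):</p>
<pre><code>pos = nx.spring_layout(G)
nx.draw(G, pos = pos, node_size = 100)
# build a shifted copy of the positions so each label sits just above its node
label_pos = {node: (x, y + 0.05) for node, (x, y) in pos.items()}
nx.draw_networkx_labels(G, pos = label_pos, font_size = 14)
plt.show()
</code></pre>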
| 1 | 2016-10-05T18:10:45Z | [
"python",
"nodes",
"networkx"
]
|
Efficient converting of numpy int array with shape (M, N, P) array to 2D object array with (N, P) shape | 39,876,136 | <p>From a 3D array with the shape (M, N, P) of data type <code>int</code>, I would like to get a 2D array of shape (N, P) of data type <code>object</code> and have this done with reasonable efficiency.</p>
<p>I'm happy with the objects being of either <code>tuple</code>, <code>list</code> or <code>numpy.ndarray</code> types.</p>
<p>I have a working hack of a solution where I have to go via a list. So it feels like I'm missing something:</p>
<pre><code>import numpy as np
m = np.mgrid[:8, :12]
l = zip(*(v.ravel() for v in m))
a2 = np.empty(m.shape[1:], dtype=np.object)
a2.ravel()[:] = l
</code></pre>
<p>The final array <code>a2</code>, in this example, should have the property that <code>a2[(x, y)] == (x, y)</code></p>
<p>It feels like it should have been possible to transpose <code>m</code> and make <code>a2</code> like this:</p>
<p><code>a2 = m.transpose(1,2,0).astype(np.object).reshape(m.shape[1:])</code></p>
<p>since numpy doesn't really care about what's inside the objects or alternatively when creating a numpy-array of type <code>np.object</code> be able to tell how many dimensions there should be:</p>
<pre><code>a2 = np.array(m.transpose(1,2,0), astype=object, ndim=2)
</code></pre>
<p>Numpy knows to stop before the final depth of nested iterables if they have different shape at the third dimension (in this example), but since <code>m</code> doesn't have irregularities, this seems impossible.</p>
<p>Or create <code>a2</code> and fill it with the transposed:</p>
<pre><code>a2 = np.empty(m.shape[1:], dtype=np.object)
a2[...] = m.transpose(1, 2, 0)
</code></pre>
<p>In this case e.g. <code>m.transpose(1, 2, 0)[2, 4]</code> is <code>np.array([2, 4])</code> and assigning it to <code>a2[2, 4]</code> would have been perfectly legal. However, none of these three more reasonable attempts work.</p>
| 3 | 2016-10-05T14:04:21Z | 39,879,187 | <p>So for a smaller <code>m</code>:</p>
<pre><code>In [513]: m = np.mgrid[:3,:4]
In [514]: m.shape
Out[514]: (2, 3, 4)
In [515]: m
Out[515]:
array([[[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 2, 2]],
[[0, 1, 2, 3],
[0, 1, 2, 3],
[0, 1, 2, 3]]])
In [516]: ll = list(zip(*(v.ravel() for v in m)))
In [517]: ll
Out[517]:
[(0, 0),
(0, 1),
(0, 2),
...
(2, 3)]
In [518]: a2=np.empty(m.shape[1:], dtype=object)
In [519]: a2.ravel()[:] = ll
In [520]: a2
Out[520]:
array([[(0, 0), (0, 1), (0, 2), (0, 3)],
[(1, 0), (1, 1), (1, 2), (1, 3)],
[(2, 0), (2, 1), (2, 2), (2, 3)]], dtype=object)
</code></pre>
<p>Making an empty of the right shape, and filling it via <code>[:]=</code> is the best way of controlling the <code>object</code> depth of such an array. <code>np.array(...)</code> defaults to the highest possible dimension, which in this case would 3d. </p>
<p>So the main question is - is there a better way of constructing that <code>ll</code> list of tuples. </p>
<pre><code> a2.ravel()[:] = np.array(ll)
</code></pre>
<p>does not work, complaining <code>(12,2) into shape (12)</code>.</p>
<p>Working backwards, if I start with an array like <code>ll</code>, turn it into a nested list, the assignment works, except elements of <code>a2</code> are lists, not tuples:</p>
<pre><code>In [533]: a2.ravel()[:] = np.array(ll).tolist()
In [534]: a2
Out[534]:
array([[[0, 0], [0, 1], [0, 2], [0, 3]],
[[1, 0], [1, 1], [1, 2], [1, 3]],
[[2, 0], [2, 1], [2, 2], [2, 3]]], dtype=object)
</code></pre>
<p><code>m</code> shape is (2,3,4) and <code>np.array(ll)</code> shape is (12,2), then <code>m.reshape(2,-1).T</code> produces the same thing.</p>
<pre><code>a2.ravel()[:] = m.reshape(2,-1).T.tolist()
</code></pre>
<p>I could have transposed first, and then reshaped, <code>m.transpose(1,2,0).reshape(-1,2)</code>.</p>
<p>To get tuples I need to pass the reshaped array through a comprehension:</p>
<pre><code>a2.ravel()[:] = [tuple(l) for l in m.reshape(2,-1).T]
</code></pre>
<p>===============</p>
<p><code>m.transpose(1,2,0).astype(object)</code> is still 3d; it has just replaced the integers with pointers to integers. There's a 'wall' between the array dimensions and the dtype. Things like reshape and transpose only operate on the dimensions, and don't penetrate that wall, or move it. Lists are pointers all the way down. Object arrays use pointers only at the <code>dtype</code> level.</p>
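<p>A quick check of that (a sketch in the same session style; the shape is the point):</p>
<pre><code>In [540]: m.transpose(1,2,0).astype(object).shape
Out[540]: (3, 4, 2)   # still 3d - the cast does not move the dimension/dtype wall
</code></pre>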
<p>Don't be afraid of the <code>a2.ravel()[:]=</code> expression. <code>ravel</code> is a cheap reshape, and assignment to a flatten version of an array may actually be faster than assignment to 2d version. After all, the data (in this case pointers) is stored in a flat data buffer.</p>
<p>But (after playing around a bit) I can do the assignment without the ravel or reshape (still need the <code>tolist</code> to move the <code>object</code> boundary). The list nesting has to match the <code>a2</code> shape down to 'object' level.</p>
<pre><code>a2[...] = m.transpose(1,2,0).tolist() # even a2[:] works
</code></pre>
<p>(This brings to mind a discussion about giving <code>np.array</code> a <code>maxdim</code> parameter - <a href="http://stackoverflow.com/questions/38774922/prevent-numpy-from-creating-a-multidimensional-array">Prevent numpy from creating a multidimensional array</a>).</p>
<p>The use of <code>tolist</code> seems like an inefficiency. But if the elements of <code>a2</code> are tuples (or rather pointers to tuples), those tuples have to be created some how. The <code>c</code> databuffer of the <code>m</code> cannot be viewed as a set of tuples. <code>tolist</code> (with the <code>[tuple...]</code> comprehension) might well be the most efficient way of creating such objects.</p>
<p>==============</p>
<p>Did I note that the transpose can be indexed, producing 2 element arrays with the right numbers?</p>
<pre><code>In [592]: m.transpose(1,2,0)[1,2]
Out[592]: array([1, 2])
In [593]: m.transpose(1,2,0)[0,1]
Out[593]: array([0, 1])
</code></pre>
<p>==================</p>
<p>Since the <code>tolist</code> for a structured array uses tuples, I could do:</p>
<pre><code>In [598]: a2[:]=m.transpose(1,2,0).copy().view('i,i').reshape(a2.shape).tolist()
In [599]: a2
Out[599]:
array([[(0, 0), (0, 1), (0, 2), (0, 3)],
[(1, 0), (1, 1), (1, 2), (1, 3)],
[(2, 0), (2, 1), (2, 2), (2, 3)]], dtype=object)
</code></pre>
<p>and thus avoid the list comprehension. It's not necessarily simpler or faster.</p>
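<p>Pulling the validated pieces together, a minimal helper (a sketch, not part of the original answer) that turns any (M, N, P) int array into an (N, P) object array of tuples:</p>
<pre><code>import numpy as np

def to_object_array(m):
    # m has shape (M, N, P); the result has shape (N, P) with tuple elements
    a2 = np.empty(m.shape[1:], dtype=object)
    a2.ravel()[:] = [tuple(v) for v in m.reshape(m.shape[0], -1).T]
    return a2

m = np.mgrid[:3, :4]
a2 = to_object_array(m)
assert a2[2, 3] == (2, 3)
</code></pre>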
| 1 | 2016-10-05T16:22:30Z | [
"python",
"numpy"
]
|
Add tabstop in python expression in UltiSnip | 39,876,144 | <p>I made a modification of the example given by UltiSnip doc:</p>
<pre><code>snippet "be(gin)?( (\S+))?" "begin{} / end{}" br
\begin{${1:`!p
snip.rv = match.group(3) if match.group(2) is not None else "something"`}}${2:`!p
if match.group(2) is not None and match.group(3) != "proof":
snip.rv = "\label{"+t[1]+":}"`}
${3:${VISUAL}}
\end{$1}$0
endsnippet
</code></pre>
<p>which, compared to the original one add a <code>\label{envname:}</code> and if the <code>envname</code> is <code>proof</code> then we didn't add it. This can be helpful when we write <code>thm</code> environment, e.g.</p>
<p><code>be lem<tab></code> will give </p>
<p><code>\begin{lem}\label{lem:}
<c-j>
\end{lem}
</code>
the only drawback is that, I don't know how to add a placeholder at the position <code>\label{lem:$4}</code>. Any idea?</p>
| 0 | 2016-10-05T14:04:40Z | 39,886,190 | <p>I get a work version, but not clean in code:</p>
<p><code>snippet "be(gin)?( (\S+))?" "begin{} / end{}" br
\begin{${1:`!p
snip.rv = match.group(3) if match.group(2) is not None else "something"`}}${2:`!p
if match.group(2) is not None and match.group(3) != "proof":
snip.rv = '\label{'+t[1]+':'`$4`!p
if match.group(2) is not None and match.group(3) != "proof":
snip.rv ='}'`}
${3:${VISUAL}}
\end{$1}$0
endsnippet
</code></p>
| 0 | 2016-10-06T01:39:39Z | [
"python",
"vim-plugin",
"ultisnips"
]
|
Update pandas dataframe with values from another dataframe | 39,876,316 | <p>Assume a dataframe where values in any of the columns can change. Given another dataframe which contains the old value, the new value and the column it belongs to, how do I update the first dataframe using this information about the changes?
For example:</p>
<pre><code>>>> my_df
x y z
0 1 2 5
1 2 3 9
2 8 7 2
3 3 4 7
4 6 7 7
</code></pre>
<p><code>my_df_2</code> contains information about changed values and their columns:</p>
<pre><code>>>> my_df_2
changed_col old_value new_value
0 x 2 10
1 z 9 20
2 x 1 12
3 y 4 23
</code></pre>
<p>How can I use the information in <code>my_df_2</code> to update <code>my_df</code> such that <code>my_df</code> becomes:</p>
<pre><code>>>> my_df
x y z
0 12 2 5
1 10 3 20
2 8 7 2
3 3 23 7
4 6 7 7
</code></pre>
| 0 | 2016-10-05T14:12:03Z | 39,876,622 | <p>You can create a dictionary for the changes as follows:</p>
<pre><code>d = {i: dict(zip(j['old_value'], j['new_value'])) for i, j in my_df_2.groupby('changed_col')}
d
Out: {'x': {1: 12, 2: 10}, 'y': {4: 23}, 'z': {9: 20}}
</code></pre>
<p>Then use it in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow">DataFrame.replace</a>:</p>
<pre><code>my_df.replace(d)
Out:
x y z
0 12 2 5
1 10 3 20
2 8 7 2
3 3 23 7
4 6 7 7
</code></pre>
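<p>Note that <code>replace</code> returns a new DataFrame rather than modifying in place, so assign the result back (a usage note, not from the original answer):</p>
<pre><code>my_df = my_df.replace(d)
</code></pre>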
| 1 | 2016-10-05T14:24:25Z | [
"python",
"pandas"
]
|
Update pandas dataframe with values from another dataframe | 39,876,316 | <p>Assume a dataframe where values in any of the columns can change. Given another dataframe which contains the old value, the new value and the column it belongs to, how do I update the first dataframe using this information about the changes?
For example:</p>
<pre><code>>>> my_df
x y z
0 1 2 5
1 2 3 9
2 8 7 2
3 3 4 7
4 6 7 7
</code></pre>
<p><code>my_df_2</code> contains information about changed values and their columns:</p>
<pre><code>>>> my_df_2
changed_col old_value new_value
0 x 2 10
1 z 9 20
2 x 1 12
3 y 4 23
</code></pre>
<p>How can I use the information in <code>my_df_2</code> to update <code>my_df</code> such that <code>my_df</code> becomes:</p>
<pre><code>>>> my_df
x y z
0 12 2 5
1 10 3 20
2 8 7 2
3 3 23 7
4 6 7 7
</code></pre>
| 0 | 2016-10-05T14:12:03Z | 39,876,649 | <p>You can use the update method. See <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.update.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.update.html</a></p>
<p>Example:</p>
<pre><code>old_df = pd.DataFrame({"a":np.arange(5), "b": np.arange(4,9)})
+----+-----+-----+
| | a | b |
|----+-----+-----|
| 0 | 0 | 4 |
| 1 | 1 | 5 |
| 2 | 2 | 6 |
| 3 | 3 | 7 |
| 4 | 4 | 8 |
+----+-----+-----+
new_df = pd.DataFrame({"a":np.arange(7,8), "b": np.arange(10,11)})
+----+-----+-----+
| | a | b |
|----+-----+-----|
| 0 | 7 | 10 |
+----+-----+-----+
old_df.update(new_df)
+----+-----+-----+
| | a | b |
|----+-----+-----|
| 0 | 7 | 10 | #Changed row
| 1 | 1 | 5 |
| 2 | 2 | 6 |
| 3 | 3 | 7 |
| 4 | 4 | 8 |
+----+-----+-----+
</code></pre>
| 0 | 2016-10-05T14:25:47Z | [
"python",
"pandas"
]
|
Make a Tv simulator - Python | 39,876,384 | <p>I am really new to Python and I have to make a tv simulator. I have searched basically the entire net but I cant really find my answer. My problem is that we have to save the state of the tv in a file and when we reopen the programm it must retrieve its previous state i.e. channel, show, volume. I use pickle for saving the file, but it does not retrieve the previous state. </p>
<p>Hopefully I have been adequatly specific otherwise feel free to ask me for clarification.</p>
<p>Thank you in advance </p>
<pre><code>class Television(object):
currentState = []
def __init__(self,volume,channel,show):
self.volume=volume
self.channel=channel
self.show=show
def changeChannel(self,choice):
self.channel=choice
return self.channel
def increaseVolume(self,volume):
if volume in range(0,9):
self.volume+=1
return self.volume
def decreaseVolume(self,volume):
if volume in range(1,10):
self.volume-=1
return self.volume
#def __str__(self):
#return "[channel: %s show: %s,volume: %s]" % (self.channel, self.show, self.volume)
def __str__(self):
return "Channel: "+str(self.channel)+\
"\nShow: "+ str(self.show)+\
"\nVolume: "+ str(self.volume)
#printing object, state = "print(object)"
def getState(self):
return self.volume
def setState (self,channel,show):
self.volume=5
self.channel=channel[1]
self.show=show[1]
#####################################################################
from tvsimulator import*
import pickle, os.path
#from builtins import int
channel = ["1. Mtv","2. Tv 3","2. Svt","4. Kanal4"]
show = ["Music is life", "Har du tur i karlek?", "Pengar ar inte allt","Vem vill inte bli miljonar"]
volume = 5
global myTv
livingRoomTv = Television(channel,show,volume)
kitchenTv = Television(channel,show,volume)
def saveFile():
with open("tvState.pkl",'wb') as outfile:
pickle.dump(livingRoomTv,outfile)
pickle.dump(kitchenTv,outfile)
def importFile():
with open("tvState.pkl",'rb') as infile:
livingRoomTv = pickle.load(infile)
kitchenTv = pickle.load(infile)
def channelMenu(myTv):
for i in channel:
print(i)
choice =int(input(print("Which channel do you want?")))
choice = channel[choice-1]
myTv.changeChannel(choice)
selection =myTv.changeChannel(choice)
return selection
def methodSelection(myTv):
print("1: Change channel")
print("2: Volume Up")
print("3: Volume Down")
print("4: Return to main menu")
choice = int(input(print("\nPleas make a selection from the above list\n")))
print(myTv)
try:
if choice ==1:
channelMenu(myTv)
print(myTv)
methodSelection(myTv)
if choice ==2:
myTv.increaseVolume(volume)
print(myTv)
methodSelection(myTv)
if choice ==3:
myTv.decreaseVolume(volume)
print(myTv)
methodSelection(myTv)
if choice ==4:
mainMenu()
except:
print("Wrong selection, please try again")
def mainMenu():
print("1: Livingroom Tv")
print("2: Kitchen TV")
print("3: Exit\n")
choice = int(input(print("Please make a selection from the above list")))
try:
if choice == 1:
print("Living room\n")
print(livingRoomTv)
myTv = livingRoomTv
methodSelection(myTv)
if choice == 2:
print("Kitchen\n")
print(kitchenTv)
myTv=kitchenTv
methodSelection(myTv)
if choice == 3:
saveFile()
print("Tv Shut Down")
exit
except:
print("Wrong selection, please try again")
def startUp():
if os.path.isfile("tvState.pkl"):
print("Tv restored to previous state")
importFile()
kitchenTv.getState()
livingRoomTv.getState()
mainMenu()
else:
print("Welcome")
kitchenTv.setState(channel, show)
livingRoomTv.setState(channel, show)
saveFile()
mainMenu()
startUp()
</code></pre>
| -2 | 2016-10-05T14:15:00Z | 39,876,600 | <p>Using global variables for reading is no problem, thus explaining that the <code>save</code> feature works, but assignments to globals are not automatic: You have to make your variables global or it will create local variables instead that will be lost as soon as they go out of scope (thus explaining why your state is not reloaded).</p>
<p>The global variables can be reached by adding <code>global v</code> where <code>v</code> is the variable to be seen as global in the function:</p>
<pre><code>def importFile():
global livingRoomTv # link to the global variable
global kitchenTv # link to the global variable
with open("tvState.pkl",'rb') as infile:
livingRoomTv = pickle.load(infile)
kitchenTv = pickle.load(infile)
</code></pre>
<p>Simple MCVE:</p>
<pre><code>z=[1,2,3]
def load():
#global z
z=set()
load()
print(z)
</code></pre>
<p>prints <code>[1,2,3]</code></p>
<p>Now uncommenting <code>global z</code>, it prints <code>set([])</code>, meaning I was successfully able to change <code>z</code> in the function. </p>
| 1 | 2016-10-05T14:23:25Z | [
"python",
"pickle"
]
|
Pandas : Getting unique rows for a given column but conditional on some criteria of other columns | 39,876,389 | <p>I'm using Python 2.7, with data given as follows:</p>
<pre><code>data = pd.DataFrame({'id':['001','001','001','002','002','003','003','003','004','005'],
'status':['ground','unknown','air','ground','unknown','ground','unknown','unknown','unknown','ground'],
'value':[10,-5,12,20,-12,2,-4,-1,0,6]})
</code></pre>
<p>The data looks like this:</p>
<pre><code>id status value
001 ground 10
001 unknown -5
001 air 12
002 ground 20
002 unknown -12
003 ground 2
003 unknown -4
003 unknown -1
004 unknown 0
005 ground 6
</code></pre>
<p>I would like to get an output dataframe with one row per unique id, subject to the following criteria: for a given id,</p>
<pre><code> 'status': If 'air' does exist, pick 'air'.
If 'air' does not exist, pick 'ground'.
If both 'air' and 'ground' do not exist, pick 'unknown'.
'value': Sum of values for each id
'count': Count the number of rows for each id
</code></pre>
<p>Therefore, the expected output is the following.</p>
<pre><code>id status value count
001 air 17 3
002 ground 8 2
003 ground -3 3
004 unknown 0 1
005 ground 6 1
</code></pre>
<p>I can loop over each unique id, but that is not elegant and is computationally expensive, especially when the data becomes large. May I know a better, more Pythonic and efficient way to come up with this output? Thank you in advance.</p>
| 0 | 2016-10-05T14:15:09Z | 39,877,620 | <p>One option would be changing the type of status column to category and sorting based on that in groupby.agg:</p>
<pre><code>df['status'] = df['status'].astype('category', categories=['air', 'ground', 'unknown'], ordered=True)
df.sort_values('status').groupby('id').agg({'status': 'first', 'value': ['sum', 'count']})
Out:
status value
first sum count
id
001 air 17 3
002 ground 8 2
003 ground -3 3
004 unknown 0 1
005 ground 6 1
</code></pre>
<p>Here, since the values are sorted in <code>'air'</code>, <code>'ground'</code> and <code>'unknown'</code> order, <code>'first'</code> returns the correct value. If you don't want to change the type, you can define your own function that returns <code>air</code>/<code>ground</code>/<code>unknown</code> and instead of <code>'first'</code> you can pass that function.</p>
| 2 | 2016-10-05T15:05:13Z | [
"python",
"pandas"
]
|
Pandas : Getting unique rows for a given column but conditional on some criteria of other columns | 39,876,389 | <p>I'm using Python 2.7, with data given as follows:</p>
<pre><code>data = pd.DataFrame({'id':['001','001','001','002','002','003','003','003','004','005'],
'status':['ground','unknown','air','ground','unknown','ground','unknown','unknown','unknown','ground'],
'value':[10,-5,12,20,-12,2,-4,-1,0,6]})
</code></pre>
<p>The data looks like this:</p>
<pre><code>id status value
001 ground 10
001 unknown -5
001 air 12
002 ground 20
002 unknown -12
003 ground 2
003 unknown -4
003 unknown -1
004 unknown 0
005 ground 6
</code></pre>
<p>I would like to get an output dataframe with one row per unique id, subject to the following criteria: for a given id,</p>
<pre><code> 'status': If 'air' does exist, pick 'air'.
If 'air' does not exist, pick 'ground'.
If both 'air' and 'ground' do not exist, pick 'unknown'.
'value': Sum of values for each id
'count': Count the number of rows for each id
</code></pre>
<p>Therefore, the expected output is the following.</p>
<pre><code>id status value count
001 air 17 3
002 ground 8 2
003 ground -3 3
004 unknown 0 1
005 ground 6 1
</code></pre>
<p>I can loop over each unique id, but that is not elegant and is computationally expensive, especially when the data becomes large. May I know a better, more Pythonic and efficient way to come up with this output? Thank you in advance.</p>
| 0 | 2016-10-05T14:15:09Z | 39,877,656 | <p>You want to use <code>groupby</code> on id. This is easy for value and count but trickier for the status. We need to write our own function which takes a pandas Series and returns a single attribute.</p>
<pre><code>def group_status(x):
if (x=='air').any():
y = 'air'
elif (x=='ground').any():
y = 'ground'
else:
y = 'unknown'
return y
data = data.groupby(by='id').agg({'value': ['sum', 'count'], 'status': [group_status]})
data.columns = ['status', 'value', 'count']
print(data)
status value count
id
001 air 17 3
002 ground 8 2
003 ground -3 3
004 unknown 0 1
005 ground 6 1
</code></pre>
<p>Here we have ensured that the air, ground, unknown order is preserved without the need to change the column type to categorical, as mentioned in ayhan's very elegant answer.</p>
<p>The <code>group_status()</code> function does lay the groundwork should you wish to incorporate more advanced groupby functionality.</p>
| 2 | 2016-10-05T15:06:44Z | [
"python",
"pandas"
]
|
Python Imports failing. Relative imports, package recognition, __init__.py , __package__, __all__ | 39,876,409 | <p>I've been reading a lot of threads and the PEP articles related to this problem (around 4 of them), but none of them gives a clear idea on some points, and I still can't do relative imports.</p>
<p>In fact, the contents of my main package is not listed at all.</p>
<p><strong>REEDITION. I modified all the post, it was too complex and questions were a lot.</strong></p>
<p>In <code>C:/test/</code> I have this package:</p>
<pre><code>Package/ (FOLDER 2)
__init__.py
moduleA.py
moduleB.py
</code></pre>
<ul>
<li><code>moduleA</code> imports <code>moduleB</code>, and viceversa.</li>
<li><code>__init__.py</code> is empty</li>
</ul>
<p>My process:</p>
<ol>
<li>I add <code>C:/test/</code> to <code>sys.path</code>.</li>
<li><code>import Package</code> (WORKS) </li>
<li><code>dir(Package)</code> <strong>Doesn't list any module inside Package.</strong></li>
<li>Package is: <code><module 'Package' from 'C:/test/Package/__init__.py'></code></li>
<li><code>__file__</code> is the init file under Package</li>
<li><code>__name__</code> is <code>Package</code></li>
<li><code>__package__</code> <strong>is an empty string</strong></li>
<li><code>__path__</code> is <code>C:/test/Package</code></li>
</ol>
<hr>
<p>TEST 1 - Version 1:
in <code>moduleA</code> I have <code>from Package import moduleB</code></p>
<p>I get this:</p>
<pre><code>>>> import Package.moduleA
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:/test\Package\moduleA.py", line
from Package import moduleB
File "C:/test\Package\moduleB.py", line
from Package import moduleA
ImportError: cannot import name moduleA
</code></pre>
<p><em>It doesn't work because <code>moduleA</code> is not part of <code>Package</code>. So <code>Package</code> is not recognized as a package?</em></p>
<hr>
<p>TEST 1 - Version 2:
in <code>moduleA</code> I have <code>from . import moduleB</code></p>
<p>Doesn't work, same error</p>
<hr>
<p>TEST 1 - Version 3:
in <code>moduleA</code> I have <code>import Package.moduleB</code></p>
<p>It works.</p>
<p>After that, running:</p>
<pre><code>>>> dir(Package.moduleB)
['Package', '__builtins__', '__doc__', '__file__', '__name__', '__package__']
>>> Package.moduleB.Package
<module 'Package' from 'C:/prueba\Package\__init__.py'>
>>> dir(Package)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'moduleA', 'moduleB']
</code></pre>
<p>So now, <code>Package.moduleB</code> has <code>Package</code> as a variable.
And surprisingly, <code>Package</code> now looks like a proper package, and contains both modules.</p>
<p>In fact, doing any of imports in Version 1 and Version 2 work now, because now <code>moduleA</code> and <code>moduleB</code> are part of <code>Package</code>.</p>
<hr>
<p>Questions:</p>
<p>1) Why is <code>Package</code> not recognized as a package? Is it? Shouldn't it contain all submodules? </p>
<p>2) Why does running <code>import Package.moduleA</code> generate <code>Package</code> inside <code>moduleA</code>?</p>
<p>3) Why does running <code>import Package.moduleA</code> add <code>moduleA</code> to <code>Package</code> when it wasn't there before?</p>
<p>4) Do an empty <code>__init__.py</code> file and a NON-empty <code>__init__.py</code> file affect this at all? </p>
<p>5) Would defining an <code>__all__</code> variable containing <code>['moduleA', 'moduleB']</code> do anything here?</p>
<p>6) How can I make the init file load both submodules? Should I do <code>import Package.moduleA</code> and <code>Package.moduleB</code> inside?... Can't I do it relatively, like <code>import .moduleA as moduleA</code> or something like that? (What if the name of <code>Package</code> changes?)</p>
<p>7) Does that empty string in the <code>__package__</code> variable matter? We should change its contents if we want it to recognize itself... <code>__package__</code> should be the same as <code>__name__</code>, or that's what the PEPs say. But doing this didn't work:</p>
<pre><code>if __name__ == "__main__" and __package__ is None:
__package__ = "Package"
</code></pre>
| 1 | 2016-10-05T14:16:05Z | 39,903,118 | <p>This is a circular dependency issue. It's very similar to this <a href="http://stackoverflow.com/questions/9252543/importerror-cannot-import-name-x/23836838#23836838">question</a>, but differs in that you are trying import a module from a package, rather than a class from a module. The best solution in this case is to rethink your code such that a circular dependency is not required. Move any common functions or classes out to a third module in the package.</p>
<p>Worrying about <code>__package__</code> is a red herring. The python import system will set this appropriately when your package becomes a proper package. </p>
<p>The problem is that <code>moduleA</code> and <code>moduleB</code> are only placed in <code>package</code> once they have been successfully imported. However, since both <code>moduleA</code> and <code>moduleB</code> are in the process of being imported they cannot see each other in <code>package</code>. The absolute import partially solves the problem as you bypass the relative import machinery. However, should your modules require parts of each other during initialisation then the program will fail.</p>
<p>The following code would work if the <code>var = ...</code> line was removed.</p>
<p>package.moduleA</p>
<pre><code>import package.moduleB
def func():
return 1
</code></pre>
<p>package.moduleB</p>
<pre><code>import package.moduleA
var = package.moduleA.func() # error, can't find moduleA
</code></pre>
<h3>Example of a <em>broken</em> package</h3>
<p>package.moduleA</p>
<pre><code>from . import moduleB
def depends_on_y():
return moduleB.y()
def x():
return "x"
</code></pre>
<p>package.moduleB</p>
<pre><code>from . import moduleA
def depends_on_x():
return moduleA.x()
def y():
return "y"
</code></pre>
<h3>Example of extracting common parts into separate module in package</h3>
<p>package.common</p>
<pre><code>def x():
return "x"
def y():
return "y"
</code></pre>
<p>package.moduleA</p>
<pre><code>from .common import y
def depends_on_y():
return y()
</code></pre>
<p>package.moduleB</p>
<pre><code>from .common import x
def depends_on_x():
return x()
</code></pre>
| 1 | 2016-10-06T18:23:06Z | [
"python",
"import",
"python-2.x"
]
|
Python Imports failing. Relative imports, package recognition, __init__.py , __package__, __all__ | 39,876,409 | <p>I've been reading a lot of threads and the PEP articles related to this problem (around 4 of them), but none of them gives a clear idea on some points, and I still can't do relative imports.</p>
<p>In fact, the contents of my main package is not listed at all.</p>
<p><strong>REEDITION. I modified all the post, it was too complex and questions were a lot.</strong></p>
<p>In <code>C:/test/</code> I have this package:</p>
<pre><code>Package/ (FOLDER 2)
__init__.py
moduleA.py
moduleB.py
</code></pre>
<ul>
<li><code>moduleA</code> imports <code>moduleB</code>, and viceversa.</li>
<li><code>__init__.py</code> is empty</li>
</ul>
<p>My process:</p>
<ol>
<li>I add <code>C:/test/</code> to <code>sys.path</code>.</li>
<li><code>import Package</code> (WORKS) </li>
<li><code>dir(Package)</code> <strong>Doesn't list any module inside Package.</strong></li>
<li>Package is: <code><module 'Package' from 'C:/test/Package/__init__.py'></code></li>
<li><code>__file__</code> is the init file under Package</li>
<li><code>__name__</code> is <code>Package</code></li>
<li><code>__package__</code> <strong>is an empty string</strong></li>
<li><code>__path__</code> is <code>C:/test/Package</code></li>
</ol>
<hr>
<p>TEST 1 - Version 1:
in <code>moduleA</code> I have <code>from Package import moduleB</code></p>
<p>I get this:</p>
<pre><code>>>> import Package.moduleA
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:/test\Package\moduleA.py", line
from Package import moduleB
File "C:/test\Package\moduleB.py", line
from Package import moduleA
ImportError: cannot import name moduleA
</code></pre>
<p><em>It doesn't work because <code>moduleA</code> is not part of <code>Package</code>. So <code>Package</code> is not recognized as a package?</em></p>
<hr>
<p>TEST 1 - Version 2:
in <code>moduleA</code> I have <code>from . import moduleB</code></p>
<p>Doesn't work, same error</p>
<hr>
<p>TEST 1 - Version 3:
in <code>moduleA</code> I have <code>import Package.moduleB</code></p>
<p>It works.</p>
<p>After that, running:</p>
<pre><code>>>> dir(Package.moduleB)
['Package', '__builtins__', '__doc__', '__file__', '__name__', '__package__']
>>> Package.moduleB.Package
<module 'Package' from 'C:/prueba\Package\__init__.py'>
>>> dir(Package)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'moduleA', 'moduleB']
</code></pre>
<p>So now, <code>Package.moduleB</code> has <code>Package</code> as a variable.
And surprisingly, <code>Package</code> now looks like a proper package, and contains both modules.</p>
<p>In fact, doing any of imports in Version 1 and Version 2 work now, because now <code>moduleA</code> and <code>moduleB</code> are part of <code>Package</code>.</p>
<hr>
<p>Questions:</p>
<p>1) Why is <code>Package</code> not recognized as a package? Is it? Shouldn't it contain all submodules? </p>
<p>2) Why does running <code>import Package.moduleA</code> generate <code>Package</code> inside <code>moduleA</code>?</p>
<p>3) Why does running <code>import Package.moduleA</code> add <code>moduleA</code> to <code>Package</code> when it wasn't there before?</p>
<p>4) Do an empty <code>__init__.py</code> file and a NON-empty <code>__init__.py</code> file affect this at all? </p>
<p>5) Would defining an <code>__all__</code> variable containing <code>['moduleA', 'moduleB']</code> do anything here?</p>
<p>6) How can I make the init file load both submodules? Should I do <code>import Package.moduleA</code> and <code>Package.moduleB</code> inside?... Can't I do it relatively, like <code>import .moduleA as moduleA</code> or something like that? (What if the name of <code>Package</code> changes?)</p>
<p>7) Does that empty string in the <code>__package__</code> variable matter? We should change its contents if we want it to recognize itself... <code>__package__</code> should be the same as <code>__name__</code>, or that's what the PEPs say. But doing this didn't work:</p>
<pre><code>if __name__ == "__main__" and __package__ is None:
__package__ = "Package"
</code></pre>
| 1 | 2016-10-05T14:16:05Z | 39,904,382 | <p>This is a Python bug that exists in Python versions prior to 3.5. See <a href="http://bugs.python.org/issue992389" rel="nofollow">issue 992389</a> where it was discussed (for many years) and <a href="http://bugs.python.org/issue17636" rel="nofollow">issue 17636</a> where the common cases of the issue were fixed.</p>
<p>After the fix in Python 3.5, an explicit relative import like <code>from . import moduleA</code> from within a module within the package <code>Package</code> will check in <code>sys.modules</code> for <code>Package.moduleA</code> before giving up if <code>moduleA</code> isn't present yet in <code>Package</code>. Since the module object is added to <code>sys.modules</code> before it starts loading, but is not added to <code>Package.__dict__</code> until after the loading is complete, this usually fixes the problem.</p>
<p>There can still be a problem with circular imports using <code>from package import *</code>, but in <a href="http://bugs.python.org/issue23447" rel="nofollow">issue 23447</a> (which I contributed a patch for), it was decided that fixing that even more obscure corner case was not worth the extra code complexity.</p>
<p>Circular imports are usually a sign of bad design. You should probably either refactor the bits of the code that depend on each other out into a single utility module that both of the other modules import, or you should combine both of the two separate modules into a single one.</p>
| 0 | 2016-10-06T19:40:38Z | [
"python",
"import",
"python-2.x"
]
|
Retrieving data from Pandas based on iloc and user input | 39,876,415 | <p>I have a small SQLite DB with data about electrical conductors. The program prints the list of names and Pandas ID, accepts user input of that ID, and then prints all the information about the selected conductor.</p>
<p>I am trying to figure out how to then select a specific item from a specified column - later, I'll allow input, but for now, I'm just manually specifying in the print() to simplify troubleshooting for myself.</p>
<p>The program works up until the combined conditional print line. I suspect casting the string to the int is making this more difficult than it needs to be; if there's a better way to select an entry (for instance, just matching the name given to the name in the db), I'm open to that.</p>
<pre><code>with lite.connect(db_path) as db:
df = pd.read_sql_query('SELECT * FROM cond', conn)
print('What conductor are you analyzing?')
try:
print(pd.read_sql_query('SELECT name FROM cond', conn)) # List names and Pandas IDs
getCond = int(input('\nEnter the ID #: ')) # cast string to int to allow .iloc
printSel = df.iloc[getCond]
print()
print(printSel)
print(df[(df['Amps']) & (df.iloc == getCond)])
finally:
if conn:
conn.close()
</code></pre>
<p>EDIT: The error thrown after selecting an item is </p>
<p>"TypeError: cannot compare a dtyped [object] array with a scalar of type [bool]"</p>
<p>I'm at a loss, because I think it's saying the & operator is being compared to something, rather than using it as print whatever meets "This AND That."</p>
| 0 | 2016-10-05T14:16:22Z | 39,876,675 | <p>If I understand correctly, the 'getCond' will be the index of the row you want to select, and 'Amps' is the column. I think you can just do this to return the 'getCond' row of the 'Amps' column.</p>
<p><code>df.loc[getCond, 'Amps']</code></p>
| 1 | 2016-10-05T14:26:33Z | [
"python",
"sqlite",
"pandas"
]
|
MultiIndex Slicing requires the index to be fully lexsorted | 39,876,416 | <p>I have a data frame with index (<code>year</code>, <code>foo</code>), where I would like to select the X largest observations of <code>foo</code> where <code>year == someYear</code>.</p>
<p>My approach was </p>
<pre><code>df.sort_index(level=[0, 1], ascending=[1, 0], inplace=True)
df.loc[pd.IndexSlice[2002, :10], :]
</code></pre>
<p>but I get</p>
<pre><code>KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (2), lexsort depth (0)'
</code></pre>
<p>I tried different variants of sorting (e.g. <code>ascending = [0, 0]</code>), but they all resulted in some sort of error.</p>
<p>If I only wanted the <code>xth</code> row, I could <code>df.groupby(level=[0]).nth(x)</code> after sorting, but since I want a set of rows, that doesn't feel quite efficient.</p>
<p>What's the best way to select these rows? Some data to play with:</p>
<pre><code> rank_int rank
year foo
2015 1.381845 2 320
1.234795 2 259
1.148488 199 2
0.866704 2 363
0.738022 2 319
</code></pre>
| 1 | 2016-10-05T14:16:22Z | 39,877,051 | <p><code>ascending</code> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow">should be a boolean, not a list</a>. Try sorting this way:</p>
<p><code>df.sort_index(ascending=True, inplace=True)</code></p>
| 0 | 2016-10-05T14:41:57Z | [
"python",
"pandas"
]
|
MultiIndex Slicing requires the index to be fully lexsorted | 39,876,416 | <p>I have a data frame with index (<code>year</code>, <code>foo</code>), where I would like to select the X largest observations of <code>foo</code> where <code>year == someYear</code>.</p>
<p>My approach was </p>
<pre><code>df.sort_index(level=[0, 1], ascending=[1, 0], inplace=True)
df.loc[pd.IndexSlice[2002, :10], :]
</code></pre>
<p>but I get</p>
<pre><code>KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (2), lexsort depth (0)'
</code></pre>
<p>I tried different variants of sorting (e.g. <code>ascending = [0, 0]</code>), but they all resulted in some sort of error.</p>
<p>If I only wanted the <code>xth</code> row, I could <code>df.groupby(level=[0]).nth(x)</code> after sorting, but since I want a set of rows, that doesn't feel quite efficient.</p>
<p>What's the best way to select these rows? Some data to play with:</p>
<pre><code> rank_int rank
year foo
2015 1.381845 2 320
1.234795 2 259
1.148488 199 2
0.866704 2 363
0.738022 2 319
</code></pre>
| 1 | 2016-10-05T14:16:22Z | 39,877,111 | <p>To get the <code>xth</code> observations of the second level as wanted, one can combine <code>loc</code> with <code>iloc</code>:</p>
<pre><code>df.sort_index(level=[0, 1], ascending=[1, 0], inplace=True)
df.loc[2015].iloc[:10]
</code></pre>
<p>works as expected. This does not answer the weird index locking w.r.t. lexsorting, however.</p>
| 0 | 2016-10-05T14:44:25Z | [
"python",
"pandas"
]
|
Can someone help me installing pyHook? | 39,876,454 | <p>I have Python 3.5 and I can't install pyHook. I tried every method possible: pip, opening the cmd directly from the folder, downloading almost all the pyHook versions. Still can't install it.
I get this error:
Could not find a version that satisfies the requirement pyHook.
I have Windows 10, 64 bit.
Can someone help? Thanks!</p>
| -3 | 2016-10-05T14:17:50Z | 40,008,811 | <p>This is how I did it...</p>
<ol>
<li>Download the pyHook module that matches your version of Python from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyhook" rel="nofollow" title="Here">here</a>. Make sure that if you have 32-bit Python you download the 32-bit module (even if you have 64-bit Windows), and vice versa.</li>
<li>Open your command prompt and navigate to the folder where you downloaded the module</li>
<li><p>Type <code>pip install</code> and then the name of the file.</p>
<p>Ex: <code>pip install pyHook-1.5.1-cp27-none-win32.whl</code></p></li>
</ol>
<p>Note : you need pip</p>
<p><strong>If this does not work please say where and when you got stuck</strong></p>
| 0 | 2016-10-12T21:39:38Z | [
"python",
"python-3.x",
"pyhook"
]
|
Can someone help me installing pyHook? | 39,876,454 | <p>I have Python 3.5 and I can't install pyHook. I tried every method possible: pip, opening the cmd directly from the folder, downloading almost all the pyHook versions. Still can't install it.
I get this error:
Could not find a version that satisfies the requirement pyHook.
I have Windows 10, 64 bit.
Can someone help? Thanks!</p>
| -3 | 2016-10-05T14:17:50Z | 40,068,100 | <p>I am really sorry for the dumb question! I installed it now. All i needed to do is to navigate to the folder with the pyHook in it, in CMD. I didn't know how to do it at first, so I jumped this step over, with the tought that it doesn't mean anything. Sorry again and thank you so much for all your responses ! </p>
| 0 | 2016-10-16T07:47:38Z | [
"python",
"python-3.x",
"pyhook"
]
|
Name error in Python in 'self' definition | 39,876,485 | <p>Getting a <code>NameError</code> in the Python code shared below. I am a beginner in Python.</p>
<pre><code>from PyQt4 import QtGui, QtCore
import sys
import I2CGenericFrame
import os
class I2CApp(QtGui.QMainWindow, I2CGenericFrame.Ui_winI2CMain):
def __init__(self):
super(self.__class__, self).__init__()
self.setupUi(self)
self.cmbBusListBox.addItem("Slect ..")
</code></pre>
<p>It is giving the following error.</p>
<pre><code>Traceback (most recent call last):
File "I2cMain.py", line 6, in <module>
class I2CApp(QtGui.QMainWindow, I2CGenericFrame.Ui_winI2CMain):
File "I2cMain.py", line 12, in I2CApp
self.cmbBusListBox.addItem("Slect ..")
NameError: name 'self' is not defined
</code></pre>
<p>Guide me, Thanks in advance.</p>
| -1 | 2016-10-05T14:18:54Z | 39,876,545 | <p><code>self</code> is an argument like any other that is passed in a method. It has no meaning in the class body outside of a method.</p>
<p>You can resolve this by fixing the indentation so that it is referenced inside the <code>__init__</code> method:</p>
<pre><code>class I2CApp(QtGui.QMainWindow, I2CGenericFrame.Ui_winI2CMain):
def __init__(self):
super(I2CApp, self).__init__()
self.setupUi(self)
self.cmbBusListBox.addItem("Slect ..")
</code></pre>
<p>EDIT: As indicated in the comment, you should also call <code>super</code> with the class explicitly, to avoid the possibility of infinite recursion.</p>
| 1 | 2016-10-05T14:21:18Z | [
"python",
"linux"
]
|
Name error in Python in 'self' definition | 39,876,485 | <p>Getting a <code>NameError</code> in the Python code shared below. I am a beginner in Python.</p>
<pre><code>from PyQt4 import QtGui, QtCore
import sys
import I2CGenericFrame
import os
class I2CApp(QtGui.QMainWindow, I2CGenericFrame.Ui_winI2CMain):
def __init__(self):
super(self.__class__, self).__init__()
self.setupUi(self)
self.cmbBusListBox.addItem("Slect ..")
</code></pre>
<p>It is giving the following error.</p>
<pre><code>Traceback (most recent call last):
File "I2cMain.py", line 6, in <module>
class I2CApp(QtGui.QMainWindow, I2CGenericFrame.Ui_winI2CMain):
File "I2cMain.py", line 12, in I2CApp
self.cmbBusListBox.addItem("Slect ..")
NameError: name 'self' is not defined
</code></pre>
<p>Guide me, Thanks in advance.</p>
| -1 | 2016-10-05T14:18:54Z | 39,876,587 | <p>the variable <code>self</code> is passed to methods automatically when you call them from an object. Your line of code <code>self.cmbBusListBox.addItem("Slect ..")</code> is not contained within a method of the Class and therefore it does not know what <code>self</code> is. This is probably an indentation error. Did you intend for it to be under <code>__init__</code>?</p>
| 0 | 2016-10-05T14:22:40Z | [
"python",
"linux"
]
|
Would a valid request URL ever have a '.html' extension in web2py? | 39,876,488 | <p>The <a href="http://web2py.com/books/default/chapter/29/04/the-core" rel="nofollow">web2py book's "Core" chapter</a> says:</p>
<blockquote>
<p>web2py maps GET/POST requests of the form:</p>
</blockquote>
<pre><code>http://127.0.0.1:8000/a/c/f.html/x/y/z?p=1&q=2
</code></pre>
<blockquote>
<p>to function f in controller "c.py" in application a, </p>
</blockquote>
<p>However I would imagine that the truly valid URL would not have <code>.html</code> in it. In fact, further down on the same page we read:</p>
<blockquote>
<p>URLs are only allowed to contain alphanumeric characters, underscores, and slashes; the args may contain non-consecutive dots. Spaces are replaced by underscores before validation.</p>
</blockquote>
<p>And it is clear that <code>.html</code> is not part of <code>args</code> yet it has a dot in it. Therefore the example URL contradicts the valid URL as documented further down the page.</p>
| 1 | 2016-10-05T14:18:59Z | 39,878,783 | <p>The part after the dot is used by web2py to render the proper view. Both <code>a/c/f.html</code> and <code>a/c/f.json</code> calls the same function (<code>f</code> inside the <code>c.py</code> controller), but the former will render <code>views/c/f.html</code> while the later <code>views/c/f.json</code> (if present, otherwise it will render <code>views/generic.json</code> in localhost or raise 404 in production).</p>
<p>Note that the extension can be omitted, and the default will be <code>.html</code>. Also, you can set <code>response.view</code> inside your controller to change the default behavior.</p>
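<p>For instance, a minimal sketch of that override (the function and view names are illustrative, not from the original answer):</p>
<pre><code># in controllers/c.py
def f():
    if request.extension == 'json':
        response.view = 'c/f_alt.json'  # hypothetical view file
    return dict(message='hello')
</code></pre>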
<p>So yes, a valid URL might have an extension.</p>
<p>Hope it helps!</p>
| 2 | 2016-10-05T16:00:48Z | [
"python",
"validation",
"web2py"
]
|
How do I print out the right ID with my output | 39,876,504 | <p>I have a small SQLite DB with data about electrical conductors. The program prints the list of names and Pandas ID, accepts user input of that ID, and then prints all the information about the selected conductor.</p>
<p>My code:</p>
<pre><code>train = train.set_index('activity_id')
test = test.set_index('activity_id')
y_train = train['outcome']
x_train = train.drop('people_id', axis=1)
x_test = test
model = DecisionTreeClassifier(min_samples_leaf=100)
model.fit(x_train,y_train)
scores = cross_val_score(model, x_train,y_train, cv=10)
print('mean: {:.3f} (std: {:.3f})'.format(scores.mean(), scores.std()), end='\n\n')
print(model.score(x_train,y_train))
#make predictions
y_pred = model.predict(x_test)
</code></pre>
<p>Any thoughts on how i can get them to print out with the right activity_id list? Thanks!</p>
| 0 | 2016-10-05T14:19:49Z | 39,878,895 | <p>From what you have written I believe you are trying to show your index for x_test next to the y_pred values generated by x_test.</p>
<p>This can be done by turning the numpy array output from <code>model.predict(x_test)</code> into a DataFrame. Then we can set the index of the new DataFrame to be the same as that of <code>x_test</code>.</p>
<p>Here is an example,</p>
<pre><code>df_pred = pd.DataFrame(y_pred, index=x_test.index, columns=['y_pred'])
</code></pre>
| 1 | 2016-10-05T16:06:44Z | [
"python",
"pandas",
"scikit-learn"
]
|
How to merge two dataframes in Spark Hadoop without common key? | 39,876,536 | <p>I am trying to join two dataframes.</p>
<p>data: DataFrame[_1: bigint, _2: vector]</p>
<p>cluster: DataFrame[cluster: bigint]</p>
<pre><code>result = data.join(broadcast(cluster))
</code></pre>
<p>The strange thing is that all the executors are failing on the joining step.</p>
<p>I have no idea what I could do.</p>
<p>The data file is 2.8 GB on HDFS and the cluster data only 5 MB.
The files are read using Parquet.</p>
| -2 | 2016-10-05T14:21:04Z | 39,959,105 | <p>What does work is this:</p>
<pre><code>from pyspark.sql.functions import monotonicallyIncreasingId

data = sqlContext.read.parquet(data_path)
data = data.withColumn("id", monotonicallyIncreasingId())
cluster = sqlContext.read.parquet(cluster_path)
cluster = cluster.withColumn("id", monotonicallyIncreasingId())
result = data.join(cluster, on="id")
</code></pre>
<p>Adding the cluster DataFrame directly to the data DataFrame with:</p>
<pre><code>data.withColumn("cluster", cluster.cluster)
</code></pre>
<p>Does not work.</p>
<pre><code>data.join(cluster)
</code></pre>
<p>Also does not work, executors are failing whilst having enough memory.</p>
<p>No idea why it wasn't working...</p>
| 0 | 2016-10-10T13:10:59Z | [
"python",
"hadoop",
"apache-spark",
"spark-dataframe",
"parquet"
]
|
Read and Write separately in python | 39,876,608 | <p>I'm trying to create a very simple program in python that needs to read input from the user and write output accordingly. I need an output similar to this:</p>
<pre><code>$./program.py
say something: Hello World
result: hello world
</code></pre>
<p>The thing is that I need to read input indefinitely; each time the user inputs data, I would like the printed data not to obstruct the input prompt. It would be even better if no newlines were printed, keeping the output as above: a line for reading and another for writing.</p>
<p>I tried using curses but I don't want the whole screen to be used, just the two lines.</p>
| 3 | 2016-10-05T14:23:51Z | 39,876,686 | <p>You can do veeeery simple trick:</p>
<pre><code>from os import system
while True:
system('clear') # or 'cls' if you are running windows
user_input = input('say something:')
print('result: ' + user_input)
input()
</code></pre>
| 2 | 2016-10-05T14:27:02Z | [
"python",
"terminal"
]
|
Read and Write separately in python | 39,876,608 | <p>I'm trying to create a very simple program in python that needs to read input from the user and write output accordingly. I need an output similar to this:</p>
<pre><code>$./program.py
say something: Hello World
result: hello world
</code></pre>
<p>The thing is that I need to read input indefinitely; each time the user inputs data, I would like the printed data not to obstruct the input prompt. It would be even better if no newlines were printed, keeping the output as above: a line for reading and another for writing.</p>
<p>I tried using curses but I don't want the whole screen to be used, just the two lines.</p>
| 3 | 2016-10-05T14:23:51Z | 39,917,409 | <p>I believe this is what you want:</p>
<pre><code>import colorama
colorama.init()  # enables ANSI escape code handling on Windows
no = 0
while True:
    # '\033[2A' moves the cursor up two lines (skipped on the first pass),
    # '\033[K' clears the line before writing the prompt
    user_input = str(raw_input('\033[2A'*no + '\033[KSay something: '))
    print '\033[KResult: ' + user_input
    no = 1
</code></pre>
<p>This how it looks after entering the string:</p>
<p><a href="http://i.stack.imgur.com/2sk8G.png" rel="nofollow"><img src="http://i.stack.imgur.com/2sk8G.png" alt="Working solution"></a></p>
<p>This implementation works on windows, however, if you use Linux, if I am not mistaken these are not necessary:</p>
<pre><code>import colorama
colorama.init()
</code></pre>
<p>EDIT: Modified my code a bit so it does not overwrite the text that was printed before the execution of the code. Also added an image of working implementation.</p>
| 2 | 2016-10-07T12:32:47Z | [
"python",
"terminal"
]
|
How to use Python with beautifulsoup to merge siblings | 39,876,623 | <p>How do I merge siblings together and display the output beside each other?</p>
<p><strong>EX.</strong></p>
<pre><code>dat="""
<div class="col-md-1">
<table class="table table-hover">
<tr>
<th>Name:</th>
<td><strong>John</strong></td>
</tr>
<tr>
<th>Last Name:</th>
<td>Doe</td>
</tr>
<tr>
<th>Email:</th>
<td>jd@mail.com</td>
</tr>
</table>
</div>
"""
soup = BeautifulSoup(dat, 'html.parser')
for buf in soup.find_all(class_="table"):
ope = buf.get_text("\n", strip=True)
print ope
</code></pre>
<p><strong>When run it produces:</strong></p>
<pre><code>Name:
John
Last Name:
Doe
Email:
jd@mail.com
</code></pre>
<p><strong>What I need:</strong></p>
<pre><code>Name: John
Last Name: Doe
Email: jd@mail.com
</code></pre>
<p>Can it be done in a list and every new "tr" tag put a new line?</p>
<p><strong>EDIT:</strong>
<a href="http://stackoverflow.com/users/771848/alecxe">alecxe</a> answer worked but strangely after the output I would get "ValueError: need more than 1 value to unpack" To fix that just put a try:except block.</p>
<pre><code>soup = BeautifulSoup(dat, 'html.parser')
for row in soup.select(".table tr"):
try:
(label, value) = row.find_all(["th", "td"])
print(label.get_text() + " " + value.get_text())
except ValueError:
continue
</code></pre>
| 2 | 2016-10-05T14:24:26Z | 39,876,791 | <p>Why don't process the table <em>row by row</em>:</p>
<pre><code>soup = BeautifulSoup(dat, 'html.parser')
for row in soup.select(".table tr"):
label, value = row.find_all(["th", "td"])
print(label.get_text() + " " + value.get_text())
</code></pre>
<p>Prints:</p>
<pre><code>Name: John
Last Name: Doe
Email: jd@mail.com
</code></pre>
| 1 | 2016-10-05T14:31:41Z | [
"python",
"html"
]
|
How to use Python with beautifulsoup to merge siblings | 39,876,623 | <p>How do I merge siblings together and display the output one beside each other. </p>
<p><strong>EX.</strong></p>
<pre><code>dat="""
<div class="col-md-1">
<table class="table table-hover">
<tr>
<th>Name:</th>
<td><strong>John</strong></td>
</tr>
<tr>
<th>Last Name:</th>
<td>Doe</td>
</tr>
<tr>
<th>Email:</th>
<td>jd@mail.com</td>
</tr>
</table>
</div>
"""
soup = BeautifulSoup(dat, 'html.parser')
for buf in soup.find_all(class_="table"):
ope = buf.get_text("\n", strip=True)
print ope
</code></pre>
<p><strong>When run it produces:</strong></p>
<pre><code>Name:
John
Last Name:
Doe
Email:
jd@mail.com
</code></pre>
<p><strong>What I need:</strong></p>
<pre><code>Name: John
Last Name: Doe
Email: jd@mail.com
</code></pre>
<p>Can it be done in a list, with every new "tr" tag starting a new line?</p>
<p><strong>EDIT:</strong>
<a href="http://stackoverflow.com/users/771848/alecxe">alecxe</a>'s answer worked, but strangely after the output I would get "ValueError: need more than 1 value to unpack". To fix that, just wrap it in a try/except block.</p>
<pre><code>soup = BeautifulSoup(dat, 'html.parser')
for row in soup.select(".table tr"):
try:
(label, value) = row.find_all(["th", "td"])
print(label.get_text() + " " + value.get_text())
except ValueError:
continue
</code></pre>
| 2 | 2016-10-05T14:24:26Z | 39,877,171 | <p>Use this</p>
<pre><code>soup = BeautifulSoup(dat, 'html.parser')
table = soup.find_all('table', attrs={'class': ['table', 'table-hover']})
for buf in table:
for row in buf.find_all('tr'):
print(row.th.string, row.td.string)
</code></pre>
<p>Output</p>
<pre><code>Name: John
Last Name: Doe
Email: jd@mail.com
</code></pre>
| 0 | 2016-10-05T14:46:36Z | [
"python",
"html"
]
|
Python windows service to automatically grant folder permissions creates duplicate ACEs | 39,876,729 | <p>I wrote a windows service in Python that scans a given directory for new folders. Whenever a new folder is created, the service creates 4 sub-folders and grants each one a different set of permissions. The problem is that within those subfolders, any folders created (essentially tertiary level, or sub-sub-folders)
have the following error when accessing the permissions (through right-click-> properties->security): </p>
<p><strong>"The permissions on test folder are incorrectly ordered, which may cause some entries to be ineffective"</strong> </p>
<p>To reiterate, we have folder A which is scanned. When I create folder B in folder A, folders 1,2,3,4 are created within B, with permissions provided by the script. Any folders created within (1,2,3,4) have the above error when opening up the directory permissions. Furthermore, the security entries for SYSTEM, Administrators and Authenticated Users appear twice when clicking on advanced.</p>
<p>The relevant portion of code is: </p>
<pre><code>import win32security
import ntsecuritycon
for rw_user in rw:
sd=win32security.GetFileSecurity(in_dir+"\\"+dir_,win32security.DACL_SECURITY_INFORMATION)
dacl=sd.GetSecurityDescriptorDacl()
    dacl.AddAccessAllowedAceEx(win32security.ACL_REVISION_DS, ntsecuritycon.OBJECT_INHERIT_ACE | ntsecuritycon.CONTAINER_INHERIT_ACE, ntsecuritycon.FILE_GENERIC_READ | ntsecuritycon.FILE_ADD_FILE, p_dict[rw_user][0])
sd.SetSecurityDescriptorDacl(1,dacl,0)
win32security.SetFileSecurity(in_dir+"\\"+dir_,win32security.DACL_SECURITY_INFORMATION,sd)
</code></pre>
<p>This is based on the example found in <a href="http://stackoverflow.com/questions/12168110/setting-folder-permissions-in-windows-using-python">Setting folder permissions in Windows using Python</a></p>
<p>Any help is greatly appreciated.</p>
<p>***EDITED TO ADD:</p>
<p>This is the output of icacls.exe on the folder created by the service:</p>
<pre><code>PS C:\> icacls "C:\directory monitor\main\center\test\request"
C:\directory monitor\main\center\test\request PNIM\jmtzlilmi:(OI)(CI)(R,WD)
PNIM\jmtzlilmi:(OI)(CI)(W,Rc)
PNIM\jmtzlilmi:(OI)(CI)(R,WD)
PNIM\jmtzlilmi:(OI)(CI)(W,Rc)
BUILTIN\Administrators:(I)(F)
BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
NT AUTHORITY\SYSTEM:(I)(F)
NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F)
BUILTIN\Users:(I)(OI)(CI)(RX)
NT AUTHORITY\Authenticated Users:(I)(M)
NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M)
</code></pre>
<p>This is the output of icacls on the directory that I created within the automatically created folder, the one that has duplicate entries:</p>
<pre><code>PS C:\> icacls "C:\directory monitor\main\center\test\request\test folder"
C:\directory monitor\main\center\test\request\test folder PNIM\jmtzlilmi:(OI)(CI)(R,WD)
PNIM\jmtzlilmi:(OI)(CI)(W,Rc)
PNIM\jmtzlilmi:(OI)(CI)(R,WD)
PNIM\jmtzlilmi:(OI)(CI)(W,Rc)
BUILTIN\Administrators:(F)
BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
NT AUTHORITY\SYSTEM:(F)
NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F)
BUILTIN\Users:(OI)(CI)(RX)
NT AUTHORITY\Authenticated Users:(M)
NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M)
</code></pre>
<p>The folder being monitored by the service is called center, the folder I created within is called test. The service then creates "request" within test, and I created "test folder" within request (yes, I'm brilliant at naming folders, I know. It's a bit more coherent in production.)</p>
<p>EDITED AGAIN:</p>
<p>Copied the wrong bit of code. I used AddAccessAllowedAceEx and NOT AddAccessAllowedAce. Many apologies...</p>
| 3 | 2016-10-05T14:28:53Z | 40,023,796 | <p>So the problem here is in the win32security.SetFileSecurity() function. As per MSDN, this function is obsolete (see: <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa379577(v=vs.85).aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/windows/desktop/aa379577(v=vs.85).aspx</a>) and has been replaced by SetNamedSecurityInfo. I switched, and everything appears to work well. Thanks anyway! </p>
| 0 | 2016-10-13T14:26:58Z | [
"python",
"windows",
"file-permissions",
"pywin32"
]
|
Where to keep SASS files in a Django project? | 39,876,761 | <p>I understand that static files (such as CSS, JS, images) in a Django project are ideally to be kept in a <code>static/</code> directory, whether inside an app or the project root.</p>
<p>A sample folder structure may look like</p>
<p><code>project_root/my_app/static/my_app/css</code>, <code>js</code> or <code>img</code>, or
<code>project_root/static/project_name/css</code>, <code>js</code> or <code>img</code></p>
<p>Also, I will run <code>collectstatic</code> command to make them ready to be served.</p>
<p>But my question is where should I keep my SASS files? If I create a sass directory inside <code>static/my_app/</code> along with <code>css</code>, <code>js</code> and <code>img</code> directories, won't they become available to public when I make them ready to be served?</p>
<p>What could be the best location/directory to keep my SASS (or SCSS) files in a Django project so that they are not available to public as they will already be processed into CSS which is finally available to public? Also, please let me know if my concepts are not clear about static files.</p>
<p>Thank you</p>
| 0 | 2016-10-05T14:30:01Z | 39,877,206 | <p>Keep assets in <code>assets</code> directory. Here is simple project layout I use (many things are omitted):</p>
<pre><code>[project root directory]
├── apps/
|   └── [app]/
|       └── __init__.py
|
├── assets/
|   ├── styles/ (css/)
|   ├── fonts/
|   ├── images/ (img/)
|   └── scripts/ (js/)
|
├── bin/
├── fixtures/
├── media/
├── docs/
├── requirements/
├── [project name]/
|   ├── __init__.py
|   ├── settings/
|   ├── urls.py
|   └── wsgi.py
|
├── templates/
|   ├── [app templates]/
|   ├── base.html
|   └── main.html
|
├── manage.py
├── .gitignore
├── .gitmodules
└── README
</code></pre>
<p>Sass files are compiled to CSS, so they need to be kept apart. Good practice is to have all assets (styles/fonts/images/scripts) in one subdirectory and then compile/copy them (+ minify/uglify/process them however you want along the way) to the <code>static</code> directory with some kind of task-runner (for example Gulp). Same goes for scripts; you can keep Coffee or Typescript files in <code>assets/scripts</code> and then compile them to Javascript files in <code>static</code>. This way only final files are available to the user and are nicely separated from what you (the developer) work on. </p>
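<p>On the Django side only the compiled output is exposed; a minimal <code>settings.py</code> sketch, with paths assuming the layout above:</p>
<pre><code>import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# only compiled files are collected/served; assets/ (raw Sass etc.) is never listed
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
STATIC_URL = '/static/'
</code></pre>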
| 0 | 2016-10-05T14:48:16Z | [
"python",
"django",
"sass"
]
|
Scipy ode solver | 39,876,781 | <p>Using scipy version 0.18.0 and really scratching
my head at the moment.
I reduced the code to the following minimal example:</p>
<pre><code># try to solve the easiest differential equation possible
import numpy as np
import scipy as sp
import scipy.integrate

def phase(t, y):
    c1 = y
    dydt = - c1
    return dydt

c1 = 1.0
y0 = c1
t = np.linspace(0,1,100)
ode_obj = sp.integrate.ode(phase)
ode_obj.set_initial_value(y0)
sol = ode_obj.integrate(t)
</code></pre>
<p>The object tells me it was successful, but the return value
is a numpy array of length 1.</p>
<p>The documentation is extremely sparse and I'm not sure
if I'm using it incorrectly.</p>
<p>Thanks for the help guys.</p>
<p>Greetings.</p>
| 0 | 2016-10-05T14:31:07Z | 39,891,902 | <p>Found this issue on github: <a href="https://github.com/scipy/scipy/issues/1976" rel="nofollow">https://github.com/scipy/scipy/issues/1976</a>.
It basically explains how it works, and once you consider how these initial value solvers operate (stepwise, towards the final point), it becomes clearer why it is done this way. My code from above would look like this:</p>
<pre><code>import scipy as sp
import scipy.integrate
import pylab as pb

def phase(t, y):
    c1 = y
    dydt = - c1
    return dydt

c1 = 1.0
t0 = 0.0  # not necessary here
y0 = c1
t1 = 5.0
t = []
sol = []
ode_obj = sp.integrate.ode(phase)
ode_obj.set_initial_value(y0, t=t0)
ode_obj.set_integrator('vode', method='bdf', rtol=1.e-12)
# integrate stepwise up to t1, collecting (t, y) pairs along the way
while ode_obj.successful() and ode_obj.t < t1:
    ode_obj.integrate(t1, step=True)   # step is a boolean flag
    t.append(ode_obj.t)
    sol.append(ode_obj.y)
pb.plot(t, sol)
pb.show()
</code></pre>
<p>This will produce the following output:
<a href="http://i.stack.imgur.com/aRc1C.png" rel="nofollow"><img src="http://i.stack.imgur.com/aRc1C.png" alt="enter image description here"></a></p>
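<p>If you only need the solution on a fixed time grid, <code>scipy.integrate.odeint</code> is a simpler route for this kind of problem. A minimal sketch (note the reversed argument order, <code>f(y, t)</code>):</p>
<pre><code>import numpy as np
from scipy.integrate import odeint

def phase(y, t):             # odeint expects f(y, t), not f(t, y)
    return -y

t = np.linspace(0, 5, 100)
sol = odeint(phase, 1.0, t)  # dy/dt = -y with y(0) = 1, evaluated on the grid
</code></pre>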
| 0 | 2016-10-06T09:05:06Z | [
"python",
"scipy"
]
|
Flask Default server issue | 39,876,845 | <p>To be Clear, I am well aware that flask's built in server which uses werkzeug cannot be used as a full blown server and should be replaced with Mod_wsgi or something else.</p>
<p>But my requirement is to serve only one request at a time; if another request comes in while the first one is being served, it should queue up and be executed in turn.</p>
<p>Can Flask do that? </p>
<p>Also,</p>
<p>I read the documentation and there was an option like the one below:</p>
<blockquote>
<p>app.run(host="10.343.34534.34543", port=6846,threaded=True)</p>
</blockquote>
<p>What does threaded=true mean?</p>
| 0 | 2016-10-05T14:33:52Z | 39,877,902 | <p><code>flask.Flask.run</code> accepts keyword arguments (e.g. <code>threaded=True</code>) that it forwards to <code>werkzeug.serving.run_simple</code>. Two of those arguments are <code>threaded</code>, which enables threading, and <code>processes</code>, which you can set to have Werkzeug spawn more than one process to handle requests. So if you do:</p>
<pre><code>if __name__ == '__main__':
app.run(threaded=True)
# Alternately
# app.run(processes=3)
</code></pre>
<p>Flask will tell Werkzeug to use threading and to spawn three processes to handle incoming requests.</p>
<p>All of that being said, a WSGI server like Gunicorn should be used in a production environment, not Werkzeug.</p>
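<p>Incidentally, for your one-request-at-a-time requirement, a single synchronous Gunicorn worker gives exactly that behaviour: extra requests wait in the socket backlog until the worker is free (assuming your app lives in <code>app.py</code> as <code>app</code>):</p>
<pre><code>gunicorn --workers 1 app:app
</code></pre>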
| 0 | 2016-10-05T15:18:42Z | [
"python",
"flask",
"server"
]
|
How to identify which .pyd files are required for the execution of exe files generated using py2exe module | 39,876,902 | <p>I have written a python script and generated an exe using py2exe on Windows 32 bit OS. While I'm trying to execute the generated exe file, I'm getting the below error:</p>
<hr>
<pre><code>Traceback (most recent call last):
File "program01.py", line 3, in <module>
File "PIL\Image.pyc", line 67, in <module>
File "PIL\_imaging.pyc", line 12, in <module>
File "PIL\_imaging.pyc", line 10, in __load
ImportError: DLL load failed: The specified module could not be found.
</code></pre>
<hr>
<p>Is there any way to identify the complete list of .pyd files required for my program to be executed?</p>
<p>Below is my program import statements.</p>
<pre><code>from __future__ import division
import os, sys, math, aggdraw
from PIL import Image
import xml.etree.ElementTree as ET
import lxml.etree as LETREE
</code></pre>
<p>Any kind of help would be appreciated!!!</p>
<p>Thanks,
Ram</p>
| 0 | 2016-10-05T14:35:43Z | 39,879,368 | <p>You can include modules by adding <code>options</code> argument into <code>setup</code>:</p>
<pre><code>options = {
    'py2exe': {
        'bundle_files': 1,
        'compressed': True,
        'includes': ['os', 'sys', 'math', 'aggdraw', 'PIL',
                     'xml.etree.ElementTree', 'lxml.etree'],
    }
}
</code></pre>
<p>Only thing that could be different in code above is that you might need to replace <code>xml.etree.ElementTree</code> with <code>xml</code> and <code>lxml.etree</code> with <code>lxml</code> as I'm not quite sure about those.</p>
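<p>For reference, this is how the dict is wired into <code>setup()</code>; a sketch, with the script name assumed from your traceback:</p>
<pre><code>from distutils.core import setup
import py2exe

setup(
    console=['program01.py'],  # assumed entry script
    options=options,
)
</code></pre>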
| 0 | 2016-10-05T16:32:14Z | [
"python",
"pydev",
"py2exe",
"pillow",
"pyd"
]
|
SQLAlchemy query using both ilike and contains as filters | 39,877,109 | <p>I'm trying to use both .contains and .ilike to filter a query from a string value.</p>
<p>For example the query should look something like this:</p>
<pre><code>model.query.filter(model.column.contains.ilike("string")).all()
</code></pre>
<p>Currently querying by <code>.contains</code> returns case-sensitive results, and querying by <code>ilike</code> returns case-insensitive results, but the whole string must match, which is far too specific a query. I want potential results to appear to the user as they do with a <code>.contains</code> query.</p>
| -1 | 2016-10-05T14:44:20Z | 39,878,153 | <p>Wrap the search term in <code>%</code> wildcards; <code>ilike</code> then behaves as a case-insensitive "contains":</p>
<pre><code>foo = model.query.filter(model.column.ilike("%" + string_variable + "%")).all()
</code></pre>
| 0 | 2016-10-05T15:30:54Z | [
"python",
"sqlalchemy"
]
|
Insert missing weekdays in pandas dataframe and fill them with NaN | 39,877,184 | <p>I am trying to insert missing weekdays in a time series dataframe such has </p>
<pre><code>import pandas as pd
from pandas.tseries.offsets import *
df = pd.DataFrame([['2016-09-30', 10, 2020], ['2016-10-03', 20, 2424], ['2016-10-05', 5, 232]], columns=['date', 'price', 'vol'])
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
</code></pre>
<p>data looks like this :</p>
<pre><code>Out[300]:
price vol
date
2016-09-30 10 2020
2016-10-03 20 2424
2016-10-05 5 232
</code></pre>
<p>I can create a series of week days easily with <code>pd.date_range()</code></p>
<pre><code>pd.date_range('2016-09-30', '2016-10-05', freq=BDay())
Out[301]: DatetimeIndex(['2016-09-30', '2016-10-03', '2016-10-04', '2016-10-05'], dtype='datetime64[ns]', freq='B')
</code></pre>
<p>based on that DateTimeIndex I would like to add missing dates in my <code>df</code>and fill column values with NaN so I get:</p>
<pre><code>Out[300]:
price vol
date
2016-09-30 10 2020
2016-10-03 20 2424
2016-10-04 NaN NaN
2016-10-05 5 232
</code></pre>
<p>is there an easy way to do this? Thanks!</p>
| 0 | 2016-10-05T14:47:03Z | 39,877,325 | <p>You can use reindex:</p>
<pre><code>df.index = pd.to_datetime(df.index)
df.reindex(pd.date_range('2016-09-30', '2016-10-05', freq=BDay()))
Out:
price vol
2016-09-30 10.0 2020.0
2016-10-03 20.0 2424.0
2016-10-04 NaN NaN
2016-10-05 5.0 232.0
</code></pre>
| 1 | 2016-10-05T14:52:37Z | [
"python",
"python-2.7",
"pandas",
"datetimeindex"
]
|
Insert missing weekdays in pandas dataframe and fill them with NaN | 39,877,184 | <p>I am trying to insert missing weekdays in a time series dataframe such has </p>
<pre><code>import pandas as pd
from pandas.tseries.offsets import *
df = pd.DataFrame([['2016-09-30', 10, 2020], ['2016-10-03', 20, 2424], ['2016-10-05', 5, 232]], columns=['date', 'price', 'vol'])
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')
</code></pre>
<p>data looks like this :</p>
<pre><code>Out[300]:
price vol
date
2016-09-30 10 2020
2016-10-03 20 2424
2016-10-05 5 232
</code></pre>
<p>I can create a series of week days easily with <code>pd.date_range()</code></p>
<pre><code>pd.date_range('2016-09-30', '2016-10-05', freq=BDay())
Out[301]: DatetimeIndex(['2016-09-30', '2016-10-03', '2016-10-04', '2016-10-05'], dtype='datetime64[ns]', freq='B')
</code></pre>
<p>based on that DateTimeIndex I would like to add missing dates in my <code>df</code>and fill column values with NaN so I get:</p>
<pre><code>Out[300]:
price vol
date
2016-09-30 10 2020
2016-10-03 20 2424
2016-10-04 NaN NaN
2016-10-05 5 232
</code></pre>
<p>is there an easy way to do this? Thanks!</p>
| 0 | 2016-10-05T14:47:03Z | 39,880,123 | <p>Alternatively, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow">pandas.DataFrame.resample()</a>, specifying 'B' for <em>Business Day</em> with no need to specify beginning or end date sequence as along as the dataframe maintains a datetime index</p>
<pre><code>df = df.resample('B').sum()
# price vol
# date
# 2016-09-30 10.0 2020.0
# 2016-10-03 20.0 2424.0
# 2016-10-04 NaN NaN
# 2016-10-05 5.0 232.0
</code></pre>
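<p>Note that newer pandas versions fill empty bins with <code>0</code> when aggregating with <code>.sum()</code>; if you only want the missing business days inserted as <code>NaN</code>, <code>asfreq</code> avoids the aggregation entirely:</p>
<pre><code>df = df.resample('B').asfreq()
</code></pre>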
| 2 | 2016-10-05T17:17:43Z | [
"python",
"python-2.7",
"pandas",
"datetimeindex"
]
|
Python: Open Excel Workbook using Win32 COM Api | 39,877,278 | <p>I'm using the following code to open and display a workbook within Excel:</p>
<pre><code>import win32com.client as win32
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Open('my_sheet.xlsm')
ws = wb.Worksheets('blaaaa')
excel.Visible = True
</code></pre>
<p>When the File 'my_sheet.xlsm' is already opened, Excel asks me whether I want to re-open it without saving. </p>
<p>How could I check in advance whether the workbook is already opened up and in case it is, just bring it to the front?</p>
<p><strong>EDIT</strong>: I've figured this part out by now:</p>
<pre><code>if excel.Workbooks.Count > 0:
    for i in range(1, excel.Workbooks.Count + 1):
        # compare with ==, not "is": identity checks on strings are unreliable
        if excel.Workbooks.Item(i).Name == 'my_sheet.xlsm':
            wb = excel.Workbooks.Item(i)
            break
</code></pre>
<p><strong>And one more question</strong>: My worksheet contains some headers where I enabled some filtering. So when the filters are set, and when I open the Workbook from Python, it sometimes asks me to enter a unique name to save the filter. Why is that? This is the dialogue:</p>
<p><a href="http://i.stack.imgur.com/j3D6B.png" rel="nofollow"><img src="http://i.stack.imgur.com/j3D6B.png" alt="enter image description here"></a></p>
<p><strong>EDIT</strong> Ok, here it says (in German) that the latter problem is a known bug in 2007 and 2010 files: <a href="https://social.msdn.microsoft.com/Forums/de-DE/3dce9f06-2262-4e22-a8ff-5c0d83166e73/excel-api-interne-namen" rel="nofollow">https://social.msdn.microsoft.com/Forums/de-DE/3dce9f06-2262-4e22-a8ff-5c0d83166e73/excel-api-interne-namen</a> and it seems to occur when you open Excel files programmatically. I don't know whether there is a workaround.</p>
| -1 | 2016-10-05T14:51:06Z | 39,880,844 | <p>While you found a solution, consider using <a href="https://docs.python.org/2/tutorial/errors.html#defining-clean-up-actions" rel="nofollow">try/except/finally</a> block. Currently your code will leave the Excel.exe process running in background (check Task Manager if using Windows) even if you close the visible worksheet. As an aside, in Python or any other language like VBA, any external API such as this COM interface should always be released cleanly during application code.</p>
<p>Below solution uses a defined function, <code>openWorkbook()</code> to go two potential routes: 1) first attempts to relaunch specified workbook assuming it is opened and 2) if it is not currently opened launches a new workbook object of that location. The last nested <code>try/except</code> is used in case the <code>Workbooks.Open()</code> method fails such as incorrect file name.</p>
<pre><code>import win32com.client as win32
def openWorkbook(xlapp, xlfile):
try:
xlwb = xlapp.Workbooks(xlfile)
except Exception as e:
try:
xlwb = xlapp.Workbooks.Open(xlfile)
except Exception as e:
print(e)
xlwb = None
return(xlwb)
try:
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = openWorkbook(excel, 'my_sheet.xlsm')
ws = wb.Worksheets('blaaaa')
excel.Visible = True
except Exception as e:
print(e)
finally:
# RELEASES RESOURCES
ws = None
wb = None
excel = None
</code></pre>
| 1 | 2016-10-05T18:03:07Z | [
"python",
"excel",
"winapi",
"com-interface"
]
|
How to download my file from HDInsight cluster that is saved from pd.to_csv() | 39,877,341 | <p>I have a HDInsight cluster running Spark 1.6.2 & Jupyter
In a Jupyter notebook I run my pyspark commands, and some of the output is processed into a pandas dataframe.</p>
<p>As the last step I would like to save out my pandas dataframe to a csv file and either:</p>
<ol>
<li>save it to the 'jupyter filesystem' and download it to my laptop</li>
<li>save it to my blob storage</li>
</ol>
<p>But I have no clue how to do that.</p>
<p>I tried the following for:</p>
<p><strong>1. save it to the 'jupyter filesystem' and download it to my laptop</strong></p>
<pre><code># df is my resulting dataframe, so I save it to the filesystem where jupyter runs
df.to_csv('app_keys.txt')
</code></pre>
<p>I was expecting it to save in the same directory as my notebook and thus to see it in the tree view in the browser. This is not the case. So my question is: <strong>Where is this file saved on the filesystem?</strong></p>
<p><strong>2. save it to my blob storage</strong>
After googling it seems I could also upload the file to blob storage using the azure.storage.blob module. So I tried:</p>
<pre><code>from azure.storage.blob import BlobService # a lot of examples online import BlockBlobService but this one is not available in HDInsight
# i have all variables in CAPITALS provided in the code
blob_service=BlobService(account_name=STORAGEACCOUNTNAME,account_key=STORAGEACCOUNTKEY)
# check if reading from blob works
blob_service.get_blob_to_path(CONTAINERNAME, 'iris.txt', 'mylocalfile.txt') # this works
# now try to reverse the process and write to blob
blob_service.create_blob_from_path(CONTAINERNAME,'myblobfile.txt','mylocalfile.txt') # fails with AttributeError: 'BlobService' object has no attribute 'create_blob_from_path'
</code></pre>
<p>or </p>
<pre><code>blob_service.create_blob_from_text(CONTAINERNAME,'myblobfile.txt','mylocalfile.txt') # fails with 'BlobService' object has no attribute 'create_blob_from_text'
</code></pre>
<p>So I have no clue how I can write back and access the stuff I write out from my pandas to the filesystem.</p>
<p>Any help is apprciated</p>
| 0 | 2016-10-05T14:53:26Z | 39,891,156 | <p>In my experience, the second problem you encountered is due to the version of the Azure storage client library for Python:
the old <code>BlobService</code> class doesn't include the <code>create_blob_from_*</code> methods you invoked; they belong to the newer <code>BlockBlobService</code>. The following URL is useful:</p>
<p><a href="http://stackoverflow.com/questions/35558463/how-to-import-azure-blobservice-in-python">How to import Azure BlobService in python?</a>.</p>
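<p>If you are stuck on the old SDK, the equivalent upload call there is <code>put_block_blob_from_path</code>; a sketch reusing the names from your snippet:</p>
<pre><code>blob_service.put_block_blob_from_path(CONTAINERNAME, 'myblobfile.txt', 'mylocalfile.txt')
</code></pre>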
| 0 | 2016-10-06T08:27:00Z | [
"python",
"azure",
"pandas",
"azure-storage-blobs"
]
|
Names of objects in a list? | 39,877,498 | <p>I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.</p>
<pre><code>def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None

a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
</code></pre>
<p>This prints:</p>
<pre><code>a
obj
obj
</code></pre>
<p>How can I access the name of the original object itself instead of <code>obj</code>? </p>
| 2 | 2016-10-05T15:00:21Z | 39,877,727 | <p>Python doesn't actually work like this.</p>
<p>Objects in Python don't have innate names. It's the names that belong to an object, not the other way around: an object can have many names (or no names at all).</p>
<p>You're getting two copies of "obj" printed, because at the time you call <code>variable_to_value</code>, both the name <code>b</code> and the name <code>obj</code> refer to the same object! (The dictionary <code>{'b': [4,5,6]}</code>) So when you search for the global namespace for any value which is equal to <code>obj</code> (note that you should be checking using <code>is</code> rather than <code>==</code>) it's effectively random whether or not you get <code>b</code> or <code>obj</code>.</p>
| 0 | 2016-10-05T15:09:33Z | [
"python",
"list",
"introspection"
]
|
Names of objects in a list? | 39,877,498 | <p>I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.</p>
<pre><code>def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None

a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
</code></pre>
<p>This prints:</p>
<pre><code>a
obj
obj
</code></pre>
<p>How can I access the name of the original object itself instead of <code>obj</code>? </p>
| 2 | 2016-10-05T15:00:21Z | 39,877,731 | <p>The function returns the first name it finds referencing an object in your <code>globals()</code>. However, at each iteration, the name <code>obj</code> will reference each of the objects. So either the name <code>a</code>, <code>b</code> or <code>c</code> is returned or <code>obj</code>, depending on the one which was reached first in <code>globals()</code>.</p>
<p>You can avoid returning <code>obj</code> by excluding that name from the search in your function - <em>sort of hackish</em>:</p>
<pre><code>def variable_to_value(value):
for n, v in globals().items():
if v == value and n != 'obj':
return n
else:
return None
</code></pre>
| 1 | 2016-10-05T15:10:00Z | [
"python",
"list",
"introspection"
]
|
Names of objects in a list? | 39,877,498 | <p>I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.</p>
<pre><code>def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None

a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
</code></pre>
<p>This prints:</p>
<pre><code>a
obj
obj
</code></pre>
<p>How can I access the name of the original object itself instead of <code>obj</code>? </p>
| 2 | 2016-10-05T15:00:21Z | 39,877,760 | <p>The problem is that <code>obj</code>, your iteration variable is also in <code>globals</code>. Whether you get <code>a</code> or <code>obj</code> is just by luck. You can't solve the problem in general because an object can have any number of assignments in globals. You could update your code to exclude known references, but that is very fragile.</p>
<p>For example</p>
<pre><code>a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}
print("'obj' is also in globals")
def variable_to_value(value):
return [n for n,v in globals().items() if v == value]
for obj in [a, b, c]:
name = variable_to_value(obj)
print(name)
print("you can update your code to exclude it")
def variable_to_value(value, exclude=None):
return [n for n,v in globals().items() if v == value and n != exclude]
for obj in [a, b, c]:
name = variable_to_value(obj, 'obj')
print(name)
print("but you'll still see other assignments")
foo = a
bar = b
bax = c
for obj in [a, b, c]:
name = variable_to_value(obj, 'obj')
print(name)
</code></pre>
<p>When run</p>
<pre><code>'obj' is also in globals
['a', 'obj']
['b', 'obj']
['c', 'obj']
you can update your code to exclude it
['a']
['b']
['c']
but you'll still see other assignments
['a', 'foo']
['b', 'bar']
['c', 'bax']
</code></pre>
| 2 | 2016-10-05T15:11:39Z | [
"python",
"list",
"introspection"
]
|
Names of objects in a list? | 39,877,498 | <p>I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.</p>
<pre><code>def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None

a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
</code></pre>
<p>This prints:</p>
<pre><code>a
obj
obj
</code></pre>
<p>How can I access the name of the original object itself instead of <code>obj</code>? </p>
| 2 | 2016-10-05T15:00:21Z | 39,878,309 | <p>So you want to find the name of any object available in the <code>globals()</code>?</p>
<p>Inside the <code>for</code> loop, the <code>globals()</code> dict is being mutated: <code>obj</code> is added to its namespace. So on your second pass, you have two references to the same object (originally only referenced by the name 'a').</p>
<p>Danger of using <code>globals()</code>, I suppose. </p>
| 0 | 2016-10-05T15:37:36Z | [
"python",
"list",
"introspection"
]
|
Select Iframe inside other tags on Selenium | 39,877,503 | <p>Good day everyone.</p>
<p>First, here is a TL;DR version: </p>
<p>I need to select an element inside an iframe which is also inside many other HTML tags.</p>
<p>Now, the actual problem:</p>
<p>Today I'm trying to solve a little problem I have at work. Every day I need to perform a time-consuming task of filling a form with some information. The thing is that every form must be completed in exactly the same way and I have to do this on an individual web page. I learned about Selenium a few months back and tried to use it on the site so that I could automate this process.</p>
<p>The thing is that I ran into a problem. Apparently, Selenium IDE doesn't recognize iframes. I tried to export the code to Python 2.7 and edit it according to some Stack Overflow solutions I've found, but apparently, nothing works, probably because I'm new to this kind of thing.</p>
<p>The iframe is located inside a bunch of other HTML tags (like table, span, body, etc.), so I have no idea how to get to it.</p>
<p>Below is part of the webpage's code that contains the iframe and just after it is the test that I tried to automate using Python 2.7. Have in mind that I'm completely new to python and I've only used Selenium IDE once so far.</p>
<p>HTML code simplified by section:</p>
<pre><code><table class="dialog tabbed " id="tabDialog_6" cellspacing="0" cellpadding="0">
<tbody>
##A bunch of tables, spam, and bodies above.
<iframe id="descrpt_ifr" src='javascript:""' frameborder="0" style="width: 100%; height: 126px;" tabindex="78">
</iframe>
</td>
</tr>
<tr class="mceLast"><td class="mceStatusbar mceFirst mceLast">
<div id="descrpt_path_row">&nbsp;<a href="#" accesskey="x">
</a>
</div>
<a id="descrpt_resize" href="javascript:;" onclick="return false;" class="mceResize">
</a>
</td>
</tr>
</tbody>
</table>
</span>
<script>var WYSIWYGdisplayed = Boolean(window.tinyMCE != null);
</script>&nbsp;<table style="vertical-align: text-bottom; display: inline;" width="1%" cellpadding="0" cellspacing="0" border="0">
<tbody>
<tr>
<td nowrap="" style="border: none;">
<a id="oylyLink" title="Spell Check" class="button" href="javascript: void spellCheck(document.regform.LADD)" tabindex="79" onfocus="document.getElementById('oyly').onmouseover();" onblur="document.getElementById('oyly').onmouseout();">
<div id="oyly" class="button_off" onmousedown="mousedownButton('oyly');" onmouseup="highlightButton(false, 'oyly');" onmouseover="highlightButton(true, 'oyly');" onmouseout="highlightButton(false, 'oyly');" style="">
<img src="/MRimg/spell_check.png" alt="Spell Check" border="0" align="absmiddle" id="oyly_image">
</div>
</a>
</td>
</tr>
</tbody>
</table>
</div>
<div class="descriptionSectionHeader">Complete Description (PUBLIC)
</div>
<div class="indented"><iframe src="/tmp/alldesc_61121857.html" name="ALL_DESCS" width="702" height="170" scrolling="yes" wrap="hard"></iframe></div>
</td>
</tr>
</tbody>
</table>
</code></pre>
<p>Entire html code of the page:</p>
<pre><code><table class="dialog tabbed " id="tabDialog_6" cellspacing="0" cellpadding="0">
<script language="javascript"> tabIDs['Description'] = "tabDialog_6"; </script>
<tbody><tr>
<td class="dialogMainContent">
<div id="newDesc" class="descriptionSectionHeader"><label id="LADD_label" for="descrpt" class="">Append New Description (PUBLIC)<span class="asteriskHolder">*</span></label></div><div class="indented">
<script language="JavaScript">
function combineFields () {
var flds = new Array();
var res = new Array();
for (var i=0; i< flds.length; i++) {
var fld = document.regform.elements[flds[i]];
if (fld != null)
{
if (fld.type == 'select-one' && fld.selectedIndex>=0 && fld.options[fld.selectedIndex].value) {
res.push (fld.name+'='+fld.options[fld.selectedIndex].value);
} else if (fld.type == 'text' && fld.value) {
res.push (fld.name+'='+fld.value);
} else if (fld.type == 'select-multiple' && fld.selectedIndex>=0) {
var sel = new Array();
for (var j=0; j< fld.options.length; j++) {
if (fld.options[j].selected && fld.options[j].value) {
sel.push(fld.options[j].value);
}
}
if (sel.length> 0){res.push (fld.name+'~'+sel.join('|'));}
} else if (fld.type == 'checkbox' && fld.checked) {
res.push (fld.name+'='+fld.value);
} else if (fld.type == 'textarea' && fld.value) {
res.push (fld.name+'~'+fld.value);
}
}
}
var ret = encodeURIComponent(res.join(','));
ret = '&SOLFILTR=' + (ret || "__nothing__");
return ret;
}
function searchsolutions() {
var height = 530;
var width = 660;
var str = combineFields();
window.open('/MRcgi/MRshowsolutions_page.pl?USER=UserPROJECTID=2&MRP=5pIpUWj0&REGPAGE=0&SELECTSOLUTION=1&SESS_ID=9c91e9288993272a387fbe9f9f7c0fac&SAVEDNAME=SOL'+str, 'solsearch', 'top=176,left=133,width=' + width + ',height=' + height + ',status=yes,toolbar=no,directories=no,menubar=no,scrollbars=yes,resizable=yes');
}
</script>
<table class="inlineDialogHeading indented">
<tbody><tr>
<td><table style="vertical-align: text-bottom; display: inline;" width="1%" cellpadding="0" cellspacing="0" border="0"><tbody><tr><td nowrap="" style="border: none;"><a id="searchKBButtonLink" title="Search Knowledge Base" class="button" href="javascript:searchsolutions();" tabindex="77" onfocus="document.getElementById('searchKBButton').onmouseover();" onblur="document.getElementById('searchKBButton').onmouseout();"><div id="searchKBButton" class="button_off" onmousedown="mousedownButton('searchKBButton');" onmouseup="highlightButton(false, 'searchKBButton');" onmouseover="highlightButton(true, 'searchKBButton');" onmouseout="highlightButton(false, 'searchKBButton');" style=""><img src="/MRimg/knowledge_base.png" alt="Search Knowledge Base" border="0" align="absmiddle" id="searchKBButton_image"> <span id="searchKBButton_textSpan" style=" height: 18px; cursor: pointer;">Search Knowledge Base</span></div></a></td></tr></tbody></table></td></tr>
</tbody></table>
<textarea name="LADD" title="Description (PUBLIC)" cols="85" id="descrpt" tabindex="-1" onkeypress="addAutoSpell(document.regform.LADD)" style="height: 170px; width: 702px; display: none;" class="wysiwyg"></textarea><span id="descrpt_parent" class="mceEditor defaultSkin"><table id="descrpt_tbl" class="mceLayout" cellspacing="0" cellpadding="0" style="width: 702px; height: 170px;"><tbody><tr class="mceFirst"><td class="mceToolbar mceLeft mceFirst mceLast"><a href="#" accesskey="q" title="Jump to tool buttons - Alt+Q, Jump to editor - Alt-Z, Jump to element path - Alt-X"><!-- IE --></a><table id="descrpt_toolbar1" class="mceToolbar mceToolbarRow1 Enabled" cellpadding="0" cellspacing="0" align=""><tbody><tr><td class="mceToolbarStart mceToolbarStartButton mceFirst"><span><!-- IE --></span></td><td><a id="descrpt_newdocument" href="javascript:;" class="mceButton mceButtonEnabled mce_newdocument" onmousedown="return false;" onclick="return false;" title="New document"><span class="mceIcon mce_newdocument"></span></a></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_cut" href="javascript:;" class="mceButton mceButtonEnabled mce_cut" onmousedown="return false;" onclick="return false;" title="Cut"><span class="mceIcon mce_cut"></span></a></td><td><a id="descrpt_copy" href="javascript:;" class="mceButton mceButtonEnabled mce_copy" onmousedown="return false;" onclick="return false;" title="Copy"><span class="mceIcon mce_copy"></span></a></td><td><a id="descrpt_paste" href="javascript:;" class="mceButton mceButtonEnabled mce_paste" onmousedown="return false;" onclick="return false;" title="Paste"><span class="mceIcon mce_paste"></span></a></td><td><a id="descrpt_undo" href="javascript:;" class="mceButton mce_undo mceButtonEnabled" onmousedown="return false;" onclick="return false;" title="Undo (Ctrl+Z)"><span class="mceIcon mce_undo"></span></a></td><td><a id="descrpt_redo" href="javascript:;" class="mceButton mce_redo mceButtonDisabled" onmousedown="return false;" onclick="return false;" title="Redo (Ctrl+Y)"><span class="mceIcon mce_redo"></span></a></td><td><span class="mceSeparator"></span></td><td><table id="descrpt_fontselect" cellpadding="0" cellspacing="0" class="mceListBox mceListBoxEnabled mce_fontselect"><tbody><tr><td class="mceFirst"><a id="descrpt_fontselect_text" href="javascript:;" class="mceText mceTitle" onclick="return false;" onmousedown="return false;">Font family</a></td><td class="mceLast"><a id="descrpt_fontselect_open" tabindex="-1" href="javascript:;" class="mceOpen" onclick="return false;" onmousedown="return false;"><span></span></a></td></tr></tbody></table></td><td><table id="descrpt_fontsizeselect" cellpadding="0" cellspacing="0" class="mceListBox mceListBoxEnabled mce_fontsizeselect"><tbody><tr><td class="mceFirst"><a id="descrpt_fontsizeselect_text" href="javascript:;" class="mceText mceTitle" onclick="return false;" onmousedown="return false;">Font size</a></td><td class="mceLast"><a id="descrpt_fontsizeselect_open" tabindex="-1" href="javascript:;" class="mceOpen" onclick="return false;" onmousedown="return false;"><span></span></a></td></tr></tbody></table></td><td><span class="mceSeparator"></span></td><td><table id="descrpt_forecolor" class="mceSplitButton mceSplitButtonEnabled mce_forecolor" cellpadding="0" cellspacing="0" onmousedown="return false;" title="Select text color"><tbody><tr><td class="mceFirst"><a id="descrpt_forecolor_action" href="javascript:;" class="mceAction mce_forecolor" onclick="return false;" onmousedown="return false;" 
title="Select text color"><span class="mceAction mce_forecolor"></span><div id="descrpt_forecolor_preview" class="mceColorPreview" style="background-color: rgb(136, 136, 136);"></div></a></td><td class="mceLast"><a id="descrpt_forecolor_open" href="javascript:;" class="mceOpen mce_forecolor" onclick="return false;" onmousedown="return false;" title="Select text color"><span class="mceOpen mce_forecolor"></span></a></td></tr></tbody></table></td><td><table id="descrpt_backcolor" class="mceSplitButton mceSplitButtonEnabled mce_backcolor" cellpadding="0" cellspacing="0" onmousedown="return false;" title="Select background color"><tbody><tr><td class="mceFirst"><a id="descrpt_backcolor_action" href="javascript:;" class="mceAction mce_backcolor" onclick="return false;" onmousedown="return false;" title="Select background color"><span class="mceAction mce_backcolor"></span><div id="descrpt_backcolor_preview" class="mceColorPreview" style="background-color: rgb(136, 136, 136);"></div></a></td><td class="mceLast"><a id="descrpt_backcolor_open" href="javascript:;" class="mceOpen mce_backcolor" onclick="return false;" onmousedown="return false;" title="Select background color"><span class="mceOpen mce_backcolor"></span></a></td></tr></tbody></table></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_bold" href="javascript:;" class="mceButton mceButtonEnabled mce_bold" onmousedown="return false;" onclick="return false;" title="Bold (Ctrl+B)"><span class="mceIcon mce_bold"></span></a></td><td><a id="descrpt_italic" href="javascript:;" class="mceButton mceButtonEnabled mce_italic" onmousedown="return false;" onclick="return false;" title="Italic (Ctrl+I)"><span class="mceIcon mce_italic"></span></a></td><td><a id="descrpt_underline" href="javascript:;" class="mceButton mceButtonEnabled mce_underline" onmousedown="return false;" onclick="return false;" title="Underline (Ctrl+U)"><span class="mceIcon mce_underline"></span></a></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_justifyleft" href="javascript:;" class="mceButton mceButtonEnabled mce_justifyleft" onmousedown="return false;" onclick="return false;" title="Align left"><span class="mceIcon mce_justifyleft"></span></a></td><td><a id="descrpt_justifycenter" href="javascript:;" class="mceButton mceButtonEnabled mce_justifycenter" onmousedown="return false;" onclick="return false;" title="Align center"><span class="mceIcon mce_justifycenter"></span></a></td><td><a id="descrpt_justifyright" href="javascript:;" class="mceButton mceButtonEnabled mce_justifyright" onmousedown="return false;" onclick="return false;" title="Align right"><span class="mceIcon mce_justifyright"></span></a></td><td><a id="descrpt_justifyfull" href="javascript:;" class="mceButton mceButtonEnabled mce_justifyfull" onmousedown="return false;" onclick="return false;" title="Align full"><span class="mceIcon mce_justifyfull"></span></a></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_bullist" href="javascript:;" class="mceButton mceButtonEnabled mce_bullist" onmousedown="return false;" onclick="return false;" title="Unordered list"><span class="mceIcon mce_bullist"></span></a></td><td><a id="descrpt_numlist" href="javascript:;" class="mceButton mceButtonEnabled mce_numlist" onmousedown="return false;" onclick="return false;" title="Ordered list"><span class="mceIcon mce_numlist"></span></a></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_outdent" href="javascript:;" class="mceButton mce_outdent mceButtonDisabled" 
onmousedown="return false;" onclick="return false;" title="Outdent"><span class="mceIcon mce_outdent"></span></a></td><td><a id="descrpt_indent" href="javascript:;" class="mceButton mceButtonEnabled mce_indent" onmousedown="return false;" onclick="return false;" title="Indent"><span class="mceIcon mce_indent"></span></a></td><td><span class="mceSeparator"></span></td><td><a id="descrpt_table" href="javascript:;" class="mceButton mceButtonEnabled mce_table" onmousedown="return false;" onclick="return false;" title="Inserts a new table"><span class="mceIcon mce_table"></span></a></td><td><a id="descrpt_hr" href="javascript:;" class="mceButton mceButtonEnabled mce_hr" onmousedown="return false;" onclick="return false;" title="Insert horizontal ruler"><span class="mceIcon mce_hr"></span></a></td><td><a id="descrpt_charmap" href="javascript:;" class="mceButton mceButtonEnabled mce_charmap" onmousedown="return false;" onclick="return false;" title="Insert custom character"><span class="mceIcon mce_charmap"></span></a></td><td><a id="descrpt_link" href="javascript:;" class="mceButton mce_link mceButtonEnabled" onmousedown="return false;" onclick="return false;" title="Insert/edit link"><span class="mceIcon mce_link"></span></a></td><td><a id="descrpt_image" href="javascript:;" class="mceButton mceButtonEnabled mce_image" onmousedown="return false;" onclick="return false;" title="Insert/edit image"><span class="mceIcon mce_image"></span></a></td><td class="mceToolbarEnd mceToolbarEndButton mceLast"><span><!-- IE --></span></td></tr></tbody></table><a href="#" accesskey="z" title="Jump to tool buttons - Alt+Q, Jump to editor - Alt-Z, Jump to element path - Alt-X" onfocus="tinyMCE.getInstanceById('descrpt').focus();"><!-- IE --></a></td></tr><tr><td class="mceIframeContainer mceFirst mceLast">
##THE IFRAME IS RIGHT IN THIS SECTION
<iframe id="descrpt_ifr" src='javascript:""' frameborder="0" style="width: 100%; height: 126px;" tabindex="78">
</iframe>
</td>
</tr>
<tr class="mceLast"><td class="mceStatusbar mceFirst mceLast">
<div id="descrpt_path_row">&nbsp;<a href="#" accesskey="x">
</a>
</div>
<a id="descrpt_resize" href="javascript:;" onclick="return false;" class="mceResize">
</a>
</td>
</tr>
</tbody>
</table>
</span>
<script>var WYSIWYGdisplayed = Boolean(window.tinyMCE != null);
</script>&nbsp;<table style="vertical-align: text-bottom; display: inline;" width="1%" cellpadding="0" cellspacing="0" border="0">
<tbody>
<tr>
<td nowrap="" style="border: none;">
<a id="oylyLink" title="Spell Check" class="button" href="javascript: void spellCheck(document.regform.LADD)" tabindex="79" onfocus="document.getElementById('oyly').onmouseover();" onblur="document.getElementById('oyly').onmouseout();">
<div id="oyly" class="button_off" onmousedown="mousedownButton('oyly');" onmouseup="highlightButton(false, 'oyly');" onmouseover="highlightButton(true, 'oyly');" onmouseout="highlightButton(false, 'oyly');" style="">
<img src="/MRimg/spell_check.png" alt="Spell Check" border="0" align="absmiddle" id="oyly_image">
</div>
</a>
</td>
</tr>
</tbody>
</table>
</div>
<div class="descriptionSectionHeader">Complete Description (PUBLIC)
</div>
<div class="indented"><iframe src="/tmp/alldesc_61121857.html" name="ALL_DESCS" width="702" height="170" scrolling="yes" wrap="hard"></iframe></div>
</td>
</tr>
</tbody>
</table>
</code></pre>
<p>My Selenium IDE code exported to python.</p>
<pre><code># -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import unittest, time, re
class CasoDeTesteStiMEmPython(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.driver.implicitly_wait(30)
self.base_url = "THE WEBPAGE I NEED TO FILL THE FORM"
self.verificationErrors = []
self.accept_next_alert = True
def test_caso_de_teste_sti_m_em_python(self):
driver = self.driver
driver.get(self.base_url + "/MRcgi/MRhomepage.pl?USER=yferreir&PROJECTID=3&MRP=bmKXVpJkDK&OPTION=none&WRITECACHE=1&FIRST_TIME_IN_FP=1&FIRST_TIME_IN_PROJ=1&REMEMBER_SCREEN=1&")
driver.find_element_by_id("SEARCHS").clear()
driver.find_element_by_id("SEARCHS").send_keys("404905")
driver.find_element_by_id("splitbutton1-button").click()
# ERROR: Caught exception [ERROR: Unsupported command [waitForPopUp | details404905 | 100000]]
# ERROR: Caught exception [ERROR: Unsupported command [selectWindow | name=details404905 | ]]
driver.find_element_by_id("editButton_image").click()
Select(driver.find_element_by_id("status")).select_by_visible_text("Under Investigation")
##RIGHT HERE IS WHERE I NEED TO SELECT THE IFRAME
##THE FOLLOWING PARTS OF THE CODE WITH ## ARE FROM A STACKOVERFLOW SOLUTION I TRIED
## Give time for iframe to load ##
time.sleep(3)
## You have to switch to the iframe like so: ##
driver.switch_to_frame(driver.find_element_by_tag_name("descrpt_ifr"))
## Insert text via xpath ##
        elem = driver.find_element_by_xpath('//*[@id="tinymce"]/p')
elem.send_keys("WHAT I NEED TO WRITE IN THE ELEMENT INSIDE THE IFRAME")
## Switch back to the "default content" (that is, out of the iframes) ##
driver.switch_to_default_content()
# ERROR: Caught exception [ERROR: Unsupported command [selectWindow | name=details404905 | ]]
driver.find_element_by_id("goButton_textSpan").click()
# ERROR: Caught exception [ERROR: Unsupported command [selectWindow | null | ]]
driver.find_element_by_id("SEARCHS").clear()
driver.find_element_by_id("SEARCHS").send_keys("404905")
driver.find_element_by_id("splitbutton1-button").click()
# ERROR: Caught exception [ERROR: Unsupported command [waitForPopUp | details404905 | 100000]]
# ERROR: Caught exception [ERROR: Unsupported command [selectWindow | name=details404905 | ]]
driver.find_element_by_id("editButton_textSpan").click()
Select(driver.find_element_by_id("status")).select_by_visible_text("Solved")
driver.find_element_by_id("tab_1").click()
Select(driver.find_element_by_id("userfield22")).select_by_visible_text("Administration")
driver.find_element_by_id("userfield24").clear()
driver.find_element_by_id("userfield24").send_keys("Cancelamento realizado com sucesso.")
driver.find_element_by_id("goButton_textSpan").click()
def is_element_present(self, how, what):
try: self.driver.find_element(by=how, value=what)
except NoSuchElementException as e: return False
return True
def is_alert_present(self):
try: self.driver.switch_to_alert()
except NoAlertPresentException as e: return False
return True
def close_alert_and_get_its_text(self):
try:
alert = self.driver.switch_to_alert()
alert_text = alert.text
if self.accept_next_alert:
alert.accept()
else:
alert.dismiss()
return alert_text
finally: self.accept_next_alert = True
def tearDown(self):
self.driver.quit()
self.assertEqual([], self.verificationErrors)
if __name__ == "__main__":
unittest.main()
</code></pre>
<p>I've tried using the iframe's ID "descrpt_ifr" and the following XPaths:</p>
<p>Iframe XPath: //*[@id="descrpt_ifr"].</p>
<p>XPath of the element inside the iframe I want to use: //*[@id="tinymce"]/p.</p>
| 1 | 2016-10-05T15:00:29Z | 39,920,218 | <p>Looks like the line of code used to switch to the iframe is causing the error</p>
<pre><code>driver.switch_to_frame(driver.find_element_by_tag_name("descrpt_ifr"))
</code></pre>
<p>'descrpt_ifr' is not a tag name. Change it to </p>
<pre><code>driver.switch_to_frame(driver.find_element_by_id("descrpt_ifr"))
</code></pre>
<p>Or use xpath to identify the iframe in case there are multiple matching nodes for the id. </p>
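<p>For example (and note that newer Selenium bindings deprecate <code>switch_to_frame</code> in favour of <code>driver.switch_to.frame(...)</code>):</p>
<pre><code>driver.switch_to.frame(driver.find_element_by_xpath('//*[@id="descrpt_ifr"]'))
</code></pre>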
| 1 | 2016-10-07T14:55:26Z | [
"python",
"html",
"selenium",
"iframe",
"xpath"
]
|
Find specific files in subdirs using Python os.walk | 39,877,560 | <p>In Windows, if I ran:</p>
<p>dir NP_???.LAS</p>
<p>I get 2 files:</p>
<p>NP_123.LAS</p>
<p>NP_1234.LAS</p>
<p>Using fnmatch with the NP_????.LAS mask I get only NP_1234.LAS, not NP_123.LAS, because in fnmatch each ? matches exactly one character.</p>
<p>Below is the code I'm running:</p>
<pre><code>def FindFiles(directory, pattern):
flist=[]
for root, dirs, files in os.walk(directory):
for filename in fnmatch.filter(files, pattern):
flist.append(os.path.join(root, filename))
return flist
</code></pre>
<p>Is it possible to change this to get the same file list as the dir command, using only one pattern?</p>
| -1 | 2016-10-05T15:02:36Z | 39,880,476 | <p>You could use <a href="https://docs.python.org/2/library/re.html" rel="nofollow">re</a> to give you more flexibility by allowing actual regular expressions. The regular expression "[0-9]{3,4}" matches either 3 or 4 digit numbers. </p>
<pre><code>import os
import re

def FindFiles(directory, pattern):
    flist = []
    prog = re.compile(pattern)  # compile once, outside the walk loop
    for root, dirs, files in os.walk(directory):
        for filename in filter(prog.match, files):
            flist.append(os.path.join(root, filename))
    return flist

# matches NP_123.LAS as well as NP_1234.LAS (note the escaped dot)
flist = FindFiles(directory, r"NP_[0-9]{3,4}\.LAS$")
</code></pre>
| 0 | 2016-10-05T17:40:55Z | [
"python",
"os.walk"
]
|
Use of getattr on a class with data descriptor | 39,877,574 | <p>While using data descriptors in building classes, I came across a strange behavior of getattr function on a class.</p>
<pre><code># this is a data descriptor
class String(object):
    def __get__(self, instance, owner):
        pass
    def __set__(self, instance, value):
        pass

# This defines a class A with 'dot' notation support for attribute 'a'
class A(object):
    a = String()

obj = A()
assert getattr(A, 'a') is A.__dict__['a']
# This raises AssertionError
</code></pre>
<p>The LHS returns <code>None</code>, while the RHS returns an instance of <code>String</code>. I thought <code>getattr</code> on an object was supposed to get the value for the key inside the <code>__dict__</code>. How does the <code>getattr</code> function work on a class object?</p>
| -4 | 2016-10-05T15:03:12Z | 39,877,704 | <p><code>getattr(A, 'a')</code> triggers the descriptor protocol, even on classes, so <code>String.__get__(None, A)</code> is called.</p>
<p>That returns <code>None</code> because your <code>String.__get__()</code> method has no explicit <code>return</code> statement.</p>
<p>From the <a href="https://docs.python.org/3/howto/descriptor.html#invoking-descriptors" rel="nofollow"><em>Descriptor Howto</em></a>:</p>
<blockquote>
<p>For classes, the machinery is in <code>type.__getattribute__()</code> which transforms <code>B.x</code> into <code>B.__dict__['x'].__get__(None, B)</code>.</p>
</blockquote>
<p><code>getattr(A, 'a')</code> is just a dynamic from of <code>A.a</code> here, so <code>A.__dict__['x'].__get__(None, A)</code> is executed, which is why you don't get the same thing as <code>A.__dict__['x']</code>.</p>
<p>If you expected it to return the descriptor object itself, you'll have to do so <em>explicitly</em>; <code>instance</code> will be set to <code>None</code> in that case:</p>
<pre><code>class String(object):
def __get__(self, instance, owner):
if instance is None:
return self
def __set__(self, instance, value):
pass
</code></pre>
<p>This is what the <code>property</code> descriptor object does.</p>
<p>Note that the <code>owner</code> argument to <code>descriptor.__get__</code> is optional; if not set you are supposed to use <code>type(instance)</code> instead.</p>
| 1 | 2016-10-05T15:08:37Z | [
"python",
"python-2.7",
"class",
"getattr"
]
|
Use of getattr on a class with data descriptor | 39,877,574 | <p>While using data descriptors in building classes, I came across a strange behavior of getattr function on a class.</p>
<pre><code># this is a data descriptor
class String(object):
    def __get__(self, instance, owner):
        pass
    def __set__(self, instance, value):
        pass

# This defines a class A with 'dot' notation support for attribute 'a'
class A(object):
    a = String()

obj = A()
assert getattr(A, 'a') is A.__dict__['a']
# This raises AssertionError
</code></pre>
<p>The LHS returns <code>None</code>, while the RHS returns an instance of <code>String</code>. I thought <code>getattr</code> on an object was supposed to get the value for the key inside the <code>__dict__</code>. How does the <code>getattr</code> function work on a class object?</p>
| -4 | 2016-10-05T15:03:12Z | 39,877,724 | <p><code>getattr(A, 'a')</code> is the same as <code>A.a</code>. This calls the respective descriptor, if present. So it provides the value presented by the descriptor, which is <code>None</code>. </p>
| 0 | 2016-10-05T15:09:19Z | [
"python",
"python-2.7",
"class",
"getattr"
]
|
How to create `input_fn` using `read_batch_examples` with `num_epochs` set? | 39,877,710 | <p>I have a basic <code>input_fn</code> that can be used with Tensorflow Estimators below. It works flawlessly without setting the <code>num_epochs</code> parameter; the obtained tensor has a discrete shape. Pass in <code>num_epochs</code> as anything other than <code>None</code> results in an unknown shape. My issue lies with constructing sparse tensors whilst using <code>num_epochs</code>; I cannot figure out how to generically create said tensors without knowing the shape of the input tensor.</p>
<p>Can anyone think of a solution to this problem? I'd like to be able to pass <code>num_epochs=1</code> to be able to evaluate only 1 time over the data set, as well as to pass to <code>predict</code> to yield a set of predictions the size of the data set, no more no less.</p>
<pre><code>def input_fn(batch_size):
examples_op = tf.contrib.learn.read_batch_examples(
FILE_NAMES,
batch_size=batch_size,
reader=tf.TextLineReader,
num_epochs=1,
parse_fn=lambda x: tf.decode_csv(x, [tf.constant([''], dtype=tf.string)] * len(HEADERS)))
examples_dict = {}
for i, header in enumerate(HEADERS):
examples_dict[header] = examples_op[:, i]
continuous_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32)
for k in CONTINUOUS_FEATURES}
# Problems lay here while creating sparse categorical tensors
categorical_cols = {
k: tf.SparseTensor(
indices=[[i, 0] for i in range(examples_dict[k].get_shape()[0])],
values=examples_dict[k],
shape=[int(examples_dict[k].get_shape()[0]), 1])
for k in CATEGORICAL_FEATURES}
feature_cols = dict(continuous_cols)
feature_cols.update(categorical_cols)
label = tf.string_to_number(examples_dict[LABEL], out_type=tf.int32)
return feature_cols, label
</code></pre>
| 0 | 2016-10-05T15:08:50Z | 40,052,668 | <p>I have solved the above issue by creating a function specific to what's expected on an <code>input_fn</code>; it takes in a dense column and creates a SparseTensor without knowing shape. The function was made possible using <code>tf.range</code> and <code>tf.shape</code>. Without further ado, here is the working generic <code>input_fn</code> code that works independently of <code>num_epochs</code> being set:</p>
<pre><code>def input_fn(batch_size):
examples_op = tf.contrib.learn.read_batch_examples(
FILE_NAMES,
batch_size=batch_size,
reader=tf.TextLineReader,
num_epochs=1,
parse_fn=lambda x: tf.decode_csv(x, [tf.constant([''], dtype=tf.string)] * len(HEADERS)))
examples_dict = {}
for i, header in enumerate(HEADERS):
examples_dict[header] = examples_op[:, i]
feature_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32)
for k in CONTINUOUS_FEATURES}
feature_cols.update({k: dense_to_sparse(examples_dict[k])
for k in CATEGORICAL_FEATURES})
label = tf.string_to_number(examples_dict[LABEL], out_type=tf.int32)
return feature_cols, label
def dense_to_sparse(dense_tensor):
indices = tf.to_int64(tf.transpose([tf.range(tf.shape(dense_tensor)[0]), tf.zeros_like(dense_tensor, dtype=tf.int32)]))
values = dense_tensor
shape = tf.to_int64([tf.shape(dense_tensor)[0], tf.constant(1)])
return tf.SparseTensor(
indices=indices,
values=values,
shape=shape
)
</code></pre>
<p>Hope this helps someone!</p>
| 0 | 2016-10-14T22:06:36Z | [
"python",
"tensorflow",
"skflow"
]
|
Does it make sense to use cross-correlation on arrays of timestamps? | 39,877,725 | <p>I want to find the offset between two arrays of timestamps. They could represent, let's say, the onset of beeps in two audio tracks.</p>
<p><strong>Note</strong>: There may be extra or missing onsets in either track.</p>
<p>I found some information about cross-correlation (e.g. <a href="https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar">https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar</a>) which looked promising.</p>
<p>I assumed that each audio track is 10 seconds in duration, and represented the beep onsets as the peaks of a "square wave" with a sample rate of 44.1 kHz:</p>
<pre><code>import numpy as np
rfft = np.fft.rfft
irfft = np.fft.irfft
track_1 = np.array([..., 5.2, 5.5, 7.0, ...])
# The onset in track_2 at 8.0 is "extra," it has no
# corresponding onset in track_1
track_2 = np.array([..., 7.2, 7.45, 8.0, 9.0, ...])
frequency = 44100
num_samples = 10 * frequency
wave_1 = np.zeros(num_samples)
wave_1[(track_1 * frequency).astype(int)] = 1
wave_2 = np.zeros(num_samples)
wave_2[(track_2 * frequency).astype(int)] = 1
xcor = irfft(rfft(wave_1) * np.conj(rfft(wave_2)))
offset = xcor.argmax()
</code></pre>
<p>This approach isn't particularly fast, but I was able to get fairly consistent results even with quite low frequencies. However... I have no idea if this is a good idea! Is there a better way to find this offset than cross-correlation?</p>
<p><strong>Edit</strong>: added note about missing and extra onsets.</p>
| 8 | 2016-10-05T15:09:24Z | 39,920,991 | <p>Yes it does make sense. This is commonly done in matlab. Here is a link to a similar application:</p>
<p><a href="http://www.mathworks.com/help/signal/ug/cross-correlation-of-delayed-signal-in-noise.html" rel="nofollow">http://www.mathworks.com/help/signal/ug/cross-correlation-of-delayed-signal-in-noise.html</a></p>
<p>Several considerations</p>
<p>Cross-correlation is commonly used for times when the signal in question has too much noise. If you don't have any noise to worry about I would use a different method however. </p>
| 3 | 2016-10-07T15:34:52Z | [
"python",
"fft",
"waveform",
"cross-correlation"
]
|
Does it make sense to use cross-correlation on arrays of timestamps? | 39,877,725 | <p>I want to find the offset between two arrays of timestamps. They could represent, let's say, the onset of beeps in two audio tracks.</p>
<p><strong>Note</strong>: There may be extra or missing onsets in either track.</p>
<p>I found some information about cross-correlation (e.g. <a href="https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar">https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar</a>) which looked promising.</p>
<p>I assumed that each audio track is 10 seconds in duration, and represented the beep onsets as the peaks of a "square wave" with a sample rate of 44.1 kHz:</p>
<pre><code>import numpy as np
rfft = np.fft.rfft
irfft = np.fft.irfft
track_1 = np.array([..., 5.2, 5.5, 7.0, ...])
# The onset in track_2 at 8.0 is "extra," it has no
# corresponding onset in track_1
track_2 = np.array([..., 7.2, 7.45, 8.0, 9.0, ...])
frequency = 44100
num_samples = 10 * frequency
wave_1 = np.zeros(num_samples)
wave_1[(track_1 * frequency).astype(int)] = 1
wave_2 = np.zeros(num_samples)
wave_2[(track_2 * frequency).astype(int)] = 1
xcor = irfft(rfft(wave_1) * np.conj(rfft(wave_2)))
offset = xcor.argmax()
</code></pre>
<p>This approach isn't particularly fast, but I was able to get fairly consistent results even with quite low frequencies. However... I have no idea if this is a good idea! Is there a better way to find this offset than cross-correlation?</p>
<p><strong>Edit</strong>: added note about missing and extra onsets.</p>
| 8 | 2016-10-05T15:09:24Z | 40,000,970 | <p>If <code>track_1</code> and <code>track_2</code> are timestamps of beeps and both catch all of the beeps, then there's no need to build a waveform and do cross-correlation. Just find the average delay between the two arrays of beep timestamps:</p>
<pre><code>import numpy as np
frequency = 44100
num_samples = 10 * frequency
actual_time_delay = 0.020
timestamp_noise = 0.001
# Assumes track_1 and track_2 are timestamps of beeps, with a delay and noise added
track_1 = np.array([1,2,10000])
track_2 = np.array([1,2,10000]) + actual_time_delay*frequency \
          + frequency*np.random.uniform(-timestamp_noise,timestamp_noise,len(track_1))
calculated_time_delay = np.mean(track_2 - track_1) / frequency
print('Calculated time delay (s):',calculated_time_delay)
</code></pre>
<p>This yields approximately <code>0.020</code> s depending on the random errors introduced and is about as fast as it gets.</p>
<p><strong>EDIT:</strong> If extra or missing timestamps need to be handled, then the code could be modified as follows. Essentially, if the error on the timestamp randomness is bounded, then apply a statistical "mode" function to the delays between all the values. Anything within the timestamp randomness bound is clustered together and identified, then the original identified delays are averaged. </p>
<pre><code>import numpy as np
from scipy.stats import mode
frequency = 44100
num_samples = 10 * frequency
actual_time_delay = 0.020
# Assumes track_1 and track_2 are timestamps of beeps, with a delay, noise added,
# and extra/missing beeps
timestamp_noise = 0.001
timestamp_noise_threshold = 0.002
track_1 = np.array([1,2,5,10000,100000,200000])
track_2 = np.array([1,2,44,10000,30000]) + actual_time_delay*frequency
track_2 = track_2 + frequency*np.random.uniform(-timestamp_noise,timestamp_noise,len(track_2))
deltas = []
for t1 in track_1:
for t2 in track_2:
deltas.append( t2 - t1)
sorted_deltas = np.sort(deltas)/frequency
truncated_deltas = np.array(sorted_deltas/(timestamp_noise_threshold)+0.5,
dtype=int)*timestamp_noise_threshold
truncated_time_delay = min(mode(truncated_deltas).mode,key=abs)
calculated_time_delay = np.mean(sorted_deltas[truncated_deltas == truncated_time_delay])
print('Calculated time delay (s):',calculated_time_delay)
</code></pre>
<p>Obviously, if too many beeps are missing or extra beeps exist then the code begins to fail at some point, but in general it should work well and be way faster than trying to generate a whole waveform and performing correlation. </p>
| 3 | 2016-10-12T14:16:08Z | [
"python",
"fft",
"waveform",
"cross-correlation"
]
|
Using list comprehension to create a list of lists in a recursive function | 39,877,809 | <p>I am experimenting with recursive functions. My goal is to produce:</p>
<blockquote>
<p>A function <code>combo</code> that generates all non-increasing series that add up to n</p>
</blockquote>
<p>Some sample inputs/outputs:</p>
<pre><code>>>> print (combo(3))
[[3], [2, 1], [1, 1, 1]]
>>> print (combo(4))
[[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
>>> print (combo(5))
[[5], [4, 1], [3, 2], [3, 1, 1], [2, 2, 1], [2, 1, 1, 1], [1, 1, 1, 1, 1]]
</code></pre>
<p>After a lot of trial and error, I came up with the following function which does exactly what I want:</p>
<pre><code>def combo(n, limit=None):
if not limit:
limit = n
# return [ LIST COMPREHENSION HERE ]
res = []
for i in range(limit, 0, -1):
if i == n:
res.append([i])
else:
res.extend([i] + x for x in combo(n - i, min(n - i, i)))
return res
</code></pre>
<p>My question: <strong>Is there a way (pythonic or not) to return this same result using a single list comprehension, without appending to and extending <code>res</code></strong>?</p>
<p>Here are two things I've tried, along with their mangled results:</p>
<pre><code>return [[i] if i == n else [[i] + x for x in combo(n - i, min(n - i, i))] for i in range(limit, 0, -1)]
# lists get deeper: [[4], [[3, 1]], [[2, 2], [2, [1, 1]]], [[1, [1, [1, 1]]]]]
return [[i if i == n else ([i] + x for x in combo(n - i, min(n - i, i))) for i in range(limit, 0, -1)]]
# parentheses interpreted as generator: [[4, <generator object combo.<locals>.<listcomp>.<genexpr> at 0x01D84E40>, etc.]]
</code></pre>
<p>I realize that the answer may be quite ugly, but I've spent enough time trying that I just want to know if it is <em>possible</em>.</p>
| 1 | 2016-10-05T15:14:23Z | 39,877,938 | <p>Convert <em>extending</em> to <em>appending</em> and it becomes easier to grok how to convert this:</p>
<pre><code>res = []
for i in range(limit, 0, -1):
if i == n:
res.append([i])
else:
# this is the generator expression pulled out of the res.extend() call
for x in combo(n - i, min(n - i, i)):
res.append([i] + x)
</code></pre>
<p>You can now move the <code>i == n</code> case into the other branch (using a helper <code>items</code> variable here for readability):</p>
<pre><code>res = []
for i in range(limit, 0, -1):
items = [[]] if i == n else combo(n - i, min(n - i, i))
for x in items:
res.append([i] + x)
</code></pre>
<p>If <code>i == n</code>, this causes the loop to iterate once and produce an empty list, so you effectively get <code>res.append([i])</code>.</p>
<p>This can then trivially be converted to a list comprehension (inlining <code>items</code> into the <code>for</code> loop):</p>
<pre><code>return [
[i] + x
for i in range(limit, 0, -1)
for x in ([[]] if i == n else combo(n - i, min(n - i, i)))]
</code></pre>
<p>Whether or not you <em>should</em> is another matter; this is hardly easier to read.</p>
| 4 | 2016-10-05T15:20:17Z | [
"python",
"recursion",
"list-comprehension"
]
|
python pattern recognition in data | 39,877,816 | <p>I have a large (order of 10k) set of data, letâs say in the form of key-value:</p>
<pre><code>A -> 2
B -> 5
C -> 7
D -> 1
E -> 13
F -> 1
G -> 3
. . .
</code></pre>
<p>Also a smaller sample set (order of 10):</p>
<pre><code>X -> 6
Y -> 8
Z -> 14
. . .
</code></pre>
<p>Although the values are <em>shifted</em>, the pattern can be found in the original data. What would be the best approach to match or do pattern recognition so that the machine recognizes the corresponding keys in the original data:</p>
<pre><code>X -> B
Y -> C
Z -> E
. . .
</code></pre>
<p>I have been reading about <a href="https://www.tensorflow.org/" rel="nofollow">TensorFlow</a> and have been doing some exercises, but as a total noob I am not quite sure this is the right tool, or if it is, how exactly to go about the problem.</p>
<p>Thanks for any hints.</p>
| 0 | 2016-10-05T15:14:44Z | 39,879,165 | <p>First, you need to think about a loss function, i.e. why is solution 1 better than solution 2? Can you come up with an objective score function such that lower scores are always better? </p>
<p>E.g. in your example, is this solution any worse:</p>
<pre><code>X -> C
Y -> C
Z -> E
</code></pre>
<p>Once you've defined what you are trying to optimize, we can tell you if TensorFlow is the right tool.</p>
| 1 | 2016-10-05T16:21:20Z | [
"python",
"pattern-matching",
"tensorflow"
]
|
How to optionally pass arguments with non-default values in Python? | 39,877,826 | <p>I have the following method:</p>
<pre><code>def a(b='default_b', c='default_c', d='default_d'):
# â¦
pass
</code></pre>
<p>And I have some command-line interface that allows either to provide variables <code>user_b</code>, <code>user_c</code>, <code>user_d</code> or to set them to <code>None</code> - i.e. the simple Python <code>argparse</code> module.</p>
<p>I want to make a call like:</p>
<pre><code>a(b=user_b if user_b is not None,
c=user_c if user_c is not None,
d=user_d if user_d is not None)
</code></pre>
<p>If the variable is <code>None</code> I want to use a default value from the method's argument.</p>
<p>The only way I found is to check all the combinations of the user variables:</p>
<pre><code>if not user_b and not user_c and not user_d:
a()
elif not user_b and not user_c and user_d:
a(d)
â¦
elif user_b and user_c and user_d:
a(b, c, d)
</code></pre>
<p>What is more efficient, fancy and Pythonic way to solve my problem?</p>
| 1 | 2016-10-05T15:15:11Z | 39,877,888 | <pre><code>arg_dict = {}
if user_b is not None:
arg_dict["b"] = user_b
if user_c is not None:
arg_dict["c"] = user_c
if user_d is not None:
arg_dict["d"] = user_d
a(**arg_dict)
</code></pre>
<p>Not exactly pretty though.</p>
<p>A nicer way would just be to re-write your function <code>a</code> to accept <code>None</code> as a default answer, and then check for that. (I.e. <code>if b is None: b = "default_b"</code>)</p>
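<p>A minimal sketch of that rewrite (the default strings are taken from the question):</p>
<pre><code>def a(b=None, c=None, d=None):
    # fall back to the real defaults whenever the caller passed None
    if b is None:
        b = 'default_b'
    if c is None:
        c = 'default_c'
    if d is None:
        d = 'default_d'
    # ...

a(b=user_b, c=user_c, d=user_d)  # None values simply pick up the defaults
</code></pre>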
| 2 | 2016-10-05T15:17:53Z | [
"python",
"syntactic-sugar"
]
|
How to optionally pass arguments with non-default values in Python? | 39,877,826 | <p>I have the following method:</p>
<pre><code>def a(b='default_b', c='default_c', d='default_d'):
# â¦
pass
</code></pre>
<p>And I have some command-line interface that allows either to provide variables <code>user_b</code>, <code>user_c</code>, <code>user_d</code> or to set them to <code>None</code> - i.e. the simple Python <code>argparse</code> module.</p>
<p>I want to make a call like:</p>
<pre><code>a(b=user_b if user_b is not None,
c=user_c if user_c is not None,
d=user_d if user_d is not None)
</code></pre>
<p>If the variable is <code>None</code> I want to use a default value from the method's argument.</p>
<p>The only way I found is to check all the combinations of the user variables:</p>
<pre><code>if not user_b and not user_c and not user_d:
a()
elif not user_b and not user_c and user_d:
a(d)
â¦
elif user_b and user_c and user_d:
a(b, c, d)
</code></pre>
<p>What is more efficient, fancy and Pythonic way to solve my problem?</p>
| 1 | 2016-10-05T15:15:11Z | 39,878,272 | <p>You could write a function that filters out unwanted values then use it to scrub the target function call.</p>
<pre><code>def scrub_params(**kw):
    return {k: v for k, v in kw.items() if v is not None}
some_function(**scrub_params(foo=args.foo, bar=args.bar))
</code></pre>
| 0 | 2016-10-05T15:36:27Z | [
"python",
"syntactic-sugar"
]
|
sorting records with in grouped item in a dataframe | 39,877,898 | <p>Objective : I am trying to sort rows within a group in DataFrame [in python and pyspark]</p>
<p>This is what I have,</p>
<pre><code>id time
1 10
1 8
1 1
1 5
2 1
2 8
</code></pre>
<p>This is what I am looking for,</p>
<pre><code>id time delta
1 1 4
1 5 3
1 8 2
1 10 0
2 1 7
2 8 0
</code></pre>
<p>There is a solution to this <a href="http://stackoverflow.com/questions/39826720/calculating-delta-time-between-records-in-dataframe">here</a>, but sorting within a grouped object was not possible. Any hints on this would be helpful; I am looking to work with Python pandas and also PySpark.</p>
| 0 | 2016-10-05T15:18:34Z | 39,878,815 | <p>You can use lag over a window:</p>
<pre><code>from pyspark.sql.functions import lag
from pyspark.sql.window import Window
df = sc.parallelize([
(1, 10), (1, 8 ), (1, 1), (1, 5), (2, 1), (2, 8),
]).toDF(["id", "time"])
df.withColumn("delta",
df["time"] - lag("time", 1).over(Window().partitionBy("id").orderBy("time")))
</code></pre>
<p>where <code>Window().partitionBy("id").orderBy("time")</code> creates a frame with records:</p>
<ul>
<li>Grouped by <code>id</code>.</li>
<li>Sorted by <code>time</code>.</li>
</ul>
<p>and <code>lag("time", 1)</code> emits a previous value of time according to the frame definition.</p>
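<p>Since the question also asks about pandas, a rough equivalent there (my sketch, using <code>groupby</code> plus <code>diff</code>) is:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 1, 2, 2], "time": [10, 8, 1, 5, 1, 8]})
df = df.sort_values(["id", "time"])
# diff() within each id gives time - previous time (NaN for the first row of a group)
df["delta"] = df.groupby("id")["time"].diff()
</code></pre>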
| 0 | 2016-10-05T16:02:11Z | [
"python",
"pandas",
"dataframe",
"pyspark",
"spark-dataframe"
]
|
Unknown reason of dead multiprocessing in python-3.5 | 39,877,986 | <p>I have a data frame <code>df</code> that contains information on two hundred thousand items. Now I want to measure their pair-wise similarity and pick the top <code>n</code> pairs. That means nearly tens of billions of calculations. Given the huge computation cost, I chose to run it via multiprocessing. Here is the code I have written so far:</p>
<pre><code>user_list = list(range(1, 200001))
def gen_pair():
for u1 in reversed(user_list):
for u2 in reversed(list(range(1, u1))):
yield (u1, u2)
def cal_sim(u_pair):
u1, u2 = u_pair
# do sth quite complex...
sim = sim_f(df[u1], df[u2])
if sim < 0.5:
return None
else:
return (u1, u2, sim)
with multiprocessing.Pool(processes=3) as pool:
vals = pool.map(cal_sim, gen_pair())
for v in vals:
if v is not None:
with open('result.txt', 'a') as f:
f.write('{0}\t{1}\t{2}\n'.format(v[0], v[1], v[2]))
</code></pre>
<p>When I just take the first 1000 users, it works quite well. But when I take all of them, it is kind of dead and not a single word appears in <code>result.txt</code>. Even if I increase the number of processes, it is still dead. I wonder what the reason is and how I can fix it? Thanks in advance.</p>
<p>EDIT:</p>
<p>Here is my code of <code>sim_f</code>:</p>
<pre><code>def sim_f(t1, t2):
def intersec_f(l1, l2):
return set(l1)&set(l2)
def union_f(l1, l2):
return set(l1)|set(l2)
a_arr1, a_arr2 = t1[0], t1[1]
b_arr1, b_arr2 = t2[0], t2[1]
sim = float(len(union_f(intersec_f(a_arr1, a_arr2), intersec_f(b_arr1, b_arr2))))\
/ float(len(union_f(union_f(a_arr1, a_arr2), union_f(b_arr1, b_arr2))))
return sim
</code></pre>
| 1 | 2016-10-05T15:22:34Z | 39,878,114 | <p>There's little to go on, but try:</p>
<pre><code>vals = pool.imap(cal_sim, gen_pair())
^
</code></pre>
<p>instead: note that I changed "map" to "imap". As documented, <code>map()</code> blocks until the <em>entire</em> computation is complete, so you never get to your <code>for</code> loop until all the work is finished. <code>imap</code> returns "at once".</p>
<p>And if you don't care in which order results are delivered, use <code>imap_unordered()</code> instead.</p>
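<p>With tens of billions of pairs it may also help (an assumption worth benchmarking) to pass a <code>chunksize</code>, so workers aren't handed one tiny task at a time:</p>
<pre><code>vals = pool.imap_unordered(cal_sim, gen_pair(), chunksize=10000)
</code></pre>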
<h2>OPENING THE FILE JUST ONCE</h2>
<p>With respect to an issue raised in comments:</p>
<pre><code>with open('result.txt', 'w') as f:
for v in vals:
if v is not None:
f.write('{0}\t{1}\t{2}\n'.format(v[0], v[1], v[2]))
</code></pre>
<p>is the obvious way to open the file just once. But I'd be surprised if it helped you - all evidence to date suggests it's just that <code>cal_sim()</code> is plain expensive.</p>
<h2>SPEEDING SIM_F</h2>
<p>There's lots of redundant work being done in:</p>
<pre><code>def sim_f(t1, t2):
def intersec_f(l1, l2):
return set(l1)&set(l2)
def union_f(l1, l2):
return set(l1)|set(l2)
a_arr1, a_arr2 = t1[0], t1[1]
b_arr1, b_arr2 = t2[0], t2[1]
sim = float(len(union_f(intersec_f(a_arr1, a_arr2), intersec_f(b_arr1, b_arr2))))\
/ float(len(union_f(union_f(a_arr1, a_arr2), union_f(b_arr1, b_arr2))))
return sim
</code></pre>
<p>Wholly untested, here's an obvious ;-) rewrite:</p>
<pre><code>def sim_f(t1, t2):
a1, a2 = set(t1[0]), set(t1[1])
b1, b2 = set(t2[0]), set(t2[1])
sim = float(len((a1 & a2) | (b1 & b2))) \
/ len((a1 | a2) | (b1 | b2))
return sim
</code></pre>
<p>This is faster because:</p>
<ul>
<li>Converting to sets is done only once for each input.</li>
<li>No useless (but time-consuming) conversions of sets <em>to</em> sets.</li>
<li>No internal function-call overhead for unions and intersections.</li>
<li>One needless call to <code>float()</code> is eliminated (or, in Python 3, the remaining <code>float()</code> call could also be eliminated).</li>
</ul>
| 2 | 2016-10-05T15:28:45Z | [
"python",
"multiprocessing",
"python-3.5"
]
|
How to overcome MemoryError of numpy.unique | 39,877,999 | <p>I am using NumPy version 1.11.1 and have to deal with a two-dimensional array of</p>
<pre><code>my_arr.shape = (25000, 25000)
</code></pre>
<p>All values are integer, and I need a unique list of the arrays values. When using <code>lst = np.unique(my_arr)</code> I am getting:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#38>", line 1, in <module>
palette = np.unique(arr)
File "c:\Python27\lib\site-packages\numpy\lib\arraysetops.py", line 176, in unique
ar = np.asanyarray(ar).flatten()
MemoryError
</code></pre>
<p>My machine has only 8 GB RAM, but I tried it with another machine with 16 GB RAM, and the result is the same. Monitoring the memory and CPU usage doesn't show that the problems are related to RAM or CPU.</p>
<p>In principle, I know the values the array consists of, but what if the input changes... Also, if I want to replace values of the array by another (let's say all 2 by 0), will it need a lot of RAM as well?</p>
| 1 | 2016-10-05T15:23:12Z | 39,878,179 | <p>Python 32-bit can't access more than 4 GiB RAM (often ~2.5 GiB). The obvious answer would be to use the 64-bit version. If that doesn't work, another solution would be to use <code>numpy.memmap</code> and memory-map the array into a file stored on disk.</p>
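<p>A minimal sketch of the memmap route (the file name, dtype, and chunk size here are assumptions):</p>
<pre><code>import numpy as np

arr = np.memmap('my_arr.dat', dtype=np.int32, mode='r', shape=(25000, 25000))

uniques = np.array([], dtype=np.int32)
for start in range(0, arr.shape[0], 1000):
    # union1d flattens its inputs, so only one 1000-row slice is in RAM at a time
    uniques = np.union1d(uniques, arr[start:start + 1000])
</code></pre>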
| 0 | 2016-10-05T15:32:02Z | [
"python",
"arrays",
"numpy"
]
|
Finding Set of strings from list of pairs (string, int) with maximum int value | 39,878,076 | <p>I have a list of <code>(str,int)</code> pairs </p>
<p><code>list_word = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]</code></p>
<p>This basically says how many times each word showed up in a text. </p>
<p>What I want to get is the set of words with maximum occurrence along with maximum occurrence number. So, in the above example, I want to get</p>
<p><code>(set(['CATS', 'DOGS','ARE']), 2)</code></p>
<p>The solution I can think of is looping through the list. But is there any elegant way of doing this?</p>
| 0 | 2016-10-05T15:26:50Z | 39,878,239 | <p>Two linear scans, first to find the maximal element:</p>
<pre><code>from operator import itemgetter

maxcount = max(map(itemgetter(1), mylist))
</code></pre>
<p>then a second to pull out the values you care about:</p>
<pre><code>maxset = {word for word, count in mylist if count == maxcount}, maxcount
</code></pre>
<p>If you needed to get the sets for more than just the maximal count, you can use <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> to accumulate by count in a single pass:</p>
<pre><code>from collections import defaultdict
sets_by_count = defaultdict(set)
for word, count in mylist:
sets_by_count[count].add(word)
</code></pre>
<p>Which can then be followed by <code>allcounts = sorted(sets_by_count.items(), key=itemgetter(0), reverse=True)</code> to get a <code>list</code> of <code>count, set</code> pairs, from highest to lowest count (with minimal sorting work, since it's sorting only a number of items equal to the unique counts, not all words).</p>
| 2 | 2016-10-05T15:34:52Z | [
"python",
"string",
"set",
"max",
"pair"
]
|
Finding Set of strings from list of pairs (string, int) with maximum int value | 39,878,076 | <p>I have a list of <code>(str,int)</code> pairs </p>
<p><code>list_word = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]</code></p>
<p>This basically says how many times each word showed up in a text. </p>
<p>What I want to get is the set of words with maximum occurrence along with maximum occurrence number. So, in the above example, I want to get</p>
<p><code>(set(['CATS', 'DOGS','ARE']), 2)</code></p>
<p>The solution I can think of is looping through the list. But is there any elegant way of doing this?</p>
| 0 | 2016-10-05T15:26:50Z | 39,878,371 | <p>Through the use of <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow" title="list comprehensions">list comprehensions</a> you can first select the maximum occurrence and then filter the strings accordingly</p>
<pre><code>pairs = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]
mx = max([x[1] for x in pairs])
res = (set([x[0] for x in pairs if x[1] == mx]), mx)
</code></pre>
| 0 | 2016-10-05T15:40:55Z | [
"python",
"string",
"set",
"max",
"pair"
]
|
Finding Set of strings from list of pairs (string, int) with maximum int value | 39,878,076 | <p>I have a list of <code>(str,int)</code> pairs </p>
<p><code>list_word = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]</code></p>
<p>This basically says how many times each word showed up in a text. </p>
<p>What I want to get is the set of words with maximum occurrence along with maximum occurrence number. So, in the above example, I want to get</p>
<p><code>(set(['CATS', 'DOGS','ARE']), 2)</code></p>
<p>The solution I can think of is looping through the list. But is there any elegant way of doing this?</p>
| 0 | 2016-10-05T15:26:50Z | 39,878,658 | <p>Convert the <code>list</code> to a <code>dict</code> with the <em>key</em> as count and the <em>value</em> as a set of words. Find the <code>max</code> key, and its corresponding value.</p>
<pre><code>from collections import defaultdict
my_list = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]
my_dict = defaultdict(set)
for k, v in my_list:
my_dict[v].add(k)
max_value = max(my_dict.keys())
print (my_dict[max_value], max_value)
# prints: (set(['CATS', 'ARE', 'DOGS']), 2)
</code></pre>
| 0 | 2016-10-05T15:54:56Z | [
"python",
"string",
"set",
"max",
"pair"
]
|
Finding Set of strings from list of pairs (string, int) with maximum int value | 39,878,076 | <p>I have a list of <code>(str,int)</code> pairs </p>
<p><code>list_word = [('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1), ('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)]</code></p>
<p>This basically says how many times each word showed up in a text. </p>
<p>What I want to get is the set of words with maximum occurrence along with maximum occurrence number. So, in the above example, I want to get</p>
<p><code>(set(['CATS', 'DOGS','ARE']), 2)</code></p>
<p>The solution I can think of is looping through the list. But is there any elegant way of doing this?</p>
| 0 | 2016-10-05T15:26:50Z | 39,880,002 | <p>While the more pythonic solutions are certainly easier on the eye, unfortunately the requirement for two scans, or building data-structures you don't really want is significantly slower.</p>
<p>The following fairly boring solution is about ~55% faster than the dict solution, and ~70% faster than the comprehension based solutions based on the provided example data (and my implementations, machine, benchmarking etc.)</p>
<p>This almost certainly down to the single scan here rather than two.</p>
<pre><code>word_occs = [
('AND', 1), ('BECAUSE', 1), ('OF', 1), ('AFRIAD', 1), ('NEVER', 1),
('CATS', 2), ('ARE', 2), ('FRIENDS', 1), ('DOGS', 2)
]
def linear_scan(word_occs):
max_val = 0
max_set = None
for word, occ in word_occs:
if occ == max_val:
max_set.add(word)
elif occ > max_val:
max_val, max_set = occ, {word}
return max_set, max_val
</code></pre>
<p>To be fair, they are all blazing fast and in your case readability might be more important.</p>
| 0 | 2016-10-05T17:10:32Z | [
"python",
"string",
"set",
"max",
"pair"
]
|
Running time in a while loop (really slow) | 39,878,086 | <p>Following is my code:</p>
<pre><code>import socket
import time
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send(b'GET /code/romeo.txt HTTP/1.1\n')
mysock.send(b'Host: www.py4inf.com\n\n')
all = b""
while True:
data = mysock.recv(512)
all = all + data
if len(data) < 1:
break
mysock.close()
stuff = all.decode()
position = stuff.find('\r\n\r\n')
print(stuff[position+4:])
</code></pre>
<p>There must be something wrong, because it takes almost 30 seconds to reach the break in the while loop.
However, if I change the code <code>if len(data) < 1:</code> to <code>if len(data) < 100:</code> it takes just 0.5 seconds.</p>
<p>Please help. It has haunted me for a while.
The sample website: <a href="http://www.py4inf.com/code/romeo.txt" rel="nofollow">http://www.py4inf.com/code/romeo.txt</a></p>
| 0 | 2016-10-05T15:27:15Z | 39,878,994 | <p>Web servers don't have to close connections immediately.In fact, they may be looking for another http request. Just add <code>print(data)</code> after the recv and you'll see you get the data, then a pause, then <code>b''</code>, meaning the server finally closed the socket.</p>
<p>You'll also notice that the server sends a header that includes "Content-Length: 167\r\n". Once the header has finished, the server will send exactly 167 bytes of data. You could parse out the header yourself but that's why we have client libraries like <code>urllib</code> and <code>requests</code>.</p>
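<p>For comparison, the whole exercise with <code>requests</code> is a couple of lines:</p>
<pre><code>import requests

print(requests.get('http://www.py4inf.com/code/romeo.txt').text)
</code></pre>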
<p>I was curious about how much would need to be added to the request header to get the connection to close immediately, and <code>Connection: close</code> seemed to do it. This returns right away:</p>
<pre><code>import socket
import time
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('www.py4inf.com', 80))
mysock.send(b'GET /code/romeo.txt HTTP/1.1\n')
mysock.send(b'Connection: close\n')
mysock.send(b'Host: www.py4inf.com\n\n')
all = b""
while True:
data = mysock.recv(512)
all = all + data
if len(data) < 1:
break
mysock.close()
stuff = all.decode()
position = stuff.find('\r\n\r\n')
print(stuff[position+4:])
</code></pre>
| 0 | 2016-10-05T16:10:54Z | [
"python",
"sockets"
]
|
Change characters inside a .txt file with script | 39,878,109 | <p>I have a text file:<br>
<code>3.4 - New York, United States</code> </p>
<p>I need to create a script (preferably written in python) that can change the <code>-</code> (dash) character to a <code>,</code> (comma) character,</p>
<p>And then save the output as a .csv file</p>
| -2 | 2016-10-05T15:28:38Z | 39,878,822 | <p>Use the CSV module, here is a stackoverflow example, <a href="http://stackoverflow.com/questions/6916542/writing-list-of-strings-to-excel-csv-file-in-python">Writing List of Strings to Excel CSV File in Python</a>. If you want to replace the "-" with a "," open the file and read the string, replace it with <code>str.replace('-', ',')</code> and write it back to the file. Here is an example. </p>
<pre><code>import csv
with open("file_in.txt", 'r') as file_in, open("file_out.csv", 'w') as file_out:
    writer = csv.writer(file_out)
    for line in file_in:
        # "3.4 - New York, United States" -> ["3.4", "New York, United States"]
        writer.writerow(line.strip().split(' - ', 1))
</code></pre>
| 0 | 2016-10-05T16:02:24Z | [
"python",
"csv"
]
|
Find a string that is not identical with Python | 39,878,226 | <p>So... here's what I'm trying to do:
For each line in the datafile, check if the other file contains this string.</p>
<p>I tried some stuff from other posts, but none of them were any good.</p>
<p>The code below says it didn't find any of the strings it was looking for, even though they were present somewhere in the file.</p>
<pre><code>def search():
file1= open('/home/example/file1.txt', 'r')
datafile= open('/home/user/datafile.txt', 'r')
for line in datafile:
if line in file1:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p><strong>EDIT:</strong>
I want it to find a match if the line is not identical.</p>
<p>Example: It should find <strong>hello</strong> in the document if it's written as <strong>hello123</strong></p>
| 0 | 2016-10-05T15:34:30Z | 39,878,467 | <p>Assuming the content of the first file is not extremely large, you can read the entire file as string and then check using string containment:</p>
<pre><code>def search():
file1_content = open('/home/example/file1.txt').read()
datafile = open('/home/user/datafile.txt')
for line in datafile:
if line in file1_content:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p>Note that the default mode for <code>open</code> is <code>'r'</code>, so you really don't need to pass that parameter if you're reading in text mode.</p>
| 3 | 2016-10-05T15:45:03Z | [
"python"
]
|
Find a string that is not identical with Python | 39,878,226 | <p>So... here's what I'm trying to do:
For each line in the datafile, check if the other file contains this string.</p>
<p>I tried some stuff from other posts, but none of them were any good.</p>
<p>The code below says it didn't find any of the strings it was looking for, even though they were present somewhere in the file.</p>
<pre><code>def search():
file1= open('/home/example/file1.txt', 'r')
datafile= open('/home/user/datafile.txt', 'r')
for line in datafile:
if line in file1:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p><strong>EDIT:</strong>
I want it to find a match if the line is not identical.</p>
<p>Example: It should find <strong>hello</strong> in the document if it's written as <strong>hello123</strong></p>
| 0 | 2016-10-05T15:34:30Z | 39,878,629 | <p>You can read the file into a <code>set</code> and then check for inclusion in the second file. <code>set</code>'s are typically faster at checking inclusion that lists.</p>
<pre><code>def search():
file1 = set(open('/home/example/file1.txt'))
datafile= open('/home/user/datafile.txt', 'r')
for line in datafile:
if line in file1:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p>You could also use set operations to extract, for instance, all lines not in the first file:</p>
<pre><code>set(open('/home/user/datafile.txt', 'r')) - set(open('/home/example/file1.txt'))
</code></pre>
| 0 | 2016-10-05T15:53:47Z | [
"python"
]
|
Find a string that is not identical with Python | 39,878,226 | <p>So... here's what I'm trying to do:
For each line in the datafile, check if the other file contains this string.</p>
<p>I tried some stuff from other posts, but none of them were any good.</p>
<p>The code below says it didn't find any of the strings it was looking for, even though they were present somewhere in the file.</p>
<pre><code>def search():
file1= open('/home/example/file1.txt', 'r')
datafile= open('/home/user/datafile.txt', 'r')
for line in datafile:
if line in file1:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p><strong>EDIT:</strong>
I want it to find a match if the line is not identical.</p>
<p>Example: It should find <strong>hello</strong> in the document if it's written as <strong>hello123</strong></p>
| 0 | 2016-10-05T15:34:30Z | 40,114,272 | <p>I found the answer.</p>
<pre><code>def search():
    datafile = open('/codes/python/datafile.txt')
    file1 = open('/codes/python/file1.txt').read()  # read the file once, not on every iteration
    for line in datafile:
        line = line.rstrip()
if line in file1:
print '%s found' % line
else:
print '%s not found' % line
</code></pre>
<p>I added <code>.read()</code> to the file it's looking into, <strong>NOT</strong> the file with the strings it should be looking for (datafile).</p>
<p>Added <code>line = line.rstrip()</code></p>
| 0 | 2016-10-18T17:13:23Z | [
"python"
]
|
8 Queens (pyglet-python) | 39,878,289 | <p>I'm trying to make an 8 queens game in pyglet. I have successfully rendered board.png in the window. Now when I paste the queen.png image on it, I want it to show only the queen, not the white part. I removed the white part using Photoshop, but when I draw it on board.png in pyglet it shows that white part again. Please help.</p>
<pre><code>import pyglet
from pyglet.window import Window, mouse, gl
# Display an image in the application window
image = pyglet.image.Texture.create(800,800)
board = pyglet.image.load('resources/Board.png')
queen = pyglet.image.load('resources/QUEEN.png')
image.blit_into(board,0,0,0)
image.blit_into(queen,128,0,0)
# creating a window
width = board.width
height = board.height
mygame = Window(width, height,
resizable=False,
caption="8 Queens",
config=pyglet.gl.Config(double_buffer=True),
vsync=False)
# Making list of tiles
print("Height: ", board.height, "\nWidth: ", board.width)
@mygame.event
def on_draw():
mygame.clear()
image.blit(0, 0)
def updated(dt):
on_draw()
pyglet.clock.schedule_interval(updated, 1 / 60)
# Launch the application
pyglet.app.run()
</code></pre>
<p>These are the images:</p>
<p><a href="http://i.stack.imgur.com/sAMZd.png" rel="nofollow">queen.png</a></p>
<p><a href="http://i.stack.imgur.com/5hZDI.png" rel="nofollow">board.png</a></p>
| -2 | 2016-10-05T15:36:54Z | 39,880,233 | <p>Your image is a rectangle. So necessarily, you will have white space around your queen whatever you do. </p>
<p>I would recommend a bit of hacking (it's not very beautiful) and creating two queen versions: queen_yellow and queen_black. Whenever the queen is standing on a yellow tile, display queen_yellow, and otherwise display queen_black.</p>
<p>To find out whether a tile is a yellow tile (using a matrix with x and y coordinates, where the top value for y is 0 and the very left value for x is 0): </p>
<pre><code>if tile_y % 2 == 0:        # is it an even row?
    if tile_x % 2 == 0:    # is it an even column?
        queentype = queen_yellow
    else:
        queentype = queen_black
else:                      # it is an uneven row
    if tile_x % 2 != 0:    # is it an uneven column?
        queentype = queen_yellow
    else:
        queentype = queen_black
</code></pre>
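<p>The same check collapses to one line, since a tile is yellow exactly when <code>tile_x + tile_y</code> is even (my simplification of the above):</p>
<pre><code>queentype = queen_yellow if (tile_x + tile_y) % 2 == 0 else queen_black
</code></pre>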
<p>Hope that helped,
Narusan</p>
| 0 | 2016-10-05T17:25:17Z | [
"python",
"pyglet"
]
|
Export 2Gb+ SELECT to CSV with Python (out of Memory) | 39,878,329 | <p>I'm trying to export a large file from Netezza (using Netezza ODBC + pyodbc). This solution throws a MemoryError; if I loop without "list" it's VERY slow. Do you have any idea of an intermediate solution that doesn't kill my server/Python process but can run faster?</p>
<pre><code>cursorNZ.execute(sql)
archi = open("c:\test.csv", "w")
lista = list(cursorNZ.fetchall())
for fila in lista:
registro = ''
for campo in fila:
campo = str(campo)
registro = registro+str(campo)+";"
registro = registro[:-1]
registro = registro.replace('None','NULL')
registro = registro.replace("'NULL'","NULL")
archi.write(registro+"\n")
</code></pre>
<p>---- Edit ----</p>
<p>Thank you, I'm trying this:
Where "sql" is the query,
cursorNZ is </p>
<pre><code>connMy = pyodbc.connect(DRIVER=.....)
cursorNZ = connNZ.cursor()
chunk = 10 ** 5 # tweak this
chunks = pandas.read_sql(sql, cursorNZ, chunksize=chunk)
with open('C:/test.csv', 'a') as output:
for n, df in enumerate(chunks):
write_header = n == 0
df.to_csv(output, sep=';', header=write_header, na_rep='NULL')
</code></pre>
<p>Have this:
<strong>AttributeError: 'pyodbc.Cursor' object has no attribute 'cursor'</strong>
Any idea?</p>
| 1 | 2016-10-05T15:38:45Z | 39,878,460 | <p>Don't use <code>cursorNZ.fetchall()</code>.</p>
<p>Instead, loop through the cursor directly:</p>
<pre><code>with open("c:/test.csv", "w") as archi: # note the fixed '/'
cursorNZ.execute(sql)
for fila in cursorNZ:
registro = ''
for campo in fila:
campo = str(campo)
registro = registro+str(campo)+";"
registro = registro[:-1]
registro = registro.replace('None','NULL')
registro = registro.replace("'NULL'","NULL")
archi.write(registro+"\n")
</code></pre>
<p>Personally, I would just use pandas:</p>
<pre><code>import pyodbc
import pandas
cnn = pyodbc.connect(DRIVER=.....)
chunksize = 10 ** 5 # tweak this
chunks = pandas.read_sql(sql, cnn, chunksize=chunksize)
with open('C:/test.csv', 'a') as output:
for n, df in enumerate(chunks):
write_header = n == 0
df.to_csv(output, sep=';', header=write_header, na_rep='NULL')
</code></pre>
| 4 | 2016-10-05T15:44:44Z | [
"python",
"csv",
"export-to-csv",
"netezza"
]
|
Clion can't find external module in python file | 39,878,401 | <p>Hello everyone,<br>
I'm using CLion for a C++ project.<br>
I have some Python files in this project too (Boost.Python).<br>
The Python files import a module generated by CMake.<br>
It works properly if I do : </p>
<blockquote>
<p>$ cd buildDir<br>
$ python mypythonFile.py</p>
</blockquote>
<p>But in CLion, it can't find the imported lib.<br>
So no autocompletion, etc., and everything is red.<br>
I tried this in the CMakeLists.txt: </p>
<blockquote>
<p>set_target_properties(mymodule PROPERTIES ENVIRONMENT
"PYTHONPATH=$PYTHONPATH:${CMAKE_RUNTIME_OUTPUT_DIRECTORY}" )</p>
</blockquote>
<p>I thought since CLion uses CMake, it would use this PYTHONPATH, but it doesn't work.<br>
I saw similar questions on CLion's forum but with no answer.<br>
So I thought I'd ask here.</p>
<p>Thank you all.<br>
Cheers</p>
| 0 | 2016-10-05T15:42:13Z | 39,898,862 | <p>CLion uses CMake for creation a project model (extracts compiler switches for c/cpp files, detects files that need to be compiled and etc), but it does not inherit the environment. At least in current implementation.</p>
<p>The problem is that there is a <a href="https://youtrack.jetbrains.com/issue/CPP-3090" rel="nofollow">bug</a> in CLion about overriding PYTHONPATH. As a workaround you can set PYTHONPATH in .gdbinit manually.</p>
| 0 | 2016-10-06T14:36:29Z | [
"python",
"cmake",
"clion"
]
|
Largest (n) numbers with Index and Column name in Pandas DataFrame | 39,878,426 | <p>I wish to find out the largest 5 numbers in a DataFrame and store the Index name and Column name for these 5 values.</p>
<p>I am trying to use the nlargest() and idxmax methods but am failing to achieve what I want. My code is as below:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
df = DataFrame({'a': [1, 10, 8, 11, -1],'b': [1.0, 2.0, 6, 3.0, 4.0],'c': [1.0, 2.0, 6, 3.0, 4.0]})
</code></pre>
<p>Can you kindly let me know how I can achieve this. Thank you</p>
| 0 | 2016-10-05T15:43:32Z | 39,878,597 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html" rel="nofollow"><code>nlargest</code></a>:</p>
<pre><code>max_vals = df.stack().nlargest(5)
</code></pre>
<p>This will give you a Series with a multiindex, where the first level is the original DataFrame's index, and the second level is the column name for the given value. Here's what <code>max_vals</code> looks like:</p>
<pre><code>3 a 11.0
1 a 10.0
2 a 8.0
b 6.0
c 6.0
</code></pre>
<p>To explicitly get the index and column names, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> on the index of <code>max_vals</code>:</p>
<pre><code>max_idx = max_vals.index.get_level_values(0)
max_cols = max_vals.index.get_level_values(1)
</code></pre>
<p>The result of <code>max_idx</code>:</p>
<pre><code>Int64Index([3, 1, 2, 2, 2], dtype='int64')
</code></pre>
<p>The result of <code>max_cols</code>:</p>
<pre><code>Index(['a', 'a', 'a', 'b', 'c'], dtype='object')
</code></pre>
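<p>If you need the pairs explicitly, the MultiIndex already holds them: <code>list(max_vals.index)</code> gives <code>[(3, 'a'), (1, 'a'), (2, 'a'), (2, 'b'), (2, 'c')]</code>.</p>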
| 2 | 2016-10-05T15:51:50Z | [
"python",
"pandas",
"numpy",
"dataframe",
"data-analysis"
]
|
Model instance not reflecting data passed from pre_save in custom field | 39,878,528 | <p>I have a custom field for generating slug and I am using it in my model.</p>
<p>The strange thing I can't figure out is why is the value I am generating in this custom field's method <code>pre_save</code> not set on the current instance. </p>
<p><strong>My question is not about generating slug different way but about this behavior.</strong></p>
<p>To simplify this example I defined this classes:</p>
<p>Model:</p>
<pre><code>class MyModel(models.Model):
slug = MyCustomField(blank=True)
def save(self, *args, **kwargs):
super(MyModel, self).save(*args, **kwargs)
print 'in save'
print self.slug
</code></pre>
<p>Field:</p>
<pre><code>class MyCustomField(models.SlugField):
def pre_save(self, model_instance, add):
value = super(MyCustomField, self).pre_save(model_instance, add)
if not value:
value = 'random-generated-slug'
return value
</code></pre>
<p>Post save signal:</p>
<pre><code>@receiver(post_save, sender=MyModel)
def test(sender, **kwargs):
print 'in signal'
print kwargs['instance'].slug
print 'from database'
print MyModel.objects.get(pk=kwargs['instance'].pk).slug
</code></pre>
<p>Code to run:</p>
<pre><code>instance = MyModel()
instance.save()
>> 'in signal'
>> ''
>> 'from database'
>> 'random-generated-slug'
>> 'in save'
>> ''
instance.slug
>> ''
</code></pre>
<p>Like you can see, the value is set in the database, but it's not on the current instance nor in post_save signal.</p>
<p>I have Django version 1.10. Should I set the value in a different way in <code>MyCustomField</code>? What is going on?</p>
<p>EDIT:
Maybe I should set this value in the field's <code>save_form_data</code>? Or where is the best place to do this?</p>
| 1 | 2016-10-05T15:48:26Z | 39,878,704 | <p>The slug value was generated by the field's <code>pre_save()</code> on its way into the database, but it was never assigned back to the model instance. The current instance's slug value is <em>stale</em>.</p>
<p>To get the slug value which was written into the DB, the instance has to be updated by <em>refetching</em> it from the database:</p>
<pre><code>instance = MyModel.objects.get(pk=your_pk)
</code></pre>
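<p>Since you're on Django 1.10, <code>refresh_from_db()</code> (available since Django 1.8) does the same in place:</p>
<pre><code>instance.refresh_from_db()
</code></pre>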
| 1 | 2016-10-05T15:57:33Z | [
"python",
"django"
]
|
shopify python api: how do add new assets to published theme? | 39,878,552 | <p>The shopify docs are horrible, I couldn't find any info on adding new assets to an existing shopify store.</p>
<p>We're developing an app that needs to add some css and liquid files.</p>
<p>Can someone here shed some light on how to achieve this simple task?</p>
<p>Thanks</p>
| 1 | 2016-10-05T15:49:43Z | 39,878,643 | <p>Just put your css in the assets folder and link it in the .liquid file with this:</p>
<pre><code>{{ 'style.css' | asset_url | stylesheet_tag }}
</code></pre>
| 0 | 2016-10-05T15:54:09Z | [
"python",
"shopify"
]
|
shopify python api: how do add new assets to published theme? | 39,878,552 | <p>The shopify docs are horrible, I couldn't find any info on adding new assets to an existing shopify store.</p>
<p>We're developing an app that needs to add some css and liquid files.</p>
<p>Can someone here shed some light on how to achieve this simple task?</p>
<p>Thanks</p>
| 1 | 2016-10-05T15:49:43Z | 39,903,099 | <pre><code>asset = shopify.Asset()
asset.key = "templates/josh.liquid"
asset.value = "{% comment %} Liquid is cool {% endcomment %}"
success = asset.save()
</code></pre>
<p>Be careful; if an asset already exists with the same key then it will be overwritten. You can find out more from the <a href="https://help.shopify.com/api/reference/asset#update" rel="nofollow">Asset API documentation.</a></p>
| 0 | 2016-10-06T18:21:57Z | [
"python",
"shopify"
]
|
memory leak in batch matrix factorization with tensorflow | 39,878,628 | <p>Suppose I have a rate matrix <code>R</code> and I want to factorize it into matrices <code>U</code> and <code>V</code> with TensorFlow.</p>
<p>Without a batch size it's a simple problem and can be solved with the following code:</p>
<pre><code># define Variables
u = tf.Variable(np.random.rand(R_dim_1, output_dim), dtype=tf.float32, name='u')
v = tf.Variable(np.random.rand(output_dim, R_dim_2), dtype=tf.float32, name='v')
# predict rate by multiplication
predicted_R = tf.matmul(tf.cast(u, tf.float32), tf.cast(v, tf.float32))
#cost function and train step
cost = tf.reduce_sum(tf.reduce_sum(tf.abs(tf.sub(predicted_R, R))))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
init = tf.initialize_all_variables()
sess.run(init)
for i in range(no_epochs):
_, this_cost = sess.run([train_step, cost])
print 'cost: ', this_cost
</code></pre>
<p>I decided to solve this problem with batch updates. My solution was sending the indices of <code>U</code> and <code>V</code> which I want to use in predicting the rate matrix <code>R</code>, and updating just those selected ones.
Here is my code (just read the comments if it takes too much time):</p>
<pre><code># define variables
u = tf.Variable(np.random.rand(R_dim_1, output_dim), dtype=tf.float32, name='u')
v = tf.Variable(np.random.rand(output_dim, R_dim_2), dtype=tf.float32, name='v')
idx1 = tf.placeholder(tf.int32, shape=batch_size1, name='idx1')
idx2 = tf.placeholder(tf.int32, shape=batch_size2, name='idx2')
# get current U and current V by slicing U and V
cur_u = tf.Variable(tf.gather(u, idx1), dtype=tf.float32, name='cur_u')
cur_v = tf.transpose(v)
cur_v = tf.gather(cur_v, idx2)
cur_v = tf.Variable(tf.transpose(cur_v), dtype=tf.float32, name='cur_v')
# predict rate by multiplication
predicted_R = tf.matmul(tf.cast(cur_u, tf.float32), tf.cast(cur_v, tf.float32))
# get needed rate from rate matrix by slicing it
cur_rate = tf.gather(R, idx1)
cur_rate = tf.transpose(cur_rate)
cur_rate = tf.gather(cur_rate, idx2)
cur_rate = tf.transpose(cur_rate)
#cost function and train step
cost = tf.reduce_sum(tf.reduce_sum(tf.abs(tf.sub(predicted_R, cur_rate))))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
# initialize variables
init_new_vars_op = tf.initialize_variables([v, u])
sess.run(init_new_vars_op)
init = tf.initialize_all_variables()
rand_idx = np.sort(np.random.randint(0, R_dim_1, batch_size1))
rand_idx2 = np.sort(np.random.randint(0, R_dim_2, batch_size2))
sess.run(init, feed_dict={idx1: rand_idx, idx2: rand_idx2})
for i in range(no_epochs):
with tf.Graph().as_default():
rand_idx1 = np.random.randint(0, R_dim_1, batch_size1)
rand_idx2 = np.random.randint(0, R_dim_2, batch_size2)
_, this_cost, tmp_u, tmp_v, tmp_cur_u, tmp_cur_v = sess.run([train_step, cost, u, v, cur_u, cur_v],feed_dict={idx1: rand_idx1, idx2: rand_idx2})
print this_cost
#update U and V with computed current U and current V
tmp_u = np.array(tmp_u)
tmp_u[rand_idx] = tmp_cur_u
u = tf.assign(u, tmp_u)
tmp_v = np.array(tmp_v)
tmp_v[:, rand_idx2] = tmp_cur_v
v = tf.assign(v, tmp_v)
</code></pre>
<p>but I have a memory leak right at <code>u = tf.assign(u, tmp_u)</code> and <code>v = tf.assign(v, tmp_v)</code>.
I applied <a href="http://stackoverflow.com/questions/35695183/tensorflow-memory-leak-even-while-closing-session">this</a> but got nothing.<br/>
There was another solution to apply the update just to a subset of <code>U</code> and <code>V</code>, like <a href="http://stackoverflow.com/questions/34935464/update-a-subset-of-weights-in-tensorflow">this</a>, but I encountered lots of other errors, so please stay on course of how to solve my memory leak problem. <br/>
Sorry for my long question and thanks for reading it.</p>
| 0 | 2016-10-05T15:53:33Z | 39,900,876 | <p>I just solved this problem by sending the updated values of <code>U</code> and <code>V</code> as placeholders and then assigning <code>U</code> and <code>V</code> to these passed parameters, so the created graph stays the same across iterations.
Here is the code:</p>
<pre><code># define variables
u = tf.Variable(np.random.rand(R_dim_1, output_dim), dtype=tf.float32, name='u')
v = tf.Variable(np.random.rand(output_dim, R_dim_2), dtype=tf.float32, name='v')
idx1 = tf.placeholder(tf.int32, shape=batch_size1, name='idx1')
idx2 = tf.placeholder(tf.int32, shape=batch_size2, name='idx2')
#define new place holder for changed values of U and V
last_u = tf.placeholder(tf.float32, shape=[R_dim_1, output_dim], name='last_u')
last_v = tf.placeholder(tf.float32, shape=[output_dim, R_dim_2], name='last_v')
#set U and V to updated ones
change_u = tf.assign(u, last_u)
change_v = tf.assign(v, last_v)
# get current U and current V by slicing U and V
cur_u = tf.Variable(tf.gather(u, idx1), dtype=tf.float32, name='cur_u')
cur_v = tf.transpose(v)
cur_v = tf.gather(cur_v, idx2)
cur_v = tf.Variable(tf.transpose(cur_v), dtype=tf.float32, name='cur_v')
# predict rate by multiplication
predicted_R = tf.matmul(tf.cast(cur_u, tf.float32), tf.cast(cur_v, tf.float32))
# get needed rate from rate matrix by slicing it
cur_rate = tf.gather(R, idx1)
cur_rate = tf.transpose(cur_rate)
cur_rate = tf.gather(cur_rate, idx2)
cur_rate = tf.transpose(cur_rate)
#cost function and train step
cost = tf.reduce_sum(tf.reduce_sum(tf.abs(tf.sub(predicted_R, cur_rate))))
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
tmp_u = initial_u;
tmp_v = initial_v;
# initialize variables
init_new_vars_op = tf.initialize_variables([v, u])
sess.run(init_new_vars_op, feed_dict={last_u: tmp_u, last_v: tmp_v})
init = tf.initialize_all_variables()
rand_idx = np.sort(np.random.randint(0, R_dim_1, batch_size1))
rand_idx2 = np.sort(np.random.randint(0, R_dim_2, batch_size2))
sess.run(init, feed_dict={idx1: rand_idx, idx2: rand_idx2})
for i in range(no_epochs):
with tf.Graph().as_default():
rand_idx1 = np.random.randint(0, R_dim_1, batch_size1)
rand_idx2 = np.random.randint(0, R_dim_2, batch_size2)
            _, this_cost, tmp_u, tmp_v, tmp_cur_u, tmp_cur_v, _, _ = sess.run(
                [train_step, cost, u, v, cur_u, cur_v, change_u, change_v],
                feed_dict={idx1: rand_idx1, idx2: rand_idx2, last_u: tmp_u, last_v: tmp_v})
print this_cost
# find new values of U and current V but don't assign to them
tmp_u = np.array(tmp_u)
tmp_u[rand_idx] = tmp_cur_u
tmp_v = np.array(tmp_v)
tmp_v[:, rand_idx2] = tmp_cur_v
</code></pre>
| 0 | 2016-10-06T16:10:25Z | [
"python",
"memory-leaks",
"tensorflow",
"batch-updates",
"matrix-factorization"
]
|
Overload operator at runtime | 39,878,667 | <p>Is it possible to overload an operator at runtime? I tried the following code example:</p>
<pre><code>class A():
pass
a = A()
a.__eq__ = lambda self, other: False
a == a # this should return False since the __eq__-method should be overloaded but it returns
# True like object.__eq__ would
a.__eq__(a, a) # this returns False just as expected
</code></pre>
<p>Why won't this code work? Is it possible to achieve the desired behavior?</p>
| 1 | 2016-10-05T15:55:33Z | 39,878,791 | <p>Magic/double underscore methods are looked up on the class, not on the instance. So you need to override the class's method, not the instance. There are two ways to do this.</p>
<p>Either directly assign to the class as such:</p>
<pre><code>class A:
pass
a = A()
A.__eq__ = lambda self, other: False
print(a == a)
print(a.__eq__(a)) # You're already passing in self by calling it as an instance method, so you only pass in the other one.
</code></pre>
<p>Or if you only have the instance and you're using 3.x, you can do:</p>
<pre><code>class A:
pass
a = A()
type(a).__eq__ = lambda self, other: False
print(a == a)
print(a.__eq__(a))
</code></pre>
| 2 | 2016-10-05T16:01:09Z | [
"python",
"operator-overloading"
]
|
Cannot assign a device to node | 39,878,698 | <p>I followed <a href="https://medium.com/@hamedmp/exporting-trained-tensorflow-models-to-c-the-right-way-cf24b609d183#.qajf9shah" rel="nofollow">this tutorial</a> to export my own trained TensorFlow model to C++ and I got errors when I call <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py#L70" rel="nofollow"><code>freeze_graph</code></a> </p>
<pre><code>I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
...
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'save/Const_1': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices:
Identity: CPU
Const: CPU
[[Node: save/Const_1 = Const[dtype=DT_STRING, value=Tensor<type: string shape: [] values: model>, _device="/device:GPU:0"]()]]
Caused by op u'save/Const_1', defined at:
...
</code></pre>
<p><em>GPU:0</em> is detected and usable by TensorFlow, so I don't understand where the error comes from.</p>
<p>Any idea ?</p>
| 0 | 2016-10-05T15:57:12Z | 39,879,061 | <p>The error means op <code>save/Const_1</code> is trying to get placed on GPU, and there's no GPU implementation of that node. In fact <code>Const</code> nodes are CPU-only and are stored as part of the Graph object, so they can't be placed on GPU. One work-around is to run with <code>allow_soft_placement=True</code>, or to open the <code>pbtxt</code> file and manually remove the <code>device</code> line for that node.</p>
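<p>For the first work-around, that means building the session with a config along these lines (a sketch for the 2016-era API):</p>
<pre><code>config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)
</code></pre>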
| 2 | 2016-10-05T16:15:04Z | [
"python",
"tensorflow"
]
|
Dynamically add class variables to classes inheriting mixin class | 39,878,756 | <p>I've got a mixin class that adds some functionality to inheriting classes, but the mixin requires some class attributes to be present; for simplicity let's say only one property, <code>handlers</code>. So this would be the usage of the mixin:</p>
<pre><code>class Mixin:
pass
class Something(Mixin):
handlers = {}
</code></pre>
<p>The mixin can't function without this being defined, but I really don't want to specify the <code>handlers</code> in every class that I want to use the mixin with. So I solved this by writing a metaclass:</p>
<pre><code>class MixinMeta:
def __new__(mcs, *args, **kwargs):
cls = super().__new__(mcs, *args, **kwargs)
cls.handlers = {}
return cls
class Mixin(metaclass=MixinMeta):
pass
</code></pre>
<p>And this works exactly how I want it to. But I'm thinking this can become a huge problem, since metaclasses don't work well together (I read various metaclass conflicts can only be solved by creating a new metaclass that resolves those conflicts).</p>
<p>Also, I don't want to make the <code>handlers</code> property a property of the <code>Mixin</code> class itself, since that would mean having to store handlers by their class names inside the <code>Mixin</code> class, complicating the code a bit. I like having each class having their handlers on their own class - it makes working with them simpler, but clearly this has drawbacks.</p>
<p>My question is, what would be a better way to implement this? I'm fairly new to metaclasses, but they seem to solve this problem well. But metaclass conflicts are clearly a huge issue when dealing with complex hierarchies without having to define various metaclasses just to resolve those conflicts.</p>
| 2 | 2016-10-05T15:59:34Z | 39,884,418 | <p>Your problem is very real, and Python folks have thought of this for Python 3.6 (still unreleased) on. For now (up to Python 3.5), if your attributes can wait to exist until your classes are first instantiated, you could put code to create a (class) attribute on the <code>__new__</code> method of your mixin class itself - thus avoiding the (extra) metaclass:</p>
<pre><code>class Mixin:
def __new__(cls):
        if not hasattr(cls, 'handlers'):
cls.handlers = {}
return super().__new__(cls)
</code></pre>
<p>For Python 3.6 on, <a href="https://www.python.org/dev/peps/pep-0487/" rel="nofollow">PEP 487</a> defines a <code>__init_subclass__</code> special method to go on the mixin class body. This special method is not called for the mixin class itself, but will be called at the end of <code>type.__new__</code> method (the "root" metaclass) for each class that inherits from your mixin.</p>
<pre><code>class Mixin:
def __init_subclass__(cls, **kwargs):
cls.handlers = {}
return super().__init_subclass__(**kwargs)
</code></pre>
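<p>With the <code>__init_subclass__</code> variant, each subclass gets its own fresh dict as soon as it is defined:</p>
<pre><code>class Something(Mixin):
    pass

class Other(Mixin):
    pass

assert Something.handlers is not Other.handlers
</code></pre>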
<p>As per the PEP's background text, the main motivation for this is exactly what led you to ask your question: avoid the need for meta-classes when simple customization of class creation is needed, in order to reduce the chances of needing different metaclasses in a project, and thus triggering a situation of metaclass conflict.</p>
| 3 | 2016-10-05T22:03:17Z | [
"python",
"python-3.x"
]
|
How does APScheduler save the jobs in the database? (Python) | 39,878,785 | <p>I am trying to figure out how the schema of a sqlite database looks after I save some jobs using Advanced Python Scheduler. I need it because I want to show the jobs in a UI. I tried to write a simple script which saves a job:</p>
<pre><code>from pytz import utc
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
from datetime import datetime
import os
from apscheduler.schedulers.blocking import BlockingScheduler
jobstores = {
'default': SQLAlchemyJobStore(url='sqlite:///test.db')
}
executors = {
'default': ThreadPoolExecutor(20),
'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
'coalesce': False,
'max_instances': 3
}
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults, timezone=utc)
def tick():
print('Tick! The time is: %s' % datetime.now())
scheduler = BlockingScheduler()
scheduler.add_executor('processpool')
scheduler.add_job(tick, 'interval', seconds=3)
print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
try:
scheduler.start()
except (KeyboardInterrupt, SystemExit):
pass
</code></pre>
<p>But the command ".fullschema" in the terminal shows me that there are no tables and no data in test.db. What exactly am I doing wrong?</p>
 | 0 | 2016-10-05T16:00:49Z | 39,882,319 | <p>You're creating two schedulers, but you only start the second one, which has the default configuration (an in-memory jobstore), so nothing is ever written to the SQLAlchemy jobstore.</p>
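<p>A minimal sketch of the fix: configure and start a single scheduler instead of creating a second one.</p>
<pre><code># one scheduler, configured with the SQLAlchemy jobstore, then started
scheduler = BlockingScheduler(jobstores=jobstores, executors=executors,
                              job_defaults=job_defaults, timezone=utc)
scheduler.add_job(tick, 'interval', seconds=3)
scheduler.start()  # jobs are now persisted to sqlite:///test.db
</code></pre>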
| 1 | 2016-10-05T19:39:50Z | [
"python",
"sqlite",
"apscheduler"
]
|
Django - Migrating from Sqlite3 to Postgresql | 39,878,801 | <p>I'm trying to deploy my Django app in Heroku, so I have to switch to PostgreSQL and I've been following these <a href="http://stackoverflow.com/questions/28648695/migrate-django-development-database-sql3-to-heroku">steps</a></p>
<p>However when I run <code>python manage.py migrate</code></p>
<p>I get the following error:</p>
<pre><code>C:\Users\admin\trailers>python manage.py migrate
Operations to perform:
Apply all migrations: auth, movies, sessions, admin, contenttypes
Running migrations:
Rendering model states... DONE
Applying movies.0012_auto_20160915_1904...Traceback (most recent call last):
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: foreign key constraint "movies_movie_genre_genre_id_d
9d93fd9_fk_movies_genre_id" cannot be implemented
DETAIL: Key columns "genre_id" and "id" are of incompatible types: integer and
character varying.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\__init__.py", line 350, in execute_from_command_line
utility.execute()
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\__init__.py", line 342, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_ini
tial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_
initial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, projec
t_state)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\operations\fields.py", line 201, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 482, in alter_field
old_db_params, new_db_params, strict)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 634, in _alter_field
params,
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\utils.py"
, line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\utils\six.py
", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: foreign key constraint "movies_movie_genre_gen
re_id_d9d93fd9_fk_movies_genre_id" cannot be implemented
DETAIL: Key columns "genre_id" and "id" are of incompatible types: integer and
character varying.
</code></pre>
<p>Here is my <strong>models.py</strong></p>
<pre><code>class Genre(models.Model):
id = models.IntegerField(primary_key=True)
genre = models.CharField(max_length=255)
class Person(models.Model):
name = models.CharField(max_length=128)
class Movie(models.Model):
title = models.CharField(max_length=511)
tmdb_id = models.IntegerField(null=True, blank=True, unique=True)
release = models.DateField(null=True, blank=True)
poster = models.TextField(max_length=500, null=True)
backdrop = models.TextField(max_length=500, null=True, blank=True)
popularity = models.TextField(null=True, blank=True)
runtime = models.IntegerField(null=True, blank=True)
description = models.TextField(null=True, blank=True)
director = models.ManyToManyField(Person, related_name="directed_movies")
actors = models.ManyToManyField(Person, related_name="acted_movies")
genre = models.ManyToManyField(Genre)
class Trailer(models.Model):
movie = models.ForeignKey(Movie, on_delete=models.CASCADE, null=True)
link = models.CharField(max_length=100)
</code></pre>
<p>I can't figure out what's wrong with my code, any help would be appreciated! </p>
<p><strong>Edit</strong>: I tried removing the id field from the Genre class and I still get the same error</p>
| 1 | 2016-10-05T16:01:40Z | 39,879,044 | <p>You must use this command:</p>
<pre><code>python manage.py makemigrations
</code></pre>
<p>before your command</p>
<pre><code>python manage.py migrate
</code></pre>
| 1 | 2016-10-05T16:14:08Z | [
"python",
"django",
"postgresql",
"heroku"
]
|
Django - Migrating from Sqlite3 to Postgresql | 39,878,801 | <p>I'm trying to deploy my Django app in Heroku, so I have to switch to PostgreSQL and I've been following these <a href="http://stackoverflow.com/questions/28648695/migrate-django-development-database-sql3-to-heroku">steps</a></p>
<p>However when I run <code>python manage.py migrate</code></p>
<p>I get the following error:</p>
<pre><code>C:\Users\admin\trailers>python manage.py migrate
Operations to perform:
Apply all migrations: auth, movies, sessions, admin, contenttypes
Running migrations:
Rendering model states... DONE
Applying movies.0012_auto_20160915_1904...Traceback (most recent call last):
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: foreign key constraint "movies_movie_genre_genre_id_d
9d93fd9_fk_movies_genre_id" cannot be implemented
DETAIL: Key columns "genre_id" and "id" are of incompatible types: integer and
character varying.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\__init__.py", line 350, in execute_from_command_line
utility.execute()
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\__init__.py", line 342, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\base.py", line 399, in execute
output = self.handle(*args, **options)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\core\managem
ent\commands\migrate.py", line 200, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 92, in migrate
self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_ini
tial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 121, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_
initial)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\executor.py", line 198, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\migration.py", line 123, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, projec
t_state)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\migration
s\operations\fields.py", line 201, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 482, in alter_field
old_db_params, new_db_params, strict)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 634, in _alter_field
params,
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
base\schema.py", line 110, in execute
cursor.execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\utils.py"
, line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\utils\six.py
", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\backends\
utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: foreign key constraint "movies_movie_genre_gen
re_id_d9d93fd9_fk_movies_genre_id" cannot be implemented
DETAIL: Key columns "genre_id" and "id" are of incompatible types: integer and
character varying.
</code></pre>
<p>Here is my <strong>models.py</strong></p>
<pre><code>class Genre(models.Model):
id = models.IntegerField(primary_key=True)
genre = models.CharField(max_length=255)
class Person(models.Model):
name = models.CharField(max_length=128)
class Movie(models.Model):
title = models.CharField(max_length=511)
tmdb_id = models.IntegerField(null=True, blank=True, unique=True)
release = models.DateField(null=True, blank=True)
poster = models.TextField(max_length=500, null=True)
backdrop = models.TextField(max_length=500, null=True, blank=True)
popularity = models.TextField(null=True, blank=True)
runtime = models.IntegerField(null=True, blank=True)
description = models.TextField(null=True, blank=True)
director = models.ManyToManyField(Person, related_name="directed_movies")
actors = models.ManyToManyField(Person, related_name="acted_movies")
genre = models.ManyToManyField(Genre)
class Trailer(models.Model):
movie = models.ForeignKey(Movie, on_delete=models.CASCADE, null=True)
link = models.CharField(max_length=100)
</code></pre>
<p>I can't figure out what's wrong with my code, any help would be appreciated! </p>
<p><strong>Edit</strong>: I tried removing the id field from the Genre class and I still get the same error</p>
| 1 | 2016-10-05T16:01:40Z | 39,879,212 | <p>Why do you have the <code>id</code> field on the Genre model? You don't need that.</p>
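<p>For illustration, a sketch of the model without the explicit field (Django adds an auto-incrementing integer <code>id</code> primary key by itself); after changing the model you would still need to run <code>makemigrations</code> and <code>migrate</code>:</p>
<pre><code>class Genre(models.Model):
    # Django implicitly adds: id = models.AutoField(primary_key=True)
    genre = models.CharField(max_length=255)
</code></pre>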
| 0 | 2016-10-05T16:23:46Z | [
"python",
"django",
"postgresql",
"heroku"
]
|
Python2.7: Untar files in parallel mode (with threading) | 39,878,877 | <p>I'm learning Python threading and at the same time trying to improve my old untarring script. </p>
<p>The main part of it looks like:</p>
<pre><code>import tarfile, os, threading
def untar(fname, path):
print "Untarring " + fname
try:
ut = tarfile.open(os.path.join(path,fname), "r:gz")
ut.extractall(path)
ut.close()
except tarfile.ReadError as e: #in case it's not gziped
print e
ut = tarfile.open(os.path.join(path,fname), "r:*")
ut.extractall(path)
ut.close()
def untarFolder(path):
if path == ".":
path = os.getcwd()
print "path", path
ListTarFiles = serveMenu(path) # function what parse folder
# content for tars, and tar.gz
# files and return list of them
print "ListTarFiles ", ListTarFiles
for filename in ListTarFiles:
print "filename: ", filename
t = threading.Thread(target=untar, args = (filename,path))
t.daemon = True
t.start()
print "Thread:", t
</code></pre>
<p>So the goal is to untar all the files in a given folder, not one by one but in parallel, at the same time. Is it possible?</p>
<p>Output: </p>
<pre><code>bogard@testlab:~/Toolz/untar$ python untar01.py -f .
path /home/bogard/Toolz/untar
ListTarFiles ['tar1.tgz', 'tar2.tgz', 'tar3.tgz']
filename: tar1.tgz
Untarring tar1.tgz
Thread: <Thread(Thread-1, started daemon 140042104731392)>
filename: tar2.tgz
Untarring tar2.tgz
Thread: <Thread(Thread-2, started daemon 140042096338688)>
filename: tar3.tgz
Untarring tar3.tgz
Thread: <Thread(Thread-3, started daemon 140042087945984)>
</code></pre>
<p>In the output you can see that the script creates threads, but it doesn't untar any files.
What's the catch?</p>
| 0 | 2016-10-05T16:06:06Z | 39,880,561 | <p>What might be happening is that your script is returning before the threads actually complete. You can wait for a thread to complete with <code>Thread.join()</code>. Maybe try something like this:</p>
<pre><code>threads = []
for filename in ListTarFiles:
t = threading.Thread(target=untar, args = (filename,path))
t.daemon = True
threads.append(t)
t.start()
# Wait for each thread to complete
for thread in threads:
thread.join()
</code></pre>
<p>Also, depending on the number of files you're untarring, you might want to limit the number of jobs that you're launching, so that you're not trying to untar 1000 files at once. You could maybe do this with something like <a href="https://docs.python.org/2/library/multiprocessing.html#introduction" rel="nofollow"><code>multiprocessing.Pool</code></a>.</p>
| 0 | 2016-10-05T17:46:05Z | [
"python",
"multithreading",
"tarfile"
]
|
Change default tkinter entry value in python | 39,878,878 | <pre><code>checkLabel = ttk.Label(win,text = " Check Amount ", foreground = "blue")
checkLabel.grid(row = 0 , column = 1)
checkEntry = ttk.Entry(win, textvariable = checkVariable)
checkEntry.grid(row = 1, column = 1, sticky = 'w')
</code></pre>
<p>How do I change the default entry field from displaying 0?</p>
| -2 | 2016-10-05T16:06:07Z | 39,879,154 | <p>Set the value of <code>checkVariable</code> to <code>''</code> (the empty string) before you create the <code>Entry</code> object? The statement to use would be</p>
<pre><code>checkVariable.set('')
</code></pre>
<p>But then <code>checkVariable</code> would have to be a <code>StringVar</code>, and you would have to convert the input value to an integer yourself if you wanted the integer value.</p>
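<p>For example (a sketch; it assumes <code>checkVariable</code> is created before the widgets that use it):</p>
<pre><code>import Tkinter as tk  # Python 2; on Python 3 it is "import tkinter as tk"

checkVariable = tk.StringVar()
checkVariable.set('')  # the entry starts out blank instead of showing 0
checkEntry = ttk.Entry(win, textvariable=checkVariable)
# convert manually when you need the number:
# amount = int(checkVariable.get())
</code></pre>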
| 0 | 2016-10-05T16:20:43Z | [
"python",
"tkinter"
]
|
Change default tkinter entry value in python | 39,878,878 | <pre><code>checkLabel = ttk.Label(win,text = " Check Amount ", foreground = "blue")
checkLabel.grid(row = 0 , column = 1)
checkEntry = ttk.Entry(win, textvariable = checkVariable)
checkEntry.grid(row = 1, column = 1, sticky = 'w')
</code></pre>
<p>How do I change the default entry field from displaying 0?</p>
| -2 | 2016-10-05T16:06:07Z | 39,879,466 | <p>Use the entry widget's function, <code>.insert()</code> and <code>.delete()</code>. Here is an example. </p>
<pre><code>entry = tk.Entry(root)
entry.insert(tk.END, "Hello")  # this will start the entry with "Hello" in it

# you may want to add a delay or something here.

entry.delete(0, tk.END)  # this will delete everything inside the entry
entry.insert(tk.END, "WORLD")  # and this will insert the word "WORLD" in the entry.
</code></pre>
<p>Another way to do this is with the Tkinter StrVar. Here is an example of using the str variable. </p>
<pre><code>entry_var = tk.StrVar()
entry_var.set("Hello")
entry = tk.Entry(root, textvariable=entry_var) # when packing the widget
# it should have the world "Hello" inside.
# here is your delay or you can make a function call.
entry_var.set("World") # this sets the entry widget text to "World"
</code></pre>
| 1 | 2016-10-05T16:37:50Z | [
"python",
"tkinter"
]
|
Python- Multiple dynamic inheritance | 39,878,892 | <p>I'm having trouble with getting multiple dynamic inheritance to work. These examples make the most sense to me(<a href="http://stackoverflow.com/questions/21060073/dynamic-inheritance-in-python">here</a> and <a href="http://stackoverflow.com/questions/32104219/dynamic-multiple-inheritance-in-python">here</a>), but there's not enough code in one example for me to really understand what's going on and the other example doesn't seem to be working when I change it around for my needs (code below).</p>
<p>I'm creating a universal tool that works with multiple software packages. In one software, I need to inherit from 2 classes: 1 software specific API mixin, and 1 PySide class. In another software I only need to inherit from the 1 PySide class. </p>
<p>The least elegant solution that I can think of is to just create 2 separate classes (with all of the same methods) and call either one based on the software that's running. I have a feeling there's a better solution.</p>
<p>Here's what I'm working with:</p>
<pre><code>## MainWindow.py
import os
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
# Build class
def build_main_window(*arg):
class Build(arg):
def __init__(self):
super( Build, self ).__init__()
# ----- a bunch of methods
# Get software
software = os.getenv('SOFTWARE')
# Run tool
if software == 'maya':
build_main_window(maya_mixin_class, QtGui.QWidget)
if software == 'houdini':
build_main_window(QtGui.QWidget)
</code></pre>
<p>I'm currently getting this error:</p>
<pre><code># class Build(arg):
# TypeError: Error when calling the metaclass bases
# tuple() takes at most 1 argument (3 given) #
</code></pre>
<p>Thanks for any help!</p>
<h1>EDIT:</h1>
<pre><code>## MainWindow.py
import os
# Build class
class BuildMixin():
def __init__(self):
super( BuildMixin, self ).__init__()
# ----- a bunch of methods
def build_main_window(*args):
return type('Build', (BuildMixin, QtGui.QWidget) + args, {})
# Get software
software = os.getenv('SOFTWARE')
# Run tool
if software == 'maya':
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
Build = build_main_window(MayaQWidgetDockableMixin)
if software == 'houdini':
Build = build_main_window()
</code></pre>
| 0 | 2016-10-05T16:06:39Z | 39,878,988 | <p><code>arg</code> is a tuple, you can't use a tuple as a base class.</p>
<p>Use <code>type()</code> to create a new class instead; it takes a class name, a tuple of base classes (can be empty) and the class body (a dictionary).</p>
<p>I'd keep the methods for your class in a mix-in method:</p>
<pre><code>class BuildMixin():
def __init__(self):
super(BuildMixin, self).__init__()
# ----- a bunch of methods
def build_main_window(*args):
    return type('Build', (BuildMixin, QtGui.QWidget) + args, {})
if software == 'maya':
Build = build_main_window(maya_mixin_class)
if software == 'houdini':
Build = build_main_window()
</code></pre>
<p>Here, <code>args</code> is used as an <em>additional</em> set of classes to inherit from. The <code>BuildMixin</code> class provides all the real methods, so the third argument to <code>type()</code> is left empty (the generated <code>Build</code> class has an empty class body).</p>
<p>Since <code>QtGui.QWidget</code> is common between the two classes, I just moved that into the <code>type()</code> call.</p>
| 1 | 2016-10-05T16:10:34Z | [
"python",
"class",
"inheritance",
"dynamic"
]
|
Python- Multiple dynamic inheritance | 39,878,892 | <p>I'm having trouble with getting multiple dynamic inheritance to work. These examples make the most sense to me(<a href="http://stackoverflow.com/questions/21060073/dynamic-inheritance-in-python">here</a> and <a href="http://stackoverflow.com/questions/32104219/dynamic-multiple-inheritance-in-python">here</a>), but there's not enough code in one example for me to really understand what's going on and the other example doesn't seem to be working when I change it around for my needs (code below).</p>
<p>I'm creating a universal tool that works with multiple software packages. In one software, I need to inherit from 2 classes: 1 software specific API mixin, and 1 PySide class. In another software I only need to inherit from the 1 PySide class. </p>
<p>The least elegant solution that I can think of is to just create 2 separate classes (with all of the same methods) and call either one based on the software that's running. I have a feeling there's a better solution.</p>
<p>Here's what I'm working with:</p>
<pre><code>## MainWindow.py
import os
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
# Build class
def build_main_window(*arg):
class Build(arg):
def __init__(self):
super( Build, self ).__init__()
# ----- a bunch of methods
# Get software
software = os.getenv('SOFTWARE')
# Run tool
if software == 'maya':
build_main_window(maya_mixin_class, QtGui.QWidget)
if software == 'houdini':
build_main_window(QtGui.QWidget)
</code></pre>
<p>I'm currently getting this error:</p>
<pre><code># class Build(arg):
# TypeError: Error when calling the metaclass bases
# tuple() takes at most 1 argument (3 given) #
</code></pre>
<p>Thanks for any help!</p>
<h1>EDIT:</h1>
<pre><code>## MainWindow.py
import os
# Build class
class BuildMixin():
def __init__(self):
super( BuildMixin, self ).__init__()
# ----- a bunch of methods
def build_main_window(*args):
return type('Build', (BuildMixin, QtGui.QWidget) + args, {})
# Get software
software = os.getenv('SOFTWARE')
# Run tool
if software == 'maya':
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
Build = build_main_window(MayaQWidgetDockableMixin)
if software == 'houdini':
Build = build_main_window()
</code></pre>
| 0 | 2016-10-05T16:06:39Z | 39,881,627 | <p>The error in your original code is caused by failing to use tuple expansion in the class definition. I would suggest simplifying your code to this:</p>
<pre><code># Get software
software = os.getenv('SOFTWARE')
BaseClasses = [QtGui.QWidget]
if software == 'maya':
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
BaseClasses.insert(0, MayaQWidgetDockableMixin)
class Build(*BaseClasses):
def __init__(self, parent=None):
super(Build, self).__init__(parent)
</code></pre>
<p><strong>UPDATE</strong>:</p>
<p>The above code will only work with Python 3, so it looks like a solution using <code>type()</code> will be needed for Python 2. From the other comments, it appears that the <code>MayaQWidgetDockableMixin</code> class may be a old-style class, so a solution like this may be necessary:</p>
<pre><code>def BaseClass():
bases = [QtGui.QWidget]
if software == 'maya':
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
class Mixin(MayaQWidgetDockableMixin, object): pass
bases.insert(0, Mixin)
return type('BuildBase', tuple(bases), {})
class Build(BaseClass()):
def __init__(self, parent=None):
super(Build, self).__init__(parent)
</code></pre>
| 1 | 2016-10-05T18:53:34Z | [
"python",
"class",
"inheritance",
"dynamic"
]
|
python Numpy transpose and calculate | 39,878,921 | <p>I am relatively new to python and numpy. I am currently trying to replicate the following table as shown in the image in python using numpy.</p>
<p><a href="http://i.stack.imgur.com/nVKbb.png" rel="nofollow"><img src="http://i.stack.imgur.com/nVKbb.png" alt="enter image description here"></a>
As in the figure, I have got the columns "group, sub_group,value" that are populated. I want to transpose column "sub_group" and do a simple calculation i.e. value minus shift(value) and display the figure in the lower diagonal of the matrix for each group. If sub_group is "0", then assign the whole column as 0. The <strong>transposed sub_group</strong> can be named anything (preferably index numbers) if it makes it easier. I am ok with a pandas solution as well. I just think pandas may be slow?</p>
<p>Below is code in array form:</p>
<pre><code>import numpy as np
a=np.array([(1,-1,10),(1,0,10),(1,-2,15),(1,-3,1),(1,-4,1),(1,0,12),(1,-5,16)], dtype=[('group',float),('sub_group',float),('value',float)])
</code></pre>
<p>Any help would be appreciated.
Regards,
S</p>
| 0 | 2016-10-05T16:07:33Z | 39,879,492 | <p>This piece of code does the calculation for the example of the subgroup, I am not sure if this is what you actually want, in that case post a comment here and I will edit</p>
<pre><code>import numpy as np

array_1 = np.array([(1,-1,10),(1,0,10),(1,-2,15),(1,-3,1),(1,-4,1),(1,0,12),(1,-5,16)])

#transpose the matrix
transposed_group = array_1.transpose()

#loop over the first row
first_value = transposed_group[0,0]  # grab it before the loop overwrites it in place
for i in range(0,len(transposed_group[1,:])):
    #value[i] - first value of the row
    transposed_group[0,i] = transposed_group[0,i] - first_value

print transposed_group
</code></pre>
<p>In case you want to display that in the diagonal of the matrix, you can loop through the rows and columns, as for example: </p>
<pre><code>import numpy as np

#create an array of 0
array = np.zeros(shape=(3,3))
print array

#fill the array with 1 on the diagonal
#loop over rows
for i in range(0,len(array[:,1])):
    #loop over columns
    for j in range(0,len(array[1,:])):
        if i == j:
            array[i,j] = 1
print array
</code></pre>
| 1 | 2016-10-05T16:39:27Z | [
"python",
"arrays",
"pandas",
"numpy"
]
|
python Numpy transpose and calculate | 39,878,921 | <p>I am relatively new to python and numpy. I am currently trying to replicate the following table as shown in the image in python using numpy.</p>
<p><a href="http://i.stack.imgur.com/nVKbb.png" rel="nofollow"><img src="http://i.stack.imgur.com/nVKbb.png" alt="enter image description here"></a>
As in the figure, I have got the columns "group, sub_group,value" that are populated. I want to transpose column "sub_group" and do a simple calculation i.e. value minus shift(value) and display the figure in the lower diagonal of the matrix for each group. If sub_group is "0", then assign the whole column as 0. The <strong>transposed sub_group</strong> can be named anything (preferably index numbers) if it makes it easier. I am ok with a pandas solution as well. I just think pandas may be slow?</p>
<p>Below is code in array form:</p>
<pre><code>import numpy as np
a=np.array([(1,-1,10),(1,0,10),(1,-2,15),(1,-3,1),(1,-4,1),(1,0,12),(1,-5,16)], dtype=[('group',float),('sub_group',float),('value',float)])
</code></pre>
<p>Any help would be appreciated.
Regards,
S</p>
| 0 | 2016-10-05T16:07:33Z | 39,879,588 | <p>Try this out :</p>
<pre><code>import numpy as np
import pandas as pd
a=np.array([(1,-1,10),(1,0,10),(1,-2,15),(1,-3,1),(1,-4,1),(1,0,12),(1,-5,16)], dtype=[('group',float),('sub_group',float),('value',float)])
df = pd.DataFrame(a)
for i in df.index:
col_name = str(int(df['sub_group'][i]))
df[col_name] = None
if df['sub_group'][i] == 0:
df[col_name] = 0
else:
val = df['value'][i]
for j in range(i, df.index[-1]+1):
df[col_name][j] = val - df['value'][j]
</code></pre>
<p>For the upper triangle of the matrix, I have put <code>None</code> values. You can replace them with whatever you want.</p>
| 1 | 2016-10-05T16:46:14Z | [
"python",
"arrays",
"pandas",
"numpy"
]
|
Creating file from Django <InMemoryUploadedFile> | 39,878,944 | <p>I have a couple of .xlsx files uploaded from a form within a Django website. For reasons that aren't worth explaining, I need to get a file path for these file objects as opposed to performing actions directly upon them. Tl;dr: I'd have to spend days re-writing a ton of someone else's code. </p>
<p>Is the following the best way to do this: </p>
<ol>
<li>Save objects as files</li>
<li>Get filepaths of newly created files</li>
<li>do_something()</li>
<li>Delete files</li>
</ol>
<p>Thanks.</p>
| 0 | 2016-10-05T16:08:30Z | 39,884,192 | <p>NamedTemporaryFile. The with block does all the cleanup for you, and the Named version of this means the temp file can be handed off by filename to some other process if you have to go that far. Sorta example (note this is mostly coded on the fly so no promises of working code here)</p>
<pre><code>import subprocess
import tempfile

def handler(inmemory_fh):
    with tempfile.NamedTemporaryFile() as f_out:
        f_out.write(inmemory_fh.read())  # or something
        f_out.flush()  # make sure the bytes hit disk before the subprocess reads them
        subprocess.call(['/usr/bin/dosomething', f_out.name])
</code></pre>
<p>When the with block exits, the file gets removed.</p>
<p>As for whether that's the best way? Enh, you might be able to reach into their uploader code and tell django to write it to a file, or you can poke at the django settings for it (<a href="https://docs.djangoproject.com/en/1.10/ref/settings/#file-upload-settings" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/settings/#file-upload-settings</a>) and see if django will just do it for you. Depends on what exactly you need to do with that file.</p>
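<p>For the settings route, a sketch: with these two settings Django spools every upload to a real temporary file, and <code>file.temporary_file_path()</code> then gives you its path.</p>
<pre><code># settings.py
FILE_UPLOAD_MAX_MEMORY_SIZE = 0  # never keep uploads only in memory
FILE_UPLOAD_HANDLERS = [
    'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]
</code></pre>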
| 1 | 2016-10-05T21:42:52Z | [
"python",
"django",
"python-2.7"
]
|
502 Bad Gateway nginx/1.1.19 on django | 39,879,034 | <p>I am new to this. I took an image of a running Django application and spawned a new VM that points to a different database, but I am getting "502 Bad Gateway nginx/1.1.19".</p>
<p>When I tested this in development mode it works fine, but not otherwise.</p>
<p>I looked into /var/log/nginx/access.log and error.log but found nothing there.
Any help would be appreciated.</p>
| 0 | 2016-10-05T16:13:11Z | 39,879,669 | <p>Error <strong>502 Bad Gateway</strong> means that the NGINX server used to access your site couldn't communicate properly with the <em>upstream server</em> (your application server).</p>
<p>This can mean that either or both of your NGINX server and your Django Application server are configured incorrectly.</p>
<p>Double-check the configuration of your NGINX server to check it's proxying to the correct domain/address of your application server and that it is otherwise configured correctly.</p>
<p>If you're sure this isn't the issue then check the configuration of your application server. Are you able to connect directly to the application server's address? If you are able to log in to the server running the application, you can try <code>localhost:<port></code> using your app's port number to connect directly. You can try it with <code>curl</code> to see what response code you get back.</p>
| 1 | 2016-10-05T16:51:22Z | [
"python",
"django",
"nginx"
]
|
How do I get conversion with "OverflowError: Python int too large to convert to C long" error working on Windows Server 2012? | 39,879,047 | <p>When running this on Anaconda Python 2.7.12, Pandas 18.1, Windows Server 2012:</p>
<pre><code>df['z'] = df['y'].str.replace(' ', '').astype(int)
</code></pre>
<p>I get this error:</p>
<pre><code>OverflowError: Python int too large to convert to C long
</code></pre>
<p>I do not get this error on MacOS 10.11 or Ubuntu 14.04. I read from somewhere else that the Windows C++ compiler has a different definition of long than a Unix-like OS. If so, how do I make this work on Windows?</p>
<p>Additionally, data.txt is only 172 KB in size. If it helps, data.txt takes this form:</p>
<pre><code>x|y
99999917|099999927 9991
99999911|999999979 9994
99999912|999999902 9992
</code></pre>
 | 2 | 2016-10-05T16:14:12Z | 39,885,044 | <p><code>int</code> is interpreted by numpy as the <code>np.int_</code> dtype, which corresponds to a C <code>long</code>. On Windows, even on a 64 bit system, this is a 32 bit integer. </p>
<p>So if you need to cast larger values, specify a 64 bit integer using</p>
<pre><code>.astype('int64')
</code></pre>
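<p>Applied to the line from the question, that would be:</p>
<pre><code>df['z'] = df['y'].str.replace(' ', '').astype('int64')
</code></pre>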
| 2 | 2016-10-05T23:08:49Z | [
"python",
"c++",
"pandas",
"numpy",
"anaconda"
]
|
Python - Socket - Networking - Very Simple | 39,879,091 | <p>Unfortunately I am unable to find an answer for this even after an hour of searching.</p>
<p>I borrowed this from online tutorials - Youtube - Draps</p>
<pre><code>import socket, threading, time, wx
tLock = threading.Lock()
shutdown = False
def receiving(name, sock):
while not shutdown:
try:
tLock.acquire()
#while True:
data, addr = sock.recvfrom(1024)
print str(data) + "hehehe"
except:
pass
finally:
tLock.release()
host = '127.0.0.1'
port = 0
server = ('127.0.0.1', 5000)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host,port))
s.setblocking(0)
rT = threading.Thread(target = receiving, args = ("RecvThread", s))
rT.start()
alias = raw_input("Name: ")
message = raw_input(alias + "-->")
while message != 'q':
if message != '':
s.sendto(alias + ":" + message, server)
tLock.acquire()
message = raw_input(alias + "-->")
tLock.release()
time.sleep(0.2)
shutdown = True
rT.join()
s.close()
</code></pre>
<p>I have two questions: </p>
<ol>
<li><p>In the code, <code>host = '127.0.0.1'</code>. However if I use <code>socket.gethostbyname(socket.gethostname())</code>, I get a socket error. Could anyone tell me why is this so? When I deploy a similar code to an external computer, it should not have this problem of creating a socket. </p></li>
<li><p>I started a thread that runs continuously. Why is the shutdown value (which is declared after the thread has started) able to stop the rT thread and break the while loop? I am unable to understand the physics and surprised it is working. </p></li>
</ol>
| 0 | 2016-10-05T16:17:03Z | 39,879,206 | <p>I'm not 100% sure on the first question but for the second one <code>shutdown</code> is a global variable. Any threads spawned from the main thread have the ability to see the <code>shutdown</code></p>
<p>Can you post the socket error you are getting?</p>
| 1 | 2016-10-05T16:23:32Z | [
"python",
"multithreading",
"sockets",
"networking"
]
|
Unexpected Behavior When Appending Multiple List Inside function in PyQt GUI | 39,879,092 | <p>I am experiencing strange behavior while trying to append to two different lists from inside of the same function. </p>
<pre><code>x = 0
y = 0
list_1 = []
list_2 = []
def append_function(self):
self.x += 1
self.y += 1
self.list_1.append(self.x)
self.list_2.append(self.y)
print self.list_1
print self.list_2
</code></pre>
<p>The output that I would expect from running the function once is:</p>
<pre><code>[1]
[1]
</code></pre>
<p>If I ran twice I would expect:</p>
<pre><code>[1,2]
[1,2]
</code></pre>
<p>The actual output that I get from running the function once is:</p>
<blockquote>
<p>[1]</p>
</blockquote>
<p>When I run the function twice I get:</p>
<pre><code>[1]
[1,2]
</code></pre>
<p>Every time I run the function the first list lags behind. This only happens when I run the code inside of the GUI class. Otherwise it runs as expected. Does anyone know why this happens or know a workaround?</p>
<p>Here's the entire program:</p>
<pre><code>#imports
from PyQt4 import QtGui
from PyQt4 import QtCore
import ui_sof_test #Gui File
import sys
class Worker(QtCore.QThread):
def run(self):
pass
class Gui(QtGui.QMainWindow, ui_sof_test.Ui_MainWindow):
def __init__(self):
super(self.__class__, self).__init__()
self.setupUi(self)
self.start()
self.pushButton.clicked.connect(self.append_function)
x = 0
y = 0
list_1 = []
list_2 = []
def append_function(self):
self.x += 1
self.y += 1
self.list_1.append(self.x)
self.list_2.append(self.y)
print self.list_1
print self.list_2
#------------------------------------------------------------------------------------------------#
# Worker Thread(s) Setup #
##################################################################################################
def start(self):
self.setupWorker()
def setupWorker(self):
self.work = Worker()
#self.work.mcx.connect(self.setxy,QtCore.Qt.QueuedConnection)
#self.work.mcy.connect(self.move_cur_lifty,QtCore.Qt.QueuedConnection)
if not self.work.isRunning():#if the thread has not been started let's kick it off
self.work.start()
def main():
app = QtGui.QApplication(sys.argv) # A new instance of QApplication
form = Gui() # We set the form to be our ExampleApp (design)
form.show() # Show the form
app.exec_() # and execute the. app
if __name__ == '__main__': # if we're running file directly and not importing it
main() # run the main function
</code></pre>
 | 1 | 2016-10-05T16:17:10Z | 39,880,815 | <p>The problem is unique to the IPython console in the Spyder IDE. Running the code in another Python terminal produces the expected result.</p>
| 0 | 2016-10-05T18:01:38Z | [
"python",
"python-2.7",
"pyqt4"
]
|
TypeError: reduce() of empty sequence with no initial value when merging | 39,879,243 | <p>I have dataframes which look like this:</p>
<p>df1</p>
<pre><code>Value Hectares_2006
1 10
5 15
</code></pre>
<p>df2</p>
<pre><code>Value Hectares_2007
1 20
5 5
</code></pre>
<p>df3</p>
<pre><code>Value Hectares_2008
1 22
5 3
</code></pre>
<p>and I want to merge them all together by first putting all of the dataframes in a list and then using:</p>
<p><code>dfs = reduce(lambda left, right: pd.merge(left, right, on=['Value'], how='outer'), list1</code>)</p>
<p>but this returns:</p>
<pre><code> File "E:/python codes/temp.py", line 32, in <module>
dfs=reduce(lambda left, right: pd.merge(left, right, on=['VALUE'], how='outer'), list1)
TypeError: reduce() of empty sequence with no initial value
</code></pre>
<p>my desired output is:</p>
<pre><code>Value Hectares_2006 Hectares_2007 Hectares_2008
1 10 20 22
5 15 5 3
</code></pre>
<p>my full code is this, with <code>files</code> pathway the pathway to all my files which become the dataframes:</p>
<pre><code>import pandas as pd, os
from simpldbf import Dbf5
list1=[]
files=r'E:\Documents\2015 Summer RA\CDL_in_buffer'
for f in os.listdir(files):
if '.dbf' in f and '.xml' not in f:
table=Dbf5(os.path.join(files,f))
df=table.to_dataframe()
columns=['VALUE', 'CLASS_NAME','Count']
df=df[columns]
if ('2006' in f) or ('2007' in f) or ('2008' in f) or ('2009' in f):
df['Hectares']=df.Count*0.3136
if ('2010' in f) or ('2011' in f) or ('2012' in f) or ('2013' in f) or ('2014' in f) or ('2015' in f):
df['Hectares']=df.Count*0.09
df.drop(['Count'], axis=1, inplace=True)
df=df[df['CLASS_NAME'] .isin (['Corn'])]
df.rename(columns={'CLASS_NAME': 'Crop_' + f.split('.')[0], 'Hectares': 'Hectares_' + f.split('.')[0] }, inplace=True)
list1.append(df)
dfs=reduce(lambda left, right: pd.merge(left, right, on=['VALUE'], how='outer'), list1)
</code></pre>
| 0 | 2016-10-05T16:25:25Z | 39,879,518 | <p>As mentioned in the comments, you need to un-indent the <code>dfs=...</code> line so that it's outside of the for loop. Otherwise, <code>list1</code> will be empty on the first iteration of the loop if the first file seen doesn't contain <code>.dbf</code>, which will cause the empty sequence error.</p>
| 1 | 2016-10-05T16:40:54Z | [
"python",
"pandas"
]
|
how do i replace a character in a string by range? | 39,879,334 | <p>For example:</p>
<pre><code>url = 'www.google.com/bla.bla'
</code></pre>
<p>I need to replace '.' with '' in the last 7 characters
==> 'www.google.com/blabla'</p>
<p>I have tried :</p>
<pre><code>for i in range(15,22):
if url[i]=='.':
url = url.replace('.', "")
</code></pre>
<p>But i get this error:</p>
<blockquote>
<p>IndexError: string index out of range</p>
</blockquote>
| 2 | 2016-10-05T16:30:29Z | 39,879,405 | <p>You need to change your <code>if</code> line since a single <code>=</code> is for assignment not comparison:</p>
<pre><code>if url[i] == '.':
</code></pre>
<p>And note then when using <code>replace()</code> you will need to update the original string (<code>url</code>) since <code>replace()</code> will return a new string, not update the existing string.</p>
<p>However, I think <strong>@Patrick Haugh's</strong> one line answer is the better solution, though I would modify it as follows (if you are always using a google address):</p>
<pre><code>url = url[:15] + url[15:].replace('.', '')
</code></pre>
| 3 | 2016-10-05T16:34:11Z | [
"python",
"string"
]
|
how do i replace a character in a string by range? | 39,879,334 | <p>For example:</p>
<pre><code>url = 'www.google.com/bla.bla'
</code></pre>
<p>I need to replace '.' with '' in the last 7 characters
==> 'www.google.com/blabla'</p>
<p>I have tried :</p>
<pre><code>for i in range(15,22):
if url[i]=='.':
url = url.replace('.', "")
</code></pre>
<p>But i get this error:</p>
<blockquote>
<p>IndexError: string index out of range</p>
</blockquote>
| 2 | 2016-10-05T16:30:29Z | 39,879,425 | <p>In one line:</p>
<pre><code>url = url[:-7] + (url[-7:].replace('.', ''))
</code></pre>
| 4 | 2016-10-05T16:35:26Z | [
"python",
"string"
]
|
how do i replace a character in a string by range? | 39,879,334 | <p>For example:</p>
<pre><code>url = 'www.google.com/bla.bla'
</code></pre>
<p>I need to replace '.' with '' in the last 7 characters
==> 'www.google.com/blabla'</p>
<p>I have tried :</p>
<pre><code>for i in range(15,22):
if url[i]=='.':
url = url.replace('.', "")
</code></pre>
<p>But i get this error:</p>
<blockquote>
<p>IndexError: string index out of range</p>
</blockquote>
| 2 | 2016-10-05T16:30:29Z | 39,879,434 | <p>Thanks to @Patrick for pointing out the flaw in my previous answer.</p>
<hr>
<p>Instead of using <code>range()</code>, just use list comprehension and string indices:</p>
<pre><code>url = 'www.google.com/bla.bla'
url = url[:15] + ''.join([c for c in url[15:] if c!= '.'])
</code></pre>
<p>Explanation.</p>
<ul>
<li><code>url =</code>: reassign the value of url to...</li>
<li><code>url[:15]</code>: all the characters in <code>url</code> before the last seven...</li>
<li><code>+</code>: plus..</li>
<li><code>''.join([c for c in url[15:] if c!= '.'])</code>: the last seven characters in the string <code>url</code>, <strong>only if</strong> the value is not a <code>.</code>. Then join the list of characters together into a string.</li>
</ul>
<p>After being modified the value of url will be: <code>'www.google.com/blabla'</code></p>
<p><code>print("Hello World") #keywords such as and or as are still highlighted</code></p>
| 0 | 2016-10-05T16:36:19Z | [
"python",
"string"
]
|
how do i replace a character in a string by range? | 39,879,334 | <p>For example:</p>
<pre><code>url = 'www.google.com/bla.bla'
</code></pre>
<p>I need to replace '.' with '' in the last 7 characters
==> 'www.google.com/blabla'</p>
<p>I have tried :</p>
<pre><code>for i in range(15,22):
if url[i]=='.':
url = url.replace('.', "")
</code></pre>
<p>But i get this error:</p>
<blockquote>
<p>IndexError: string index out of range</p>
</blockquote>
| 2 | 2016-10-05T16:30:29Z | 39,879,465 | <p>A more <em>generic approach</em> would be to <em>split</em> the URL, replace the dot and then <em>join</em>:</p>
<pre><code>In [1]: url = 'www.google.com/bla.bla'
In [2]: s = url.split("/")
In [3]: s[1] = s[1].replace(".", "")
In [4]: "/".join(s)
Out[4]: 'www.google.com/blabla'
</code></pre>
| 6 | 2016-10-05T16:37:47Z | [
"python",
"string"
]
|
What are metaclass bases in python? | 39,879,345 | <p>I'm trying to extend django-allauth to do something specific to my projects. I'm basically trying to write my own wrapper on top of django-allauth, and want the installation, configuration and other stuff to be quite similar to allauth.<br>
For this, I started by extending the <code>AppSettings</code> class from <code>allauth/accounts/app_settings.py</code>. I created my own <code>app_settings.py</code> and did something like this:</p>
<pre><code>from allauth.account import app_settings as AllAuthAppSettings
class MyAppSettings (AllAuthAppSettings):
def __init__(self, prefix):
# do something
</code></pre>
<p>Also, at the end of the app_settings.py, I simply put the following (copying it from django-allauth itself):</p>
<pre><code>import sys
my_app_settings = MyAppSettings('MY_PREFIX_')
my_app_settings.__name__ = __name__
sys.modules[__name__] = my_app_settings
</code></pre>
<p>Now, when I start my project, it gives me the following error:</p>
<pre><code>TypeError: Error when calling the metaclass bases
__init__() takes exactly 2 arguments (4 given)
</code></pre>
<p>Honestly, I'm quite new to the python-django world, and don't really understand what's happening in those last 4 lines.<br></p>
<p>What is <em>metaclass bases</em>? What are the 4 arguments that are being passed to it? How do I make this flow work?</p>
<p>Any help would be highly appreciated.</p>
<p>Update: Here's the stack trace:</p>
<pre><code>Unhandled exception in thread started by <function wrapper at 0x104146578>
Traceback (most recent call last):
File "/Users/user/anaconda/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/user/anaconda/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/Users/user/anaconda/lib/python2.7/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
six.reraise(*_exception)
File "/Users/user/anaconda/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/user/anaconda/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/user/anaconda/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/Users/user/anaconda/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/Users/user/anaconda/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/user/myproject/my_app/models.py", line 18, in <module>
from .model_managers import *
File "/Users/user/myproject/my_app/model_managers.py", line 89, in <module>
from . import app_settings
File "/Users/user/myproject/my_app/app_settings.py", line 9, in <module>
TypeError: Error when calling the metaclass bases
__init__() takes exactly 2 arguments (4 given)
</code></pre>
| 1 | 2016-10-05T16:31:06Z | 39,879,916 | <p>It doesn't look like you should be able to inherit from <code>AllAuthAppSettings</code></p>
<p>The <code>django-allauth</code> package is doing some very ugly python magic</p>
<pre><code>import sys # noqa
app_settings = AppSettings('ACCOUNT_')
app_settings.__name__ = __name__
sys.modules[__name__] = app_settings
</code></pre>
<p>Basically, when you import the <code>app_settings</code> module, it creates an instance of the <code>AppSettings</code> class, renames it to the name of the <code>app_settings</code> module, and then <em>replaces the imported module with the instance of the class</em>!</p>
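<p>You can see the swap for yourself (the exact repr depends on the allauth version):</p>
<pre><code>from allauth.account import app_settings
print type(app_settings)  # an AppSettings instance, not a module
</code></pre>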
<p>You cannot inherit from class instances. I'm guessing you want to inherit from the non-instantiated <code>AppSettings</code> class. To do that, you need to inherit from the <code>class</code> of <code>app_settings</code>, not <code>app_settings</code> directly:</p>
<pre><code>from allauth.account import app_settings as AllAuthAppSettings
class MyAppSettings(AllAuthAppSettings.__class__):
...
</code></pre>
<p>I don't think you should need to copy those lines from the end of the <code>app_settings</code> module to hack your module into a class.</p>
| 1 | 2016-10-05T17:05:21Z | [
"python",
"django",
"class",
"metaclass"
]
|
How do you make two turtles draw at once in Python? | 39,879,410 | <p>How do you make two turtles draw at once? I know how to make turtles draw and how to make two or more but I don't know how you can make them draw at the same time.
Please help!</p>
| -4 | 2016-10-05T16:34:31Z | 39,880,134 | <p>Here's a minimalist example using timer events:</p>
<pre><code>import turtle
t1 = turtle.Turtle(shape="turtle")
t2 = turtle.Turtle(shape="turtle")
t1.setheading(45)
t2.setheading(-135)
def move_t1():
t1.forward(1)
turtle.ontimer(move_t1, 10)
def move_t2():
t2.forward(1)
turtle.ontimer(move_t2, 10)
turtle.ontimer(move_t1, 10)
turtle.ontimer(move_t2, 10)
turtle.exitonclick()
</code></pre>
| 1 | 2016-10-05T17:18:48Z | [
"python",
"turtle-graphics"
]
|
Recursively go through all directories until you find a certain file in Python | 39,879,638 | <p>What's the best way in Python to recursively go through all directories until you find a certain file?
I want to look through all the files in my directory and see if the file I'm looking for is in that directory.
If I cannot find it, I go to the parent directory and repeat the process. I also want to count
how many directories and files I go through before I find the file. If there is no file at the
end of the loop return that there is no file</p>
<p>startdir = "Users/..../file.txt"</p>
<p>findfile is the name of the file. This is my current loop, but I want to make it work using recursion.</p>
<pre><code>def walkfs(startdir, findfile):
curdir = startdir
dircnt = 0
filecnt = 0
for directory in startdir:
for file in directory:
curdir = file
if os.path.join(file)==findfile:
return (dircnt, filecnt, curdir)
else:
dircnt+=1
filecnt+=1
</code></pre>
| 0 | 2016-10-05T16:49:11Z | 39,879,675 | <pre><code>def findPath(startDir,targetFile):
file_count = 0
for i,(current_dir,dirs,files) in enumerate(os.walk(startDir)):
file_count += len(files)
if targetFile in files:
return (i,file_count,os.path.join(current_dir,targetFile))
return (i,file_count,None)
print findPath("C:\\Users\\UserName","some.txt")
</code></pre>
| 0 | 2016-10-05T16:51:39Z | [
"python",
"file",
"recursion",
"directory"
]
|
Recursively go through all directories until you find a certain file in Python | 39,879,638 | <p>What's the best way in Python to recursively go through all directories until you find a certain file?
I want to look through all the files in my directory and see if the file I'm looking for is in that directory.
If I cannot find it, I go to the parent directory and repeat the process. I also want to count
how many directories and files I go through before I find the file. If there is no file at the
end of the loop return that there is no file</p>
<p>startdir = "Users/..../file.txt"</p>
<p>findfile is name of file. This is my current loop but I want to make it work using recursion.</p>
<pre><code>def walkfs(startdir, findfile):
curdir = startdir
dircnt = 0
filecnt = 0
for directory in startdir:
for file in directory:
curdir = file
if os.path.join(file)==findfile:
return (dircnt, filecnt, curdir)
else:
dircnt+=1
filecnt+=1
</code></pre>
| 0 | 2016-10-05T16:49:11Z | 39,879,686 | <p>Don't re-invent the directory-recursion wheel. Just use the <a href="https://docs.python.org/3/library/os.html#os.walk" rel="nofollow"><code>os.walk()</code> function</a>, which gives you a loop over a recursive traversal of directories:</p>
<pre><code>def walkfs(startdir, findfile):
dircount = 0
filecount = 0
for root, dirs, files in os.walk(startdir):
if findfile in files:
return dircount, filecount + files.index(findfile), os.path.join(root, findfile)
dircount += 1
filecount += len(files)
# nothing found, return None instead of a full path for the file
return dircount, filecount, None
</code></pre>
| 3 | 2016-10-05T16:52:15Z | [
"python",
"file",
"recursion",
"directory"
]
|
Can I efficiently create a pandas data frame from semi-structured binary data? | 39,879,711 | <p>I need to convert large binary files to n x 3 arrays. The data is a series of image frames defined by (x, y, time) coordinates. Each frame uses two 32-bit integers to define the n x 3 dimensions, and n triplets of 16-bit integers to define the (x, y, time) values. The result is a binary structure that looks like:</p>
<p><code>int32, int32, uint16, uint16, uint16, ..., int32, int32, uint16, uint16, uint16</code>, and so on. </p>
<p>My first attempt involved converting the binary data to a 1D array and then adding the sections I wanted to a data frame. The current data is already sorted in such a way that frame separation can be reconstructed without the two <code>int32</code> values, so they can be dropped if necessary. If this weren't the case, the same effect could be achieved by sorting each frame individually before adding it to the final data frame.</p>
<pre><code>import numpy as np
import pandas as pd
def frame_extract(index):
n = data[index]
subarray=data[index+4:index+(3*n+4)]
subarray=np.reshape(subarray, (len(subarray)/3,3))
frame = pd.DataFrame(data=subarray, columns=['x','y','t'])
return frame
def indexer(index):
n = data[index]
new_index = index+(3*n+4)
return new_index
data = np.fromfile('file.bin', dtype='<u2')
framedata = pd.DataFrame()
index = 0
while index <= len(data)-1:
framedata = framedata.append(frame_extract(index), ignore_index=True)
index = indexer(index)
print(framedata)
</code></pre>
<p>The above works, but the while loop is very slow, especially when compared to the following structured method, which would work fine (and orders of magnitude faster) if the <code>int32</code> values were not in the way:</p>
<pre><code>dt = np.dtype([('x', '<u2'), ('y', '<u2'), ('time', '<u2')])
data = np.fromfile("file.bin", dtype=dt)
df = pd.DataFrame(data.tolist(), columns=data.dtype.names)
</code></pre>
<p>Is there a more efficient way of approaching this? If so, would it be easier to do while unpacking the binary data, or after it has been converted to integers?</p>
<p>I'm currently considering using a generator to read the binary file as a series of chunks (i.e. to use the two 32-bit integers to decide how large the 16-bit integer chunk I need is), but I'm not yet familiar enough with these to know if that's the right approach. </p>
| 2 | 2016-10-05T16:53:26Z | 39,880,537 | <p>Each time you append to the data frame, you are copying the whole thing to a new location in memory. You will want to initialise the data frame with a numpy array with the full final size, and then index into that with iloc(), etc. when you populate it with the imaging data.</p>
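<p>A rough sketch of the preallocation idea (the total row count is a stand-in you would compute from the frame headers first):</p>
<pre><code>total_rows = 1000000  # placeholder: the sum of n over all frames
out = np.empty((total_rows, 3), dtype='<u2')
pos = 0
for frame in frames:              # each frame is an (n, 3) uint16 array
    out[pos:pos + len(frame)] = frame
    pos += len(frame)
df = pd.DataFrame(out, columns=['x', 'y', 't'])
</code></pre>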
<p>Also, is there a specific reason you are using pandas data frames to store imaging data? They are not really meant for that...</p>
| 0 | 2016-10-05T17:44:35Z | [
"python",
"pandas",
"numpy",
"binary"
]
|
Can I efficiently create a pandas data frame from semi-structured binary data? | 39,879,711 | <p>I need to convert large binary files to n x 3 arrays. The data is a series of image frames defined by (x, y, time) coordinates. Each frame uses two 32-bit integers to define the n x 3 dimensions, and n triplets of 16-bit integers to define the (x, y, time) values. The result is a binary structure that looks like:</p>
<p><code>int32, int32, uint16, uint16, uint16, ..., int32, int32, uint16, uint16, uint16</code>, and so on. </p>
<p>My first attempt involved converting the binary data to a 1D array and then adding the sections I wanted to a data frame. The current data is already sorted in such a way that frame separation can be reconstructed without the two <code>int32</code> values, so they can be dropped if necessary. If this weren't the case, the same effect could be achieved by sorting each frame individually before adding it to the final data frame.</p>
<pre><code>import numpy as np
import pandas as pd
def frame_extract(index):
n = data[index]
subarray=data[index+4:index+(3*n+4)]
subarray=np.reshape(subarray, (len(subarray)/3,3))
frame = pd.DataFrame(data=subarray, columns=['x','y','t'])
return frame
def indexer(index):
n = data[index]
new_index = index+(3*n+4)
return new_index
data = np.fromfile('file.bin', dtype='<u2')
framedata = pd.DataFrame()
index = 0
while index <= len(data)-1:
framedata = framedata.append(frame_extract(index), ignore_index=True)
index = indexer(index)
print(framedata)
</code></pre>
<p>The above works, but the while loop is very slow, especially when compared to the following structured method, which would work fine (and orders of magnitude faster) if the <code>int32</code> values were not in the way:</p>
<pre><code>dt = np.dtype([('x', '<u2'), ('y', '<u2'), ('time', '<u2')])
data = np.fromfile("file.bin", dtype=dt)
df = pd.DataFrame(data.tolist(), columns=data.dtype.names)
</code></pre>
<p>Is there a more efficient way of approaching this? If so, would it be easier to do while unpacking the binary data, or after it has been converted to integers?</p>
<p>I'm currently considering using a generator to read the binary file as a series of chunks (i.e. to use the two 32-bit integers to decide how large the 16-bit integer chunk I need is), but I'm not yet familiar enough with these to know if that's the right approach. </p>
| 2 | 2016-10-05T16:53:26Z | 39,922,295 | <p>The <code>count</code> parameter simplified this by allowing <code>np.fromfile</code> to take advantage of the structure defined by the <code>int32</code> values. The following <code>for</code> loop creates each image frame individually:</p>
<pre><code>f = open('file.bin', 'rb')
for i in np.arange(1,15001,1):
m, n = np.fromfile(f, dtype='<i', count=2)
frame = np.reshape(np.fromfile(f, dtype='<u2', count=m*n), (m, n))
</code></pre>
<p>Each frame can be added to a list and converted to a data frame using: </p>
<pre><code>f = open('file.bin', 'rb')
xyt_data = list()
for i in np.arange(1,15001,1):
m, n = np.fromfile(f, dtype='<i', count=2)
frame = np.reshape(np.fromfile(f, dtype='<u2', count=m*n), (m, n))
xyt_data.append(frame)
df = pd.DataFrame(np.vstack(xyt_data), columns=['x','y','t'])
</code></pre>
<p>The result is about three orders of magnitude faster than the version described in the original question.</p>
| 0 | 2016-10-07T16:48:02Z | [
"python",
"pandas",
"numpy",
"binary"
]
|
Django materialize | 39,879,746 | <p>So I have in my <code>forms.py</code>:</p>
<pre><code> auto_current_type = forms.ModelMultipleChoiceField(label="Авто:", queryset=Auto_current_type.objects.all(),
widget=forms.CheckboxSelectMultiple())
</code></pre>
<p>My template:</p>
<pre><code><div class="row">
<form class="col s6 offset-s3 l6 offset-l3 m6 offset-m3" method="post">
{% form %}
{% endform %}
<button class="btn waves-effect waves-light" type="submit" name="action">ÐоиÑк
</button>
</form>
</div>
</code></pre>
<p>But here is how it looks:
<a href="http://i.stack.imgur.com/pjiwq.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/pjiwq.jpg" alt="enter image description here"></a></p>
<p>As you can see, there are the names of the car types, and below them checkboxes with the same names. But the checkboxes are not working. Any help?</p>
 | 0 | 2016-10-05T16:55:22Z | 39,881,310 | <p>So I have found a solution
<a href="https://pypi.python.org/pypi/django-materialize-form/0.1.1" rel="nofollow">here</a></p>
<p>Hope this will help.</p>
| -1 | 2016-10-05T18:31:11Z | [
"python",
"django",
"forms",
"django-forms",
"materialize"
]
|
Searching rows of one dataframe in another dataframe in Python | 39,879,845 | <p>I have two data frames in python:</p>
<pre><code> df1
A B C
1 1 0
1 2 1
0 1 2
. . .
. . .
df2
T W S Y
7 4 5 [1]
8 12 4 [0,7]
10 14 6 [2,3]
</code></pre>
<p>I want to go through all rows of df1 and find the observations in df2 with the following characteristics:</p>
<pre><code> (T <= A) and (W > A) and (S == B) and (sum(pd.Series(Y).isin([C])) != 0)
</code></pre>
<p>Obviously, using for loop is not efficient but could you please tell me how can I solve my problem? I've heard that I could vectorization technique like Matlab but I'm not sure if we have such technique in python or not.</p>
| 0 | 2016-10-05T17:01:31Z | 39,880,734 | <p>You can do a merge of the two dataframes and filter the result. I am not sure how optimized it will be, but it will do the job faster than python for loops.</p>
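<p>A sketch of what that could look like: a cross join via a dummy key, then a boolean filter (the <code>Y</code> membership test still needs a row-wise <code>apply</code>):</p>
<pre><code>m = df1.assign(k=1).merge(df2.assign(k=1), on='k').drop('k', axis=1)
mask = ((m['T'] <= m['A']) & (m['W'] > m['A']) & (m['S'] == m['B'])
        & m.apply(lambda r: r['C'] in r['Y'], axis=1))
result = m[mask]
</code></pre>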
| 1 | 2016-10-05T17:56:25Z | [
"python",
"list",
"search",
"dataframe"
]
|