title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
Using bits directly in python
| 39,420,120 |
<p>I want to manipulate a binary number directly in Python. For example, I have the decimal number 18, which I was able to convert to binary using</p>
<pre><code>seed = bin(18)
</code></pre>
<p>The problem is that I want to XOR a few of its bits. If I access <code>seed</code> by indexing, I can't XOR the elements because they are of type <code>str</code>. How can I simply take a decimal number, convert it to binary, and play with its bits?</p>
<p>Thanks in advance</p>
| -1 |
2016-09-09T21:43:07Z
| 39,420,164 |
<p>You can use bitwise operators directly on numbers.</p>
<p>For instance <code>18 & 3</code> gets you <code>2</code>, <code>18 | 3</code> gets you <code>19</code>, and <code>1 << 5</code> gets you <code>32</code>. The human-readable representation you see when you print a number is decimal, but that doesn't change the fact that at the underlying level the numbers are all stored and operated on in binary.</p>
<p><code>bin</code> just gives you the binary representation of a number as a string. While that's useful for debugging or human output, you generally won't want to use it for bit manipulation directly.</p>
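<p>For the XOR in the question you can stay with plain integers the whole time and only call <code>bin</code> for display. A short sketch (the mask value here is just an example):</p>

```python
seed = 18              # 0b10010
mask = 0b01101         # the bits you want to flip

result = seed ^ mask   # XOR works directly on the integers
print(bin(result))     # 0b11111

# individual bits via shifts and masks
bit1 = (seed >> 1) & 1      # read bit 1 (counting from the least significant bit)
cleared = seed & ~(1 << 4)  # clear bit 4
with_bit0 = seed | 1        # set bit 0
```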
| 2 |
2016-09-09T21:46:55Z
|
[
"python",
"arrays",
"binary"
] |
Using bits directly in python
| 39,420,120 |
<p>I want to manipulate a binary number directly in Python. For example, I have the decimal number 18, which I was able to convert to binary using</p>
<pre><code>seed = bin(18)
</code></pre>
<p>The problem is that I want to XOR a few of its bits. If I access <code>seed</code> by indexing, I can't XOR the elements because they are of type <code>str</code>. How can I simply take a decimal number, convert it to binary, and play with its bits?</p>
<p>Thanks in advance</p>
| -1 |
2016-09-09T21:43:07Z
| 39,420,196 |
<p>You can use bitwise operations directly on integers. For printing and debugging you can convert an integer to a binary string with <code>bin</code>, as you already know, and convert a binary string back to an integer with <code>int(binary_string, 2)</code>.</p>
<pre><code>seed = bin(18) # 0b10010
bitmask = '01101'
xor_result = 18 ^ int(bitmask, 2)
print(bin(xor_result)) # 0b11111
</code></pre>
| 1 |
2016-09-09T21:49:53Z
|
[
"python",
"arrays",
"binary"
] |
Scraping font-size from HTML and CSS
| 39,420,152 |
<p>I am trying to scrape the font-size of each section of text in an HTML page. I have spent the past few days trying to do it, but I feel like I am trying to re-invent the wheel. I have looked at python libraries like cssutils, beautiful-soup, but haven't had much luck sadly. I have made my own html parser that finds the font size inside the html only, but it doesn't look at stylesheets which is really important. Any tips to get me headed in the right direction?</p>
| 0 |
2016-09-09T21:46:06Z
| 39,420,644 |
<p>You can use Selenium with Firefox, or with PhantomJS if you're on a headless machine; the browser will render the page, and you can then locate each element and get its attributes.</p>
<p>In Python the method to get attributes is self-explanatory: <code>element_obj.get_attribute('attribute_name')</code>. Note that font-size is a computed CSS property rather than an HTML attribute, so for it you would use <code>element_obj.value_of_css_property('font-size')</code>.</p>
| 0 |
2016-09-09T22:41:37Z
|
[
"python",
"html",
"css",
"web-scraping"
] |
Pandas : Making Decision on groupby size()
| 39,420,183 |
<p>I am trying to do a 'Change Data Capture' using two spreadsheets.
I have grouped my resulting dataframe and am stuck on a strange problem.
Requirement:</p>
<p>Case 1) size of a group == 2, do certain tasks</p>
<p>Case 2) size of a group == 1 , do certain tasks</p>
<p>Case 3) size_of_a_group > 2, do certain tasks</p>
<p>The problem is that no matter what I try, I cannot break the result of the groupby down by group size and then loop through each group.</p>
<p>I would like to do something like :</p>
<pre><code>if(group_by_1.filter(lambda x : len(x) ==2):
for grp,rows in sub(??)group:
for j in range(len(rows)-1):
#check rows[j,'column1'] != rows[j+1,'column1']:
do something
</code></pre>
<p>here is my code snippet. Any help is much appreciated.</p>
<pre><code>import pandas as pd
import numpy as np
pd.set_option('display.height', 1000)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
print("reading wolverine xlxs")
# defining metadata
df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
df_w01 = pd.read_excel("wolverine_1.xlsx", names = df_header)
df_w02 = pd.read_excel("wolverine_2.xlsx", names = df_header)
df_w01['version'] = 'OLD'
df_w02['version'] = 'NEW'
#print(df_w01)
df_m_d = pd.concat([df_w01, df_w02], ignore_index = True).reset_index()
#print(df_m_d)
first_pass_get_duplicates = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep='first')] # This datframe has records which are DUPES on NEW and OLD
#print(first_pass_get_duplicates)
first_pass_drop_duplicate = df_m_d.drop_duplicates(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep=False) # This datframe has records which are unique on desired values evn for first time
#print(first_pass_drop_duplicate)
group_by_1 = first_pass_drop_duplicate.groupby(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'],as_index=False)
#Best Case group_by has 2 elements on big key and at least one row is 'new'
#print(group_by_1.grouper.group_info[0])
#for i,rows in group_by_1:
#if(.transform(lambda x : len(x)==2)):
#print(group_by_1.grouper.group_info[0])
#print(group_by_1.describe())
'''for i,rows in group_by_1:
temp_rows = rows.reset_index()
temp_rows.reindex(index=range(0,len(rows)))
print("group has: ", len(temp_rows))
for j in range(len(rows)-1):
print(j)
print("this iteration: ", temp_rows.loc[j,'Start'])
print("next iteration: ", temp_rows.loc[j+1,'Start'])
if(temp_rows.loc[j+1,'Start'] == temp_rows.loc[j,'Start']):
print("Match")
else:
print("no_match")
print(temp_rows.loc[j,'Start'])
print("++++-----++++")'''
</code></pre>
<p>Any Help is much appreciated.</p>
| 3 |
2016-09-09T21:48:42Z
| 39,427,514 |
<p>This is a case where using a new index might make your life easier, depending on the operations you need to perform. I tried to mimic what some of your data might look like:</p>
<pre><code>In [1]:
...: pd.set_option('display.max_rows', 10)
...: pd.set_option('display.max_columns', 50)
...:
...:
...: df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
...: 'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
...: 'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
...: 'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
   ...:                 'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
   ...:                 'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
...: 'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
...:
...:
...: import itertools as it
...:
...: catcols = 'StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'
...:
...: headers = list(catcols) + [chr(c + 65) for c in range(10)]
...:
...: df = pd.DataFrame(data=np.random.rand(100, len(headers)), columns=headers)
...:
...: df.StoreLanguage = list(it.islice((it.cycle(["en", "fr"])), 100))
...:
...: df.Territory =list(it.islice(it.cycle(["us", "fr", "po", "nz", "au"]), 100) )
...:
...: df.TitleInternalAlias =list(it.islice(it.cycle(['a', 'b', 'c']), 100) )
...:
...: df.LocalizationType =list(it.islice(it.cycle(['d', 'g']), 100) )
...:
...: df.LicenseType =list(it.islice(it.cycle(["free", "com", "edu", "home"]), 100) )
...:
...: df.FormatProfile =list(it.islice(it.cycle(["g", "q"]), 100) )
...:
</code></pre>
<p>Here's the trick:</p>
<pre><code> ...: gb = df.groupby(catcols, as_index=False)
...:
...: reindexed = (df.assign(group_size = gb['A'].transform(lambda x: x.shape[0]))
...: .set_index("group_size")
...: )
...:
In [2]: reindexed.head()
Out[2]:
StoreLanguage Territory TitleInternalAlias LocalizationType \
group_size
2.0 en us a d
2.0 fr fr b g
2.0 en po c d
2.0 fr nz a g
2.0 en au b d
LicenseType FormatProfile A B C D \
group_size
2.0 free g 0.312705 0.346577 0.910688 0.317494
2.0 com q 0.575515 0.627054 0.025820 0.943633
2.0 edu g 0.489421 0.518020 0.988816 0.833306
2.0 home q 0.146965 0.823234 0.155927 0.865554
2.0 free g 0.327784 0.107795 0.678729 0.178454
E F G H I J
group_size
2.0 0.032420 0.232436 0.279712 0.167969 0.847725 0.777870
2.0 0.833150 0.261634 0.832250 0.511341 0.865027 0.850981
2.0 0.924992 0.129079 0.419342 0.603113 0.705015 0.683255
2.0 0.560832 0.434411 0.260553 0.208577 0.259383 0.997590
2.0 0.431881 0.729873 0.606323 0.806250 0.000556 0.793380
In [3]: reindexed.loc[2, "FormatProfile"].head()
Out[3]:
group_size
2.0 g
2.0 q
2.0 g
2.0 q
2.0 g
Name: FormatProfile, dtype: object
</code></pre>
<p>You can drop duplicates here...</p>
<pre><code>In [4]: reindexed.loc[2, "FormatProfile"].drop_duplicates()
Out[4]:
group_size
2.0 g
2.0 q
Name: FormatProfile, dtype: object
</code></pre>
<p>And recombine the slices as you see fit. </p>
| 2 |
2016-09-10T15:22:15Z
|
[
"python",
"pandas"
] |
Pandas : Making Decision on groupby size()
| 39,420,183 |
<p>I am trying to do a 'Change Data Capture' using two spreadsheets.
I have grouped my resulting dataframe and am stuck on a strange problem.
Requirement:</p>
<p>Case 1) size of a group == 2, do certain tasks</p>
<p>Case 2) size of a group == 1 , do certain tasks</p>
<p>Case 3) size_of_a_group > 2, do certain tasks</p>
<p>The problem is that no matter what I try, I cannot break the result of the groupby down by group size and then loop through each group.</p>
<p>I would like to do something like :</p>
<pre><code>if(group_by_1.filter(lambda x : len(x) ==2):
for grp,rows in sub(??)group:
for j in range(len(rows)-1):
#check rows[j,'column1'] != rows[j+1,'column1']:
do something
</code></pre>
<p>here is my code snippet. Any help is much appreciated.</p>
<pre><code>import pandas as pd
import numpy as np
pd.set_option('display.height', 1000)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
print("reading wolverine xlxs")
# defining metadata
df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias',
'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription',
'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description',
'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID',
'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV',
'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID',
'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage']
df_w01 = pd.read_excel("wolverine_1.xlsx", names = df_header)
df_w02 = pd.read_excel("wolverine_2.xlsx", names = df_header)
df_w01['version'] = 'OLD'
df_w02['version'] = 'NEW'
#print(df_w01)
df_m_d = pd.concat([df_w01, df_w02], ignore_index = True).reset_index()
#print(df_m_d)
first_pass_get_duplicates = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep='first')] # This datframe has records which are DUPES on NEW and OLD
#print(first_pass_get_duplicates)
first_pass_drop_duplicate = df_m_d.drop_duplicates(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType',
'LicenseRightsDescription','FormatProfile','Start','End','PriceType','PriceValue','ContentID','ProductID',
'AltID','ReleaseHistoryPhysicalHV','RatingSystem','RatingValue','CaptionIncluded'], keep=False) # This datframe has records which are unique on desired values evn for first time
#print(first_pass_drop_duplicate)
group_by_1 = first_pass_drop_duplicate.groupby(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'],as_index=False)
#Best Case group_by has 2 elements on big key and at least one row is 'new'
#print(group_by_1.grouper.group_info[0])
#for i,rows in group_by_1:
#if(.transform(lambda x : len(x)==2)):
#print(group_by_1.grouper.group_info[0])
#print(group_by_1.describe())
'''for i,rows in group_by_1:
temp_rows = rows.reset_index()
temp_rows.reindex(index=range(0,len(rows)))
print("group has: ", len(temp_rows))
for j in range(len(rows)-1):
print(j)
print("this iteration: ", temp_rows.loc[j,'Start'])
print("next iteration: ", temp_rows.loc[j+1,'Start'])
if(temp_rows.loc[j+1,'Start'] == temp_rows.loc[j,'Start']):
print("Match")
else:
print("no_match")
print(temp_rows.loc[j,'Start'])
print("++++-----++++")'''
</code></pre>
<p>Any Help is much appreciated.</p>
| 3 |
2016-09-09T21:48:42Z
| 39,434,076 |
<p>Use <code>groupby</code> with a <code>transform</code> of <code>df</code> using <code>np.size</code>.</p>
<p>Consider the dataframe <code>df</code></p>
<pre><code>df = pd.DataFrame([
[1, 2, 3],
[1, 2, 3],
[2, 3, 4],
[2, 3, 4],
[2, 3, 4],
[3, 4, 5],
], columns=list('abc'))
</code></pre>
<p>and the function <code>my_function</code></p>
<pre><code>def my_function(df):
if df.name == 1:
return 'blue'
elif df.name == 2:
return 'red'
else:
return 'green'
</code></pre>
<p>The thing to group by is <code>grouper</code></p>
<pre><code>grouper = df.groupby('a').b.transform(np.size)
grouper
0 2
1 2
2 3
3 3
4 3
5 1
Name: b, dtype: int64
</code></pre>
<hr>
<pre><code>df.groupby(grouper).apply(my_function)
b
1 blue
2 red
3 green
dtype: object
</code></pre>
<hr>
<p>You should be able to piece this together to get what you want.</p>
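<p>Pieced together for the question's three cases, a rough sketch (the <code>print</code> calls stand in for the real per-case logic):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [1, 2, 3], [2, 3, 4],
                   [2, 3, 4], [2, 3, 4], [3, 4, 5]],
                  columns=list('abc'))

# per-row size of the group the row belongs to
grouper = df.groupby('a').b.transform(np.size)

# dispatch on group size, mirroring the question's three cases
for size, rows in df.groupby(grouper):
    if size == 1:
        print('singleton group:\n', rows)
    elif size == 2:
        print('pair group:\n', rows)   # e.g. compare rows.iloc[0] with rows.iloc[1]
    else:
        print('group larger than 2:\n', rows)
```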
| 3 |
2016-09-11T08:06:45Z
|
[
"python",
"pandas"
] |
Python - combine regex patterns
| 39,420,194 |
<p>I have a large text and the aim is to select all 10-character strings for which the first character is a letter and the last character is a digit. </p>
<p>I am a python rookie and what I managed to achieve is to find all 10-character strings:</p>
<pre><code>ten_char = re.findall(r"\D(\w{10})\D", pdfdoc)
</code></pre>
<p>Question is how can I put together my other conditions: apart from a 10-character string, I am looking for one where the first character is a letter and the last character is a digit. </p>
<p>Suggestions appreciated!</p>
| 3 |
2016-09-09T21:49:35Z
| 39,420,221 |
<p><code>([a-z].{8}[0-9])</code></p>
<p>This will match 1 alphabetical character, 8 other characters, and finally 1 digit.</p>
<p><strong>JS Demo</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>var re = /([a-z].{8}[0-9])/gi;
var str = 'Aasdf23423423423423423b423423423423423';
var m;
while ((m = re.exec(str)) !== null) {
if (m.index === re.lastIndex) {
re.lastIndex++;
}
console.log(m[0]);
}</code></pre>
</div>
</div>
</p>
<p><a href="https://regex101.com/r/gI8jZ4/1" rel="nofollow">https://regex101.com/r/gI8jZ4/1</a></p>
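<p>The same pattern works in Python, which is what the question uses; a quick equivalent of the JS demo with <code>re.findall</code>:</p>

```python
import re

text = 'Aasdf23423423423423423b423423423423423'
# one letter, eight of anything, then one digit (case-insensitive)
matches = re.findall(r'[a-z].{8}[0-9]', text, flags=re.IGNORECASE)
print(matches)  # ['Aasdf23423', 'b423423423']
```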
| 2 |
2016-09-09T21:53:34Z
|
[
"python",
"regex"
] |
Python - combine regex patterns
| 39,420,194 |
<p>I have a large text and the aim is to select all 10-character strings for which the first character is a letter and the last character is a digit. </p>
<p>I am a python rookie and what I managed to achieve is to find all 10-character strings:</p>
<pre><code>ten_char = re.findall(r"\D(\w{10})\D", pdfdoc)
</code></pre>
<p>Question is how can I put together my other conditions: apart from a 10-character string, I am looking for one where the first character is a letter and the last character is a digit. </p>
<p>Suggestions appreciated!</p>
| 3 |
2016-09-09T21:49:35Z
| 39,420,276 |
<p>I wouldn't use regex for this. Regular string manipulation is clearer in my opinion (though I haven't tested the following code).</p>
<pre><code>def get_useful_words(filename):
with open(filename, 'r') as file:
for line in file:
for word in line.split():
if len(word) == 10 and word[0].isalpha() and word[-1].isdigit():
yield word
for useful_word in get_useful_words('tmp.txt'):
print(useful_word)
</code></pre>
| 0 |
2016-09-09T21:58:46Z
|
[
"python",
"regex"
] |
Python - combine regex patterns
| 39,420,194 |
<p>I have a large text and the aim is to select all 10-character strings for which the first character is a letter and the last character is a digit. </p>
<p>I am a python rookie and what I managed to achieve is to find all 10-character strings:</p>
<pre><code>ten_char = re.findall(r"\D(\w{10})\D", pdfdoc)
</code></pre>
<p>Question is how can I put together my other conditions: apart from a 10-character string, I am looking for one where the first character is a letter and the last character is a digit. </p>
<p>Suggestions appreciated!</p>
| 3 |
2016-09-09T21:49:35Z
| 39,420,429 |
<p>If I understand it, do:</p>
<pre><code>r'\b([a-zA-Z]\S{8}\d)\b'
</code></pre>
<p><a href="https://regex101.com/r/dM7lI7/2" rel="nofollow">Demo</a></p>
<p>Python demo:</p>
<pre><code>>>> import re
>>> txt="""\
... Should match:
... a123456789 aA34567s89 zzzzzzzer9
...
... Not match:
... 1123456789 aA34567s8a zzzzzzer9 zzzxzzzze99"""
>>> re.findall(r'\b([a-zA-Z]\S{8}\d)\b', txt)
['a123456789', 'aA34567s89', 'zzzzzzzer9']
</code></pre>
| 1 |
2016-09-09T22:14:35Z
|
[
"python",
"regex"
] |
Python - combine regex patterns
| 39,420,194 |
<p>I have a large text and the aim is to select all 10-character strings for which the first character is a letter and the last character is a digit. </p>
<p>I am a python rookie and what I managed to achieve is to find all 10-character strings:</p>
<pre><code>ten_char = re.findall(r"\D(\w{10})\D", pdfdoc)
</code></pre>
<p>Question is how can I put together my other conditions: apart from a 10-character string, I am looking for one where the first character is a letter and the last character is a digit. </p>
<p>Suggestions appreciated!</p>
| 3 |
2016-09-09T21:49:35Z
| 39,420,495 |
<p>thank you very much for a great discussion and interesting suggestions. Very first post on stack overflow, but wow...what a community you are! </p>
<p>In fact, using: </p>
<pre><code>r'\b([a-zA-Z]\S{8}\d)'
</code></pre>
<p>solved my problem very nicely. Really appreciated all your comments.</p>
| 0 |
2016-09-09T22:22:32Z
|
[
"python",
"regex"
] |
Python openpyxl loop through excel files in folder
| 39,420,317 |
<p>I have working code, but it takes forever to loop through about 35 .xlsx files, read values in column J (including comparing cell values to a dictionary) and then do some comparisons. </p>
<p>Basically, it's an email notification system that a) finds a person's name in a cell somewhere in column J, then examines its offset date in column A. If the date in column A is one day in the future (tomorrow) it sends that person a reminder email.</p>
<p>Wondering if anyone would be willing to provide some feedback! I have a sense that the multiple fors and ifs are slowing it down, but I'm not experienced enough to know how to improve it.</p>
<p>Thanks for any input! Sometimes even a little hint is enough for me to work out a solution on my own.</p>
<pre><code>try:
for i in os.listdir(os.chdir(thisdir)):
if i.endswith(".xlsx"):
workbook = load_workbook(i, data_only=True)
try:
ws = workbook[wsvar]
cell_range = ws['j3':'j110']
for row in cell_range: # This is iterating through rows 1-7
for cell in row: # This iterates through the columns(cells) in that row
if cell.value:
if cell.offset(row=0, column =-9).value.date() == (datetime.now().date() + timedelta(days=1)):
for name, email in sublist.items():
#send the emails
if cell.value == name:
email = sublist[cell.value]
datconv = str(cell.offset(row=0, column=-9).value.date().strftime("%m/%d/%Y"))
program = cell.offset(row=0, column=-7).value
#if there are hours in the "hours worked column, use those"
if cell.offset(row=0, column=-5).value:
hours = cell.offset(row=0, column=-5).value
                                #else, pick up the scheduled hours
else:
hours = cell.offset(row=0, column=-6).value
#SMTP code for email goes here, but it doesn't seem to be the culprit
</code></pre>
| 0 |
2016-09-09T22:02:42Z
| 39,421,912 |
<p>I don't know if this can help, but it worked for me in a similar situation: first I read all the files I had (3,600) and appended each workbook to a list, then I looped over that in-memory list. It ran much faster.</p>
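<p>A rough sketch of that idea with openpyxl (the folder path is hypothetical; the point is that each workbook is loaded from disk exactly once and every later pass runs over the in-memory list):</p>

```python
import os
from openpyxl import load_workbook

folder = "."  # wherever the .xlsx files live

# pass 1: load every workbook once, up front
workbooks = []
for name in os.listdir(folder):
    if name.endswith(".xlsx"):
        workbooks.append(load_workbook(os.path.join(folder, name), data_only=True))

# later passes: iterate the in-memory list, no re-reading from disk
for wb in workbooks:
    pass  # per-workbook processing goes here
```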
<p>Good luck</p>
| 0 |
2016-09-10T02:23:25Z
|
[
"python",
"excel",
"openpyxl",
"smtplib"
] |
Separate comma-separated values within individual cells of Pandas Series using regex
| 39,420,373 |
<p>I have a csv file from a database I've converted into a Pandas DataFrame that I'm trying to clean up. One of the issues is that multiple values have been input into single cells that need to be split up. The complicating factor is that there are string comments (also with commas) that need to be kept intact. The problem is illustrated in the example below, in Series form.</p>
<p>What I have:</p>
<pre><code>Index | values
0 | 2.54,3.563
1 | bad design, right?
</code></pre>
<p>What I want:</p>
<pre><code>Index | level_0 | values
0 | 0 | 2.54
1 | 0 | 3.563
2 | 1 | bad design, right?
</code></pre>
<p>As you can see, there are commas separating the values I want to split, with no following space after the comma, while the commas in string comments all have spaces after them. Seems like an easy thing to apply regex to split up. My solution below, using a strategy taken from another StackOverflow solution, is to use Series.str.split to separate the values into separate columns, then stack the columns. That strategy works great. However, in this case, the regex is apparently not identifying the split. Here's my code:</p>
<pre><code>import pandas as pd
# Example Series:
data = pd.Series(("2.54,3.56", "3.24,5.864", "bad design, right?"), name = "values")
# Split cells with multiple entries into separate rows
split_data = data.str.split('[,]\b').apply(pd.Series)
# Stack the results and pull out the index into a column (which is sample number in my case)
split_data = split_data.stack().reset_index(0)
split_data = split_data.reset_index(drop=True)
</code></pre>
<p>I'm new to regular expressions, but from the guides I've looked at and from using a couple regex sandboxes specific to Python, it seems like the regex [,]\b should split the values, but not the comments. However, it does not split anything with this regex.</p>
<p>Here's the result of the debugger, which says this should work:
<a href="https://www.debuggex.com/r/UwTVnYS7GRSkAKJL" rel="nofollow">Debuggex Demo</a></p>
<p>Am I missing something easy here? Any better ideas on making this work? I'm using Python 3.5, if that makes a difference. Thanks.</p>
| 0 |
2016-09-09T22:09:31Z
| 39,420,558 |
<p>I would be inclined to use a lookahead; how you do so depends on your expected data. </p>
<p>This is a negative lookahead. It says "a comma that is not followed by whitespace", and it would be preferred if you are <em>sure</em> that all comments with commas have whitespace after the comma, and you want to treat "red,green" as something to split. </p>
<pre><code>data.str.split('[,](?!\s)').apply(pd.Series)
</code></pre>
<p>Another option is a positive lookahead for something that looks like a valid value; your example was numbers, so for instance this would split only on a comma that is followed by a number:</p>
<pre><code>data.str.split('[,](?=\d)').apply(pd.Series)
</code></pre>
<p>Regular expressions are very powerful, but honestly, I am not sure that this solution will be great for you if this is a long-term problem. Getting most cases right as a one-time migration should be fine, but longer term I would consider trying to solve the problem before it gets here. Anyway, here's Debuggex's python regex cheat sheet, in case it is useful to you: <a href="https://www.debuggex.com/cheatsheet/regex/python" rel="nofollow">https://www.debuggex.com/cheatsheet/regex/python</a></p>
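<p>Plugged into the stack-and-reset approach from the question, the negative-lookahead split looks like this:</p>

```python
import pandas as pd

data = pd.Series(("2.54,3.56", "3.24,5.864", "bad design, right?"), name="values")

# split only on commas NOT followed by whitespace, then stack into long form
split_data = data.str.split(r',(?!\s)').apply(pd.Series)
split_data = split_data.stack().reset_index(0)
split_data = split_data.reset_index(drop=True)
print(split_data)
```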
| 1 |
2016-09-09T22:32:09Z
|
[
"python",
"regex",
"pandas",
"split"
] |
How to run python-socketio (eventlet WSGI server) over HTTPS
| 39,420,376 |
<p>I want to run the following eventlet WSGI server over HTTPS. I am trying to connect to the python server from JavaScript on my HTTPS enabled web-server. </p>
<p><strong>I would like the answer to describe how I would change this code below to work with HTTPS.</strong></p>
<pre><code>import socketio
import eventlet
import eventlet.wsgi
from flask import Flask, render_template
sio = socketio.Server()
app = Flask(__name__)
@app.route('/')
def index():
"""Serve the client-side application."""
return render_template('index.html')
@sio.on('connect', namespace='/chat')
def connect(sid, environ):
print("connect ", sid)
@sio.on('chat message', namespace='/chat')
def message(sid, data):
print("message ", data)
sio.emit('reply', room=sid)
@sio.on('disconnect', namespace='/chat')
def disconnect(sid):
print('disconnect ', sid)
if __name__ == '__main__':
# wrap Flask application with engineio's middleware
app = socketio.Middleware(sio, app)
# deploy as an eventlet WSGI server
eventlet.wsgi.server(eventlet.listen(('', 8000)), app)
</code></pre>
<p>This code was take from <a href="https://pypi.python.org/pypi/python-socketio" rel="nofollow">here</a></p>
| 0 |
2016-09-09T22:09:53Z
| 39,420,484 |
<p>To run an Eventlet WSGI server over HTTPS, all that's needed is to pass an SSL-wrapped socket to the <code>server()</code> method, like so:</p>
<pre><code>eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', 8000)),
                                       certfile='cert.crt',
                                       keyfile='private.key',
                                       server_side=True),
                     app)
</code></pre>
| 0 |
2016-09-09T22:21:41Z
|
[
"python",
"socket.io"
] |
ValueError: Domain error in arguments scipy rv_continuous
| 39,420,430 |
<p>I was trying to sample random variables subject to a given probability density function (pdf) with scipy.stats.rv_continuous:</p>
<pre><code>class Distribution(stats.rv_continuous):
def _pdf(self,x, _a, _c):
return first_hitting_time(x, _a, _c)
</code></pre>
<p>where the function <em>first_hitting_time</em> is</p>
<pre><code>#pdf of first hitting time of W_t + c*t on a.
def first_hitting_time(_t, _a, _c=0.0):
return _a/_t*np.exp(-0.5/_t*(_a-_c*_t)**2)/np.sqrt(2.0*np.pi*_t)
</code></pre>
<p>then I continue with</p>
<pre><code>myrv= Distribution(name='hittingtime', a=0.002,b=30.0)
data3= myrv.rvs(size=10000, _a=1.0, _c=0.0)
</code></pre>
<p>and interpreter starts to complain-</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-246-71f67047462b>", line 1, in <module>
data3= myrv.rvs(size=10000, _a=1.0, _c=0.0)
File "C:\Users\ME\AppData\Local\Continuum\Anaconda2\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 856, in rvs
raise ValueError("Domain error in arguments.")
ValueError: Domain error in arguments.
</code></pre>
<p>it seems if I set <code>_c</code> to be some number larger than 0.0, it works fine, but not for <code>_c</code> less than 0. </p>
<p>I am a bit confused about this. Any help would be appreciated. </p>
| 0 |
2016-09-09T22:14:37Z
| 39,423,475 |
<p>From <a href="https://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.stats.rv_continuous.html" rel="nofollow">the documentation</a>:</p>
<blockquote>
<h3>Subclassing</h3>
<p>New random variables can be defined by subclassing the rv_continuous
class and re-defining at least the <code>_pdf</code> or the <code>_cdf</code> method (normalized
to location 0 and scale 1).</p>
<p><strong>If positive argument checking is not correct</strong> for your RV then you will
also need to re-define the <code>_argcheck</code> method.</p>
</blockquote>
<p>It's not clear to me from your function what <code>_a</code> and <code>_c</code> represent, but it looks like you want to allow them to be negative.</p>
<p>See the default implementation <a href="https://github.com/scipy/scipy/blob/master/scipy/stats/_distn_infrastructure.py#L859-L869" rel="nofollow">in the source of <code>_distn_infrastructure</code></a></p>
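<p>A minimal sketch of that override for the distribution in the question, assuming <code>_a</code> (the level) must stay positive while the drift <code>_c</code> may be any real number; adjust the condition to whatever domain is actually valid for you:</p>

```python
import numpy as np
from scipy import stats

# pdf of the first hitting time of W_t + c*t on a (from the question)
def first_hitting_time(_t, _a, _c=0.0):
    return _a/_t*np.exp(-0.5/_t*(_a-_c*_t)**2)/np.sqrt(2.0*np.pi*_t)

class Distribution(stats.rv_continuous):
    def _pdf(self, x, _a, _c):
        return first_hitting_time(x, _a, _c)

    def _argcheck(self, _a, _c):
        # the default _argcheck requires every shape parameter to be > 0,
        # which is what rejects _c <= 0; allow any finite drift instead
        return (_a > 0) & np.isfinite(_c)

myrv = Distribution(name='hittingtime', a=0.002, b=30.0)
print(myrv.pdf(0.5, _a=1.0, _c=-0.5))  # evaluates with a non-positive drift now
```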
| 0 |
2016-09-10T06:55:57Z
|
[
"python",
"numpy",
"scipy"
] |
Does mpi4py have a functioning mprobe (improbe)?
| 39,420,448 |
<p>You can see <a href="https://github.com/erdc-cm/mpi4py/blob/master/src/MPI/Comm.pyx#L1179" rel="nofollow">here</a> that <code>mpi4py</code> appears to have defined <code>mprobe</code> and <code>improbe</code>, however, there appears to be no <code>mrecv</code>, <code>Mrecv</code> or any other variation similar to it. What am I supposed to use to receive the message?</p>
| 1 |
2016-09-09T22:16:11Z
| 39,425,176 |
<p>The matched receives are available as methods <code>recv</code> and <code>irecv</code> of the <code>Message</code> object returned by the message probes - see <a href="https://github.com/erdc-cm/mpi4py/blob/master/src/MPI/Message.pyx#L120" rel="nofollow">here</a>. This actually makes sense since both <code>MPI_Mrecv</code> and <code>MPI_Imrecv</code> take a message handle as argument and not a communicator one, therefore they should not share the same class with the communicator-bound calls.</p>
| 1 |
2016-09-10T10:35:04Z
|
[
"python",
"mpi",
"mpi4py"
] |
How to get predictive attributes of each target in `Random Forest`?
| 39,420,453 |
<p>I've been messing around with <code>Random Forest</code> models lately and they are really useful w/ the <code>feature_importance_</code> attribute! </p>
<p><strong>It would be useful to know which variables are more predictive of particular targets.</strong> </p>
<p>For example, what if the <code>1st and 2nd attributes</code> were more predictive of distinguishing <code>target 0</code> but the <code>3rd and 4th attributes</code> were more predictive of <code>target 1</code>? </p>
<p><strong>Is there a way to get the <code>feature_importance_</code> array for each target separately?</strong> With <code>sklearn</code>, <code>scipy</code>, <code>pandas</code>, or <code>numpy</code> preferably. </p>
<pre><code># Iris dataset
DF_iris = pd.DataFrame(load_iris().data,
index = ["iris_%d" % i for i in range(load_iris().data.shape[0])],
columns = load_iris().feature_names)
Se_iris = pd.Series(load_iris().target,
index = ["iris_%d" % i for i in range(load_iris().data.shape[0])],
name = "Species")
# Import modules
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
# Split Data
X_tr, X_te, y_tr, y_te = train_test_split(DF_iris, Se_iris, test_size=0.3, random_state=0)
# Create model
Mod_rf = RandomForestClassifier(random_state=0)
Mod_rf.fit(X_tr,y_tr)
# Variable Importance
Mod_rf.feature_importances_
# array([ 0.14334485, 0.0264803 , 0.40058315, 0.42959169])
# Target groups
Se_iris.unique()
# array([0, 1, 2])
</code></pre>
| -1 |
2016-09-09T22:16:45Z
| 39,425,911 |
<p>This is not really how RF works. Since there is no simple "feature voting" (as there is in linear models), it is really hard to say what "feature X is more predictive of target Y" even means. What the feature_importance of RF captures is "how likely this feature is, in general, to be used in the decision process". To address your question - "how likely this feature is to be used in a decision process <strong>leading to label Y</strong>" - you would have to run pretty much the same procedure, but remove all subtrees which do not contain label Y in a leaf; this way you remove the parts of the decision process which do not address the problem "is it Y or not Y" but rather try to answer which "not Y" it is. However, in practice, due to the very stochastic nature of RF (depth cutting etc.), this might barely reduce anything. The bad news is that I have never seen this implemented in any standard RF library, so you would have to do it on your own, along the following lines:</p>
<pre><code>for i = 1 to K (K is number of distinct labels)
tmp_RF = deepcopy(RF)
for tree in tmp_RF:
tree = remove_all_subtrees_that_do_not_contain_given_label(tree, i)
for x in X (X is your dataset)
features_importance[i] += how_many_times_each_feature_is_used(tree, x) / |X|
features_importance[i] /= |tmp_RF|
return features_importance
</code></pre>
<p>in particular you could use existing feature_importance codes, simply by doing</p>
<pre><code>for i = 1 to K (K is number of distinct labels)
tmp_RF = deepcopy(RF)
for tree in tmp_RF:
tree = remove_all_subtrees_that_do_not_contain_given_label(tree, i)
features_importance[i] = run_regular_feature_importance(tmp_RF)
return features_importance
</code></pre>
| 1 |
2016-09-10T12:06:35Z
|
[
"python",
"machine-learning",
"classification",
"feature-extraction"
] |
Powering x until reach y in while loop
| 39,420,497 |
<p>Hi I want to make an app that will be raise X to a power until it reaches Y.</p>
<p>I have for now something like this</p>
<pre><code>x = 10
y = 1000000
while x <= y:
x = x**x
print(x)
</code></pre>
<p>I don't want it in function.</p>
<p>I know that probably this is simple, but I just started learning Python :)</p>
| -2 |
2016-09-09T22:22:45Z
| 39,420,533 |
<p>This might be what you are looking for. In Python you want to use the augmented assignment operators such as +=, -=, *=, /= when updating the same variable in place.</p>
<pre><code>counter = 10
while counter <= 1000000:
counter *= counter
print(counter)
</code></pre>
| 0 |
2016-09-09T22:29:02Z
|
[
"python"
] |
Powering x until reach y in while loop
| 39,420,497 |
<p>Hi I want to make an app that will be raise X to a power until it reaches Y.</p>
<p>I have for now something like this</p>
<pre><code>x = 10
y = 1000000
while x <= y:
x = x**x
print(x)
</code></pre>
<p>I don't want it in function.</p>
<p>I know that probably this is simple, but I just started learning Python :)</p>
| -2 |
2016-09-09T22:22:45Z
| 39,420,765 |
<p>10<sup>10<sup>10<sup>…</sup></sup></sup> (x) will never be equal to 10<sup>6</sup> (y) because 10<sup>10</sup> is four orders of magnitude larger. Your program will interpret x = 10 as less than 10<sup>6</sup>, execute x<sup>x</sup> (10<sup>10</sup>), interpret this value as greater than 10<sup>6</sup>, exit the loop, and print x (now 10<sup>10</sup>).</p>
<p>I don't think this is what you're trying to do; please consider the comments other users have left. I have a hunch you're looking for x<sup>n</sup>=y (10 * 10 * 10 …), for which you could simply use logarithms.</p>
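<p>A minimal sketch of that repeated-multiplication reading of the question (multiply x by itself until it reaches y, counting the exponent); the logarithm approach mentioned above would give the same exponent directly via <code>math.log(y, x)</code>:</p>

```python
x, y = 10, 1000000

# Multiply x by itself until reaching y, counting n such that x**n == y
value, n = x, 1
while value < y:
    value *= x
    n += 1

print(n, value)  # 6 1000000
```
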
| 0 |
2016-09-09T22:56:26Z
|
[
"python"
] |
Getting cx_Oracle with correct case using conda
| 39,420,508 |
<p>I'm new to Python. I'm using the Anaconda 4.1.1 (Python 3.5.2) distribution on Ubuntu. I started working on a project that uses <a href="http://cx-oracle.sourceforge.net/" rel="nofollow"><code>cx_Oracle</code></a>. I could of course install <code>cx_Oracle</code> using <code>pip</code>.</p>
<pre><code>pip install cx_Oracle
</code></pre>
<p>But everyone seems to be saying that Anaconda's <code>conda</code> is a much better package manager, virtual environment manager, and dependency manager than <code>pip</code> and <code>virtualenv</code> put together. I'd prefer to just use <code>conda</code> for managing everything.</p>
<p>So I made a <code>requirements.txt</code> file (some of my teammates will still be using <code>pip</code> and <code>virtualenv</code>) with the following line. (I want to support Python 3.5, so I need <code>cx_Oracle</code> 5.2.1, the current latest.)</p>
<pre><code>cx_Oracle==5.2.1
</code></pre>
<p>Then I tell <code>conda</code> to create a virtual environment <code>foobar</code>:</p>
<pre><code>conda create -n foobar --file requirements.txt
</code></pre>
<p>This fails; unfortunately <code>cx_Oracle</code> 5.2.1 is not yet in the Continuum conda repository (even though half the year has passed since it was released). However there are several channels (e.g. <code>mgckind</code>) purporting to supply version 5.2.1. There's just one problem: all the channels are supplying <code>cx_oracle</code> and not <code>cx_Oracle</code> (note the case difference). So this won't work:</p>
<pre><code>conda create -n foobar -c mgckind --file requirements.txt
</code></pre>
<p>Even if I specify a channel as in the example above, and even though <code>requirements.txt</code> clearly says <code>cx_Oracle</code>, <code>conda</code> brings down <code>cx_oracle</code> with a lowercase <code>o</code>. Because Python module imports are apparently case sensitive, all my tests fail because they can't find <code>cx_Oracle</code> with an uppercase <code>O</code>.</p>
<p>Am I missing something simple here because I'm new to Python? Or is Anaconda really both behind the times and incompatible with <code>cx_Oracle</code>, meaning I'll have to use <code>pip install</code> and bring it down from PyPI?</p>
<p>If there is really a case difference, is this situation common on Conda vs PyPI? Is it a Conda policy to name things only in lowercase? How do others deal with the discrepancy?</p>
| 2 |
2016-09-09T22:24:09Z
| 39,501,925 |
<p>The conda package name does not influence how your <code>import</code> the code in python. Looking at the linux-64 package <a href="https://anaconda.org/anaconda/cx_oracle/files" rel="nofollow">here</a> for example, while the package name is <code>cx_oracle</code> to conform to conda ecosystem standards, in python you <em>must</em> import that package with <code>import cx_Oracle</code>. There are many examples of python packages on PyPI where the package name is different than how the package is imported in python code. Just one of those python quirks I guess.</p>
| 1 |
2016-09-15T01:28:41Z
|
[
"python",
"pip",
"anaconda",
"cx-oracle",
"conda"
] |
Getting cx_Oracle with correct case using conda
| 39,420,508 |
<p>I'm new to Python. I'm using the Anaconda 4.1.1 (Python 3.5.2) distribution on Ubuntu. I started working on a project that uses <a href="http://cx-oracle.sourceforge.net/" rel="nofollow"><code>cx_Oracle</code></a>. I could of course install <code>cx_Oracle</code> using <code>pip</code>.</p>
<pre><code>pip install cx_Oracle
</code></pre>
<p>But everyone seems to be saying that Anaconda's <code>conda</code> is a much better package manager, virtual environment manager, and dependency manager than <code>pip</code> and <code>virtualenv</code> put together. I'd prefer to just use <code>conda</code> for managing everything.</p>
<p>So I made a <code>requirements.txt</code> file (some of my teammates will still be using <code>pip</code> and <code>virtualenv</code>) with the following line. (I want to support Python 3.5, so I need <code>cx_Oracle</code> 5.2.1, the current latest.)</p>
<pre><code>cx_Oracle==5.2.1
</code></pre>
<p>Then I tell <code>conda</code> to create a virtual environment <code>foobar</code>:</p>
<pre><code>conda create -n foobar --file requirements.txt
</code></pre>
<p>This fails; unfortunately <code>cx_Oracle</code> 5.2.1 is not yet in the Continuum conda repository (even though half the year has passed since it was released). However there are several channels (e.g. <code>mgckind</code>) purporting to supply version 5.2.1. There's just one problem: all the channels are supplying <code>cx_oracle</code> and not <code>cx_Oracle</code> (note the case difference). So this won't work:</p>
<pre><code>conda create -n foobar -c mgckind --file requirements.txt
</code></pre>
<p>Even if I specify a channel as in the example above, and even though <code>requirements.txt</code> clearly says <code>cx_Oracle</code>, <code>conda</code> brings down <code>cx_oracle</code> with a lowercase <code>o</code>. Because Python module imports are apparently case sensitive, all my tests fail because they can't find <code>cx_Oracle</code> with an uppercase <code>O</code>.</p>
<p>Am I missing something simple here because I'm new to Python? Or is Anaconda really both behind the times and incompatible with <code>cx_Oracle</code>, meaning I'll have to use <code>pip install</code> and bring it down from PyPI?</p>
<p>If there is really a case difference, is this situation common on Conda vs PyPI? Is it a Conda policy to name things only in lowercase? How do others deal with the discrepancy?</p>
| 2 |
2016-09-09T22:24:09Z
| 39,518,360 |
<p>The problem had nothing to do with the lowercase package name on Continuum's conda repository. In fact I was missing something. I had created a new virtual environment as I mentioned in the question:</p>
<pre><code>conda create -n foobar --file requirements.txt
</code></pre>
<p>The requirements file contained <code>cx_Oracle==5.2.1</code>, which I also mentioned. But what I did <em>not</em> mention is that I then tested the program using <code>nosetests</code>, and the <code>requirements.txt</code> file did <em>not</em> include <code>nose</code>! This means that the unit tests were being run by the default Anaconda installation of <code>nose</code>, which had no knowledge of <code>cx_Oracle</code>, which was not installed into the main Anaconda installation. (The virtual environment, because of dependency issues, had pulled down Python 3.4, while the Anaconda installation was using Python 3.5.)</p>
<p>In any case, the issue was that I inadvertently was using <code>nose</code> from the default installation of Anaconda, not from my virtual environment. The <code>cx_oracle</code> installed into my virtual environment, as kalefranz pointed out, will requires <code>import cx_Oracle</code> regardless of the case of its package name.</p>
<p>As soon as I installed <code>nose</code> into my virtual environment, running <code>nosetests</code> worked because it picked up the <code>cx_oracle</code> installation.</p>
<p>In summary the following seems to be the package/module case situation in the Python world:</p>
<ul>
<li>Python module imports are case-sensitive.</li>
<li>Package names are case-insensitive.</li>
<li>The new best-practices approach is to use all lowercase in package names.</li>
<li>The PyPI repository retains case; the Continuum conda repository converts package names to lowercase.</li>
<li>Regardless of whether one pulls a package from PyPI or Continuum conda, and regardless of which package case is used to retrieve them, the module name and its corresponding import statement in the code should retain the official case used by the library.</li>
</ul>
| 0 |
2016-09-15T18:50:48Z
|
[
"python",
"pip",
"anaconda",
"cx-oracle",
"conda"
] |
how to parse key value pair request from url using python requests in flask
| 39,420,580 |
<p>I have spent about a week on this issue and although I have made considerable progress I am stuck at a key point.</p>
<p>I am writing a simple client-server program in Python that is supposed to accept key/value pairs from the command line, formulate them into a URL, and request the URL from the server. The problem appears to be either that the URL is not properly formatted or that the server is not parsing it correctly. (That is, the key-value pairs appear to be properly making it from the command line to the function that contains the request.)</p>
<pre><code>import sys
import requests
server_url = "http://0.0.0.0:5006"
def test(payload):
print('payload in function is ' + payload)
r = requests.get('http://0.0.0.0:5006/buy', params=payload)
print(r.url) #debug
print(r.encoding)
print(r.text)
print(r.headers)
if __name__ == '__main__':
payload = sys.argv[2]
print('payload from cli is ' + payload)
test(payload)
</code></pre>
<p>Server:</p>
<pre><code>import subprocess
from flask import Flask
from flask import request
import request
# Configure the app and wallet
app = Flask(__name__)
@app.route('/buy', methods=['GET', 'POST'])
def test(key1, key2):
key1 = str(request.args.get('key1'))
key2 = str(request.args.get('key2'))
print('keys are' + key1 + key2)
fortune = subprocess.check_output(['echo', 'key1'])
return fortune
# Initialize and run the server
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5006)
</code></pre>
<p>Client console output:</p>
<pre><code>payload from cli is {"key1": "foo", "key2": "bar"}
payload in function is {"key1": "foo", "key2": "bar"}
http://0.0.0.0:5006/buy?%7B%22key1%22:%20%22foo%22,%20%22key2%22:%20%22bar%22%7D
ISO-8859-1
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
{'Content-Type': 'text/html', 'Content-Length': '291', 'Date': 'Fri, 09 Sep 2016 22:16:44 GMT', 'Server': 'Werkzeug/0.11.10 Python/3.5.2'}
</code></pre>
<p>Server console output:</p>
<pre><code>* Running on http://0.0.0.0:5006/ (Press CTRL+C to quit)
[2016-09-09 18:30:33,445] ERROR in app: Exception on /buy [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
TypeError: test() missing 2 required positional arguments: 'key1' and 'key2'
127.0.0.1 - - [09/Sep/2016 18:30:33] "GET /buy?%7B%22key1%22:%20%22foo%22,%20%22key2%22:%20%22bar%22%7D HTTP/1.1" 500 -
</code></pre>
<p>I take this as showing that the subprocess call is for some reason not able to decipher key1 and key2 from the URL so is failing when it runs "fortune None None".</p>
| 0 |
2016-09-09T22:35:19Z
| 39,420,658 |
<p>The problem seems to be in your payload.</p>
<p>The payload needs to be a dictionary. You're giving it a string.</p>
<p><code>sys.argv[2]</code> will be a string, even if you format the text to look like a dictionary. So unless there's something missing from your client code snippet, payload isn't actually a dictionary like requests would expect.</p>
<p>I can in fact confirm this by looking at the URL being generated, which is:</p>
<pre><code>http://0.0.0.0:5006/buy?%7B%22key1%22:%20%22foo%22,%20%22key2%22:%20%22bar%22%7D
</code></pre>
<p>Had the payload been a dictionary and correctly encoded, it would've looked something like this:</p>
<pre><code>http://0.0.0.0:5006/buy?key1=foo&key2=bar
</code></pre>
<p>To see what type your payload really is, add <code>print(type(payload))</code> somewhere inside the <code>test</code> function.</p>
<p>Once you've converted your <code>payload</code> into a proper python dictionary (you'll have to parse your <code>sys.argv[2]</code>), then requests should work as expected.</p>
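<p>A minimal sketch of that parsing step, assuming the command-line argument is JSON text (as in the console output above): <code>json.loads</code> turns it into the dictionary that <code>requests</code> expects, and <code>urlencode</code> shows the query string that dictionary would produce:</p>

```python
import json
from urllib.parse import urlencode

raw = '{"key1": "foo", "key2": "bar"}'  # what sys.argv[2] would contain
payload = json.loads(raw)               # now a real dict, not a string
print(type(payload))                    # <class 'dict'>

qs = urlencode(payload)                 # how requests would encode the params
print(qs)  # key1=foo&key2=bar
```
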
| 2 |
2016-09-09T22:43:26Z
|
[
"python",
"flask",
"python-requests"
] |
how to parse key value pair request from url using python requests in flask
| 39,420,580 |
<p>I have spent about a week on this issue and although I have made considerable progress I am stuck at a key point.</p>
<p>I am writing a simple client-server program in Python that is supposed to accept key/value pairs from the command line, formulate them into a URL, and request the URL from the server. The problem appears to be either that the URL is not properly formatted or that the server is not parsing it correctly. (That is, the key-value pairs appear to be properly making it from the command line to the function that contains the request.)</p>
<pre><code>import sys
import requests
server_url = "http://0.0.0.0:5006"
def test(payload):
print('payload in function is ' + payload)
r = requests.get('http://0.0.0.0:5006/buy', params=payload)
print(r.url) #debug
print(r.encoding)
print(r.text)
print(r.headers)
if __name__ == '__main__':
payload = sys.argv[2]
print('payload from cli is ' + payload)
test(payload)
</code></pre>
<p>Server:</p>
<pre><code>import subprocess
from flask import Flask
from flask import request
import request
# Configure the app and wallet
app = Flask(__name__)
@app.route('/buy', methods=['GET', 'POST'])
def test(key1, key2):
key1 = str(request.args.get('key1'))
key2 = str(request.args.get('key2'))
print('keys are' + key1 + key2)
fortune = subprocess.check_output(['echo', 'key1'])
return fortune
# Initialize and run the server
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5006)
</code></pre>
<p>Client console output:</p>
<pre><code>payload from cli is {"key1": "foo", "key2": "bar"}
payload in function is {"key1": "foo", "key2": "bar"}
http://0.0.0.0:5006/buy?%7B%22key1%22:%20%22foo%22,%20%22key2%22:%20%22bar%22%7D
ISO-8859-1
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
{'Content-Type': 'text/html', 'Content-Length': '291', 'Date': 'Fri, 09 Sep 2016 22:16:44 GMT', 'Server': 'Werkzeug/0.11.10 Python/3.5.2'}
</code></pre>
<p>Server console output:</p>
<pre><code>* Running on http://0.0.0.0:5006/ (Press CTRL+C to quit)
[2016-09-09 18:30:33,445] ERROR in app: Exception on /buy [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
TypeError: test() missing 2 required positional arguments: 'key1' and 'key2'
127.0.0.1 - - [09/Sep/2016 18:30:33] "GET /buy?%7B%22key1%22:%20%22foo%22,%20%22key2%22:%20%22bar%22%7D HTTP/1.1" 500 -
</code></pre>
<p>I take this as showing that the subprocess call is for some reason not able to decipher key1 and key2 from the URL so is failing when it runs "fortune None None".</p>
| 0 |
2016-09-09T22:35:19Z
| 39,421,117 |
<p><code>test</code> is expecting values for <code>key1</code> and <code>key2</code>. Flask would provide those through your route. </p>
<pre><code>@app.route('/buy/<key1>/<key2>')
def test(key1, key2):
</code></pre>
<p>Visiting <code>/buy/value1/value2</code> would give values to the arguments. You want to pass values through the query string though. </p>
<p>You just need to remove them from the function's signature. </p>
<pre><code>@app.route('/buy', methods=['GET', 'POST'])
def test():
</code></pre>
| 1 |
2016-09-09T23:44:01Z
|
[
"python",
"flask",
"python-requests"
] |
Holding reference to out of scope object's method
| 39,420,599 |
<pre><code>class Foo:
def __init__(self, _label):
self.label = _label
def display(self):
print(str(self.label))
def get_display(self):
return self.display
display_test = Foo(1).get_display()
display_test()
def inner_scope():
global display_test
display_test = Foo(2).get_display()
inner_scope()
display_test()
</code></pre>
<p>In Python 2.7, the above block prints 1 followed by 2, which isn't terribly surprising. What I'm curious about, though, is what happens to the objects generated by <code>Foo(1)</code> and <code>Foo(2)</code> -- it appears as though they have not been garbage collected as of the time <code>display_test</code> gets invoked, but does that mean <code>display_test</code> has a reference to <code>Foo(2)</code>, preventing it from being garbage collected at all? Am I just getting lucky that the objects aren't garbage collected before I invoke <code>display_test</code>? Does this differ in <code>3.x</code>?</p>
| 0 |
2016-09-09T22:36:58Z
| 39,420,753 |
<p>Your question is a bit unclear. This sentence: "it appears as though they have not been garbage collected as of the time display_test gets invoked" suggests some confusion. In fact <code>display_test</code> gets invoked twice - once when it's bound to an instance method of Foo(1), and again when it's bound to an instance method of Foo(2). Until the call to <code>inner_scope</code>, display_test holds a reference to Foo(1) so it won't be GC'd. After the call to <code>inner_scope</code> there is no longer any reference to Foo(1) so it is ready for GC. But display_test now holds a reference to Foo(2) so <em>that</em> won't get GC'd. So "they" - the objects Foo(1) and Foo(2) - don't get GC'd together but one at a time, each object's GC depending independently on whether there is any live reference to it.</p>
<p>Don't worry about the Python garbage collector. It works beautifully.</p>
| 1 |
2016-09-09T22:55:13Z
|
[
"python",
"python-2.7",
"python-3.x",
"garbage-collection"
] |
Holding reference to out of scope object's method
| 39,420,599 |
<pre><code>class Foo:
def __init__(self, _label):
self.label = _label
def display(self):
print(str(self.label))
def get_display(self):
return self.display
display_test = Foo(1).get_display()
display_test()
def inner_scope():
global display_test
display_test = Foo(2).get_display()
inner_scope()
display_test()
</code></pre>
<p>In Python 2.7, the above block prints 1 followed by 2, which isn't terribly surprising. What I'm curious about, though, is what happens to the objects generated by <code>Foo(1)</code> and <code>Foo(2)</code> -- it appears as though they have not been garbage collected as of the time <code>display_test</code> gets invoked, but does that mean <code>display_test</code> has a reference to <code>Foo(2)</code>, preventing it from being garbage collected at all? Am I just getting lucky that the objects aren't garbage collected before I invoke <code>display_test</code>? Does this differ in <code>3.x</code>?</p>
| 0 |
2016-09-09T22:36:58Z
| 39,426,319 |
<p>In Python 2, <code>display_test</code> holds a reference to an <code>instancemethod</code> object, one of whose attributes is <code>im_self</code>, the instance the method is bound to.</p>
<p>In Python 3, <code>display_test</code> holds a reference to a <code>method</code> object, one of whose attributes is <code>__self__</code> (the Python 3 equivalent of <code>im_self</code>).</p>
| 0 |
2016-09-10T13:00:49Z
|
[
"python",
"python-2.7",
"python-3.x",
"garbage-collection"
] |
How you calculate the average rating per genre in python?
| 39,420,633 |
<p>I have this lens dataframe. It has columns to classify genres a movie belongs to. The genre categories are column names with binary values in the rows. If a movie belongs to a genre, it has a 1 under the appropriate column and 0 otherwise. I want to calculate the average rating per genre for each user in python pandas.</p>
<pre><code># pass in column names for each CSV
u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']
users = pd.read_csv('C:/Users/End-User/Desktop/ml-100k/u.user',
sep='|',names=u_cols, encoding='latin-1')
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv('C:/Users/End-User/Desktop/ml-100k/u.data',
sep='\t', names=r_cols, encoding='latin-1')
# Reading item file:
m_cols = ['movie_id', 'title' ,'release_date','video_release_date', 'imdb_url',
'unknown', 'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy',
'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror',
'Musical', 'Mystery', 'Romance', 'Sci-Fi','Thriller', 'War', 'Western']
movies = pd.read_csv('C:/Users/End-User/Desktop/ml-100k/u.item',
sep='|', names=m_cols, encoding='latin-1')
# create one merged DataFrame
movie_ratings = pd.merge(movies, ratings)
lens = pd.merge(movie_ratings, users)
# I have tried this but don't know how to get the average of the ratings for each user.
df = pd.pivot_table(lens, index = ['user_id'],
columns = ['unknown', 'Action', 'Adventure', 'Animation',
'Children\'s', 'Comedy', 'Crime', 'Documentary',
'Drama', 'Fantasy', 'Film-Noir', 'Horror',
'Musical', 'Mystery', 'Romance', 'Sci-Fi',
'Thriller', 'War', 'Western'],
values = ['rating'])
print df
</code></pre>
| -1 |
2016-09-09T22:40:29Z
| 39,421,427 |
<p>Consider reshaping your dataframe from wide to long to create a <em>genre</em> column, then running the result through <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table()</code></a> with its <code>aggfunc</code> argument set to numpy's mean:</p>
<pre><code>import pandas as pd
import numpy as np
#...same code...
lens = pd.merge(movie_ratings, users)
genrecols = ['unknown', 'Action', 'Adventure', 'Animation',
'Children\'s', 'Comedy', 'Crime', 'Documentary',
'Drama', 'Fantasy', 'Film-Noir', 'Horror',
'Musical', 'Mystery', 'Romance', 'Sci-Fi',
'Thriller', 'War', 'Western']
# RESHAPE DF BY MELTING (WIDE TO LONG), SELECTING ONLY NEEDED FIELDS
mdf = pd.melt(lens[['user_id', 'sex', 'rating'] + genrecols],
id_vars=['user_id', 'sex', 'rating'], var_name='genre')
# FILTER FOR VALUE = 1 AND THREE NEEDED COLUMNS
mdf = mdf[mdf['value']==1][['user_id', 'sex', 'rating', 'genre']]
# RUN PIVOTED AGGREGATION
df = pd.pivot_table(mdf, columns = ['genre'], index = ['user_id', 'sex'],
values = ['rating'], aggfunc = np.mean)
print df
</code></pre>
| 0 |
2016-09-10T00:43:23Z
|
[
"python",
"pandas"
] |
Killing a sub-subprocess from python/bash without leaving orphans
| 39,420,683 |
<p>I have a system with 3 main components (everything running on Ubuntu 14.04 and Python 2.7, don't care too much about portability):</p>
<p>a) Some binary that executes, let's call it <code>runtime</code>, it writes some values to <code>stdout</code> and <code>stderr</code> that I need to read from (c)</p>
<p>b) A launcher script that set's up the environment and executes <code>runtime</code>, let's call it <code>launcher</code>, it looks something like this:</p>
<pre><code>#!/bin/bash
#Needed by the runtime
export SOME_VAR=1234
source some_file.sh
#...
exec runtime --option1 $1 --option2 $2
</code></pre>
<p>c) A server, it handles remote requests and calls the <code>launcher</code> when a remote client requests it. The server is written in python and needs to read the <code>stdX</code> from (a). I'm calling <code>launcher</code> using the subprocess module:</p>
<pre><code># When calling the launcher script:
cmd = "launcher value1 value2"
p = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=True)
# read p.stdout ...
</code></pre>
<p>This creates 2 new processes, 1 for the <code>launcher</code> script (e.g. pid:10000) and other for the <code>runtime</code> (e.g. pid:10001), running <code>htop</code> with <code>tree</code> shows something similar to this:</p>
<pre><code>PID ... Command
12345 ... python server.py
10000 ... |- /bin/sh -c ./launcher value1 value2
10001 ... |- runtime --option1 value1 --option2 value2
</code></pre>
<p>At some point, the <code>server</code> wants to stop the <code>runtime</code> process (by sending a <code>SIGTERM</code>)</p>
<pre><code># When trying to stop the runtime:
os.kill(p.pid, signal.SIGTERM)
# Have also tried with p.terminate(), same results.
</code></pre>
<p>The problem is that this only kills the <code>launcher</code> process (pid:10000), leaving the <code>runtime</code> executing indefinitely. I want to properly propagate the <code>SIGTERM</code> to the <code>runtime</code> whenever the <code>server</code> requests it (<code>runtime</code> MUST receive this signal to stop and exit gracefully, otherwise it will produce data corruption).</p>
<p>I've tried using <code>os.killpg(...)</code> to kill the whole process group, but it also kills the <code>server</code> (obviously).</p>
<p><strong>How can I properly propagate a SIGTERM signal to a sub-sub-process?</strong></p>
| 1 |
2016-09-09T22:46:34Z
| 39,420,840 |
<p>Use a process group this way:</p>
<pre><code>p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
shell=True, preexec_fn=os.setpgrp)
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
</code></pre>
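<p>A quick self-contained demonstration of the idea (the command here is just a stand-in for the launcher): because the child shell is started in its own process group, <code>os.killpg</code> reaches every process in that group without touching the server itself:</p>

```python
import os
import signal
import subprocess

# Start a shell (which may spawn children) in a new process group
p = subprocess.Popen("sleep 30", shell=True, preexec_fn=os.setpgrp)

# Signal the whole group: the shell AND anything it spawned
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
p.wait()
print(p.returncode)  # negative on POSIX: the process was terminated by a signal
```
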
| 0 |
2016-09-09T23:04:33Z
|
[
"python",
"linux",
"bash",
"ubuntu",
"subprocess"
] |
Get a list of subdirectories
| 39,420,685 |
<p>I know I can do this:</p>
<pre><code>data = sc.textFile('/hadoop_foo/a')
data.count()
240
data = sc.textFile('/hadoop_foo/*')
data.count()
168129
</code></pre>
<p>However, I would like to count the size of the data of every subdirectory of "/hadoop_foo/". Can I do that?</p>
<p>In other words, what I want is something like this:</p>
<pre><code>subdirectories = magicFunction()
for subdir in subdirectories:
data sc.textFile(subdir)
data.count()
</code></pre>
<hr>
<p>I tried with:</p>
<pre><code>In [9]: [x[0] for x in os.walk("/hadoop_foo/")]
Out[9]: []
</code></pre>
<p>but I think that fails, because it searches at the local directory of the driver (the gateway in that case), while "/hadoop_foo/" lies in the <a href="/questions/tagged/hdfs" class="post-tag" title="show questions tagged 'hdfs'" rel="tag">hdfs</a>. Same for "hdfs:///hadoop_foo/".</p>
<hr>
<p>After reading <a href="http://stackoverflow.com/questions/31056680/how-can-i-list-subdirectories-recursively-for-hdfs">How can I list subdirectories recursively for HDFS?</a>, I am wondering if there is a way to execute:</p>
<pre><code>hadoop dfs -lsr /hadoop_foo/
</code></pre>
<p>in code..</p>
<hr>
<p>From <a href="http://stackoverflow.com/questions/39303218/correct-way-of-writing-two-floats-into-a-regular-txt">Correct way of writing two floats into a regular txt</a>:</p>
<pre><code>In [28]: os.getcwd()
Out[28]: '/homes/gsamaras' <-- which is my local directory
</code></pre>
| 0 |
2016-09-09T22:46:46Z
| 39,421,095 |
<p>With python use <a href="https://pypi.python.org/pypi/hdfs/" rel="nofollow">hdfs</a> module; <a href="https://hdfscli.readthedocs.io/en/latest/api.html#hdfs.client.Client.walk" rel="nofollow">walk()</a> method can get you list of files.</p>
<p>The code should look something like this:</p>
<pre><code>from hdfs import InsecureClient
client = InsecureClient('http://host:port', user='user')
for stuff in client.walk(dir, 0, True):
...
</code></pre>
<p>With Scala you can get the filesystem (<code>val fs = FileSystem.get(new Configuration())</code>) and run <a href="https://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/fs/FileSystem.html#listFiles(org.apache.hadoop.fs.Path" rel="nofollow"><code>FileSystem.listFiles(Path, boolean)</code></a>.</p>
<p>You can also execute a shell command from your script with the <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess</a> module, but this is never a recommended approach, since you then depend on the text output of a shell utility.</p>
<hr>
<p>Eventually, what worked for the OP was using <a href="https://docs.python.org/2/library/subprocess.html#subprocess.check_output" rel="nofollow">subprocess.check_output()</a>:</p>
<pre><code>subdirectories = subprocess.check_output(["hadoop","fs","-ls", "/hadoop_foo/"])
</code></pre>
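A hedged sketch of turning that output into a list of paths (the exact <code>-ls</code> column layout varies between Hadoop versions; here the path is assumed to be the last whitespace-separated field of each line):

```python
import subprocess

def parse_ls_output(text):
    # Extract the path (last field) from each `hadoop fs -ls` output line,
    # skipping header lines like "Found 3 items".
    paths = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[-1].startswith('/'):
            paths.append(parts[-1])
    return paths

def list_hdfs_dir(path):
    # Assumes the `hadoop` CLI is on PATH; not runnable without a cluster.
    out = subprocess.check_output(["hadoop", "fs", "-ls", path])
    return parse_ls_output(out.decode())
```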
| 1 |
2016-09-09T23:40:26Z
|
[
"python",
"hadoop",
"apache-spark",
"hdfs",
"bigdata"
] |
Substituting strings with sub() method with regex
| 39,420,713 |
<p>I didn't want to write and ask this forum, but I am stuck, and the book I'm following, which is supposed to be for beginners, is anything but... </p>
<p>Anyway... In the string below: </p>
<pre><code>'Agent Alice told Agent Bob that Agent Steve was a double agent.'
</code></pre>
<p>I want to show just the first letter of the agent's first name. So what I end up with, is:</p>
<pre><code>'Agent A**** told Agent B**** that Agent S**** was a double agent.'
</code></pre>
<p>I tried using grouping, like in the book, but its not working.</p>
<pre><code>namesRegex = re.compile(r'\w?([A-Z])')
mo = namesRegex.sub(r'\1****', 'Agent Alice told Agent Bob that Agent Steve was a double agent.')
print(mo)
</code></pre>
<p>Also, I would welcome any recommended additional resources on this topic. Thanks in advance... </p>
| 0 |
2016-09-09T22:50:58Z
| 39,420,788 |
<p>You can use the lookbehind <code>?<=</code> syntax for this:</p>
<pre><code>namesRegex = re.compile(r'(?<=Agent\s[A-Z])\w+')
mo = namesRegex.sub(r'****', 'Agent Alice told Agent Bob that Agent Steve was a double agent.')
mo
# 'Agent A**** told Agent B**** that Agent S**** was a double agent.'
</code></pre>
<p>This will replace any word characters <code>\w+</code>, including alphanumeric characters and underscore <code>_</code>, after the pattern <code>Agent\s[A-Z]</code> with <code>****</code>. If it's not guaranteed that the agents' names start with a capital letter, <code>Agent\s[A-Za-z]</code> would be a less restricted option.</p>
| 1 |
2016-09-09T22:58:32Z
|
[
"python"
] |
Why do certain implementations run slow in Python?
| 39,420,734 |
<p>I have three implementations of a function that checks whether a string (or a space delimited phrase) is a palindrome:</p>
<pre><code>def palindrome(str_in):
def p(s, i, j):
if i >= j:
return True
elif s[i] != s[j]:
return False
else:
return p(s, i+1, j-1)
return p(str_in.replace(' ', '').lower(), 0, len(str_in)-1)
def palindrome1(s):
st = s.replace(' ', '').lower()
return st == st[::-1]
def palindrome2(s):
st = s.replace(' ', '').lower()
i, j = 0, len(st)-1
while i < j:
if st[i] != st[j]:
return False
else:
i += 1
j -= 1
return True
</code></pre>
<p>Now, I figured <code>palindrome()</code> would be optimal in theory, because no reversing or extra memory allocation takes place, but Python does not have tail call optimization. <code>palindrome2()</code> is the imperative version of <code>palindrome()</code> but still takes much longer than <code>palindrome1()</code>. Why is this?</p>
<p>Here is the profiled results (ran with: <code>python -m cProfile file.py</code>):</p>
<pre><code> 12 function calls in 45.341 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.232 0.232 45.341 45.341 file.py:1(<module>)
1 2.198 2.198 3.532 3.532 file.py:300(palindrome1)
1 39.442 39.442 40.734 40.734 file.py:304(palindrome2)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 2.396 1.198 2.396 1.198 {method 'lower' of 'str' objects}
1 0.843 0.843 0.843 0.843 {method 'read' of 'file' objects}
2 0.231 0.115 0.231 0.115 {method 'replace' of 'str' objects}
1 0.000 0.000 0.000 0.000 {open}
1 0.000 0.000 0.000 0.000 {sys.setrecursionlimit}
</code></pre>
<p>Here is the profiled results(ran with: <code>pypy -m cProfile hw2.py</code>):</p>
<pre><code> 11 function calls in 12.470 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.011 0.011 12.470 12.470 hw2.py:1(<module>)
1 2.594 2.594 6.280 6.280 hw2.py:303(palindrome1)
1 0.852 0.852 4.347 4.347 hw2.py:307(palindrome2)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 3.263 1.631 3.263 1.631 {method 'lower' of 'str' objects}
1 1.832 1.832 1.832 1.832 {method 'read' of 'file' objects}
2 3.918 1.959 3.918 1.959 {method 'replace' of 'str' objects}
1 0.000 0.000 0.000 0.000 {sys.setrecursionlimit}
</code></pre>
<p>Here is my palindrome constructor:</p>
<pre><code>def palindrome_maker(n):
from random import choice
alphabet = ' abcdefghijklmnopqrstuvwxyz'
front = ''.join([choice(alphabet) for _ in range(n//2)])
back = front[::-1]
return front + (choice(alphabet) if n%2==1 else '') + back
</code></pre>
<p>BTW: the profile shows the performance for calling the functions with a string of length <code>999999999</code>.</p>
| 2 |
2016-09-09T22:53:15Z
| 39,421,780 |
<p>OK, so let's start from the beginning. CPython compiles the visible text into a thing called bytecode, which is a representation that is easier for the virtual machine (i.e. the interpreter) to understand.</p>
<p>Both the <code>palindrome</code> and <code>palindrome2</code> functions are slower than <code>palindrome1</code> because of this overhead. There's a neat module in CPython called <code>dis</code>. If you use it on a compiled function it will show its internal representation. So let's do this:</p>
<pre><code>>>> dis.dis(palindrome)
2 0 LOAD_CLOSURE 0 (p)
3 BUILD_TUPLE 1
6 LOAD_CONST 1 (<code object p at 0x01B95110, file "<stdin>", line 2>)
9 LOAD_CONST 2 ('palindrome.<locals>.p')
12 MAKE_CLOSURE 0
15 STORE_DEREF 0 (p)
9 18 LOAD_DEREF 0 (p)
21 LOAD_FAST 0 (str_in)
24 LOAD_ATTR 0 (replace)
27 LOAD_CONST 3 (' ')
30 LOAD_CONST 4 ('')
33 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
36 LOAD_ATTR 1 (lower)
39 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
42 LOAD_CONST 5 (0)
45 LOAD_GLOBAL 2 (len)
48 LOAD_FAST 0 (str_in)
51 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
54 LOAD_CONST 6 (1)
57 BINARY_SUBTRACT
58 CALL_FUNCTION 3 (3 positional, 0 keyword pair)
61 RETURN_VALUE
</code></pre>
<p>Now let's compare this with <code>palindrome1</code> function:</p>
<pre><code>>>> dis.dis(palindrome1)
2 0 LOAD_FAST 0 (s)
3 LOAD_ATTR 0 (replace)
6 LOAD_CONST 1 (' ')
9 LOAD_CONST 2 ('')
12 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
15 LOAD_ATTR 1 (lower)
18 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
21 STORE_FAST 1 (st)
3 24 LOAD_FAST 1 (st)
27 LOAD_FAST 1 (st)
30 LOAD_CONST 0 (None)
33 LOAD_CONST 0 (None)
36 LOAD_CONST 4 (-1)
39 BUILD_SLICE 3
42 BINARY_SUBSCR
43 COMPARE_OP 2 (==)
46 RETURN_VALUE
</code></pre>
<p>So this is what CPython more or less sees (actually these are encoded into a binary form, which is irrelevant at the moment). Then the virtual machine goes through those lines and executes them one by one.</p>
<p>So the first obvious thing is: more lines == more time to execute. This is because each line has to be interpreted and the appropriate C code has to execute. And there are a lot more lines executed in both functions other than <code>palindrome1</code>, because of the loop and the recursive calls. So essentially it's like you're trying to run a few laps but Python says "no, no, no, you have to run with 20kg on your shoulders". The more laps there are (i.e. the more bytecode to execute) the slower you get. Generally this performance degradation should be linear in CPython, but really, who knows without reading through CPython's code? I've heard that a technique called <a href="https://en.wikipedia.org/wiki/Inline_caching" rel="nofollow">inline caching</a> was supposed to be implemented in CPython, which would affect the performance a lot. I don't know whether it was done or not.</p>
<p>The other thing is that calls in Python are expensive. There is an <a href="https://en.wikipedia.org/wiki/Application_binary_interface" rel="nofollow">ABI</a> for how calls should be done at the low level (i.e. push registers onto the stack and do a jump). C/C++ follows it. Now Python does a lot more than that. There are frames created (which can be analyzed, e.g. when an exception happens), there's a max recursion check, etc. etc. All of that counts towards performance loss.</p>
<p>So the <code>palindrome</code> function makes <strong>a lot</strong> of calls. Recursion is inefficient in Python. In particular this is the reason why <code>palindrome2</code> is faster than <code>palindrome</code>.</p>
<p>The other thing is that <code>palindrome1</code> has <code>[::-1]</code>, which translates into a <code>BUILD_SLICE</code> call that is implemented in C. So even though it does more than necessary (there is no reason for creating another copy of the string) it is still faster than the other functions, simply because the intermediate layer (i.e. the bytecode) is minimal. There is no need for the interpreter to waste time on bytecode interpretation.</p>
<p>Another important thing is that each object you create in Python has to be garbage collected. And since these objects are generally bigger than pure C objects (for example due to the reference counter), this takes more time. Ah, by the way, incrementing and decrementing reference counters also takes time. Also there's this thing called the GIL (Global Interpreter Lock), which acquires and releases a lock around each instruction so that the bytecode is thread safe. Even though it is completely unnecessary for a single-threaded application. But Python doesn't know that you won't run threads at some point, so it has to do that each time. This is all so that you don't have to worry about hard problems that most C/C++ coders have to deal with. :)</p>
<p>Now PyPy is another story. It has this neat thing inside it called a JIT = Just In Time compiler. What it does is take any Python bytecode and convert it into machine code on the fly, which is then reused. So the initial call to a function has this compiling overhead, but it still is faster. Ultimately there is no bytecode at all and all functions run purely on the CPU. However this doesn't mean that PyPy is as fast as a function written in C (e.g. <code>[::-1]</code>), simply because there are lots of optimizations done at the C level which we don't know how to implement in PyPy or any other Python interpreter. This is due to the nature of the language - it is dynamic. Now whether it is truly impossible is another story; it's not obvious at all, but at the moment we just don't know how to do this.</p>
<p><strong>tl;dr;</strong> builtin functions (or more generally C code run in Python) are always at least as fast as equivalent pure Python code and in most cases a lot faster</p>
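As a quick sanity check of all this, a small <code>timeit</code> sketch comparing the slice-based version against the explicit loop (sizes and repetition counts are illustrative; on CPython the slice should win by a wide margin because the comparison runs entirely in C):

```python
import timeit

def palindrome1(s):
    st = s.replace(' ', '').lower()
    return st == st[::-1]

def palindrome2(s):
    st = s.replace(' ', '').lower()
    i, j = 0, len(st) - 1
    while i < j:
        if st[i] != st[j]:
            return False
        i += 1
        j -= 1
    return True

half = 'ab' * 50000
s = half + half[::-1]          # a 200000-character palindrome
t1 = timeit.timeit(lambda: palindrome1(s), number=20)
t2 = timeit.timeit(lambda: palindrome2(s), number=20)
print(t1, t2)                  # t1 is typically far smaller on CPython
```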
| 1 |
2016-09-10T01:56:10Z
|
[
"python",
"recursion",
"optimization",
"time-complexity",
"palindrome"
] |
Subtracting line of a column with other line values using python 3.5?
| 39,420,799 |
<pre><code>1 10.00
2 11.23
3 12.32
4 23.55
5 15.33
6 12.23
7 22
8 10.33
9 8.9
10 5.89
</code></pre>
<p>I have a dat file with the above values. I want to subtract line 1 of column 2 from lines 2,3,4...10 of column 2, then line 2 of column 2 from lines 3,4,5...10, then 3 from 4,5...10, and so on until line 9 from 10. I would also like to print the values and which line numbers were subtracted. How can I do that in python? Could you please help me? I tried with numpy but could not figure out how to solve it with my conditions.
I will really appreciate your help. Thanks </p>
| 0 |
2016-09-09T22:59:53Z
| 39,420,879 |
<pre><code>k = [10.0, 11.23, 12.32, 23.55, 15.33, 12.23, 22, 10.33, 8.9, 5.89]
k[:] = [x - 10.0 for x in k]
[0.0, 1.23, 2.32, 13.55, 5.33, 2.23, 12.0, 0.33, -1.1, -4.11]
</code></pre>
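The snippet above only subtracts the first value; the question actually asks for the difference between each line and every later line, with the line numbers shown. A sketch of that, assuming the second column has already been read into a list:

```python
values = [10.00, 11.23, 12.32, 23.55, 15.33, 12.23, 22, 10.33, 8.9, 5.89]

for i in range(len(values)):
    for j in range(i + 1, len(values)):
        # line numbers are 1-based, matching the file
        print('line %d - line %d = %.2f' % (i + 1, j + 1, values[i] - values[j]))
```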
| 0 |
2016-09-09T23:09:45Z
|
[
"python"
] |
Making snapshot plots using python
| 39,420,805 |
<p>I have a dynamic simulation that I am trying to visualise in a paper that I am writing. </p>
<p>I would like to take 'snapshots' of the dynamics as they progress over time, and then superimpose all of them on the same canvas, plotted against time (for each snapshot). </p>
<p>Similar to this (walking mechanism):</p>
<p><a href="http://i.stack.imgur.com/9VDVx.png" rel="nofollow"><img src="http://i.stack.imgur.com/9VDVx.png" alt="enter image description here"></a></p>
<p>For completeness; the snapshots are taken at some regular interval, predefined by some frequency. This is what I would like to emulate.</p>
| 0 |
2016-09-09T23:00:15Z
| 39,424,752 |
<p>This answer might need some iterations to improve since it is still not completely clear how your model looks, what kind of data you get, etc. But below is attempt #1 which plots a dynamic (<em>drunk stick figure</em>) system in time/space.</p>
<pre><code>import matplotlib.pylab as pl
import numpy as np
pl.close('all')
y = np.array([2,1,0]) # y-location of hips,knees,feet
x = np.zeros((2,3)) # x-coordinates of both legs
# Plot while the model runs:
pl.figure()
pl.title('Drunk stick figure', loc='left')
pl.xlabel('x')
pl.ylabel('y')
# The model:
for t in range(10):
x[:,0] = t # start (top of legs) progress in x in time
x[:,1] = x[:,0] + np.random.random(2) # random location knees
x[:,2] = x[:,0] + np.random.random(2) # random location feet
pl.plot(x[0,:], y[:], color='k')
pl.plot(x[1,:], y[:], color='r')
# or, if you want to plot every nth (lets say second) step:
# if (t % 2 == 0):
# pl.plot(..)
</code></pre>
<p><a href="http://i.stack.imgur.com/ufsfz.png" rel="nofollow"><img src="http://i.stack.imgur.com/ufsfz.png" alt="enter image description here"></a></p>
<p>In this case the plot is updated while the model runs, but that could of course easily be changed by e.g. saving the data and plotting them in a similar loop afterwards. </p>
| 1 |
2016-09-10T09:40:59Z
|
[
"python",
"matplotlib",
"simulate"
] |
Working out which points lat/lon coordinates are closest to
| 39,420,835 |
<p>I currently have a list of coordinates</p>
<pre><code>[(52.14847612092221, 0.33689512047881015),
(52.14847612092221, 0.33689512047881015),
(52.95756796776235, 0.38027099942700493),
(51.78723479900971, -1.4214854900618064)
...]
</code></pre>
<p>I would like to split this list into 3 separate lists/datafames corresponding to which city they are closest to (in this case the coordinates are all in the UK and the 3 cities are Manchester, Cardiff and London) </p>
<p>So at the end result I would like the current single list of coordinates to be split into either separate lists ideally or it could be a dataframe with 3 columns would be fine eg:</p>
<pre><code> leeds cardiff london
(51.78723479900971, (51.78723479900971, (51.78723479900971,
-1.4214854900618064) -1.4214854900618064) -1.4214854900618064)
</code></pre>
<p><em>(those are obiously not correct coordinates!)</em></p>
<p>-Hope that makes sense. It doesn't have to be overly accurate (don't need to take into consideration the curvature of the earth or anything like that!)</p>
<p>I'm really not sure where to start with this - I'm very new to python and would appreciate any help!
Thanks in advance</p>
| 0 |
2016-09-09T23:04:18Z
| 39,421,045 |
<p>This will get you started:</p>
<pre><code>from geopy.geocoders import Nominatim
geolocator = Nominatim()
places = ['london','cardiff','leeds']
coordinates = {}
for i in places:
    loc = geolocator.geocode(i)  # look each place up once
    coordinates[i] = (loc.latitude, loc.longitude)

>>> print coordinates
{'cardiff': (51.4816546, -3.1791933), 'leeds': (53.7974185, -1.543794), 'london': (51.5073219, -0.1276473)}
</code></pre>
<p>You can now hook up the architecture for putting this in a pandas dataframe, calculating the distance metric between your coordinates and the above.</p>
<p>Ok so now we want to do distances between what is a very small array (the coordinates).</p>
<p>Here's some code:</p>
<pre><code>import numpy as np
single_point = [3, 4] # A coordinate
points = np.arange(20).reshape((10,2)) # Lots of other coordinates
dist = (points - single_point)**2
dist = np.sum(dist, axis=1)
dist = np.sqrt(dist)
</code></pre>
<p>From here there is any number of things you can do. You can sort it using numpy, or you can place it in a pandas dataframe and sort it there (though that's really just a wrapper for the numpy function I believe). Whichever you're more comfortable with.</p>
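Putting those two pieces together, a hedged sketch that assigns every point to its nearest city with <code>argmin</code> (plain Cartesian distance, which the question says is accurate enough):

```python
import numpy as np

cities = ['cardiff', 'leeds', 'london']
centres = np.array([(51.4816546, -3.1791933),
                    (53.7974185, -1.543794),
                    (51.5073219, -0.1276473)])

points = np.array([(52.14847612, 0.33689512),
                   (52.95756797, 0.38027100),
                   (51.78723480, -1.42148549)])

# (n_points, n_cities) matrix of Euclidean distances via broadcasting
dist = np.sqrt(((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2))
nearest = dist.argmin(axis=1)

grouped = {name: [] for name in cities}
for pt, idx in zip(points, nearest):
    grouped[cities[idx]].append(tuple(pt))
print(grouped)
```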
| 0 |
2016-09-09T23:34:50Z
|
[
"python",
"geospatial",
"geopy"
] |
Working out which points lat/lon coordinates are closest to
| 39,420,835 |
<p>I currently have a list of coordinates</p>
<pre><code>[(52.14847612092221, 0.33689512047881015),
(52.14847612092221, 0.33689512047881015),
(52.95756796776235, 0.38027099942700493),
(51.78723479900971, -1.4214854900618064)
...]
</code></pre>
<p>I would like to split this list into 3 separate lists/datafames corresponding to which city they are closest to (in this case the coordinates are all in the UK and the 3 cities are Manchester, Cardiff and London) </p>
<p>So at the end result I would like the current single list of coordinates to be split into either separate lists ideally or it could be a dataframe with 3 columns would be fine eg:</p>
<pre><code> leeds cardiff london
(51.78723479900971, (51.78723479900971, (51.78723479900971,
-1.4214854900618064) -1.4214854900618064) -1.4214854900618064)
</code></pre>
<p><em>(those are obviously not correct coordinates!)</em></p>
<p>-Hope that makes sense. It doesn't have to be overly accurate (don't need to take into consideration the curvature of the earth or anything like that!)</p>
<p>I'm really not sure where to start with this - I'm very new to python and would appreciate any help!
Thanks in advance</p>
| 0 |
2016-09-09T23:04:18Z
| 39,821,344 |
<p>This is a pretty brute force approach, and not too adaptable. However, that can be the easiest to understand and might be plenty efficient for the problem at hand. It also uses only pure python, which may help you to understand some of python's conventions.</p>
<pre><code>points = [(52.14847612092221, 0.33689512047881015), (52.14847612092221, 0.33689512047881015), (52.95756796776235, 0.38027099942700493), (51.78723479900971, -1.4214854900618064), ...]
cardiff = (51.4816546, -3.1791933)
leeds = (53.7974185, -1.543794)
london = (51.5073219, -0.1276473)
def distance(pt, city):
return ((pt[0] - city[0])**2 + (pt[1] - city[1])**2)**0.5
cardiff_pts = []
leeds_pts = []
london_pts = []
undefined_pts = [] # for points equidistant between two/three cities
for pt in points:
d_cardiff = distance(pt, cardiff)
d_leeds = distance(pt, leeds)
d_london = distance(pt, london)
if (d_cardiff < d_leeds) and (d_cardiff < d_london):
cardiff_pts.append(pt)
elif (d_leeds < d_cardiff) and (d_leeds < d_london):
leeds_pts.append(pt)
elif (d_london < d_cardiff) and (d_london < d_leeds):
london_pts.append(pt)
else:
undefined_pts.append(pt)
</code></pre>
<p>Note that this solution assumes the values are on a Cartesian reference frame, which latitude/longitude pairs are not.</p>
| 0 |
2016-10-02T20:24:50Z
|
[
"python",
"geospatial",
"geopy"
] |
Defining "add" function in a class
| 39,420,963 |
<p>I'm writing my own code language in Python (called Bean), and I want the math functions to have the syntax:</p>
<p>print math.add(3+7)<br>
==>10</p>
<p>print math.mul(4*8)<br>
==>32</p>
<p>and so on. So far my code is:</p>
<pre><code>bean_version = "1.0"
console = []
print "Running Bean v%s" % bean_version
#Math Function
class math(object):
def __init__(self, add, sub, mul, div):
self.add = add
self.sub = sub
self.mul = mul
self.div = div
def add(self):
print self.add
math = math(1,0,0,0)
print math.add()
</code></pre>
<p>But this will return an error, saying that <em>TypeError: 'int' object is not callable</em>. I can change the "add" function to a different name and it will work, but I would like to use "add" as the name.<br>
Thanks (This is Python 2.7.10 by the way)</p>
| -1 |
2016-09-09T23:22:30Z
| 39,421,021 |
<p>If you don't want to change the name of the add function, you can just change <code>self.add</code>. These two <code>add</code>s are conflicting with each other. You will not get any error if you run this:</p>
<pre><code>bean_version = "1.0"
console = []
print "Running Bean v%s" % bean_version
#Math Function
class math(object):
def __init__(self, add, sub, mul, div):
self.adds = add
self.sub = sub
self.mul = mul
self.div = div
def add(self):
print self.adds
math = math(1,0,0,0)
print math.add()
</code></pre>
| 3 |
2016-09-09T23:31:10Z
|
[
"python",
"python-2.7"
] |
Defining "add" function in a class
| 39,420,963 |
<p>I'm writing my own code language in Python (called Bean), and I want the math functions to have the syntax:</p>
<p>print math.add(3+7)<br>
==>10</p>
<p>print math.mul(4*8)<br>
==>32</p>
<p>and so on. So far my code is:</p>
<pre><code>bean_version = "1.0"
console = []
print "Running Bean v%s" % bean_version
#Math Function
class math(object):
def __init__(self, add, sub, mul, div):
self.add = add
self.sub = sub
self.mul = mul
self.div = div
def add(self):
print self.add
math = math(1,0,0,0)
print math.add()
</code></pre>
<p>But this will return an error, saying that <em>TypeError: 'int' object is not callable</em>. I can change the "add" function to a different name and it will work, but I would like to use "add" as the name.<br>
Thanks (This is Python 2.7.10 by the way)</p>
| -1 |
2016-09-09T23:22:30Z
| 39,421,076 |
<p>An object can't have two properties with the same name. So if you have a property called <code>add</code> that holds a number, it can't also have a method called <code>add</code>, because methods are just properties that happen to hold functions. When you do:</p>
<pre><code>self.add = add
</code></pre>
<p>you're replacing the method with that number. So you need to use different names.</p>
<p>If you still want a method spelled essentially the same way, one option is to capitalize it and use the conventional <code>self</code> parameter:</p>
<pre><code>def Add(self):
    print self.add
</code></pre>
<p>This doesn't conflict with the <code>add</code> property that holds the number, since names are case-sensitive.</p>
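For completeness, a minimal runnable sketch of the whole class with the capitalized method (using <code>print()</code> with parentheses so it runs on both Python 2 and 3):

```python
class math(object):
    def __init__(self, add, sub, mul, div):
        self.add = add
        self.sub = sub
        self.mul = mul
        self.div = div

    def Add(self):          # capitalized: no clash with the 'add' attribute
        return self.add

m = math(3 + 7, 0, 0, 0)    # Python evaluates 3 + 7 before __init__ sees it
print(m.Add())              # 10
```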
| 3 |
2016-09-09T23:37:38Z
|
[
"python",
"python-2.7"
] |
How did I overwrite my whole python program with this command prompt command?
| 39,420,977 |
<p>So I was putting in some values into a Python program I have written (what it does is irrelevant):</p>
<blockquote>
<p>E:\Users\Me\Desktop\Python>idtohex.py</p>
<p>Enter ID: 213467</p>
<p>DB 41 03</p>
</blockquote>
<p>Nice, it works alright. Now at this point I had accidentally copied this string to my clipboard:</p>
<blockquote>
<p>[Newline here]</p>
<p>E:\Users\Me\Desktop\Python>idtohex.py</p>
</blockquote>
<p>From when I had originally started the program in the command prompt.</p>
<p>Now, because of the newline my program crashes as it couldn't handle the input (oops):</p>
<blockquote>
<p>ValueError: invalid literal for int() with base 10: ''</p>
</blockquote>
<p>Because of this error I am now returned to the command prompt and the next line from my clipboard is entered, which is trying to run this command:</p>
<blockquote>
<p>E:\Users\Me\Desktop\Python>idtohex.py</p>
</blockquote>
<p>The command prompt gives me this error so I think it's not a big deal and just get on with it:</p>
<blockquote>
<p>'E:\Users\Me\Desktop\Python' is not recognized as an internal or external command,</p>
<p>operable program or batch file.</p>
</blockquote>
<p>Now I try to run the program again and my whole program is completely empty. <a href="http://i.stack.imgur.com/Ej0bs.png" rel="nofollow">Overwritten. 0-bytes on disk</a>. So with that simple mistake I somehow overwrote my whole program and have no way to get it back.</p>
<p>How the <strong><em>hell</em></strong> does this happen within the Windows command prompt?</p>
| 0 |
2016-09-09T23:25:14Z
| 39,421,116 |
<p>Before running a command, cmd.exe sets up its standard handles for the child process to inherit. The <code>></code> operator redirects a file descriptor to a file opened for output, and it truncates an existing file to 0 bytes. It defaults to file descriptor 1 (standard output, or stdout). Since the only output is the error message on file descriptor 2 (standard error, stderr), the resulting file is empty. If you had run <code>E:\Users\Me\Desktop\Python 2>idtohex.py</code> (the space is necessary in this case for parsing), the file would instead get overwritten by the error message.</p>
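The truncate-before-run behavior is easy to demonstrate. A sketch driving a POSIX shell via <code>subprocess</code> (cmd.exe behaves the same way for this case, but this form is testable on any Unix-like system):

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
f = os.path.join(d, 'f.txt')
with open(f, 'w') as fh:
    fh.write('hello')        # file starts out non-empty

# The shell opens and truncates f.txt for '>' *before* discovering that the
# command doesn't exist, so the failed command still leaves an empty file.
subprocess.call('nosuchcommand12345 > %s 2>/dev/null' % f, shell=True)
print(os.path.getsize(f))    # 0
```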
| 5 |
2016-09-09T23:43:54Z
|
[
"python",
"windows",
"batch-file",
"command-line",
"cmd"
] |
Play audio using online compiler
| 39,421,103 |
<p>I am working on a program, and I want to be able to play a mp3 file (preferably, though other files could work). The catch is that, unfortunately, I'm using an online compiler (<a href="http://repl.it" rel="nofollow">repl.it</a>), and I can't use a desktop compiler. In other words, I can't use pyglet, or really any package not part of the standard ones. I've looked all over stack exchange, google, and beyond, but I can't seem to find anything. I don't need to edit the file, just play it.</p>
<p>I am using chrome as my browser, and the computer I'm using is a chromebook.</p>
<p>Any help would be appreciated. Thanks!</p>
| 2 |
2016-09-09T23:41:24Z
| 39,435,721 |
<p>Even if you could install a library for audio playback on the online REPL, wouldn't the sound be played back somewhere in the racks of a data center instead of your computer at home?</p>
<p>AFAIK, the only currently feasible solution to this problem is to use an online service that allows HTML output and to use the HTML5 <code><audio></code> tag to play back the desired sound on your local computer via your browser. I prefer to use <a href="http://jupyter.org/" rel="nofollow">Jupyter notebooks</a> for that.</p>
<p>IPython provides <a href="http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.Audio" rel="nofollow">IPython.display.Audio</a> which turns a Python buffer, a <code>bytes</code> object or a NumPy array into an <code><audio></code> tag. You can try this immediately at <a href="https://try.jupyter.org/" rel="nofollow">https://try.jupyter.org/</a>. Note that this embeds the raw audio data into the notebook, making it quite large.</p>
<p>I normally prefer to save the resulting audio data to a sound file (e.g. a WAV file) and manually create an <code><audio></code> tag for it within a Markdown cell. You can of course also do this on <a href="https://try.jupyter.org/" rel="nofollow">https://try.jupyter.org/</a>.</p>
<p>If you want to share your results with others, you can for example use <a href="http://mybinder.org/" rel="nofollow">Binder</a>. Here is an example of a <a href="http://mybinder.org/repo/spatialaudio/communication-acoustics-exercises/notebooks/brir-solutions.ipynb" rel="nofollow">Jupyter notebook using HTML5 <code><audio></code> elements running interactively on Binder</a>. You can even install custom libraries on your Binder, see for example <a href="https://github.com/spatialaudio/communication-acoustics-exercises/blob/master/Dockerfile" rel="nofollow">my Dockerfile</a>.</p>
| 1 |
2016-09-11T11:44:43Z
|
[
"python",
"audio",
"compilation",
"music"
] |
How to get the specific C compiler type from Python distutils?
| 39,421,201 |
<p>I'd like to check the system's C compiler in Python so that I can add library links accordingly to compile my Cython code.</p>
<p>I understand <code>distutils.ccompiler.get_default_compiler()</code> or something like <code>compiler.compiler_type</code> would return a compiler name. But the result is too coarse, e.g. just "unix".</p>
<p>What I need is more specific information such as "gcc", "icc", "clang", etc., which are all shown as "unix" using the methods above.</p>
<p>One possible way to get the information is to check the system's environment variable <code>CC</code> via <code>os.environ["CC"]</code>, but it is not guaranteed that every system has <code>CC</code> defined so it is not a universal solution.</p>
<p>So, what should I do then? Thanks in advance! </p>
| 3 |
2016-09-09T23:58:29Z
| 39,421,570 |
<p>Generally you should be able to use the <strong><a href="https://docs.python.org/2/library/platform.html#platform.python_compiler" rel="nofollow"><code>platform</code></a></strong> module to get the info:</p>
<pre><code>>>> import platform
>>> platform.python_compiler()
'GCC 4.8.5 20150623 (Red Hat 4.8.5-4)'
</code></pre>
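Note that <code>platform.python_compiler()</code> reports the compiler that <em>built the interpreter</em>, which may differ from the system's current default. Another hedged option is the <code>CC</code> value recorded in Python's build configuration:

```python
import sysconfig

# The compiler command recorded at interpreter build time; may be None on
# platforms (e.g. Windows) where this config variable isn't set.
cc = sysconfig.get_config_var('CC')
print(cc)   # e.g. 'gcc -pthread' on many Linux builds
```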
| 2 |
2016-09-10T01:13:33Z
|
[
"python",
"c"
] |
getsockaddarg() error when using UDP sockets
| 39,421,216 |
<p>I am trying to create a simple UDP connection, but fail miserably every time. I am using Python 3.5.2 with PyCharm.</p>
<pre><code>import socket
from socket import AF_INET, SOCK_DGRAM

ip = tuple(input('Enter an ip\n'))
#time = int(input('How long? In seconds \n'))
msg = 'Hello'
addr = (ip, 80)
def connection():
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(ip)
sock.sendto(msg, addr)
connection()
</code></pre>
<p>The error I get is:</p>
<pre><code>Traceback (most recent call last):
File "A:/PycharmProjects/udp.py", line 15, in <module>
connection()
File "A:/PycharmProjects/udp.py", line 11, in connection
sock.bind(ip)
TypeError: getsockaddrarg() takes exactly 2 arguments (14 given)
Process finished with exit code 1
</code></pre>
| -1 |
2016-09-10T00:01:35Z
| 39,421,484 |
<p>Calling <code>tuple</code> will construct a tuple containing each individual character in the iterable (the input string):</p>
<pre><code>>>> tuple('127.0.0.1')
('1', '2', '7', '.', '0', '.', '0', '.', '1')
</code></pre>
<p>Don't use <code>tuple</code> just use the input you receive:</p>
<pre><code>ip = input('Enter an ip\n')
msg = 'Hello'
addr = (ip, 80)
</code></pre>
<p>and bind on it:</p>
<pre><code>sock.bind(addr)
</code></pre>
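Two further fixes are needed before the send works in Python 3.5: <code>sendto</code> requires <code>bytes</code> rather than <code>str</code>, and <code>bind</code> expects a local <code>(host, port)</code> tuple, not a remote IP. A minimal working sketch (the target address is just an illustration):

```python
import socket

msg = 'Hello'
addr = ('127.0.0.1', 9999)   # hypothetical target, for illustration only

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 0))           # bind locally; '' = all interfaces, 0 = any free port
sent = sock.sendto(msg.encode(), addr)   # Python 3: payload must be bytes
print(sent)                  # 5 -- number of bytes handed to the kernel
sock.close()
```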
| 0 |
2016-09-10T00:53:43Z
|
[
"python",
"sockets",
"python-3.x"
] |
convert image pixels from square to hexagonal
| 39,421,233 |
<p>How can I convert the pixels of an image from square to hexagonal? In doing so I need to extract the rgb values from each hex pixel. Is there any library or function that simplifies this process?</p>
<p>Example : Mona Lisa Hexagonal Pixel Shape </p>
<p><a href="http://i.stack.imgur.com/o3isE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/o3isE.jpg" alt="enter image description here"></a>
Nothing tried. Thanks</p>
| 0 |
2016-09-10T00:03:46Z
| 39,424,482 |
<p>Here's a possible approach, though I am sure if you are able to write code to read, manipulate and use pixels from a file format that hasn't been invented yet, you should be able to create that file yourself ;-)</p>
<p>You could generate a hexagonal grid, using <strong>ImageMagick</strong> which is installed on most Linux distros and is available for OSX and Windows. Here, I am just doing things at the command-line in the Terminal, but there are Python, Perl, PHP, .Net, C/C++ and other bindings too - so take your pick.</p>
<p>First make a grid of hexagons - you'll have to work out the size you need, mine is arbitrary:</p>
<pre><code>convert -size 512x256 pattern:hexagons hexagons.png
</code></pre>
<p><a href="http://i.stack.imgur.com/dvvcT.png" rel="nofollow"><img src="http://i.stack.imgur.com/dvvcT.png" alt="enter image description here"></a></p>
<p>Now, fill in the hexagons, each with a different colour, I am just doing some examples of flood-filling here to give you the idea. Ideally, you would colour the first (top-left) hexagon with colour <code>#000</code> and the next one across with <code>#001</code> so that you could iterate through the coordinates of the output image as consecutive colours. Also, depending on your output image size, you may need to use a 32-bit PNG to accommodate the number of hexels (hexagonal pixels).</p>
<pre><code>convert hexagons.png \
  -fill red  -draw "color 100,100 floodfill" \
-fill blue -draw "color 200,200 floodfill" \
colouredmask.png
</code></pre>
<p><a href="http://i.stack.imgur.com/APU6L.png" rel="nofollow"><img src="http://i.stack.imgur.com/APU6L.png" alt="enter image description here"></a></p>
<p>Now iterate through all the colours, making every colour <em>except</em> that colour transparent. Note that I have added a black border just so you can see the context on StackOverflow's white background:</p>
<pre><code>convert colouredmask.png -fill none +opaque red onecell.png
</code></pre>
<p><a href="http://i.stack.imgur.com/6hqzS.png" rel="nofollow"><img src="http://i.stack.imgur.com/6hqzS.png" alt="enter image description here"></a></p>
<p>Now mask the original image with that mask and get the average colour of that one cell and write it to your yet-to-be-invented file format. Repeat for all cells/colours.</p>
<p>Note that the basic hexagon pattern is 30x18, so you should size your grid as multiples of that for it to tessellate properly.</p>
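<p>Rounding a requested canvas size up to the nearest tile multiple can be done programmatically — a minimal sketch (the 30x18 tile size comes from the note above; the helper name is mine):</p>

```python
def grid_size(min_w, min_h, tile_w=30, tile_h=18):
    """Round a canvas size up to the nearest multiple of the hexagon
    tile so the pattern tessellates without clipped cells."""
    # integer ceiling division, then scale back up to pixels
    return (((min_w + tile_w - 1) // tile_w) * tile_w,
            ((min_h + tile_h - 1) // tile_h) * tile_h)

print(grid_size(512, 256))  # (540, 270)
```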
<p>Note that if you have lots of these to process, you should consider using something like <strong>GNU Parallel</strong> to take advantage of multiple cores. So, if you make a script called <code>ProcessOneImage</code> and you have 2,000 images to do, you would use:</p>
<pre><code>parallel ProcessOneImage ::: *.png
</code></pre>
<p>and it will keep, say, 8 jobs running at a time if your PC has 8 cores. There are many more options; try <code>man parallel</code>.</p>
| 1 |
2016-09-10T09:07:49Z
|
[
"python",
"opencv",
"numpy",
"image-processing"
] |
convert image pixels from square to hexagonal
| 39,421,233 |
<p>How can i convert the pixels of an image from square to hexagonal? Doing so i need to extract the rgb values from each hex pixel. Is there any library or function that simplify this process?</p>
<p>Example : Mona Lisa Hexagonal Pixel Shape </p>
<p><a href="http://i.stack.imgur.com/o3isE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/o3isE.jpg" alt="enter image description here"></a>
Nothing tried. Thanks</p>
| 0 |
2016-09-10T00:03:46Z
| 39,424,951 |
<p>Fred has an ImageMagick script on his site that may do what you want: <a href="http://www.fmwconcepts.com/imagemagick/stainedglass/index.php" rel="nofollow">STAINEDGLASS</a></p>
| 2 |
2016-09-10T10:03:46Z
|
[
"python",
"opencv",
"numpy",
"image-processing"
] |
convert image pixels from square to hexagonal
| 39,421,233 |
<p>How can i convert the pixels of an image from square to hexagonal? Doing so i need to extract the rgb values from each hex pixel. Is there any library or function that simplify this process?</p>
<p>Example : Mona Lisa Hexagonal Pixel Shape </p>
<p><a href="http://i.stack.imgur.com/o3isE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/o3isE.jpg" alt="enter image description here"></a>
Nothing tried. Thanks</p>
| 0 |
2016-09-10T00:03:46Z
| 39,428,443 |
<p>First of all, I don't think there is a ready-made function to perform the lattice conversion for you, so you may need to implement the conversion process yourself.</p>
<p>The lattice conversion is a re-sampling process, and it is also an interpolation process. Many algorithms have been developed in the hexagonal image processing literature.</p>
<p>Please see the following example:
<a href="http://i.stack.imgur.com/AcPR3.png" rel="nofollow"><img src="http://i.stack.imgur.com/AcPR3.png" alt="enter image description here"></a>
<a href="http://i.stack.imgur.com/cXzPZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/cXzPZ.png" alt="enter image description here"></a></p>
| 0 |
2016-09-10T16:55:21Z
|
[
"python",
"opencv",
"numpy",
"image-processing"
] |
How to return the indices of maximum value from an array with python?
| 39,421,273 |
<p>I have an array and I want to find the indices of the maximum values.</p>
<p>For example:</p>
<pre><code>myarray = np.array([1,8,8,3,2])
</code></pre>
<p>I want to get the result: <code>[1,2]</code>, how can I do that?</p>
<p>(Actually I tried <code>np.argmax(myarray)</code>, but it only return the first occurrence <code>[1]</code>)</p>
| 1 |
2016-09-10T00:10:50Z
| 39,421,310 |
<p>Given:</p>
<pre><code>>>> myarray = np.array([1,8,8,3,2])
</code></pre>
<p>You can do:</p>
<pre><code>>>> np.where(myarray==myarray[np.argmax(myarray)])
(array([1, 2]),)
</code></pre>
<p>or, </p>
<pre><code>>>> np.where(myarray==max(myarray))
(array([1, 2]),)
</code></pre>
<p>or, </p>
<pre><code>>>> np.nonzero(myarray==max(myarray))
(array([1, 2]),)
</code></pre>
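<p>Another equivalent option, if you prefer a flat index array directly rather than a tuple, is <code>np.flatnonzero</code> — a small sketch:</p>

```python
import numpy as np

myarray = np.array([1, 8, 8, 3, 2])
# flatnonzero returns the flat indices where the condition holds
indices = np.flatnonzero(myarray == myarray.max())
print(indices)  # [1 2]
```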
| 3 |
2016-09-10T00:18:48Z
|
[
"python",
"numpy"
] |
TypeError: takes exactly 1 argument (0 given) - Scrapy
| 39,421,304 |
<p>I'm working with scrapy. I want to generate a unique user agent for each request. I have the following:</p>
<pre><code>class ContactSpider(Spider):
name = "contact"
def getAgent(self):
f = open('useragentstrings.txt')
agents = f.readlines()
return random.choice(agents).strip()
headers = {
'user-agent': getAgent(),
'content-type': "application/x-www-form-urlencoded",
'cache-control': "no-cache"
}
def parse(self, response):
open_in_browser(response)
</code></pre>
<p>getAgent generates an agent from a list of the form:</p>
<pre><code>"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"
</code></pre>
<p>However when I run this I get:</p>
<pre><code> File "..spiders\contact_spider.py, line 35, in <module>
class ContactSpider(Spider):
File "..spiders\contact_spider.py", line 54, in ContactSpider
'user-agent': getAgent(),
TypeError: getAgent() takes exactly 1 argument (0 given)
</code></pre>
| 1 |
2016-09-10T00:17:55Z
| 39,421,436 |
<p><code>getAgent()</code> is an <em>instance method</em> and expects to see the <code>ContactSpider</code> instance as an argument. But, the problem is, you don't need this function to be a member of your spider class - move it to a separate "helpers"/"utils"/"libs" module and import:</p>
<pre><code>from helpers import getAgent
class ContactSpider(Spider):
name = "contact"
headers = {
'user-agent': getAgent(),
'content-type': "application/x-www-form-urlencoded",
'cache-control': "no-cache"
}
def parse(self, response):
open_in_browser(response)
</code></pre>
<p>See also: <a href="http://stackoverflow.com/questions/17134653/difference-between-class-and-instance-methods">Difference between Class and Instance methods</a>.</p>
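<p>A minimal sketch of what such a <code>helpers</code> module could contain (the file name and the <code>path</code> parameter are illustrative, based on the question's code):</p>

```python
# helpers.py -- illustrative sketch
import random

def getAgent(path='useragentstrings.txt'):
    """Pick a random user-agent string from a file, one agent per line."""
    with open(path) as f:
        agents = [line.strip() for line in f if line.strip()]
    return random.choice(agents)
```

<p>As a plain module-level function it takes no <code>self</code> argument, so calling it from the class body of the spider works as expected.</p>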
<hr>
<p>Or, as an alternative approach, there is a <a href="https://github.com/alecxe/scrapy-fake-useragent" rel="nofollow"><code>scrapy-fake-user-agent</code></a> Scrapy middleware that would rotate user agents seamlessly and randomly. User Agent strings are supplied by the <a href="https://pypi.python.org/pypi/fake-useragent" rel="nofollow"><code>fake-useragent</code> module</a>.</p>
| 2 |
2016-09-10T00:45:04Z
|
[
"python",
"scrapy"
] |
HTML printing is wrong
| 39,421,329 |
<p>So I've been looking at this for over an hour and I cannot figure out what the heck is going on.</p>
<p>The script is printing only a ">"</p>
<p>It's suppose to print the full HTML and then, after the form is submitted, print "print_after"</p>
<pre><code>import webapp2
class MainHandler(webapp2.RequestHandler):
def get(self):
p = Page()
if self.request.GET:
name = self.request.GET['name']
age = self.request.GET['age']
time = self.request.GET['time']
model = self.request.GET['model']
radio = self.request.GET['trade']
self.response.write(p.print_after(name, age, time, model, radio))
print name + age + time + model + radio
else:
self.response.write(p.print_one)
class Page(object):
def __init__(self):
self.page_body = '''
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="css/main.css">
<title>Audi Test Drive Request</title>
</head>
<body>
<img src="assets/custom/images/logo.png" title="logo" alt="" width="200px" height="150px"/>
<h3>It's awesome that you want to test-drive one of our vehicles</h3>
<form method="GET" action="">
<label>Name</label>
<br>
<input type="text" name="name" required>
<br>
<label>Age</label>
<br>
<input type="text" name="age" required>
<br>
<label>Time</label>
<br>
<select name="time" required>
<option value="12:00 PM">12:00 PM</option>
<option value="12:30 PM">12:30 PM</option>
<option value="1:00 PM">1:00 PM</option>
</select>
<br>
<label>Model</label>
<br>
<select name="model" required>
<option value="2008 Audi A4">2008 Audi A4</option>
<option value="2008 Audi S4">2008 Audi S4</option>
<option value="2008 Audi RS4">2008 Audi RS4</option>
</select>
<br>
<label>Are you trading in a vehicle?</label>
<br>
<input type="radio" name="trade" value="yes" required>Yes<br>
<input type="radio" name="trade" value="no" required>No<br>
<br>
<input type="submit" value="Request Test Drive">
</form>
</body>
</html>
'''
self.page_after = '''
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="css/main.css">
<title>Audi Test Drive Request</title>
</head>
<body>
<img src="assets/custom/images/logo.png" title="logo" alt="" width="200px" height="150px"/>
<h3>It's awesome that you want to test-drive one of our vehicles</h3>
</body
</html>
'''
def print_one(self):
page_content = self.page_body
page_content = page_content.format(**locals())
return page_content
def print_after(self, name, age, time, model, radio):
after_page_content = self.page_after
after_page_content = after_page_content.format(**locals())
return after_page_content
app = webapp2.WSGIApplication([
('/', MainHandler)
], debug=True)
</code></pre>
| 0 |
2016-09-10T00:22:16Z
| 39,425,429 |
<p>I tested your code and rearranged it a little. I used HTTP POST to submit the form, and it then prints the form variables. </p>
<pre><code>import webapp2
class HelloWebapp2(webapp2.RequestHandler):
def get(self):
self.response.write('''<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="css/main.css">
<title>Audi Test Drive Request</title>
</head>
<body>
<img src="assets/custom/images/logo.png" title="logo" alt="" width="200px" height="150px"/>
<h3>It's awesome that you want to test-drive one of our vehicles</h3>
<form method="POST" action="">
<label>Name</label>
<br>
<input type="text" name="name" required>
<br>
<label>Age</label>
<br>
<input type="text" name="age" required>
<br>
<label>Time</label>
<br>
<select name="time" required>
<option value="12:00 PM">12:00 PM</option>
<option value="12:30 PM">12:30 PM</option>
<option value="1:00 PM">1:00 PM</option>
</select>
<br>
<label>Model</label>
<br>
<select name="model" required>
<option value="2008 Audi A4">2008 Audi A4</option>
<option value="2008 Audi S4">2008 Audi S4</option>
<option value="2008 Audi RS4">2008 Audi RS4</option>
</select>
<br>
<label>Are you trading in a vehicle?</label>
<br>
<input type="radio" name="trade" value="yes" required>Yes<br>
<input type="radio" name="trade" value="no" required>No<br>
<br>
<input type="submit" value="Request Test Drive">
</form>
</body>
</html>
''')
def post(self):
if self.request.POST:
name = self.request.POST['name']
age = self.request.POST['age']
time = self.request.POST['time']
model = self.request.POST['model']
radio = self.request.POST['trade']
self.response.write(name +" " + age +" " + time +" " + model +" " + radio)
app = webapp2.WSGIApplication([
('/', HelloWebapp2),
], debug=True)
def main():
from paste import httpserver
httpserver.serve(app, host='127.0.0.1', port='8080')
if __name__ == '__main__':
main()
</code></pre>
<p>The above code starts a local webserver on port 8080. It might not do exactly what you want, but much of it is there. You can also run it on App Engine. </p>
| 0 |
2016-09-10T11:09:55Z
|
[
"python",
"class",
"variables",
"webapp2"
] |
Pandas data reduction and merging
| 39,421,350 |
<p>I am working with a Pandas (version 0.17.1) DataFrame that looks like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36637 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36638 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36639 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36716 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36717 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36718 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36719 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36720 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36721 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36722 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36723 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36724 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36725 2016-08-25 17:59:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36726 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36727 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36786 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36787 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36788 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36828 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36829 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36830 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36831 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36832 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36833 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36834 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36835 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36836 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36837 2016-08-25 19:01:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36838 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36839 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
</code></pre>
<p>(The frame has already been reduced to remove rows that are not of interest. That is why the index column has missing numbers)</p>
<p>As you can see there are multiple parameters read from a device at the same time. Each reading is a separate row. I would like to do some "reduction" and "compression" so that each reading is only a single row. I would also like the <code>content</code> column to be a dictionary so I could easily lookup a particular item of interest. So the result would look like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
</code></pre>
<p>So basically I would like for all rows with the same value for their <code>time</code> and <code>module</code> columns to be "merged" together, with their <code>contents</code> columns parsed into a dictionary. (There could also be some "missing" or "empty" readings.) I don't want to filter or remove data, just reduce and summarize it.</p>
<p>I am guessing that I need to use some combination of <code>groupby()</code>, <code>transform()</code>, and <code>apply()</code> but I am not sure where to even begin.</p>
<p>Part of my difficulty is that I cannot inspect the result of <code>groupby()</code> to see if it is doing what I want.</p>
<pre><code>g1 = df.groupby(['module', 'time'])
</code></pre>
<p><code>g1</code> does not show up in the Spyder variable explorer. <code>print</code>ing does not show anything. I cannot access attribute <code>index</code> or call <code>info()</code> on <code>g1</code>. But I am having doubts that <code>groupby()</code> is even worthwhile here... I don't want to eliminate anything.</p>
<p>Been doing some searching to find an example but keep getting what seems like false positives. Any help to get started would be appreciated.</p>
| 1 |
2016-09-10T00:26:22Z
| 39,422,983 |
<p>Define a function and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">groupby()</a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">apply()</a>:</p>
<pre><code>In [235]: def create_data_dict(rows):
...: return {k:v for k,v in re.findall(r"'([^']*)' = ([^ ]*)", ' '.join(rows.content.astype(str)))}
...:
In [236]: df[df['type'] != 'ERROR'].groupby(['time', 'module', 'msg_type']).apply(create_data_dict).to_frame(name = 'content').reset_index()
Out[236]:
time module msg_type content
0 2016-08-25 17:59:50.051 MOD_1_NAME STATUS {u'Property A': u'some_value_1', u'Property C': u'some_value_3', u'Property B': u'some_value_2', u'Property D': u'some_value_4'}
1 2016-08-25 17:59:50.964 MOD_2_NAME STATUS {u'Parameter 6': u'some_value_f', u'Parameter 7': u'some_value_g', u'Parameter 4': u'some_value_d', u'Parameter 5': u'some_value_e', u'Parameter 2': u'some_value_b', u'Parameter 3': u'some_value_c', u'Parameter 1': u'some_value_a', u'Parameter 8': u'some_value_h', u'Parameter 9': u'some_value_i', u'Parameter 10': u'some_value_j', u'Parameter 12': u'some_value_k', u'Parameter 13': u'some_value_l'}
2 2016-08-25 18:59:50.051 MOD_1_NAME STATUS {u'Property A': u'some_value_1', u'Property C': u'some_value_3', u'Property B': u'some_value_2', u'Property D': u'some_value_4'}
3 2016-08-25 19:01:50.964 MOD_2_NAME STATUS {u'Parameter 6': u'some_value_f', u'Parameter 7': u'some_value_g', u'Parameter 4': u'some_value_d', u'Parameter 5': u'some_value_e', u'Parameter 2': u'some_value_b', u'Parameter 3': u'some_value_c', u'Parameter 1': u'some_value_a', u'Parameter 8': u'some_value_h', u'Parameter 9': u'some_value_i', u'Parameter 10': u'some_value_j', u'Parameter 12': u'some_value_k', u'Parameter 13': u'some_value_l'}
</code></pre>
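<p>Once <code>content</code> holds a dictionary per row, a particular item can be looked up per row with a plain <code>dict.get</code> — a small self-contained sketch (the column names follow the question; the data is abbreviated):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'module': ['MOD_1_NAME', 'MOD_2_NAME'],
    'content': [{'Property A': 'some_value_1'},
                {'Parameter 1': 'some_value_a'}],
})
# .get returns a default when the key is missing in that row's dict
prop_a = df['content'].apply(lambda d: d.get('Property A', ''))
print(prop_a.tolist())  # ['some_value_1', '']
```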
| 1 |
2016-09-10T05:42:43Z
|
[
"python",
"pandas",
"reduction"
] |
Pandas data reduction and merging
| 39,421,350 |
<p>I am working with a Pandas (version 0.17.1) DataFrame that looks like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36637 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36638 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36639 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36716 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36717 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36718 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36719 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36720 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36721 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36722 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36723 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36724 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36725 2016-08-25 17:59:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36726 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36727 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36786 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36787 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36788 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36828 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36829 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36830 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36831 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36832 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36833 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36834 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36835 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36836 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36837 2016-08-25 19:01:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36838 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36839 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
</code></pre>
<p>(The frame has already been reduced to remove rows that are not of interest. That is why the index column has missing numbers)</p>
<p>As you can see there are multiple parameters read from a device at the same time. Each reading is a separate row. I would like to do some "reduction" and "compression" so that each reading is only a single row. I would also like the <code>content</code> column to be a dictionary so I could easily lookup a particular item of interest. So the result would look like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
</code></pre>
<p>So basically I would like for all rows with the same value for their <code>time</code> and <code>module</code> columns to be "merged" together, with their <code>contents</code> columns parsed into a dictionary. (There could also be some "missing" or "empty" readings.) I don't want to filter or remove data, just reduce and summarize it.</p>
<p>I am guessing that I need to use some combination of <code>groupby()</code>, <code>transform()</code>, and <code>apply()</code> but I am not sure where to even begin.</p>
<p>Part of my difficulty is that I cannot inspect the result of <code>groupby()</code> to see if it is doing what I want.</p>
<pre><code>g1 = df.groupby(['module', 'time'])
</code></pre>
<p><code>g1</code> does not show up in the Spyder variable explorer. <code>print</code>ing does not show anything. I cannot access attribute <code>index</code> or call <code>info()</code> on <code>g1</code>. But I am having doubts that <code>groupby()</code> is even worthwhile here... I don't want to eliminate anything.</p>
<p>Been doing some searching to find an example but keep getting what seems like false positives. Any help to get started would be appreciated.</p>
| 1 |
2016-09-10T00:26:22Z
| 39,423,193 |
<p>In order to understand groups in pandas you should check out <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-object-attributes" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-object-attributes</a>. Another way to get some insight into the groups is to simply print them:</p>
<pre class="lang-py prettyprint-override"><code>grouped = df.groupby(['A', 'B'])
print grouped.first() # prints the first row of each group
# print each (name, group) tuple from grouped
for name, grp in grouped:
print name
print grp
</code></pre>
<p>I've worked out a specific solution for you based on some assumptions I made (see notes below):</p>
<pre class="lang-py prettyprint-override"><code>import re
from collections import OrderedDict
df = pd.read_csv('/Users/shawnheide/Desktop/test.csv')
def custom_agg(contents):
this_dict = OrderedDict()
for content in contents:
match = re.findall("Property \w+|Parameter \d+", content)
if match:
key = match[0]
match = re.findall("some_value_\w+|some_value_\d+", content)
if match:
value = match[0]
else:
value = ''
this_dict[key] = value
return this_dict
grps = df.groupby(['time', 'module', ], as_index=False)
df_grp = grps.agg({'content': custom_agg})
</code></pre>
<p>Output:</p>
<pre><code>time module content
0 2016-08-25 17:59:50.051 MOD_1_NAME {'Property A': 'some_value_1', 'Property B': 'some_value_2', 'Property C': 'some_value_3', 'Property D': 'some_value_4'}
1 2016-08-25 17:59:50.964 MOD_2_NAME {'Parameter 1': 'some_value_a', 'Parameter 2': 'some_value_b', 'Parameter 3': 'some_value_c', 'Parameter 4': 'some_value_d', 'Parameter 5': 'some_value_e', 'Parameter 6': 'some_value_f', 'Parameter 7': 'some_value_g', 'Parameter 8': 'some_value_h', 'Parameter 9': 'some_value_i', 'Parameter 10': 'some_value_j', 'Parameter 11': '', 'Parameter 12': 'some_value_k', 'Parameter 13': 'some_value_l'}
2 2016-08-25 18:59:50.051 MOD_1_NAME {'Property A': 'some_value_1', 'Property B': 'some_value_2', 'Property C': 'some_value_3', 'Property D': 'some_value_4'}
3 2016-08-25 19:01:50.964 MOD_2_NAME {'Parameter 1': 'some_value_a', 'Parameter 2': 'some_value_b', 'Parameter 3': 'some_value_c', 'Parameter 4': 'some_value_d', 'Parameter 5': 'some_value_e', 'Parameter 6': 'some_value_f', 'Parameter 7': 'some_value_g', 'Parameter 8': 'some_value_h', 'Parameter 9': 'some_value_i', 'Parameter 10': 'some_value_j', 'Parameter 11': '', 'Parameter 12': 'some_value_k', 'Parameter 13': 'some_value_l'}
</code></pre>
<hr>
<p>Issues to consider:</p>
<p>So, first of all, you should post your data in a format that can be read by others (i.e. csv, tsv, etc.), this makes it a lot easier for others to import and help you solve your problem. </p>
<p>The second issue is that in your proposed solution you have the index and msg_type columns. This doesn't really make sense given that you're not grouping on these columns, but really it's just something to consider. </p>
<p>Lastly, in order to get an ordered dictionary, you need to use the <code>OrderedDict</code> class from the <code>collections</code> module, since Python dicts don't maintain order (fingers crossed this feature is coming in 3.6).</p>
| 1 |
2016-09-10T06:17:13Z
|
[
"python",
"pandas",
"reduction"
] |
Pandas data reduction and merging
| 39,421,350 |
<p>I am working with a Pandas (version 0.17.1) DataFrame that looks like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36637 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36638 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36639 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36716 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36717 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36718 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36719 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36720 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36721 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36722 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36723 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36724 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36725 2016-08-25 17:59:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36726 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36727 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property A' = some_value_1
36786 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property B' = some_value_2
36787 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property C' = some_value_3
36788 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS Received Status Monitoring from MODULE_1 'Property D' = some_value_4
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 1' = some_value_a
36828 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 2' = some_value_b
36829 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 3' = some_value_c
36830 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 4' = some_value_d
36831 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 5' = some_value_e
36832 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 6' = some_value_f
36833 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 7' = some_value_g
36834 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 8' = some_value_h
36835 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 9' = some_value_i
36836 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 10' = some_value_j
36837 2016-08-25 19:01:50.964 ERROR MOD_2_NAME STATUS Didn't receive Status Monitoring 'Parameter 11' from MODULE_2!
36838 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 12' = some_value_k
36839 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS Received Status Monitoring from MODULE_2 'Parameter 13' = some_value_l
</code></pre>
<p>(The frame has already been reduced to remove rows that are not of interest. That is why the index column has missing numbers)</p>
<p>As you can see there are multiple parameters read from a device at the same time. Each reading is a separate row. I would like to do some "reduction" and "compression" so that each reading is only a single row. I would also like the <code>content</code> column to be a dictionary so I could easily lookup a particular item of interest. So the result would look like this:</p>
<pre><code> time type module msg_type content
36636 2016-08-25 17:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36715 2016-08-25 17:59:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
36785 2016-08-25 18:59:50.051 INFO MOD_1_NAME STATUS {'Property A' = 'some_value_1', 'Property B' = 'some_value_2', 'Property C' = 'some_value_3', 'Property D' = 'some_value_4'}
36827 2016-08-25 19:01:50.964 INFO MOD_2_NAME STATUS {'Parameter 1' = 'some_value_a', 'Parameter 2' = 'some_value_b', 'Parameter 3' = 'some_value_c', 'Parameter 4' = 'some_value_d', 'Parameter 5' = 'some_value_e', 'Parameter 6' = 'some_value_f', 'Parameter 7' = 'some_value_g','Parameter 8' = some_value_h, 'Parameter 9' = 'some_value_i', 'Parameter 10' = 'some_value_j', 'Parameter 11' = '', 'Parameter 12' = 'some_value_k', 'Parameter 13' = 'some_value_l'}
</code></pre>
<p>So basically I would like for all rows with the same value for their <code>time</code> and <code>module</code> columns to be "merged" together, with their <code>content</code> columns parsed into a dictionary. (There could also be some "missing" or "empty" readings.) I don't want to filter or remove data, just reduce and summarize it.</p>
<p>I am guessing that I need to use some combination of <code>groupby()</code>, <code>transform()</code>, and <code>apply()</code> but I am not sure where to even begin.</p>
<p>Part of my difficulty is that I cannot inspect the result of <code>groupby()</code> to see if it is doing what I want.</p>
<pre><code>g1 = df.groupby(['module', 'time'])
</code></pre>
<p><code>g1</code> does not show up in the Spyder variable explorer. <code>print</code>ing does not show anything. I cannot access attribute <code>index</code> or call <code>info()</code> on <code>g1</code>. But I am having doubts that <code>groupby()</code> is even worthwhile here... I don't want to eliminate anything.</p>
<p>Been doing some searching to find an example but keep getting what seems like false positives. Any help to get started would be appreciated.</p>
| 1 |
2016-09-10T00:26:22Z
| 39,434,386 |
<pre><code>pv = df.set_index(['time', 'type', 'module', 'msg_type']) \
.content.str.extract(r"'(?P<prop>.+)' = (?P<val>.+)", expand=True)
pv.groupby(level=[0, 2]).apply(lambda df: df.set_index('prop').val.to_dict())
</code></pre>
<hr>
<pre><code>2016-08-25 17:59:50.051,MOD_1_NAME,"{'Property A': 'some_value_1', 'Property C': 'some_value_3', 'Property B': 'some_value_2', 'Property D': 'some_value_4'}"
2016-08-25 17:59:50.964,MOD_2_NAME,"{'Parameter 6': 'some_value_f', 'Parameter 7': 'some_value_g', 'Parameter 4': 'some_value_d', 'Parameter 5': 'some_value_e', 'Parameter 2': 'some_value_b', 'Parameter 3': 'some_value_c', 'Parameter 1': 'some_value_a', 'Parameter 8': 'some_value_h', 'Parameter 9': 'some_value_i', 'Parameter 10': 'some_value_j', 'Parameter 12': 'some_value_k', 'Parameter 13': 'some_value_l'}"
2016-08-25 18:59:50.051,MOD_1_NAME,"{'Property A': 'some_value_1', 'Property C': 'some_value_3', 'Property B': 'some_value_2', 'Property D': 'some_value_4'}"
2016-08-25 19:01:50.964,MOD_2_NAME,"{'Parameter 6': 'some_value_f', 'Parameter 7': 'some_value_g', 'Parameter 4': 'some_value_d', 'Parameter 5': 'some_value_e', 'Parameter 2': 'some_value_b', 'Parameter 3': 'some_value_c', 'Parameter 1': 'some_value_a', 'Parameter 8': 'some_value_h', 'Parameter 9': 'some_value_i', 'Parameter 10': 'some_value_j', 'Parameter 12': 'some_value_k', 'Parameter 13': 'some_value_l'}"
</code></pre>
| 2 |
2016-09-11T08:50:39Z
|
[
"python",
"pandas",
"reduction"
] |
Updating a dictionary with integer keys
| 39,421,366 |
<p>I'm working on a short assignment where I have to read in a .txt file and create a dictionary in which the keys are the number of words in a sentence and the values are the number of sentences of a particular length. I've read in the file and determined the length of each sentence already, but I'm having troubles creating the dictionary. </p>
<p>I've already initialized the dictionary and am trying to update it (within a for loop that iterates over the sentences) using the following code:</p>
<pre><code>for snt in sentences:
words = snt.split(' ')
sDict[len(words)]+=1
</code></pre>
<p>It gives me a KeyError on the very first iteration. I'm sure it has to do with my syntax but I'm not sure how else to update an existing entry in the dictionary.</p>
| 2 |
2016-09-10T00:28:53Z
| 39,421,396 |
<p><code>defaultdicts</code> were invented for this purpose:</p>
<pre><code>from collections import defaultdict
sDict = defaultdict(int)
for snt in sentences:
sDict[len(snt.split())] += 1
</code></pre>
<p>If you are restricted to the use of pure dictionaries in the context of your assignment, then you need to test for existence of the key before incrementing its value in order to prevent a <code>KeyError</code>:</p>
<pre><code>sDict = {}
for snt in sentences:
num_words = len(snt.split())
if num_words in sDict:
sDict[num_words] += 1
else:
sDict[num_words] = 1
</code></pre>
| 2 |
2016-09-10T00:35:57Z
|
[
"python",
"dictionary"
] |
Updating a dictionary with integer keys
| 39,421,366 |
<p>I'm working on a short assignment where I have to read in a .txt file and create a dictionary in which the keys are the number of words in a sentence and the values are the number of sentences of a particular length. I've read in the file and determined the length of each sentence already, but I'm having troubles creating the dictionary. </p>
<p>I've already initialized the dictionary and am trying to update it (within a for loop that iterates over the sentences) using the following code:</p>
<pre><code>for snt in sentences:
words = snt.split(' ')
sDict[len(words)]+=1
</code></pre>
<p>It gives me a KeyError on the very first iteration. I'm sure it has to do with my syntax but I'm not sure how else to update an existing entry in the dictionary.</p>
| 2 |
2016-09-10T00:28:53Z
| 39,421,407 |
<p>When you initialize the dictionary, it starts out empty. The next thing you do is look up a key so that you can update its value, but that key doesn't exist yet, because the dictionary is empty. The smallest change to your code is probably to use the <code>get</code> dictionary method. Instead of this:</p>
<pre><code>sDict[len(words)]+=1
</code></pre>
<p>Use this:</p>
<pre><code>sDict[len(words)] = sDict.get(len(words), 0) + 1
</code></pre>
<p>The <code>get</code> method looks up a key, but if the key doesn't exist, you are given a fallback value instead. That fallback defaults to <code>None</code>, and you can specify a different one as the second argument, <code>0</code> in this case.</p>
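<p>To make that concrete, a tiny self-contained run of the same pattern (the sample sentences are made up):</p>

```python
# Made-up sample sentences, just to exercise the counting pattern
sentences = ["the quick brown fox", "hello world", "a b c d"]

sDict = {}
for snt in sentences:
    words = snt.split(' ')
    sDict[len(words)] = sDict.get(len(words), 0) + 1

print(sDict)  # {4: 2, 2: 1}
```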
<p>The better solution is probably <code>collections.Counter</code>, which handles the common use case of counting occurrences:</p>
<pre><code>import collections
s = map(str.split, sentences)
sDict = collections.Counter(map(len, s))
</code></pre>
| 2 |
2016-09-10T00:37:47Z
|
[
"python",
"dictionary"
] |
row to columns while keeping part of dataframe, display on same row
| 39,421,384 |
<p>I am trying to move some of my rows and make them columns, but keep a large portion of the dataframe the same.</p>
<p>Starting DataFrame:</p>
<pre><code>ID Thing Level1 Level2 Time OAttribute IsTrue Score Value
1 bicycle value value 9:30 whatever yes 1 type1
1 bicycle value value 9:30 whatever yes 2 type2
2 bicycle value value 2:30 whatever no
4 non-bic value value 3:30 whatever no 4 type3
1 bicycle value value 9:30 whatever yes 3 type3
</code></pre>
<p>and I want something like this:</p>
<pre><code>ID Thing Level1 Level2 Time OAttribute IsTrue Type1 Type2 Type3
1 bicycle value value 9:30 whatever yes 1 2 3
2 bicycle value value 2:30 whatever yes
4 non-bic value value 3:30 whatever no 4
</code></pre>
<p>I have tried</p>
<pre><code>df_ = df[['Rating', 'Value']].dropna().set_index('Value', append=True).Rating.unstack()
df.drop('Value', 1).merge(df_, right_index=True, left_index=True, how='left').fillna('')
</code></pre>
| 0 |
2016-09-10T00:34:00Z
| 39,421,426 |
<p>Can't really tell what you're trying to do with both of your Score and Value columns at the same time. </p>
<p>But if you're looking to transform your "Value" column, you're looking for something like one-hot encoding of your "Value" column and pandas has a very convenient function for it. All you have to do is:</p>
<pre><code>pd.get_dummies(df['Value'])
</code></pre>
<p>That will give you a new data frame with 3 new columns, namely [type1, type2, type3], filled with 1s and 0s.</p>
<p>After that, all you have to do is use the <code>.join</code> method to join it back to your original df. You can then proceed to drop the columns that you don't need. </p>
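<p>As a hedged sketch (the column names are stand-ins mirroring the question), the one-hot step plus the join back might look like:</p>

```python
import pandas as pd

# Tiny stand-in frame for the question's Score/Value columns
df = pd.DataFrame({'ID': [1, 1, 4],
                   'Score': [1, 2, 4],
                   'Value': ['type1', 'type2', 'type3']})

dummies = pd.get_dummies(df['Value'])  # -> indicator columns type1, type2, type3
out = df.join(dummies)                 # glue them back onto the original frame
print(out.columns.tolist())
```

<p>From there you can drop <code>Value</code> (and aggregate on <code>ID</code>) as needed.</p>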
| 0 |
2016-09-10T00:43:11Z
|
[
"python",
"pandas",
"dataframe"
] |
row to columns while keeping part of dataframe, display on same row
| 39,421,384 |
<p>I am trying to move some of my rows and make them columns, but keep a large portion of the dataframe the same.</p>
<p>Starting DataFrame:</p>
<pre><code>ID Thing Level1 Level2 Time OAttribute IsTrue Score Value
1 bicycle value value 9:30 whatever yes 1 type1
1 bicycle value value 9:30 whatever yes 2 type2
2 bicycle value value 2:30 whatever no
4 non-bic value value 3:30 whatever no 4 type3
1 bicycle value value 9:30 whatever yes 3 type3
</code></pre>
<p>and I want something like this:</p>
<pre><code>ID Thing Level1 Level2 Time OAttribute IsTrue Type1 Type2 Type3
1 bicycle value value 9:30 whatever yes 1 2 3
2 bicycle value value 2:30 whatever yes
4 non-bic value value 3:30 whatever no 4
</code></pre>
<p>I have tried</p>
<pre><code>df_ = df[['Rating', 'Value']].dropna().set_index('Value', append=True).Rating.unstack()
df.drop('Value', 1).merge(df_, right_index=True, left_index=True, how='left').fillna('')
</code></pre>
| 0 |
2016-09-10T00:34:00Z
| 39,422,548 |
<p>One way would be to create an intermediate dataframe and then use outer merge.</p>
<pre><code>In [102]: df
Out[102]:
ID Thing Level1 Level2 Time OAttribute IsTrue Score Value
0 1 bicycle value value 9:30 whatever yes 1.0 type1
1 1 bicycle value value 9:30 whatever yes 2.0 type2
2 2 bicycle value value 2:30 whatever no NaN NaN
3 4 non-bic value value 3:30 whatever no 4.0 type3
4 1 bicycle value value 9:30 whatever yes 3.0 type3
In [103]: dg = pd.DataFrame(columns=pd.np.append(df['Value'].dropna().unique(), ['ID']))
In [104]: for i in range(len(df)):
...: key = df.loc[i]['Value']
...: value = df.loc[i]['Score']
...: ID = df.loc[i]['ID']
...: if key is not pd.np.nan:
...: dg.loc[i, key] = value
...: dg.loc[i, 'ID'] = ID
...:
In [105]: dg
Out[105]:
type1 type2 type3 ID
0 1 NaN NaN 1
1 NaN 2 NaN 1
3 NaN NaN 4 4
4 NaN NaN 3 1
In [106]: dg = dg.groupby('ID').max().reset_index()
In [107]: dg
Out[107]:
ID type1 type2 type3
0 1 1 2 3
1 4 NaN NaN 4
In [108]: df[df.columns.difference(['Score', 'Value'])].drop_duplicates().merge(dg, how='outer').fillna('')
Out[108]:
ID IsTrue Level1 Level2 OAttribute Thing Time type1 type2 type3
0 1 yes value value whatever bicycle 9:30 1 2 3
1 2 no value value whatever bicycle 2:30
2 4 no value value whatever non-bic 3:30 4
</code></pre>
<p>Another way to calculate the intermediate data frame would be by avoiding the for loop and using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow">unstack()</a>:</p>
<pre><code>In [150]: df
Out[150]:
ID Thing Level1 Level2 Time OAttribute IsTrue Score Value
0 1 bicycle value value 9:30 whatever yes 1.0 type1
1 1 bicycle value value 9:30 whatever yes 2.0 type2
2 2 bicycle value value 2:30 whatever no NaN NaN
3 4 non-bic value value 3:30 whatever no 4.0 type3
4 1 bicycle value value 9:30 whatever yes 3.0 type3
In [151]: dg = df[['Score', 'Value']].dropna().set_index('Value', append=True).Score.unstack().join(df['ID']).groupby('ID').max().reset_index()
In [152]: df[df.columns.difference(['Score', 'Value'])].drop_duplicates().merge(dg, how='outer').fillna('')
Out[152]:
ID IsTrue Level1 Level2 OAttribute Thing Time type1 type2 type3
0 1 yes value value whatever bicycle 9:30 1 2 3
1 2 no value value whatever bicycle 2:30
2 4 no value value whatever non-bic 3:30 4
</code></pre>
| 1 |
2016-09-10T04:26:02Z
|
[
"python",
"pandas",
"dataframe"
] |
Efficient way to find null values in a dataframe
| 39,421,433 |
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv ('file',low_memory=False)
df_null = df.isnull()
mask = (df_null == True)
i, j = np.where(mask)
print (list(zip(df_null.columns[j], df['Column1'][i])))
</code></pre>
<p>This is what I currently have. Essentially, I've created two dataframes and, using the index of each null value, picked the corresponding value in <code>Column1</code>.</p>
<p>The ask is if there is a more efficient and faster way of doing this using Dataframes, which I admit, I don't know too well.</p>
| 0 |
2016-09-10T00:44:22Z
| 39,422,625 |
<p>A routine that I normally use in pandas to identify null counts by columns is the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv("test.csv")
null_counts = df.isnull().sum()
null_counts[null_counts > 0].sort_values(ascending=False)
</code></pre>
<p>This will print the columns that have null values along with sorting each column by the number of null values that it has. </p>
<p>Example output:</p>
<pre><code>PoolQC 1453
MiscFeature 1406
Alley 1369
Fence 1179
FireplaceQu 690
LotFrontage 259
GarageYrBlt 81
GarageType 81
GarageFinish 81
GarageQual 81
GarageCond 81
BsmtFinType2 38
BsmtExposure 38
BsmtFinType1 37
BsmtCond 37
BsmtQual 37
MasVnrArea 8
MasVnrType 8
Electrical 1
dtype: int64
</code></pre>
| 0 |
2016-09-10T04:40:38Z
|
[
"python",
"pandas",
"numpy"
] |
How can I solve this ParseError related to Odoo 9?
| 39,421,437 |
<p>I'm new to odoo and I'm trying to build a module using the documentation of odoo 9.</p>
<p>I have already created the module and I have installed it, but when I want to add the xml file, an error occurs (the French message below means "Invalid model in the action definition"), especially when I add the line 'views/openacademy.xml' to <strong>__openerp__.py</strong> :</p>
<pre><code>ParseError : "Modèle non valide dans la définition de l'action"
None" while parsing file:///D:/Odoo/Odoo%209/server/openep/addons/openacademy/views/openacademy.xml:9, near
<record model="ir.actions.act_window" id="course_list_action">
<field name="name">Courses</field>
<field name="res_model">openacademy.course</field>
<field name="view_type">form</field>
<field name="view_mode">tree,form</field>
<field name="help" type="html">
<p class="oe_view_nocontent_create">Create the first course
</p>
</field>
</record>
</code></pre>
<p><strong>My code :</strong></p>
<p><strong>openacademy.py :</strong></p>
<pre><code>from openerp import models, fields, api
class Course(models.Model):
_name = 'openacademy.course'
name = fields.Char(string="Title", required=True)
description = fields.Text()
</code></pre>
<p><strong>openacademy.xml :</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<openerp>
<data>
<!-- window action -->
<!--
The following tag is an action definition for a "window action",
that is an action opening a view or a set of views
-->
<record model="ir.actions.act_window" id="course_list_action">
<field name="name">Courses</field>
<field name="res_model">openacademy.course</field>
<field name="view_type">form</field>
<field name="view_mode">tree,form</field>
<field name="help" type="html">
<p class="oe_view_nocontent_create">Create the first course
</p>
</field>
</record>
<!-- top level menu: no parent -->
<menuitem id="main_openacademy_menu" name="Open Academy"/>
<!-- A first level in the left side menu is needed
before using action= attribute -->
<menuitem id="openacademy_menu" name="Open Academy"
parent="main_openacademy_menu"/>
<!-- the following menuitem should appear *after*
its parent openacademy_menu and *after* its
action course_list_action -->
<menuitem id="courses_menu" name="Courses" parent="openacademy_menu"
action="course_list_action"/>
<!-- Full id location:
action="openacademy.course_list_action"
It is not required when it is the same module -->
</data>
</openerp>
</code></pre>
<p><strong>__openerp__.py :</strong></p>
<pre><code># -*- coding: utf-8 -*-
{
'name': "OpenAcademy",
'summary': """
My module is the first step to the manipulation of odoo""",
'description': """
Description is not necessary for the moment
""",
'author': "Osskadd",
'website': "http://www.Thinkey.com",
# Categories can be used to filter modules in modules listing
# Check https://github.com/odoo/odoo/blob/master/openerp/addons/base/module/module_data.xml
# for the full list
'category': 'Ecommerce',
'version': '0.1',
# any module necessary for this one to work correctly
'depends': ['base'],
# always loaded
'data': [
# 'security/ir.model.access.csv',
'views/openacademy.xml',
'views/templates.xml',
],
# only loaded in demonstration mode
'demo': [
'demo/demo.xml',
],
}
</code></pre>
<p>Thanks for the help!</p>
| 0 |
2016-09-10T00:45:07Z
| 39,421,699 |
<p>I think it is complaining because the action references <code>tree,form</code> views, but you have not defined any. Define a tree and a form view above the code you have.</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<openerp>
<data>
<record model="ir.ui.view" id="course_tree_view">
<field name="name">course.tree</field>
<field name="model">openacademy.course</field>
<field name="arch" type="xml">
<tree string="Courses Tree">
<!-- YOUR TREE VIEW HERE -->
</tree>
</field>
</record>
<record model="ir.ui.view" id="course_form_view">
<field name="name">course.form</field>
<field name="model">openacademy.course</field>
<field name="arch" type="xml">
<form string="Course Form">
<sheet>
<!-- YOUR FORM VIEW HERE -->
</sheet>
</form>
</field>
</record>
<!-- window action -->
<!--
The following tag is an action definition for a "window action",
that is an action opening a view or a set of views
-->
<record model="ir.actions.act_window" id="course_list_action">
<field name="name">Courses</field>
<field name="res_model">openacademy.course</field>
<field name="view_type">form</field>
<field name="view_mode">tree,form</field>
<field name="help" type="html">
<p class="oe_view_nocontent_create">Create the first course
</p>
</field>
</record>
<!-- top level menu: no parent -->
<menuitem id="main_openacademy_menu" name="Open Academy"/>
<!-- A first level in the left side menu is needed
before using action= attribute -->
<menuitem id="openacademy_menu" name="Open Academy"
parent="main_openacademy_menu"/>
<!-- the following menuitem should appear *after*
its parent openacademy_menu and *after* its
action course_list_action -->
<menuitem id="courses_menu" name="Courses" parent="openacademy_menu"
action="course_list_action"/>
<!-- Full id location:
action="openacademy.course_list_action"
It is not required when it is the same module -->
</data>
</code></pre>
</openerp>
</code></pre>
| 0 |
2016-09-10T01:41:45Z
|
[
"python",
"xml",
"odoo-9",
"parse-error"
] |
Jupyter (iPython) notebook says "cannot find a kernel matching Python [Root]"
| 39,421,564 |
<p>I'm interested in using Jupyter notebooks with both Python 2 and Python 3 (one of my colleagues insists on still using Python 2 ;) ).</p>
<p>So I diligently followed the steps listed in this excellent answer: <a href="http://stackoverflow.com/questions/30492623/using-both-python-2-x-and-python-3-x-in-ipython-notebook">Using both Python 2.x and Python 3.x in IPython Notebook</a>. </p>
<p>I installed multiple kernels and now Jupyter notebooks has the option to use both Python 2 and Python 3!</p>
<p>However, I managed to somehow delete the Python[Root] kernel. Now, every time I open a notebook, it comes up with an error message and makes me choose between Python 2 and Python 3 kernel.</p>
<p>This is not the end of the world, but I'd like it to default to my Python[Root] kernel every time I open a new notebook. I use Anaconda by the way.</p>
<p>Thanks for the assistance!</p>
| 0 |
2016-09-10T01:12:37Z
| 39,471,893 |
<p>I have not had time to fully digest the answer in the post you reference: <a href="http://stackoverflow.com/questions/30492623/using-both-python-2-x-and-python-3-x-in-ipython-notebook">Using both Python 2.x and Python 3.x in IPython Notebook</a> -- but if what you currently have isn't working properly then what I would suggest is:</p>
<ol>
<li><p>Install Anaconda if you haven't already (it sounds like you probably have done this).</p></li>
<li><p><code>conda update conda</code> to update to the latest Conda (always a good idea)</p></li>
<li><p><code>conda install anaconda=4.1.1</code> to make sure you have the latest Anaconda (well, as of this date)</p></li>
<li><p><code>conda create -n ana41py27 anaconda python=2.7</code> to create a Python 2.7 based Conda environment that contains all the Anaconda packages</p></li>
<li><p><code>conda create -n ana41py35 anaconda python=3.5</code> to create a Python 3.5 based Conda environment that contains all the Anaconda packages</p></li>
</ol>
<p>If you have any problems with those steps, report them here or on the Anaconda mailing list.</p>
<p>Once you have that in place you can start Jupyter notebook (any way you like, pretty much), and then you will be able to create new notebooks that are either Python 2.7 or Python 3.5 based by choosing the appropriate kernel from the "New" button:</p>
<p><a href="http://i.stack.imgur.com/WLAwm.png" rel="nofollow"><img src="http://i.stack.imgur.com/WLAwm.png" alt="enter image description here"></a></p>
<p>or change between a Python 2.7 or Python 3.5 kernel from within a Notebook:</p>
<p><a href="http://i.stack.imgur.com/RGnZT.png" rel="nofollow"><img src="http://i.stack.imgur.com/RGnZT.png" alt="enter image description here"></a></p>
| 0 |
2016-09-13T13:55:11Z
|
[
"python",
"ipython",
"anaconda",
"jupyter-notebook"
] |
Plotting Grouped Datetime - Pandas
| 39,421,569 |
<p>This post is sort of long, so here's the ultimate "ask" upfront:</p>
<p><strong>Is there a way to transform the x-axis/index of the resulting <code>groupby</code> or a way to pass other types of arguments to the <code>axvspan</code> function?</strong></p>
<p>I have a <code>DataFrame</code> with a datetime column, which I've grouped by <code>year</code> and <code>weekofyear</code>. This works okay, but the x-axis is displayed as a tuple. I want to add an <code>axvspan</code>, but I don't know how to deal with the tuples.</p>
<pre><code>import numpy as np
import pandas as pd
import datetime
from matplotlib import pylab
import matplotlib.pyplot as plt
%matplotlib inline
query = ("https://data.cityofchicago.org/resource/6zsd-86xi.json?$where=year>2010")
raw_data = pd.read_json(query)
</code></pre>
<p>Here's an overview of the <code>DataFrame</code>. I'm going to be working with the <code>date</code> column.</p>
<pre><code>raw_data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1706960 entries, 0 to 1706959
Data columns (total 22 columns):
arrest bool
beat int64
block object
case_number object
community_area float64
date datetime64[ns]
description object
district float64
domestic bool
fbi_code object
id int64
iucr object
latitude float64
location object
location_description object
longitude float64
primary_type object
updated_on object
ward float64
x_coordinate float64
y_coordinate float64
year int64
dtypes: bool(2), datetime64[ns](1), float64(7), int64(3), object(9)
memory usage: 263.7+ MB
</code></pre>
<p>First, filter all crimes but HOMICIDES.</p>
<pre><code># get murders
raw_data = raw_data[raw_data["primary_type"] == "HOMICIDE"]
# plot murder count by year and week of the year
plt.figure(figsize=(18, 6))
raw_data.groupby([raw_data.date.dt.year,
raw_data.date.dt.weekofyear])["community_area"].size().plot()
</code></pre>
<p><a href="http://i.stack.imgur.com/uOMqT.png" rel="nofollow"><img src="http://i.stack.imgur.com/uOMqT.png" alt="Resulting Plot"></a></p>
<p>So, as you can see, the x-axis is represented as tuples. Like I said before, I'd like to add an <code>axvspan</code> to add an arbitrary green span to my plot. If the x-axis maintained its datetime structure, I could put values in the function like so, and it would work:</p>
<pre><code>pylab.axvspan(datetime.datetime.strptime('2015-12-1 13:40:00', "%Y-%m-%d %H:%M:%S"),
              datetime.datetime.strptime('2016-1-1 13:40:00', "%Y-%m-%d %H:%M:%S"),
facecolor='g', alpha=0.05) # green span
</code></pre>
<p>This would shade the graph from December 1, 2015 to January 1, 2016 in green. Is there a way to transform the x-axis/index of the resulting <code>groupby</code> or a way to pass other types of arguments to the <code>axvspan</code> function?</p>
| 3 |
2016-09-10T01:13:32Z
| 39,422,025 |
<p>Okay, I dusted off the ole <a href="http://rads.stackoverflow.com/amzn/click/1449319793" rel="nofollow">Python for Data Analysis</a> copy and re-discovered the <code>resample</code> method, and how well <code>pandas</code> handles time series data in general. The code below did the trick (sticking with my original data set):</p>
<pre><code># doesn't really matter which column I choose, I just picked one
murders = raw_data["community_area"]
murders.index = raw_data["date"]
plt.figure(figsize=(18, 6))
murders.resample("W-MON").count().plot() # weekly, every Monday
min_date = min(murders.index)
release_date = datetime.datetime.strptime('2015-11-24 12:00:00', "%Y-%m-%d %H:%M:%S")  # module was imported as "import datetime"
max_date = max(murders.index)
pylab.axvspan(min_date,
release_date,
facecolor='g', alpha=0.05) # green span
pylab.axvspan(release_date,
max_date,
facecolor='r', alpha=0.075) # red span
pylab.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/XcWan.png" rel="nofollow"><img src="http://i.stack.imgur.com/XcWan.png" alt="enter image description here"></a></p>
| 0 |
2016-09-10T02:46:19Z
|
[
"python",
"datetime",
"pandas",
"matplotlib"
] |
How to send a POST request to a .php page in python
| 39,421,621 |
<p>Context:
So I'm trying to build a python program that will send a POST request to a specific .php file, and return the output. I've done a little bit of research, and this is the code I have so far:</p>
<pre><code>import httplib
import urllib

def ForcePush():
params = urllib.urlencode({'log': 'admin', 'pwd':'password'})
headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain", "Accept-Language":"en-Us,en;q=0.5" ,
"Referer":"http://192.168.18.138/wp-login.php"}
conn = httplib.HTTPConnection(raw_input("Where would you like to browse to: "))
conn.request("POST", "", params, headers)
response = conn.getresponse()
data = response.read()
print data
conn.close()
</code></pre>
<p>The code works fine for a normal website, like www.google.com, but if I try to go to a php page I get this error: </p>
<pre><code>Traceback (most recent call last):
File "WPEnum.py", line 24, in <module>
ForcePush()
File "WPEnum.py", line 18, in ForcePush
conn.request("POST", "", params, headers)
File "C:\Python27\lib\httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1097, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 897, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 859, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 836, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 557, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno 11001] getaddrinfo failed
</code></pre>
<p>In case you are wondering, I'm making the program to enumerate the WPAdmin for the Mr.Robot vulnverable VM on VMWare. Doing this for educational purposes. This is the request I'm trying to emulate:</p>
<pre><code>POST /wp-login.php HTTP/1.1
Host: 192.168.18.138
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.8.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://192.168.18.138/wp-login.php
Cookie: s_fid=2692E4153C7D3D30-158A9B35CCC16635; s_nr=1473166726975; s_cc=true; s_sq=%5B%5BB%5D%5D; wordpress_test_cookie=WP+Cookie+check
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 104
log=admin&pwd=login&wp-submit=Log+In&redirect_to=https%3A%2F%2F192.168.18.138%2Fwp-admin%2F&testcookie=1
</code></pre>
<p>I know I don't have all of the headers, but that doesn't seem to be what the error is suggesting is wrong. Does anyone know what the problem is?</p>
| 2 |
2016-09-10T01:25:58Z
| 39,422,476 |
<p>As the traceback shows (<code>getaddrinfo failed</code>), this is a connection-level error raised while opening the socket: the host/port you are passing is wrong or cannot be resolved from the Python environment. One common cause here is that <code>httplib.HTTPConnection</code> expects a bare host (and optional port); if the value typed at the <code>raw_input</code> prompt includes a scheme or a path, such as <code>http://192.168.18.138/wp-login.php</code>, name resolution fails exactly like this. The path belongs in the request itself: <code>conn.request("POST", "/wp-login.php", params, headers)</code>.</p>
<p>To debug, try to fetch the same page with curl or wget (even with the GET method):
<code>curl http://my_host:my_port/my_page.php</code></p>
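<p>For what it is worth, a minimal sketch of splitting a full URL into the host and path pieces that <code>HTTPConnection</code> and <code>conn.request</code> expect (the URL is taken from the question; the try/except covers both Python 2 and 3):</p>

```python
# Split a full URL into the host and path pieces that httplib wants.
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

parts = urlparse("http://192.168.18.138/wp-login.php")
host = parts.netloc  # what HTTPConnection(host) expects
path = parts.path    # what conn.request("POST", path, ...) expects
print(host, path)
```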
| 0 |
2016-09-10T04:13:22Z
|
[
"python",
"python-2.7"
] |
Find substring between / and \
| 39,421,672 |
<p>After <a href="http://stackoverflow.com/questions/39420685/get-a-list-of-subdirectories">Get a list of subdirectories</a>, I got strings of that form:</p>
<pre><code>In [30]: print(subdir)
foousa 0 2016-09-07 19:58 /projects/foousa/obama/trump/10973689-Parthenon
drwx------ -
</code></pre>
<p>and when I try this <a href="http://stackoverflow.com/a/3368991/2411320">answer</a>:</p>
<pre><code>In [23]: print find_between( subdir, "/", "\\" )
In [24]: print find_between( subdir, "\/", "\\" )
</code></pre>
<p>I don't get anything, maybe a newline only... What I was aiming for is <code>10973689-Parthenon</code>.</p>
<p>What am I missing?</p>
<p>I am using <a href="/questions/tagged/spark" class="post-tag" title="show questions tagged 'spark'" rel="tag">spark</a>, but I couldn't see how this would matter...</p>
| 1 |
2016-09-10T01:35:48Z
| 39,421,827 |
<p><strong>Using <code>re</code></strong>:</p>
<pre><code>import re
subdir = ' foousa 0 2016-09-07 19:58 /projects/foousa/obama/trump/10973689-Parthenon\ndrwx------ - '
match = re.search(r'/([^/\n]+)\n', subdir)
print(match.group(1))
</code></pre>
<p><strong>Using indexes:</strong></p>
<pre><code>subdir = ' foousa 0 2016-09-07 19:58 /projects/foousa/obama/trump/10973689-Parthenon\ndrwx------ - '
begin = subdir.rindex('/') + 1
end = subdir.rindex('\n')
result = subdir[begin:end]
print(result)
</code></pre>
<p>output:</p>
<pre><code>10973689-Parthenon
</code></pre>
| 1 |
2016-09-10T02:07:06Z
|
[
"python",
"linux",
"string",
"apache-spark",
"substring"
] |
Why use regex finditer() rather than findall()
| 39,421,746 |
<p>What is the advantage of using <code>finditer()</code> if <code>findall()</code> is good enough?
<code>findall()</code> returns all of the matches while <code>finditer()</code> returns match objects, which can't be processed as directly as a static list.</p>
<p>For example:</p>
<pre><code>import re
CARRIS_REGEX = (r'<th>(\d+)</th><th>([\s\w\.\-]+)</th>'
r'<th>(\d+:\d+)</th><th>(\d+m)</th>')
pattern = re.compile(CARRIS_REGEX, re.UNICODE)
mailbody = open("test.txt").read()
for match in pattern.finditer(mailbody):
print(match)
print()
for match in pattern.findall(mailbody):
print(match)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code><_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
('790', 'PR. REAL', '21:06', '04m')
('758', 'PORTAS BENFICA', '21:10', '09m')
('790', 'PR. REAL', '21:14', '13m')
('758', 'PORTAS BENFICA', '21:21', '19m')
('790', 'PR. REAL', '21:29', '28m')
('758', 'PORTAS BENFICA', '21:38', '36m')
('758', 'SETE RIOS', '21:49', '47m')
('758', 'SETE RIOS', '22:09', '68m')
</code></pre>
<p>I ask this out of curiosity.</p>
| 1 |
2016-09-10T01:50:29Z
| 39,421,785 |
<p>Sometimes it's superfluous to retrieve all matches. If the number of matches is really high you could risk filling up your memory loading them all.</p>
<p>Using iterators or generators is an important concept in modern python. That being said, if you have a small text (e.g this web page) the optimization is minuscule.</p>
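<p>A tiny illustration of the laziness with toy data (not from the question): <code>finditer()</code> scans no further than you ask it to, so breaking out early skips the remaining matches entirely.</p>

```python
import re

text = "a1 b2 c3 d4"
it = re.finditer(r'[a-z]\d', text)
first = next(it)  # scans only far enough to produce the first match
rest = [m.group() for m in it]  # remaining matches are produced on demand
print(first.group(), rest)
```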
<p>Here is a related question about iterators: <a href="http://stackoverflow.com/questions/628903/performance-advantages-to-iterators">Performance Advantages to Iterators?</a></p>
| 1 |
2016-09-10T01:56:36Z
|
[
"python",
"regex",
"match",
"string-matching",
"iterable"
] |
Why use regex finditer() rather than findall()
| 39,421,746 |
<p>What is the advantage of using <code>finditer()</code> if <code>findall()</code> is good enough?
<code>findall()</code> returns all of the matches while <code>finditer()</code> returns match objects, which can't be processed as directly as a static list.</p>
<p>For example:</p>
<pre><code>import re
CARRIS_REGEX = (r'<th>(\d+)</th><th>([\s\w\.\-]+)</th>'
r'<th>(\d+:\d+)</th><th>(\d+m)</th>')
pattern = re.compile(CARRIS_REGEX, re.UNICODE)
mailbody = open("test.txt").read()
for match in pattern.finditer(mailbody):
print(match)
print()
for match in pattern.findall(mailbody):
print(match)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code><_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
<_sre.SRE_Match object at 0x00A63758>
<_sre.SRE_Match object at 0x00A63F98>
('790', 'PR. REAL', '21:06', '04m')
('758', 'PORTAS BENFICA', '21:10', '09m')
('790', 'PR. REAL', '21:14', '13m')
('758', 'PORTAS BENFICA', '21:21', '19m')
('790', 'PR. REAL', '21:29', '28m')
('758', 'PORTAS BENFICA', '21:38', '36m')
('758', 'SETE RIOS', '21:49', '47m')
('758', 'SETE RIOS', '22:09', '68m')
</code></pre>
<p>I ask this out of curiosity.</p>
| 1 |
2016-09-10T01:50:29Z
| 39,421,829 |
<p><code>finditer()</code> returns an iterator while <code>findall()</code> returns a list. An iterator only does work when you ask it to by calling <code>.next()</code>. A for loop knows to call <code>.next()</code> on iterators, meaning if you <code>break</code> from the loop early, any following matches won't be performed. A list, on the other hand, needs to be fully populated, meaning every match must be found up front.</p>
<p>Iterators can be far more memory- and CPU-efficient since they only need to load one item at a time. If you were matching a very large string (encyclopedias can be several hundred megabytes of text), trying to find all matches at once could cause the program to hang while it searches and potentially run out of memory.</p>
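<p>One concrete consequence, shown with toy data (not from the question): <code>finditer()</code> yields match objects, so positions are available per match, while <code>findall()</code> only hands back the matched strings.</p>

```python
import re

s = 'a12b345'
# finditer gives match objects, so start/end positions are available
matches = [(m.group(), m.start(), m.end()) for m in re.finditer(r'\d+', s)]
# findall gives only the matched strings
strings = re.findall(r'\d+', s)
print(matches, strings)
```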
| 2 |
2016-09-10T02:07:28Z
|
[
"python",
"regex",
"match",
"string-matching",
"iterable"
] |
Why does my python script break after compiling?
| 39,421,911 |
<p>I'm checking out how much of a performance increase I get after compiling a python script. After research looking into this issue I don't think I will actually see an increase in performance with the script I have written because I found out that once the script is loaded, the execution time doesn't increase. I still would like to know why this is failing to run after compiling as this is my first time trying this. Here is my script</p>
<pre><code>#!/bin/python3
from datetime import datetime
start = datetime.now()
import psutil
BYTES_PER_GB = 1024*1024*1024
# Memory
m = psutil.virtual_memory()
#total = m.total/BYTES_PER_GB
#available = m.available/BYTES_PER_GB
#used = m.used/BYTES_PER_GB
m_free= m.free/BYTES_PER_GB
m_percent = m.percent
# Swap
s = psutil.swap_memory()
s_free = s.free/BYTES_PER_GB
s_percent = s.percent
print('ï
%.1fG (%.1f%%) ï %.1fG (%.1f%%)' % (m_free, m_percent, s_free, s_percent))
print('ï
%.1fG (%.1f%%) ï %.1fG (%.1f%%)' % (m_free, m_percent, s_free, s_percent))
print(datetime.now() - start)
</code></pre>
<p>I'm trying to compile with this line</p>
<pre><code>python3 -m py_compile memory
</code></pre>
<p>In my print statements I have some special characters from font awesome. Not sure if that would cause a problem but if it doesn't show up correctly in my post then that's what that is.</p>
<p>The output when I try to run the compiled file is </p>
<pre><code>./memorycpython-35.pyc: line 1: $'\026\r\r': command not found
./memorycpython-35.pyc: line 2: �k�W��@s�ddlmZej�ZddlZdZej�Zejeej: command not found
./memorycpython-35.pyc: line 3: syntax error near unexpected token `)'
./memorycpython-35.pyc: line 3: `ej
�Z
e
e je
j Ze�eej�e�dS)�)datetimeNiii@)rZnow�startZpsutilZ
BYTES_PER_GBZvirtual_memory�mZfreeZm_freeZpercentZ m_percentZ
swap_memory�sZs_freeZ s_percent�print�rr�memory<module>s
'
^[[?62;c^[[?62;c
</code></pre>
<p><strong><em>EDIT</em></strong></p>
<p>To narrow down the problem I wrote the following script</p>
<pre><code>#!/bin/python3
print("Hello World!")
</code></pre>
<p>This is the output</p>
<pre><code>./testcpython-35.pyc: line 1: $'\026\r\r': command not found
./testcpython-35.pyc: line 2: syntax error near unexpected token `)'
./testcpython-35.pyc: line 2: `�r�W%�@sed�dS)z
Hello World!N)�print�rr�./test<module>s'
</code></pre>
<p>Compiled using</p>
<pre><code>python3 -m py_compile ./test
</code></pre>
<p>This creates a file in <code>__pycache__/</code> called <code>testcpython-35.pyc</code> which I then do <code>chmod +x testcpython-35.pyc</code> and <code>./testcpython-35.pyc</code></p>
| 1 |
2016-09-10T02:23:23Z
| 39,422,047 |
<p>It appears my issue is the <code>./testcpython-35.pyc</code> part. When I run <code>./testcpython-35.pyc</code> directly, independent of whether I did <code>chmod +x testcpython-35.pyc</code> first, the output is scrambled, because the shell tries to interpret the bytecode as a script. As long as I run the compiled program by first specifying what program to run it with, <code>python3 testcpython-35.pyc</code>, it outputs <code>Hello World!</code> as expected.</p>
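<p>A small end-to-end sketch of the same workflow using the <code>py_compile</code> module directly (the file name and temp directory are invented for the demo): byte-compile a script, then run the resulting <code>.pyc</code> through the interpreter rather than executing it directly.</p>

```python
import os
import py_compile
import subprocess
import sys
import tempfile

# write a throwaway script, byte-compile it, then run the .pyc via the interpreter
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write('print("Hello World!")\n')

pyc = py_compile.compile(src)  # returns the path of the compiled file
out = subprocess.check_output([sys.executable, pyc])
print(out)
```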
| 1 |
2016-09-10T02:52:27Z
|
[
"python",
"python-3.x"
] |
How to download image from site with no clear extension?
| 39,421,924 |
<p>I am trying to download an image from the NGA.gov site using python 3 and urllib.</p>
<p>The site does not display images in a standard .jpg fashion and i get an error.</p>
<pre><code>import urllib.request
from bs4 import BeautifulSoup
try:
with urllib.request.urlopen("http://images.nga.gov/?service=asset&action=show_preview&asset=33643") as url:
s = url.read()
soup = BeautifulSoup(s, 'html.parser')
img = soup.find("img")
urllib.request.urlretrieve(img,"C:\art.jpg")
except Exception as e:
print (e)
</code></pre>
<p>Error:
Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.
expected string or bytes-like object</p>
<p>Can someone please explain why I am getting this error and how to get the images to my computer?</p>
| 0 |
2016-09-10T02:25:59Z
| 39,422,320 |
<p>BeautifulSoup is a library for HTML/XML parsing.
At this URL you already receive the image itself, so there is nothing to parse.
This works fine: <code>urllib.request.urlretrieve("http://images.nga.gov/?service=asset&action=show_preview&asset=33643" ,"C:\art.jpg")</code></p>
| 1 |
2016-09-10T03:44:49Z
|
[
"python",
"web-scraping",
"beautifulsoup"
] |
How to download image from site with no clear extension?
| 39,421,924 |
<p>I am trying to download an image from the NGA.gov site using python 3 and urllib.</p>
<p>The site does not display images in a standard .jpg fashion and i get an error.</p>
<pre><code>import urllib.request
from bs4 import BeautifulSoup
try:
with urllib.request.urlopen("http://images.nga.gov/?service=asset&action=show_preview&asset=33643") as url:
s = url.read()
soup = BeautifulSoup(s, 'html.parser')
img = soup.find("img")
urllib.request.urlretrieve(img,"C:\art.jpg")
except Exception as e:
print (e)
</code></pre>
<p>Error:
Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.
expected string or bytes-like object</p>
<p>Can someone please explain why I am getting this error and how to get the images to my computer?</p>
| 0 |
2016-09-10T02:25:59Z
| 39,422,373 |
<p>There is no need to use BeautifulSoup! Just do:</p>
<pre><code>with urllib.request.urlopen("http://images.nga.gov/?service=asset&action=show_preview&asset=33643") as url:
    s = url.read()  # read the response body once
    with open("art.jpg", 'wb') as fp:
        fp.write(s)  # write the bytes already read; a second url.read() would return b''
</code></pre>
| 0 |
2016-09-10T03:53:45Z
|
[
"python",
"web-scraping",
"beautifulsoup"
] |
Values printed are different when debugging
| 39,421,948 |
<pre><code>cnt = 0
s = 'aghe'
s_len = len(s)
s_len = s_len - 1
while s_len >= 0:
if s[s_len] in ('aeiou'):
cnt += 1
s_len -= 1
break;
print ('numofVowels:'),cnt
</code></pre>
<p>This does print the value of <code>cnt</code>. When debugging, <code>cnt</code> has the correct value!</p>
| -5 |
2016-09-10T02:31:01Z
| 39,422,016 |
<p>You will want to get rid of the <code>break</code>, and make sure to include <code>cnt</code> inside the parentheses for your <code>print</code> call (having it outside the parentheses will cause it not to print in Python 3):</p>
<pre><code>cnt = 0
s = 'aghe'
s_len = len(s)
s_len = s_len - 1
while s_len >= 0:
if s[s_len] in ('aeiou'):
cnt += 1
s_len -= 1
print('numofVowels:', cnt)
</code></pre>
<p>The <code>break</code> in your while loop will ensure that it only loops once, and then stops (<code>break</code>s), which is probably not what you want if you are trying to count all the vowels in a string. Also, if you must use <code>break</code> don't follow it with a semicolon <code>;</code>. This will not cause a syntax error in Python, but is not necessary in this case, and is used exclusively (AFAIK) for putting multiple statements on a single line in Python (e.g., <code>import pdb; pdb.set_trace()</code>), and even that usage is generally discouraged. </p>
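<p>For comparison, the whole count (same string as the question) can also be written as a generator expression, which sidesteps the index bookkeeping entirely:</p>

```python
# Count vowels without manual index tracking
s = 'aghe'
cnt = sum(1 for ch in s if ch in 'aeiou')
print(cnt)
```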
| 0 |
2016-09-10T02:44:06Z
|
[
"python"
] |
Concatenate OR conditional expressions from list to "|"
| 39,421,967 |
<p>This probably is a python lang related question, not precisely of boto3 lib:</p>
<p>That query works:</p>
<pre><code>matches = table.query(
IndexName="status-date-index",
KeyConditionExpression=Key('status').eq('next'),
FilterExpression=Attr('teams').contains('a') | Attr('teams').contains('b') | Attr('teams').contains('c')
)
</code></pre>
<p>But I will have a varible list <code>["a", "b", "c", ...N]</code> and need to convert them to</p>
<pre><code>filter_expression = Attr('teams').contains('a') | Attr('teams').contains('b') | Attr('teams').contains('c') | ...N
</code></pre>
<p>any ideas how to do it in python? or maybe I need to do a better FilterExpression for that case?</p>
<p><a href="http://boto3.readthedocs.io/en/latest/_modules/boto3/dynamodb/conditions.html" rel="nofollow">boto3.dynamodb.conditions.Attr reference.</a></p>
| 0 |
2016-09-10T02:34:41Z
| 39,422,244 |
<pre><code>import operator
filter_exp = reduce(operator.or_, (Attr('teams').contains(x) for x in 'abc'))
</code></pre>
| 0 |
2016-09-10T03:30:40Z
|
[
"python",
"amazon-dynamodb"
] |
Concatenate OR conditional expressions from list to "|"
| 39,421,967 |
<p>This probably is a python lang related question, not precisely of boto3 lib:</p>
<p>That query works:</p>
<pre><code>matches = table.query(
IndexName="status-date-index",
KeyConditionExpression=Key('status').eq('next'),
FilterExpression=Attr('teams').contains('a') | Attr('teams').contains('b') | Attr('teams').contains('c')
)
</code></pre>
<p>But I will have a varible list <code>["a", "b", "c", ...N]</code> and need to convert them to</p>
<pre><code>filter_expression = Attr('teams').contains('a') | Attr('teams').contains('b') | Attr('teams').contains('c') | ...N
</code></pre>
<p>any ideas how to do it in python? or maybe I need to do a better FilterExpression for that case?</p>
<p><a href="http://boto3.readthedocs.io/en/latest/_modules/boto3/dynamodb/conditions.html" rel="nofollow">boto3.dynamodb.conditions.Attr reference.</a></p>
| 0 |
2016-09-10T02:34:41Z
| 39,422,255 |
<p>You can use <a href="https://docs.python.org/library/functools.html#functools.reduce" rel="nofollow"><code>functools.reduce</code></a> on the list of items produced by <code>contains</code> (which I assume to be of type <code>ConditionBase</code>), with the <a href="https://docs.python.org/2/library/operator.html#operator.or_" rel="nofollow"><code>or_</code> operator</a>. You can also couple that with the built-in <a href="https://docs.python.org/library/functions.html#map" rel="nofollow"><code>map</code></a> on the strings since you have a pile of similar inputs. Consider this much simplified example:</p>
<pre><code>>>> from functools import reduce
>>> from operator import or_
>>> values = ['9', '3', '7', '32']
>>> list(map(int, values))  # map is a builtin; list() shows the values in Python 3
[9, 3, 7, 32]
>>> reduce(or_, map(int, values))
47
</code></pre>
<p>Exact equivalent to</p>
<pre><code>>>> 9 | 3 | 7 | 32
47
</code></pre>
<p>To further simplify the generation of <code>Condition</code> objects from <code>Attr(field).contains(value)</code>, a list of 2-tuples can be constructed and consumed through a list comprehension, like so (based on your original question):</p>
<pre><code>raw_conditions = [
('teams', 'a'),
('b', 'cali'),
('teams', 'c'),
]
conditions = [Attr(field).contains(value) for field, value in raw_conditions]
filter_expression = reduce(or_, conditions)
</code></pre>
| 1 |
2016-09-10T03:33:18Z
|
[
"python",
"amazon-dynamodb"
] |
Threads not being executed under supervisord
| 39,421,985 |
<p>I am working on a basic crawler which crawls 5 websites concurrently using threads.
For each site it creates a new thread. When I run the program from the shell then the output log indicates that all the 5 threads run as expected.
But when I run this program as a <a href="http://supervisord.org/" rel="nofollow">supervisord</a> program then the log indicates that only 2 threads are being run everytime! The log indicates that the all the 5 threads have started but only the same two of them are being executed and the rest get stuck.
I cannot understand why this inconsistency is happening when it is run from a shell and when it run from supervisor. Is there something I am not taking into account?</p>
<p>Here is the code which creates the threads:</p>
<pre><code>for sid in entries:
url = entries[sid]
threading.Thread(target=self.crawl_loop, \
args=(sid, url)).start()
</code></pre>
<p><strong>UPDATES:</strong>
As suggested by tdelaney in the comments, I changed the working directory in the supervisord configuration and now all the threads are being run as expected. Though I still don't understand why setting the working directory to the crawler file directory rectifies the issue. Perhaps someone who knows how supervisor manages processes can explain?</p>
| 0 |
2016-09-10T02:37:08Z
| 39,422,031 |
<p>AFAIK CPython threads cannot run Python bytecode in parallel because of the Global Interpreter Lock (GIL): they give you concurrency, not parallelism, so CPU-bound code will still use only one core. (I/O-bound work such as crawling can still overlap while threads wait on the network.)</p>
<p><a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">https://wiki.python.org/moin/GlobalInterpreterLock</a></p>
<p><a href="https://en.wikibooks.org/wiki/Python_Programming/Threading" rel="nofollow">https://en.wikibooks.org/wiki/Python_Programming/Threading</a></p>
<p>Therefore it is possible that your threads are started but are not actually running in parallel.</p>
<p>You should use multiprocessing I think?</p>
<p><a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">https://docs.python.org/2/library/multiprocessing.html</a></p>
| 0 |
2016-09-10T02:47:28Z
|
[
"python",
"multithreading",
"supervisord"
] |
Slow printing - still need answers
| 39,422,027 |
<p>This topic has already been covered but I am still having trouble. The code I have does print the string but it prints it extremely fast. I have tried changing the time value but it doesn't seem to change anything.</p>
<p>Here is <em>everything</em> I have - Maybe I just need a more complete example?</p>
<pre><code>import sys,time,random
typing_speed = 50 #wpm
def slow_type(t):
for l in t:
sys.stdout.write(l)
time.sleep(random.random()*10.0/typing_speed)
print "Tell if you think this is too fast?"
</code></pre>
| 0 |
2016-09-10T02:46:32Z
| 39,422,081 |
<pre><code>import sys,time,random
typing_speed = 50 #wpm
def slowprint(t):
for l in t:
sys.stdout.write(l)
sys.stdout.flush() # Forcing the output of everything in the buffer.
time.sleep(random.random()*10.0/typing_speed)
sys.stdout.write("\n") # Printing a newline, because it looks cleaner.
slowprint ("Tell if you think this is too fast?")
</code></pre>
<p>I changed the name of the function because <code>slowprint</code> seems a little more explicit in my opinion.</p>
<p>I like the following version better - it accepts an argument to control the speed.</p>
<pre><code>import sys,time,random
def slowprint(t, s): # s is the typing speed - formally held in `typing_speed`
for l in t:
sys.stdout.write(l)
sys.stdout.flush() # Forcing the output of everything in the buffer.
time.sleep(random.random()*10.0/s)
sys.stdout.write("\n") # Printing a newline, because it looks cleaner.
slowprint ("Tell if you think this is too fast?", 50)
</code></pre>
| 3 |
2016-09-10T03:00:12Z
|
[
"python",
"python-2.7"
] |
Slow printing - still need answers
| 39,422,027 |
<p>This topic has already been covered but I am still having trouble. The code I have does print the string but it prints it extremely fast. I have tried changing the time value but it doesn't seem to change anything.</p>
<p>Here is <em>everything</em> I have - Maybe I just need a more complete example?</p>
<pre><code>import sys,time,random
typing_speed = 50 #wpm
def slow_type(t):
for l in t:
sys.stdout.write(l)
time.sleep(random.random()*10.0/typing_speed)
print "Tell if you think this is too fast?"
</code></pre>
| 0 |
2016-09-10T02:46:32Z
| 39,440,255 |
<p>My problem with the existing solution(s) is I don't see how the code coerces the output to the stated WPM except by trial and error:</p>
<pre><code>time.sleep(random.random()*10.0/typing_speed)
</code></pre>
<p>and, time-wise, it seems to end up more than 2X too fast.</p>
<p>If instead we use the standard definition of <em>word</em> in WPM as "five keystrokes" and the standard definition of <em>minute</em> in WPM as "sixty seconds", then "Tell if you think this is too fast?" should take ~ 8.5 seconds. Reworking the code using the above logic, we get something like:</p>
<pre><code>import sys
import time
typing_speed = 50 # wpm
def slowprint(string):
# the WPM system considers a word to be 5 keystrokes
# so convert from words per minute to seconds per character
spc = 12.0 / typing_speed # 12.0 = 60 seconds per minute / 5 characters per word
for character in string:
sys.stdout.write(character)
sys.stdout.flush() # Force the output of everything in the buffer.
time.sleep(spc)
sys.stdout.write("\n") # print a newline, because it looks cleaner.
slowprint("Tell if you think this is too fast?")
</code></pre>
| 0 |
2016-09-11T20:17:38Z
|
[
"python",
"python-2.7"
] |
Removing round brackets from a dataframe of lat/lon pairs
| 39,422,043 |
<p>I'm sure this is a very simple thing to do but I seem to be having trouble! (I am rather new to this too.)</p>
<p>I have a dataframe containing lat long coordinates:</p>
<pre><code> LatLon
0 (49.766795012580374, -7.556440128791576)
1 (49.766843444728075, -7.556439417755133)
2 (49.766843444728075, -7.556439417755133)
</code></pre>
<p>I would like to remove the round brackets/parentheses, but I just cannot work it out.</p>
<p>I keep getting errors like</p>
<blockquote>
<p>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</p>
</blockquote>
<p>But I'm not sure what to do to fix it.</p>
<p>I think it is because the type is object - so I need to convert it to string first? </p>
<p>If I do <code>.info()</code>:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 22899 entries, 0 to 22898
Data columns (total 1 columns):
LatLon 22899 non-null object
dtypes: object(1)
</code></pre>
<p>and <code>df.dtypes</code>:</p>
<pre><code>LatLon object
dtype: object
</code></pre>
| 1 |
2016-09-10T02:51:08Z
| 39,422,059 |
<p>With the updated question, here is the updated answer.</p>
<p>Suppose we have this list of tuples:</p>
<pre><code>>>> li
[(49.766795012580374, -7.556440128791576), (49.766843444728075, -7.556439417755133), (49.766843444728075, -7.556439417755133)]
</code></pre>
<p>We can create a data frame (which, fundamentally is a matrix or a list of lists) directly:</p>
<pre><code>>>> df1=pd.DataFrame(li)
>>> df1
0 1
0 49.766795 -7.556440
1 49.766843 -7.556439
2 49.766843 -7.556439
>>> df1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 0 to 2
Data columns (total 2 columns):
0 3 non-null float64
1 3 non-null float64
dtypes: float64(2)
memory usage: 72.0 bytes
</code></pre>
<p>Notice this is a 2 column data frame of floats. </p>
<p>However, imagine now we have this list, which is a list of lists of tuples:</p>
<pre><code>>>> li2
[[(49.766795012580374, -7.556440128791576)], [(49.766843444728075, -7.556439417755133)], [(49.766843444728075, -7.556439417755133)]]
</code></pre>
<p>If you create a data frame here, you get what you have in the example:</p>
<pre><code>>>> df2=pd.DataFrame(li2)
>>> df2
0
0 (49.7667950126, -7.55644012879)
1 (49.7668434447, -7.55643941776)
2 (49.7668434447, -7.55643941776)
>>> df2.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 0 to 2
Data columns (total 1 columns):
0 3 non-null object
dtypes: object(1)
</code></pre>
<p>Which is a one column data frame of tuples. </p>
<p>So I am guessing your issue is in the initial creation of you data frame. Instead of a list of lists or a list of tuples, your original data has a list of lists of tuples (or a list of tuples of tuples, etc)...</p>
<p>The fix (if I am correct) is to flatten the source list by one level:</p>
<pre><code>>>> pd.DataFrame(t for sl in li2 for t in sl)
0 1
0 49.766795 -7.556440
1 49.766843 -7.556439
2 49.766843 -7.556439
</code></pre>
| 1 |
2016-09-10T02:55:03Z
|
[
"python",
"pandas",
"dataframe"
] |
Error with OMP_NUM_THREADS when using dask distributed
| 39,422,092 |
<p>I am using <a href="http://distributed.readthedocs.io/en/latest/" rel="nofollow">distributed</a>, a framework to allow parallel computation. In this, my primary use case is with NumPy. When I include NumPy code that relies on <code>np.linalg</code>, I get an error with <code>OMP_NUM_THREADS</code>, which is related to the <a href="https://en.wikipedia.org/wiki/OpenMP" rel="nofollow">OpenMP library</a>.</p>
<p>An minimal example:</p>
<pre><code>from distributed import Executor
import numpy as np
e = Executor('144.92.142.192:8786')
def f(x, m=200, n=1000):
A = np.random.randn(m, n)
x = np.random.randn(n)
# return np.fft.fft(x) # tested; no errors
# return np.random.randn(n) # tested; no errors
return A.dot(y).sum() # tested; throws error below
s = [e.submit(f, x) for x in [1, 2, 3, 4]]
s = e.gather(s)
</code></pre>
<p>When I test with the linalg test, <code>e.gather</code> fails as each job throws the following error:</p>
<pre><code>OMP: Error #34: System unable to allocate necessary resources for OMP thread:
OMP: System error #11: Resource temporarily unavailable
OMP: Hint: Try decreasing the value of OMP_NUM_THREADS.
</code></pre>
<p>What should I set <code>OMP_NUM_THREADS</code> to?</p>
| 3 |
2016-09-10T03:01:53Z
| 39,425,693 |
<h3>Short answer</h3>
<pre><code>export OMP_NUM_THREADS=1
</code></pre>
<p>or</p>
<pre><code>dask-worker --nthreads 1
</code></pre>
<h3>Explanation</h3>
<p>The <code>OMP_NUM_THREADS</code> environment variable controls the number of threads that many libraries, including the <code>BLAS</code> library powering <code>numpy.dot</code>, use in their computations, like matrix multiply.</p>
<p>The conflict here is that you have two parallel libraries that are calling each other, BLAS, and dask.distributed. Each library is designed to use as many threads as there are logical cores available in the system. </p>
<p>For example if you had eight cores then dask.distributed might run your function <code>f</code> eight times at once on different threads. The <code>numpy.dot</code> function call within <code>f</code> would use eight threads per call, resulting in 64 threads running at once. </p>
<p>This is actually fine, you'll experience a performance hit but everything can run correctly, but it will be slower than if you use just eight threads at a time, either by limiting dask.distributed or by limiting BLAS.</p>
<p>Your system probably has <code>OMP_THREAD_LIMIT</code> set at some reasonable number like 16 to warn you of this event when it happens.</p>
| 2 |
2016-09-10T11:43:13Z
|
[
"python",
"numpy",
"cluster-computing",
"dask"
] |
Pandas - make a large DataFrame into several small DataFrames and run each through a function
| 39,422,106 |
<p>I have a huge dataset with about 60000 data. I would first use some criteria to do groupby on the whole dataset, and what I want to do next is to separate the whole dataset to many small datasets within the criteria and to run a function to each of the small dataset automatically to get a parameter for each small dataset. I have no idea on how to do this. Is there any code to make it possible?
This is what I have</p>
<pre><code>Date name number
20100101 John 1
20100102 Kate 3
20100102 Kate 2
20100103 John 3
20100104 John 1
</code></pre>
<p>And I want it to be split into two small ones</p>
<pre><code>Date name number
20100101 John 1
20100103 John 3
20100104 John 1
Date name number
20100102 Kate 3
20100102 Kate 2
</code></pre>
| 1 |
2016-09-10T03:04:57Z
| 39,422,211 |
<p>Unless your function is really slow, this can probably be accomplished by slicing (e.g. <code>df_small = df[a:b]</code> for some indices a and b). The only trick is to choose a and b. I use <code>range</code> in the code below but you could do it other ways:</p>
<pre><code>param_list = []
n = 10000 #size of smaller dataframe
# loop up to 60000-n, n at a time
for i in range(0,60000-n,n):
# take a slice of big dataframe and apply function to get 'param'
df_small = df[i:i+n] #
param = function( df_small )
# keep our results in a list
param_list.append(param)
</code></pre>
<p>EDIT: Based on update, you could do something like this:</p>
<pre><code># loop through names
for i in df.name.unique():  # .unique() is a Series method; .values.unique() would fail
# take a slice of big dataframe and apply function to get 'param'
df_small = df[df.name==i]
</code></pre>
| 0 |
2016-09-10T03:24:36Z
|
[
"python",
"pandas"
] |
Pandas - make a large DataFrame into several small DataFrames and run each through a function
| 39,422,106 |
<p>I have a huge dataset with about 60000 data. I would first use some criteria to do groupby on the whole dataset, and what I want to do next is to separate the whole dataset to many small datasets within the criteria and to run a function to each of the small dataset automatically to get a parameter for each small dataset. I have no idea on how to do this. Is there any code to make it possible?
This is what I have</p>
<pre><code>Date name number
20100101 John 1
20100102 Kate 3
20100102 Kate 2
20100103 John 3
20100104 John 1
</code></pre>
<p>And I want it to be split into two small ones</p>
<pre><code>Date name number
20100101 John 1
20100103 John 3
20100104 John 1
Date name number
20100102 Kate 3
20100102 Kate 2
</code></pre>
| 1 |
2016-09-10T03:04:57Z
| 39,422,526 |
<p>I think a more efficient way than filtering the original data set using subsetting is <code>groupby()</code>, as a demo:</p>
<pre><code>for _, g in df.groupby('name'):
print(g)
# Date name number
#0 20100101 John 1
#3 20100103 John 3
#4 20100104 John 1
# Date name number
#1 20100102 Kate 3
#2 20100102 Kate 2
</code></pre>
<p>So to get a list of small data frames, you can do <code>[g for _, g in df.groupby('name')]</code>.</p>
<p>To expand on this answer, we can see more clearly what <code>df.groupby()</code> returns as follows:</p>
<pre><code>for k, g in df.groupby('name'):
    print(k)
    print(g)
# John
# Date name number
# 0 20100101 John 1
# 3 20100103 John 3
# 4 20100104 John 1
# Kate
# Date name number
# 1 20100102 Kate 3
# 2 20100102 Kate 2
</code></pre>
<p>For each element returned by <code>groupby()</code>, it contains a key and a data frame with <code>name</code> which has a unique value of the key. In the above solution, we don't need the key, so we can just specify a position holder and discard it.</p>
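<p>Putting it together for the original question, a minimal sketch — the per-group <code>function</code> here is just a sum over <code>number</code>, as a stand-in for whatever parameter-computing function you have:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": [20100101, 20100102, 20100102, 20100103, 20100104],
    "name": ["John", "Kate", "Kate", "John", "John"],
    "number": [1, 3, 2, 3, 1],
})

# run a (stand-in) function on each small per-name frame
params = {name: int(g["number"].sum()) for name, g in df.groupby("name")}
print(params)  # {'John': 5, 'Kate': 5}
```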
| 1 |
2016-09-10T04:21:11Z
|
[
"python",
"pandas"
] |
python comprehension troubleshooting
| 39,422,290 |
<pre><code>base=2
digits=set(range(base))
key=range(base**3)
dict={ k:[a,b,c] for k in key for a in digits for b in digits for c in digits}
print(dict)
</code></pre>
<p>the output is:</p>
<pre><code>{0: [1, 1, 1], 1: [1, 1, 1], 2: [1, 1, 1], 3: [1, 1, 1], 4: [1, 1, 1], 5: [1, 1, 1], 6: [1, 1, 1], 7: [1, 1, 1]}
</code></pre>
<p>I wonder why the output for [a,b,c] is [1,1,1], and not the same as:</p>
<pre><code>[[a,b,c] for a in digits for b in digits for c in digits]
</code></pre>
<p>which is:</p>
<pre><code>[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1],[1,1,0], [1, 1, 1]]
</code></pre>
| -1 |
2016-09-10T03:39:58Z
| 39,422,410 |
<p>We can simplify the dictionary comprehension to something like:</p>
<pre><code>In [728]: {k:[a,b] for k in [1,2] for a in [1,2] for b in [1,2]}
Out[728]: {1: [2, 2], 2: [2, 2]}
</code></pre>
<p>It is the equivalent to constructing a dictionary with nested loops:</p>
<pre><code>In [731]: d={}

In [732]: for k in [1,2]:
     ...:     for a in [1,2]:
     ...:         for b in [1,2]:
     ...:             d[k]=[a,b]
     ...:

In [733]: d
Out[733]: {1: [2, 2], 2: [2, 2]}
</code></pre>
<p>But to clarify what is going, let's use a defaultdict, and append the new values:</p>
<pre><code>In [730]: from collections import defaultdict
In [734]: d=defaultdict(list)

In [735]: for k in [1,2]:
     ...:     for a in [1,2]:
     ...:         for b in [1,2]:
     ...:             d[k].append([a,b])
     ...:

In [736]: d
Out[736]:
defaultdict(list,
            {1: [[1, 1], [1, 2], [2, 1], [2, 2]],
             2: [[1, 1], [1, 2], [2, 1], [2, 2]]})
</code></pre>
<p>It is writing each <code>[a,b]</code> pair to the dictionary, but you only end up seeing the last pair for each key.</p>
<p>And yes, <code>@user2357112's</code> comprehension does the same thing</p>
<pre><code>In [737]: {k:[[a,b] for a in [1,2] for b in [1,2]] for k in [1,2]}
Out[737]: {1: [[1, 1], [1, 2], [2, 1], [2, 2]], 2: [[1, 1], [1, 2], [2, 1], [2, 2]]}
</code></pre>
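<p>For the original problem — mapping each key to its own triple — a dictionary comprehension needs one value expression per key. A sketch of one possible fix is to enumerate the triples:</p>

```python
base = 2
digits = range(base)
# build the base**3 triples first, then pair each with its index
triples = [[a, b, c] for a in digits for b in digits for c in digits]
d = dict(enumerate(triples))
print(d[0], d[7])  # [0, 0, 0] [1, 1, 1]
```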
| 1 |
2016-09-10T04:00:35Z
|
[
"python",
"list-comprehension"
] |
Creating a startup service for python script to run as root in Arch Linux
| 39,422,298 |
<p>I have a python script that I want to run on startup as root. I believe that I need to add it as a service file but I don't know if it has root permission. This is how I have my service file. The python file won't run unless run as root.</p>
<pre><code>[Unit]
Description= Description here
[Service]
Type=simple
ExecStart=/usr/bin/python /home/script.py
StandardOutput=null
[Install]
WantedBy=multi-user.target
Alias=script.service
</code></pre>
<p>Any help will be appreciated.</p>
| 1 |
2016-09-10T03:40:46Z
| 39,422,419 |
<p>Systemd runs multi-user services as root, so I don't see what help you need.</p>
| 0 |
2016-09-10T04:01:47Z
|
[
"python",
"linux",
"root",
"startup"
] |
Creating a startup service for python script to run as root in Arch Linux
| 39,422,298 |
<p>I have a python script that I want to run on startup as root. I believe that I need to add it as a service file but I don't know if it has root permission. This is how I have my service file. The python file won't run unless run as root.</p>
<pre><code>[Unit]
Description= Description here
[Service]
Type=simple
ExecStart=/usr/bin/python /home/script.py
StandardOutput=null
[Install]
WantedBy=multi-user.target
Alias=script.service
</code></pre>
<p>Any help will be appreciated.</p>
| 1 |
2016-09-10T03:40:46Z
| 39,424,821 |
<p>Got it to work. After I removed the root check from my python script everything worked fine; I also removed the <code>Type=simple</code> part from the service file. After those two things it worked fine.</p>
<p>the root check part i removed was.</p>
<pre><code>from os import getenv
user = getenv("SUDO_USER")
if user is None:
print ("This program needs to run as root")
exit(0)
</code></pre>
<p>and the service file now looks like this</p>
<pre><code>[Unit]
Description= Description here
[Service]
ExecStart=/usr/bin/python /home/script.py
StandardOutput=null
[Install]
WantedBy=multi-user.target
Alias=script.service
</code></pre>
<p>I still don't understand why I needed to take out the root check; the program ran fine when I used <code>sudo python script.py</code>. For some reason when the script was run with systemctl it would fail the root check.</p>
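<p>For the record, the likely reason the original check failed: under systemd the service runs as root directly, with no <code>sudo</code> involved, so the <code>SUDO_USER</code> environment variable is never set. A sketch of a check that works in both cases, based on the effective UID instead of the environment:</p>

```python
import os

def running_as_root():
    # geteuid() is 0 for root whether the script was started by
    # systemd or by `sudo python script.py`; SUDO_USER exists only
    # in the sudo case
    return os.geteuid() == 0

if not running_as_root():
    print("This program needs to run as root")
```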
| 0 |
2016-09-10T09:48:59Z
|
[
"python",
"linux",
"root",
"startup"
] |
Jinja2 how to perform advanced slicing
| 39,422,474 |
<p>I've got a list, "foos" and if I've got more than 5 "foos" I want to do something specific to the first 2, and then something specific for the rest.</p>
<p>So I kinda want something like this in HTML:</p>
<pre><code><div id="accordion">
<p>Foo1<p>
<p>Foo2<p>
<div id="collapseMe" class="panel-collapse collapse">
<p>Foo3<p>
<p>Foo4<p>
etc...
</div>
</div>
<a data-parent="#accordion" href="#collapseMe"><p>Expand</p></a>
</code></pre>
<p>So, I've kinda solved this in Jinja2 but the solution is very ugly. I'm wondering if I'm missing something?</p>
<pre><code><div id="accordion">
{% for f in foos %}
{% if loop.index <= 2 %}
<p>{{ f.txt }}</p>
{% else %}
{% if loop.index == 3 %}
<div id="collapseMe" class="panel-collapse collapse">
<p>{{ f.txt }}</p>
{% else %}
<p>{{ f.txt }}</p>
{% endif %}
{% endif %}
{% endfor %}
{% if foos | length > 2 %}
</div>
<a data-parent="#accordion" href="#collapseMe"><p>Expand</p></a>
{% endif %}
</div>
</code></pre>
<p>Although this works I'm thinking there must be a better way to do it. Unfortunately slice functions are pretty limited in Jinja2 as far as I can see, maybe there's another way around this that I haven't picked up yet? I'm not fully clear on how the batch filter works either, but that may work?</p>
| 2 |
2016-09-10T04:12:37Z
| 39,422,487 |
<p>You could build, or find, a filter to pre-slice your <code>foos</code> list ahead of the for loop.</p>
<pre><code>{% for f in foos|slice("0:10:2") %}
</code></pre>
<p>You could move most of your template looping logic into the filter itself or go the easy route and use existing slice notation on a list:</p>
<pre><code>from jinja2 import Environment, Undefined
def slice(iterable, pattern):
    if iterable is None or isinstance(iterable, Undefined):
        return iterable
    # convert to list so we can slice
    items = list(iterable)
    start = None
    end = None
    stride = None
    # split pattern into slice components, converting each to int
    # (an empty field stays None, matching Python's own notation)
    if pattern:
        tokens = pattern.split(':')
        if len(tokens) >= 1 and tokens[0]:
            start = int(tokens[0])
        if len(tokens) >= 2 and tokens[1]:
            end = int(tokens[1])
        if len(tokens) >= 3 and tokens[2]:
            stride = int(tokens[2])
    return items[start:end:stride]
</code></pre>
<p>Somewhere else, add the filter to your Jinja2 environment.</p>
<pre><code>env = Environment()
env.filters['slice'] = slice
</code></pre>
<p>Note that this works because empty fields in the pattern stay <code>None</code>, and <code>items[None:None:None]</code> is the same slice as <code>items[:]</code>.</p>
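<p>The pattern parsing can also be factored into a standalone helper that builds a real <code>slice</code> object — a sketch that is easy to test outside of Jinja2:</p>

```python
def parse_slice(pattern):
    # "1:8:2" -> slice(1, 8, 2); empty fields become None, so ":3"
    # -> slice(None, 3, None), matching Python's own slice notation
    parts = [int(tok) if tok else None for tok in pattern.split(':')]
    parts += [None] * (3 - len(parts))
    return slice(*parts[:3])

items = list(range(10))
print(items[parse_slice('0:10:2')])  # [0, 2, 4, 6, 8]
print(items[parse_slice(':3')])      # [0, 1, 2]
```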
| 1 |
2016-09-10T04:15:18Z
|
[
"python",
"templates",
"flask",
"jinja2"
] |
How to get all comments of a facebook post using facebook SDK in Python?
| 39,422,562 |
<p>I want to get all comments of a facebook post. We can extract comments by passing <code>limit()</code> in the API call, but how do we know the limit? I need all the comments.</p>
<pre><code>https://graph.facebook.com/10153608167431961?fields=comments.limit(100).summary(true)&access_token=LMN
</code></pre>
<p>By using this </p>
<pre><code>data = graph.get_connections(id='10153608167431961', connection_name='comments')
</code></pre>
<p>I am getting only a few comments.
How can I get all comments of a post?</p>
<p><strong>Edit</strong></p>
<pre><code>import codecs
import json
import urllib.request
import requests

B = "https://graph.facebook.com/"
P = "10153608167431961"
f = "?fields=comments.limit(4).summary(true)&access_token="
T = "EAACb6lXwZDZD"
url = B+P+f+T

def fun(data):
    nextLink = (data['comments']['paging']['next'])
    print(nextLink)
    for i in data['comments']['data']:
        print (i['message'])
    html = urllib.request.urlopen(nextLink).read()
    data = json.loads(html.decode('utf-8'))
    fun(data)

html = urllib.request.urlopen(url).read()
d = json.loads(html.decode('utf-8'))
fun(d)
</code></pre>
<p>It is giving the error</p>
<blockquote>
<p>KeyError: 'comments'</p>
</blockquote>
| 1 |
2016-09-10T04:29:42Z
| 39,424,273 |
<p>You need to implement paging to get all/more comments: <a href="https://developers.facebook.com/docs/graph-api/using-graph-api/#paging" rel="nofollow">https://developers.facebook.com/docs/graph-api/using-graph-api/#paging</a></p>
<p>Usually, this is done with a "recursive function", but you have to keep in mind that you may hit API limits if there are a LOT of comments. It's better to load more comments on demand only.</p>
<p>Example for JavaScript: <a href="http://stackoverflow.com/questions/5023757/how-does-paging-in-facebook-javascript-api-works">How does paging in facebook javascript API works?</a></p>
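<p>A sketch of that paging loop in Python (a hypothetical helper, not part of any SDK). Note this also explains the <code>KeyError: 'comments'</code> in the question's edit: only the <em>first</em> response nests the comments under a <code>comments</code> key — the <code>paging.next</code> URLs return the comment collection directly, with top-level <code>data</code> and <code>paging</code> keys:</p>

```python
import json
import urllib.request

def fetch_json(url):
    # real network fetch -- swappable below so the loop is testable offline
    return json.loads(urllib.request.urlopen(url).read().decode('utf-8'))

def all_comments(first_url, fetch=fetch_json):
    page = fetch(first_url)
    # only the FIRST response nests the collection under 'comments';
    # every paging.next URL returns data/paging at the top level
    page = page.get('comments', page)
    while True:
        for c in page.get('data', []):
            yield c['message']
        next_url = page.get('paging', {}).get('next')
        if not next_url:
            return
        page = fetch(next_url)

# offline demo with two fake pages standing in for Graph API responses
pages = {
    'u1': {'comments': {'data': [{'message': 'a'}], 'paging': {'next': 'u2'}}},
    'u2': {'data': [{'message': 'b'}], 'paging': {}},
}
print(list(all_comments('u1', fetch=pages.get)))  # ['a', 'b']
```

Mind the API rate limits when following many pages in a row.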
| 0 |
2016-09-10T08:40:59Z
|
[
"python",
"facebook",
"facebook-graph-api",
"recursion"
] |
How to nest or jointly use two template tags in Django templates?
| 39,422,570 |
<p>I'm trying to use template filters to run a loop, but I'm unable to combine two python statements within the same statement/template. What's the correct way to combine two variables in a template? Please see the syntax and explanation below:</p>
<p>I'm building a forum with a double index, meaning, I've got a col-md-2 with list of categories. Each categories has forums, and depending on which category is clicked, that category's forums populate the next col-md-2. The remaining col-md-8 gets its content based on which category and which forum is selected.</p>
<p>My logic:</p>
<p>I've defined a template tag that loads the list of categories, which never change irrespective of which page gets loaded or which category or forum is selected. So that works fine. But based on the selected category, my second column needs to get populated. For this, I'm trying to define a custom filter (below). However, I'm not sure how to use this as it needs to be passed to another template where it runs a loop to render the html. Even if I create the for loop in this template (rather than passing it to another), I still have to do nested template tags, something like: <code>{% for forum in {{ forum.category|forumindexlistbycategory }} %}</code> In either cases, I get an error of the type <code>Invalid filter: 'forumindexlistbycategory'</code> or <code>"with" in u'include' tag needs at least one keyword argument</code>.</p>
<p>I've defined the following custom template filter in my pybb_tags.py:</p>
<pre><code>from pybb.models import Forum

@register.filter
def forumindexlistbycat(category):
    forumlistbycat = Forum.objects.filter(category=category)
    return forumlistbycat
</code></pre>
<p>And in my template, I'm trying to load it as follows:</p>
<pre><code>{% load i18n pybb_tags %}
<div class='category'>
{% if category %}
<h3>{{ category }}</h3>
{% include 'pybb/forumindex_list.html' with forum_list=category.forums_accessed category=category parent_forum='' %}
{% else %}
<h3>{{ forum.category }}</h3>
{% include 'pybb/forumindex_list.html' with forum_list= %}{{ forum.category|forumindexlistbycategory }}
{% endif %}
</div>
</code></pre>
| 0 |
2016-09-10T04:31:08Z
| 39,422,811 |
<p>So you must first properly register template tag.</p>
<pre><code>from django import template
from pybb.models import Forum

register = template.Library()

@register.filter
def forumindexlistbycat(category):
    forumlistbycat = Forum.objects.filter(category=category)
    return forumlistbycat
</code></pre>
<p>Place code from above in file named as your filter, so <code>forumindexlistbycat.py</code> and move this file to templatetags folder in your app. If you don't have this folder you must create it. Don't forget to add empty file <code>__init__.py</code> in your templatetags folder. And now you could use it in template, so:</p>
<pre><code>{% load i18n forumindexlistbycat %}
</code></pre>
<p>When your templatetag is registered you load it by its name.
And then you use it like:</p>
<pre><code>{% include 'pybb/forumindex_list.html' with forum_list=forum.category|forumindexlistbycat %}
</code></pre>
<p>Check for more info - <a href="https://docs.djangoproject.com/en/1.10/howto/custom-template-tags/" rel="nofollow">Guide on Custom template tags and filters.</a></p>
| 2 |
2016-09-10T05:09:26Z
|
[
"python",
"django",
"django-templates",
"django-template-filters",
"templatetags"
] |
Python Writing Weird Unicode to CSV
| 39,422,573 |
<p>I'm attempting to extract article information using the python <a href="https://github.com/codelucas/newspaper" rel="nofollow">newspaper3k</a> package and then write to a CSV file. While the info is downloaded correctly, I'm having issues with the output to CSV. I don't think I fully understand unicode, despite my efforts to read about it. </p>
<pre><code>from newspaper import Article, Source
import csv

first_article = Article(url="http://www.bloomberg.com/news/articles/2016-09-07/asian-stock-futures-deviate-as-s-p-500-ends-flat-crude-tops-46")
first_article.download()

if first_article.is_downloaded:
    first_article.parse()
    first_article.nlp()

article_array = []
collate = {}
collate['title'] = first_article.title
collate['content'] = first_article.text
collate['keywords'] = first_article.keywords
collate['url'] = first_article.url
collate['summary'] = first_article.summary
print(collate['content'])
article_array.append(collate)

keys = article_array[0].keys()
with open('bloombergtest.csv', 'w') as output_file:
    csv_writer = csv.DictWriter(output_file, keys)
    csv_writer.writeheader()
    csv_writer.writerows(article_array)
</code></pre>
<p>When I print collate['content'], which is first_article.text, the console outputs the article's content just fine. Everything shows up correctly, apostrophes and all. When I write to the CSV, the content cell text has odd characters in it. For example: </p>
<p>ââ¬ÅAt the end of the day, Europeââ¬â¢s economy isnââ¬â¢t in great shape, inflation doesnââ¬â¢t look exciting and there are a bunch of political risks to reckon with.</p>
<p>So far I have tried:</p>
<pre><code>with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:
</code></pre>
<p>to no avail. I also tried utf-16 instead of 8, but that just resulted in the cells being written in an odd order. It didn't create the cells correctly in the CSV, although the output looked correct. I've also tried .encode('utf-8') on various variables but nothing has worked.</p>
<p>What's going on? Why would the console print the text correctly, while the CSV file has odd characters? How can I fix this? </p>
| 0 |
2016-09-10T04:31:39Z
| 39,423,498 |
<p>That's most probably a problem with the software that you use to open or print the CSV file - it doesn't "understand" that CSV is encoded in UTF-8 and assumes ASCII, latin-1, ISO-8859-1 or a similar encoding for it.</p>
<p>You can aid that software in recognizing the CSV file's encoding by <a href="http://stackoverflow.com/a/934203/6394138">placing a BOM sequence</a> in the beginning of your file (which, in general, is not recommended for UTF-8).</p>
| 1 |
2016-09-10T06:59:10Z
|
[
"python",
"csv",
"unicode"
] |
Python Writing Weird Unicode to CSV
| 39,422,573 |
<p>I'm attempting to extract article information using the python <a href="https://github.com/codelucas/newspaper" rel="nofollow">newspaper3k</a> package and then write to a CSV file. While the info is downloaded correctly, I'm having issues with the output to CSV. I don't think I fully understand unicode, despite my efforts to read about it. </p>
<pre><code>from newspaper import Article, Source
import csv

first_article = Article(url="http://www.bloomberg.com/news/articles/2016-09-07/asian-stock-futures-deviate-as-s-p-500-ends-flat-crude-tops-46")
first_article.download()

if first_article.is_downloaded:
    first_article.parse()
    first_article.nlp()

article_array = []
collate = {}
collate['title'] = first_article.title
collate['content'] = first_article.text
collate['keywords'] = first_article.keywords
collate['url'] = first_article.url
collate['summary'] = first_article.summary
print(collate['content'])
article_array.append(collate)

keys = article_array[0].keys()
with open('bloombergtest.csv', 'w') as output_file:
    csv_writer = csv.DictWriter(output_file, keys)
    csv_writer.writeheader()
    csv_writer.writerows(article_array)
</code></pre>
<p>When I print collate['content'], which is first_article.text, the console outputs the article's content just fine. Everything shows up correctly, apostrophes and all. When I write to the CSV, the content cell text has odd characters in it. For example: </p>
<p>ââ¬ÅAt the end of the day, Europeââ¬â¢s economy isnââ¬â¢t in great shape, inflation doesnââ¬â¢t look exciting and there are a bunch of political risks to reckon with.</p>
<p>So far I have tried:</p>
<pre><code>with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:
</code></pre>
<p>to no avail. I also tried utf-16 instead of 8, but that just resulted in the cells being written in an odd order. It didn't create the cells correctly in the CSV, although the output looked correct. I've also tried .encode('utf-8') on various variables but nothing has worked.</p>
<p>What's going on? Why would the console print the text correctly, while the CSV file has odd characters? How can I fix this? </p>
| 0 |
2016-09-10T04:31:39Z
| 39,427,362 |
<p>Use encoding <code>utf-8-sig</code>. Excel requires the BOM to interpret UTF8; otherwise, it assumes the default localized encoding. </p>
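<p>A minimal demonstration — <code>utf-8-sig</code> prepends the three BOM bytes that Excel uses to detect UTF-8 (written to an in-memory buffer here so it's easy to inspect):</p>

```python
import csv
import io

buf = io.BytesIO()
# 'utf-8-sig' emits the BOM on the first write, so Excel recognizes
# the file as UTF-8 instead of assuming the locale's legacy code page
wrapper = io.TextIOWrapper(buf, encoding='utf-8-sig', newline='')
writer = csv.DictWriter(wrapper, fieldnames=['title'])
writer.writeheader()
writer.writerow({'title': 'Europe\u2019s economy'})
wrapper.flush()

raw = buf.getvalue()
print(raw[:3])  # b'\xef\xbb\xbf' -- the UTF-8 BOM
```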
| 1 |
2016-09-10T15:05:27Z
|
[
"python",
"csv",
"unicode"
] |
Python Writing Weird Unicode to CSV
| 39,422,573 |
<p>I'm attempting to extract article information using the python <a href="https://github.com/codelucas/newspaper" rel="nofollow">newspaper3k</a> package and then write to a CSV file. While the info is downloaded correctly, I'm having issues with the output to CSV. I don't think I fully understand unicode, despite my efforts to read about it. </p>
<pre><code>from newspaper import Article, Source
import csv

first_article = Article(url="http://www.bloomberg.com/news/articles/2016-09-07/asian-stock-futures-deviate-as-s-p-500-ends-flat-crude-tops-46")
first_article.download()

if first_article.is_downloaded:
    first_article.parse()
    first_article.nlp()

article_array = []
collate = {}
collate['title'] = first_article.title
collate['content'] = first_article.text
collate['keywords'] = first_article.keywords
collate['url'] = first_article.url
collate['summary'] = first_article.summary
print(collate['content'])
article_array.append(collate)

keys = article_array[0].keys()
with open('bloombergtest.csv', 'w') as output_file:
    csv_writer = csv.DictWriter(output_file, keys)
    csv_writer.writeheader()
    csv_writer.writerows(article_array)
</code></pre>
<p>When I print collate['content'], which is first_article.text, the console outputs the article's content just fine. Everything shows up correctly, apostrophes and all. When I write to the CSV, the content cell text has odd characters in it. For example: </p>
<p>ââ¬ÅAt the end of the day, Europeââ¬â¢s economy isnââ¬â¢t in great shape, inflation doesnââ¬â¢t look exciting and there are a bunch of political risks to reckon with.</p>
<p>So far I have tried:</p>
<pre><code>with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:
</code></pre>
<p>to no avail. I also tried utf-16 instead of 8, but that just resulted in the cells being written in an odd order. It didn't create the cells correctly in the CSV, although the output looked correct. I've also tried .encode('utf-8') on various variables but nothing has worked.</p>
<p>What's going on? Why would the console print the text correctly, while the CSV file has odd characters? How can I fix this? </p>
| 0 |
2016-09-10T04:31:39Z
| 39,429,388 |
<p>Changing <code>with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:</code> to <code>with open('bloombergtest.csv', 'w', encoding='utf-8-sig') as output_file:</code> worked, as recommended by Leon and Mark Tolonen.</p>
| 0 |
2016-09-10T18:40:17Z
|
[
"python",
"csv",
"unicode"
] |
can't print the value of json object in django
| 39,422,633 |
<p>I have this json object in ajax_data variable</p>
<pre><code>{
"columns[0][data]": "0",
"columns[1][name]": "",
"columns[5][searchable]": "true",
"columns[5][name]": "",
"columns[4][search][regex]": "false",
"order[0][dir]": "asc",
"length": "10",
}
</code></pre>
<p>I have converted it using json.loads() function like.</p>
<pre><code>ajax_data = json.loads(ajax_data)
</code></pre>
<p>I want to get the value of "order[0][dir]" and "columns[0][data]" but if I print it using </p>
<pre><code>ajax_data['order'][0]['dir']
</code></pre>
<p>its giving error :</p>
<pre><code>KeyError at /admin/help
'order'
</code></pre>
<p>But the same code works if I access the length key.</p>
| 0 |
2016-09-10T04:42:01Z
| 39,422,656 |
<p>That's because <code>length</code> is a key in that json object, and <code>order</code> is not. The key names are the <em>entire</em> strings inside the quotes: <code>columns[0][data]</code>, <code>order[0][dir]</code>, etc.</p>
<p>Those are unusual key names, but perfectly valid.</p>
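<p>A quick demonstration of the difference:</p>

```python
import json

ajax_data = json.loads('{"order[0][dir]": "asc", "length": "10"}')

# the whole bracketed string is one flat key...
print(ajax_data["order[0][dir]"])   # asc
# ...so there is no nested 'order' key to index into
print("order" in ajax_data)         # False
```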
| 0 |
2016-09-10T04:46:30Z
|
[
"python",
"django"
] |
can't print the value of json object in django
| 39,422,633 |
<p>I have this json object in ajax_data variable</p>
<pre><code>{
"columns[0][data]": "0",
"columns[1][name]": "",
"columns[5][searchable]": "true",
"columns[5][name]": "",
"columns[4][search][regex]": "false",
"order[0][dir]": "asc",
"length": "10"
}
</code></pre>
<p>I have converted it using json.loads() function like.</p>
<pre><code>ajax_data = json.loads(ajax_data)
</code></pre>
<p>I want to get the value of "order[0][dir]" and "columns[0][data]" but if I print it using </p>
<pre><code>ajax_data['order'][0]['dir']
</code></pre>
<p>its giving error :</p>
<pre><code>KeyError at /admin/help
'order'
</code></pre>
<p>But the same code works if I access the length key.</p>
| 0 |
2016-09-10T04:42:01Z
| 39,422,725 |
<p>The keys you have used are actually not a good implementation choice.</p>
<pre><code>{
"columns[0][data]": "0",
"columns[1][name]": "",
"columns[5][searchable]": "true",
"columns[5][name]": "",
"columns[4][search][regex]": "false",
"order[0][dir]": "asc",
"length": "10",
}
</code></pre>
<p>Instead of this you should hav gone for </p>
<pre><code>{
    "columns": [
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}},
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}},
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}},
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}},
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}},
        {"data": "0", "name": "", "searchable": "true",
         "search": {"regex": "false"}}
    ],
    "order": [
        {"dir": "asc"}
    ],
    "length": "10"
}
</code></pre>
<p>In this case <code>ajax_data['order'][0]['dir']</code> will result in the value "asc"</p>
<p>For your current implementation the key is "order[0][dir]"</p>
<p>That is go for </p>
<p><code>ajax_data["order[0][dir]"]</code></p>
<p>Hope you understood the issue. </p>
<p>Structuring of JSON is very important when dealing with APIs. Try to restructure your JSON; it will help in the future too.</p>
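<p>With the restructured JSON, nested access works as expected:</p>

```python
import json

payload = json.loads("""
{
    "columns": [{"data": "0", "name": "", "searchable": "true",
                 "search": {"regex": "false"}}],
    "order": [{"dir": "asc"}],
    "length": "10"
}
""")
print(payload["order"][0]["dir"])                # asc
print(payload["columns"][0]["search"]["regex"])  # false
```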
| 1 |
2016-09-10T04:56:37Z
|
[
"python",
"django"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code>raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
    if raw[i].isdigit() == True:  ## This line is where I'd loop in diff func's
        counter = 1
        print True
        break
if counter == 0:
    print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code>raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
    for i in xrange(len(raw)):
        if lst[f] == True:  ## loop through f, which then loops through i
            print lst[f]
            counter = 1
            print True
            break
if counter == 0:
    print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code>raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
    print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code>raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
    print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,750 |
<p>Since you are looping through a list of simple items and trying to find whether <a href="https://docs.python.org/2/library/functions.html#all" rel="nofollow"><code>all</code></a> of the functions have <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow"><code>any</code></a> valid results, you can simply define the list of functions you want to call on the input and return that. Here is a rather pythonic example of what you are trying to achieve:</p>
<pre><code>def checker(checks, value):
    return all(any(check(r) for r in value) for check in checks)
</code></pre>
<p>Test it out:</p>
<pre><code>>>> def checker(checks, value):
...     return all(any(check(r) for r in value) for check in checks)
...
>>> checks = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
>>> checker(checks, 'abcdef123ABC')
True
>>> checker(checks, 'abcdef123')
False
>>>
</code></pre>
| 1 |
2016-09-10T04:59:40Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code>raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
    if raw[i].isdigit() == True:  ## This line is where I'd loop in diff func's
        counter = 1
        print True
        break
if counter == 0:
    print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code>raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
    for i in xrange(len(raw)):
        if lst[f] == True:  ## loop through f, which then loops through i
            print lst[f]
            counter = 1
            print True
            break
if counter == 0:
    print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code>raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
    print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code>raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
    print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,752 |
<p>You can use <a href="http://stackoverflow.com/questions/546337/how-do-i-perform-introspection-on-an-object-in-python-2-x" title="introspection">introspection</a> to loop through all of an object's attributes, whether they be functions or some other type.</p>
<p>However you probably don't want to do that here, because <code>str</code> has <em>lots</em> of function attributes, and you're only interested in five of them. It's probably better to do as you did and just make a list of the five you want.</p>
<p>Also, you don't need to loop over each character of the string if you don't want to; those functions already look at the whole string.</p>
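<p>Applied to the question's five methods, the getattr-by-name lookup looks like this, using the sample input <code>qA2</code> (Python 3 syntax here for easy testing):</p>

```python
raw = 'qA2'
checks = ('isalnum', 'isalpha', 'isdigit', 'islower', 'isupper')
# look each method up on str by name, then test every character
results = [any(getattr(str, name)(ch) for ch in raw) for name in checks]
for r in results:
    print(r)  # True, five times
```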
| 1 |
2016-09-10T04:59:51Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code> raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
if raw[i].isdigit() == True: ## This line is where I'd loop in diff func's
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code> raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
for i in xrange(len(raw)):
if lst[f] == True: ## loop through f, which then loops through i
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code> raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code> raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,759 |
<p>To answer the original question:</p>
<pre><code>raw = 'asdfa3fa'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
isanything = [func(raw) for func in functions]
print repr(isanything)
</code></pre>
| 2 |
2016-09-10T05:01:21Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code> raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
if raw[i].isdigit() == True: ## This line is where I'd loop in diff func's
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code> raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
for i in xrange(len(raw)):
if lst[f] == True: ## loop through f, which then loops through i
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code> raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code> raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,761 |
<p>The way you are looping through a list of functions is slightly off. This would be a valid way to do it. The functions you need to store in the list are the generic string functions given by str.funcname. Once you have those list of functions, you can loop through them using a for loop, and just treat it like a normal function!</p>
<pre><code>raw = 'asdfaa3fa'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper] # list of functions
for fn in functions: # iterate over list of functions, where the current function in the list is referred to as fn
for ch in raw: # for each character in the string raw
if fn(ch):
print(True)
break
</code></pre>
<p>Sample outputs:</p>
<pre><code>Input Output
===================================
"qA2" -----> True True True True True
"asdfaa3fa" -----> True True True True
</code></pre>
<p>Also I notice you seem to use indexing for iteration which makes me feel like you might be coming from a language like C/C++. The for in loop construct is really powerful in python so I would read up on it (y).</p>
<p>Above is a more pythonic way to do this but just as a learning tool, I wrote a working version that matches how you tried to do it as much as possible to show you where you went wrong specifically. Here it is with comments:</p>
<pre><code>raw = 'asdfaa3fa'
lst = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper] # notice youre treating the functions just like variables and aren't actually calling them. That is, you're writing str.isalpha instead of str.isalpha()
for f in range(0,5):
counter = 0
for i in xrange(len(raw)):
if lst[f](raw[i]) == True: # In your attempt, you were checking if lst[f]==True; lst[f] is a function so you are checking if a function == True. Instead, you need to pass an argument to lst[f](), in this case the ith character of raw, and check whether what that function evaluates to is true
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
| 4 |
2016-09-10T05:01:32Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code> raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
if raw[i].isdigit() == True: ## This line is where I'd loop in diff func's
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code> raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
for i in xrange(len(raw)):
if lst[f] == True: ## loop through f, which then loops through i
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code> raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code> raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,813 |
<p>Okay, so the first question is easy enough. The simple way to do it is just do</p>
<pre><code>def foo(raw):
for c in raw:
if c.isalpha(): return True
if c.isdigit(): return True
# the other cases
return False
</code></pre>
<p>Never neglect the simplest thing that could work.</p>
<p>Now, if you want to do it <em>dynamically</em> -- which is the magic keyword you probably needed, you want to apply something like this (cribbed from <a href="http://stackoverflow.com/questions/17734618/dynamic-method-call-in-python-2-7-using-strings-of-method-names">another question</a>):</p>
<pre><code>meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for c in raw:
for m in meths:
getattr(c, m)()
</code></pre>
<p>Warning, this is untested code meant to give you the idea. The key notion here is that the methods of an object are attributes just like anything else, so, for example <code>getattr("a", "isalpha")()</code> does the following:</p>
<ul>
<li>Uses <code>getattr</code> to search the attributes dictionary of <code>"a"</code> for a method named <code>isalpha</code></li>
<li>Returns that method itself -- <code><function isalpha></code></li>
<li>then invokes that method using the <code>()</code> which is the function application operator in Python.</li>
</ul>
<p>See this example:</p>
<pre><code>In [11]: getattr('a', 'isalpha')()
Out[11]: True
</code></pre>
| 2 |
2016-09-10T05:09:43Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code> raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
if raw[i].isdigit() == True: ## This line is where I'd loop in diff func's
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code> raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
for i in xrange(len(raw)):
if lst[f] == True: ## loop through f, which then loops through i
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code> raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code> raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,839 |
<p>All the other answers are correct, but since you're a beginner, I want to point out the problem in your code:</p>
<pre><code>lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
</code></pre>
<p>First: Not sure which value <em>i</em> currently has in your code snippet, but it seems to point somewhere in the string - which results in single characters being evaluated, not the whole string <em>raw</em>.</p>
<p>Second: When you build your list, you are already calling the methods you want to insert, which has the effect that not the functions themself get inserted, but their return values (that's why you're seeing all those <em>True</em> values in your print statement).</p>
<p>Try changing your code as follows:</p>
<pre><code>lst = [raw.isalnum, raw.isalpha, raw.isdigit, raw.islower, raw.isupper]
</code></pre>
| 2 |
2016-09-10T05:16:56Z
|
[
"python",
"list",
"loops"
] |
Looping through list of functions in a function in Python dynamically
| 39,422,641 |
<p>I'd like to see if it's possible to run through a list of functions in a function. The closest thing I could find is looping through an entire module. I only want to use a pre-selected list of functions.</p>
<p>Here's my original problem:</p>
<ol>
<li>Given a string, check each letter to see if any of the 5 tests fulfill.</li>
<li>If a minimum of 1 letter passes a check, return True.</li>
<li>If all letters in the string fails the check, return False.</li>
<li>For each letter in the string, we will check these functions: isalnum(), isalpha(), isdigit(), islower(), isupper()</li>
<li>The result of each test should print to different lines. </li>
</ol>
<p>Sample Input</p>
<pre><code> qA2
</code></pre>
<p>Sample Output (must print to separate lines, True if at least one letter passes, or false is all letters fail each test):</p>
<pre><code> True
True
True
True
True
</code></pre>
<p>I wrote this for one test. Of course I could just write 5 different sets of code but that seems ugly. Then I started wondering if I could just loop through all the tests they're asking for. </p>
<p>Code for just one test:</p>
<pre><code> raw = 'asdfaa3fa'
counter = 0
for i in xrange(len(raw)):
if raw[i].isdigit() == True: ## This line is where I'd loop in diff func's
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>My fail attempt to run a loop with all the tests:</p>
<pre><code> raw = 'asdfaa3fa'
lst = [raw[i].isalnum(),raw[i].isalpha(),raw[i].isdigit(),raw[i].islower(),raw[i].isupper()]
counter = 0
for f in range(0,5):
for i in xrange(len(raw)):
if lst[f] == True: ## loop through f, which then loops through i
print lst[f]
counter = 1
print True
break
if counter == 0:
print False
</code></pre>
<p>So how do I fix this code to fulfill all the rules up there?</p>
<hr>
<p><strong>Using info from all the comments - this code fulfills the rules stated above, looping through each method dynamically as well.</strong></p>
<pre><code> raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
for func in functions:
print any(func(letter) for letter in raw)
</code></pre>
<p><strong>getattr approach (I think this is called introspection method?)</strong></p>
<pre><code> raw = 'ABC'
meths = ['isalnum', 'isalpha', 'isdigit', 'islower', 'isupper']
for m in meths:
print any(getattr(c,m)() for c in raw)
</code></pre>
<p><strong>List comprehension approach:</strong></p>
<pre><code> from __future__ import print_function ## Changing to Python 3 to use print in list comp
raw = 'ABC'
functions = [str.isalnum, str.isalpha, str.isdigit, str.islower, str.isupper]
solution = [print(func(raw)) for func in functions]
</code></pre>
| 5 |
2016-09-10T04:42:56Z
| 39,422,846 |
<p>I'm going to guess that you're validating password complexity, and I'm also going to say that software which takes an input and says "False" and there's no indication <em>why</em> is user-hostile, so the most important thing is not "how to loop over nested char function code wizardry (*)" but "give good feedback", and suggest something more like:</p>
<pre><code>raw = 'asdfaa3fa'
import re
def validate_password(password):
""" This function takes a password string, and validates it
against the complexity requirements from {wherever}
and returns True if it's complex enough, otherwise False """
if not re.search('\d', password):
print("Error: password needs to include at least one number")
return False
elif not re.search('[a-z]', password):
print("Error: password must include at least one lowercase letter")
return False
elif not re.search('[A-Z]', password):
print("Error: password must include at least one uppercase letter")
return False
print("Password is OK")
return True
validate_password(raw)
</code></pre>
<p>Try online <a href="https://repl.it/D2Bm" rel="nofollow">at repl.it</a></p>
<p>And the regex searching checks ranges of characters and digits in one call, which is neater than a loop over characters.</p>
<p>(PS. your functions overlap; a string which has characters matching 'isupper', 'islower' and 'isnumeric' already has 'isdigit' and 'isalnum' covered. More interesting would be to handle characters like <code>!</code> which are not upper, lower, digits or alnum).</p>
<hr>
<p>(*) function wizardry like the other answers is normally exactly what I would answer, but there's so much of that already answered that I may as well answer the other way instead :P</p>
| 2 |
2016-09-10T05:17:51Z
|
[
"python",
"list",
"loops"
] |
Write to last line of file in python
| 39,422,699 |
<p>I want to save some strings in a file that I called inv.data. Every time I write a special command, I want to save a string in the file. The string should always be on the last line of the file.
I read something about append, so I tried to do something like this:</p>
<pre><code>#Open and close the inventory file
fileOpen = open('inv.data', 'a')
fileOpen.write(argOne)
fileOpen.close()
fileOpen = open('inv.data', 'r')
savedData = fileOpen.read().splitlines()
fileOpen.close()
</code></pre>
<p>This works fine the first time I want to add something during runtime, but when I try to add the second string it looks something like this:</p>
<pre><code>sword
axe
shield
bow
flower
monsterLol
</code></pre>
<p>Where monster was the first add, and Lol was the second thing I added.
What am I missing? Do I need to specify that it should go to a new line each time or?</p>
| 0 |
2016-09-10T04:53:29Z
| 39,422,751 |
<p>A newline is not being added, and hence the next entry is appended to the same line. You can rectify this as follows:</p>
<pre><code>fileOpen.write(argOne + '\n')
</code></pre>
<p>This way you don't have to modify the way you input your arguments.</p>
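<p>A quick sketch showing the effect (the tempfile path is just for the demonstration):</p>
<pre><code class="lang-python">
```python
import os
import tempfile

# Append two entries, each with an explicit newline, then read them back.
path = os.path.join(tempfile.mkdtemp(), 'inv.data')
with open(path, 'a') as f:
    f.write('monster' + '\n')
    f.write('Lol' + '\n')
with open(path) as f:
    saved = f.read().splitlines()
print(saved)  # ['monster', 'Lol'] -- two separate lines, not 'monsterLol'
```
</code></pre>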
| 4 |
2016-09-10T04:59:46Z
|
[
"python",
"python-3.x"
] |
Django Settings for Rendering static images from Google Cloud storage
| 39,422,723 |
<p>I have my Django App hosted on Google Compute Engine. I wish to render static elements of the App from Google Cloud Storage. I have all the static elements inside Google Cloud storage bucket www.example.com/static</p>
<p>My Settings.py:</p>
<pre><code># Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, '../example_static')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, '../example_media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'), MEDIA_ROOT,)
</code></pre>
<p>000-default.conf File:</p>
<pre><code><VirtualHost *:80>
.....
DocumentRoot /var/www/html
Alias /static /opt/projects/example-google/example_static
....
</VirtualHost>
</code></pre>
<p>With Current settings, it is picking up the static files from path: /opt/projects/example-google/example_static.</p>
<p>Can someone please explain the settings change required for rendering all the static images from Google Cloud storage bucket www.example.com/static ?</p>
<p>Thanks,</p>
| 0 |
2016-09-10T04:56:34Z
| 39,601,225 |
<p>While this isn't a Django-based answer, since I know little about that, you may find the Alpha release of <a href="https://cloud.google.com/compute/docs/load-balancing/http/using-http-lb-with-cloud-storage" rel="nofollow">Google Cloud Load Balancer support for Google Cloud Storage</a> another route to provide URL maps to static content in GCS while serving the rest of your data on GCE.</p>
| 0 |
2016-09-20T18:24:40Z
|
[
"python",
"django",
"google-cloud-storage",
"google-compute-engine",
"django-staticfiles"
] |
Django Settings for Rendering static images from Google Cloud storage
| 39,422,723 |
<p>I have my Django App hosted on Google Compute Engine. I wish to render static elements of the App from Google Cloud Storage. I have all the static elements inside Google Cloud storage bucket www.example.com/static</p>
<p>My Settings.py:</p>
<pre><code># Static files (CSS, JavaScript, Images)
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, '../example_static')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, '../example_media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'), MEDIA_ROOT,)
</code></pre>
<p>000-default.conf File:</p>
<pre><code><VirtualHost *:80>
.....
DocumentRoot /var/www/html
Alias /static /opt/projects/example-google/example_static
....
</VirtualHost>
</code></pre>
<p>With Current settings, it is picking up the static files from path: /opt/projects/example-google/example_static.</p>
<p>Can someone please explain the settings change required for rendering all the static images from Google Cloud storage bucket www.example.com/static ?</p>
<p>Thanks,</p>
| 0 |
2016-09-10T04:56:34Z
| 39,809,861 |
<p>You can find some documentation <a href="https://cloud.google.com/python/django/flexible-environment#deploy_the_app_to_the_app_engine_flexible_environment" rel="nofollow">here</a>.</p>
<p>One more thing I found useful is automatically switching between dev and prod environments by making the following changes in your app's settings.py:</p>
<pre><code>if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine'):
STATIC_URL = 'https://storage.googleapis.com/<your-bucket>/static/'
else:
STATIC_URL = '/static/'
</code></pre>
| 0 |
2016-10-01T18:04:09Z
|
[
"python",
"django",
"google-cloud-storage",
"google-compute-engine",
"django-staticfiles"
] |
For Loop is not Following Indentations
| 39,422,791 |
<p>Afternoon all,</p>
<p>I'm encountering some strange behavior from the starred for loop below. The below function essentially iterates over the input dictionary <code>patient_features</code>, concatenating several strings to produce a SVMLight style vector. This vector is then intended to be written to the deliverables file. However, for some reason the writes at the end of the function are being called for every iteration of the starred for loop, resulting in a massive file size (and some other more minor issues). Any help on what might be causing this would be greatly appreciated.</p>
<pre><code>def save_svmlight(patient_features, mortality, op_file, op_deliverable):
deliverable1 = open(op_file, 'wb') # feature without patient id
deliverable2 = open(op_deliverable, 'wb') # features with patient id
d1_line = ''
d2_line = ''
count = 0 # VALUE TO TEST IF INCREMENTING
print count
for patient_id in patient_features: #**********
value_tuple_list = patient_features[patient_id]
value_tuple_list.sort()
d2_line += str(int(patient_id)) + ' '
if patient_id in mortality:
d1_line += str(1) + ' '
d2_line += str(1) + ' '
else:
d1_line += str(0) + ' '
d2_line += str(0) + ' '
for value_tuple in value_tuple_list:
d1_line += str(int(value_tuple[0])) + ":" + str("{:1.6f}".format(value_tuple[1])) + ' '
d2_line += str(int(value_tuple[0])) + ":" + str("{:1.6f}".format(value_tuple[1])) + ' '
count += 1
print count # VALUE INCREMENTS WHEN IT SHOULD NOT
deliverable1.write(d1_line); # <- BEING WRITTEN TO EACH LOOP :(
deliverable2.write(d2_line); # <- BEING WRITTEN TO EACH LOOP :(
</code></pre>
| 0 |
2016-09-10T05:04:59Z
| 39,422,887 |
<p>The issue is with using both indentations and tabs in your code. Python is not a fan if you mix the two :/</p>
<p>For future reference, if you are using Sublime Text, select all the text, and in the toolbar on top go to <code>View > Indentation > Convert Tabs to Spaces</code> and the issue will be resolved. </p>
<p>Thought I would save you having to manually search for each tab and replacing it :)</p>
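<p>If you'd rather find the offending lines programmatically than hunt for them by hand, here is a small sketch (my own helper, not part of Sublime or Python) that flags lines whose leading whitespace mixes tabs and spaces:</p>
<pre><code class="lang-python">
```python
# Flag lines whose leading whitespace mixes tabs and spaces, which is
# exactly what trips up Python's indentation handling.
def mixed_indent_lines(source):
    flagged = []
    for lineno, line in enumerate(source.splitlines(), 1):
        indent = line[:len(line) - len(line.lstrip())]
        if '\t' in indent and ' ' in indent:
            flagged.append(lineno)
    return flagged

sample = "def f():\n\t x = 1\n    return x\n"
print(mixed_indent_lines(sample))  # [2]
```
</code></pre>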
| 0 |
2016-09-10T05:26:44Z
|
[
"python",
"python-2.7",
"for-loop"
] |
Get number of tweets per min for particular channel
| 39,422,946 |
<p>I am using <a href="https://github.com/geduldig/TwitterAPI" rel="nofollow">pypi Twitter API</a></p>
<p>I need to call the Twitter API to get the tweets of one channel at a certain interval, and I need the tweet count for that time period.</p>
<p>For example, the tweet count of the @NASA channel every 5 mins.</p>
<p>I am looking at <a href="https://dev.twitter.com/rest/reference/get/search/tweets" rel="nofollow">search tweets</a>. Is this correct? I don't think it gives a tweet count, though.</p>
| 4 |
2016-09-10T05:37:25Z
| 39,423,088 |
<p>Use twitter's API. I mean, right?</p>
<p><a href="https://dev.twitter.com/rest/reference/get/statuses/user_timeline" rel="nofollow">https://dev.twitter.com/rest/reference/get/statuses/user_timeline</a></p>
<p>That will get you the most recent 3200 tweets by a user. So unless those guys are literally tweeting more than about 10 times per second, you should be good.</p>
<p>Hit the api every 5 minutes, including on page load. Take the difference and there is your count.</p>
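<p>A sketch of the counting logic (the data shapes here are hypothetical stand-ins for the timeline response, not real twitter-library objects):</p>
<pre><code class="lang-python">
```python
# Hypothetical helper: given the latest timeline page (newest-first dicts
# with an 'id' key) and the newest id seen on the previous poll, count how
# many tweets are new since then.
def count_new_tweets(timeline, since_id):
    new = [t for t in timeline if since_id is None or t['id'] > since_id]
    newest = timeline[0]['id'] if timeline else since_id
    return len(new), newest

first_poll = [{'id': 103}, {'id': 102}, {'id': 101}]
count, last_seen = count_new_tweets(first_poll, None)
print(count, last_seen)   # 3 103

second_poll = [{'id': 105}, {'id': 104}, {'id': 103}]
count, last_seen = count_new_tweets(second_poll, last_seen)
print(count, last_seen)   # 2 105
```
</code></pre>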
| 1 |
2016-09-10T05:59:37Z
|
[
"javascript",
"jquery",
"python",
"django",
"twitter"
] |
Python Git diff parser
| 39,423,122 |
<p>I would like to parse git diff output with Python, and I am interested in getting the following information from the diff parser:</p>
<ol>
<li>Content of deleted/added lines and also line number. </li>
<li>File name.</li>
<li>Status of file whether it is deleted, renamed or added. </li>
</ol>
<p>I am using <a href="https://pypi.python.org/pypi/unidiff" rel="nofollow">unidiff 0.5.2</a> for this purpose and I wrote the following code: </p>
<pre><code> from unidiff import PatchSet
import git
import os
commit_sha1 = 'b4defafcb26ab86843bbe3464a4cf54cdc978696'
repo_directory_address = '/my/git/repo'
repository = git.Repo(repo_directory_address)
commit = repository.commit(commit_sha1)
diff_index = commit.diff(commit_sha1+'~1', create_patch=True)
diff_text = reduce(lambda x, y: str(x)+os.linesep+str(y), diff_index).split(os.linesep)
patch = PatchSet(diff_text)
print patch[0].is_added_file
</code></pre>
<p>I am using <a href="https://github.com/gitpython-developers/GitPython" rel="nofollow">GitPython</a> to generate the Git diff. I received the following error for the above code: </p>
<pre><code> current_file = PatchedFile(source_file, target_file,
UnboundLocalError: local variable 'source_file' referenced before assignment
</code></pre>
<p>I would appreciate if you could help me to fix this error.</p>
| 0 |
2016-09-10T06:04:16Z
| 39,909,397 |
<p>Finally, I found the solution. The output of gitpython is a little bit different from the standard git diff output. In a standard git diff the source-file line starts with <strong>---</strong>, but the gitpython output starts with <strong>------</strong>, as you can see in the output of running the following Python code (this example is generated with the <a href="https://github.com/elastic/elasticsearch" rel="nofollow">elasticsearch repository</a>):</p>
<pre><code>import git
repo_directory_address = '/your/elasticsearch/repository/address'
revision = "ace83d9d2a97cfe8a8aa9bdd7b46ce71713fb494"
repository = git.Repo(repo_directory_address)
commit = repository.commit(rev=revision)
# Git ignore white space at the end of line, empty lines,
# renamed files and also copied files
diff_index = commit.diff(revision+'~1', create_patch=True, ignore_blank_lines=True,
ignore_space_at_eol=True, diff_filter='cr')
print reduce(lambda x, y: str(x)+str(y), diff_index)
</code></pre>
<p>The partial out put would be as follow: </p>
<pre><code>core/src/main/java/org/elasticsearch/action/index/IndexRequest.java
=======================================================
lhs: 100644 | f8b0ce6c13fd819a02b1df612adc929674749220
rhs: 100644 | b792241b56ce548e7dd12ac46068b0bcf4649195
------ a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java
+++ b/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java
@@ -20,16 +20,18 @@
package org.elasticsearch.action.index;
import org.elasticsearch.ElasticsearchGenerationException;
+import org.elasticsearch.Version;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.DocumentRequest;
import org.elasticsearch.action.RoutingMissingException;
import org.elasticsearch.action.TimestampParsingException;
import org.elasticsearch.action.support.replication.ReplicationRequest;
import org.elasticsearch.client.Requests;
+import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MappingMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.Nullable;
-import org.elasticsearch.common.UUIDs;
+import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
</code></pre>
<p>As you can see, the source-file line of the output starts with <strong>------</strong>. To fix the problem, you need to edit the source-file regular expression of <a href="https://github.com/matiasb/python-unidiff" rel="nofollow">unidiff 0.5.2</a>, which you can find in <a href="https://github.com/matiasb/python-unidiff/blob/master/unidiff/constants.py" rel="nofollow">/unidiff/constants.py</a>, from:</p>
<pre><code>RE_SOURCE_FILENAME = re.compile(
r'^--- (?P<filename>[^\t\n]+)(?:\t(?P<timestamp>[^\n]+))?')
</code></pre>
<p>to: </p>
<pre><code>RE_SOURCE_FILENAME = re.compile(
r'^------ (?P<filename>[^\t\n]+)(?:\t(?P<timestamp>[^\n]+))?')
</code></pre>
<p><strong>PS:</strong> if the source file is renamed, gitpython generates a diff starting with <strong>---</strong>. But this will not throw an error, because I filtered the git diff for copied and renamed files (<strong>diff_filter='cr'</strong>).</p>
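<p>A quick, self-contained sketch of the difference (the sample line is taken from the diff output above; the two patterns are the stock and patched versions of <code>RE_SOURCE_FILENAME</code>):</p>

```python
import re

# Stock unidiff 0.5.2 pattern expects "--- "; the patched pattern expects
# "------ ", as produced by the gitpython diff output shown above
stock = re.compile(r'^--- (?P<filename>[^\t\n]+)(?:\t(?P<timestamp>[^\n]+))?')
patched = re.compile(r'^------ (?P<filename>[^\t\n]+)(?:\t(?P<timestamp>[^\n]+))?')

line = '------ a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java'

print(stock.match(line))                      # None: "---" must be followed by a space
print(patched.match(line).group('filename'))  # a/core/src/main/java/...
```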
| 0 |
2016-10-07T04:29:09Z
|
[
"python",
"git",
"parsing",
"gitpython"
] |
Python Facebook SDK: How to get response from request?
| 39,423,146 |
<p>I am using the following code to publish a photo, but I don't know how to get the ID of the resulting post. How can I get it?</p>
<pre><code>import facebook
graph = facebook.GraphAPI(access_token='mytoken', version='2.7')
graph.put_photo(image=open(r'E:\Facebook\myphoto.jpg', 'rb'), message='Cool'.encode('utf-8'))
</code></pre>
<p>Also, Facebook doesn't show the photo on my wall. It shows "no automatic alt text available". So, how do I publish the photo correctly?</p>
<p>Thank you :)</p>
| 0 |
2016-09-10T06:08:14Z
| 39,423,387 |
<p>According to the documentation, <a href="http://facebook-sdk.readthedocs.io/en/latest/api.html#put-photo" rel="nofollow"><code>put_photo()</code></a> should return JSON containing the ID and the post ID; however, it actually returns a dictionary, i.e. the JSON has already been decoded for you. Try this:</p>
<pre><code>import facebook
graph = facebook.GraphAPI(access_token='mytoken', version='2.7')
photo = graph.put_photo(image=open(r'E:\Facebook\myphoto.jpg', 'rb'), message='Cool'.encode('utf-8'))
print(photo)
print('id: {}'.format(photo['id']))
print('post_id: {}'.format(photo['post_id']))
</code></pre>
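<p>Because the return value is a plain <code>dict</code>, ordinary dictionary access works on it. A minimal sketch (the values below are made-up placeholders, not real Graph API IDs):</p>

```python
# Hypothetical return value, shaped like the dict put_photo() gives back
photo = {'id': '10150000000000001', 'post_id': '12345_67890'}

# Square-bracket access raises KeyError for a missing key; dict.get returns None
print(photo['id'])               # 10150000000000001
print(photo.get('post_id'))      # 12345_67890
print(photo.get('missing_key'))  # None
```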
| 1 |
2016-09-10T06:45:26Z
|
[
"python",
"facebook",
"facebook-graph-api"
] |
to convert a MCQ in the required format - python
| 39,423,212 |
<p>I am trying to convert an MCQ, which is as follows:</p>
<pre><code>Which will legally declare, construct, and initialize an array?
A. int [] myList = {"1", "2", "3"};
B. int [] myList = (5, 8, 2);
C. int myList [] [] = {4,9,7,0};
D. int myList [] = {4, 3, 7};
ANSWER: D
</code></pre>
<p>into the required format, as shown below:</p>
<pre><code>Which will legally declare, construct, and initialize an array?
int [] myList = {"1", "2", "3"};
int [] myList = (5, 8, 2);
int myList [] [] = {4,9,7,0};
*int myList [] = {4, 3, 7};
</code></pre>
<p>The logic I'm trying to implement is as follows:</p>
<pre><code>1. Open the text file and fetch the line number of "ANSWER: D"
2. Open the file again and go to that line number
3. Write a for loop which iterates at most 5 times, until the match
   "D." is found.
4. Once the match is found, replace it with '*'
</code></pre>
<p>Below is the code I tried:</p>
<pre><code>import re

ans = []
line_no = []

class Options:
    def __init__(self, ans1, num1):
        self.a = ans1
        self.n = num1
        #print(self.a, self.n)

pattern = 'ANSWER: [A-Z]'  # to fetch the answer of each question
r = re.compile(pattern)
pattern1 = '[A-F]\.\s'
re1 = re.compile(pattern1)

with open(r"C:\Users\dhvani\Desktop\test.txt", "r") as f:
    for num, line in enumerate(f, 1):
        d = r.findall(line)
        if(d):
            l = d[0].split(":")
            m = l[1].split(" ")
            m = m[1] + "."
            ans.append(m)
            line_no.append(num)

x = Options(ans, line_no)
print(x.a, x.n)

with open(r"C:\Users\dhvani\Desktop\test.txt", "r") as f:
    for i, j in enumerate(ans):
        j1 = j[0]
        z = f.readlines()[j1 - 1]
        print(z)
        for n in range(j1 - 1, j1 - 7, -1):
            value = f.readline()
            value1 = re1.findall(value)
            if value1:
                if value1 == i:
                    value.sub('[A-F]\.\s', '*', value)
                    break;
</code></pre>
<p>I am able to fetch the line number of "ANSWER: D" and store 'D.' and its corresponding line number in two different lists.</p>
<p>The later steps, however, are unsuccessful.</p>
<p>Any help will be greatly appreciated.
I am new to Python.</p>
| 3 |
2016-09-10T06:19:27Z
| 39,424,334 |
<p>You could use the following code: it reads from a file, makes the required modifications and writes the result to a new text file.</p>
<pre><code>import re

with open("your_input_filename_here", "r") as fp:
    string = fp.read()

rx = re.compile(r'''
    (?!^[A-E]\.)
    (?P<question>.+[\n\r])
    (?P<choices>[\s\S]+?)
    ^ANSWER:\ (?P<answer>[A-E])
    ''', re.MULTILINE | re.VERBOSE)

rq = re.compile(r'^[A-E]\.\ (?P<choice>.+)')

def repl(line, answer):
    # mark the correct choice with '*', strip the letter prefix from the rest
    if line.startswith(answer):
        return rq.sub(r"*\1", line)
    return rq.sub(r"\1", line)

with open("your_output_filename_here", "w") as f1:
    for match in rx.finditer(string):
        lines = [repl(line, match.group('answer'))
                 for line in match.group('choices').split("\n")
                 if line]
        block = match.group('question') + "\n".join(lines)
        #print(block)
        f1.write(block + "\n\n")
</code></pre>
<p>This splits each block into its question, choices and answer parts and processes them afterwards. See <a href="http://ideone.com/Mvp0PE" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
| 1 |
2016-09-10T08:50:28Z
|
[
"python",
"regex",
"file"
] |
Comparing by section two numpy arrays in python
| 39,423,331 |
<p>I want to do a comparison between two arrays by section.
So far I can only get the result for the whole arrays.</p>
<pre><code>import numpy as np
array1 = np.array(list(np.zeros(20))+(list(np.ones(20)))+(list(2*np.ones(20))))
array2 = np.array(list(np.ones(20))+(list(np.ones(20)))+(list(3*np.ones(20))))
result = np.sum(array1 == array2)
print 'all result :' + str(result)
</code></pre>
<p>How can I get the result in parts, e.g. for the first 20 elements, the second 20 elements and the third 20 elements of the array?
The result should be:</p>
<p>all result : 20</p>
<p>result for first 20 elements : 0</p>
<p>result for second 20 elements : 20</p>
<p>result for third 20 elements : 0</p>
| 3 |
2016-09-10T06:37:37Z
| 39,423,368 |
<p>Just sum each group of 20 separately:</p>
<pre><code>matches = array1 == array2
print 'first 20: {}'.format(matches[:20].sum())
print 'second 20: {}'.format(matches[20:40].sum())
print 'third 20: {}'.format(matches[40:60].sum())
</code></pre>
<p><code>np.sum(x)</code> is usually equivalent to <code>x.sum()</code></p>
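<p>The same idea generalizes to a loop if you don't want to hard-code the three slices (a chunk size of 20 is assumed, matching the question):</p>

```python
import numpy as np

# Rebuild the arrays from the question and compare them
array1 = np.concatenate([np.zeros(20), np.ones(20), 2 * np.ones(20)])
array2 = np.concatenate([np.ones(20), np.ones(20), 3 * np.ones(20)])
matches = array1 == array2

# Sum the matches chunk by chunk
chunk = 20
counts = [int(matches[i:i + chunk].sum()) for i in range(0, matches.size, chunk)]
print(counts)  # [0, 20, 0]
```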
| 1 |
2016-09-10T06:43:22Z
|
[
"python",
"arrays",
"python-2.7",
"numpy"
] |
Comparing by section two numpy arrays in python
| 39,423,331 |
<p>I want to do a comparison between two arrays by section.
So far I can only get the result for the whole arrays.</p>
<pre><code>import numpy as np
array1 = np.array(list(np.zeros(20))+(list(np.ones(20)))+(list(2*np.ones(20))))
array2 = np.array(list(np.ones(20))+(list(np.ones(20)))+(list(3*np.ones(20))))
result = np.sum(array1 == array2)
print 'all result :' + str(result)
</code></pre>
<p>How can I get the result in parts, e.g. for the first 20 elements, the second 20 elements and the third 20 elements of the array?
The result should be:</p>
<p>all result : 20</p>
<p>result for first 20 elements : 0</p>
<p>result for second 20 elements : 20</p>
<p>result for third 20 elements : 0</p>
| 3 |
2016-09-10T06:37:37Z
| 39,423,381 |
<p>First off, let's get the mask of comparisons -</p>
<pre><code>mask = array1 == array2
</code></pre>
<p>Then, to get all sum -</p>
<pre><code>allsum = mask.sum()
</code></pre>
<p>And to get sectionwise (of length <code>20</code>) sum -</p>
<pre><code>section_sums = mask.reshape(-1,20).sum(1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [77]: mask = array1 == array2
In [78]: mask.sum()
Out[78]: 20
In [79]: mask.reshape(-1,20).sum(1)
Out[79]: array([ 0, 20, 0])
</code></pre>
<hr>
<p><strong>For generic lengths</strong></p>
<p>If the lengths of the input arrays are not guaranteed to be a multiple of <code>20</code>, we could use an approach based on <code>np.bincount</code> to get <code>section_sums</code>, like so -</p>
<pre><code>section_sums = np.bincount(np.arange(mask.size)//20,mask)
</code></pre>
<p>Sample run -</p>
<pre><code>In [5]: a1=np.array(list(np.zeros(20))+(list(np.ones(20)))+(list(2*np.ones(17))))
...: a2=np.array(list(np.ones(20))+(list(np.ones(20)))+(list(3*np.ones(17))))
...:
In [6]: mask = a1==a2
In [7]: np.bincount(np.arange(mask.size)//20,mask)
Out[7]: array([ 0., 20., 0.])
</code></pre>
| 3 |
2016-09-10T06:45:05Z
|
[
"python",
"arrays",
"python-2.7",
"numpy"
] |