In this project, we will work with a dataset of submissions to Hacker News, a popular technology site. On Hacker News, user-submitted stories (known as "posts") are voted and commented upon, and the posts that make it to the top of the site's listings can get hundreds of thousands of visitors as a result. Below are descriptions of the columns:

- id: the unique identifier of the post
- title: the title of the post
- url: the URL the post links to, if any
- num_points: the number of points the post acquired
- num_comments: the number of comments on the post
- author: the username of the person who submitted the post
- created_at: the date and time of the post's submission
from csv import reader

opened_file = open('hacker_news.csv')
read_file = reader(opened_file)
hn = list(read_file)

hn_header = hn[0]  # separate the header row from the data
hn = hn[1:]
print(hn_header)
print('\n')
['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at']
def explore_data(dataset, start, end, rows_and_columns=False):
    dataset_slice = dataset[start:end]
    for row in dataset_slice:
        print(row)
        print('\n')  # adds a new (empty) line after each row
    if rows_and_columns:
        print('Number of rows:', len(dataset))
        print('Number of columns:', len(dataset[0]))

explore_data(hn, 0, 5, rows_and_columns=True)
['12224879', 'Interactive Dynamic Video', 'http://www.interactivedynamicvideo.com/', '386', '52', 'ne0phyte', '8/4/2016 11:52']
['10975351', 'How to Use Open Source and Shut the Fuck Up at the Same Time', 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/', '39', '10', 'josep2', '1/26/2016 19:30']
['11964716', "Florida DJs May Face Felony for April Fools' Water Joke", 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/', '2', '1', 'vezycash', '6/23/2016 22:20']
['11919867', 'Technology ventures: From Idea to Enterprise', 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429', '3', '1', 'hswarna', '6/17/2016 0:01']
['10301696', 'Note by Note: The Making of Steinway L1037 (2007)', 'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0', '8', '2', 'walterbell', '9/30/2015 4:12']
Number of rows: 20100
Number of columns: 7
In the dataset, we're specifically interested in posts whose titles begin with either 'Ask HN' (posts that ask the Hacker News community a specific question) or 'Show HN' (posts that show the Hacker News community a project, product, or just generally something interesting).
We want to compare the two types of posts to determine the following: do 'Ask HN' or 'Show HN' posts receive more comments on average? To start, we will create new lists of lists containing just the data for those titles.
ask_posts = []
show_posts = []
other_posts = []

for row in hn:
    title = row[1]
    if title.startswith('Ask HN'):
        ask_posts.append(row)
    elif title.startswith('Show HN'):
        show_posts.append(row)
    else:
        other_posts.append(row)
print('Number of rows ask posts:', len(ask_posts))
print('\n')
print('Number of rows show post:', len(show_posts))
print('\n')
print('Number of rows other posts:', len(other_posts))
Number of rows ask posts: 1742
Number of rows show post: 1161
Number of rows other posts: 17197
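Note that str.startswith is case-sensitive, so a post titled 'ask hn: ...' would land in other_posts. If we wanted to catch those variants as well, we could lowercase each title before matching. This is a sketch over hypothetical sample rows (the counts reported above come from the case-sensitive version):

```python
def split_posts(rows):
    """Split rows into ask, show, and other posts, ignoring title case."""
    ask_posts, show_posts, other_posts = [], [], []
    for row in rows:
        title = row[1].lower()  # normalize casing before matching
        if title.startswith('ask hn'):
            ask_posts.append(row)
        elif title.startswith('show hn'):
            show_posts.append(row)
        else:
            other_posts.append(row)
    return ask_posts, show_posts, other_posts

# Hypothetical rows shaped like hn; 'ASK HN' and 'show hn' are still matched
sample = [
    ['1', 'ASK HN: Is this matched?', '', '1', '1', 'a', '1/1/2016 0:00'],
    ['2', 'show hn: my project', '', '1', '1', 'b', '1/1/2016 0:00'],
    ['3', 'Interactive Dynamic Video', '', '1', '1', 'c', '1/1/2016 0:00'],
]
ask, show, other = split_posts(sample)
print(len(ask), len(show), len(other))  # 1 1 1
```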
Below are the first five rows in the ask_posts list of lists.
explore_data(ask_posts, 0, 5)
['12296411', 'Ask HN: How to improve my personal website?', '', '2', '6', 'ahmedbaracat', '8/16/2016 9:55']
['10610020', 'Ask HN: Am I the only one outraged by Twitter shutting down share counts?', '', '28', '29', 'tkfx', '11/22/2015 13:43']
['11610310', 'Ask HN: Aby recent changes to CSS that broke mobile?', '', '1', '1', 'polskibus', '5/2/2016 10:14']
['12210105', 'Ask HN: Looking for Employee #3 How do I do it?', '', '1', '3', 'sph130', '8/2/2016 14:20']
['10394168', 'Ask HN: Someone offered to buy my browser extension from me. What now?', '', '28', '17', 'roykolak', '10/15/2015 16:38']
Below are the first five rows of the show_posts list of lists.
explore_data(show_posts, 0, 5)
['10627194', 'Show HN: Wio Link ESP8266 Based Web of Things Hardware Development Platform', 'https://iot.seeed.cc', '26', '22', 'kfihihc', '11/25/2015 14:03']
['10646440', 'Show HN: Something pointless I made', 'http://dn.ht/picklecat/', '747', '102', 'dhotson', '11/29/2015 22:46']
['11590768', 'Show HN: Shanhu.io, a programming playground powered by e8vm', 'https://shanhu.io', '1', '1', 'h8liu', '4/28/2016 18:05']
['12178806', 'Show HN: Webscope Easy way for web developers to communicate with Clients', 'http://webscopeapp.com', '3', '3', 'fastbrick', '7/28/2016 7:11']
['10872799', 'Show HN: GeoScreenshot Easily test Geo-IP based web pages', 'https://www.geoscreenshot.com/', '1', '9', 'kpsychwave', '1/9/2016 20:45']
Now, let's determine whether ask posts or show posts receive more comments on average.
def aveg_comment(dataset):
    total_comments = 0
    for row in dataset:
        num_comments = int(row[4])  # comment counts are stored as strings
        total_comments += num_comments
    avg_comments = total_comments / len(dataset)
    print(avg_comments)

aveg_comment(ask_posts)
14.044776119402986
aveg_comment(show_posts)
10.324720068906116
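The same averages can be computed more concisely with a generator expression inside sum. This sketch assumes rows shaped like those in hn, with the comment count stored as a string at index 4:

```python
def avg_comments(posts, comments_index=4):
    """Average number of comments per post (comment count stored as a string)."""
    return sum(int(row[comments_index]) for row in posts) / len(posts)

# Hypothetical rows shaped like ask_posts: 6 and 10 comments average to 8.0
sample = [
    ['1', 'Ask HN: one', '', '2', '6', 'a', '8/16/2016 9:55'],
    ['2', 'Ask HN: two', '', '28', '10', 'b', '11/22/2015 13:43'],
]
print(avg_comments(sample))  # 8.0
```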
Our analysis shows that posts whose titles begin with Ask HN receive more comments on average than posts whose titles begin with Show HN.
This means that when you ask the Hacker News community a question, you'll get more responses (maybe answers to your question) than when you are just showing them a product or project.
Since ask posts are more likely to receive comments, we'll focus our remaining analysis on these posts.
Our next task is to determine if ask posts created at a certain time are more likely to attract comments.
We'll use the following steps:
1. Calculate the number of ask posts created in each hour of the day, along with the number of comments they received.
2. Calculate the average number of comments ask posts receive for each hour of creation.
import datetime as dt

result_list = []

for row in ask_posts:
    created_at = row[6]
    num_comments = int(row[4])
    result_list.append([created_at, num_comments])
counts_by_hour = {}
comments_by_hour = {}

for row in result_list:
    date_n_time = row[0]
    num_comments = row[1]
    dt_object = dt.datetime.strptime(date_n_time, '%m/%d/%Y %H:%M')
    dt_hour = dt_object.strftime('%H')
    if dt_hour not in counts_by_hour:
        counts_by_hour[dt_hour] = 1
        comments_by_hour[dt_hour] = num_comments
    else:
        counts_by_hour[dt_hour] += 1
        comments_by_hour[dt_hour] += num_comments

print(counts_by_hour)
{'20': 80, '19': 110, '12': 73, '00': 54, '10': 59, '17': 100, '13': 85, '15': 116, '23': 68, '07': 34, '01': 60, '03': 54, '02': 58, '04': 47, '09': 45, '14': 107, '21': 109, '18': 108, '08': 48, '06': 44, '16': 108, '05': 46, '22': 71, '11': 58}
print(comments_by_hour)
{'20': 1722, '19': 1188, '12': 687, '00': 439, '10': 793, '17': 1146, '13': 1253, '15': 4477, '23': 543, '07': 267, '01': 683, '03': 421, '02': 1381, '04': 337, '09': 251, '14': 1416, '21': 1745, '18': 1430, '08': 492, '06': 397, '16': 1814, '05': 464, '22': 479, '11': 641}
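The two dictionaries above are ordinary frequency tables, and the membership check can be dropped by using collections.defaultdict, where missing keys default to 0. This is a sketch over hypothetical sample rows shaped like result_list:

```python
from collections import defaultdict
import datetime as dt

counts_by_hour = defaultdict(int)
comments_by_hour = defaultdict(int)

# Hypothetical rows shaped like result_list: [created_at, num_comments]
sample = [
    ['8/16/2016 9:55', 6],
    ['8/16/2016 9:10', 4],
    ['11/22/2015 13:43', 29],
]

for created_at, num_comments in sample:
    hour = dt.datetime.strptime(created_at, '%m/%d/%Y %H:%M').strftime('%H')
    counts_by_hour[hour] += 1            # missing keys start at 0
    comments_by_hour[hour] += num_comments

print(dict(counts_by_hour))    # {'09': 2, '13': 1}
print(dict(comments_by_hour))  # {'09': 10, '13': 29}
```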
avg_by_hour = []

for key in comments_by_hour:
    avg_value = comments_by_hour[key] / counts_by_hour[key]
    avg_by_hour.append([key, avg_value])

print(avg_by_hour)
[['20', 21.525], ['19', 10.8], ['12', 9.41095890410959], ['00', 8.12962962962963], ['10', 13.440677966101696], ['17', 11.46], ['13', 14.741176470588234], ['15', 38.5948275862069], ['23', 7.985294117647059], ['07', 7.852941176470588], ['01', 11.383333333333333], ['03', 7.796296296296297], ['02', 23.810344827586206], ['04', 7.170212765957447], ['09', 5.5777777777777775], ['14', 13.233644859813085], ['21', 16.009174311926607], ['18', 13.24074074074074], ['08', 10.25], ['06', 9.022727272727273], ['16', 16.796296296296298], ['05', 10.08695652173913], ['22', 6.746478873239437], ['11', 11.051724137931034]]
swap_avg_by_hour = []

for row in avg_by_hour:
    key = row[0]
    key_value = row[1]
    swap_avg_by_hour.append([key_value, key])

print(swap_avg_by_hour)
[[21.525, '20'], [10.8, '19'], [9.41095890410959, '12'], [8.12962962962963, '00'], [13.440677966101696, '10'], [11.46, '17'], [14.741176470588234, '13'], [38.5948275862069, '15'], [7.985294117647059, '23'], [7.852941176470588, '07'], [11.383333333333333, '01'], [7.796296296296297, '03'], [23.810344827586206, '02'], [7.170212765957447, '04'], [5.5777777777777775, '09'], [13.233644859813085, '14'], [16.009174311926607, '21'], [13.24074074074074, '18'], [10.25, '08'], [9.022727272727273, '06'], [16.796296296296298, '16'], [10.08695652173913, '05'], [6.746478873239437, '22'], [11.051724137931034, '11']]
sorted_swap = sorted(swap_avg_by_hour, reverse=True)
sorted_swap_first_five = sorted_swap[:5]
print(sorted_swap_first_five)
[[38.5948275862069, '15'], [23.810344827586206, '02'], [21.525, '20'], [16.796296296296298, '16'], [16.009174311926607, '21']]
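The swap-then-sort step above can also be done in one pass with the key argument of sorted, which avoids building swap_avg_by_hour at all. A sketch over hypothetical sample values shaped like avg_by_hour:

```python
# Hypothetical rows shaped like avg_by_hour: [hour, average_comments]
avg_by_hour = [['20', 21.525], ['15', 38.59], ['02', 23.81], ['09', 5.58]]

# Sort by the average (index 1), highest first, without swapping columns
top = sorted(avg_by_hour, key=lambda row: row[1], reverse=True)
print(top[:3])  # [['15', 38.59], ['02', 23.81], ['20', 21.525]]
```

The lambda pulls out the value to sort by, so the hour stays in its original position in each row.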
for row in sorted_swap_first_five:
    avg = row[0]
    hr = row[1]
    hr_dt_obj = dt.datetime.strptime(hr, '%H')
    hr_dt_string = hr_dt_obj.strftime('%H:%M')
    template = '{}: {:.2f} average comments per post'
    avg_per_post = template.format(hr_dt_string, avg)
    print(avg_per_post)
    print('\n')
15:00: 38.59 average comments per post
02:00: 23.81 average comments per post
20:00: 21.52 average comments per post
16:00: 16.80 average comments per post
21:00: 16.01 average comments per post
My analysis shows that there's a higher chance of receiving comments if you create a post between 15:00 and 21:00 (i.e., 3 p.m. to 9 p.m.).
From 15:00, most people have started winding down business for the day, so it makes sense to believe they have time for the community until about 21:00 (9 p.m.), when it will be time to go to bed.
Although the 02:00 (2 a.m.) mark looks favorable, I wouldn't recommend it, because it may just be an outlier: no other late-night time frame appears in the top five hours.