"There are few phrases, in the modern world, that can create the image of the highest quality: American customer service, Italian food, African safari, Caribbean beaches, British sense of humour. None of these quite drum up the image of perfection like "German Engineering”. The connotations of the phrase itself are almost certainly born from the famed quality, reliability and longevity of German cars." Taken from Top 5 German Car Brands.
In this project, we will analyze a dataset of used car listings from eBay Kleinanzeigen, the classifieds section of the German eBay website.
The dataset was originally scraped and uploaded to Kaggle, where it was cleaned to be easier to work with. The version used here was then modified: it was dirtied a bit to more closely resemble what you would expect from a freshly scraped dataset. The aim of this project is to get more familiar with cleaning and analyzing the included used car listings.
Below are some points describing the columns in the data:
dateCrawled
- When this ad was first crawled. All field-values are taken from this date.
name
- Name of the car.
seller
- Whether the seller is private or a dealer.
offerType
- The type of listing.
price
- The price on the ad to sell the car.
abtest
- Whether the listing is included in an A/B test.
vehicleType
- The vehicle type.
yearOfRegistration
- The year in which the car was first registered.
gearbox
- The transmission type.
powerPS
- The power of the car in PS.
model
- The car model name.
kilometer
- How many kilometers the car has driven.
monthOfRegistration
- The month in which the car was first registered.
fuelType
- What type of fuel the car uses.
brand
- The brand of the car.
notRepairedDamage
- Whether the car has damage which is not yet repaired.
dateCreated
- The date on which the eBay listing was created.
nrOfPictures
- The number of pictures in the ad.
postalCode
- The postal code for the location of the vehicle.
lastSeenOnline
- When the crawler saw this ad last online.
So, of course, the first thing we'll need to do is cleaning. To do that, let's read the data first to find out what needs to be cleaned.
import numpy as np
import pandas as pd
#reading the csv file
autos = pd.read_csv("autos.csv", encoding="Latin-1")
autos
dateCrawled | name | seller | offerType | price | abtest | vehicleType | yearOfRegistration | gearbox | powerPS | model | odometer | monthOfRegistration | fuelType | brand | notRepairedDamage | dateCreated | nrOfPictures | postalCode | lastSeen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | privat | Angebot | $5,000 | control | bus | 2004 | manuell | 158 | andere | 150,000km | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 0 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | privat | Angebot | $8,500 | control | limousine | 1997 | automatik | 286 | 7er | 150,000km | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 0 | 71034 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | Volkswagen_Golf_1.6_United | privat | Angebot | $8,990 | test | limousine | 2009 | manuell | 102 | golf | 70,000km | 7 | benzin | volkswagen | nein | 2016-03-26 00:00:00 | 0 | 35394 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | privat | Angebot | $4,350 | control | kleinwagen | 2007 | automatik | 71 | fortwo | 70,000km | 6 | benzin | smart | nein | 2016-03-12 00:00:00 | 0 | 33729 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | privat | Angebot | $1,350 | test | kombi | 2003 | manuell | 0 | focus | 150,000km | 7 | benzin | ford | nein | 2016-04-01 00:00:00 | 0 | 39218 | 2016-04-01 14:38:50 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
49995 | 2016-03-27 14:38:19 | Audi_Q5_3.0_TDI_qu._S_tr.__Navi__Panorama__Xenon | privat | Angebot | $24,900 | control | limousine | 2011 | automatik | 239 | q5 | 100,000km | 1 | diesel | audi | nein | 2016-03-27 00:00:00 | 0 | 82131 | 2016-04-01 13:47:40 |
49996 | 2016-03-28 10:50:25 | Opel_Astra_F_Cabrio_Bertone_Edition___TÜV_neu+... | privat | Angebot | $1,980 | control | cabrio | 1996 | manuell | 75 | astra | 150,000km | 5 | benzin | opel | nein | 2016-03-28 00:00:00 | 0 | 44807 | 2016-04-02 14:18:02 |
49997 | 2016-04-02 14:44:48 | Fiat_500_C_1.2_Dualogic_Lounge | privat | Angebot | $13,200 | test | cabrio | 2014 | automatik | 69 | 500 | 5,000km | 11 | benzin | fiat | nein | 2016-04-02 00:00:00 | 0 | 73430 | 2016-04-04 11:47:27 |
49998 | 2016-03-08 19:25:42 | Audi_A3_2.0_TDI_Sportback_Ambition | privat | Angebot | $22,900 | control | kombi | 2013 | manuell | 150 | a3 | 40,000km | 11 | diesel | audi | nein | 2016-03-08 00:00:00 | 0 | 35683 | 2016-04-05 16:45:07 |
49999 | 2016-03-14 00:42:12 | Opel_Vectra_1.6_16V | privat | Angebot | $1,250 | control | limousine | 1996 | manuell | 101 | vectra | 150,000km | 1 | benzin | opel | nein | 2016-03-13 00:00:00 | 0 | 45897 | 2016-04-06 21:18:48 |
50000 rows × 20 columns
So, there are 50,000 cars listed in the data with 20 columns to describe each. Let's continue to find out some more information.
print(autos.info())
print('\n')
print(autos.head())
<class 'pandas.core.frame.DataFrame'> RangeIndex: 50000 entries, 0 to 49999 Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 dateCrawled 50000 non-null object 1 name 50000 non-null object 2 seller 50000 non-null object 3 offerType 50000 non-null object 4 price 50000 non-null object 5 abtest 50000 non-null object 6 vehicleType 44905 non-null object 7 yearOfRegistration 50000 non-null int64 8 gearbox 47320 non-null object 9 powerPS 50000 non-null int64 10 model 47242 non-null object 11 odometer 50000 non-null object 12 monthOfRegistration 50000 non-null int64 13 fuelType 45518 non-null object 14 brand 50000 non-null object 15 notRepairedDamage 40171 non-null object 16 dateCreated 50000 non-null object 17 nrOfPictures 50000 non-null int64 18 postalCode 50000 non-null int64 19 lastSeen 50000 non-null object dtypes: int64(5), object(15) memory usage: 7.6+ MB None dateCrawled name \ 0 2016-03-26 17:47:46 Peugeot_807_160_NAVTECH_ON_BOARD 1 2016-04-04 13:38:56 BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik 2 2016-03-26 18:57:24 Volkswagen_Golf_1.6_United 3 2016-03-12 16:58:10 Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... 4 2016-04-01 14:38:50 Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... 
seller offerType price abtest vehicleType yearOfRegistration \ 0 privat Angebot $5,000 control bus 2004 1 privat Angebot $8,500 control limousine 1997 2 privat Angebot $8,990 test limousine 2009 3 privat Angebot $4,350 control kleinwagen 2007 4 privat Angebot $1,350 test kombi 2003 gearbox powerPS model odometer monthOfRegistration fuelType \ 0 manuell 158 andere 150,000km 3 lpg 1 automatik 286 7er 150,000km 6 benzin 2 manuell 102 golf 70,000km 7 benzin 3 automatik 71 fortwo 70,000km 6 benzin 4 manuell 0 focus 150,000km 7 benzin brand notRepairedDamage dateCreated nrOfPictures \ 0 peugeot nein 2016-03-26 00:00:00 0 1 bmw nein 2016-04-04 00:00:00 0 2 volkswagen nein 2016-03-26 00:00:00 0 3 smart nein 2016-03-12 00:00:00 0 4 ford nein 2016-04-01 00:00:00 0 postalCode lastSeen 0 79588 2016-04-06 06:45:54 1 71034 2016-04-06 14:45:08 2 35394 2016-04-06 20:15:37 3 33729 2016-03-15 03:16:28 4 39218 2016-04-01 14:38:50
And here are some things we can notice from the findings above:
Let's find columns that hold numbers but are stored as strings and convert them to numerical values, columns that are irrelevant to our work, and wrong values that could ruin our analysis. But before that, we should convert the column names from camelcase to snakecase to improve readability.
print("Column names:", autos.columns)
Column names: Index(['dateCrawled', 'name', 'seller', 'offerType', 'price', 'abtest', 'vehicleType', 'yearOfRegistration', 'gearbox', 'powerPS', 'model', 'odometer', 'monthOfRegistration', 'fuelType', 'brand', 'notRepairedDamage', 'dateCreated', 'nrOfPictures', 'postalCode', 'lastSeen'], dtype='object')
autos.columns = ['date_crawled', 'name', 'seller', 'offer_type', 'price', 'abtest', 'vehicle_type', 'registration_year', 'gearbox', 'power_ps', 'model', 'odometer', 'registration_month', 'fuel_type', 'brand', 'unrepaired_damage', 'ad_created', 'pictures_nr', 'postal_code', 'last_seen']
autos.head()
date_crawled | name | seller | offer_type | price | abtest | vehicle_type | registration_year | gearbox | power_ps | model | odometer | registration_month | fuel_type | brand | unrepaired_damage | ad_created | pictures_nr | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | privat | Angebot | $5,000 | control | bus | 2004 | manuell | 158 | andere | 150,000km | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 0 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | privat | Angebot | $8,500 | control | limousine | 1997 | automatik | 286 | 7er | 150,000km | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 0 | 71034 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | Volkswagen_Golf_1.6_United | privat | Angebot | $8,990 | test | limousine | 2009 | manuell | 102 | golf | 70,000km | 7 | benzin | volkswagen | nein | 2016-03-26 00:00:00 | 0 | 35394 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | privat | Angebot | $4,350 | control | kleinwagen | 2007 | automatik | 71 | fortwo | 70,000km | 6 | benzin | smart | nein | 2016-03-12 00:00:00 | 0 | 33729 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | privat | Angebot | $1,350 | test | kombi | 2003 | manuell | 0 | focus | 150,000km | 7 | benzin | ford | nein | 2016-04-01 00:00:00 | 0 | 39218 | 2016-04-01 14:38:50 |
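One caveat about the rename above: assigning a full hand-written list to autos.columns matches names by position, so it silently misassigns labels if the column order ever changes. A sketch of an order-independent alternative using DataFrame.rename, shown on a hypothetical shortened mapping rather than the full column set:

```python
import pandas as pd

# Hypothetical shortened mapping; rename matches by name, not position,
# so it stays correct even if the column order changes.
mapping = {
    "dateCrawled": "date_crawled",
    "yearOfRegistration": "registration_year",
    "powerPS": "power_ps",
    "notRepairedDamage": "unrepaired_damage",
}

demo = pd.DataFrame(columns=["dateCrawled", "yearOfRegistration",
                             "powerPS", "notRepairedDamage"])
demo = demo.rename(columns=mapping)
print(list(demo.columns))
# ['date_crawled', 'registration_year', 'power_ps', 'unrepaired_damage']
```

Columns missing from the mapping are simply left untouched, which also makes this approach safer for partial renames.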
autos.describe(include='all')
date_crawled | name | seller | offer_type | price | abtest | vehicle_type | registration_year | gearbox | power_ps | model | odometer | registration_month | fuel_type | brand | unrepaired_damage | ad_created | pictures_nr | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 50000 | 50000 | 50000 | 50000 | 50000 | 50000 | 44905 | 50000.000000 | 47320 | 50000.000000 | 47242 | 50000 | 50000.000000 | 45518 | 50000 | 40171 | 50000 | 50000.0 | 50000.000000 | 50000 |
unique | 48213 | 38754 | 2 | 2 | 2357 | 2 | 8 | NaN | 2 | NaN | 245 | 13 | NaN | 7 | 40 | 2 | 76 | NaN | NaN | 39481 |
top | 2016-03-27 22:55:05 | Ford_Fiesta | privat | Angebot | $0 | test | limousine | NaN | manuell | NaN | golf | 150,000km | NaN | benzin | volkswagen | nein | 2016-04-03 00:00:00 | NaN | NaN | 2016-04-07 06:17:27 |
freq | 3 | 78 | 49999 | 49999 | 1421 | 25756 | 12859 | NaN | 36993 | NaN | 4024 | 32424 | NaN | 30107 | 10687 | 35232 | 1946 | NaN | NaN | 8 |
mean | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2005.073280 | NaN | 116.355920 | NaN | NaN | 5.723360 | NaN | NaN | NaN | NaN | 0.0 | 50813.627300 | NaN |
std | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 105.712813 | NaN | 209.216627 | NaN | NaN | 3.711984 | NaN | NaN | NaN | NaN | 0.0 | 25779.747957 | NaN |
min | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1000.000000 | NaN | 0.000000 | NaN | NaN | 0.000000 | NaN | NaN | NaN | NaN | 0.0 | 1067.000000 | NaN |
25% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1999.000000 | NaN | 70.000000 | NaN | NaN | 3.000000 | NaN | NaN | NaN | NaN | 0.0 | 30451.000000 | NaN |
50% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2003.000000 | NaN | 105.000000 | NaN | NaN | 6.000000 | NaN | NaN | NaN | NaN | 0.0 | 49577.000000 | NaN |
75% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2008.000000 | NaN | 150.000000 | NaN | NaN | 9.000000 | NaN | NaN | NaN | NaN | 0.0 | 71540.000000 | NaN |
max | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 9999.000000 | NaN | 17700.000000 | NaN | NaN | 12.000000 | NaN | NaN | NaN | NaN | 0.0 | 99998.000000 | NaN |
In the description above, we can see three kinds of columns: those that seem irrelevant to our analysis, those where numeric data is stored as text, and those that need more investigation.

- Nearly uniform, candidates for dropping: seller, offer_type, pictures_nr
- Numeric data stored as text: price, odometer
- Need more investigation: registration_year, power_ps

Let's go in order.
autos["seller"].value_counts()
privat 49999 gewerblich 1 Name: seller, dtype: int64
In the seller column, every car but one was sold by a "privat" seller. Because of this uniformity, the column isn't really helpful for the analysis. The same goes for the offer_type column below.
autos["offer_type"].value_counts()
Angebot 49999 Gesuch 1 Name: offer_type, dtype: int64
And here's another one: every value in the pictures_nr column is 0, which is meaningless to analyze since all rows are the same.
autos["pictures_nr"].value_counts()
0 50000 Name: pictures_nr, dtype: int64
So, these columns can just be thrown straight into the bin!
autos.drop(["seller", "offer_type", "pictures_nr"], axis=1, inplace=True)
autos.head(2)
date_crawled | name | price | abtest | vehicle_type | registration_year | gearbox | power_ps | model | odometer | registration_month | fuel_type | brand | unrepaired_damage | ad_created | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | $5,000 | control | bus | 2004 | manuell | 158 | andere | 150,000km | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | $8,500 | control | limousine | 1997 | automatik | 286 | 7er | 150,000km | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 71034 | 2016-04-06 14:45:08 |
Done! Aaand there are 17 columns left! Yay, less work for me~
Next, there are two columns which contain numerical data stored as text: price and odometer. These need to be converted to numerical values to allow a better analysis.

For price, the $ symbol and the comma separators need to be removed before converting to an integer, while odometer needs the km suffix and the commas stripped before conversion.
#converting price column to numeric values
#regex=False: a bare "$" would otherwise be treated as a regex anchor
autos["price"] = (autos["price"]
.str.replace("$", "", regex=False)
.str.replace(",", "", regex=False)
.astype(int)
)
#converting odometer column to numeric values
autos["odometer"] = (autos["odometer"]
.str.replace("km", "", regex=False)
.str.replace(",", "", regex=False)
.astype(int)
)
autos.rename({"odometer": "odometer_km"}, axis=1, inplace=True)
#the result
print(autos["price"].head())
print('\n')
print(autos["odometer_km"].head())
0 5000 1 8500 2 8990 3 4350 4 1350 Name: price, dtype: int32 0 150000 1 150000 2 70000 3 70000 4 150000 Name: odometer_km, dtype: int32
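As a side note, the same cleaning can be done in a single pass by stripping every non-digit character, since "$", "," and "km" are all non-digits. A sketch on toy values, not a replacement for the steps above:

```python
import pandas as pd

price_raw = pd.Series(["$5,000", "$8,500", "$1,350"])
odometer_raw = pd.Series(["150,000km", "70,000km", "5,000km"])

# \D matches any non-digit, so "$", "," and "km" all disappear at once
price = price_raw.str.replace(r"\D", "", regex=True).astype(int)
odometer = odometer_raw.str.replace(r"\D", "", regex=True).astype(int)

print(price.tolist())     # [5000, 8500, 1350]
print(odometer.tolist())  # [150000, 70000, 5000]
```

This is more concise, but also less explicit about which tokens are expected, so the step-by-step version above is easier to audit.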
Done! Now that these are numbers, let's investigate the data more deeply, starting with the price and odometer_km columns, since their numerical summaries didn't appear in the earlier description.
autos["price"].describe()
count 5.000000e+04 mean 9.840044e+03 std 4.811044e+05 min 0.000000e+00 25% 1.100000e+03 50% 2.950000e+03 75% 7.200000e+03 max 1.000000e+08 Name: price, dtype: float64
Whoops. It turns out that there is a mistake: the maximum price is 100 million dollars, more than ten thousand times the 75th-percentile price. This needs more investigation to find which values are wrong.
autos["price"].value_counts().sort_index(ascending=False).head(15)
99999999 1 27322222 1 12345678 3 11111111 2 10000000 1 3890000 1 1300000 1 1234566 1 999999 2 999990 1 350000 1 345000 1 299000 1 295000 1 265000 1 Name: price, dtype: int64
autos["price"].value_counts().sort_index(ascending=True).head(10)
0 1421 1 156 2 3 3 1 5 2 8 1 9 1 10 7 11 2 12 3 Name: price, dtype: int64
According to eBay, the most expensive vehicles sold on the site cost around $3 million, which clearly means prices above $3 million need to be removed. However, looking even below that, there is a suspicious jump in prices between $350,000 and $1 million. So, in this project, we'll remove prices above $350,000.

And since eBay auctions starting at $1 are not uncommon, we'll start the price range there and exclude the zeroes.
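That "jump" can also be located programmatically: sort the candidate prices and look for the largest gap between neighbors. A small sketch on a hypothetical handful of the high prices seen above:

```python
import pandas as pd

# a few of the high prices observed above, in arbitrary order
high_prices = pd.Series([999990, 299000, 350000, 1234566,
                         295000, 999999, 1300000])

ordered = high_prices.sort_values().reset_index(drop=True)
gaps = ordered.diff()             # difference to the previous (smaller) price
cut = ordered[gaps.idxmax() - 1]  # largest price just before the biggest jump
print(int(cut))  # 350000
```

On this sample the biggest gap sits right after $350,000, which matches the cutoff chosen by eye.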
autos = autos[autos["price"].between(1,351000)]
autos["price"].describe()
count 48565.000000 mean 5888.935591 std 9059.854754 min 1.000000 25% 1200.000000 50% 3000.000000 75% 7490.000000 max 350000.000000 Name: price, dtype: float64
Nice. This looks much better.
Continuing on, let's check the odometer_km column. As a reminder, this column describes how many kilometers the car has driven.
autos["odometer_km"].value_counts()
150000 31414 125000 5057 100000 2115 90000 1734 80000 1415 70000 1217 60000 1155 50000 1012 5000 836 40000 815 30000 780 20000 762 10000 253 Name: odometer_km, dtype: int64
Great! There doesn't seem to be any problem here at all!
On the other hand, two columns still need to be investigated: registration_year and power_ps. The max value in the registration_year column is the year 9999, which doesn't exist. Meanwhile, the max of the power_ps column far exceeds its 75th percentile. We need to check whether something is wrong with the data in these columns, and remove it if so. Let's start with registration_year.
autos["registration_year"].value_counts().sort_index()
1000 1 1001 1 1111 1 1800 2 1910 5 .. 5911 1 6200 1 8888 1 9000 1 9999 3 Name: registration_year, Length: 95, dtype: int64
See, there are so many incorrectly entered registration years. This column should describe the year the car was first registered, so it shouldn't be possible for a value to be above 2016, when the data was collected. But it's also not possible that a registration happened as long ago as the year 1800, let alone 1000.
Since the data we're analyzing originates from Germany, let's limit the registration year to the era of car registration in Germany, which starts in 1896: Germany was the second country to introduce registration plates, issued by the police in 1896, based on this source. So only years between 1896 and 2016 will be kept in this data.
autos = autos[autos["registration_year"].between(1896, 2016)]
autos["registration_year"].value_counts(normalize=True).sort_index(ascending=False).head(20)
2016 0.026135 2015 0.008397 2014 0.014203 2013 0.017202 2012 0.028063 2011 0.034768 2010 0.034040 2009 0.044665 2008 0.047450 2007 0.048778 2006 0.057197 2005 0.062895 2004 0.057904 2003 0.057818 2002 0.053255 2001 0.056468 2000 0.067608 1999 0.062060 1998 0.050620 1997 0.041794 Name: registration_year, dtype: float64
autos["registration_year"].value_counts(normalize=True).sort_index(ascending=False).tail(10)
1943 0.000021 1941 0.000043 1939 0.000021 1938 0.000021 1937 0.000086 1934 0.000043 1931 0.000021 1929 0.000021 1927 0.000021 1910 0.000107 Name: registration_year, dtype: float64
And here's the result! This looks much more rational and realistic. Furthermore, there's a logical reason why cars from recent years appear far more often than older ones: car production and ownership have grown steadily, so many more cars registered in the 2000s are still on the market than cars from the early decades of the automobile.
Let's go on to the power_ps column. This column describes the power of the car in PS (Pferdestärke), which literally means "horse strength". It is roughly equivalent to hp or bhp.
autos["power_ps"].value_counts().sort_index()
0 4507 1 4 2 2 3 2 4 4 ... 8404 1 14009 1 15001 1 16312 1 17700 1 Name: power_ps, Length: 437, dtype: int64
Whaat?? 0 PS? Is there even a car with an engine that doesn't give off any power? I did not expect this. More likely, 0 means the seller didn't know or didn't enter the engine's power. I should remove these 0s.

Wait, what's that 17,700 PS at the bottom? That's also weird. As far as I could find, even extreme purpose-built racing machines only reach a few thousand PS, and the most common range for daily-use cars is 120-150 hp (almost equivalent to 120-150 PS).
autos["power_ps"].value_counts().sort_index(ascending=False).head(20)
17700 1 16312 1 15001 1 14009 1 8404 1 7511 1 6512 1 6226 1 6045 1 5867 1 4400 1 3750 1 3500 1 2729 1 1998 1 1988 1 1800 1 1796 1 1793 1 1781 1 Name: power_ps, dtype: int64
Hmm.. this is not possible. Let's check how much a car with a 17,700 PS engine would be sold for..
autos[autos["power_ps"] == 17700][["name","price","power_ps","brand"]]
name | price | power_ps | brand | |
---|---|---|---|---|
36421 | Verkaufe_meinen_bmw_525d | 6000 | 17700 | bmw |
$6,000 for a super-powered engine car?! No way.

Then, what about the 8,404 PS car? Since the gap between 8,404 and 14,009 is rather large, I will decide whether to throw out the 8,404 too after seeing its price.
autos[autos["power_ps"] == 8404][["name","price","power_ps","brand"]]
name | price | power_ps | brand | |
---|---|---|---|---|
41172 | Suzuki_Baleno_1_3l_GS_TÜV_neu | 950 | 8404 | suzuki |
Yup. $950 for a wonderful 8,404 PS engine car. Throw this one away too.
I need to find a reasonable cutoff for engine power. Let's look at the median and the percentiles.
autos["power_ps"].describe()
count 46681.000000 mean 117.892933 std 184.922911 min 0.000000 25% 75.000000 50% 109.000000 75% 150.000000 max 17700.000000 Name: power_ps, dtype: float64
As expected, the 75th percentile sits in the normal power range for daily-use cars. According to this guide, 300 PS in a regular car is still realistic. So I'm inclined to set the limit between 1 PS and 300 PS, but first let's check whether the power of the highest-priced listing is plausible.
#if you recall the cut above,
#the current highest price of a car is $350000
autos[autos["price"] == 350000][["name","price","power_ps","brand"]]
name | price | power_ps | brand | |
---|---|---|---|---|
36818 | Porsche_991 | 350000 | 500 | porsche |
Wow, so the highest-priced car has a power of 500 PS! Not bad; this is still acceptable since it's not far-fetched like the 17,700 or 8,404 PS entries. So I'll set the range between 1 and 500 instead.
autos = autos[autos["power_ps"].between(1,500)]
autos["power_ps"].describe()
count 42065.000000 mean 126.050731 std 60.378704 min 1.000000 25% 80.000000 50% 116.000000 75% 150.000000 max 500.000000 Name: power_ps, dtype: float64
Now that the irregularities have been removed, about 42,000 rows are left. But that's alright! It's still a lot of data, plenty to be analyzed.
There are three columns which represent date values stored as strings: date_crawled, last_seen, and ad_created. We should extract their date parts to be able to analyze them quantitatively. First, let's see how they are represented.
autos[["date_crawled", "ad_created", "last_seen"]][:5]
date_crawled | ad_created | last_seen | |
---|---|---|---|
0 | 2016-03-26 17:47:46 | 2016-03-26 00:00:00 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | 2016-04-04 00:00:00 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | 2016-03-26 00:00:00 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | 2016-03-12 00:00:00 | 2016-03-15 03:16:28 |
5 | 2016-03-21 13:47:45 | 2016-03-21 00:00:00 | 2016-04-06 09:45:21 |
The first 10 characters are the date and the last 8 are the time. Since we only need the date, we can extract it, compute the distribution, and sort by index. Phew, sounds complicated.
(autos["date_crawled"]
# to select only the first 10 characters
.str[:10]
# normalize to use percentages,
# unable dropna to include missing values in the distribution
.value_counts(normalize=True, dropna=False)
# to rank by date from earliest to latest
.sort_index(ascending=True)
)
2016-03-05 0.025247 2016-03-06 0.014287 2016-03-07 0.036444 2016-03-08 0.033353 2016-03-09 0.032854 2016-03-10 0.032260 2016-03-11 0.032093 2016-03-12 0.037157 2016-03-13 0.015856 2016-03-14 0.036610 2016-03-15 0.034209 2016-03-16 0.029336 2016-03-17 0.031404 2016-03-18 0.012766 2016-03-19 0.034375 2016-03-20 0.038583 2016-03-21 0.037181 2016-03-22 0.032688 2016-03-23 0.031594 2016-03-24 0.029454 2016-03-25 0.031784 2016-03-26 0.032402 2016-03-27 0.030714 2016-03-28 0.035160 2016-03-29 0.033686 2016-03-30 0.033662 2016-03-31 0.031808 2016-04-01 0.034043 2016-04-02 0.035635 2016-04-03 0.039035 2016-04-04 0.036824 2016-04-05 0.013027 2016-04-06 0.003090 2016-04-07 0.001379 Name: date_crawled, dtype: float64
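The string slice works here because the timestamps are fixed-width ISO strings; if the format could vary, parsing with pd.to_datetime would be safer. A minimal sketch on two sample timestamps:

```python
import pandas as pd

stamps = pd.Series(["2016-03-26 17:47:46", "2016-04-04 13:38:56"])

# parse to real datetimes, then keep just the calendar date
dates = pd.to_datetime(stamps).dt.date
print(dates.astype(str).tolist())  # ['2016-03-26', '2016-04-04']
```

Proper datetimes also unlock date arithmetic later (differences, resampling), which plain string slices can't do.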
It seems the crawl frequency varies from date to date, so let's sort by value instead.
(autos["date_crawled"]
.str[:10]
.value_counts(normalize=True, dropna=False)
.sort_values(ascending=False)
)
2016-04-03 0.039035 2016-03-20 0.038583 2016-03-21 0.037181 2016-03-12 0.037157 2016-04-04 0.036824 2016-03-14 0.036610 2016-03-07 0.036444 2016-04-02 0.035635 2016-03-28 0.035160 2016-03-19 0.034375 2016-03-15 0.034209 2016-04-01 0.034043 2016-03-29 0.033686 2016-03-30 0.033662 2016-03-08 0.033353 2016-03-09 0.032854 2016-03-22 0.032688 2016-03-26 0.032402 2016-03-10 0.032260 2016-03-11 0.032093 2016-03-31 0.031808 2016-03-25 0.031784 2016-03-23 0.031594 2016-03-17 0.031404 2016-03-27 0.030714 2016-03-24 0.029454 2016-03-16 0.029336 2016-03-05 0.025247 2016-03-13 0.015856 2016-03-06 0.014287 2016-04-05 0.013027 2016-03-18 0.012766 2016-04-06 0.003090 2016-04-07 0.001379 Name: date_crawled, dtype: float64
Hmmm.. it looks like the crawler did most of its scraping between early March and early April, with the busiest days falling in late March and early April.
Let's find out about the remaining two columns.
(autos["ad_created"]
.str[:10]
.value_counts(normalize=True, dropna=False)
.sort_values(ascending=False)
)
2016-04-03 0.039320 2016-03-20 0.038631 2016-03-21 0.037371 2016-04-04 0.037133 2016-03-12 0.036967 ... 2016-02-09 0.000024 2016-02-08 0.000024 2016-02-07 0.000024 2015-12-05 0.000024 2016-01-16 0.000024 Name: ad_created, Length: 72, dtype: float64
ad_created describes the date on which the eBay listing was created. Listings were created on 72 different dates, and most were created during March and April of 2016, nearly the same range as the crawl dates, though a few were created months before the latest listing date.
(autos["last_seen"]
.str[:10]
.value_counts(normalize=True, dropna=False)
.sort_index(ascending=True)
)
2016-03-05 0.001022 2016-03-06 0.003946 2016-03-07 0.004969 2016-03-08 0.006918 2016-03-09 0.009462 2016-03-10 0.010199 2016-03-11 0.012077 2016-03-12 0.023392 2016-03-13 0.008344 2016-03-14 0.012576 2016-03-15 0.015761 2016-03-16 0.015904 2016-03-17 0.027434 2016-03-18 0.007132 2016-03-19 0.015761 2016-03-20 0.020611 2016-03-21 0.020017 2016-03-22 0.020445 2016-03-23 0.018257 2016-03-24 0.019303 2016-03-25 0.018828 2016-03-26 0.016379 2016-03-27 0.015143 2016-03-28 0.020302 2016-03-29 0.021728 2016-03-30 0.024724 2016-03-31 0.023345 2016-04-01 0.023060 2016-04-02 0.024890 2016-04-03 0.024843 2016-04-04 0.023773 2016-04-05 0.127493 2016-04-06 0.227386 2016-04-07 0.134577 Name: last_seen, dtype: float64
The last_seen column records the date the crawler last saw a car's ad online, which can also tell us when the ad was removed, possibly because the car was sold.

The share of last-seen dates grows over time, peaking on the 6th of April with the smallest share a month before. Since the crawler only ran for about a month between March and April, every ad still active at the end gets one of the final dates; thus the large spike in the last three days reflects the end of crawling rather than a surge in car sales.
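If we wanted to estimate how long an ad stayed online, the gap between date_crawled and last_seen can be computed directly once both are parsed as datetimes; a sketch on one listing's timestamps:

```python
import pandas as pd

crawled = pd.to_datetime(pd.Series(["2016-03-26 17:47:46"]))
last_seen = pd.to_datetime(pd.Series(["2016-04-06 06:45:54"]))

# whole days between the first crawl and the last sighting
days_online = (last_seen - crawled).dt.days
print(days_online.tolist())  # [10]
```

For ads last seen on the final crawl days this is only a lower bound, since the true removal date is unknown.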
When speaking about cars, I really mean car brands. Is it the famous Porsche? Or my rich neighbor's Lexus? Or instead some Toyota my dad likes to drive? Let's find out which brands sell the most on eBay.
brand = autos["brand"].value_counts(normalize=True)
brand
volkswagen 0.212386 bmw 0.113966 opel 0.105931 mercedes_benz 0.096351 audi 0.089433 ford 0.068418 renault 0.044717 peugeot 0.029692 fiat 0.024843 seat 0.018543 skoda 0.017116 mazda 0.015381 nissan 0.014763 smart 0.013907 citroen 0.013764 toyota 0.012885 hyundai 0.010270 mini 0.009390 volvo 0.009366 mitsubishi 0.008225 honda 0.007988 sonstige_autos 0.007821 kia 0.007156 alfa_romeo 0.006775 porsche 0.006157 suzuki 0.005919 chevrolet 0.005515 chrysler 0.003542 dacia 0.002710 jeep 0.002330 daihatsu 0.002258 subaru 0.002211 land_rover 0.002163 saab 0.001712 jaguar 0.001450 daewoo 0.001284 rover 0.001189 lancia 0.001022 trabant 0.000951 lada 0.000499 Name: brand, dtype: float64
We can see here the relative frequency of car brands sold on the German eBay website. Overall, German brands like Volkswagen, BMW, Opel, Mercedes-Benz and Audi prevail in their own country; Volkswagen alone accounts for almost twice as many listings as BMW. Meanwhile, brands like Lada, Trabant and Lancia, with their old, retro styling, are the least popular.
As this is the German eBay website, I decided to aggregate by the top 5 car brands. Seeing that the top 5 are all from Germany was one reason, but the fact that each accounts for more than 5% of all listings also proves how popular they are. Let's find the average price of these five favorite brands.
top_5_mean_price = {}
top_5_brands = brand.index[:5]
for b in top_5_brands:
selected_rows = autos[autos["brand"] == b]
mean_price = selected_rows["price"].mean()
top_5_mean_price[b] = mean_price
top_5_mean_price
{'volkswagen': 5686.671367808372, 'bmw': 8535.667918231122, 'opel': 3148.439856373429, 'mercedes_benz': 8857.581297804096, 'audi': 9654.76182881446}
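For the record, the loop-and-dictionary pattern above mirrors what pandas' groupby does in one line; a sketch on toy data (not the actual dataset):

```python
import pandas as pd

toy = pd.DataFrame({
    "brand": ["vw", "bmw", "vw", "opel", "bmw"],
    "price": [5000, 9000, 6000, 3000, 8000],
})

# groupby replaces the manual loop: one mean per brand
mean_price = toy.groupby("brand")["price"].mean()
print(mean_price.to_dict())  # {'bmw': 8500.0, 'opel': 3000.0, 'vw': 5500.0}
```

The explicit loop is arguably clearer for learning, but groupby scales better when aggregating many columns or statistics at once.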
To state the obvious: Volkswagen is cheaper, and Opel much cheaper, than the other famous brands like BMW, Mercedes and Audi.

While Opel doesn't sound as familiar to the rest of the world as Mercedes-Benz or Audi, it is still a big brand name like the rest. The comparatively smaller publicity and fame, though, might explain why its prices are so much lower.

The same might go for Volkswagen. Thanks to its lower prices, combined with novel technology and eye-catching designs built for comfort and safety, it has secured its place as the number one brand here. For a bit of background, Volkswagen was originally created to provide ordinary people with a car they could realistically afford; its name literally means "people's car".

That affordability for the masses keeps it ahead of the rest, while the other brands continue to push upmarket at costlier prices.
But let's continue to dig deeper, using the mileage to discover whether there's a connection to the price.
top_5_mean_mileage = {}
for b in top_5_brands:
sel_rows = autos[autos["brand"] == b]
mean_mileage = sel_rows["odometer_km"].mean()
top_5_mean_mileage[b] = mean_mileage
top_5_mean_mileage
{'volkswagen': 128277.36736064473, 'bmw': 132578.22277847308, 'opel': 129196.58886894076, 'mercedes_benz': 130742.6597582038, 'audi': 128641.67995746943}
I found the average mileage (expressed in km) for each car brand, but in this form it's hard to compare against the prices. Therefore, I will put these values into a dataframe first to analyze them more efficiently.
# using series constructor to turn the dictionaries into series
bmp_series = pd.Series(top_5_mean_price)
bmm_series = pd.Series(top_5_mean_mileage)
# turn both series into a dataframe
df = pd.DataFrame(bmp_series, columns=["mean_price"])
df["mean_mileage"] = bmm_series
# result
df
mean_price | mean_mileage | |
---|---|---|
volkswagen | 5686.671368 | 128277.367361 |
bmw | 8535.667918 | 132578.222778 |
opel | 3148.439856 | 129196.588869 |
mercedes_benz | 8857.581298 | 130742.659758 |
audi | 9654.761829 | 128641.679957 |
Generally speaking, higher mileage should lead to lower prices. Yet here both the cheaper brands (Volkswagen, Opel) and the pricier ones (BMW, Mercedes-Benz, Audi) average between roughly 128,000 and 133,000 km, so mileage alone doesn't explain the price differences. That said, Audi has the lowest mileage of the five and is the most expensive.

Strictly speaking, the differences in mileage across all five brands are small, so mileage plays only a minor role in these prices.
All in all, the most popular car brands sold on the German eBay website are brands from their own country: Mercedes-Benz, BMW, Opel, Volkswagen and Audi. Among them, Volkswagen, the "people's car", prevails; its listings alone account for over 20% of the total.

Between the average price and mileage of the top five best-selling brands there is little to no connection, although the affordability and relatively low price of the Volkswagen stands out, ensuring its fame.

However, more analysis needs to be done to further understand the connection between mileage and price, to find the most common brand/model combinations, and to explore the correlation between sales and unrepaired damage.