dateCrawled:
When this ad was first crawled. All field values are taken from this date
name:
Name of the car
seller:
Whether the seller is private or a dealer
offerType:
The type of listing
price:
The price on the ad to sell the car
abtest:
Whether the listing is included in an A/B test
vehicleType:
The vehicle type
yearOfRegistration:
The year in which the car was first registered
gearbox:
The transmission type
powerPS:
The power of the car in PS
model:
The car model name
kilometer:
How many kilometers the car has driven
monthOfRegistration:
The month in which the car was first registered
fuelType:
What type of fuel the car uses
brand:
The brand of the car
notRepairedDamage:
Whether the car has damage that has not yet been repaired
dateCreated:
The date on which the eBay listing was created
nrOfPictures:
The number of pictures in the ad
postalCode:
The postal code for the location of the vehicle
lastSeenOnline:
When the crawler saw this ad last online
Let's start by importing the libraries we need and reading the dataset into pandas.
import pandas as pd
import numpy as np
autos = pd.read_csv("autos.csv", encoding = "Latin-1")
autos.head()
dateCrawled | name | seller | offerType | price | abtest | vehicleType | yearOfRegistration | gearbox | powerPS | model | odometer | monthOfRegistration | fuelType | brand | notRepairedDamage | dateCreated | nrOfPictures | postalCode | lastSeen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | privat | Angebot | $5,000 | control | bus | 2004 | manuell | 158 | andere | 150,000km | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 0 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | privat | Angebot | $8,500 | control | limousine | 1997 | automatik | 286 | 7er | 150,000km | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 0 | 71034 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | Volkswagen_Golf_1.6_United | privat | Angebot | $8,990 | test | limousine | 2009 | manuell | 102 | golf | 70,000km | 7 | benzin | volkswagen | nein | 2016-03-26 00:00:00 | 0 | 35394 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | privat | Angebot | $4,350 | control | kleinwagen | 2007 | automatik | 71 | fortwo | 70,000km | 6 | benzin | smart | nein | 2016-03-12 00:00:00 | 0 | 33729 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | privat | Angebot | $1,350 | test | kombi | 2003 | manuell | 0 | focus | 150,000km | 7 | benzin | ford | nein | 2016-04-01 00:00:00 | 0 | 39218 | 2016-04-01 14:38:50 |
autos.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 50000 entries, 0 to 49999
Data columns (total 20 columns):
dateCrawled            50000 non-null object
name                   50000 non-null object
seller                 50000 non-null object
offerType              50000 non-null object
price                  50000 non-null object
abtest                 50000 non-null object
vehicleType            44905 non-null object
yearOfRegistration     50000 non-null int64
gearbox                47320 non-null object
powerPS                50000 non-null int64
model                  47242 non-null object
odometer               50000 non-null object
monthOfRegistration    50000 non-null int64
fuelType               45518 non-null object
brand                  50000 non-null object
notRepairedDamage      40171 non-null object
dateCreated            50000 non-null object
nrOfPictures           50000 non-null int64
postalCode             50000 non-null int64
lastSeen               50000 non-null object
dtypes: int64(5), object(15)
memory usage: 7.6+ MB
The dataset has three datetime columns: dateCrawled, dateCreated, and lastSeen. These are stored as objects, so we will have to convert them to a datetime representation. Some columns also contain missing values: vehicleType, gearbox, model, fuelType, and notRepairedDamage; these will be cleaned. We will start by taking care of the column names to make analysis easier.
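As a quick check that the date strings will parse cleanly when we convert them later, `pd.to_datetime` handles this format directly. A minimal sketch on hypothetical sample values in the same format (we keep the real columns as strings for now):

```python
import pandas as pd

# Hypothetical sample values in the same format as dateCrawled / dateCreated / lastSeen
sample = pd.Series(["2016-03-26 17:47:46", "2016-04-04 13:38:56"])

parsed = pd.to_datetime(sample)   # dtype becomes datetime64[ns]
print(parsed.dt.year.tolist())    # [2016, 2016]
```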
autos.columns = ['date_crawled', 'name', 'seller', 'offer_type', 'price', 'ab_test',
'vehicle_type', 'registration_year', 'gearbox', 'power_ps', 'model',
'odometer', 'registration_month', 'fuel_type', 'brand',
'unrepaired_damage', 'ad_created', 'num_photos', 'postal_code',
'last_seen']
autos.head()
date_crawled | name | seller | offer_type | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer | registration_month | fuel_type | brand | unrepaired_damage | ad_created | num_photos | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | privat | Angebot | $5,000 | control | bus | 2004 | manuell | 158 | andere | 150,000km | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 0 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | privat | Angebot | $8,500 | control | limousine | 1997 | automatik | 286 | 7er | 150,000km | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 0 | 71034 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | Volkswagen_Golf_1.6_United | privat | Angebot | $8,990 | test | limousine | 2009 | manuell | 102 | golf | 70,000km | 7 | benzin | volkswagen | nein | 2016-03-26 00:00:00 | 0 | 35394 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | privat | Angebot | $4,350 | control | kleinwagen | 2007 | automatik | 71 | fortwo | 70,000km | 6 | benzin | smart | nein | 2016-03-12 00:00:00 | 0 | 33729 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | privat | Angebot | $1,350 | test | kombi | 2003 | manuell | 0 | focus | 150,000km | 7 | benzin | ford | nein | 2016-04-01 00:00:00 | 0 | 39218 | 2016-04-01 14:38:50 |
The column names have been converted from camel case to snake case, and some columns have been given more descriptive names (see the data dictionary on Kaggle). The names are now lowercase, which makes them easier to type.
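The renaming above was done by hand because some names also changed semantically (e.g., notRepairedDamage became unrepaired_damage), but a plain camel-to-snake conversion can be automated with a regex. A sketch, not part of the original cleaning:

```python
import re

def camel_to_snake(name):
    # Insert an underscore before each capital letter (except at the start),
    # then lowercase everything
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("dateCrawled"))         # date_crawled
print(camel_to_snake("yearOfRegistration"))  # year_of_registration
```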
# generating basic statistics for all our columns
autos.describe(include='all')
date_crawled | name | seller | offer_type | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer | registration_month | fuel_type | brand | unrepaired_damage | ad_created | num_photos | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 50000 | 50000 | 50000 | 50000 | 50000 | 50000 | 44905 | 50000.000000 | 47320 | 50000.000000 | 47242 | 50000 | 50000.000000 | 45518 | 50000 | 40171 | 50000 | 50000.0 | 50000.000000 | 50000 |
unique | 48213 | 38754 | 2 | 2 | 2357 | 2 | 8 | NaN | 2 | NaN | 245 | 13 | NaN | 7 | 40 | 2 | 76 | NaN | NaN | 39481 |
top | 2016-03-27 22:55:05 | Ford_Fiesta | privat | Angebot | $0 | test | limousine | NaN | manuell | NaN | golf | 150,000km | NaN | benzin | volkswagen | nein | 2016-04-03 00:00:00 | NaN | NaN | 2016-04-07 06:17:27 |
freq | 3 | 78 | 49999 | 49999 | 1421 | 25756 | 12859 | NaN | 36993 | NaN | 4024 | 32424 | NaN | 30107 | 10687 | 35232 | 1946 | NaN | NaN | 8 |
mean | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2005.073280 | NaN | 116.355920 | NaN | NaN | 5.723360 | NaN | NaN | NaN | NaN | 0.0 | 50813.627300 | NaN |
std | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 105.712813 | NaN | 209.216627 | NaN | NaN | 3.711984 | NaN | NaN | NaN | NaN | 0.0 | 25779.747957 | NaN |
min | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1000.000000 | NaN | 0.000000 | NaN | NaN | 0.000000 | NaN | NaN | NaN | NaN | 0.0 | 1067.000000 | NaN |
25% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1999.000000 | NaN | 70.000000 | NaN | NaN | 3.000000 | NaN | NaN | NaN | NaN | 0.0 | 30451.000000 | NaN |
50% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2003.000000 | NaN | 105.000000 | NaN | NaN | 6.000000 | NaN | NaN | NaN | NaN | 0.0 | 49577.000000 | NaN |
75% | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2008.000000 | NaN | 150.000000 | NaN | NaN | 9.000000 | NaN | NaN | NaN | NaN | 0.0 | 71540.000000 | NaN |
max | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 9999.000000 | NaN | 17700.000000 | NaN | NaN | 12.000000 | NaN | NaN | NaN | NaN | 0.0 | 99998.000000 | NaN |
Looking at the table above, we can see that some columns have very few unique values. Going forward, we will explore these columns.
autos['num_photos'].value_counts()
0    50000
Name: num_photos, dtype: int64
The num_photos column contains only a single value, 0, so it carries no information.
# inspecting columns with few unique values
columns_to_inspect = ['seller', 'offer_type', 'ab_test', 'registration_year',
                      'registration_month', 'power_ps', 'unrepaired_damage',
                      'num_photos', 'postal_code']
for col in columns_to_inspect:
print("For",col)
print(autos[col].value_counts(dropna=False).head(),"\n")
For seller
privat        49999
gewerblich        1
Name: seller, dtype: int64 

For offer_type
Angebot    49999
Gesuch         1
Name: offer_type, dtype: int64 

For ab_test
test       25756
control    24244
Name: ab_test, dtype: int64 

For registration_year
2000    3354
2005    3015
1999    3000
2004    2737
2003    2727
Name: registration_year, dtype: int64 

For registration_month
0    5075
3    5071
6    4368
5    4107
4    4102
Name: registration_month, dtype: int64 

For power_ps
0      5500
75     3171
60     2195
150    2046
140    1884
Name: power_ps, dtype: int64 

For unrepaired_damage
nein    35232
NaN      9829
ja       4939
Name: unrepaired_damage, dtype: int64 

For num_photos
0    50000
Name: num_photos, dtype: int64 

For postal_code
10115    109
65428    104
66333     54
45888     50
44145     48
Name: postal_code, dtype: int64
Looking at our data, we can observe that some columns hold a single value or only two distinct values: seller, offer_type, num_photos, ab_test, and unrepaired_damage. Of these, ab_test and unrepaired_damage are categorical booleans, so they could be useful moving forward; the other three will not help the analysis, so we drop them. Also, some features that should be numeric are stored as strings because they contain non-numeric characters such as "$", ",", and "km": price and odometer. The next cleaning step is to treat such cases. Finally, registration_year seems to have outliers, which we will investigate.
# dropping uninformative columns
autos.drop(['seller', 'offer_type', 'num_photos'], inplace=True, axis=1)
On investigation, we discovered that the price and odometer columns contain non-numeric characters. We have to strip these characters and convert both columns to a numeric type:
# converting to numeric value
autos["price"] = (autos["price"]
.str.replace("$","")
.str.replace(",","")
.astype(int)
)
autos["price"].unique().shape
(2357,)
autos["odometer"] = (autos["odometer"]
.str.replace("km","")
.str.replace(",","")
.astype(int)
)
autos.rename({"odometer": "odometer_km"}, axis=1, inplace=True)
autos["odometer_km"].unique().shape
(13,)
Let's look at the basic statistics of our new numeric columns.
print(autos["price"].describe(),"\n")
print(autos["odometer_km"].describe())
count    5.000000e+04
mean     9.840044e+03
std      4.811044e+05
min      0.000000e+00
25%      1.100000e+03
50%      2.950000e+03
75%      7.200000e+03
max      1.000000e+08
Name: price, dtype: float64 

count     50000.000000
mean     125732.700000
std       40042.211706
min        5000.000000
25%      125000.000000
50%      150000.000000
75%      150000.000000
max      150000.000000
Name: odometer_km, dtype: float64
print(autos["price"].value_counts().sort_index(ascending=False).head(10),"\n")
print(autos["price"].value_counts().sort_index().head(10),"\n")
print(autos["odometer_km"].value_counts().sort_index(ascending=False).head())
99999999    1
27322222    1
12345678    3
11111111    2
10000000    1
3890000     1
1300000     1
1234566     1
999999      2
999990      1
Name: price, dtype: int64 

0     1421
1      156
2        3
3        1
5        2
8        1
9        1
10       7
11       2
12       3
Name: price, dtype: int64 

150000    32424
125000     5170
100000     2169
90000      1757
80000      1436
Name: odometer_km, dtype: int64
From the tables above, we notice that cars with a mileage of 150,000 km (high-mileage cars) are prominent in the data. There are also some abnormally high-priced cars (above $1 million) and cars priced between $0 and $12. Low prices are plausible, since bids can start as low as $1, but the extremely high prices are outliers that would skew our analysis heavily, so they have to be removed.
(~autos["price"].between(1,351000)).sum()/autos.shape[0]
0.0287
Cars priced outside the $1–$350,000 range make up under 4% of the dataset, so we will remove them.
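Note that `Series.between` is inclusive on both ends, which matters when choosing the cutoffs. A minimal sketch on hypothetical prices:

```python
import pandas as pd

# Hypothetical sample prices around the chosen cutoffs
prices = pd.Series([0, 1, 350000, 351000, 999999])

mask = prices.between(1, 351000)  # inclusive on both ends
print(mask.tolist())   # [False, True, True, True, False]
print((~mask).sum())   # 2 rows fall outside the range
```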
autos = autos[autos["price"].between(1,351000)]
autos["price"].describe()
count     48565.000000
mean       5888.935591
std        9059.854754
min           1.000000
25%        1200.000000
50%        3000.000000
75%        7490.000000
max      350000.000000
Name: price, dtype: float64
There are 5 columns that represent date values:

- date_crawled: added by the crawler
- last_seen: added by the crawler
- ad_created: from the website
- registration_month: from the website
- registration_year: from the website

Let's first understand how the values in the three string columns are formatted. These columns all represent full timestamp values, like so:
autos[['date_crawled','ad_created','last_seen']][0:5]
date_crawled | ad_created | last_seen | |
---|---|---|---|
0 | 2016-03-26 17:47:46 | 2016-03-26 00:00:00 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | 2016-04-04 00:00:00 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | 2016-03-26 00:00:00 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | 2016-03-12 00:00:00 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | 2016-04-01 00:00:00 | 2016-04-01 14:38:50 |
The first 10 characters represent the year, month, and day. We will look at the distribution of values in each column.
dates_col = ["date_crawled", "ad_created","last_seen"]
for date in dates_col:
print("For",date)
print(autos[date]
.str[:10]
.value_counts(normalize=True, dropna=False)
.sort_index()
)
For date_crawled
2016-03-05    0.025327
2016-03-06    0.014043
2016-03-07    0.036014
2016-03-08    0.033296
2016-03-09    0.033090
2016-03-10    0.032184
2016-03-11    0.032575
2016-03-12    0.036920
2016-03-13    0.015670
2016-03-14    0.036549
2016-03-15    0.034284
2016-03-16    0.029610
2016-03-17    0.031628
2016-03-18    0.012911
2016-03-19    0.034778
2016-03-20    0.037887
2016-03-21    0.037373
2016-03-22    0.032987
2016-03-23    0.032225
2016-03-24    0.029342
2016-03-25    0.031607
2016-03-26    0.032204
2016-03-27    0.031092
2016-03-28    0.034860
2016-03-29    0.034099
2016-03-30    0.033687
2016-03-31    0.031834
2016-04-01    0.033687
2016-04-02    0.035478
2016-04-03    0.038608
2016-04-04    0.036487
2016-04-05    0.013096
2016-04-06    0.003171
2016-04-07    0.001400
Name: date_crawled, dtype: float64 

For ad_created
2015-06-11    0.000021
2015-08-10    0.000021
2015-09-09    0.000021
2015-11-10    0.000021
2015-12-05    0.000021
2015-12-30    0.000021
2016-01-03    0.000021
2016-01-07    0.000021
2016-01-10    0.000041
2016-01-13    0.000021
2016-01-14    0.000021
2016-01-16    0.000021
2016-01-22    0.000021
2016-01-27    0.000062
2016-01-29    0.000021
2016-02-01    0.000021
2016-02-02    0.000041
2016-02-05    0.000041
2016-02-07    0.000021
2016-02-08    0.000021
2016-02-09    0.000021
2016-02-11    0.000021
2016-02-12    0.000041
2016-02-14    0.000041
2016-02-16    0.000021
2016-02-17    0.000021
2016-02-18    0.000041
2016-02-19    0.000062
2016-02-20    0.000041
2016-02-21    0.000062
                ...   
2016-03-09    0.033151
2016-03-10    0.031895
2016-03-11    0.032904
2016-03-12    0.036755
2016-03-13    0.017008
2016-03-14    0.035190
2016-03-15    0.034016
2016-03-16    0.030125
2016-03-17    0.031278
2016-03-18    0.013590
2016-03-19    0.033687
2016-03-20    0.037949
2016-03-21    0.037579
2016-03-22    0.032801
2016-03-23    0.032060
2016-03-24    0.029280
2016-03-25    0.031751
2016-03-26    0.032266
2016-03-27    0.030989
2016-03-28    0.034984
2016-03-29    0.034037
2016-03-30    0.033501
2016-03-31    0.031875
2016-04-01    0.033687
2016-04-02    0.035149
2016-04-03    0.038855
2016-04-04    0.036858
2016-04-05    0.011819
2016-04-06    0.003253
2016-04-07    0.001256
Name: ad_created, Length: 76, dtype: float64 

For last_seen
2016-03-05    0.001071
2016-03-06    0.004324
2016-03-07    0.005395
2016-03-08    0.007413
2016-03-09    0.009595
2016-03-10    0.010666
2016-03-11    0.012375
2016-03-12    0.023783
2016-03-13    0.008895
2016-03-14    0.012602
2016-03-15    0.015876
2016-03-16    0.016452
2016-03-17    0.028086
2016-03-18    0.007351
2016-03-19    0.015834
2016-03-20    0.020653
2016-03-21    0.020632
2016-03-22    0.021373
2016-03-23    0.018532
2016-03-24    0.019767
2016-03-25    0.019211
2016-03-26    0.016802
2016-03-27    0.015649
2016-03-28    0.020859
2016-03-29    0.022341
2016-03-30    0.024771
2016-03-31    0.023783
2016-04-01    0.022794
2016-04-02    0.024915
2016-04-03    0.025203
2016-04-04    0.024483
2016-04-05    0.124761
2016-04-06    0.221806
2016-04-07    0.131947
Name: last_seen, dtype: float64
For date_crawled, the site was crawled daily over a period of roughly one month, and each day accounts for a fairly uniform share of the listings.

For last_seen, the last three days show an unusual spike in activity. This column records when a listing was last seen online (the car may have been sold or the listing taken down), but a genuine sales spike on those days is unlikely; the spike is more plausibly due to the crawling period coming to an end.

For ad_created, the vast majority of ads were created within the one-to-two-month crawl window, though some are as much as nine months old.
autos["registration_year"].describe()
count    48565.000000
mean      2004.755421
std         88.643887
min       1000.000000
25%       1999.000000
50%       2004.000000
75%       2008.000000
max       9999.000000
Name: registration_year, dtype: float64
There is an anomaly here: the minimum year is 1000, long before cars were invented, and the maximum is 9999, which is firmly in the future. We will handle these values.
autos["registration_year"].value_counts().sort_index().head(10)
1000    1
1001    1
1111    1
1800    2
1910    5
1927    1
1929    1
1931    1
1934    2
1937    4
Name: registration_year, dtype: int64
autos["registration_year"].value_counts().sort_index(ascending=False).head(20)
9999       3
9000       1
8888       1
6200       1
5911       1
5000       4
4800       1
4500       1
4100       1
2800       1
2019       2
2018     470
2017    1392
2016    1220
2015     392
2014     663
2013     803
2012    1310
2011    1623
2010    1589
Name: registration_year, dtype: int64
We keep cars registered between 1800 and 2016. The upper bound is 2016 because a car cannot have been registered after the ads were created.
autos.loc[~((autos["registration_year"] >= 1800) & (autos["registration_year"] < 2017))].shape[0]/autos.shape[0]
0.03875218778956038
(~autos["registration_year"].between(1800,2016)).sum() / autos.shape[0]
0.03875218778956038
Since only about 4% of cars fall outside this range, we will drop those rows.
autos = autos[autos["registration_year"]
.between(1800,2016)]
autos["registration_year"].value_counts(normalize=True).head(20)
2000    0.067605
2005    0.062892
1999    0.062057
2004    0.057901
2003    0.057815
2006    0.057194
2001    0.056466
2002    0.053253
1998    0.050618
2007    0.048776
2008    0.047448
2009    0.044663
1997    0.041793
2011    0.034766
2010    0.034038
1996    0.029411
2012    0.028062
1995    0.026284
2016    0.026134
2013    0.017201
Name: registration_year, dtype: float64
Most cars were registered in the last 20 years.
autos["brand"].value_counts(normalize=True)
volkswagen        0.211255
bmw               0.110040
opel              0.107577
mercedes_benz     0.096459
audi              0.086563
ford              0.069897
renault           0.047148
peugeot           0.029840
fiat              0.025641
seat              0.018272
skoda             0.016409
nissan            0.015273
mazda             0.015188
smart             0.014159
citroen           0.014009
toyota            0.012703
hyundai           0.010025
sonstige_autos    0.009811
volvo             0.009147
mini              0.008761
mitsubishi        0.008269
honda             0.007840
kia               0.007069
alfa_romeo        0.006641
porsche           0.006126
suzuki            0.005934
chevrolet         0.005698
chrysler          0.003513
dacia             0.002635
daihatsu          0.002506
jeep              0.002271
subaru            0.002142
land_rover        0.002099
saab              0.001649
jaguar            0.001564
daewoo            0.001499
trabant           0.001392
rover             0.001328
lancia            0.001071
lada              0.000578
Name: brand, dtype: float64
Looking at the brand column, we will aggregate over the top 9 brands in the market (those with more than a 2% share of listings). Volkswagen is by far the most popular brand.
brand_mean_price = {}
for i in range(autos["brand"].value_counts().shape[0]):
if (autos["brand"].value_counts(normalize=True)[i]) > 0.02:
car = autos["brand"].value_counts().index[i]
mean_price = autos.loc[autos["brand"] == car,"price"].mean()
brand_mean_price[car] = mean_price
brand_mean_price
{'volkswagen': 5402.410261610221, 'bmw': 8332.820517811953, 'opel': 2975.2419354838707, 'mercedes_benz': 8628.450366422385, 'audi': 9336.687453600594, 'ford': 3749.4695065890287, 'renault': 2474.8646069968195, 'peugeot': 3094.0172290021537, 'fiat': 2813.748538011696}
common_brands = autos["brand"].value_counts(normalize=True)
common_brands = common_brands[common_brands > .02].index
print(common_brands)
Index(['volkswagen', 'bmw', 'opel', 'mercedes_benz', 'audi', 'ford', 'renault', 'peugeot', 'fiat'], dtype='object')
brand_mean_prices = {}
total_prices = 0
total_num = 0
for brand in common_brands:
brand_prices = autos[autos["brand"]== brand]["price"]
total_prices += brand_prices.sum()
total_num += brand_prices.shape[0]
mean_price = brand_prices.mean()
brand_mean_prices[brand] = int(mean_price)
avg_of_prices = total_prices/total_num
print("The average price of cars in this range is:",avg_of_prices)
brand_mean_prices
The average price of cars in this range is: 5815.809252027636
{'volkswagen': 5402, 'bmw': 8332, 'opel': 2975, 'mercedes_benz': 8628, 'audi': 9336, 'ford': 3749, 'renault': 2474, 'peugeot': 3094, 'fiat': 2813}
Audi, Mercedes-Benz, and BMW are the most expensive brands; Renault is the cheapest, and Volkswagen sits in between. That could explain its popularity: it can be seen as the best of both worlds.
brand_mean_mileage = {}
total_mileage = 0
for brand in common_brands:
brand_mileages = autos[autos["brand"]== brand]["odometer_km"]
total_mileage += brand_mileages.sum()
mean_mileage = brand_mileages.mean()
brand_mean_mileage[brand] = int(mean_mileage)
print("The average mileage on cars is: ",total_mileage/total_num)
brand_mean_mileage
The average mileage on cars is: 128765.94117807696
{'volkswagen': 128707, 'bmw': 132572, 'opel': 129310, 'mercedes_benz': 130788, 'audi': 129157, 'ford': 124266, 'renault': 128071, 'peugeot': 127153, 'fiat': 117121}
We can notice that the more expensive brands tend to carry more mileage. These statistics are hard to compare as raw dictionaries, so we combine them into a dataframe.
# creating a series for mean prices and passing it to the dataframe constructor
bmp_series = pd.Series(brand_mean_prices)
brands_info = pd.DataFrame(bmp_series, columns = ["mean_price"])
brands_info.head()
mean_price | |
---|---|
volkswagen | 5402 |
bmw | 8332 |
opel | 2975 |
mercedes_benz | 8628 |
audi | 9336 |
# merging mean_mileage to the brand_info dataframe
bmm_series = pd.Series(brand_mean_mileage)
brands_info["mean_mileage"] = bmm_series
brands_info.sort_values("mean_price", ascending =False)
mean_price | mean_mileage | |
---|---|---|
audi | 9336 | 129157 |
mercedes_benz | 8628 | 130788 |
bmw | 8332 | 132572 |
volkswagen | 5402 | 128707 |
ford | 3749 | 124266 |
peugeot | 3094 | 127153 |
opel | 2975 | 129310 |
fiat | 2813 | 117121 |
renault | 2474 | 128071 |
With this we can see that the expensive brands carry only marginally more mileage. At first glance, price appears to fall as mileage falls, which contradicts intuition, so other factors must be affecting price. We will try to identify these factors.
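The per-brand loops above can also be expressed in one step with `groupby`. A sketch on a toy frame standing in for the autos data (the values here are invented for illustration):

```python
import pandas as pd

# Toy frame standing in for the autos dataframe
df = pd.DataFrame({
    "brand":       ["bmw", "bmw", "ford", "ford"],
    "price":       [8000, 9000, 3000, 4000],
    "odometer_km": [150000, 125000, 100000, 150000],
})

# Mean price and mileage per brand, most expensive first
summary = (df.groupby("brand")[["price", "odometer_km"]]
             .mean()
             .sort_values("price", ascending=False))
print(summary)
```

This avoids building the intermediate dictionaries and Series by hand.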
In this section, the following operations will be performed: converting the date columns to a numeric representation, and translating German categorical values to English.
autos.head()
date_crawled | name | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer_km | registration_month | fuel_type | brand | unrepaired_damage | ad_created | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2016-03-26 17:47:46 | Peugeot_807_160_NAVTECH_ON_BOARD | 5000 | control | bus | 2004 | manuell | 158 | andere | 150000 | 3 | lpg | peugeot | nein | 2016-03-26 00:00:00 | 79588 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | 8500 | control | limousine | 1997 | automatik | 286 | 7er | 150000 | 6 | benzin | bmw | nein | 2016-04-04 00:00:00 | 71034 | 2016-04-06 14:45:08 |
2 | 2016-03-26 18:57:24 | Volkswagen_Golf_1.6_United | 8990 | test | limousine | 2009 | manuell | 102 | golf | 70000 | 7 | benzin | volkswagen | nein | 2016-03-26 00:00:00 | 35394 | 2016-04-06 20:15:37 |
3 | 2016-03-12 16:58:10 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | 4350 | control | kleinwagen | 2007 | automatik | 71 | fortwo | 70000 | 6 | benzin | smart | nein | 2016-03-12 00:00:00 | 33729 | 2016-03-15 03:16:28 |
4 | 2016-04-01 14:38:50 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | 1350 | test | kombi | 2003 | manuell | 0 | focus | 150000 | 7 | benzin | ford | nein | 2016-04-01 00:00:00 | 39218 | 2016-04-01 14:38:50 |
autos.describe(include='all')
date_crawled | name | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer_km | registration_month | fuel_type | brand | unrepaired_damage | ad_created | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 46683 | 46683 | 46683.000000 | 46683 | 43977 | 46683.000000 | 44571 | 46683.000000 | 44488 | 46683.000000 | 46683.000000 | 43363 | 46683 | 38376 | 46683 | 46683.000000 | 46683 |
unique | 45126 | 35813 | NaN | 2 | 8 | NaN | 2 | NaN | 244 | NaN | NaN | 7 | 40 | 2 | 74 | NaN | 37148 |
top | 2016-03-05 16:57:05 | BMW_316i | NaN | test | limousine | NaN | manuell | NaN | golf | NaN | NaN | benzin | volkswagen | nein | 2016-04-03 00:00:00 | NaN | 2016-04-07 06:17:27 |
freq | 3 | 75 | NaN | 24062 | 12598 | NaN | 34715 | NaN | 3707 | NaN | NaN | 28540 | 9862 | 33836 | 1821 | NaN | 8 |
mean | NaN | NaN | 5977.479982 | NaN | NaN | 2002.902063 | NaN | 117.964998 | NaN | 125581.689266 | 5.826961 | NaN | NaN | NaN | NaN | 51097.957908 | NaN |
std | NaN | NaN | 9177.784188 | NaN | NaN | 7.306668 | NaN | 185.246421 | NaN | 39859.490163 | 3.670306 | NaN | NaN | NaN | NaN | 25754.959757 | NaN |
min | NaN | NaN | 1.000000 | NaN | NaN | 1800.000000 | NaN | 0.000000 | NaN | 5000.000000 | 0.000000 | NaN | NaN | NaN | NaN | 1067.000000 | NaN |
25% | NaN | NaN | 1250.000000 | NaN | NaN | 1999.000000 | NaN | 75.000000 | NaN | 100000.000000 | 3.000000 | NaN | NaN | NaN | NaN | 30827.000000 | NaN |
50% | NaN | NaN | 3100.000000 | NaN | NaN | 2003.000000 | NaN | 109.000000 | NaN | 150000.000000 | 6.000000 | NaN | NaN | NaN | NaN | 49828.000000 | NaN |
75% | NaN | NaN | 7500.000000 | NaN | NaN | 2008.000000 | NaN | 150.000000 | NaN | 150000.000000 | 9.000000 | NaN | NaN | NaN | NaN | 71732.000000 | NaN |
max | NaN | NaN | 350000.000000 | NaN | NaN | 2016.000000 | NaN | 17700.000000 | NaN | 150000.000000 | 12.000000 | NaN | NaN | NaN | NaN | 99998.000000 | NaN |
Some columns contain values in German. We will look through the dataset to identify such columns by exploring the object (string) columns. To reduce the number of columns to inspect, we first convert the date columns to numeric data. Recall our dates_col list.
autos[dates_col].head(2)
date_crawled | ad_created | last_seen | |
---|---|---|---|
0 | 2016-03-26 17:47:46 | 2016-03-26 00:00:00 | 2016-04-06 06:45:54 |
1 | 2016-04-04 13:38:56 | 2016-04-04 00:00:00 | 2016-04-06 14:45:08 |
for col in dates_col:
autos[col] = (autos[col]
.str[:10]
.str.replace("-","")
.astype(int)
)
autos[dates_col].head(2)
date_crawled | ad_created | last_seen | |
---|---|---|---|
0 | 20160326 | 20160326 | 20160406 |
1 | 20160404 | 20160404 | 20160406 |
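The slice-and-replace conversion above can equivalently be done by parsing the strings and reformatting them. A sketch on hypothetical sample values:

```python
import pandas as pd

# Hypothetical sample values in the original string format
s = pd.Series(["2016-03-26 17:47:46", "2016-04-04 13:38:56"])

# Equivalent to taking the first 10 characters and stripping the dashes
as_int = pd.to_datetime(s).dt.strftime("%Y%m%d").astype(int)
print(as_int.tolist())  # [20160326, 20160404]
```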
column_names = autos.columns
for col in column_names:
if autos[col].dtype == 'O':
print("For",col)
print(autos[col].value_counts(dropna=False).head(11),"\n")
For name
BMW_316i              75
Volkswagen_Golf_1.4   75
Ford_Fiesta           74
BMW_318i              72
Volkswagen_Polo       72
BMW_320i              71
Opel_Corsa            68
Renault_Twingo        66
Volkswagen_Golf       57
Opel_Corsa_1.2_16V    56
BMW_116i              53
Name: name, dtype: int64 

For ab_test
test       24062
control    22621
Name: ab_test, dtype: int64 

For vehicle_type
limousine     12598
kleinwagen    10585
kombi          8930
bus            4031
cabrio         3016
NaN            2706
coupe          2462
suv            1965
andere          390
Name: vehicle_type, dtype: int64 

For gearbox
manuell      34715
automatik     9856
NaN           2112
Name: gearbox, dtype: int64 

For model
golf        3707
andere      3373
3er         2615
NaN         2195
polo        1609
corsa       1592
passat      1349
astra       1348
a4          1231
c_klasse    1136
5er         1132
Name: model, dtype: int64 

For fuel_type
benzin     28540
diesel     14032
NaN         3320
lpg          649
cng           71
hybrid        37
elektro       19
andere        15
Name: fuel_type, dtype: int64 

For brand
volkswagen       9862
bmw              5137
opel             5022
mercedes_benz    4503
audi             4041
ford             3263
renault          2201
peugeot          1393
fiat             1197
seat              853
skoda             766
Name: brand, dtype: int64 

For unrepaired_damage
nein    33836
NaN      8307
ja       4540
Name: unrepaired_damage, dtype: int64
From the above we can see that vehicle_type, gearbox, model, fuel_type, and unrepaired_damage all contain German words, so we map these to English. The model column has too many unique German values, so we skip it for now.
map_for_vehicle_type = {"limousine":"limousine",
"kleinwagen":"small_car",
"kombi":"estate_car",
"bus":"bus",
"cabrio": "convertible",
"coupe":"coupe",
"suv":"suv",
"andere":"other"}
map_for_gearbox = {"manuell": "manual","automatik":"automatic"}
map_for_fuel = {"benzin":"petrol",
"diesel":"diesel",
"lpg":"lpg",
"cng":"cng",
"hybrid":"hybrid",
"elektro":"electric",
"andere":"other"}
map_for_repairs = {"nein":"no",
"ja": 'yes'}
autos['vehicle_type'] = autos['vehicle_type'].map(map_for_vehicle_type)
autos['gearbox'] = autos['gearbox'].map(map_for_gearbox)
autos['fuel_type'] = autos['fuel_type'].map(map_for_fuel)
autos['unrepaired_damage'] = autos['unrepaired_damage'].map(map_for_repairs)
autos.head()
date_crawled | name | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer_km | registration_month | fuel_type | brand | unrepaired_damage | ad_created | postal_code | last_seen | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 20160326 | Peugeot_807_160_NAVTECH_ON_BOARD | 5000 | control | bus | 2004 | manual | 158 | andere | 150000 | 3 | lpg | peugeot | no | 20160326 | 79588 | 20160406 |
1 | 20160404 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | 8500 | control | limousine | 1997 | automatic | 286 | 7er | 150000 | 6 | petrol | bmw | no | 20160404 | 71034 | 20160406 |
2 | 20160326 | Volkswagen_Golf_1.6_United | 8990 | test | limousine | 2009 | manual | 102 | golf | 70000 | 7 | petrol | volkswagen | no | 20160326 | 35394 | 20160406 |
3 | 20160312 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | 4350 | control | small_car | 2007 | automatic | 71 | fortwo | 70000 | 6 | petrol | smart | no | 20160312 | 33729 | 20160315 |
4 | 20160401 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | 1350 | test | estate_car | 2003 | manual | 0 | focus | 150000 | 7 | petrol | ford | no | 20160401 | 39218 | 20160401 |
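One thing to keep in mind: `Series.map` returns NaN for any value not present in the mapping dict, which is why each mapping above must cover every category in its column. A sketch with a hypothetical unmapped value ("halbautomatik" does not appear in the real data; it is invented for illustration):

```python
import pandas as pd

gearbox = pd.Series(["manuell", "automatik", "halbautomatik"])  # last value is hypothetical
mapped = gearbox.map({"manuell": "manual", "automatik": "automatic"})
print(mapped.tolist())  # ['manual', 'automatic', nan]
```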
autos['name'].head(10)
0                     Peugeot_807_160_NAVTECH_ON_BOARD
1           BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik
2                           Volkswagen_Golf_1.6_United
3    Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan...
4    Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg...
5    Chrysler_Grand_Voyager_2.8_CRD_Aut.Limited_Sto...
6    VW_Golf_III_GT_Special_Electronic_Green_Metall...
7                                 Golf_IV_1.9_TDI_90PS
8                                           Seat_Arosa
9            Renault_Megane_Scenic_1.6e_RT_Klimaanlage
Name: name, dtype: object
One interesting thing to note here is that the engine capacity is often embedded in the name. Normally, this ranges from 1.0 to 18.8 litres (L). We will extract this information and use it to build a new column.
#creating a dictionary of engine capacity with the original index(in the autos dataframe) as keys
engine_cap_index = {}
for i in np.arange(1,19,0.1):
i = str(round(i,1))
car_bool = autos['name'].str.contains(i,regex=False) # generate a boolean indexer with columns that contain i
car_index = autos.loc[car_bool,'name'].index #filter the df with the boolean indexer and get the index for rows that match
for num in car_index:
engine_cap_index[num] = i # generate the dictionary where keys are the original index and values are the engine capacity for that position
# adding the new column to the autos dataframe
engine_cap = pd.Series(engine_cap_index)
engine_cap_df = pd.DataFrame(engine_cap, columns = ['engine_capacity'])
autos = autos.join(engine_cap_df)
autos.head()
date_crawled | name | price | ab_test | vehicle_type | registration_year | gearbox | power_ps | model | odometer_km | registration_month | fuel_type | brand | unrepaired_damage | ad_created | postal_code | last_seen | engine_capacity | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 20160326 | Peugeot_807_160_NAVTECH_ON_BOARD | 5000 | control | bus | 2004 | manual | 158 | andere | 150000 | 3 | lpg | peugeot | no | 20160326 | 79588 | 20160406 | NaN |
1 | 20160404 | BMW_740i_4_4_Liter_HAMANN_UMBAU_Mega_Optik | 8500 | control | limousine | 1997 | automatic | 286 | 7er | 150000 | 6 | petrol | bmw | no | 20160404 | 71034 | 20160406 | NaN |
2 | 20160326 | Volkswagen_Golf_1.6_United | 8990 | test | limousine | 2009 | manual | 102 | golf | 70000 | 7 | petrol | volkswagen | no | 20160326 | 35394 | 20160406 | 1.6 |
3 | 20160312 | Smart_smart_fortwo_coupe_softouch/F1/Klima/Pan... | 4350 | control | small_car | 2007 | automatic | 71 | fortwo | 70000 | 6 | petrol | smart | no | 20160312 | 33729 | 20160315 | NaN |
4 | 20160401 | Ford_Focus_1_6_Benzin_TÜV_neu_ist_sehr_gepfleg... | 1350 | test | estate_car | 2003 | manual | 0 | focus | 150000 | 7 | petrol | ford | no | 20160401 | 39218 | 20160401 | NaN |
autos['engine_capacity'].value_counts().head(9)
2.0    3646
1.6    2647
1.4    2164
1.2    1706
1.9    1612
1.8    1437
3.0     693
1.0     647
2.2     530
Name: engine_capacity, dtype: int64
From these statistics, we can see that the most common engine size is 2.0L.
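The substring loop above works, but pandas can also do the extraction in one vectorized step with a regular expression. A minimal sketch on a few hypothetical names (assuming the same naming pattern as `autos['name']`):

```python
import pandas as pd

names = pd.Series([
    "Volkswagen_Golf_1.6_United",
    "Peugeot_807_160_NAVTECH_ON_BOARD",  # no capacity token in the name
    "Golf_IV_1.9_TDI_90PS",
])

# Grab the first "digit(s).digit" token as the engine capacity;
# names without such a token get NaN.
engine_capacity = names.str.extract(r"(\d{1,2}\.\d)", expand=False)
print(engine_capacity)
```

Applied to the full dataframe, the result could be assigned directly to a new column without building an intermediate dictionary.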
In this section we will find the most common model for each brand.
autos[autos['brand'] == 'sonstige_autos']['model'].head()
15     NaN
140    NaN
152    NaN
175    NaN
283    NaN
Name: model, dtype: object
brand_model = {}
manufacturers = autos['brand'].unique()
for manufacturer in manufacturers:
    # 'sonstige_autos' ("other cars") has no model information, so skip it
    if manufacturer != 'sonstige_autos':
        model_freq = autos[autos['brand'] == manufacturer]['model'].value_counts(normalize = True)
        highest_freq = model_freq.index[0]  # value_counts sorts by frequency, so index[0] is the most common model
        brand_model[manufacturer] = highest_freq
brand_model
{'peugeot': '2_reihe', 'bmw': '3er', 'volkswagen': 'golf', 'smart': 'fortwo', 'ford': 'focus', 'chrysler': 'andere', 'seat': 'ibiza', 'renault': 'twingo', 'mercedes_benz': 'c_klasse', 'audi': 'a4', 'opel': 'corsa', 'mazda': '3_reihe', 'porsche': '911', 'mini': 'cooper', 'toyota': 'yaris', 'dacia': 'sandero', 'nissan': 'micra', 'jeep': 'grand', 'saab': 'andere', 'volvo': 'v70', 'mitsubishi': 'colt', 'jaguar': 'andere', 'fiat': 'punto', 'skoda': 'octavia', 'subaru': 'legacy', 'kia': 'andere', 'citroen': 'andere', 'chevrolet': 'andere', 'hyundai': 'i_reihe', 'honda': 'civic', 'daewoo': 'matiz', 'suzuki': 'andere', 'trabant': '601', 'land_rover': 'freelander', 'alfa_romeo': '156', 'lada': 'niva', 'rover': 'andere', 'daihatsu': 'cuore', 'lancia': 'ypsilon'}
During the analysis I discovered that 'sonstige_autos' has only NaN values in the model column, so I removed it from the analysis. We can see that the C-Class is the most common model for Mercedes-Benz, the Golf for Volkswagen, and the A4 for Audi.
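The same per-brand mode can be computed without an explicit loop using `groupby`. A sketch on a toy frame (column names match the notebook's; the values are made up for illustration):

```python
import pandas as pd

toy = pd.DataFrame({
    "brand": ["bmw", "bmw", "bmw", "audi", "audi", "audi"],
    "model": ["3er", "3er", "5er", "a4", "a4", "a3"],
})

# value_counts sorts by frequency, so index[0] is the most common model per brand.
brand_model = toy.groupby("brand")["model"].agg(lambda s: s.value_counts().index[0])
print(brand_model.to_dict())
```

A brand with only NaN models (like 'sonstige_autos') would still need to be filtered out first, since `value_counts` drops NaN and would leave an empty Series.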
autos.groupby(["brand"])["model"].value_counts().sort_values(ascending=False).head(10)
brand          model
volkswagen     golf        3707
bmw            3er         2615
volkswagen     polo        1609
opel           corsa       1592
volkswagen     passat      1349
opel           astra       1348
audi           a4          1231
mercedes_benz  c_klasse    1136
bmw            5er         1132
mercedes_benz  e_klasse     958
Name: model, dtype: int64
We can see that Volkswagen has three entries in the top ten models in the listings, with the Golf being its most common model.
autos['odometer_km'].value_counts().sort_index()
5000        787
10000       241
20000       742
30000       760
40000       797
50000       993
60000      1128
70000      1187
80000      1375
90000      1673
100000     2058
125000     4857
150000    30085
Name: odometer_km, dtype: int64
Dividing the distance traveled into groups to see whether the mean price changes between groups:
# Split the listings into five mileage bands and keep the price column for each
group1 = autos[autos['odometer_km'].between(0,31000)]['price']
group2 = autos[autos['odometer_km'].between(31000,51000)]['price']
group3 = autos[autos['odometer_km'].between(51000,71000)]['price']
group4 = autos[autos['odometer_km'].between(71000,110000)]['price']
group5 = autos[autos['odometer_km'].between(110000,150000)]['price']
mileage_price_df = pd.DataFrame(pd.Series({'very_low':int(group1.mean()),
'low':int(group2.mean()),
'medium':int(group3.mean()),
'high':int(group4.mean()),
'very_high':int(group5.mean())}),
columns=['mean_price'])
mileage_price_df.head()
mean_price | |
---|---|
very_low | 15111 |
low | 14563 |
medium | 11637 |
high | 8669 |
very_high | 4107 |
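The five `between` filters can also be expressed as a single binning step with `pd.cut`. A sketch on hypothetical odometer/price pairs (the bin edges match the groups above; the data values are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "odometer_km": [5000, 40000, 70000, 100000, 150000, 150000],
    "price": [16000, 14000, 12000, 9000, 4000, 4200],
})

bins = [0, 31000, 51000, 71000, 110000, 150000]
labels = ["very_low", "low", "medium", "high", "very_high"]
toy["mileage_group"] = pd.cut(toy["odometer_km"], bins=bins, labels=labels)

# Mean price per mileage band, in one grouped aggregation.
mean_price = toy.groupby("mileage_group", observed=True)["price"].mean()
print(mean_price)
```

Unlike chained `between` calls, `pd.cut` guarantees the bands are non-overlapping, since each value falls into exactly one interval.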
We can see that the mean price decreases steadily from one mileage level to the next. The drop from low to medium to high is fairly consistent at approximately $3,000 per level, and high-mileage cars cost the least. This is, however, not the case for the expensive cars, as evidenced in our previous analysis. This may be attributed to luxury cars being of better build and therefore expected to last longer notwithstanding the mileage on them.
unrepaired = autos.loc[autos['unrepaired_damage'] == 'yes','price'].mean()
no_unrepaired = autos.loc[autos['unrepaired_damage'] == 'no','price'].mean()
print("The average price for cars with unrepaired damage is:",unrepaired)
print("The average price for cars without unrepaired damage is:",no_unrepaired)
The average price for cars with unrepaired damage is: 2241.146035242291 The average price for cars without unrepaired damage is: 7163.63624541908
On average, cars with unrepaired damage sell for approximately $5,000 less.
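Both means can be pulled in a single pass with `groupby`. A sketch on a toy frame using the notebook's column names (values are made up for illustration):

```python
import pandas as pd

toy = pd.DataFrame({
    "unrepaired_damage": ["yes", "no", "no", "yes", "no"],
    "price": [2000, 7000, 8000, 2500, 6000],
})

# Mean price for each damage status in one aggregation.
damage_price = toy.groupby("unrepaired_damage")["price"].mean()

# The price penalty for unrepaired damage is the gap between the two means.
print(damage_price["no"] - damage_price["yes"])
```

This scales naturally if the column ever gains more categories, since every distinct value gets its own group automatically.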