A Conversation With Data - Car Parking Meter Data

This data conversation uses the car parking meter data obtained via an FOI request to the Isle of Wight Council for Pay and Display ticket machine transaction data from the ticket machines in the River Road Car Park, Yarmouth, Isle of Wight, for the financial year 2012-13.

The data includes an identifier for the ticket machine that issued the ticket, the time the ticket was issued, the tariff band (i.e. the nominal ticket value), and the amount of cash paid for the ticket.

The following conversation is a predominantly visual one, where questions are asked of the data and responses provided in a graphical form - as charts - that then need interpreting.

Several lines of questioning naturally arise:

  • when are car parks actually used, based on ticket purchases?
  • are different ticket types purchased at different times of day or different days of the week?
  • do customers ever pay more than they need to when purchasing a ticket?

A note on the format of this data conversation

This conversation with data has been created within an interactive IPython Notebook, using the pandas data wrangling library and the ggplot graphics library.

For more information, contact [email protected], Twitter: @psychemedia

In [15]:
#Import some necessary programming libraries that we'll use for the analysis
import pandas as pd
from ggplot import *

#And some housekeeping
import warnings
warnings.simplefilter(action = "ignore", category = FutureWarning)
In [3]:
#See what data files we have available
!ls data/iw_parkingMeterData/
4_10_River Road Transaction Report April 2012.xls
4_11_Transaction Report RR Aug 2012.xls
4_3_Ticket Machine Locations with GIS.xlsx
4_4_Tony hirst reply iw14 2 27649 18mar14.pdf
4_5_Transaction Report RR Dec 2012 March 2013.xls
4_6_Transaction Report RR July 2012.xls
4_7_Transaction Report RR June 2012.xls
4_8_Transaction Report RR May 2012.xls
4_9_Transaction Report RR Sept Nov 2012.xls
correspondence.pdf
In [4]:
#I'm going to start by just looking at data from the period Dec 2012 to March 2013.
#Read in a single spreadsheet and tidy it
df=pd.read_excel("data/iw_parkingMeterData/4_5_Transaction Report RR Dec 2012 March 2013.xls",skiprows=6)
#We need to clean the data a little, dropping empty columns, identifying timestamps as such
df.dropna(how='all',axis=1,inplace=True)
df.Date=pd.to_datetime(df.Date,  format="%Y-%m-%d %H:%M:%S",errors="coerce")
df.dropna(subset=["Date"],inplace=True)

#So what does the data look like?
df[:5]
Out[4]:
Date Machine Description Tariff Description.1 Cash
0 2012-12-01 06:38:53 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6
1 2012-12-01 07:26:12 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6
2 2012-12-01 08:22:15 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6
3 2012-12-01 08:27:01 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6
4 2012-12-01 08:34:11 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6
In [133]:
#What are the separate tariff bands?
df['Description.1'].unique()
Out[133]:
array(['LS with Cch £6.60 6>24hrs', 'LS with Cch £1.00 30m>1hr',
       'LS with Cch £4.50 4->6hrs', 'LS with Cch £1.90 1->2hrs',
       'LS with Cch £3.40 2->4hrs', 'LS with Cch £0.60 30 Mins',
       'LS with Cch £3.00 Cch>10h', 'LS with Cch £10 Cch10>14h'], dtype=object)
In [21]:
#It's possibly easier to work with the Tariff code, so what code applies to which description?
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
dfx=df[["Tariff","Description.1"]].copy()
dfx.rename(columns=lambda x: x.replace('.','_'), inplace=True)
q="SELECT DISTINCT Tariff, Description_1 FROM dfx"
pysqldf(q)
Out[21]:
Tariff Description_1
0 01F LS with Cch £6.60 6>24hrs
1 01B LS with Cch £1.00 30m>1hr
2 01E LS with Cch £4.50 4->6hrs
3 01C LS with Cch £1.90 1->2hrs
4 01D LS with Cch £3.40 2->4hrs
5 01A LS with Cch £0.60 30 Mins
6 01G LS with Cch £3.00 Cch>10h
7 01H LS with Cch £10 Cch10>14h
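The SQL route via pandasql works, but plain pandas can answer the same question with `drop_duplicates`; a minimal sketch on toy data mirroring the columns above:

```python
import pandas as pd

# Toy frame mirroring the Tariff / Description.1 columns in the report
toy = pd.DataFrame({
    "Tariff": ["01F", "01B", "01F", "01B"],
    "Description.1": ["LS with Cch £6.60 6>24hrs", "LS with Cch £1.00 30m>1hr",
                      "LS with Cch £6.60 6>24hrs", "LS with Cch £1.00 30m>1hr"],
})

# drop_duplicates gives the same result as SELECT DISTINCT
pairs = toy[["Tariff", "Description.1"]].drop_duplicates().reset_index(drop=True)
print(pairs)
```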
In [22]:
#We can use this information to generate a mapping from the description or tariff to the tariff price
#[Really should automate the extraction of the amount from the description]
tariffMap={'01A':0.6, '01B':1,'01C':1.9, '01D':3.4,'01E':4.5,'01F':6.6,'01G':3,'01H':10}

df["Tariff_val"]=df['Tariff'].apply(lambda x: tariffMap[x])
df[:3]
Out[22]:
Date Machine Description Tariff Description.1 Cash weekday hour Tariff_val
0 2012-12-01 06:38:53 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 6 6.6
1 2012-12-01 07:26:12 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 7 6.6
2 2012-12-01 08:22:15 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 8 6.6
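As the comment above suggests, the tariff value could be extracted from the description automatically rather than hand-coded; a sketch using a regular expression (the £10 band has no decimal point, so the pattern allows both forms):

```python
import re
import pandas as pd

def tariff_from_description(desc):
    """Extract the £ amount from a description like 'LS with Cch £6.60 6>24hrs'."""
    m = re.search(r"£(\d+(?:\.\d+)?)", desc)
    return float(m.group(1)) if m else None

descs = pd.Series(["LS with Cch £6.60 6>24hrs",
                   "LS with Cch £0.60 30 Mins",
                   "LS with Cch £10 Cch10>14h"])
print(descs.apply(tariff_from_description))
```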
In [7]:
#How much cash was taken over this period in total?
df[['Cash']].sum()
Out[7]:
Cash    18385.85
dtype: float64
In [62]:
#If people paid exactly the tariff price, how much would have been taken?
df[['Tariff_val']].sum()
Out[62]:
Tariff_val    18076.6
dtype: float64
In [69]:
#So for this one car park, over four off-season months, how much was overpaid?
round(float(df[['Cash']].sum())-float(df[['Tariff_val']].sum()),2)
Out[69]:
309.25
In [9]:
#How much cash was taken over this period for each machine?
df[['Machine','Cash']].groupby('Machine').sum()
Out[9]:
Cash
Machine
YARR01 9876.65
YARR02 8509.20
In [11]:
#How much cash was taken over this period for each machine and tariff?
df[['Machine','Tariff','Cash']].groupby(['Machine','Tariff']).sum()
Out[11]:
Cash
Machine Tariff
YARR01 01A 81.00
01B 634.05
01C 2004.90
01D 1981.35
01E 1391.80
01F 3780.55
01G 3.00
YARR02 01A 116.90
01B 601.15
01C 1979.45
01D 1676.00
01E 1017.50
01F 3070.20
01G 18.00
01H 30.00

The total cash amounts are interesting, but if we want to know how busy the car parks were, we need to count the number of tickets issued.

In [12]:
#So how many tickets of each tariff type were issued by each machine?
df[["Tariff","Machine"]].groupby(['Tariff',"Machine"]).agg(len).sort_index()
Out[12]:
Tariff  Machine
01A     YARR01      133
        YARR02      192
01B     YARR01      627
        YARR02      595
01C     YARR01     1022
        YARR02     1014
01D     YARR01      572
        YARR02      488
01E     YARR01      302
        YARR02      222
01F     YARR01      564
        YARR02      463
01G     YARR01        1
        YARR02        6
01H     YARR02        3
dtype: int64
In [18]:
#Can you show me that graphically?
p = ggplot(aes(x='Tariff'), data=df)
p + geom_bar() + ggtitle("Number of Tickets per Tariff")  + labs("Tariff Code", "Count") + facet_wrap('Machine',scales='fixed')
Out[18]:
<ggplot: (-9223363294099285553)>

It looks as if YARR02 is used slightly less - is the area of the car park it covers "further away" from where people are likely to want to go?

There's possibly a diagnostic here too - if sales from one machine fall off while the other runs at a higher rate than normal, that suggests a possible problem with the former machine. We won't explore that here, but it could form part of a more detailed investigation.

In [52]:
#Here's the same question asked another way
p = ggplot(aes(x='Tariff',fill="Machine"), data=df)
p + geom_bar() + ggtitle("Number of Tickets per Tariff")  + labs("Tariff Code", "Count")
#Ideally these bars would be "dodged" - placed side-by-side, but the charting library doesn't support that at the moment
Out[52]:
<ggplot: (-9223363294100548878)>

I'm now going to start exploring when there is most activity. One way of doing this is to summarise the data and look for activity around particular days of the week or hours of the day.

In [19]:
#We can derive different time components from the timestamp as follows
# /via http://pandas-docs.github.io/pandas-docs-travis/timeseries.html
# minute: the minutes of the datetime
# weekday OR dayofweek: the day of the week with Monday=0, Sunday=6
# week: the week ordinal of the year
df['weekday']=df['Date'].apply(lambda x: x.dayofweek)
# hour: the hour of the datetime
df['hour']=df['Date'].apply(lambda x: x.hour)
#Let's just check that's worked:
df[:3]
Out[19]:
Date Machine Description Tariff Description.1 Cash weekday hour
0 2012-12-01 06:38:53 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 6
1 2012-12-01 07:26:12 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 7
2 2012-12-01 08:22:15 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 8
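On a datetime column, pandas' `.dt` accessor does the same job as the row-wise `apply` calls above, vectorised; a small sketch:

```python
import pandas as pd

# Two sample timestamps: a Saturday morning and a Monday afternoon
dates = pd.to_datetime(pd.Series(["2012-12-01 06:38:53", "2012-12-03 14:15:00"]))
weekday = dates.dt.dayofweek   # Monday=0, Sunday=6
hour = dates.dt.hour
print(weekday.tolist(), hour.tolist())
```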

Number of Tickets by day of week

How many transactions are issued by day of week? Let's plot them as a bar chart.

In [20]:
ggplot(df, aes(x='factor(weekday)'))+geom_bar()
Out[20]:
<ggplot: (8742780763421)>
In [ ]:
# this, or similar, should be supported at some point? +scale_x_discrete(labels=["Mon","Tues","Weds","Thurs","Fri","Sat","Sun"])

So Saturday appears to be the most popular day of the week, and Monday the quietest.

Number of Transactions by Hour of Day

In [23]:
#How many transactions occurred by hour of day?
ggplot(df, aes(x='hour'))+geom_bar(binwidth=1)
Out[23]:
<ggplot: (-9223363294099411554)>
In [24]:
#Can we split that up to see whether it's different across days of the week?
ggplot(df, aes(x='hour'))+geom_bar(binwidth=1)+facet_wrap('weekday',scales='fixed')
Out[24]:
<ggplot: (-9223363294100000403)>

In distribution terms, it looks as if activity concentrates more in the middle of the day at the weekend, compared to weekdays. (We could run statistical tests to check this.)

In [85]:
#Can we probe that distribution a little further, perhaps seeing how the hourly counts are made up from different tariff counts?
ggplot(df, aes(x='hour',fill='Tariff'))+geom_bar(binwidth=1)+facet_wrap('weekday',scales='fixed')
Out[85]:
<ggplot: (8770290310783)>

So that's not too clear - and we need a legend. But the grey-blue band doesn't appear to be used much in the afternoon... And there's a burst of red band activity last thing on a Saturday. The light blue also seems quite popular on a Saturday?

In [68]:
#Let's try to dig into that a little more. For a given day of the week, how do the tariff bands get used over the day?
ggplot(df[df['weekday']==2], aes(x='hour'))+geom_bar(binwidth=1)+facet_grid('Tariff')+ggtitle('Wednesday')
Out[68]:
<ggplot: (-9223363266572976910)>

So the longer ticket 01F is bought in the morning (reasonable) and late in the day (to cover the next morning). 01B and 01C (up to an hour and 1-2 hours) are popular throughout the day. There is maybe a burst in sales of the short 30 minute 01A ticket at the end of the day?

So how does another day compare?

In [36]:
#Let's see what activity for Saturday looks like:
ggplot(df[df['weekday']==5], aes(x='hour'))+geom_bar(binwidth=1)+facet_grid('Tariff')+ggtitle('Saturday')
Out[36]:
<ggplot: (-9223363294100597458)>

There definitely seems to be an upswing in short term ticket sales at the end of the day: people going out for the evening?

Number of Transactions by Hour of Day, Faceted by Tariff

In [39]:
#Let's try to look over all the data to see how the tariff bands compare by hour of day
ggplot(df[(df['Tariff']!='01H') & (df['Tariff']!='01G') ], aes(x='hour'))+geom_bar(binwidth=1)+facet_wrap('Tariff',scales='fixed')
Out[39]:
<ggplot: (8742753738907)>

Overpayments

To what extent do people pay more for their parking than they need to - at least in terms of paying more for a ticket than its actual marked price?

In [54]:
#Let's plot a count of cash payments using bins that are 5 pence wide
p = ggplot(aes(x='Cash'), data=df)
p + geom_histogram(binwidth=0.05) 
Out[54]:
<ggplot: (-9223363294101038101)>

Note the "echo peaks" at £2.00 and £3.50 - representing 10p overpayments on the £1.90 01C tariff and £3.40 01D tariff. Clever, eh? Set the tariff just below natural coinage, perhaps in the expectation you'll get the 'natural' amount a good proportion of the time.

In [41]:
#The Overpayment column is a boolean that specifies whether there was an overpayment or not
df["Overpayment"]=(df["Cash"]!=df["Tariff_val"])
#The OverpaymentVal identifies how much, if anything, was overpaid
df["OverpaymentVal"]=df["Cash"]-df["Tariff_val"]
In [48]:
df[1220:1223]
Out[48]:
Date Machine Description Tariff Description.1 Cash weekday hour Tariff_val Overpayment OverpaymentVal
1220 2013-01-16 13:34:31 YARR01 River Road 1 Yarmouth 01D LS with Cch £3.40 2->4hrs 3.4 2 13 3.4 False 0.0
1221 2013-01-16 14:14:49 YARR01 River Road 1 Yarmouth 01C LS with Cch £1.90 1->2hrs 2.0 2 14 1.9 True 0.1
1222 2013-01-16 14:17:16 YARR01 River Road 1 Yarmouth 01A LS with Cch £0.60 30 Mins 0.6 2 14 0.6 False 0.0
In [49]:
#So how common are overpayments by tariff type?
df[["Tariff","Overpayment"]].groupby(['Tariff',"Overpayment"]).agg(len)
Out[49]:
Tariff  Overpayment
01A     False           304
        True             21
01B     False          1197
        True             25
01C     False          1074
        True            962
01D     False           854
        True            206
01E     False           455
        True             69
01F     False           890
        True            137
01G     False             7
01H     False             3
dtype: int64

It seems 01C has an overpayment rate getting on for 50%!
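That proportion can be computed directly: averaging the boolean Overpayment column within each tariff group gives an overpayment rate. A sketch on toy data (the real frame would use the same groupby):

```python
import pandas as pd

toy = pd.DataFrame({
    "Tariff":     ["01C", "01C", "01C", "01A", "01A"],
    "Cash":       [2.0, 1.9, 2.0, 0.6, 0.7],
    "Tariff_val": [1.9, 1.9, 1.9, 0.6, 0.6],
})
toy["Overpayment"] = toy["Cash"] != toy["Tariff_val"]

# The mean of a boolean column is the fraction of True values
rate = toy.groupby("Tariff")["Overpayment"].mean()
print(rate)
```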

How does revenue come in over the data collection period?

In [56]:
#Let's order the data by timestamp, then add up the cumulative revenue
df.sort_values(['Date'],inplace=True)
df['Cash_cumul'] = df.Cash.cumsum()
df[:3]
Out[56]:
Date Machine Description Tariff Description.1 Cash weekday hour Tariff_val Overpayment OverpaymentVal Cash_cumul
0 2012-12-01 06:38:53 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 6 6.6 False 0.0 6.6
1 2012-12-01 07:26:12 YARR01 River Road 1 Yarmouth 01F LS with Cch £6.60 6>24hrs 6.6 5 7 6.6 False 0.0 13.2
3221 2012-12-01 07:30:14 YARR02 River Road 2 Yarmouth 01F LS with Cch £6.60 6>24hrs 7.0 5 7 6.6 True 0.4 20.2
In [57]:
#How does it look?
g = ggplot(aes(x="Date",y="Cash_cumul"), data=df )+ geom_line()
g
Out[57]:
<ggplot: (-9223363294101464881)>
In [59]:
#We can also calculate the accumulated amount within each tariff band

#Group the rows by tariff band
group=df[['Tariff','Cash']].groupby('Tariff')
#For each group of rows, apply the cumulative sum transformation to each row in the group
#The number of rows in the response will be the same as the number of rows in the original data frame
df['Cash_cumul2']=group.transform('cumsum')['Cash']
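Grouped cumulative sums can be fiddly the first time round; a minimal self-contained sketch of the groupby/cumsum pattern used above, on toy data:

```python
import pandas as pd

toy = pd.DataFrame({
    "Tariff": ["01A", "01B", "01A", "01B"],
    "Cash":   [0.6, 1.0, 0.6, 1.0],
})

# Within each tariff group, take a running total of Cash,
# aligned back to the original row order
toy["Cash_cumul2"] = toy.groupby("Tariff")["Cash"].cumsum()
print(toy)
```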
In [60]:
#Here's how it looks:
ggplot(df,aes(x="Date",y="Cash_cumul2",colour="Tariff"))+geom_line()
Out[60]:
<ggplot: (-9223363294101508818)>
In [61]:
#We can also split the amounts out into separate charts
ggplot(df, aes(x="Date",y="Cash_cumul2")) + geom_line() \
                                   + ggtitle("Payments made over time") \
                                   + labs("Transaction Date", "Transaction amount (£)") \
                                   + facet_wrap("Tariff",scales = "fixed")
Out[61]:
<ggplot: (8742753242523)>

The Full Data Set

Here's the start of a conversation with the full data set. It's a little scrappier at the moment, in rather more of a quickfire note form, but you're hopefully in the swing of it now...

In [110]:
dfx=pd.DataFrame()
for fn in ['4_10_River Road Transaction Report April 2012.xls', 
           '4_11_Transaction Report RR Aug 2012.xls',
           '4_5_Transaction Report RR Dec 2012 March 2013.xls',
           '4_6_Transaction Report RR July 2012.xls',
           '4_7_Transaction Report RR June 2012.xls',
           '4_8_Transaction Report RR May 2012.xls',
           '4_9_Transaction Report RR Sept Nov 2012.xls']:
    dfx=pd.concat([dfx,pd.read_excel('data/iw_parkingMeterData/'+fn,skiprows=6)])
dfx.dropna(how='all',axis=1,inplace=True)
dfx.Date=pd.to_datetime(dfx.Date,  format="%Y-%m-%d %H:%M:%S",errors="coerce")
dfx.dropna(subset=["Date"],inplace=True)
WARNING *** file size (788481) not 512 + multiple of sector size (512)
WARNING *** file size (1736705) not 512 + multiple of sector size (512)
WARNING *** file size (1185281) not 512 + multiple of sector size (512)
WARNING *** file size (1128449) not 512 + multiple of sector size (512)
WARNING *** file size (1121281) not 512 + multiple of sector size (512)
WARNING *** file size (2069505) not 512 + multiple of sector size (512)
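Rather than listing the spreadsheet filenames by hand, they could be collected with a glob pattern; a sketch using a throwaway directory with dummy files (the real call would glob the `.xls` reports in `data/iw_parkingMeterData/`):

```python
import glob
import os
import tempfile

# Create a throwaway directory containing a couple of dummy filenames
d = tempfile.mkdtemp()
for fn in ["4_6_Transaction Report RR July 2012.xls",
           "4_7_Transaction Report RR June 2012.xls",
           "correspondence.pdf"]:
    open(os.path.join(d, fn), "w").close()

# Only the .xls reports are picked up; sort for a stable order
xls_files = sorted(glob.glob(os.path.join(d, "*.xls")))
print([os.path.basename(f) for f in xls_files])
```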
In [111]:
dfx['weekday']=dfx['Date'].apply(lambda x: x.dayofweek)
ggplot(dfx, aes(x='factor(weekday)'))+geom_bar()
#0-Mon 6-Sun
Out[111]:
<ggplot: (8770281332179)>
In [112]:
dfx['month']=dfx['Date'].apply(lambda x: x.month)
In [113]:
dfx['week']=dfx['Date'].apply(lambda x: x.week)
In [118]:
dfx['hour']=dfx['Date'].apply(lambda x: x.hour)
In [117]:
#How much activity is there by week of year? Note: the data was collected from a financial year,
#so the scale actually runs Jan-Mar 13, then Apr-Dec 12.
ggplot(dfx, aes(x='week'))+geom_bar()+facet_wrap('Tariff',scales='free_y')
Out[117]:
<ggplot: (8770285612243)>
In [176]:
#Is there any evidence of folk paying just what they need as it gets closer to free parking time at 18.00?
ggplot(dfx[(dfx['Tariff']=='01A') | (dfx['Tariff']=='01B')|(dfx['Tariff']=='01C')], aes(x='hour')) \
    +geom_bar(binwidth=1)+facet_wrap('Tariff',scales='free_y')
#Note that the bins are 1 hour wide.
Out[176]:
<ggplot: (8770276897863)>
In [140]:
tariffMap2={'01A':0.6, '01B':1,'01C':1.9, '01D':3.4,'01E':4.5,'01F':6.6,'01G':3,'01H':10,'01I':13,
            '02A':0.6, '02B':1,'02C':1.9, '02D':3.4,'02E':4.5,'02F':6.6,}

dfx["Tariff_val"]=dfx['Tariff'].apply(lambda x: tariffMap2[x])
In [145]:
#Set a boolean to say whether or not a line item was an overpayment
dfx["Overpayment"]=(dfx["Cash"]!=dfx["Tariff_val"])
In [149]:
#Calculate amount of overpayment (if any) for each transaction
dfx["OverpaymentVal"]=dfx["Cash"]-dfx["Tariff_val"]
In [152]:
#What's the total amount of overpayment?
dfx["OverpaymentVal"].sum()
Out[152]:
2215.6499999995817
In [153]:
#How much was overpaid at the 01C/ £1.90 tariff level?
dfx[dfx['Tariff']=='01C']["OverpaymentVal"].sum()
#Note - I think 02C is the same level and there were also overpayments at that level.
Out[153]:
1011.2000000001232
In [183]:
#Total revenue over the year:
dfx["Cash"].sum()
Out[183]:
130121.34999997147
In [159]:
#How many people paid £2 on the 01C tariff?
dfx[(dfx['Tariff']=='01C') & (dfx['Cash']==2)]["OverpaymentVal"].count()
Out[159]:
7533
In [175]:
dfx['OverpaymentValRounded']=dfx['OverpaymentVal'].apply(lambda  x: round(x,2))

#This crosstab counts the occurrences of one column value or index value with respect to another
#So we can get count of the number of overpayments of a particular size by Tariff
pd.crosstab(dfx['OverpaymentValRounded'],dfx['Tariff'], margins=True)
Out[175]:
Tariff 01A 01B 01C 01D 01E 01F 01G 01H 01I 02A 02B 02C 02D 02E 02F All
OverpaymentValRounded
0.0 2172 10923 9994 6952 3084 4218 19 5 1 110 553 461 312 109 118 39031
0.05 5 7 19 4 4 1 0 0 0 1 1 4 3 1 0 50
0.1 123 30 7533 1110 73 170 0 0 0 7 1 460 74 7 12 9600
0.15 2 0 6 2 2 1 0 0 0 0 0 1 0 0 0 14
0.2 11 26 16 15 9 13 0 0 0 0 2 0 0 0 0 92
0.25 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 3
0.3 18 3 16 8 6 3 0 0 0 2 2 1 3 0 0 62
0.35 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 4
0.4 0 16 7 9 1 302 1 0 0 0 2 2 0 0 11 351
0.45 0 2 1 0 0 0 0 0 0 0 0 0 0 0 0 3
0.5 0 48 21 7 183 3 0 0 0 0 2 2 0 8 0 274
0.55 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0.6 0 12 36 320 3 7 0 0 0 0 1 2 21 0 0 402
0.65 0 2 1 0 1 0 0 0 0 0 0 0 0 0 0 4
0.7 0 33 7 2 1 2 0 0 0 0 3 0 1 0 0 49
0.75 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0.8 0 31 6 6 1 0 0 0 0 0 1 1 0 0 0 46
0.85 0 8 1 1 0 0 0 0 0 0 0 0 0 0 0 10
0.9 0 0 2 4 1 8 0 0 0 0 0 0 0 0 0 15
0.95 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1
1.0 0 0 11 16 10 10 0 0 0 0 0 0 4 1 2 54
1.05 0 0 0 5 1 0 0 0 0 0 0 0 0 0 0 6
1.1 0 0 128 0 9 2 0 0 0 0 0 10 0 0 0 149
1.15 0 0 2 0 2 0 0 0 0 0 0 0 0 0 0 4
1.2 0 0 3 0 1 1 0 0 0 0 0 0 0 0 0 5
1.3 0 0 18 0 1 0 0 0 0 0 0 2 0 0 0 21
1.35 0 0 0 0 2 0 0 0 0 0 0 1 0 0 0 3
1.4 0 0 10 0 0 7 0 0 0 0 0 3 0 0 0 20
1.45 0 0 3 0 0 0 0 0 0 0 0 1 0 0 0 4
1.5 0 0 0 0 15 1 0 0 0 0 0 0 0 0 0 16
1.6 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 3
1.7 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 2
1.8 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 2
1.9 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 2
1.95 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1
2.0 0 0 0 0 63 0 0 0 0 0 0 0 0 1 0 64
2.4 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 2
2.9 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1
3.4 0 0 0 0 0 5 0 0 0 0 0 0 0 0 1 6
4.4 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1
6.4 0 0 0 0 0 10 0 0 0 0 0 0 0 0 0 10
6.6 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 3
6.9 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1
All 2333 11148 17842 8461 3483 4772 20 5 1 120 568 951 418 129 144 50395
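The crosstab pattern above generalises to any pair of categorical columns; a small self-contained sketch counting co-occurrences, with row and column totals:

```python
import pandas as pd

toy = pd.DataFrame({
    "Tariff": ["01C", "01C", "01D", "01C"],
    "OverpaymentValRounded": [0.0, 0.1, 0.0, 0.1],
})

# Counts of each (overpayment size, tariff) pair, plus "All" margins
ct = pd.crosstab(toy["OverpaymentValRounded"], toy["Tariff"], margins=True)
print(ct)
```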
In [ ]: