This notebook presents a case study of the TriScale framework. It revisits the analysis of Pensieve, a system that generates adaptive bitrate algorithms for video streaming using reinforcement learning. Parts of this case study are described in the TriScale paper.
In this case study, various adaptive bitrate algorithms are compared using user quality of experience (QoE) as the metric.
The experiment has been designed and performed by the authors of the Pensieve paper. In this case study, we show how TriScale can be used to provide confidence intervals not only on single KPIs, but on entire cumulative distribution functions (CDFs).
import os
import copy
from pathlib import Path
import zipfile
import pandas as pd
import numpy as np
import plotly.graph_objects as go
import triscale
import triplots
The dataset for this case study is available on Zenodo:
The wget commands below download the files (a ~620 kB .zip archive) required to reproduce this case study.
# Set `download = True` to download (and extract) the data for this case study.
# If needed, adjust the record_id for the file version you are interested in.
# For reproducing the results of the TriScale paper, set `record_id = 3666724`
download = True
record_id = 3666724  # v3.0.1 (https://doi.org/10.5281/zenodo.3666724)
files = ['UseCase_VideoStreaming.zip']

if download:
    for file in files:
        print(file)
        url = 'https://zenodo.org/record/' + str(record_id) + '/files/' + file
        os.system('wget %s' % url)
        if file[-4:] == '.zip':
            with zipfile.ZipFile(file, "r") as zip_file:
                zip_file.extractall()
    print('Done.')
else:
    print('Nothing to download')
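As an aside, `os.system('wget %s' % url)` relies on `wget` being installed on the host. A portable alternative can be sketched with only the Python standard library (the helper names below are hypothetical, not part of the case-study code):

```python
import urllib.request
import zipfile

def zenodo_url(record_id, filename):
    # Build the direct download URL for a file in a Zenodo record
    return 'https://zenodo.org/record/%s/files/%s' % (record_id, filename)

def fetch_zenodo_file(record_id, filename):
    # Download the file and extract it if it is a zip archive
    urllib.request.urlretrieve(zenodo_url(record_id, filename), filename)
    if filename.endswith('.zip'):
        with zipfile.ZipFile(filename, 'r') as zip_file:
            zip_file.extractall()
```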
Nothing to download
We now import the custom module for the case study.
import UseCase_VideoStreaming.videostreaming as vs
The metric values are given (retrieved from the Pensieve paper experiments). For each algorithm, we compute a set of KPIs: one per even percentile, from the 2nd to the 98th. Since our metric is QoE (larger is better), we compute lower bounds for all KPIs.
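For context, a one-sided, distribution-free confidence bound on a percentile can be derived from order statistics via the binomial distribution. The sketch below illustrates the idea; it is a generic implementation, not TriScale's actual API:

```python
from math import comb

def binom_cdf(k, n, p):
    # P(Binomial(n, p) <= k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def percentile_lower_bound(data, percentile, confidence):
    """Largest order statistic x_(k) that is a lower bound on the
    given percentile with at least the given confidence (in %)."""
    x = sorted(data)
    n = len(x)
    p = percentile / 100.0
    # P(x_(k) <= true percentile) = P(Binomial(n, p) >= k)
    for k in range(n, 0, -1):
        if 1 - binom_cdf(k - 1, n, p) >= confidence / 100.0:
            return x[k - 1]
    return None  # sample too small for the requested confidence
```

For example, with 20 samples, a 95% lower bound on the median is the 6th smallest value.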
# Construct the path to the different test results
result_dir_path = Path('UseCase_VideoStreaming/FCC/linear')
protocol_list = [x.stem for x in result_dir_path.iterdir()]
protocol_list = list(set(protocol_list))
config_file = Path('UseCase_VideoStreaming/config.yml')
# Define the KPIs
KPI_percentiles = np.arange(2, 100, 2)  # percentiles
KPI_confidence = 95 # confidence level
KPI_base = {'confidence': KPI_confidence,
            'bound': 'lower',
            'unit': '',
            }
KPI_list = []
for p in KPI_percentiles:
    kpi = copy.deepcopy(KPI_base)
    kpi['percentile'] = p
    kpi['name'] = 'P%d' % p
    KPI_list.append(kpi)
# Compute and store KPIs
out_name = Path('UseCase_VideoStreaming') / 'kpis.csv'
QoE = vs.compute_kpi(
    protocol_list,
    KPI_list,
    result_dir_path,
    out_name=out_name
)
Output retrieved from file. Skipping computation.
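The message above indicates that `vs.compute_kpi` caches its results on disk. A plausible sketch of such a cache-or-compute pattern (the helper name is hypothetical, not the module's actual implementation):

```python
from pathlib import Path
import pandas as pd

def compute_or_load(compute_fn, out_name):
    # Reuse a previously stored result if it exists; otherwise
    # compute it and store it for future runs.
    out_name = Path(out_name)
    if out_name.exists():
        print('Output retrieved from file. Skipping computation.')
        return pd.read_csv(out_name)
    df = compute_fn()
    df.to_csv(out_name, index=False)
    return df
```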
display(QoE)
| | Protocol | Network | QoE | Percentile | KPI |
|---|---|---|---|---|---|
| 0 | robust_mpc | FCC | linear | 2 | -0.760 |
| 1 | robust_mpc | FCC | linear | 4 | -0.033 |
| 2 | robust_mpc | FCC | linear | 6 | 0.114 |
| 3 | robust_mpc | FCC | linear | 8 | 0.140 |
| 4 | robust_mpc | FCC | linear | 10 | 0.163 |
| ... | ... | ... | ... | ... | ... |
| 338 | buffer | FCC | linear | 90 | 1.073 |
| 339 | buffer | FCC | linear | 92 | 1.134 |
| 340 | buffer | FCC | linear | 94 | 1.197 |
| 341 | buffer | FCC | linear | 96 | 1.614 |
| 342 | buffer | FCC | linear | 98 | 2.237 |

343 rows × 5 columns
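The `vs.plot_cdf` calls below are part of the case-study module. For reference, an empirical CDF of a sample can be computed directly; the following is a generic numpy sketch, not the module's implementation:

```python
import numpy as np

def ecdf(values):
    # Empirical CDF: sorted sample values and cumulative probabilities
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# The (x, y) pairs can then be drawn as a step plot, e.g. with
# go.Figure(go.Scatter(x=x, y=y, mode='lines', line_shape='hv'))
```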
sample = dict(
    sample_cdf=True,
    protocol=['pensieve']
)
figure = vs.plot_cdf(
    QoE,
    config_file,
    result_dir_path,
    sample=sample
)
figure.show()
sample = dict(
    sample_cdf=False,
)
figure = vs.plot_cdf(
    QoE,
    config_file,
    result_dir_path,
    sample=sample
)
figure.show()