This example compares the YOLOv4 and EfficientDet object detection models on the COCO dataset using FiftyOne.
For more information, check out the YOLOv4 and EfficientDet blog posts.
If you haven't already, install FiftyOne:
!pip install fiftyone
First, let's load the validation split of COCO-2017 from the FiftyOne Dataset Zoo:
import os
import fiftyone as fo
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("coco-2017", split="validation")
Split 'validation' already downloaded
Loading 'coco-2017' split 'validation'
 100% |███████████████| 5000/5000 [41.7s elapsed, 0s remaining, 121.3 samples/s]
Dataset 'coco-2017-validation' created
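Printing the dataset shows a summary of its contents, including its sample fields:
# Print summary information about the dataset
print(dataset)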
Next, let's add some pre-generated YOLOv4 and EfficientDet predictions to the dataset.
You can download the predictions from this Google Drive link (72MB).
# Path to the downloaded JSON file
DATASET_PATH = "/path/to/yolo_edet_dataset.json"
# Directory in which the source images live
data_dir = os.path.dirname(dataset.first().filepath)
# Load the predictions
predictions = fo.Dataset.from_json(DATASET_PATH, rel_dir=data_dir)
100% |███████████████| 5000/5000 [4.6m elapsed, 0s remaining, 18.6 samples/s]
# Merge the predictions into `dataset`
dataset.merge_samples(predictions)
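After the merge, each sample holds the model predictions alongside its ground truth. As a quick sanity check, you can count the predicted objects in each field. Note that `efficientdet` below is an assumed field name; check the dataset's schema (e.g., via `print(dataset)`) for the actual field names in your download:
# Total number of predicted objects per model
print(dataset.count("yolov4.detections"))
print(dataset.count("efficientdet.detections"))  # assumed field name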
Let's launch the FiftyOne App and qualitatively compare the predictions of the various models:
session = fo.launch_app(dataset)
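To focus on one model at a time in the App, you can restrict the view to just the fields you care about. Here's a minimal sketch using `select_fields()`:
# Show only the ground truth and YOLOv4 predictions in the App
session.view = dataset.select_fields(["ground_truth", "yolov4"])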
You can evaluate any of the predictions with respect to the ground truth labels.
For example, let's evaluate the YOLOv4 predictions:
results = dataset.evaluate_detections(
    "yolov4",
    gt_field="ground_truth",
    eval_key="yolov4",
)
Evaluating detections...
 100% |███████████████| 5000/5000 [1.4m elapsed, 0s remaining, 68.4 samples/s]
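By default, `evaluate_detections()` performs COCO-style evaluation. The returned results object lets you summarize performance, and you can evaluate the EfficientDet predictions in the same way. Note that `efficientdet` below is an assumed field name; substitute the actual prediction field from your dataset:
# Print a per-class precision/recall report for the YOLOv4 predictions
results.print_report()

# Evaluate the EfficientDet predictions
# NOTE: "efficientdet" is an assumed field name; check your dataset's
# schema for the actual name
edet_results = dataset.evaluate_detections(
    "efficientdet",
    gt_field="ground_truth",
    eval_key="efficientdet",
)
edet_results.print_report()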
With dataset views, you can easily identify samples of interest. For example, let's view the samples where YOLOv4 had the most false positives:
session.view = dataset.sort_by("yolov4_fp", reverse=True)
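The evaluation also tags each individual prediction via the `eval_key`, so you can combine view stages to drill down further. For example, here's a minimal sketch that uses `filter_labels()` to isolate high-confidence YOLOv4 false positives (the 0.8 confidence threshold is an arbitrary choice):
from fiftyone import ViewField as F

# High-confidence YOLOv4 predictions that were marked as false positives
# (the per-detection "yolov4" attribute was populated by `eval_key` above)
high_conf_fps = dataset.filter_labels(
    "yolov4", (F("confidence") > 0.8) & (F("yolov4") == "fp")
)
print(high_conf_fps)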
session.freeze() # for notebook sharing