This example follows along with this blog post analyzing the presidential and vice-presidential debates.
You can download a dataset of images and labels from the debate used in the blog post from this Google Drive link (958MB).
For reference, the videos in this dataset were taken from:
And the emotion recognition model used is from https://github.com/justinshenk/fer.
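The dataset you download already contains these emotion predictions, but as a rough sketch of how a frame's top emotion could be produced with the fer package (the image path is a placeholder, and the blog post's exact preprocessing may differ):

import cv2
from fer import FER

detector = FER(mtcnn=True)  # use MTCNN for face detection
img = cv2.imread("/path/to/frame.jpg")  # placeholder path to a debate frame
emotion, score = detector.top_emotion(img)  # highest-scoring emotion label and its score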
!pip install fiftyone
Let's load the dataset into FiftyOne:
import fiftyone as fo
from fiftyone import ViewField as F
# The path to the unzipped dataset on disk
DATASET_PATH = "/path/to/debate_images"
dataset = fo.Dataset.from_dir(DATASET_PATH, dataset_type=fo.types.FiftyOneDataset)
100% |███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11892/11892 [1.7m elapsed, 0s remaining, 113.4 samples/s]
# View the dataset in the App
session = fo.launch_app(dataset)
App launched
With the data loaded into FiftyOne, you can use the App's features to visually explore the dataset. You can also construct views into the dataset programmatically to identify particular samples of interest.
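For instance, you can print a summary of the dataset and count the number of images per speaker tag directly (a minimal sketch; it only relies on the standard sample tags used below):

# Print a summary of the dataset's samples and fields
print(dataset)

# Count how many images carry each speaker tag
print(dataset.count_values("tags"))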
For example, each image is tagged with the corresponding speaker. Let's filter the samples in the dataset to find only those of Biden speaking, and visualize the distribution of his emotions in the Labels tab:
# Only show images where Biden is speaking
session.view = dataset.match_tags("biden")
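The Labels tab shows this distribution visually; as a programmatic counterpart, you could aggregate the same field directly (a sketch using the top_emotion.label field referenced below):

# Tally Biden's top emotions without opening the App
biden_view = dataset.match_tags("biden")
print(biden_view.count_values("top_emotion.label"))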
# Show examples where Trump is happy
session.view = dataset.match_tags("trump").match(F("top_emotion.label") == "happy")
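You could also sort these matches by prediction confidence to surface the clearest examples first (a sketch; it assumes top_emotion stores a confidence field, as standard FiftyOne Classification labels do):

# Sort the happy-Trump view by prediction confidence, highest first
happy_trump = dataset.match_tags("trump").match(F("top_emotion.label") == "happy")
print(len(happy_trump))  # number of matching samples
session.view = happy_trump.sort_by("top_emotion.confidence", reverse=True)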