Creating your own dataset from Google Images

by: Francisco Ingham and Jeremy Howard. Inspired by Adrian Rosebrock

In this tutorial we will see how to easily create an image dataset through Google Images. Note: You will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).

Get a list of URLs

Search and scroll

Go to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.

Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.

It is a good idea to put things you want to exclude into the search query. For instance, if you are searching for the Eurasian wolf, "canis lupus lupus", you might want to exclude other variants:

"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis

You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.

Download into file

Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.

Press Ctrl+Shift+J on Windows/Linux or Cmd+Opt+J on Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.

You will need to get the urls of each of the images. You can do this by running the following commands:

// Pull the full-size image url ('ou') out of each search result's metadata
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
// Open the list of urls as a downloadable text file
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));

Create directory and upload urls file into your server

In [ ]:
from fastai import *
from fastai.vision import *

Choose an appropriate name for your labeled images. You can run these steps multiple times to grab different labels.

In [ ]:
folder = 'black'
file = 'urls_black.txt'
In [ ]:
folder = 'teddys'
file = 'urls_teddys.txt'
In [ ]:
folder = 'grizzly'
file = 'urls_grizzly.txt'

You will need to run the following cell once for each category.

In [ ]:
path = Path('data/bears')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
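
If you prefer not to re-run the cell above by hand, a short loop can create all of the destination folders at once; this is just a sketch using the category names from the cells above:

In [ ]:
# A sketch only: create all three destination folders in one go, assuming
# the category names used above. `Path` is already available from the
# fastai imports.
path = Path('data/bears')
for folder in ['black', 'teddys', 'grizzly']:
    (path/folder).mkdir(parents=True, exist_ok=True)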

Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.

[Image: the uploaded urls file shown in the working directory]

Download images

Now you will need to download your images from their respective urls.

fast.ai has a function that does just that. You just have to specify the urls filename and the destination folder, and it will download and save all the images that can be opened; any images that cannot be opened will not be saved.

Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.

You will need to run the download cell below once for every category.

In [ ]:
classes = ['teddys','grizzly','black']
In [ ]:
download_images(path/file, dest, max_pics=200)
In [ ]:
# If you have problems downloading, try with `max_workers=0` to see exceptions:
# download_images(path/file, dest, max_pics=20, max_workers=0)
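
Alternatively, instead of re-running the cells above for each category, you could loop over them; this is only a sketch, and it assumes the urls files follow the `urls_<category>.txt` naming used earlier:

In [ ]:
# Sketch only: download every category in one loop. Assumes the files are
# named urls_teddys.txt, urls_grizzly.txt and urls_black.txt, and that the
# destination folders were already created in the previous step.
for c in classes:
    download_images(path/f'urls_{c}.txt', path/c, max_pics=200)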

Then we can remove any images that can't be opened:

In [ ]:
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_workers=8)
teddys
100.00% [272/272 00:06<00:00]
grizzly
100.00% [168/168 00:05<00:00]
cannot identify image file '/data0/datasets/part1v3/bears/grizzly/00000011.jpg'
cannot identify image file '/data0/datasets/part1v3/bears/grizzly/00000014.jpg'
black
100.00% [176/176 00:05<00:00]

View data

In [ ]:
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
        ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)

Good! Let's take a look at some of our pictures then.

In [ ]:
data.classes
Out[ ]:
['black', 'grizzly', 'teddys']
In [ ]:
data.show_batch(rows=3, figsize=(7,8))
In [ ]:
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
Out[ ]:
(['black', 'grizzly', 'teddys'], 3, 473, 141)

Train model

In [ ]:
learn = create_cnn(data, models.resnet34, metrics=error_rate)
In [ ]:
learn.fit_one_cycle(4)
Total time: 00:54
epoch  train_loss  valid_loss  error_rate
1      0.710584    0.087024    0.021277    (00:14)
2      0.414239    0.045413    0.014184    (00:13)
3      0.306174    0.035602    0.014184    (00:13)
4      0.239355    0.035230    0.021277    (00:13)

In [ ]:
learn.save('stage-1')
In [ ]:
learn.unfreeze()
In [ ]:
learn.lr_find()
LR Finder complete, type {learner_name}.recorder.plot() to see the graph.
In [ ]:
learn.recorder.plot()
In [ ]:
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
Total time: 00:28
epoch  train_loss  valid_loss  error_rate
1      0.107059    0.056375    0.028369    (00:14)
2      0.070725    0.041957    0.014184    (00:13)

In [ ]:
learn.save('stage-2')

Interpretation

In [ ]:
learn.load('stage-2')
In [ ]:
interp = ClassificationInterpretation.from_learner(learn)
In [ ]:
interp.plot_confusion_matrix()
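
If you also want to eyeball the worst-classified images directly, ClassificationInterpretation can plot the top losses (an optional extra, not one of the original steps):

In [ ]:
# Optional: show the 9 images with the highest loss, along with the
# predicted class, actual class, loss, and probability of the actual class.
interp.plot_top_losses(9, figsize=(15,11))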

Cleaning Up

Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be there in the first place.

Using the FileDeleter widget from fastai.widgets we can prune our top losses, removing photos that don't belong.

First we need to get the file paths from our top_losses. Here's how to pull them all out:

In [ ]:
from fastai.widgets import *

losses,idxs = interp.top_losses()
top_loss_paths = data.valid_ds.x[idxs]

Now we can pass in these paths to our widget.

In [ ]:
fd = FileDeleter(file_paths=top_loss_paths)

Flag photos for deletion by clicking 'Delete', then click 'Confirm' to delete the flagged photos and keep the rest in that row. The FileDeleter will keep showing you new rows of images until there are no more to show; in this case, that means until there are no images left from top_losses.

Putting your model in production

In [ ]:
data.classes
Out[ ]:
['black', 'grizzly', 'teddys']

You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU, that happens automatically. You can test your model on CPU like so:

In [ ]:
# fastai.defaults.device = torch.device('cpu')
In [ ]:
img = open_image(path/'black'/'00000021.jpg')
img
Out[ ]:
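
To get a prediction for that single image you can use learn.predict; a minimal sketch, assuming the 'stage-2' model loaded above is still in memory:

In [ ]:
# Sketch: learn.predict returns the predicted class, its index, and the
# per-class probabilities for a single image.
pred_class, pred_idx, outputs = learn.predict(img)
pred_class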