Creating your own dataset from Google Images

by: Francisco Ingham and Jeremy Howard. Inspired by Adrian Rosebrock

In this tutorial we will see how to easily create an image dataset through Google Images. Note: you will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).

In [25]:
from fastai.vision import *

Get a list of URLs

Search and scroll

Go to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.

Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.

It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:

"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis

You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.

Download into file

Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.

Press Ctrl+Shift+J in Windows/Linux or Cmd+Opt+J on a Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.

You will need to get the urls of each of the images. You can do this by running the following commands:

urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));

Create directory and upload urls file into your server

Choose an appropriate name for your labeled images. You can run these steps multiple times to grab different labels.

In [39]:
folder = 'heterochromia'
file = 'heterochromia.txt'
In [40]:
folder = 'cataracts'
file = 'cataracts.txt'
In [41]:
folder = 'conjunctivitis'
file = 'conjunctivitis.txt'
In [3]:
folder = 'orbital_cellulitis'
file = 'orbital_cellulitis.txt'
In [2]:
folder = 'strabismus'
In [4]:
folder = "BCC_eyelid"
file = "BCC_eyelid.txt"

You will need to run the following lines once for each category.

In [5]:
path = Path('data/eyecolor')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
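Alternatively, here is a minimal sketch that creates every class folder in one loop (assuming the folder names used above):

In [ ]:
# Create one subfolder per category under `path`
path = Path('data/eyecolor')
for folder in ['heterochromia', 'cataracts', 'conjunctivitis',
               'orbital_cellulitis', 'strabismus', 'BCC_eyelid']:
    (path/folder).mkdir(parents=True, exist_ok=True)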
In [26]:
path.ls()
Out[26]:
[PosixPath('data/eyecolor/conjunctivitis'),
 PosixPath('data/eyecolor/orbital_cellulitis.txt'),
 PosixPath('data/eyecolor/strabismus'),
 PosixPath('data/eyecolor/export.pkl'),
 PosixPath('data/eyecolor/cataracts'),
 PosixPath('data/eyecolor/cataracts.txt'),
 PosixPath('data/eyecolor/BCC_eyelid'),
 PosixPath('data/eyecolor/orbital_cellulitis'),
 PosixPath('data/eyecolor/heterochromia.txt'),
 PosixPath('data/eyecolor/heterochromia'),
 PosixPath('data/eyecolor/BCC_eyelid.txt'),
 PosixPath('data/eyecolor/models'),
 PosixPath('data/eyecolor/.ipynb_checkpoints'),
 PosixPath('data/eyecolor/conjunctivitis.txt'),
 PosixPath('data/eyecolor/cleaned.csv')]

Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.

[screenshot: uploaded file]

Download images

Now you will need to download your images from their respective urls.

fast.ai has a function that allows you to do just that. You just have to specify the urls filename and the destination folder, and this function will download and save all images that can be opened; any image that fails to open will not be saved.

Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.

You will need to run this line once for every category.

In [27]:
classes = ['heterochromia','cataracts','conjunctivitis','orbital_cellulitis','strabismus','BCC_eyelid']

Do not run this line:

download_images(path/file, dest, max_pics=200)

Do not run this line. If you have problems downloading, try with max_workers=0 to see exceptions:

download_images(path/file, dest, max_pics=20, max_workers=0)
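Since each category's folder and urls file share the same name, a minimal sketch that downloads all categories in one loop (assuming every '<class>.txt' file has been uploaded to path) would be:

In [ ]:
# One download pass per category; images land in that category's folder
for c in classes:
    download_images(path/f'{c}.txt', path/c, max_pics=200)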

Then we can remove any images that can't be opened:

In [29]:
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=500)
heterochromia
100.00% [161/161 00:00<00:00]
cataracts
100.00% [122/122 00:00<00:00]
conjunctivitis
100.00% [134/134 00:00<00:00]
orbital_cellulitis
100.00% [66/66 00:00<00:00]
strabismus
100.00% [25/25 00:00<00:00]
BCC_eyelid
100.00% [128/128 00:00<00:00]
Image data/eyecolor/BCC_eyelid/00000125.gif has 1 instead of 3 channels

View data


In [30]:
path
Out[30]:
PosixPath('data/eyecolor')
In [31]:
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
        ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
In [25]:
#If you already cleaned your data, run this cell instead of the one before
np.random.seed(42)
data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
        ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)

Good! Let's take a look at some of our pictures then.

In [32]:
data.classes
Out[32]:
['BCC_eyelid',
 'cataracts',
 'conjunctivitis',
 'heterochromia',
 'orbital_cellulitis',
 'strabismus']
In [33]:
data.show_batch(rows=3, figsize=(7,8))
In [34]:
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
Out[34]:
(['BCC_eyelid',
  'cataracts',
  'conjunctivitis',
  'heterochromia',
  'orbital_cellulitis',
  'strabismus'],
 6,
 508,
 127)

Train model

In [35]:
learn = create_cnn(data, models.resnet34, metrics=error_rate)
In [37]:
learn.fit_one_cycle(5)
Total time: 00:28

epoch train_loss valid_loss error_rate
1 0.739979 0.752978 0.299213
2 0.733769 0.763614 0.275591
3 0.704961 0.747587 0.267717
4 0.671372 0.720132 0.259842
5 0.625559 0.722971 0.259842
In [38]:
learn.save('stage-1')
In [32]:
learn.unfreeze()
In [26]:
learn.lr_find()
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
In [27]:
learn.recorder.plot()
Min numerical gradient: 3.98E-04
In [28]:
learn.fit_one_cycle(2, max_lr=slice(1e-4,1e-3))
Total time: 00:13

epoch train_loss valid_loss error_rate
1 0.580512 1.209677 0.416667
2 0.460311 1.013116 0.370370
In [29]:
learn.save('stage-2')

Interpretation

In [33]:
learn.load('stage-1');
In [39]:
interp = ClassificationInterpretation.from_learner(learn)
In [40]:
interp.plot_confusion_matrix()
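To see which individual images the model is most confused about, you can also plot the highest-loss images (a standard fastai call, not run in this notebook):

In [ ]:
# Show the 9 images with the highest loss: prediction / actual / loss / probability
interp.plot_top_losses(9, figsize=(15,11))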

Cleaning Up

Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.

Using the ImageCleaner widget from fastai.widgets we can prune our top losses, removing photos that don't belong.

In [20]:
from fastai.widgets import *

First we need to get the file paths from our top_losses. We can do this with .from_toplosses. We then feed the top losses indexes and corresponding dataset to ImageCleaner.

Notice that the widget will not delete images directly from disk; it will create a new csv file, cleaned.csv, from which you can create a new ImageDataBunch with the corrected labels to continue training your model.

In [23]:
ds, idxs = DatasetFormatter().from_toplosses(learn, ds_type=DatasetType.Valid)
In [24]:
ImageCleaner(ds, idxs, path)

Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from top_losses.

You can also find duplicates in your dataset and delete them! To do this, you need to run .from_similars to get the potential duplicates' ids and then run ImageCleaner with duplicates=True. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.

In [22]:
ds, idxs = DatasetFormatter().from_similars(learn, ds_type=DatasetType.Valid)
Getting activations...
100.00% [10/10 00:06<00:00]
Computing similarities...
In [24]:
ImageCleaner(ds, idxs, path, duplicates=True)

Remember to recreate your ImageDataBunch from your cleaned.csv to include the changes you made in your data!

Putting your model in production

First things first, let's export the content of our Learner object for production:

In [41]:
learn.export()

This will create a file named 'export.pkl' in the directory where we were working, containing everything we need to deploy our model (the model, the weights, but also some metadata like the classes or the transforms/normalization used).

You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU, that happens automatically. You can test your model on CPU like so:

In [42]:
defaults.device = torch.device('cpu')
In [56]:
# Pick a random image from the validation set to test inference
choice = random.choice(data.valid_ds.x.items)
In [43]:
#folder = random.choice(classes)
#folder_path = path/folder

#randomchoice = random.choice(os.listdir(folder_path))
#randomchoice
In [57]:
img = open_image(choice)
img
Out[57]:

We create our Learner in the production environment like this; just make sure that path contains the file 'export.pkl' from before.

In [45]:
learn = load_learner(path)
In [58]:
pred_class,pred_idx,outputs = learn.predict(img)
pred_class, pred_idx, outputs
Out[58]:
(Category BCC_eyelid,
 tensor(0),
 tensor([0.8797, 0.0788, 0.0366, 0.0026, 0.0012, 0.0011]))
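Since outputs holds one probability per class, you can pair each class name with its probability for readability; a minimal sketch (assuming learn.data.classes is available on the exported learner, as in the route below):

In [ ]:
# Pair class names with predicted probabilities, highest first
sorted(zip(learn.data.classes, map(float, outputs)), key=lambda p: p[1], reverse=True)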

So you might create a route something like this (thanks to Simon Willison for the structure of this code):

from io import BytesIO

@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    # Fetch the image from the given url and run it through the model
    img_bytes = await get_bytes(request.query_params["url"])
    img = open_image(BytesIO(img_bytes))
    _,_,losses = learn.predict(img)
    return JSONResponse({
        "predictions": sorted(
            zip(learn.data.classes, map(float, losses)),
            key=lambda p: p[1],
            reverse=True
        )
    })

(This example is for the Starlette web app toolkit.)
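Note that get_bytes is not defined in this notebook; a minimal sketch using aiohttp (an assumption — any async HTTP client would work) is:

import aiohttp

async def get_bytes(url):
    # Asynchronously fetch the raw bytes at `url`
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()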

Things that can go wrong

  • Most of the time things will train fine with the defaults
  • There's not much you really need to tune (despite what you've heard!)
  • The most likely culprits are
    • Learning rate
    • Number of epochs

Learning rate (LR) too high

In [ ]:
learn = create_cnn(data, models.resnet34, metrics=error_rate)
In [ ]:
learn.fit_one_cycle(1, max_lr=0.5)

Learning rate (LR) too low

In [ ]:
learn = create_cnn(data, models.resnet34, metrics=error_rate)

Previously we had this result:

Total time: 00:57
epoch  train_loss  valid_loss  error_rate
1      1.030236    0.179226    0.028369    (00:14)
2      0.561508    0.055464    0.014184    (00:13)
3      0.396103    0.053801    0.014184    (00:13)
4      0.316883    0.050197    0.021277    (00:15)
In [ ]:
learn.fit_one_cycle(5, max_lr=1e-5)
In [ ]:
learn.recorder.plot_losses()

As well as taking a really long time, this gives the model too many looks at each image, so it may overfit.

Too few epochs

In [ ]:
learn = create_cnn(data, models.resnet34, metrics=error_rate, pretrained=False)
In [ ]:
learn.fit_one_cycle(1)

Too many epochs

In [ ]:
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
        ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1,
                               max_lighting=0, max_warp=0),
        size=224, num_workers=4).normalize(imagenet_stats)
In [ ]:
learn = create_cnn(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()
In [ ]:
learn.fit_one_cycle(40, slice(1e-6,1e-4))