This walkthrough shows you how to use the Cognitive Services Face API to detect faces in an image. The API also returns various attributes, such as the gender and age of each person. The sample images used in this walkthrough are from the How-Old Robot, which uses the same APIs.
You can run this example as a Jupyter notebook on MyBinder by clicking on the launch Binder badge:
For more information, see the REST API Reference.
You must have a Cognitive Services API account with Face API. The free trial is sufficient for this quickstart. You need the subscription key provided when you activate your free trial, or you may use a paid subscription key from your Azure dashboard.
To continue with this walkthrough, replace subscription_key with a valid subscription key.
subscription_key = None
assert subscription_key
Next, verify face_api_url and make sure it corresponds to the region you used when generating the subscription key. If you are using a trial key, you don't need to make any changes.
face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
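If your subscription key was issued for a different region, the endpoint host changes accordingly. A minimal sketch of building the URL from a region name, where 'westus2' is only an example and should be replaced with the region shown for your subscription in the Azure portal:

```python
# 'westus2' is an example region; substitute the region of your
# own subscription. The URL pattern matches the endpoint above.
azure_region = 'westus2'
face_api_url = 'https://{}.api.cognitive.microsoft.com/face/v1.0/detect'.format(azure_region)
```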
Here is the URL of the image. You can experiment with different images by changing image_url to point to a different image and rerunning this notebook.
image_url = 'https://how-old.net/Images/faces2/main007.jpg'
The next few lines of code call the Face API to detect the faces in the image. In this instance, the image is specified via a publicly visible URL. You can also pass an image directly as part of the request body. For more information, see the API reference.
import requests
from IPython.display import HTML
headers = { 'Ocp-Apim-Subscription-Key': subscription_key }
params = {
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
}
response = requests.post(face_api_url, params=params, headers=headers, json={"url": image_url})
faces = response.json()
HTML("<font size=5>Detected <font color='blue'>%d</font> faces in the image</font>"%len(faces))
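As noted above, the API reference also allows the image to be sent as binary data in the request body with a Content-Type of application/octet-stream instead of a JSON URL payload. The sketch below only prepares such a request for inspection without sending it; the key, endpoint, and image bytes are placeholders standing in for the values defined earlier in this notebook.

```python
import requests

# Placeholders; in the notebook these come from the cells above.
subscription_key = 'your-subscription-key'
face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'

# For a local image, the bytes go directly in the request body and the
# Content-Type header is set to application/octet-stream.
headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-Type': 'application/octet-stream',
}
image_bytes = b'\xff\xd8\xff'  # stand-in for open('photo.jpg', 'rb').read()

# Prepare (but do not send) the request to see what would go on the wire.
request = requests.Request('POST', face_api_url,
                           params={'returnFaceAttributes': 'age,gender'},
                           headers=headers, data=image_bytes)
prepared = request.prepare()
```

Sending the prepared request with a real key and real image bytes (e.g. via requests.Session().send(prepared)) returns the same JSON face list as the URL-based call above.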
Finally, the face information can be overlaid on the original image using the matplotlib library in Python.
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from matplotlib import patches
from io import BytesIO
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))
plt.figure(figsize=(8,8))
ax = plt.imshow(image, alpha=0.6)
for face in faces:
    fr = face["faceRectangle"]
    fa = face["faceAttributes"]
    origin = (fr["left"], fr["top"])
    p = patches.Rectangle(origin, fr["width"], fr["height"], fill=False, linewidth=2, color='b')
    ax.axes.add_patch(p)
    plt.text(origin[0], origin[1], "%s, %d"%(fa["gender"].capitalize(), fa["age"]), fontsize=20, weight="bold", va="bottom")
_ = plt.axis("off")
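The faceRectangle fields can also be used to crop each detected face out of the image with PIL, for example to save or process the faces individually. A minimal sketch; the image and faces list below are hand-made stand-ins with the same shape as the objects used above:

```python
from PIL import Image

# Stand-ins for the image and API response used earlier in the notebook.
image = Image.new('RGB', (400, 300))
faces = [{'faceRectangle': {'left': 50, 'top': 40, 'width': 120, 'height': 150}}]

# PIL's crop box is (left, upper, right, lower), so the rectangle's
# width and height are added to its top-left corner.
crops = []
for face in faces:
    fr = face['faceRectangle']
    box = (fr['left'], fr['top'], fr['left'] + fr['width'], fr['top'] + fr['height'])
    crops.append(image.crop(box))
```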
Here are more images that can be analyzed using the same technique.
First, define a helper function, annotate_image, to annotate an image given its URL by calling the Face API.
def annotate_image(image_url):
    response = requests.post(face_api_url, params=params, headers=headers, json={"url": image_url})
    faces = response.json()

    image_file = BytesIO(requests.get(image_url).content)
    image = Image.open(image_file)

    plt.figure(figsize=(8,8))
    ax = plt.imshow(image, alpha=0.6)
    for face in faces:
        fr = face["faceRectangle"]
        fa = face["faceAttributes"]
        origin = (fr["left"], fr["top"])
        p = patches.Rectangle(origin, fr["width"], fr["height"],
                              fill=False, linewidth=2, color='b')
        ax.axes.add_patch(p)
        plt.text(origin[0], origin[1], "%s, %d"%(fa["gender"].capitalize(), fa["age"]),
                 fontsize=20, weight="bold", va="bottom")
    plt.axis("off")
You can then call annotate_image on other images. A few samples are shown below.
annotate_image("https://how-old.net/Images/faces2/main001.jpg")
annotate_image("https://how-old.net/Images/faces2/main002.jpg")
annotate_image("https://how-old.net/Images/faces2/main004.jpg")