This is a notebook demonstrating some cool ways to swap two faces in an image without using Photoshop. It is heavily inspired by the great Satya Mallick, and you can find the original (and far better) tutorial here.
A word to the wise: Please do not use this repository for any unethical purposes.
# install dlib
!apt update
!apt install -y cmake
!pip install dlib
Reading package lists... Done Building dependency tree Reading state information... Done cmake is already the newest version (3.10.2-1ubuntu2.18.04.1). 0 upgraded, 0 newly installed, 0 to remove and 61 not upgraded. Requirement already satisfied: dlib in /usr/local/lib/python3.6/dist-packages (19.18.0)
# cloning the repository for helper functions
# note that this repository contains files originally found here:
# https://github.com/spmallick/PyImageConf2018
! git clone https://github.com/ritwikraha/face-swapper.git
Cloning into 'face-swapper'... remote: Enumerating objects: 6, done. remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 6 Unpacking objects: 100% (6/6), done.
# changing directory into the repository
cd face-swapper
/content/face-swapper
# import necessary modules
# note that to import faceBlendCommon
# you need to have faceBlendCommon.py file in your working directory
import sys, cv2, dlib, time
import numpy as np
import faceBlendCommon as fbc
import matplotlib.pyplot as plt
# mount google drive to access files
from google.colab import drive
drive.mount('/gdrive')
%cd /gdrive
Enter your authorization code: ·········· Mounted at /gdrive /gdrive
A rule of thumb we graphic designers follow when blending two images is to pick images that inherently line up well together. If you take two faces that look nothing like each other, as I have done, the resulting face swap will most likely be ineffective. This happens for a number of reasons: camera angle, exposure, facial structure, and much more.
img1 = cv2.imread('/gdrive/My Drive/Colab Notebooks/images/leo2.jpg')
img2 = cv2.imread('/gdrive/My Drive/Colab Notebooks/images/Ritwik_Raha_officialjpg.jpg')
I1_dis = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
I2_dis = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
img1Warped = np.copy(img2)
# Display Images
# the skin color and lighting can be matched
# it simply requires some basic preprocessing such as adjusting temperature and exposure
plt.figure(figsize = (20,10))
plt.subplot(121); plt.imshow(I1_dis); plt.axis('off');
plt.subplot(122); plt.imshow(I2_dis); plt.axis('off');
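The comments above note that skin color and lighting can be matched with basic preprocessing such as adjusting temperature and exposure. A minimal sketch of one such step, matching mean brightness with a simple gain (a hypothetical helper using NumPy only, not part of this notebook's pipeline):

```python
import numpy as np

def match_brightness(src, ref):
    """Scale src so its mean intensity matches ref's (a crude exposure match)."""
    src = src.astype(np.float64)
    gain = ref.mean() / max(src.mean(), 1e-8)  # guard against division by zero
    return np.clip(src * gain, 0, 255).astype(np.uint8)

# toy 2x2 grayscale "images": one dark, one bright
dark   = np.array([[50, 60], [70, 80]], dtype=np.uint8)
bright = np.array([[150, 160], [170, 180]], dtype=np.uint8)
adjusted = match_brightness(dark, bright)
# adjusted now has roughly the same mean intensity as `bright`
```

A real pipeline would adjust channels separately (for color temperature) and clip highlights more carefully, but the idea is the same.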
We use the pretrained shape_predictor_68_face_landmarks.dat model to extract the most prominent landmarks from the face. Every face has 68 landmark points that we can use to define alignment, movement, and features. Remember Apple's Memoji? Well, find those points and you can create your own Memoji. For an example of the code, you can look at my basic implementation here.
# Initialize the dlib facial landmark detector
# a copy of the .dat file was saved in my drive
# please check the github repository for a downloadable version
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("/gdrive/My Drive/Colab Notebooks/shape_predictor_68_face_landmarks.dat")
# Read array of corresponding points
points1 = fbc.getLandmarks(detector, predictor, img1)
points2 = fbc.getLandmarks(detector, predictor, img2)
# visualizing the landmarks by drawing little dots
# cv2.circle has the signature:
# image = cv2.circle(image, center_coordinates, radius, color, thickness)
# this figure just shows the points on the face
img_Temp = I2_dis.copy()
for p in points2:
    cv2.circle(img_Temp, p, 5, (255,0,0), 5)
plt.figure(figsize = (10,10)); plt.imshow(img_Temp); plt.axis('off');
Although we have 68 points, we do not need all of them to swap faces. You see, Leonardo DiCaprio has a beautiful nose and I do not. That information is encoded in the 68 points we extracted, but it is not needed for the swap. We need a boundary: a mask of Leo's entire face that can be put over mine. So what we need, in simple terms, is to define a boundary or region on my face that can simply be replaced.
# so we find the convex hull
# the Convex Hull of a shape or a group of points
# is a tight-fitting convex boundary around the points or the shape
hullIndex = cv2.convexHull(np.array(points2).astype(np.int32), returnPoints=False)
# add .astype(np.int32) to fix TypeError: data type = 9 not supported
# Create convex hull lists
hull1 = []
hull2 = []
for i in range(0, len(hullIndex)):
    hull1.append(points1[hullIndex[i][0]])
    hull2.append(points2[hullIndex[i][0]])
# plotting the line
# to display convex hull
# this is the boundary which will be used to create mask
img_Temp = I2_dis.copy()
numPoints = len(hull2)
for i in range(0, numPoints):
    cv2.line(img_Temp, hull2[i], hull2[(i+1)%numPoints], (255,0,0), 3)
    cv2.circle(img_Temp, hull2[i], 5, (0,0,255), 5)
plt.figure(figsize = (10,10)); plt.imshow(img_Temp); plt.axis('off');
Remember where I talked about creating a replaceable region for my face? Well, this is the mask. It's a white region over my face where everything else is just black. This can easily be combined with other images to create a blend: under the black region the mask multiplies pixels to zero (0 × anything = 0), while under the white region it preserves them (1 × anything = anything).
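The mask arithmetic above can be shown on a 1-D toy example (illustrative only, values are made up): where the mask is 1 the source pixel wins, and where it is 0 the target pixel survives.

```python
import numpy as np

src  = np.array([10, 20, 30, 40], dtype=np.float64)  # e.g. Leo's face
dst  = np.array([90, 80, 70, 60], dtype=np.float64)  # e.g. my face
mask = np.array([ 0,  1,  1,  0], dtype=np.float64)  # white region over the face

# composite: source inside the mask, target outside it
out = mask * src + (1 - mask) * dst
```

Seamless cloning later in the notebook does something far smarter at the mask boundary, but this is the core idea of mask-based compositing.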
# Calculate Mask for Seamless cloning
# create an empty list to store convex hull coordinates
hull8U = []
for i in range(0, len(hull2)):
    hull8U.append((hull2[i][0], hull2[i][1]))
mask = np.zeros(img2.shape, dtype=img2.dtype)
cv2.fillConvexPoly(mask, np.int32(hull8U), (255, 255, 255))
# finding the Centroid
# luckily cv2.moments() allows us to
# find the best moments in the life of that image
# just kidding!
# Image Moment is a particular weighted average of image pixel intensities
m = cv2.moments(mask[:,:,1])
# The centroid is given by the formula
# C_x = m10/m00
# C_y = m01/m00
center = (int(m['m10']/m['m00']), int(m['m01']/m['m00']))
plt.figure(figsize = (10,10)); plt.imshow(mask); plt.axis('off');
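For a binary mask the raw moments can be written out by hand: m00 is the white-pixel count, and m10 and m01 sum the x and y coordinates of the white pixels. A pure-NumPy check of the centroid formula (no cv2 needed; the mask here is a made-up toy):

```python
import numpy as np

# binary mask: a 2x2 white square with its top-left corner at (x=3, y=1)
mask = np.zeros((6, 8), dtype=np.float64)
mask[1:3, 3:5] = 1.0

ys, xs = np.nonzero(mask)
m00 = mask.sum()   # area (white-pixel count)
m10 = xs.sum()     # sum of x coordinates of white pixels
m01 = ys.sum()     # sum of y coordinates of white pixels

# C_x = m10/m00, C_y = m01/m00
cx, cy = m10 / m00, m01 / m00
# the centroid lands at (3.5, 1.5), the middle of the square
```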
The Delaunay triangulation for a given set P of discrete points in a plane is a triangulation DT such that no point in P is inside the circumcircle of any triangle in DT. If that makes no sense, try this link. One way to interpret it is that the face is now divided into unique segments. These segments will later be stretched and transformed so that one face fits another. Imagine for a second that all our faces are made of differently shaped triangles. To change a face, we simply need to stretch and push some triangles until they fit. A rather fun way of visualizing this is in this video.
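The empty-circumcircle property can be checked by hand on four toy points (pure Python, illustrative only; the circumcircle formula is the standard one, not something from this notebook's helpers). Splitting the quadrilateral along one diagonal is Delaunay-legal, splitting along the other is not:

```python
def circumcircle(a, b, c):
    """Circumcenter and squared radius of triangle abc (2-D points)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux)**2 + (ay - uy)**2
    return (ux, uy), r2

A, B, C, D = (0, 0), (3, 0), (0, 3), (4, 4)

# split along diagonal BC: the circumcircle of ABC must not contain D
(ux, uy), r2 = circumcircle(A, B, C)
d_out = (D[0] - ux)**2 + (D[1] - uy)**2 > r2   # Delaunay-legal

# split along diagonal AD instead: the circumcircle of ABD contains C
(ux, uy), r2 = circumcircle(A, B, D)
c_in = (C[0] - ux)**2 + (C[1] - uy)**2 < r2    # violates the property
```

This is exactly the test a Delaunay algorithm applies to decide which diagonal to keep; `fbc.calculateDelaunayTriangles` below does the real work over all the hull points.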
# Find Delaunay triangulation for convex hull points
size2 = img2.shape
rect = (0, 0, size2[1], size2[0])
dt = fbc.calculateDelaunayTriangles(rect, hull2)
# if no Delaunay triangles were found, abort
if len(dt) == 0:
    quit()
# displaying the triangulation
# by creating polygonal lines
# from the triangle coordinates
img_Temp1 = I1_dis.copy()
img_Temp2 = I2_dis.copy()
tris1 = []
tris2 = []
for i in range(0, len(dt)):
    tri1 = []
    tri2 = []
    for j in range(0, 3):
        tri1.append(hull1[dt[i][j]])
        tri2.append(hull2[dt[i][j]])
    tris1.append(tri1)
    tris2.append(tri2)
cv2.polylines(img_Temp1,np.array(tris1).astype(np.int32),True,(0,0,255),2);
cv2.polylines(img_Temp2,np.array(tris2).astype(np.int32),True,(0,0,255),2);
plt.figure(figsize = (15,10));
plt.subplot(121); plt.imshow(img_Temp1); plt.axis('off');
plt.subplot(122); plt.imshow(img_Temp2); plt.axis('off');
Remember the portion where we talked about stretching the triangles and squeezing them to fit another face? This is the part where we do just that. OpenCV allows two kinds of geometric transformations: perspective and affine. An affine transform is applied to match the facial features of the first image to those of the second; lengths and angles are not preserved. Full examples are shown here.
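Three corresponding points pin down an affine transform exactly, which is why the warp works triangle by triangle. A small NumPy sketch of what solving for that 2×3 matrix looks like (the points here are made up; cv2.getAffineTransform does the equivalent under the hood):

```python
import numpy as np

# three corresponding points: we want [x', y'] = M @ [x, y, 1]
src = np.array([[0, 0], [1, 0], [0, 1]], dtype=np.float64)
dst = np.array([[2, 1], [4, 1], [2, 3]], dtype=np.float64)

A = np.hstack([src, np.ones((3, 1))])  # 3x3 matrix of [x, y, 1] rows
M = np.linalg.solve(A, dst).T          # 2x3 affine matrix

# this particular transform scales by 2 and shifts by (2, 1);
# apply it to a point inside the source triangle
p = M @ np.array([0.5, 0.5, 1.0])
# p == [3.0, 2.0]
```

`fbc.warpTriangle` below does this per triangle and additionally masks the warp so only pixels inside the destination triangle are written.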
# apply affine transformation to Delaunay triangles
for i in range(0, len(tris1)):
    fbc.warpTriangle(img1, img1Warped, tris1[i], tris2[i])
plt.figure(figsize=(10,10));
# The : means "take everything in this dimension"
# and ::-1 means "take everything in this dimension, but backwards".
# an image array has three dimensions: height, width and color,
# so this flips the color channels from BGR to RGB
plt.imshow(np.uint8(img1Warped)[:,:,::-1]); plt.axis('off');
This is blending, where the target image matches the texture, color, brightness, and other factors of the source image. A plain background is multiplied with the background of the target image and the texture gets transferred easily. Learn more about seamless cloning in Satya's blog.
# clone seamlessly.
# let's just say this is Photoshop-level blending
# NORMAL_CLONE would paste the warped face as-is inside the mask;
# MIXED_CLONE (used here) mixes source and target gradients, hiding seams better
output = cv2.seamlessClone(np.uint8(img1Warped), img2, mask, center, cv2.MIXED_CLONE)
# visualizing the result alongside the original images
plt.figure(figsize=(20,5))
plt.subplot(131); plt.imshow(np.uint8(img1)[:,:,::-1]); plt.axis('off');
plt.subplot(132); plt.imshow(np.uint8(img2)[:,:,::-1]); plt.axis('off');
plt.subplot(133); plt.imshow(output[:,:,::-1]); plt.axis('off');