Here’s a brief rundown of the key components of this project:
After uploading a photo, but before it’s usable by the ML models, there are some preparatory steps. The image has to be converted to a
Tensor of the appropriate shape and structure – i.e., normalizing pixel values from [0, 255] to [-1, 1] and resizing the dimensions to match the model’s expected input. In my case, I also had to convert color photos to grayscale (because the emotion model was trained on black-and-white images). These helper functions can be found here.
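The actual helpers are linked above; as a rough sketch of the two transforms in plain JavaScript (the function names are mine, not the project’s), they might look like:

```javascript
// Normalize 8-bit pixel values from [0, 255] to [-1, 1],
// the range the emotion model expects (hypothetical helper).
function normalizePixels(pixels) {
  return pixels.map((p) => p / 127.5 - 1);
}

// Collapse an RGBA pixel buffer to one grayscale value per pixel
// using the standard luminance weights (the emotion model takes
// single-channel input).
function rgbaToGrayscale(rgba) {
  const gray = new Array(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const r = rgba[4 * i];
    const g = rgba[4 * i + 1];
    const b = rgba[4 * i + 2];
    gray[i] = 0.299 * r + 0.587 * g + 0.114 * b;
  }
  return gray;
}
```

In the real app these operations run on Tensors (e.g., via `tf.div`/`tf.sub` and `tf.image.resizeBilinear`), but the arithmetic is the same.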
Before we can determine emotions, we have to find the people / faces in the image. For this, I’m using face-api.js, a library built on top of TensorFlow.js for face detection / recognition. The library offers a few models to choose from (e.g., SSD MobileNet, Tiny YOLO); after some experimentation, I went with MTCNN (Multi-task Cascaded Convolutional Neural Networks). See this paper for more details; my very lightweight wrapper around this face-api model and its utilities is here.
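face-api.js detection results carry a bounding box and a confidence score, so a typical post-detection step is filtering out weak hits before classifying each face. A minimal sketch, assuming detection objects shaped like `{ box, score }` (the threshold value is my assumption, not the project’s):

```javascript
// Keep only confident detections and sort them largest-first,
// so the most prominent face is processed first. Works on plain
// objects shaped like face-api.js detection results.
function selectFaces(detections, minScore = 0.7) {
  return detections
    .filter((d) => d.score >= minScore)
    .sort(
      (a, b) =>
        b.box.width * b.box.height - a.box.width * a.box.height
    );
}
```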
Now it’s time for the primary activity – classifying emotions. For this, I’m using an open-source CNN model trained on the FER-2013 dataset, which contains images of faces labeled with one of seven emotional states (Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral). This model was built in Python, so I used the nifty tfjs-converter tool to convert the Keras model to a web-friendly format. The code for that is here.
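The converted model outputs a 7-way probability vector, one score per class; picking the predicted emotion is just an argmax over that vector. A sketch (the class ordering below follows the list above, but the actual ordering of the converted model’s outputs is an assumption):

```javascript
// FER-2013 classes, assumed to match the model's output order.
const EMOTIONS = [
  'Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral',
];

// Map the model's 7-way probability vector to its top label (argmax).
function topEmotion(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return EMOTIONS[best];
}
```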
Wiring up the front-end
Finally, the site is wired up with React (main component here). For drawing the boxes around the faces and adding emojis to the image, I’m using an overlaid
<canvas> element. One thing to note: the canvas must be cleared and redrawn whenever the image dimensions change (e.g., when you resize your browser window).
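Part of that redraw is rescaling the detection boxes, which are in the coordinate space of the image the model saw, to the currently displayed size. A sketch of the coordinate math (the object shapes here are my assumption):

```javascript
// Rescale a box from the image's natural size to its displayed
// size, so redrawn canvas overlays line up after a window resize.
function scaleBox(box, naturalW, naturalH, displayW, displayH) {
  const sx = displayW / naturalW;
  const sy = displayH / naturalH;
  return {
    x: box.x * sx,
    y: box.y * sy,
    width: box.width * sx,
    height: box.height * sy,
  };
}
```

With the scaled box in hand, the draw step is the usual `ctx.clearRect(...)` followed by `ctx.strokeRect(x, y, width, height)`.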
That’s about it. Check it out and let me know what you think!