
FaceDetector

FaceDetector lets you use the power of the Google Mobile Vision framework to detect faces in images.

  • Android recognizes faces only when they are aligned with the interface (the top of the head pointing toward the top of the interface).

Check out a full example at expo/camerja. You can try it with Expo at @community/camerja.
FaceDetector is used in the Gallery screen, where it detects faces in saved photos and shows the probability that each face is smiling.

Other modules, such as Camera, are able to use this FaceDetector.

To configure the detector's behavior, modules pass a settings object, which is then interpreted by this module. The object should have the following shape:
  • mode? (FaceDetector.Constants.Mode) -- Whether to detect faces in fast or accurate mode. Use FaceDetector.Constants.Mode.{fast, accurate}.
  • detectLandmarks? (FaceDetector.Constants.Landmarks) -- Whether to detect and return landmarks positions on the face (ears, eyes, mouth, cheeks, nose). Use FaceDetector.Constants.Landmarks.{all, none}.
  • runClassifications? (FaceDetector.Constants.Classifications) -- Whether to run additional classifications on detected faces (smiling probability, open eye probabilities). Use FaceDetector.Constants.Classifications.{all, none}.
For example, you could use the following snippet to detect faces in fast mode, without detecting landmarks or running classifications:
import { Camera, FaceDetector } from 'expo';

<Camera
  // ... other props
  onFacesDetected={this.handleFacesDetected}
  faceDetectorSettings={{
    mode: FaceDetector.Constants.Mode.fast,
    detectLandmarks: FaceDetector.Constants.Landmarks.none,
    runClassifications: FaceDetector.Constants.Classifications.none,
  }}
/>

While detecting faces, FaceDetector emits events carrying an object of the following shape:
  • faces (array) -- array of face objects:
    • faceID (number) -- a face identifier (used for tracking, if the same face appears on consecutive frames it will have the same faceID).
    • bounds (object) -- an object containing:
      • origin ({ x: number, y: number }) -- position of the top left corner of a square containing the face in view coordinates,
      • size ({ width: number, height: number }) -- size of the square containing the face in view coordinates,
    • rollAngle (number) -- roll angle of the face (bank),
    • yawAngle (number) -- yaw angle of the face (heading, turning head left or right),
    • smilingProbability (number) -- probability that the face is smiling,
    • leftEarPosition ({ x: number, y: number}) -- position of the left ear in view coordinates,
    • rightEarPosition ({ x: number, y: number}) -- position of the right ear in view coordinates,
    • leftEyePosition ({ x: number, y: number}) -- position of the left eye in view coordinates,
    • leftEyeOpenProbability (number) -- probability that the left eye is open,
    • rightEyePosition ({ x: number, y: number}) -- position of the right eye in view coordinates,
    • rightEyeOpenProbability (number) -- probability that the right eye is open,
    • leftCheekPosition ({ x: number, y: number}) -- position of the left cheek in view coordinates,
    • rightCheekPosition ({ x: number, y: number}) -- position of the right cheek in view coordinates,
    • mouthPosition ({ x: number, y: number}) -- position of the center of the mouth in view coordinates,
    • leftMouthPosition ({ x: number, y: number}) -- position of the left edge of the mouth in view coordinates,
    • rightMouthPosition ({ x: number, y: number}) -- position of the right edge of the mouth in view coordinates,
    • noseBasePosition ({ x: number, y: number}) -- position of the nose base in view coordinates.
smilingProbability, leftEyeOpenProbability and rightEyeOpenProbability are returned only if the runClassifications setting is set to FaceDetector.Constants.Classifications.all.
Positions of face landmarks are returned only if the detectLandmarks setting is set to FaceDetector.Constants.Landmarks.all.
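
To make the event shape concrete, here is a minimal sketch of the handleFacesDetected handler wired up in the Camera snippet above. The logging logic is only an illustrative assumption, and the classification field it reads is present only when runClassifications is set to all:

// A sketch of the handler referenced by onFacesDetected above.
handleFacesDetected = ({ faces }) => {
  faces.forEach(face => {
    // faceID stays stable across consecutive frames for the same face
    console.log(`Face ${face.faceID} at`, face.bounds.origin);
    // smilingProbability is undefined unless runClassifications was
    // set to FaceDetector.Constants.Classifications.all
    if (face.smilingProbability !== undefined) {
      console.log(`Smiling probability: ${face.smilingProbability}`);
    }
  });
};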

To use the methods that FaceDetector exposes, you only need to import the module. (In ejected apps on iOS, face detection is supported only if you add the FaceDetector subspec to your project. Refer to Adding the Payments Module on iOS for an example of adding a subspec to your ejected project.)
import { FaceDetector } from 'expo';

// ...
detectFaces = async (imageUri) => {
  // Detect faces in the image at imageUri, trading accuracy for speed
  const options = { mode: FaceDetector.Constants.Mode.fast };
  return await FaceDetector.detectFacesAsync(imageUri, options);
};
// ...

FaceDetector.detectFacesAsync(uri, options)
Detects faces in the provided image.

  • uri (string) -- file:// URI to the image.
  • options? (object) -- A map of options:
    • mode? (FaceDetector.Constants.Mode) -- Whether to detect faces in fast or accurate mode. Use FaceDetector.Constants.Mode.{fast, accurate}.
    • detectLandmarks? (FaceDetector.Constants.Landmarks) -- Whether to detect and return landmarks positions on the face (ears, eyes, mouth, cheeks, nose). Use FaceDetector.Constants.Landmarks.{all, none}.
    • runClassifications? (FaceDetector.Constants.Classifications) -- Whether to run additional classifications on detected faces (smiling probability, open eye probabilities). Use FaceDetector.Constants.Classifications.{all, none}.

Returns a Promise that resolves to an object: { faces, image }, where faces is an array of the detected faces (see the schema below) and image is an object containing:
  • uri (string) -- URI of the image,
  • width (number) -- width of the image in pixels,
  • height (number) -- height of the image in pixels,
  • orientation (number) -- orientation of the image (value conforms to the EXIF orientation tag standard).
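
Putting the call and its result together, here is a hedged sketch of detecting smiles in a saved photo. The countSmiles name, the 0.7 threshold, and the assumption that photoUri is a readable file:// URI are all illustrative, not part of the API:

import { FaceDetector } from 'expo';

// Hypothetical helper: counts faces that are likely smiling in the
// image at photoUri (assumed to be a readable file:// URI).
async function countSmiles(photoUri) {
  const { faces, image } = await FaceDetector.detectFacesAsync(photoUri, {
    mode: FaceDetector.Constants.Mode.fast,
    detectLandmarks: FaceDetector.Constants.Landmarks.none,
    runClassifications: FaceDetector.Constants.Classifications.all,
  });
  console.log(`Found ${faces.length} face(s) in a ${image.width}x${image.height} image`);
  // smilingProbability is present because runClassifications is set to all
  return faces.filter(face => face.smilingProbability > 0.7).length;
}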
Detected face schema
A detected face is an object containing at most the following fields:
  • bounds (object) -- an object containing:
    • origin ({ x: number, y: number }) -- position of the top left corner of a square containing the face in image coordinates,
    • size ({ width: number, height: number }) -- size of the square containing the face in image coordinates,
  • rollAngle (number) -- roll angle of the face (bank),
  • yawAngle (number) -- yaw angle of the face (heading, turning head left or right),
  • smilingProbability (number) -- probability that the face is smiling,
  • leftEarPosition ({ x: number, y: number}) -- position of the left ear in image coordinates,
  • rightEarPosition ({ x: number, y: number}) -- position of the right ear in image coordinates,
  • leftEyePosition ({ x: number, y: number}) -- position of the left eye in image coordinates,
  • leftEyeOpenProbability (number) -- probability that the left eye is open,
  • rightEyePosition ({ x: number, y: number}) -- position of the right eye in image coordinates,
  • rightEyeOpenProbability (number) -- probability that the right eye is open,
  • leftCheekPosition ({ x: number, y: number}) -- position of the left cheek in image coordinates,
  • rightCheekPosition ({ x: number, y: number}) -- position of the right cheek in image coordinates,
  • mouthPosition ({ x: number, y: number}) -- position of the center of the mouth in image coordinates,
  • leftMouthPosition ({ x: number, y: number}) -- position of the left edge of the mouth in image coordinates,
  • rightMouthPosition ({ x: number, y: number}) -- position of the right edge of the mouth in image coordinates,
  • noseBasePosition ({ x: number, y: number}) -- position of the nose base in image coordinates.
smilingProbability, leftEyeOpenProbability and rightEyeOpenProbability are returned only if the runClassifications option is set to FaceDetector.Constants.Classifications.all.
Positions of face landmarks are returned only if the detectLandmarks option is set to FaceDetector.Constants.Landmarks.all.
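
Because these fields are optional, consuming code should guard before reading them. A minimal sketch, assuming faces is the array returned by detectFacesAsync:

// Landmark and classification fields may be absent depending on the
// detectLandmarks and runClassifications options, so guard each access.
faces.forEach(face => {
  if (face.mouthPosition) {
    console.log('Mouth center:', face.mouthPosition.x, face.mouthPosition.y);
  }
  if (face.leftEyeOpenProbability !== undefined &&
      face.rightEyeOpenProbability !== undefined) {
    const eyesOpen =
      face.leftEyeOpenProbability > 0.5 && face.rightEyeOpenProbability > 0.5;
    console.log(eyesOpen ? 'Eyes look open' : 'Eyes look closed');
  }
});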