# expo-face-detector

`expo-face-detector` lets you use the power of the Google Mobile Vision framework to detect faces on images.

| Android Device | Android Emulator | iOS Device | iOS Simulator | Web |
| --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | Pending |
## Installation

```sh
expo install expo-face-detector
```

If you're installing this in a bare React Native app, you should also follow these additional installation instructions.
## Usage

In the example below, `FaceDetector` is used in a Gallery screen: it detects faces on saved photos and shows the probability that each face is smiling.

Import `FaceDetector`:

```js
import * as FaceDetector from 'expo-face-detector';
```
### Settings

To configure the detector, pass a settings object to the `faceDetectorSettings` prop:

- **mode?** -- whether to detect faces in fast or accurate mode. Use `FaceDetector.Constants.Mode.{fast, accurate}`.
- **detectLandmarks?** -- whether to detect and return landmark positions on the face (ears, eyes, cheeks, mouth, nose). Use `FaceDetector.Constants.Landmarks.{all, none}`.
- **runClassifications?** -- whether to run additional classifications on detected faces (smiling probability, open-eye probabilities). Use `FaceDetector.Constants.Classifications.{all, none}`.
- **minDetectionInterval** (_number_) -- minimal interval in ms between two face detection events.
- **tracking** (_boolean_) -- whether to track faces between frames. If `true`, each face will have a `faceID` attribute which should be consistent across frames. Defaults to `false`.

Eg.

```js
import * as FaceDetector from 'expo-face-detector';

<Camera
  // ... other props
  onFacesDetected={this.handleFacesDetected}
  faceDetectorSettings={{
    mode: FaceDetector.Constants.Mode.fast,
    detectLandmarks: FaceDetector.Constants.Landmarks.none,
    runClassifications: FaceDetector.Constants.Classifications.none,
    minDetectionInterval: 100,
    tracking: true,
  }}
/>;
```
### Event shape

While detecting faces, `FaceDetector` will emit object events of the following shape:

- **faceID** (_number_) -- a face identifier (used for tracking; if the same face appears on consecutive frames it will have the same `faceID`),
- **bounds** (_object_) containing:
  - **origin** (`{ x: number, y: number }`) -- position of the top left corner of a square containing the face in view coordinates,
  - **size** (`{ width: number, height: number }`) -- size of the square containing the face in view coordinates,
- **smilingProbability** (_number_) -- probability that the face is smiling,
- **leftEarPosition** (`{ x: number, y: number }`) -- position of the left ear in view coordinates,
- **rightEarPosition** (`{ x: number, y: number }`) -- position of the right ear in view coordinates,
- **leftEyePosition** (`{ x: number, y: number }`) -- position of the left eye in view coordinates,
- **leftEyeOpenProbability** (_number_) -- probability that the left eye is open,
- **rightEyePosition** (`{ x: number, y: number }`) -- position of the right eye in view coordinates,
- **rightEyeOpenProbability** (_number_) -- probability that the right eye is open,
- **leftCheekPosition** (`{ x: number, y: number }`) -- position of the left cheek in view coordinates,
- **rightCheekPosition** (`{ x: number, y: number }`) -- position of the right cheek in view coordinates,
- **mouthPosition** (`{ x: number, y: number }`) -- position of the center of the mouth in view coordinates,
- **leftMouthPosition** (`{ x: number, y: number }`) -- position of the left edge of the mouth in view coordinates,
- **rightMouthPosition** (`{ x: number, y: number }`) -- position of the right edge of the mouth in view coordinates,
- **noseBasePosition** (`{ x: number, y: number }`) -- position of the nose base in view coordinates.

`smilingProbability`, `leftEyeOpenProbability` and `rightEyeOpenProbability` are returned only if the `faceDetectionClassifications` property is set to `.all`.

Positions of face landmarks are returned only if the `faceDetectionLandmarks` property is set to `.all`.
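As a sketch of consuming these events, the plain-JS helper below reads the smiling probability off each detected face. It assumes the callback receives an object with a `faces` array of objects shaped as documented above; `summarizeFaces` and the 0.7 threshold are hypothetical, not part of the API:

```javascript
// Hypothetical helper: summarize the faces carried by a detection event.
// Assumes the event has a `faces` array shaped as documented above.
const summarizeFaces = ({ faces }) =>
  faces.map((face, index) => ({
    index,
    // smilingProbability is present only when classifications are set to `all`,
    // so treat a missing value as "not smiling"
    smiling: (face.smilingProbability ?? 0) > 0.7,
  }));

// Usage with a mock event:
const summary = summarizeFaces({
  faces: [{ smilingProbability: 0.92 }, {}],
});
// summary → [{ index: 0, smiling: true }, { index: 1, smiling: false }]
```

Such a helper would typically be called from the `onFacesDetected` callback shown in the settings example.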
## Methods

To use the methods that `FaceDetector` exposes, one just has to import the module. (In ejected apps on iOS, face detection will be supported only if you add the `FaceDetector` subspec to your project. Refer to Adding the Payments Module on iOS for an example of adding a subspec to your ejected project.)

```js
import * as FaceDetector from 'expo-face-detector';

// ...
detectFaces = async imageUri => {
  const options = { mode: FaceDetector.Constants.Mode.fast };
  return await FaceDetector.detectFacesAsync(imageUri, options);
};
// ...
```
### `FaceDetector.detectFacesAsync(uri, options)`

Detects faces on a given picture.

#### Arguments

- **uri** (_string_) -- a `file://` URI to the image.
- **options?** (_object_) -- a map of detection options:
  - **mode?** -- whether to detect faces in fast or accurate mode. Use `FaceDetector.Constants.Mode.{fast, accurate}`.
  - **detectLandmarks?** -- whether to detect and return landmark positions on the face. Use `FaceDetector.Constants.Landmarks.{all, none}`.
  - **runClassifications?** -- whether to run additional classifications on detected faces. Use `FaceDetector.Constants.Classifications.{all, none}`.
#### Returns

Returns a Promise resolving to an object `{ faces, image }` where `faces` is an array of the detected faces and `image` is an object containing `uri: string` of the image, `width: number` of the image in pixels, `height: number` of the image in pixels and `orientation: number` of the image (the value conforms to the EXIF orientation tag standard).

#### Detected face schema

A detected face is an object containing at most the following fields:

- **bounds** (_object_) containing:
  - **origin** (`{ x: number, y: number }`) -- position of the top left corner of a square containing the face in image coordinates,
  - **size** (`{ width: number, height: number }`) -- size of the square containing the face in image coordinates,
- **smilingProbability** (_number_) -- probability that the face is smiling,
- **leftEarPosition** (`{ x: number, y: number }`) -- position of the left ear in image coordinates,
- **rightEarPosition** (`{ x: number, y: number }`) -- position of the right ear in image coordinates,
- **leftEyePosition** (`{ x: number, y: number }`) -- position of the left eye in image coordinates,
- **leftEyeOpenProbability** (_number_) -- probability that the left eye is open,
- **rightEyePosition** (`{ x: number, y: number }`) -- position of the right eye in image coordinates,
- **rightEyeOpenProbability** (_number_) -- probability that the right eye is open,
- **leftCheekPosition** (`{ x: number, y: number }`) -- position of the left cheek in image coordinates,
- **rightCheekPosition** (`{ x: number, y: number }`) -- position of the right cheek in image coordinates,
- **mouthPosition** (`{ x: number, y: number }`) -- position of the center of the mouth in image coordinates,
- **leftMouthPosition** (`{ x: number, y: number }`) -- position of the left edge of the mouth in image coordinates,
- **rightMouthPosition** (`{ x: number, y: number }`) -- position of the right edge of the mouth in image coordinates,
- **noseBasePosition** (`{ x: number, y: number }`) -- position of the nose base in image coordinates.

`smilingProbability`, `leftEyeOpenProbability` and `rightEyeOpenProbability` are returned only if the `runClassifications` option is set to `.all`.

Positions of face landmarks are returned only if the `detectLandmarks` option is set to `.all`.
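To make the return shape concrete, here is a hedged sketch of a helper that turns a `detectFacesAsync` result into a one-line summary. The `{ faces, image }` shape follows the Returns section above; `summarizeDetection`, the mock data, and the 0.7 threshold are hypothetical, introduced only for this illustration:

```javascript
// Hypothetical helper: summarize a detectFacesAsync result.
// Assumes the `{ faces, image }` shape documented above.
function summarizeDetection({ faces, image }) {
  const smiling = faces.filter(
    // smilingProbability is present only with runClassifications: all
    face => (face.smilingProbability ?? 0) > 0.7
  ).length;
  return `${faces.length} face(s) in ${image.width}x${image.height} image, ${smiling} smiling`;
}

// Usage with a mock result like the one detectFacesAsync resolves to:
const detectionSummary = summarizeDetection({
  faces: [{ smilingProbability: 0.9 }, { smilingProbability: 0.1 }],
  image: { uri: 'file://photo.jpg', width: 1080, height: 1920, orientation: 1 },
});
// detectionSummary → "2 face(s) in 1080x1920 image, 1 smiling"
```

In an app, the result object would come from `await FaceDetector.detectFacesAsync(uri, options)` rather than a mock.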