This is an attempt to compare the Apple Vision API to Google ML Kit face detection.

Currently, the app crashes easily due to a memory leak in the Google code.

Face Detection with Vision Framework

iOS 11+, Swift 4+

Previously, in iOS 10, to detect faces in a picture you could use CIDetector (Apple) or Mobile Vision (Google).
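
For reference, the pre-Vision route looks roughly like this (a minimal sketch; the detectFaces helper is hypothetical and not part of this project):

import CoreImage
import UIKit

// Minimal sketch of face detection with CIDetector (the iOS 10-era API).
// `detectFaces` is an illustrative helper, not from this repo.
func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: image) else { return [] }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: ciImage) ?? []
    return features.compactMap { $0 as? CIFaceFeature }
}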

In iOS 11, Apple introduced Core ML. With the Vision framework, it's much easier to detect faces in real time 😃

Try it out with real-time face detection on your iPhone! 📱

You can find out the differences between CIDetector and the Vision framework below.

Moving From Viola-Jones to Deep Learning


Details

Specify the VNRequest for face detection: either VNDetectFaceRectanglesRequest or VNDetectFaceLandmarksRequest.

private var requests = [VNRequest]() // you can do multiple requests at the same time

var faceDetectionRequest: VNRequest!

@IBAction func updateDetectionType(_ sender: UISegmentedControl) {
    // use the segmented control to switch between VNRequest types
    faceDetectionRequest = sender.selectedSegmentIndex == 0
        ? VNDetectFaceRectanglesRequest(completionHandler: handleFaces)
        : VNDetectFaceLandmarksRequest(completionHandler: handleFaceLandmarks)
}
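
Note that captureOutput (below) performs whatever is in the requests array, so that array has to be refreshed whenever faceDetectionRequest changes. A minimal sketch, assuming a setupVision() helper (the name is hypothetical):

// Hypothetical helper: keeps `requests` in sync with the selected detection type.
// Call it once at startup (e.g. from viewDidLoad) and again from the IBAction above.
private func setupVision() {
    requests = [faceDetectionRequest]
}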

Perform the requests on every frame. The frames come from the camera via captureOutput(_:didOutput:from:); see AVCaptureVideoDataOutputSampleBufferDelegate.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let exifOrientation = CGImagePropertyOrientation(rawValue: exifOrientationFromDeviceOrientation()) else { return }

    var requestOptions: [VNImageOption: Any] = [:]
    if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
    }

    // perform the image requests for face detection on this frame
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestOptions)
    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}
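
exifOrientationFromDeviceOrientation() is not shown above; it maps the device orientation to an EXIF orientation raw value. A sketch of one common mapping, assuming a mirrored front-facing camera (the project's actual mapping may differ):

// Sketch: map UIDeviceOrientation to a CGImagePropertyOrientation raw value.
// The mirrored cases assume the front camera; this mapping is an assumption.
func exifOrientationFromDeviceOrientation() -> UInt32 {
    let orientation: CGImagePropertyOrientation
    switch UIDevice.current.orientation {
    case .portraitUpsideDown: orientation = .rightMirrored
    case .landscapeLeft:      orientation = .downMirrored
    case .landscapeRight:     orientation = .upMirrored
    default:                  orientation = .leftMirrored
    }
    return orientation.rawValue
}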

Handle the results of your request with a VNRequestCompletionHandler:

  • handleFaces for VNDetectFaceRectanglesRequest
  • handleFaceLandmarks for VNDetectFaceLandmarksRequest

You will then get the results of the request as an array of VNFaceObservation. That's all you get from the Vision API.

func handleFaces(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        // perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        print("face count = \(results.count)")
        self.previewView.removeMask()

        for face in results {
            self.previewView.drawFaceBoundingBox(face: face)
        }
    }
}

func handleFaceLandmarks(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        // perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        self.previewView.removeMask()

        for face in results {
            self.previewView.drawFaceWithLandmarks(face: face)
        }
    }
}
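
For completeness, captureOutput(_:didOutput:from:) only fires once a capture session is delivering frames to the delegate. A minimal wiring sketch (the session, output, and queue names are illustrative; the project's actual setup may differ):

import AVFoundation

let session = AVCaptureSession()
let videoDataOutput = AVCaptureVideoDataOutput()

// Sketch: wire the front camera into an AVCaptureVideoDataOutput whose
// sample-buffer delegate implements captureOutput(_:didOutput:from:).
func setupCaptureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input), session.canAddOutput(videoDataOutput) else { return }
    session.addInput(input)
    videoDataOutput.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
    session.addOutput(videoDataOutput)
    session.startRunning()
}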

Lastly, draw the corresponding locations on the screen! (Hint: use UIBezierPath to draw lines for the landmarks.)

func drawFaceBoundingBox(face: VNFaceObservation) {
    // The coordinates are normalized to the dimensions of the processed image,
    // with the origin at the image's lower-left corner.
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let scale = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)
    let faceBounds = face.boundingBox.applying(scale).applying(transform)

    _ = createLayer(in: faceBounds)
}

// Create a new layer drawing the bounding box
private func createLayer(in rect: CGRect) -> CAShapeLayer {

    let mask = CAShapeLayer()
    mask.frame = rect
    mask.cornerRadius = 10
    mask.opacity = 0.75
    mask.borderColor = UIColor.yellow.cgColor
    mask.borderWidth = 2.0

    maskLayer.append(mask)
    layer.insertSublayer(mask, at: 1)

    return mask
}
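
drawFaceWithLandmarks (called from handleFaceLandmarks above) is not shown here. A sketch of what it might look like, tracing a few landmark regions with UIBezierPath and reusing the same coordinate transforms as drawFaceBoundingBox (the region selection and styling are assumptions):

// Sketch: trace landmark regions with UIBezierPath. Each region's
// normalizedPoints are relative to the face bounding box, origin at bottom-left.
func drawFaceWithLandmarks(face: VNFaceObservation) {
    guard let landmarks = face.landmarks else { return }
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let scale = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)
    let faceBounds = face.boundingBox.applying(scale).applying(transform)

    // which regions to draw is an assumption; VNFaceLandmarks2D offers more
    let regions = [landmarks.leftEye, landmarks.rightEye, landmarks.nose, landmarks.outerLips]
    for case let region? in regions {
        let path = UIBezierPath()
        for (i, point) in region.normalizedPoints.enumerated() {
            // map the normalized landmark point into the on-screen face rect
            let p = CGPoint(x: faceBounds.minX + point.x * faceBounds.width,
                            y: faceBounds.maxY - point.y * faceBounds.height)
            if i == 0 { path.move(to: p) } else { path.addLine(to: p) }
        }

        let shape = CAShapeLayer()
        shape.path = path.cgPath
        shape.strokeColor = UIColor.yellow.cgColor
        shape.fillColor = UIColor.clear.cgColor
        shape.lineWidth = 1.5
        maskLayer.append(shape)
        layer.addSublayer(shape)
    }
}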
