Wednesday, August 27, 2014

iOS Face and Facial-Part Detection
With Sample Live App

Steps for iOS face and facial-part detection:

1: Set Up the AV Capture Session
  1. Create a new AVCaptureSession.
  2. Set the sessionPreset to adjust video and picture quality.
  3. Create an AVCaptureDevice instance with media type video.
  4. Create an AVCaptureDeviceInput instance with the device created above.
  5. Check whether the session can add this device input, and if yes,
  6. add the input to the session.
  7. Create an AVCaptureStillImageOutput instance and an AVCaptureVideoDataOutput instance one by one.
  8. On the still image output, add an observer for the "capturingStillImage" key with observing option new.
  9. Then check whether the session can add the still image output, and if yes,
  10. add the AVCaptureStillImageOutput instance as an output.
  11. Similarly, on the AVCaptureVideoDataOutput instance set the video settings dictionary, and set alwaysDiscardsLateVideoFrames to YES for this example (otherwise as per need).
  12. Create a serial dispatch queue.
  13. Assign the sample buffer delegate class and the queue to the AVCaptureVideoDataOutput instance.
  14. Check whether the session can add the AVCaptureVideoDataOutput instance as an output, and if yes,
  15. add it as an output.
  16. Get the connection with media type video from the videoDataOutput, then enable the connection.
  17. Create an AVCaptureVideoPreviewLayer instance with the same session.
  18. Set the background colour and video gravity on the preview layer.
  19. Get the layer from the preview view, set the preview layer's frame to its bounds, and add the preview layer as a sublayer of that root layer.
  20. Start the session using the session instance's startRunning method.

Setup done. A code sketch of these steps follows.
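A minimal sketch of the twenty steps above, assuming manual retain/release (as in the rest of this post) and instance variables named stillImageOutput, videoDataOutput, videoDataOutputQueue, previewLayer, and previewView; those names are my assumptions in the style of Apple's SquareCam sample, not API requirements:

// static const void *AVCaptureStillImageIsCapturingStillImageContext = &AVCaptureStillImageIsCapturingStillImageContext;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetMedium]; // adjust for picture/video quality

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if ([session canAddInput:deviceInput])
    [session addInput:deviceInput];

stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput addObserver:self forKeyPath:@"capturingStillImage"
                      options:NSKeyValueObservingOptionNew
                      context:AVCaptureStillImageIsCapturingStillImageContext];
if ([session canAddOutput:stillImageOutput])
    [session addOutput:stillImageOutput];

videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                                              forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoDataOutput setVideoSettings:rgbOutputSettings];
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // per step 11

videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
if ([session canAddOutput:videoDataOutput])
    [session addOutput:videoDataOutput];
[[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
CALayer *rootLayer = [previewView layer];
[rootLayer setMasksToBounds:YES];
[previewLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:previewLayer];
[session startRunning];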

Now you have to take care of drawing rectangles for faces or facial parts using the delegate method

::: captureOutput:didOutputSampleBuffer:fromConnection: (the preferred one)

A related method is

::: captureOutput:didDropSampleBuffer:fromConnection: (called when frames are dropped rather than delivered)

Both belong to the protocol :: AVCaptureVideoDataOutputSampleBufferDelegate



2: Delegate Method Implementation
  1. Create a CVPixelBufferRef from the sampleBuffer received in the delegate method using CMSampleBufferGetImageBuffer.
  2. Create a CFDictionaryRef of attachments from the sample buffer with attachment mode kCMAttachmentMode_ShouldPropagate.
  3. Create a CIImage from the pixel buffer using the attachments dictionary.
  4. Create an imageOptions dictionary from the current device orientation and use it for feature detection.
  5. Use the face detector's featuresInImage:options: method to detect features; it returns an array of features.
  6. Get a CMFormatDescriptionRef from the sample buffer using CMSampleBufferGetFormatDescription.
  7. Get the rect of the video being displayed on the iOS device using CMVideoFormatDescriptionGetCleanAperture.
  8. On the main queue, draw face boxes for the detected features (or the features you want to highlight). The face detector itself is created once up front, as sketched below.
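The faceDetector used in step 5 is a CIDetector. It should not be created per frame; here is a sketch of creating it once in your setup code (the low-accuracy option is my choice for real-time video, not a requirement):

// Create the CIDetector once and reuse it for every frame;
// CIDetectorAccuracyLow trades precision for speed, which suits live video.
NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                            forKey:CIDetectorAccuracy];
faceDetector = [[CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions] retain];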

3: drawFaceBoxesForFeatures Implementation Details
  1. Get the sublayers from the preview layer.
  2. Get the sublayer count and the feature count.
  3. Start the drawing process with [CATransaction begin].
  4. Disable implicit animations, e.g. [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
  5. From the sublayers, find every layer named "FaceLayer" and hide it.
  6. Check whether any feature was detected and whether face detection is enabled by the user, and on that basis decide whether to commit the transaction right away.
  7. Find where the video box is positioned within the preview layer, based on the preview view's size, the video (clean aperture) size, and the preview layer's videoGravity.
  8. For each feature detected, create a feature layer, add it to the preview layer, and draw its rect (I used an image in this sample code; you could equally draw a plain square).
  9. Apply rotation as per the device orientation.
  10. Once done for all detected features, commit the CATransaction. A sketch of this method appears below.
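A sketch under this post's assumptions: detectFaces is the user toggle, square is the marker UIImage, and the coordinate mapping is deliberately simplified (the full SquareCam version also handles mirroring and flipped axes):

- (void)drawFaceBoxesForFeatures:(NSArray *)features forVideoBox:(CGRect)clap orientation:(UIDeviceOrientation)orientation
{
    NSArray *sublayers = [NSArray arrayWithArray:[previewLayer sublayers]];
    NSInteger featuresCount = [features count];

    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

    // hide all existing face layers; fresh ones are created below
    for (CALayer *layer in sublayers) {
        if ([[layer name] isEqualToString:@"FaceLayer"])
            [layer setHidden:YES];
    }

    if (featuresCount == 0 || !detectFaces) {
        [CATransaction commit];
        return; // nothing to draw
    }

    // simplified mapping from the video box into the preview layer
    CGRect previewBox = [previewLayer bounds];
    CGFloat widthScale = previewBox.size.width / clap.size.width;
    CGFloat heightScale = previewBox.size.height / clap.size.height;

    for (CIFaceFeature *ff in features) {
        CGRect faceRect = [ff bounds];
        faceRect = CGRectMake(previewBox.origin.x + faceRect.origin.x * widthScale,
                              previewBox.origin.y + faceRect.origin.y * heightScale,
                              faceRect.size.width * widthScale,
                              faceRect.size.height * heightScale);

        CALayer *featureLayer = [CALayer layer];
        [featureLayer setContents:(id)[square CGImage]]; // the square marker image
        [featureLayer setName:@"FaceLayer"];
        [featureLayer setFrame:faceRect];
        [previewLayer addSublayer:featureLayer];
    }
    [CATransaction commit];
}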

4: Additional Features:
  1. You can provide features like a pinch gesture and a "detect face" user toggle to enable or disable face detection; remember to redraw the face rects each time the user toggles it. A sketch of a pinch handler follows this list.
  2. Take a picture with the face-detection highlight, or without the face marker.
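For point 1, here is a sketch of a pinch handler along the lines of Apple's SquareCam sample; effectiveScale and beginGestureScale are assumed CGFloat ivars (beginGestureScale would be captured when the gesture begins):

- (IBAction)handlePinchGesture:(UIPinchGestureRecognizer *)recognizer
{
    // scale relative to where the pinch started, clamped to the valid range
    effectiveScale = beginGestureScale * recognizer.scale;
    if (effectiveScale < 1.0)
        effectiveScale = 1.0;
    CGFloat maxScaleAndCropFactor = [[stillImageOutput connectionWithMediaType:AVMediaTypeVideo] videoMaxScaleAndCropFactor];
    if (effectiveScale > maxScaleAndCropFactor)
        effectiveScale = maxScaleAndCropFactor;

    [CATransaction begin];
    [CATransaction setAnimationDuration:.025];
    [previewLayer setAffineTransform:CGAffineTransformMakeScale(effectiveScale, effectiveScale)];
    [CATransaction commit];
}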

5: Taking a picture and saving it to the camera roll is explained below:

Utility method from Apple:
CreateCGImageFromCVPixelBuffer

Implementation:::
// Release callback for the CGDataProvider below; unlocks and releases the
// pixel buffer once CoreGraphics is done with the data.
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}

static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats are supported

    sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
    width = CVPixelBufferGetWidth( pixelBuffer );
    height = CVPixelBufferGetHeight( pixelBuffer );

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

    colorspace = CGColorSpaceCreateDeviceRGB();

    // the provider retains the pixel buffer; ReleaseCVPixelBuffer balances this
    CVPixelBufferRetain( pixelBuffer );
    provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer );
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    if ( err && image ) {
        CGImageRelease( image );
        image = NULL;
    }
    if ( provider ) CGDataProviderRelease( provider );
    if ( colorspace ) CGColorSpaceRelease( colorspace );
    *imageOut = image;
    return err;
}
:::
&
CreateCGBitmapContextForSize

Implementation:::
static CGContextRef CreateCGBitmapContextForSize(CGSize size)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    int             bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4); // 4 bytes per RGBA pixel

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(NULL,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGContextSetAllowsAntialiasing(context, NO);
    CGColorSpaceRelease( colorSpace );
    return context;
}
:::

Process to save an image to the camera roll:
  1. Find out the current orientation and tell the still image output.
  2. Get the stillImageConnection from the stillImageOutput with media type AVMediaTypeVideo.
  3. Get the current device orientation.
  4. Get the AVCaptureVideoOrientation for the current device orientation.
  5. (Check out the helper method below for this mapping.)
  6. Set that video orientation on the stillImageConnection.
  7. Set the video scale-and-crop factor on the stillImageConnection.
  8. Check whether face detection is on or off. On that basis, set the appropriate pixel format / image type output setting: we need an uncompressed image if we may draw the red square over the top, whereas just writing a JPEG to the camera roll is the trivial case.
  9. So, depending on whether face detection is on, set the output settings on the stillImageOutput object as explained above.
  10. Now call captureStillImageAsynchronouslyFromConnection:completionHandler:, and inside the completion handler check for an error. If there is one, show a proper error message to the user; otherwise check the face-detection BOOL and perform the save-to-camera-roll operation.
  11. Assuming face detection is on:
  12. Create a pixel buffer from the imageDataSampleBuffer using CMSampleBufferGetImageBuffer.
  13. Get the attachments dictionary using CMCopyDictionaryOfAttachments.
  14. Create a CIImage object using the pixel buffer and the attachments.
  15. Get the orientation from the imageDataSampleBuffer using CMGetAttachment, and create an imageOptions dictionary with this orientation under the CIDetectorImageOrientation key.
  16. Now dispatch synchronously on the videoDataOutputQueue; this ensures that new frames are automatically dropped while we process the existing frame.
  17. Inside the dispatch_sync block, proceed as described for this kind of app demo:
  18. Get the features in the image as a features array.
  19. Create a CGImageRef using the CreateCGImageFromCVPixelBuffer helper described above.
  20. Since face detection was on, we want the image saved with the square over the face, so from that CGImageRef create another CGImageRef with the square overlaid, using the helper method newSquareOverlayedImageForFeatures described later.
  21. Now write the CGImageRef to the camera roll. Remember we can write to the camera roll using ALAssetsLibrary, but it takes standard compressed image data, so to write a CGImageRef we need some additional processing.
  22. Check out the writeCGImageToCameraRoll helper method for this processing.


Now suppose face detection was off; then:
  1. Get the JPEG data object using the jpegStillImageNSDataRepresentation: method of AVCaptureStillImageOutput.
  2. Get the attachments dictionary ref from the sample buffer.
  3. Create an ALAssetsLibrary instance and write the image data to the photo album using writeImageDataToSavedPhotosAlbum:metadata:completionBlock:. A sketch of the whole capture flow follows this list.
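Putting the above together, a sketch of the take-picture flow; avOrientation (from the step-4 mapping), detectFaces (the user toggle), and effectiveScale (from the pinch handler) are assumed ivars from this post's sample, not AVFoundation requirements:

AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageConnection setVideoOrientation:avOrientation]; // mapped from UIDeviceOrientation
[stillImageConnection setVideoScaleAndCropFactor:effectiveScale];

BOOL doingFaceDetection = detectFaces;
if (doingFaceDetection)
    // uncompressed BGRA, so we can draw the square on top before saving
    [stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCMPixelFormat_32BGRA]
                                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
else
    // trivial case: write a JPEG straight to the camera roll
    [stillImageOutput setOutputSettings:[NSDictionary dictionaryWithObject:AVVideoCodecJPEG
                                                                    forKey:AVVideoCodecKey]];

[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (error) {
        [self displayErrorOnMainQueue:error withMessage:@"Take picture failed"];
    }
    else if (doingFaceDetection) {
        // steps 12-22 above: get the pixel buffer, detect features on
        // videoDataOutputQueue, overlay the square with
        // newSquareOverlayedImageForFeatures, then writeCGImageToCameraRoll
    }
    else {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        [library writeImageDataToSavedPhotosAlbum:jpegData metadata:(id)attachments completionBlock:^(NSURL *assetURL, NSError *error2) {
            if (error2)
                [self displayErrorOnMainQueue:error2 withMessage:@"Save to camera roll failed"];
        }];
        if (attachments)
            CFRelease(attachments);
        [library release];
    }
}];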

Done…

Additional Helper Methods:

// utility routine to create a new image with the red square overlay with appropriate orientation
// and return the new composited image which can be saved to the camera roll

// used with take picture method only.
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    // square is a UIImage ivar holding the red square marker;
    // imageRotatedByDegrees: is a UIImage category (sketched below)
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);

    return returnImage;
}
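The imageRotatedByDegrees: call above is not a UIKit method; it is a UIImage category. A sketch of one possible implementation (DegreesToRadians is a local helper macro I define here):

#define DegreesToRadians(degrees) ((degrees) * M_PI / 180.0)

@implementation UIImage (Rotation)

- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
    // compute the size of the rotated image's bounding box
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.size.width, self.size.height)];
    rotatedViewBox.transform = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // rotate around the center of the new image
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
    CGContextRotateCTM(bitmap, DegreesToRadians(degrees));

    // draw the original image into the rotated context (flipped for CG coordinates)
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

@end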

// utility routine used after taking a still image to write the resulting image to the camera roll
// (require() is the error-checking macro from <AssertMacros.h>)
- (BOOL)writeCGImageToCameraRoll:(CGImageRef)cgImage withMetadata:(NSDictionary *)metadata
{
    CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef destination = CGImageDestinationCreateWithData(destinationData,
                                                                         CFSTR("public.jpeg"),
                                                                         1,
                                                                         NULL);
    BOOL success = (destination != NULL);
    require(success, bail);

    const float JPEGCompQuality = 0.85f; // JPEGHigherQuality
    CFMutableDictionaryRef optionsDict = NULL;
    CFNumberRef qualityNum = NULL;

    qualityNum = CFNumberCreate(0, kCFNumberFloatType, &JPEGCompQuality);
    if ( qualityNum ) {
        optionsDict = CFDictionaryCreateMutable(0, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        if ( optionsDict )
            CFDictionarySetValue(optionsDict, kCGImageDestinationLossyCompressionQuality, qualityNum);
        CFRelease( qualityNum );
    }

    CGImageDestinationAddImage( destination, cgImage, optionsDict );
    success = CGImageDestinationFinalize( destination );
    if ( optionsDict )
        CFRelease(optionsDict);

    require(success, bail);

    // retain the data across the async write; the completion block balances this retain
    CFRetain(destinationData);
    ALAssetsLibrary *library = [ALAssetsLibrary new];
    [library writeImageDataToSavedPhotosAlbum:(id)destinationData metadata:metadata completionBlock:^(NSURL *assetURL, NSError *error) {
        if (destinationData)
            CFRelease(destinationData);
    }];
    [library release];

bail:
    if (destinationData)
        CFRelease(destinationData);
    if (destination)
        CFRelease(destination);
    return success;
}

// utility routine to display an error alert if takePicture fails
- (void)displayErrorOnMainQueue:(NSError *)error withMessage:(NSString *)message
{
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"%@ (%d)", message, (int)[error code]]
                                                            message:[error localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"Dismiss"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
    });
}

Use this delegate method to perform an animation while capturing and saving the image to the Camera Roll:

// perform a flash bulb animation using KVO to monitor the value of the capturingStillImage property of the AVCaptureStillImageOutput class
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ( context == AVCaptureStillImageIsCapturingStillImageContext ) {
        BOOL isCapturingStillImage = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];

        if ( isCapturingStillImage ) {
            // do flash bulb like animation
            flashView = [[UIView alloc] initWithFrame:[previewView frame]];
            [flashView setBackgroundColor:[UIColor whiteColor]];
            [flashView setAlpha:0.f];
            [[[self view] window] addSubview:flashView];

            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:1.f];
                             }
             ];
        }
        else {
            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:0.f];
                             }
                             completion:^(BOOL finished){
                                 [flashView removeFromSuperview];
                                 [flashView release];
                                 flashView = nil;
                             }
             ];
        }
    }
}

Smile and Eye-Blink Detection:

// Updates the UI to reflect the detected feature; the BOOL value argument is currently unused.
-(void)updateUIForFeatures:(NSString*)feature value:(BOOL)args
{
    if ([feature isEqualToString:@"smile"]) {
        self.smileImgView.hidden = NO;
        self.lEyeImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Smiling......";
    }
    if ([feature isEqualToString:@"leftEye"]) {
        self.lEyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Left Eye Closed......";
    }
    
    if ([feature isEqualToString:@"rightEye"]) {
        self.REyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.lEyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Right Eye Closed......";
    }
    
}
This is called from the captureOutput:didOutputSampleBuffer:fromConnection: delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // got an image
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
    if (attachments)
        CFRelease(attachments);
    NSDictionary *imageOptions = nil;
    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    int exifOrientation;
        
    enum {
        PHOTOS_EXIF_0ROW_TOP_0COL_LEFT     = 1, // 0th row at top,    0th column on left  (the default)
        PHOTOS_EXIF_0ROW_TOP_0COL_RIGHT    = 2, // 0th row at top,    0th column on right
        PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT = 3, // 0th row at bottom, 0th column on right
        PHOTOS_EXIF_0ROW_BOTTOM_0COL_LEFT  = 4, // 0th row at bottom, 0th column on left
        PHOTOS_EXIF_0ROW_LEFT_0COL_TOP     = 5, // 0th row on left,   0th column at top
        PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP    = 6, // 0th row on right,  0th column at top
        PHOTOS_EXIF_0ROW_RIGHT_0COL_BOTTOM = 7, // 0th row on right,  0th column at bottom
        PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM  = 8  // 0th row on left,   0th column at bottom
    };

    switch (curDeviceOrientation) {
        case UIDeviceOrientationPortraitUpsideDown:  // Device oriented vertically, home button on the top
            exifOrientation = PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM;
            break;
        case UIDeviceOrientationLandscapeLeft:       // Device oriented horizontally, home button on the right
            if (isUsingFrontFacingCamera)
                exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
            else
                exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
            break;
        case UIDeviceOrientationLandscapeRight:      // Device oriented horizontally, home button on the left
            if (isUsingFrontFacingCamera)
                exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
            else
                exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
            break;
        case UIDeviceOrientationPortrait:            // Device oriented vertically, home button on the bottom
        default:
            exifOrientation = PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP;
            break;
    }

    imageOptions = @{CIDetectorEyeBlink: @YES,
                     CIDetectorSmile: @YES,
                     CIDetectorImageOrientation: [NSNumber numberWithInt:exifOrientation]};
    NSArray *features = [faceDetector featuresInImage:ciImage options:imageOptions];
    [ciImage release];
    
    detectFacesFeatures = YES;
    for (CIFaceFeature *ff in features)
    {
        
        if (ff.hasSmile) {
            NSLog(@"has smile %d", 1);   
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"smile" value:ff.hasSmile];
            });
        }
        if (ff.leftEyeClosed) {
            NSLog(@"leftEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"leftEye" value:ff.leftEyeClosed];
            });
            
        }
        if (ff.rightEyeClosed) {
            NSLog(@"rightEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"rightEye" value:ff.rightEyeClosed];
            });
        }
        
        if (ff.hasTrackingFrameCount) {
            NSLog(@"trackingFrameCount %d", ff.trackingFrameCount);
            
        }
        
        if (ff.hasTrackingID) {
            NSLog(@"trackingID %d", ff.trackingID);
        }
        
        NSLog(@"type %@", ff.type);
       // NSLog(@"face bounds %@", NSStringFromCGRect(faceRect));
        
        if (ff.hasFaceAngle){
            NSLog(@"faceAngle %g", ff.faceAngle);
        }
        
        if (ff.hasMouthPosition){
            NSLog(@"Mouth %g %g", ff.mouthPosition.x, ff.mouthPosition.y);
        }
        
        if (ff.hasRightEyePosition){
            NSLog(@"right eye %g %g", ff.rightEyePosition.x, ff.rightEyePosition.y);
        }
        
        if (ff.hasLeftEyePosition){
            NSLog(@"left eye %g %g", ff.leftEyePosition.x, ff.leftEyePosition.y);
        }
        
    }

    CMFormatDescriptionRef fdesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    CGRect clap = CMVideoFormatDescriptionGetCleanAperture(fdesc, false /*originIsTopLeft == false*/);

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        [self drawFaceBoxesForFeatures:features forVideoBox:clap orientation:curDeviceOrientation];
    });
}
Thank you for reading this blog. Post your feedback and queries.
