Wednesday, August 27, 2014

iOS Face and Facial-Part Detection
With Sample Live App





Steps for iOS face and facial-part detection:

1: Set up the AVCapture session
  1. Create a new AVCaptureSession.
  2. Set the sessionPreset to adjust video and picture quality.
  3. Create an AVCaptureDevice instance with media type video.
  4. Create an AVCaptureDeviceInput instance with the device created above.
  5. Check whether the session can add this device input, and if yes,
  6. add the input to the session.
  7. Create an AVCaptureStillImageOutput instance and an AVCaptureVideoDataOutput instance.
  8. For the still image output, add a KVO observer for the "capturingStillImage" key with the New observing option.
  9. Check whether the session can add the still image output, and if yes,
  10. add the AVCaptureStillImageOutput instance as an output.
  11. For the AVCaptureVideoDataOutput instance, set the video settings dictionary and set alwaysDiscardsLateVideoFrames to YES for this example (otherwise, as per need).
  12. Create a serial dispatch queue.
  13. Assign the sample buffer delegate class and the queue to the AVCaptureVideoDataOutput instance.
  14. Check whether the session can add the AVCaptureVideoDataOutput instance as an output, and if yes,
  15. add it as an output.
  16. Get the connection with media type video from the videoDataOutput and enable it.
  17. Create an AVCaptureVideoPreviewLayer instance with the same session.
  18. Set the background color and video gravity on the preview layer.
  19. Get the layer from the preview view, set its bounds on the preview layer, and add the preview layer as a sublayer of the root layer (the preview view's layer).
  20. Start the session using the session instance's startRunning method.

Setup done. A sketch of this setup follows.
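Modeled on Apple's SquareCam sample, and assuming instance variables named session, previewView, previewLayer, stillImageOutput, videoDataOutput and videoDataOutputQueue (adapt the names to your project):

- (void)setupAVCapture
{
    NSError *error = nil;

    // 1-2: session and quality preset
    session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // 3-6: device, input, and attach input to the session
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    // 7-10: still image output, with KVO on capturingStillImage
    // (AVCaptureStillImageIsCapturingStillImageContext is an assumed static context
    // pointer, used again in observeValueForKeyPath: near the end of this post)
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillImageOutput addObserver:self forKeyPath:@"capturingStillImage"
                          options:NSKeyValueObservingOptionNew
                          context:AVCaptureStillImageIsCapturingStillImageContext];
    if ([session canAddOutput:stillImageOutput])
        [session addOutput:stillImageOutput];

    // 11-15: video data output delivering BGRA frames on a serial queue
    videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoDataOutput setVideoSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) }];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    if ([session canAddOutput:videoDataOutput])
        [session addOutput:videoDataOutput];

    // 16: enable the video connection
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

    // 17-19: preview layer inside the preview view
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    CALayer *rootLayer = [previewView layer];
    [rootLayer setMasksToBounds:YES];
    [previewLayer setFrame:[rootLayer bounds]];
    [rootLayer addSublayer:previewLayer];

    // 20: start capturing
    [session startRunning];
}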

Now you have to take care of drawing rectangles for faces or facial parts using the delegate method

::: captureOutput:didOutputSampleBuffer:fromConnection:  (the preferred one)

The alternate method is

::: captureOutput:didDropSampleBuffer:fromConnection:

both from the protocol :: AVCaptureVideoDataOutputSampleBufferDelegate



2: Delegate Method Implementation
  1. Create a CVPixelBufferRef from the sampleBuffer received in the delegate method using CMSampleBufferGetImageBuffer.
  2. Create a CFDictionaryRef of attachments from the sample buffer with attachment mode kCMAttachmentMode_ShouldPropagate.
  3. Create a CIImage from the pixel buffer using the attachments dictionary.
  4. Create an imageOptions dictionary using the current device orientation and use it for feature detection.
  5. Use the face detector's featuresInImage:options: method to detect features; it returns an array of features (the detector itself is created as shown after this list).
  6. Get a CMFormatDescriptionRef from the sample buffer using CMSampleBufferGetFormatDescription.
  7. Get the rect of the video being displayed on the iOS device using CMVideoFormatDescriptionGetCleanAperture.
  8. On the main queue, call drawFaceBoxesForFeatures for the detected features (or the features you want to highlight).
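A minimal sketch of creating the detector, typically done once (e.g. in viewDidLoad); faceDetector is an assumed instance variable, and low accuracy keeps per-frame detection fast enough for live video:

// create the CIDetector used by the delegate method (MRC, hence the retain)
NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyLow };
faceDetector = [[CIDetector detectorOfType:CIDetectorTypeFace
                                   context:nil
                                   options:detectorOptions] retain];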

3: DrawFaceBoxesForFeatures implementation details
  1. Get the sublayers from the previewLayer.
  2. Get the sublayer count and the feature count.
  3. Start the drawing process with CATransaction begin.
  4. Set the kCATransactionDisableActions key on CATransaction, e.g. [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
  5. From the sublayers, get every layer named "FaceLayer" and hide it.
  6. Check whether any feature was detected and whether face detection is enabled by the user; on that basis decide whether to commit the transaction right away.
  7. Find where the video box is positioned within the preview layer, based on the video size, the preview view size and the preview layer's video gravity.
  8. For each feature detected, draw a feature layer, add it to the previewLayer and draw the rect (I used an image in this sample code; you could also draw a square).
  9. Apply rotation as per the device orientation.
  10. Once done for all detected features, commit the CATransaction. (A condensed sketch of this method follows the list.)
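This sketch is simplified from Apple's SquareCam sample: it assumes the video box fills the preview view and skips mirroring and the per-layer rotation. previewView, previewLayer, detectFaces (the user toggle) and square (the overlay UIImage) are assumed instance variables:

- (void)drawFaceBoxesForFeatures:(NSArray *)features
                     forVideoBox:(CGRect)clap
                     orientation:(UIDeviceOrientation)orientation
{
    NSArray *sublayers = [NSArray arrayWithArray:[previewLayer sublayers]];
    NSInteger sublayersCount = [sublayers count], currentSublayer = 0;

    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

    // hide all face layers drawn for the previous frame
    for (CALayer *layer in sublayers) {
        if ([[layer name] isEqualToString:@"FaceLayer"])
            [layer setHidden:YES];
    }

    if ([features count] == 0 || !detectFaces) {
        [CATransaction commit];
        return;
    }

    // simplification: assume the video box fills the preview view
    // (SquareCam computes a previewBox from the video gravity instead)
    CGSize parentFrameSize = [previewView frame].size;

    for (CIFaceFeature *ff in features) {
        // the feature box originates in the bottom-left of the video frame,
        // and the frame is rotated 90 degrees relative to the UI, so flip x/y
        CGRect faceRect = [ff bounds];
        CGFloat temp = faceRect.size.width;
        faceRect.size.width = faceRect.size.height;
        faceRect.size.height = temp;
        temp = faceRect.origin.x;
        faceRect.origin.x = faceRect.origin.y;
        faceRect.origin.y = temp;

        // scale from video coordinates into preview-view coordinates
        CGFloat widthScaleBy  = parentFrameSize.width  / clap.size.height;
        CGFloat heightScaleBy = parentFrameSize.height / clap.size.width;
        faceRect.size.width  *= widthScaleBy;
        faceRect.size.height *= heightScaleBy;
        faceRect.origin.x    *= widthScaleBy;
        faceRect.origin.y    *= heightScaleBy;

        // re-use a hidden "FaceLayer" if one exists, otherwise create one
        CALayer *featureLayer = nil;
        while (!featureLayer && (currentSublayer < sublayersCount)) {
            CALayer *currentLayer = [sublayers objectAtIndex:currentSublayer++];
            if ([[currentLayer name] isEqualToString:@"FaceLayer"]) {
                featureLayer = currentLayer;
                [currentLayer setHidden:NO];
            }
        }
        if (!featureLayer) {
            featureLayer = [CALayer new];
            [featureLayer setContents:(id)[square CGImage]];
            [featureLayer setName:@"FaceLayer"];
            [previewLayer addSublayer:featureLayer];
            [featureLayer release];
        }
        [featureLayer setFrame:faceRect];
        // SquareCam additionally rotates featureLayer to match the device orientation
    }

    [CATransaction commit];
}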

4: Additional Features
  1. You can provide features like handling a pinch gesture (for zoom) and a detect-face user control to enable or disable face detection; remember to redraw the face rects each time the user toggles it (a small sketch follows).
  2. Take a picture with the face-detection highlight drawn on it, or without the face marker.
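The toggle might look like this (an assumed UISwitch action; detectFaces and videoDataOutput are assumed instance variables; the pinch gesture in Apple's sample adjusts a videoScaleAndCropFactor and is not shown here):

- (IBAction)toggleFaceDetection:(UISwitch *)sender
{
    detectFaces = [sender isOn];
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:detectFaces];
    if (!detectFaces) {
        // clear any face boxes that are still on screen
        dispatch_async(dispatch_get_main_queue(), ^(void) {
            [self drawFaceBoxesForFeatures:[NSArray array]
                               forVideoBox:CGRectZero
                               orientation:UIDeviceOrientationPortrait];
        });
    }
}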

5: Taking a picture and saving it to the camera roll is explained below:

Utility method from Apple:
CreateCGImageFromCVPixelBuffer

Implementation::::
static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
    width = CVPixelBufferGetWidth( pixelBuffer );
    height = CVPixelBufferGetHeight( pixelBuffer );

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

    colorspace = CGColorSpaceCreateDeviceRGB();

    CVPixelBufferRetain( pixelBuffer );
    provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer );
    image = CGImageCreate( width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault );

bail:
    if ( err && image ) {
        CGImageRelease( image );
        image = NULL;
    }
    if ( provider ) CGDataProviderRelease( provider );
    if ( colorspace ) CGColorSpaceRelease( colorspace );
    *imageOut = image;
    return err;
}
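The ReleaseCVPixelBuffer callback passed to CGDataProviderCreateWithData above is not reproduced in this post; in Apple's SquareCam sample it simply unlocks and releases the pixel buffer once the data provider is done with it:

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
    CVPixelBufferRelease( pixelBuffer );
}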
:::
&
CreateCGBitmapContextForSize

Implementation:::
static CGContextRef CreateCGBitmapContextForSize(CGSize size)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    int             bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(NULL,
                                    size.width,
                                    size.height,
                                    8,      // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGContextSetAllowsAntialiasing(context, NO);
    CGColorSpaceRelease( colorSpace );
    return context;
}
:::

Process to save an image to the camera roll:
  1. Find out the current orientation and tell the still image output.
  2. Get the stillImageConnection from the stillImageOutput with media type AVMediaTypeVideo.
  3. Get the current device orientation.
  4. Get the AVCaptureVideoOrientation for the current device orientation.
  5. See the helper method below for this avOrientation mapping.
  6. Using the stillImageConnection, set the video orientation to the avOrientation.
  7. Set the video scale and crop factor on the stillImageConnection.
  8. Check whether face detection is on. On that basis, set the appropriate pixel format / image type output settings: an uncompressed image if we may need to draw the red square on top, or just a JPEG if we are simply writing to the camera roll (the trivial case).
  9. Depending on whether face detection is on or off, set the output settings of the stillImageOutput object as explained above.
  10. Now call captureStillImageAsynchronouslyFromConnection:completionHandler: and inside the completion handler check for an error; if there is one, show a proper error message to the user, otherwise check the face-detection flag and perform the save-to-camera-roll operation.
  11. Assuming face detection is on:
  12. Create a pixel buffer from the imageDataSampleBuffer using CMSampleBufferGetImageBuffer.
  13. Get the attachments dictionary using CMCopyDictionaryOfAttachments.
  14. Create a CIImage object using the pixel buffer and the attachments.
  15. Get the orientation from the imageDataSampleBuffer using CMGetAttachment and create an imageOptions dictionary with this orientation under the CIDetectorImageOrientation key.
  16. Now dispatch synchronously on the videoDataOutputQueue; this ensures that new frames are automatically dropped while we are processing the existing frame.
  17. Inside the dispatch_sync block, do the following (as in this demo app):
  18. Get the features in the image as a features array.
  19. Create a CGImageRef using the CreateCGImageFromCVPixelBuffer helper method described above.
  20. Since face detection is on and we want the image saved with the square drawn on the face, use this CGImageRef to create another CGImageRef with the square overlaid, using the helper method newSquareOverlayedImageForFeatures described later.
  21. Now write the CGImageRef to the camera roll. Remember, we can write to the camera roll using ALAssetsLibrary, but it accepts standard compressed image data, so writing a CGImageRef needs some additional processing.
  22. See the writeCGImageToCameraRoll helper method for this processing.


If face detection was off:
  1. Get the JPEG data using the jpegStillImageNSDataRepresentation: method of AVCaptureStillImageOutput.
  2. Get the attachments dictionary ref from the sample buffer.
  3. Create an ALAssetsLibrary instance and write the image data to the photo album using writeImageDataToSavedPhotosAlbum:metadata:completionBlock:.

A condensed sketch of the whole take-picture flow follows.
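Modeled on Apple's SquareCam sample; stillImageOutput, videoDataOutputQueue, faceDetector, detectFaces, effectiveScale and isUsingFrontFacingCamera are assumed instance variables, and avOrientationForDeviceOrientation: is sketched in the helper section below:

- (IBAction)takePicture:(id)sender
{
    // 1-7: orientation, scale and crop on the still image connection
    AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    AVCaptureVideoOrientation avcaptureOrientation = [self avOrientationForDeviceOrientation:curDeviceOrientation];
    [stillImageConnection setVideoOrientation:avcaptureOrientation];
    [stillImageConnection setVideoScaleAndCropFactor:effectiveScale];

    // 8-9: uncompressed BGRA when we have to composite the square, plain JPEG otherwise
    BOOL doingFaceDetection = detectFaces;
    if (doingFaceDetection)
        [stillImageOutput setOutputSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) }];
    else
        [stillImageOutput setOutputSettings:@{ AVVideoCodecKey : AVVideoCodecJPEG }];

    // 10: capture
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (error) {
                [self displayErrorOnMainQueue:error withMessage:@"Take picture failed"];
            }
            else if (!doingFaceDetection) {
                // trivial case: write the JPEG straight to the camera roll
                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
                ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
                [library writeImageDataToSavedPhotosAlbum:jpegData
                                                 metadata:(NSDictionary *)attachments
                                          completionBlock:^(NSURL *assetURL, NSError *blockError) {
                    if (blockError)
                        [self displayErrorOnMainQueue:blockError withMessage:@"Save to camera roll failed"];
                }];
                if (attachments) CFRelease(attachments);
                [library release];
            }
            else {
                // 12-15: CIImage plus the orientation stored in the sample buffer
                CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
                CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
                CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];

                // kCGImagePropertyOrientation comes from ImageIO
                NSNumber *orientation = (NSNumber *)CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyOrientation, NULL);
                NSDictionary *imageOptions = orientation ? @{ CIDetectorImageOrientation : orientation } : nil;

                // 16-22: detect, composite the square, then write to the camera roll
                dispatch_sync(videoDataOutputQueue, ^(void) {
                    NSArray *features = [faceDetector featuresInImage:ciImage options:imageOptions];
                    CGImageRef srcImage = NULL;
                    CreateCGImageFromCVPixelBuffer(CMSampleBufferGetImageBuffer(imageDataSampleBuffer), &srcImage);
                    CGImageRef cgImageResult = [self newSquareOverlayedImageForFeatures:features
                                                                              inCGImage:srcImage
                                                                        withOrientation:curDeviceOrientation
                                                                            frontFacing:isUsingFrontFacingCamera];
                    if (srcImage) CGImageRelease(srcImage);
                    [self writeCGImageToCameraRoll:cgImageResult withMetadata:(NSDictionary *)attachments];
                    if (cgImageResult) CGImageRelease(cgImageResult);
                });

                [ciImage release];
                if (attachments) CFRelease(attachments);
            }
        }];
}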

Done…

Additional Helper Methods:
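The avOrientation helper mentioned in step 5 of the saving process above is not reproduced in this post; here is a sketch matching Apple's SquareCam sample (the two landscape cases are swapped between device and capture orientation):

// maps the UIDeviceOrientation to the AVCaptureVideoOrientation used on the still image connection
- (AVCaptureVideoOrientation)avOrientationForDeviceOrientation:(UIDeviceOrientation)deviceOrientation
{
    AVCaptureVideoOrientation result = (AVCaptureVideoOrientation)deviceOrientation;
    if (deviceOrientation == UIDeviceOrientationLandscapeLeft)
        result = AVCaptureVideoOrientationLandscapeRight;
    else if (deviceOrientation == UIDeviceOrientationLandscapeRight)
        result = AVCaptureVideoOrientationLandscapeLeft;
    return result;
}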

// utility routine to create a new image with the red square overlay with appropriate orientation
// and return the new composited image which can be saved to the camera roll
// (used with the take picture method only)
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    // 'square' is the overlay UIImage instance variable; imageRotatedByDegrees:
    // is a UIImage category helper (a sketch of it follows this method)
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);

    return returnImage;
}
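The imageRotatedByDegrees: call above comes from a UIImage rotation category that this post does not show; a minimal sketch of such a category (the category name RotateAdditions is an assumption):

@implementation UIImage (RotateAdditions)

// returns a new image rotated by the given number of degrees
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
    CGFloat radians = degrees * M_PI / 180.0;

    // bounding box of the rotated image
    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0, 0, self.size.width, self.size.height),
                                                    CGAffineTransformMakeRotation(radians));
    CGSize rotatedSize = rotatedRect.size;

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // move the origin to the middle, rotate, flip for CGContextDrawImage, draw centered
    CGContextTranslateCTM(context, rotatedSize.width / 2.0, rotatedSize.height / 2.0);
    CGContextRotateCTM(context, radians);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context,
                       CGRectMake(-self.size.width / 2.0, -self.size.height / 2.0,
                                  self.size.width, self.size.height),
                       [self CGImage]);

    UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rotatedImage;
}

@end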

// utility routine used after taking a still image to write the resulting image to the camera roll
- (BOOL)writeCGImageToCameraRoll:(CGImageRef)cgImage withMetadata:(NSDictionary *)metadata
{
    CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef destination = CGImageDestinationCreateWithData(destinationData,
                                                                         CFSTR("public.jpeg"),
                                                                         1,
                                                                         NULL);
    BOOL success = (destination != NULL);
    require(success, bail); // require() comes from AssertMacros.h

    const float JPEGCompQuality = 0.85f; // JPEGHigherQuality
    CFMutableDictionaryRef optionsDict = NULL;
    CFNumberRef qualityNum = NULL;

    qualityNum = CFNumberCreate(0, kCFNumberFloatType, &JPEGCompQuality);
    if ( qualityNum ) {
        optionsDict = CFDictionaryCreateMutable(0, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        if ( optionsDict )
            CFDictionarySetValue(optionsDict, kCGImageDestinationLossyCompressionQuality, qualityNum);
        CFRelease( qualityNum );
    }

    CGImageDestinationAddImage( destination, cgImage, optionsDict );
    success = CGImageDestinationFinalize( destination );
    if ( optionsDict )
        CFRelease(optionsDict);

    require(success, bail);

    // retain the data for the asset library call; it is released in the completion block
    CFRetain(destinationData);
    ALAssetsLibrary *library = [ALAssetsLibrary new];
    [library writeImageDataToSavedPhotosAlbum:(id)destinationData metadata:metadata completionBlock:^(NSURL *assetURL, NSError *error) {
        if (destinationData)
            CFRelease(destinationData);
    }];
    [library release];

bail:
    if (destinationData)
        CFRelease(destinationData);
    if (destination)
        CFRelease(destination);
    return success;
}

// utility routine to display an error alert if takePicture fails
- (void)displayErrorOnMainQueue:(NSError *)error withMessage:(NSString *)message
{
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"%@ (%d)", message, (int)[error code]]
                                                            message:[error localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"Dismiss"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
    });
}

Use this KVO callback to perform an animation while capturing and saving the image to the camera roll:

// perform a flash bulb animation using KVO to monitor the value of the capturingStillImage property of the AVCaptureStillImageOutput class
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ( context == AVCaptureStillImageIsCapturingStillImageContext ) {
        BOOL isCapturingStillImage = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];

        if ( isCapturingStillImage ) {
            // do flash bulb like animation
            flashView = [[UIView alloc] initWithFrame:[previewView frame]];
            [flashView setBackgroundColor:[UIColor whiteColor]];
            [flashView setAlpha:0.f];
            [[[self view] window] addSubview:flashView];

            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:1.f];
                             }
             ];
        }
        else {
            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:0.f];
                             }
                             completion:^(BOOL finished){
                                 [flashView removeFromSuperview];
                                 [flashView release];
                                 flashView = nil;
                             }
             ];
        }
    }
}

Smile and Eye Blink Detection:
-(void)updateUIForFeatures:(NSString*)feature value:(BOOL)args
{
    if ([feature isEqualToString:@"smile"]) {
        self.smileImgView.hidden = NO;
        self.lEyeImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Smiling......";
    }
    if ([feature isEqualToString:@"leftEye"]) {
        self.lEyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Left Eye Closed......";
    }
    
    if ([feature isEqualToString:@"rightEye"]) {
        self.REyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.lEyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Right Eye Closed......";
    }
    
}
This is called from the captureOutput:didOutputSampleBuffer:fromConnection: delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{ 
 // got an image
 CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
 CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
 if (attachments)
  CFRelease(attachments);
 NSDictionary *imageOptions = nil;
 UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
 int exifOrientation;
        
 enum {
  PHOTOS_EXIF_0ROW_TOP_0COL_LEFT   = 1, //   1  =  0th row is at the top, and 0th column is on the left (THE DEFAULT).
  PHOTOS_EXIF_0ROW_TOP_0COL_RIGHT   = 2, //   2  =  0th row is at the top, and 0th column is on the right.  
  PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT      = 3, //   3  =  0th row is at the bottom, and 0th column is on the right.  
  PHOTOS_EXIF_0ROW_BOTTOM_0COL_LEFT       = 4, //   4  =  0th row is at the bottom, and 0th column is on the left.  
  PHOTOS_EXIF_0ROW_LEFT_0COL_TOP          = 5, //   5  =  0th row is on the left, and 0th column is the top.  
  PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP         = 6, //   6  =  0th row is on the right, and 0th column is the top.  
  PHOTOS_EXIF_0ROW_RIGHT_0COL_BOTTOM      = 7, //   7  =  0th row is on the right, and 0th column is the bottom.  
  PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM       = 8  //   8  =  0th row is on the left, and 0th column is the bottom.  
 };
 
 switch (curDeviceOrientation) {
  case UIDeviceOrientationPortraitUpsideDown:  // Device oriented vertically, home button on the top
   exifOrientation = PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM;
   break;
  case UIDeviceOrientationLandscapeLeft:       // Device oriented horizontally, home button on the right
   if (isUsingFrontFacingCamera)
    exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
   else
    exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
   break;
  case UIDeviceOrientationLandscapeRight:      // Device oriented horizontally, home button on the left
   if (isUsingFrontFacingCamera)
    exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
   else
    exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
   break;
  case UIDeviceOrientationPortrait:            // Device oriented vertically, home button on the bottom
  default:
   exifOrientation = PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP;
   break;
 }

 imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:exifOrientation] forKey:CIDetectorImageOrientation];
 NSArray *features = [faceDetector featuresInImage:ciImage options:@{CIDetectorEyeBlink: @YES,
                                                                        CIDetectorSmile: @YES,
                                                                        CIDetectorImageOrientation: [NSNumber numberWithInt:exifOrientation]}];
 [ciImage release];
    
    detectFacesFeatures = YES;
    for (CIFaceFeature *ff in features)
    {
        
        if (ff.hasSmile) {
            NSLog(@"has smile %d", 1);   
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"smile" value:ff.hasSmile];
            });
        }
        if (ff.leftEyeClosed) {
            NSLog(@"leftEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"leftEye" value:ff.leftEyeClosed];
            });
            
        }
        if (ff.rightEyeClosed) {
            NSLog(@"rightEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"rightEye" value:ff.rightEyeClosed];
            });
        }
        
        if (ff.hasTrackingFrameCount) {
            NSLog(@"trackingFrameCount %d", ff.trackingFrameCount);
        }
        
        if (ff.hasTrackingID) {
            NSLog(@"trackingID %d", ff.trackingID);
        }
        
        NSLog(@"type %@", ff.type);
       // NSLog(@"face bounds %@", NSStringFromCGRect(faceRect));
        
        if (ff.hasFaceAngle){
            NSLog(@"faceAngle %g", ff.faceAngle);
        }
        
        if (ff.hasMouthPosition){
            NSLog(@"Mouth %g %g", ff.mouthPosition.x, ff.mouthPosition.y);
        }
        
        if (ff.hasRightEyePosition){
            NSLog(@"right eye %g %g", ff.rightEyePosition.x, ff.rightEyePosition.y);
        }
        
        if (ff.hasLeftEyePosition){
            NSLog(@"left eye %g %g", ff.leftEyePosition.x, ff.leftEyePosition.y);
        }
        
    }

 CMFormatDescriptionRef fdesc = CMSampleBufferGetFormatDescription(sampleBuffer);
 CGRect clap = CMVideoFormatDescriptionGetCleanAperture(fdesc, false /*originIsTopLeft == false*/);
 
 dispatch_async(dispatch_get_main_queue(), ^(void) {
  [self drawFaceBoxesForFeatures:features forVideoBox:clap orientation:curDeviceOrientation];
 });
}
Thank you for reading this blog. Post your feedback and queries.

Tuesday, August 26, 2014

Google DFP (DoubleClick for Publishers) in Mobile Apps & Web

Using Google DFP (DoubleClick for Publishers)
With Mobile Apps & Web

What is DoubleClick for Publishers?
DoubleClick for Publishers is a free advertisement service from Google for small enterprises or individual entrepreneurs who want to earn money by providing dedicated advertisement space in their mobile apps or on the web. You can also create your own business advertisements free of cost and get them delivered across the web and mobile apps.

How to use it?
Go to the Google DFP website and sign in with the Google account for which AdSense is enabled.

What is AdSense?
AdSense is a Google advertisement service; once AdSense is enabled for your Google account,
you can use it for advertisement purposes.

Why is AdSense required?
AdSense is the service responsible for delivering advertisements, i.e. putting advertisements on your website. Using AdSense you can apply for the domains on which you want to show advertisements, and earn money from clicks on those advertisements.
To enable AdSense, log in and follow Google's procedure here: https://www.google.co.in/adsense

What's next?
Once AdSense is enabled on your account, you can log in to the Google DFP website and start creating your advertisement. In the ad world, the advertisement that actually gets displayed is called a creative.

How to create an advertisement?
There are a few steps involved:

Step 1:
Go to the DFP website and log in with your Google account credentials.
Step 2:
On the top bar you will see an Inventory section; click on it to go in (as shown in the image below).


Step 3:
Once inside the Inventory section, click on the "+ New ad unit" button (a form will open).


Step 4: 
New Ad unit form:

Fill in the form and click Save at the end.

Step 5:
Example form filled in just for demo purposes. (For mobile apps, you can also choose a refresh rate if required; it is set to "No refresh" in this demo form.)


Once you click Save, resolve any errors that are reported. Errors are generally reported if you use non-permitted characters, most commonly a space in the ad unit name.
Click Save and move on to the next operation.



Step 6:
Go to the Placements section on the same Inventory tab and create a placement for your ad unit (click on the "+ New placement" button, which opens the placement form).


New placement form:


Fill in the form and create a placement for your ad unit.

Example form filled in just for demo purposes.


Once filled in, click Save; your placement is saved for the 320x50 ad unit, since we associated it by choosing the corresponding ad unit in the form above.


Step 7:
For the time being, the work with the Inventory tab is finished, so we move to the Orders tab.


As you can see in the image, once an order is created and the advertisement is being delivered, it goes through several statuses under the Line items field. The line item statuses are self-descriptive, and details can be found in the official DFP documentation.

Now click on "New Order" button to create an Order here.
Form needs to be filled to create your new order, which is shown below.


After filling in this form, we need to save it, but here we have two options: either just save it, or save it and upload creatives. (Remember, the creative is the visible part when the advertisement is delivered.)

Step 8:
Demo form filled in, and uploading the creative.


After filling in the form, when you click on "Save and upload creatives", the order is saved and a form opens to add a creative to this order, associated with the ad unit and placement.

Step 9: (Adding a creative to your ad unit)
You get a screen to create and add a creative to your order. The order and line item are shown at the top of the screen; see the top blue line in the image, which identifies the order and line item.



As I am targeting a mobile ad of size 320x50, for the demo I am using a 320x50 banner image, with a click-through link to show details.

Once saved, you get a screen for advertisement approval, like below.


If you get an overbooking warning on the approval screen, approve anyway and go ahead.


After this much work, if all goes well, within a few minutes your line item should be in the Ready state; now you need to create a mobile app to use this advertisement.

Step 10: (Generating a tag for mobile and web)
In the mobile app you will be using a tag for this advertisement. To generate the tag, go back to the Inventory tab and, in the left-hand side panel, click on Generate Tag. The example images below illustrate this process.






Copy the mobile app tag and use it with the DFP library in your mobile application; a minimal integration sketch follows.
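This sketch assumes the Google Mobile Ads SDK (DFP flavor) is added to the project; the ad unit path is a placeholder that you replace with the network code and ad unit name from your generated tag:

#import "DFPBannerView.h"   // from the Google Mobile Ads SDK (DFP)

// somewhere in your view controller, e.g. viewDidLoad
DFPBannerView *bannerView = [[DFPBannerView alloc] initWithAdSize:kGADAdSizeBanner]; // 320x50
bannerView.adUnitID = @"/YOUR_NETWORK_CODE/YOUR_AD_UNIT_NAME";  // placeholder from the generated tag
bannerView.rootViewController = self;
[self.view addSubview:bannerView];
[bannerView loadRequest:[GADRequest request]];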
Tested in my mobile application:

Step 11: (Test Mobile App)
Advertisement Delivered to my mobile App.


When I click on my advertisement, the click-through URL set in the creative is opened, as the image below shows.



Google also provides an example that can be used to test our advertisements.

Step 12: (Change in order status)
Now for the last important thing: the change in status of your new advertisement.



This status change from Ready to Delivering occurs when your first impression is delivered.

Done - Thank you for reading my Blog.

A complete video describing the above process is coming soon.
Stay tuned to my blog and subscribe to my YouTube channel for more videos.