Wednesday, August 27, 2014

iOS Face and Facial-Part Detection

iOS Face and Facial-Part Detection
With Sample Live App





Steps for iOS face and facial-part detection:

1: Set up the AV capture session
  1. Create a new AVCaptureSession.
  2. Set the sessionPreset to control photo/video quality.
  3. Create an AVCaptureDevice instance with media type AVMediaTypeVideo.
  4. Create an AVCaptureDeviceInput instance with the device created above.
  5. Check whether the session can add this device input, and if so,
  6. add the input to the session.
  7. Create an AVCaptureStillImageOutput instance and an AVCaptureVideoDataOutput instance.
  8. For the still image output, add a KVO observer for the "capturingStillImage" key with the NSKeyValueObservingOptionNew option.
  9. Check whether the current session can add the still image output, and if so,
  10. add the AVCaptureStillImageOutput instance as an output.
  11. Similarly, for the AVCaptureVideoDataOutput instance, set its video settings dictionary and set alwaysDiscardsLateVideoFrames to YES (as this example does; adjust as needed).
  12. Create a serial dispatch queue.
  13. Assign the sample buffer delegate class and the queue to the AVCaptureVideoDataOutput instance.
  14. Check whether the session can add the AVCaptureVideoDataOutput instance as an output, and if so,
  15. add it to the session.
  16. Get the connection for the video media type from the video data output and enable it.
  17. Create an AVCaptureVideoPreviewLayer instance with the same session.
  18. Set the preview layer's background color and video gravity.
  19. Get the layer from the preview view, set the preview layer's frame to its bounds, and add the preview layer as a sublayer of that root layer.
  20. Start the session using the session instance's startRunning method.

Setup done.
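The setup steps above can be sketched in code as follows. This is only an outline: the ivar names (session, stillImageOutput, videoDataOutput, videoDataOutputQueue, previewLayer, previewView) and the KVO context constant follow the conventions of Apple's SquareCam-style sample and are illustrative, not definitive.

```objectivec
- (void)setupAVCapture
{
    NSError *error = nil;

    // steps 1-2: session and quality preset
    session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // steps 3-6: camera device and its input
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    // steps 7-10: still image output, observed via KVO for the shutter animation
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillImageOutput addObserver:self
                       forKeyPath:@"capturingStillImage"
                          options:NSKeyValueObservingOptionNew
                          context:AVCaptureStillImageIsCapturingStillImageContext];
    if ([session canAddOutput:stillImageOutput])
        [session addOutput:stillImageOutput];

    // steps 11-15: video data output delivering BGRA frames on a serial queue
    videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoDataOutput setVideoSettings:
        @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) }];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    if ([session canAddOutput:videoDataOutput])
        [session addOutput:videoDataOutput];

    // step 16: enable the video connection
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

    // steps 17-19: preview layer inside the preview view
    previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    CALayer *rootLayer = [previewView layer];
    [previewLayer setFrame:[rootLayer bounds]];
    [rootLayer addSublayer:previewLayer];

    // step 20: start capturing
    [session startRunning];
}
```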

Now you have to take care of drawing the rectangles for faces or facial parts in the delegate method

::: captureOutput:didOutputSampleBuffer:fromConnection: (the preferred one)

the alternate method being

::: captureOutput:didDropSampleBuffer:fromConnection:

of the protocol :: AVCaptureVideoDataOutputSampleBufferDelegate



2: Delegate Method Implementation
  1. Create a CVPixelBufferRef from the sample buffer received in the delegate method, using CMSampleBufferGetImageBuffer.
  2. Create a CFDictionaryRef of attachments from the sample buffer with attachment mode kCMAttachmentMode_ShouldPropagate.
  3. Create a CIImage from the pixel buffer using the attachments dictionary.
  4. Build an imageOptions dictionary from the current device orientation and use it for feature detection.
  5. Call the face detector's featuresInImage:options: method; it returns an array of features.
  6. Get the CMFormatDescriptionRef from the sample buffer using CMSampleBufferGetFormatDescription.
  7. Get the rect of the video being displayed on the iOS device using CMVideoFormatDescriptionGetCleanAperture.
  8. On the main queue, draw face boxes for the detected features, or for whichever features you want to highlight.

3: drawFaceBoxesForFeatures implementation details
  1. Get the sublayers of the preview layer.
  2. Get the sublayer count and the feature count.
  3. Begin the drawing pass with [CATransaction begin].
  4. Disable implicit animations: [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
  5. From the sublayers, find every layer named "FaceLayer" and hide all layers of that type.
  6. If no feature was detected, or the user has disabled face detection, commit the transaction now and return.
  7. Find where the video box is positioned within the preview layer, based on the preview view's size, the preview layer's gravity, and the video dimensions.
  8. For each detected feature, create a feature layer, add it to the preview layer, and draw its rect (this sample draws an image; you could just as well draw a square).
  9. Rotate the layer to match the device orientation.
  10. Once this is done for all detected features, commit the CATransaction.
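A condensed sketch of this drawing pass is below. The detectFaces flag, the square overlay image, and the videoPreviewBoxForGravity:frameSize:apertureSize: class method (which mirrors what AVCaptureVideoPreviewLayer does for each gravity mode) are assumed helpers/ivars in the style of the sample; front-camera mirroring is omitted for brevity.

```objectivec
- (void)drawFaceBoxesForFeatures:(NSArray *)features
                     forVideoBox:(CGRect)clap
                     orientation:(UIDeviceOrientation)orientation
{
    NSArray *sublayers = [NSArray arrayWithArray:[previewLayer sublayers]];

    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

    // step 5: hide all face layers left over from the previous frame
    for (CALayer *layer in sublayers) {
        if ([[layer name] isEqualToString:@"FaceLayer"])
            [layer setHidden:YES];
    }

    // step 6: nothing to draw, or detection switched off by the user
    if ([features count] == 0 || !detectFaces) {
        [CATransaction commit];
        return;
    }

    // step 7: where the video box sits inside the preview layer
    CGRect previewBox = [[self class] videoPreviewBoxForGravity:[previewLayer videoGravity]
                                                      frameSize:[previewView frame].size
                                                   apertureSize:clap.size];

    // step 8: one layer per detected face
    for (CIFaceFeature *ff in features) {
        CGRect faceRect = [ff bounds];

        // scale/translate from the clean aperture into the preview box
        CGFloat widthScale  = previewBox.size.width  / clap.size.width;
        CGFloat heightScale = previewBox.size.height / clap.size.height;
        faceRect = CGRectMake(previewBox.origin.x + faceRect.origin.x * widthScale,
                              previewBox.origin.y + faceRect.origin.y * heightScale,
                              faceRect.size.width  * widthScale,
                              faceRect.size.height * heightScale);

        CALayer *featureLayer = [CALayer layer];
        [featureLayer setContents:(id)[square CGImage]]; // 'square' is the overlay UIImage ivar
        [featureLayer setName:@"FaceLayer"];
        [featureLayer setFrame:faceRect];

        // step 9: rotate to match the device orientation
        switch (orientation) {
            case UIDeviceOrientationPortraitUpsideDown:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(M_PI)];
                break;
            case UIDeviceOrientationLandscapeLeft:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(M_PI_2)];
                break;
            case UIDeviceOrientationLandscapeRight:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(-M_PI_2)];
                break;
            default:
                break; // portrait needs no rotation
        }
        [previewLayer addSublayer:featureLayer];
    }

    // step 10
    [CATransaction commit];
}
```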

4: Additional Features:
  1. You can add features such as handling a pinch gesture, and a "detect faces" input that lets the user enable or disable face detection; remember to redraw the face rects each time the user toggles it.
  2. Take a picture with the face markers rendered into it, or without them.

5: Taking a picture and saving it to the camera roll is explained below:

Utility method from Apple:
CreateCGImageFromCVPixelBuffer

Implementation::::
static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
    width = CVPixelBufferGetWidth( pixelBuffer );
    height = CVPixelBufferGetHeight( pixelBuffer );

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

    colorspace = CGColorSpaceCreateDeviceRGB();

    // the data provider keeps the pixel buffer alive; ReleaseCVPixelBuffer (defined
    // elsewhere in the sample) unlocks and releases it when the provider is freed
    CVPixelBufferRetain( pixelBuffer );
    provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer );
    image = CGImageCreate( width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault );

bail:
    if ( err && image ) {
        CGImageRelease( image );
        image = NULL;
    }
    if ( provider ) CGDataProviderRelease( provider );
    if ( colorspace ) CGColorSpaceRelease( colorspace );
    *imageOut = image;
    return err;
}
:::
&
CreateCGBitmapContextForSize

Implementation:::
static CGContextRef CreateCGBitmapContextForSize(CGSize size)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    int             bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate( NULL,
                                     size.width,
                                     size.height,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast );
    CGContextSetAllowsAntialiasing( context, NO );
    CGColorSpaceRelease( colorSpace );
    return context;
}
:::

Process to save the image to the camera roll:
  1. Find out the current orientation and tell the still image output.
  2. Get the stillImageConnection from stillImageOutput with media type AVMediaTypeVideo.
  3. Get the current device orientation.
  4. Map it to the corresponding AVCaptureVideoOrientation.
  5. See the helper method below for this avOrientation mapping.
  6. Using the stillImageConnection, set the video orientation to that avOrientation.
  7. Set the video scale and crop factor on the stillImageConnection.
  8. Check whether face detection is on. Based on that, set the appropriate pixel format / image type in the output settings: an uncompressed image is needed if we may draw the red square on top, whereas just writing a JPEG to the camera roll is the trivial case.
  9. So, depending on whether face detection is on, set the output settings on the stillImageOutput object as explained above.
  10. Now call captureStillImageAsynchronouslyFromConnection:completionHandler:. Inside the completion handler, check for an error; if there is one, show a proper error message to the user; otherwise check the face detection flag and perform the save-to-camera-roll operation.
  11. Assuming face detection is on:
  12. Create a pixel buffer from imageDataSampleBuffer using CMSampleBufferGetImageBuffer.
  13. Get the attachments dictionary using CMCopyDictionaryOfAttachments.
  14. Create a CIImage object from the pixel buffer and the attachments.
  15. Get the orientation from imageDataSampleBuffer using CMGetAttachment, and create an imageOptions dictionary with that orientation under the CIDetectorImageOrientation key.
  16. Now dispatch synchronously on videoDataOutputQueue; this ensures new frames are automatically dropped while the existing frame is being processed.
  17. Inside the dispatch_sync block, proceed as this demo app does:
  18. Get the features in the image as a features array.
  19. Create a CGImageRef using the CreateCGImageFromCVPixelBuffer helper described above.
  20. Since face detection is on and we want the image saved with the square over the face, use that CGImageRef to create another CGImageRef with the square overlaid, via the newSquareOverlayedImageForFeatures helper described later.
  21. Now write the CGImageRef to the camera roll. Remember that ALAssetsLibrary writes standard compressed image formats, so writing a raw CGImageRef needs some additional processing.
  22. See the writeCGImageToCameraRoll helper method for that processing.


Now suppose face detection is off; then:
  1. Get the JPEG data using AVCaptureStillImageOutput's jpegStillImageNSDataRepresentation: method.
  2. Get the attachments dictionary ref from the sample buffer data.
  3. Create an ALAssetsLibrary instance and write the image data to the photo album with writeImageDataToSavedPhotosAlbum:metadata:completionBlock:.
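Putting these steps together, the capture entry point might look like the sketch below. The avOrientationForDeviceOrientation: helper, the effectiveScale and detectFaces ivars are assumptions in the style of the sample, and the face-detection branch is abbreviated to the steps listed above.

```objectivec
- (IBAction)takePicture:(id)sender
{
    // steps 1-7: orientation, scale, and crop on the still image connection
    AVCaptureConnection *stillImageConnection =
        [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    AVCaptureVideoOrientation avcaptureOrientation =
        [self avOrientationForDeviceOrientation:curDeviceOrientation]; // assumed mapping helper
    [stillImageConnection setVideoOrientation:avcaptureOrientation];
    [stillImageConnection setVideoScaleAndCropFactor:effectiveScale];

    // steps 8-9: uncompressed BGRA when we need to draw on the image, JPEG otherwise
    BOOL doingFaceDetection = detectFaces;
    if (doingFaceDetection)
        [stillImageOutput setOutputSettings:
            @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) }];
    else
        [stillImageOutput setOutputSettings:@{ AVVideoCodecKey : AVVideoCodecJPEG }];

    // step 10 onwards
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (error) {
                [self displayErrorOnMainQueue:error withMessage:@"Take picture failed"];
            }
            else if (doingFaceDetection) {
                // steps 12-22: detect features, overlay the square, write the CGImage
                CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
                CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                    imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
                CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                                  options:(NSDictionary *)attachments];
                if (attachments) CFRelease(attachments);
                // ... featuresInImage:options:, CreateCGImageFromCVPixelBuffer,
                //     newSquareOverlayedImageForFeatures:, writeCGImageToCameraRoll:
                [ciImage release];
            }
            else {
                // trivial case: JPEG straight to the camera roll
                NSData *jpegData =
                    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                    imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
                ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
                [library writeImageDataToSavedPhotosAlbum:jpegData
                                                 metadata:(id)attachments
                                          completionBlock:^(NSURL *assetURL, NSError *err) {
                    if (err) [self displayErrorOnMainQueue:err withMessage:@"Save to camera roll failed"];
                }];
                if (attachments) CFRelease(attachments);
                [library release];
            }
        }];
}
```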

Done…

Additional Helper Methods:

// utility routine to create a new image with the red square overlay with appropriate orientation
// and return the new composited image which can be saved to the camera roll

// used with take picture method only.
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    // 'square' is the overlay UIImage ivar; imageRotatedByDegrees: is a UIImage category from the sample
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);

    return returnImage;
}

// utility routine used after taking a still image to write the resulting image to the camera roll
- (BOOL)writeCGImageToCameraRoll:(CGImageRef)cgImage withMetadata:(NSDictionary *)metadata
{
    CFMutableDataRef destinationData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef destination = CGImageDestinationCreateWithData(destinationData,
                                                                        CFSTR("public.jpeg"),
                                                                        1,
                                                                        NULL);
    BOOL success = (destination != NULL);
    require(success, bail);

    const float JPEGCompQuality = 0.85f; // JPEGHigherQuality
    CFMutableDictionaryRef optionsDict = NULL;
    CFNumberRef qualityNum = NULL;

    qualityNum = CFNumberCreate(0, kCFNumberFloatType, &JPEGCompQuality);
    if ( qualityNum ) {
        optionsDict = CFDictionaryCreateMutable(0, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        if ( optionsDict )
            CFDictionarySetValue(optionsDict, kCGImageDestinationLossyCompressionQuality, qualityNum);
        CFRelease( qualityNum );
    }

    CGImageDestinationAddImage( destination, cgImage, optionsDict );
    success = CGImageDestinationFinalize( destination );
    if ( optionsDict )
        CFRelease(optionsDict);

    require(success, bail);

    // retained here, released in the completion block once the write finishes
    CFRetain(destinationData);
    ALAssetsLibrary *library = [ALAssetsLibrary new];
    [library writeImageDataToSavedPhotosAlbum:(id)destinationData metadata:metadata completionBlock:^(NSURL *assetURL, NSError *error) {
        if (destinationData)
            CFRelease(destinationData);
    }];
    [library release];

bail:
    if (destinationData)
        CFRelease(destinationData);
    if (destination)
        CFRelease(destination);
    return success;
}

// utility routine to display an error alert if takePicture fails
- (void)displayErrorOnMainQueue:(NSError *)error withMessage:(NSString *)message
{
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"%@ (%d)", message, (int)[error code]]
                                                            message:[error localizedDescription]
                                                           delegate:nil
                                                  cancelButtonTitle:@"Dismiss"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
    });
}

Use this KVO observer method to perform an animation while capturing and saving the image to the Camera Roll:

// perform a flash bulb animation using KVO to monitor the value of the capturingStillImage property of the AVCaptureStillImageOutput class
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ( context == AVCaptureStillImageIsCapturingStillImageContext ) {
        BOOL isCapturingStillImage = [[change objectForKey:NSKeyValueChangeNewKey] boolValue];

        if ( isCapturingStillImage ) {
            // do flash bulb like animation
            flashView = [[UIView alloc] initWithFrame:[previewView frame]];
            [flashView setBackgroundColor:[UIColor whiteColor]];
            [flashView setAlpha:0.f];
            [[[self view] window] addSubview:flashView];

            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:1.f];
                             }
             ];
        }
        else {
            [UIView animateWithDuration:.4f
                             animations:^{
                                 [flashView setAlpha:0.f];
                             }
                             completion:^(BOOL finished){
                                 [flashView removeFromSuperview];
                                 [flashView release];
                                 flashView = nil;
                             }
             ];
        }
    }
}

Smile and Eye Blink Detection:
-(void)updateUIForFeatures:(NSString*)feature value:(BOOL)args
{
    if ([feature isEqualToString:@"smile"]) {
        self.smileImgView.hidden = NO;
        self.lEyeImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Smiling......";
    }
    if ([feature isEqualToString:@"leftEye"]) {
        self.lEyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.REyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Left Eye Closed......";
    }
    
    if ([feature isEqualToString:@"rightEye"]) {
        self.REyeImgView.hidden = NO;
        self.smileImgView.hidden = YES;
        self.lEyeImgView.hidden = YES;
        self.txtLbl.hidden = NO;
        self.txtLbl.text = @"Right Eye Closed......";
    }
    
}
This is called from within the sample buffer delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{ 
 // got an image
 CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
 CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
 CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
 if (attachments)
  CFRelease(attachments);
 NSDictionary *imageOptions = nil;
 UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
 int exifOrientation;
        
 enum {
  PHOTOS_EXIF_0ROW_TOP_0COL_LEFT   = 1, //   1  =  0th row is at the top, and 0th column is on the left (THE DEFAULT).
  PHOTOS_EXIF_0ROW_TOP_0COL_RIGHT   = 2, //   2  =  0th row is at the top, and 0th column is on the right.  
  PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT      = 3, //   3  =  0th row is at the bottom, and 0th column is on the right.  
  PHOTOS_EXIF_0ROW_BOTTOM_0COL_LEFT       = 4, //   4  =  0th row is at the bottom, and 0th column is on the left.  
  PHOTOS_EXIF_0ROW_LEFT_0COL_TOP          = 5, //   5  =  0th row is on the left, and 0th column is the top.  
  PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP         = 6, //   6  =  0th row is on the right, and 0th column is the top.  
  PHOTOS_EXIF_0ROW_RIGHT_0COL_BOTTOM      = 7, //   7  =  0th row is on the right, and 0th column is the bottom.  
  PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM       = 8  //   8  =  0th row is on the left, and 0th column is the bottom.  
 };
 
 switch (curDeviceOrientation) {
  case UIDeviceOrientationPortraitUpsideDown:  // Device oriented vertically, home button on the top
   exifOrientation = PHOTOS_EXIF_0ROW_LEFT_0COL_BOTTOM;
   break;
  case UIDeviceOrientationLandscapeLeft:       // Device oriented horizontally, home button on the right
   if (isUsingFrontFacingCamera)
    exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
   else
    exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
   break;
  case UIDeviceOrientationLandscapeRight:      // Device oriented horizontally, home button on the left
   if (isUsingFrontFacingCamera)
    exifOrientation = PHOTOS_EXIF_0ROW_TOP_0COL_LEFT;
   else
    exifOrientation = PHOTOS_EXIF_0ROW_BOTTOM_0COL_RIGHT;
   break;
  case UIDeviceOrientationPortrait:            // Device oriented vertically, home button on the bottom
  default:
   exifOrientation = PHOTOS_EXIF_0ROW_RIGHT_0COL_TOP;
   break;
 }

 imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:exifOrientation] forKey:CIDetectorImageOrientation];
 NSArray *features = [faceDetector featuresInImage:ciImage options:@{CIDetectorEyeBlink: @YES,
                                                                        CIDetectorSmile: @YES,
                                                                        CIDetectorImageOrientation: [NSNumber numberWithInt:exifOrientation]}];
 [ciImage release];
    
    detectFacesFeatures = YES;
    for (CIFaceFeature *ff in features)
    {
        
        if (ff.hasSmile) {
            NSLog(@"has smile %d", 1);   
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"smile" value:ff.hasSmile];
            });
        }
        if (ff.leftEyeClosed) {
            NSLog(@"leftEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"leftEye" value:ff.leftEyeClosed];
            });
            
        }
        if (ff.rightEyeClosed) {
            NSLog(@"rightEyeClosed %d", 1);
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                [self updateUIForFeatures:@"rightEye" value:ff.rightEyeClosed];
            });
        }
        
        if (ff.hasTrackingFrameCount) {
            NSLog(@"trackingFrameCount %d", ff.trackingFrameCount);
        }
        
        if (ff.hasTrackingID) {
            NSLog(@"trackingID %d", ff.trackingID);
        }
        
        NSLog(@"type %@", ff.type);
       // NSLog(@"face bounds %@", NSStringFromCGRect(faceRect));
        
        if (ff.hasFaceAngle){
            NSLog(@"faceAngle %g", ff.faceAngle);
        }
        
        if (ff.hasMouthPosition){
            NSLog(@"Mouth %g %g", ff.mouthPosition.x, ff.mouthPosition.y);
        }
        
        if (ff.hasRightEyePosition){
            NSLog(@"right eye %g %g", ff.rightEyePosition.x, ff.rightEyePosition.y);
        }
        
        if (ff.hasLeftEyePosition){
            NSLog(@"left eye %g %g", ff.leftEyePosition.x, ff.leftEyePosition.y);
        }
        
    }

 CMFormatDescriptionRef fdesc = CMSampleBufferGetFormatDescription(sampleBuffer);
 CGRect clap = CMVideoFormatDescriptionGetCleanAperture(fdesc, false /*originIsTopLeft == false*/);
 
 dispatch_async(dispatch_get_main_queue(), ^(void) {
  [self drawFaceBoxesForFeatures:features forVideoBox:clap orientation:curDeviceOrientation];
 });
}
Thank you for reading this blog. Post your feedback and queries.

Tuesday, August 26, 2014

Google DFP (Double Click Publishers) In MobileApps & Web

Using Google DFP (Double Click For Publishers)
With Mobile Apps & Web

What is DoubleClick for Publishers?
DoubleClick for Publishers is a free advertisement service from Google for small enterprises or individual entrepreneurs who want to earn money by providing dedicated advertisement space in their mobile apps or on the web. You can also create your own business advertisement free of cost and have it delivered across the web and mobile apps.

How to use it:
Go to the DFP site and sign in with a Google account for which AdSense is enabled.

What is AdSense?
AdSense is Google's advertising service; once AdSense is enabled for your Google account,
you can use it for advertising purposes.

Why is AdSense required?
AdSense is the service responsible for delivering advertisements to, and placing them on, your website. Using AdSense you can register the domain on which you want to show advertisements, and earn money when those advertisements are clicked.
To enable AdSense, log in and follow Google's procedure here: https://www.google.co.in/adsense

What's next?
Once AdSense is enabled on your account, you can log in to the Google DFP website and start creating your advertisement. In the ad world, the advertisement that actually gets displayed is called a creative.

How to create an advertisement?
There are a few steps involved:

Step 1:
Go to DFP website and login with your google account credential.
Step 2:
On the top bar you will see an Inventory section; click on it and go in. (As shown in the image below.)


Step 3:
Once inside the Inventory section, click the "+ New ad unit" button. (A form will open.)


Step 4: 
New Ad unit form:

Fill in the form and click Save at the end.

Step 5:
An example form, filled in just for demo purposes. (For mobile apps, you can also choose the refresh-rate option if required; it is set to "No refresh" in the demo form.)


Once you click Save, resolve any errors that are reported. Errors are generally reported if you use non-permitted characters, typically a space in the ad unit name.
Click Save and move on to the next operation.



Step 6:
Go to the Placements section on the same Inventory tab and create a placement for your ad unit. (Click the "+ New placement" button; it will open the placement form.)


New placement form:


Fill in the form and create the placement for your ad unit.

An example form, filled in just for demo purposes.


Once filled in, click Save; your placement is now saved for the 320x50 ad unit, since we made the association by choosing the corresponding ad unit in the form above.


Step 7:
The work with the Inventory tab is finished for the time being, so we move to the Orders tab.


As you can see in the image, once an order is created and the advertisement is being displayed, it has several statuses under the Line items field. The line-item statuses are self-descriptive; details can be read in the official DFP documentation.

Now click the "New Order" button to create an order.
The form that needs to be filled in to create your new order is shown below.


After filling in this form we need to save it, but here we have two options: either just save it, or save it and upload the creative. (Remember, the creative is the visible part when the advertisement is delivered.)

Step 8:
Demo form filled in, and the creative being uploaded.


After filling in the form, when you click "Save and upload creatives", the order is saved and a form opens to add a creative to this order, associated with the ad unit and placement.

Step 9: (Adding a creative to your ad unit)
You get a screen to create and add a creative to your order. The order and line item are shown at the top of the screen; see the blue line at the top of the image, which identifies the order and line item.



As I am targeting a mobile ad of size 320x50, for the demo I am using a 320x50 banner image with a click-through link to show the details.

Once saved, you get a screen for advertisement approval, like the one below.


If you get an Overbooking Warning approval screen, approve it as well and go ahead.


After all this work, if everything goes well, your line item should reach the Ready state within a few minutes, and now you need a mobile app to use this advertisement.

Step 10: (Generating the tag for mobile and web)
In the mobile app you will be using a tag for this advertisement. To generate the tag, go back to the Inventory tab and click Generate Tags in the left-hand panel. The example images below illustrate this process.






Copy the Mobile App tag and use it with DFP library in your mobile application.
Tested in my mobile application:

Step 11: (Test Mobile App)
Advertisement Delivered to my mobile App.


When I click on my advertisement, the click-through URL set in the creative is displayed, as the image below shows.



An example provided by Google which can be used to test our advertisements.

Step 12: (Change in order status)
Now for the last important thing: the change in status of your new advertisement.



This status change from Ready to Delivering occurs when your first impression is delivered.

Done - Thank you for reading my Blog.

A complete video describing the above process live is coming soon...
Stay tuned to my blog and subscribe to my YouTube channel for more videos.


Monday, June 2, 2014

REST API development with NodeJS and Mongo DB

Brief description about NodeJS:
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast,
scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.


So let's start with our main topic, without discussing other things further.
For developing REST APIs with Node there is more than one module available, such as Express and Restify; plenty of pros and cons for each are discussed on the net.
For me everything has its own pros and cons, so I am using Restify here, as I find it easy.
So install Node.js and MongoDB on your operating system, register them globally, and then install the required modules:
1: MongoDB.
2: RestifyJS.
3: MongoJS.

MongoDB can be downloaded from http://www.mongodb.org/, and after installing Node.js, which can be downloaded from http://nodejs.org/, you can install restify and mongojs easily.

Commands to install them one by one:
1: npm install restify
2: npm install mongojs

Now create a directory, go inside it, and create a package.json file within that directory.
This package.json file will be used when you package your project for distribution.
To auto-generate package.json, type "npm init" in the terminal. (Windows users: first register node and npm in the environment variables so they are accessible globally.)

Command and its uses:
"npm init"

My Demo App Details:

$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sane defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (app) HelloRestAPI
version: (0.0.0) 1.0.0
description: This is my first rest API project
entry point: (index.js) app.js
test command: node app.js
git repository: https://github.com/ashishnigam/Team_Dev_Work/tree/master/Servers/nodeJS/myapp
keywords: Rest,REST, REST APIs, API, APIs, Rest API, Node API, NPM.
author: Ashish Nigam
license: (BSD-2-Clause) Copyright (c) 2014, Ashish Nigam. All rights reserved. [... the full BSD 2-clause license text was pasted here; it is reproduced in the generated package.json below ...]
About to write to /Users/ashish.nigam/Documents/NodeHTTP_Server/app/package.json:
[... the generated package.json, identical to the listing shown below ...]
Is this ok? (yes) yes

After this, your package.json is generated; you can check its content by opening it in any text editor.

After that, to install the dependencies and add them to package.json, execute the commands below in the terminal (in the same directory).


$ npm install restify --save
$ npm install mongojs --save

Similarly you can install any dependency and add it to package.json; a dependencies key is added automatically when you install with the --save flag as shown above.
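To show where this is heading, a minimal Restify + mongojs server could look like the sketch below. The file name, database name, collection, and routes are illustrative assumptions, not from this post; it needs a local mongod running.

```javascript
// app.js -- minimal REST API sketch using restify and mongojs
// (database name 'helloRestDB' and the /users routes are illustrative)
var restify = require('restify');
var mongojs = require('mongojs');

// connect to a local MongoDB database exposing a 'users' collection
var db = mongojs('helloRestDB', ['users']);

var server = restify.createServer({ name: 'HelloRestAPI' });
server.use(restify.bodyParser()); // parse JSON request bodies

// GET /users -> list all users
server.get('/users', function (req, res, next) {
    db.users.find(function (err, users) {
        if (err) return next(err);
        res.send(200, users);
        return next();
    });
});

// POST /users -> create a user from the request body
server.post('/users', function (req, res, next) {
    db.users.save(req.body, function (err, user) {
        if (err) return next(err);
        res.send(201, user);
        return next();
    });
});

server.listen(8080, function () {
    console.log('%s listening at %s', server.name, server.url);
});
```

Run it with `node app.js` and exercise it with curl, e.g. `curl -X POST -H "Content-Type: application/json" -d '{"name":"test"}' http://localhost:8080/users`.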

My package.json file generated:

{
  "name": "HelloRestAPI",
  "version": "1.0.0",
  "description": "This is my first rest API project",
  "main": "app.js",
  "scripts": {
    "test": "node app.js"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/ashishnigam/Team_Dev_Work/tree/master/Servers/nodeJS/myapp"
  },
  "keywords": [
    "Rest",
    "REST",
    "REST",
    "APIs",
    "API",
    "APIs",
    "Rest",
    "API",
    "Node",
    "API",
    "NPM."
  ],
  "author": "Ashish Nigam",
  "license": "Copyright (c) 2014, Ashish Nigam All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.",
  "dependencies": {
    "mongojs": "~0.13.0",
    "restify": "~2.8.1"
  }
}
Note the dependencies section at the end, which the --save flag added automatically.

Now create an app.js file in the same directory and open it in your favourite text editor.

Require both dependencies at the top of your app.js. Copy and paste the following two lines:

var restify = require('restify'); 
var mongojs = require("mongojs"); 
The two lines shown above load the restify and mongojs modules using the require function and assign them to variables.
Now create a new server using the restify API:
var ip_addr = '127.0.0.1';
var port    =  '8080';
 
var server = restify.createServer({
    name : "myapp"
});
 
server.listen(port ,ip_addr, function(){
    console.log('%s listening at %s ', server.name , server.url);
});
The code shown above creates a new server. The createServer() function takes an options object; we passed myapp as the name of the server in the options object. You can view the full list of options in the restify documentation. After creating the server instance, we call the listen function, passing the port, the IP address, and a callback function.

Run the application by typing the following command.

$ node app.js
You will see the following on the command line terminal:
myapp listening at http://127.0.0.1:8080

Configure Plugins

The restify module has a lot of built in plugins which we can use. Copy and paste the following in app.js. These should be added before the server.listen() function. Refer to documentation to learn about all the supported plugins.
server.use(restify.queryParser());
server.use(restify.bodyParser());
server.use(restify.CORS());

The three lines shown above do the following:
  1. The restify.queryParser() plugin is used to parse the HTTP query string (i.e., /jobs?skills=java,mysql). The parsed content will always be available in req.query.
  2. The restify.bodyParser() takes care of turning your request data into a JavaScript object on the server automatically.
  3. The restify.CORS() configures CORS support in the application.

Configure MongoDB

Before adding the routes, let's add the code that connects myapp to the MongoDB database.
var connection_string = '127.0.0.1:27017/myapp';
var db = mongojs(connection_string, ['myapp']);
var jobs = db.collection("jobs");
In the code shown above, we connect to the local MongoDB instance. Next, we get the jobs collection from the database object.
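The connection string follows the host:port/database convention, with an optional user:password@ prefix for authenticated deployments (the OpenShift version of app.js later in this post builds exactly that form). The helper below is a hypothetical utility of mine, not part of mongojs, that makes the format explicit:

```javascript
// Hypothetical helper that builds a mongojs connection string.
// Credentials are optional; when present they are prefixed as user:password@.
function buildConnectionString(host, port, dbName, user, password) {
  var auth = (user && password) ? user + ':' + password + '@' : '';
  return auth + host + ':' + port + '/' + dbName;
}

console.log(buildConnectionString('127.0.0.1', 27017, 'myapp'));
// 127.0.0.1:27017/myapp
console.log(buildConnectionString('mongo.example.com', 27017, 'myapp', 'admin', 's3cret'));
// admin:s3cret@mongo.example.com:27017/myapp
```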

Writing CRUD API

Now, we have the server and database part ready. We still need routes to define the behaviour of the API. Copy and paste the following code to app.js.
var PATH = '/jobs';
server.get({path : PATH , version : '0.0.1'} , findAllJobs);
server.get({path : PATH +'/:jobId' , version : '0.0.1'} , findJob);
server.post({path : PATH , version: '0.0.1'} ,postNewJob);
server.del({path : PATH +'/:jobId' , version: '0.0.1'} ,deleteJob);
The code shown above does the following:
  1. When a user makes a GET request to '/jobs', the findAllJobs callback is invoked. Another interesting part is the use of versioned routes: a client can specify the version using the Accept-Version header.
  2. When a user makes a GET request to '/jobs/123', the findJob callback is invoked.
  3. When a user makes a POST request to '/jobs', the postNewJob callback is invoked.
  4. When a user makes a DELETE request to '/jobs/123', the deleteJob callback is invoked.
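Under the hood, a path pattern such as '/jobs/:jobId' is matched against the request URL, and the named segment becomes available as req.params.jobId. The function below is a simplified sketch of that matching, not restify's actual implementation:

```javascript
// Simplified route matcher: compares a pattern like '/jobs/:jobId'
// against a concrete path and collects named segments into params.
function matchRoute(pattern, path) {
  var patternParts = pattern.split('/');
  var pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  var params = {};
  for (var i = 0; i < patternParts.length; i++) {
    if (patternParts[i].charAt(0) === ':') {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture named segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

console.log(matchRoute('/jobs/:jobId', '/jobs/123'));  // { jobId: '123' }
console.log(matchRoute('/jobs/:jobId', '/items/123')); // null
```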
Now we will write the callbacks. Copy and paste the following to app.js.
function findAllJobs(req, res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.find().limit(20).sort({postedOn : -1} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(200 , success);
            return next();
        }else{
            return next(err);
        }
 
    });
 
}
 
function findJob(req, res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.findOne({_id:mongojs.ObjectId(req.params.jobId)} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(200 , success);
            return next();
        }
        return next(err);
    })
}
 
function postNewJob(req , res , next){
    var job = {};
    job.title = req.params.title;
    job.description = req.params.description;
    job.location = req.params.location;
    job.postedOn = new Date();
 
    res.setHeader('Access-Control-Allow-Origin','*');
 
    jobs.save(job , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(201 , job);
            return next();
        }else{
            return next(err);
        }
    });
}
 
function deleteJob(req , res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.remove({_id:mongojs.ObjectId(req.params.jobId)} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(204);
            return next();      
        } else{
            return next(err);
        }
    })
 
}
The code shown above is self-explanatory: we are using the mongojs API to perform the CRUD operations.
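One detail worth guarding against: mongojs.ObjectId() throws if the id taken from the URL is not a valid 24-character hex string, which would crash the handler. A defensive check such as the one below (my own addition, not part of the original code or of mongojs) lets you reject a bad id before touching the database:

```javascript
// MongoDB ObjectIds are 24 hexadecimal characters; validate req.params.jobId
// before passing it to mongojs.ObjectId() to avoid a thrown error.
function isValidObjectId(id) {
  return typeof id === 'string' && /^[0-9a-fA-F]{24}$/.test(id);
}

console.log(isValidObjectId('52922650aab6107320000001')); // true
console.log(isValidObjectId('not-an-id'));                // false
```

In findJob or deleteJob you could return an error via next() early when this check fails, instead of letting the handler throw.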
We can test the web services using curl. To create a new job, type the command shown below.
$ curl -i -X POST -H "Content-Type: application/json" -d '{"title":"NodeJS Developer Required" , "description":"NodeJS Developer Required" , "location":"Sector 30, Gurgaon, India"}' http://127.0.0.1:8080/jobs
To find all the jobs:
$ curl -is http://127.0.0.1:8080/jobs
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json
Content-Length: 187
Date: Sun, 24 Nov 2013 16:17:27 GMT
Connection: keep-alive
 
[{"title":"NodeJS Developer Required","description":"NodeJS Developer Required","location":"Sector 30, Gurgaon, India","postedOn":"2013-11-24T16:16:16.688Z","_id":"52922650aab6107320000001"}]
Complete app.js 
// The app.js shared below is prepared for deploying this same project to OpenShift (Red Hat's platform).
// Details of how to deploy to OpenShift will be shared in my next blog.
#!/bin/env node
//  OpenShift sample Node application
var restify = require('restify');
var mongojs = require("mongojs");

var ip_addr = process.env.OPENSHIFT_NODEJS_IP   || '127.0.0.1';
var port    = process.env.OPENSHIFT_NODEJS_PORT || '8080';

var db_name = process.env.OPENSHIFT_APP_NAME || "localjobs";

var connection_string = '127.0.0.1:27017/' + db_name;
// if OPENSHIFT env variables are present, use the available connection info:
if(process.env.OPENSHIFT_MONGODB_DB_PASSWORD){
  connection_string = process.env.OPENSHIFT_MONGODB_DB_USERNAME + ":" +
  process.env.OPENSHIFT_MONGODB_DB_PASSWORD + "@" +
  process.env.OPENSHIFT_MONGODB_DB_HOST + ':' +
  process.env.OPENSHIFT_MONGODB_DB_PORT + '/' +
  process.env.OPENSHIFT_APP_NAME;
}

var db = mongojs(connection_string, [db_name]);
var jobs = db.collection("jobs");

var items = db.collection("items");

var server = restify.createServer({
    name : "localjobs"
});

server.pre(restify.pre.userAgentConnection());
server.use(restify.acceptParser(server.acceptable));
server.use(restify.queryParser());
server.use(restify.bodyParser());
server.use(restify.CORS());

function findAllJobs(req, res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.find().limit(20).sort({postedOn : -1} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(200 , success);
            return next();
        }else{
            return next(err);
        }
        
    });
    
}

function findJob(req, res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.findOne({_id:mongojs.ObjectId(req.params.jobId)} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(200 , success);
            return next();
        }
        return next(err);
    })
}

function findItem(req, res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    items.findOne({_id:mongojs.ObjectId(req.params.itemId)} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(200 , success);
            return next();
        }
        return next(err);
    })
}

function postNewItem(req , res , next){
    var item = {};
    item.title = req.params.title;
    item.description = req.params.description;
    item.value = req.params.value;
    item.postedOn = new Date();

    res.setHeader('Access-Control-Allow-Origin','*');
    
    items.save(item , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(201 , item);
            return next();
        }else{
            return next(err);
        }
    });
}

function postNewJob(req , res , next){
    var job = {};
    job.title = req.params.title;
    job.description = req.params.description;
    job.location = req.params.location;
    job.postedOn = new Date();

    res.setHeader('Access-Control-Allow-Origin','*');
    
    jobs.save(job , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(201 , job);
            return next();
        }else{
            return next(err);
        }
    });
}

function deleteJob(req , res , next){
    res.setHeader('Access-Control-Allow-Origin','*');
    jobs.remove({_id:mongojs.ObjectId(req.params.jobId)} , function(err , success){
        console.log('Response success '+success);
        console.log('Response error '+err);
        if(success){
            res.send(204);
            return next();      
        } else{
            return next(err);
        }
    })
    
}

var PATH2 = '/items';

server.get({path : PATH2 +'/:itemId' , version : '0.0.1'} , findItem);
server.post({path : PATH2 , version: '0.0.1'} ,postNewItem);

var PATH = '/jobs';

server.get({path : PATH , version : '0.0.1'} , findAllJobs);
server.get({path : PATH +'/:jobId' , version : '0.0.1'} , findJob);
server.post({path : PATH , version: '0.0.1'} ,postNewJob);
server.del({path : PATH +'/:jobId' , version: '0.0.1'} ,deleteJob);


server.listen(port ,ip_addr, function(){
    console.log('%s listening at %s ', server.name , server.url);
});
Test the application with POST and GET requests.
Follow the steps below to start both the database and the node server first.
Start the MongoDB database server first: go to your MongoDB bin directory and execute the mongod binary.
On a Mac, the command looks like this:
ashish-mac:bin ashish.nigam$ sudo ./mongod
// I used sudo to provide admin access; whether you need it depends on the directory in which mongod resides.

Now start the node server we just created: open another terminal window, move to the project directory, and type the following command.

$ node app.js 
Now you are ready to make POST and GET requests. Open another terminal window and use the POST request shown below.
POST Request:
$ curl -i -X POST -H "Content-Type: application/json" -d '{"title":"NodeJS Developer Required" , "description":"NodeJS Developer Required ashish nigam" , "location":"Sector 30, Gurgaon, India"}' http://127.0.0.1:8080/jobs
GET Request: (open your browser and use the URL below to fetch all jobs)
http://127.0.0.1:8080/jobs
My Chrome browser screenshot of the GET request:




Get a specific job post:
http://127.0.0.1:8080/jobs/538c42d9a24b3e1b094e1aa6
The common part is http://127.0.0.1:8080/jobs; the job id is appended after /jobs.
538c42d9a24b3e1b094e1aa6 is the job id in my sample request.
To find the GET and POST request paths, refer to the following code section in app.js:
var PATH2 = '/items';

server.get({path : PATH2 +'/:itemId' , version : '0.0.1'} , findItem);
server.post({path : PATH2 , version: '0.0.1'} ,postNewItem);

var PATH = '/jobs';

server.get({path : PATH , version : '0.0.1'} , findAllJobs);
server.get({path : PATH +'/:jobId' , version : '0.0.1'} , findJob);
server.post({path : PATH , version: '0.0.1'} ,postNewJob);
server.del({path : PATH +'/:jobId' , version: '0.0.1'} ,deleteJob);
Thank you for reading my blog.
Please share your feedback; any queries regarding this blog are most welcome.
