Why isn't my face detection code using CIDetector working properly?


I'm trying to detect faces in my iOS camera app, but detection doesn't work properly, whereas it works fine in Camera.app. Notice that:



  • The first face isn't detected in my app, only in Camera.app.

  • For the third face (the East Asian woman), Camera.app correctly draws a rectangle around her face, while my app draws a rectangle that extends far below her face.

  • Obama's face isn't detected in my app, only in Camera.app.

  • When the camera zooms out from Putin's face, my app draws a rectangle over the right half of his face, cutting it in half, while Camera.app draws a rectangle correctly around his face.

Why is this happening?



My code is as follows. Do you see anything wrong?



First, I create a video output as follows:



let videoOutput = AVCaptureVideoDataOutput()
// Request 32BGRA frames for the sample-buffer callbacks.
videoOutput.videoSettings =
    [kCVPixelBufferPixelFormatTypeKey as AnyHashable: Int(kCMPixelFormat_32BGRA)]

session.addOutput(videoOutput)

videoOutput.setSampleBufferDelegate(faceDetector, queue: faceDetectionQueue)
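
(For completeness: faceDetectionQueue and faceDetector aren't shown above. Per the comments the queue is serial, so the setup is assumed to look roughly like this; the queue label is illustrative.)

// Assumed setup, not shown in the original post: a serial dispatch queue for the
// sample-buffer callbacks (label is illustrative) and the delegate instance.
let faceDetectionQueue = DispatchQueue(label: "face-detection")
let faceDetector = FaceDetector()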


This is the delegate:



class FaceDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let features = FaceDetector.ciDetector.features(
            in: CIImage(cvPixelBuffer: imageBuffer))

        let faces = features.map { $0.bounds }
        let imageSize = CVImageBufferGetDisplaySize(imageBuffer)

        let faceBounds = faces.map { (face: CGRect) -> CGRect in
            var ciBounds = face

            // Normalise to the unit square; the negative y scale flips Core Image's
            // bottom-left origin towards UIKit's top-left origin.
            ciBounds = ciBounds.applying(
                CGAffineTransform(scaleX: 1 / imageSize.width, y: -1 / imageSize.height))
            CGRect(x: 0, y: 0, width: 1, height: -1).verifyContains(ciBounds)

            // Shift back into the 0...1 range after the flip.
            let bounds = ciBounds.applying(CGAffineTransform(translationX: 0, y: 1.0))
            CGRect(x: 0, y: 0, width: 1, height: 1).verifyContains(bounds)
            return bounds
        }

        DispatchQueue.main.sync {
            facesUpdated(faceBounds, imageSize)
        }
    }

    private static let ciDetector = CIDetector(ofType: CIDetectorTypeFace,
                                               context: nil,
                                               options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!
}



The facesUpdated() callback is as follows:



class PreviewView: UIView {
    private var faceRects = [UIView]()

    private func makeFaceRect() -> UIView {
        let r = UIView()
        r.layer.borderWidth = FocusRect.borderWidth
        r.layer.borderColor = FocusRect.color.cgColor
        faceRects.append(r)
        addSubview(r)
        return r
    }

    private func removeAllFaceRects() {
        for faceRect in faceRects {
            verify(faceRect.superview == self)
            faceRect.removeFromSuperview()
        }
        faceRects.removeAll()
    }

    private func facesUpdated(_ faces: [CGRect], _ imageSize: CGSize) {
        removeAllFaceRects()

        // Scale the normalised (0...1) rects up to this view's coordinate space.
        let faceFrames = faces.map { (original: CGRect) -> CGRect in
            let face = original.applying(CGAffineTransform(scaleX: bounds.width, y: bounds.height))
            verify(self.bounds.contains(face))
            return face
        }

        for faceFrame in faceFrames {
            let faceRect = makeFaceRect()
            faceRect.frame = faceFrame
        }
    }
}


I also tried the following, but they didn't help:



  1. Setting the AVCaptureVideoDataOutput's videoSettings to nil.

  2. Explicitly setting the CIDetector's orientation to portrait (see the sketch after this list for how the option is passed). The phone is in portrait for this test, so it shouldn't matter.

  3. Setting and removing CIDetectorTracking: true

  4. Setting and removing CIDetectorAccuracy: CIDetectorAccuracyHigh

  5. Trying to track only one face, by looking only at the first feature detected.

  6. Replacing CVImageBufferGetDisplaySize() with CVImageBufferGetEncodedSize(); they're the same anyway, at 1440 x 1080.
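
(To make items 2 and 3 concrete, this is roughly how those options were passed to the detector; the portrait orientation value 6 is an assumption, shown for illustration only.)

// Illustrative sketch of items 2 and 3: passing orientation and tracking options
// to the features call. EXIF orientation 6 (portrait, back camera) is assumed.
let detectorOptions: [String: Any] = [
    CIDetectorImageOrientation: 6,
    CIDetectorTracking: true
]
let features = FaceDetector.ciDetector.features(
    in: CIImage(cvPixelBuffer: imageBuffer),
    options: detectorOptions)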









Tags: ios, camera, avfoundation, avcapture, cidetector






asked Apr 7 '17 at 5:06 by Vaddadi Kartick, edited Mar 28 at 8:59















  • Just as a side note: generally you want a serial queue for processing the output frames. There's a good example of this in RosyWriter (Swift); it may lead to something: github.com/ooper-shlab/RosyWriter2.1-Swift

    – Sean Lintern
    Apr 13 '17 at 9:13











  • @SeanLintern88 Good point, but I'm already using a serial queue. I didn't include the threading code above since it's long enough already.

    – Vaddadi Kartick
    Apr 13 '17 at 11:38












  • Any chance you could include a sample project? I'd be interested in trying a few things.

    – Sean Lintern
    Apr 13 '17 at 12:00











  • I tried to, but removing other stuff turned out to be too hard, so sorry.

    – Vaddadi Kartick
    Apr 13 '17 at 12:14












  • There must be something wrong with the code where you set up the AVCaptureVideoDataOutput.

    – Tom Testicool
    Apr 16 '17 at 22:23
















