How do I send a command when x and y are center during face tracking?
Please forgive my awful code; this is just a hobby for me. I am trying to have Python play a sound when both the x and y axes of face tracking are at a predetermined center point. Right now I'm testing with a sound; once that works, I can swap playing the sound for sending data to the Arduino to spool up a brushless motor for the Orbeez minigun. The rest of the code works, but I cannot get the sound to play here. The sound WILL play if I call it a different way.
Here is a snippet of the code. I have also tried many different variations of 'if (xcenter + ycenter) == 2:'; this is just the last one I tried.
# This will send data to the arduino according to the x coordinate
def angle_servox(angle):
    if angle > 320:
        prov = 1
        ser.write(b'2')
        print("Right")
        xcenter = 0
    elif angle < 250:
        prov = 2
        ser.write(b'1')
        print("Left")
        xcenter = 0
    elif angle > 250 & angle < 320:
        ser.write(b'0')
        print("Stop")
        xcenter = 1

# This will send data to the arduino according to the y coordinate
def angle_servoy(angle):
    if angle > 250:
        prov = 3
        ser.write(b'4')
        print("Down")
        ycenter = 0
    elif angle < 75:
        prov = 4
        ser.write(b'3')
        print("Up")
        ycenter = 0
    elif angle > 80 & angle < 240:
        ser.write(b'5')
        print("Stop")
        ycenter = 1

# import the haarcascade file
face_casc = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# train the face for recognition
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizers/face-trainer.yml.txt")
labels = {"person_name": 1}
with open("pickles/face-labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}
# for the default camera put value 0, or else 1
videoWeb = cv2.VideoCapture(1)
n = 0
while videoWeb.isOpened():
    print(ser.read().decode().strip('\r\n'))
    ret, imag = videoWeb.read()
    gray = cv2.cvtColor(imag, cv2.COLOR_BGR2GRAY)
    # cv2.imshow('xyz', imag)
    faces = face_casc.detectMultiScale(
        gray,
        scaleFactor=1.4,
        minNeighbors=5,
        minSize=(30, 30)
    )
    if (xcenter + ycenter) == 2:
        voice.play(active2)
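One likely reason 'if (xcenter + ycenter) == 2:' never fires is that xcenter and ycenter are assigned inside angle_servox/angle_servoy, so Python treats them as function-local variables that vanish when each call returns; the module-level names the final check reads are never updated. A minimal sketch of one fix, returning the flags instead of assigning locals (thresholds copied from the snippet above; the ser.write calls are stubbed out as comments since they need the rest of the script):

```python
def angle_servox(angle):
    """Return 1 when the face's x position is inside the center band."""
    if angle > 320:
        # ser.write(b'2')  # Right
        return 0
    elif angle < 250:
        # ser.write(b'1')  # Left
        return 0
    else:
        # ser.write(b'0')  # Stop: 250 <= angle <= 320
        return 1

def angle_servoy(angle):
    """Return 1 when the face's y position is inside the center band."""
    if angle > 250:
        # ser.write(b'4')  # Down
        return 0
    elif angle < 75:
        # ser.write(b'3')  # Up
        return 0
    else:
        # ser.write(b'5')  # Stop: 75 <= angle <= 250
        return 1

# Capture the returned flags at module level, then test them:
xcenter = angle_servox(280)
ycenter = angle_servoy(150)
if xcenter + ycenter == 2:
    print("both axes centered")  # this branch can now actually run
```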
Thanks in advance
python
asked Mar 22 at 7:47
thomas c
1 Answer
As far as I can tell you are not correctly interrogating the information you get back from the cv2 face detection (also, you could probably delete the two angle_servo() functions for this question).
At a high level, you want your script to:
- Set up your camera stream
- Continuously get images from the camera and find faces
- If there is a face in the "middle", play your sound
I've never used OpenCV, but based on some example code I found here, a revision of your script might be:
import cv2

# Set up our face detection
face_casc = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizers/face-trainer.yml.txt")

# Set up our video input
videoWeb = cv2.VideoCapture(1)

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def is_close_to(self, other_pt, error=5):
        x_is_close = abs(self.x - other_pt.x) <= error
        y_is_close = abs(self.y - other_pt.y) <= error
        return x_is_close and y_is_close

# If an image from the video camera looks like this:
#
#     x-->
#     +-------------+
#   y | A           |   A is at (10, 20)
#   | |             |
#   v |             |
#     | B           |   B is at (300, 500)
#     +-------------+
#
# Our predetermined reference points
A = Point(10, 20)
B = Point(300, 500)

# Continuously get images from the camera
while videoWeb.isOpened():
    # and try to find faces in them
    ret, imag = videoWeb.read()
    gray = cv2.cvtColor(imag, cv2.COLOR_BGR2GRAY)
    faces = face_casc.detectMultiScale(
        gray,
        scaleFactor=1.4,
        minNeighbors=5,
        minSize=(30, 30)
    )

    # the frame is a NumPy array; shape is (height, width, channels)
    middle_of_imag = Point(imag.shape[1] / 2, imag.shape[0] / 2)

    # check the location of every face we found
    for (x, y, w, h) in faces:
        face_pt = Point(x, y)
        # NB: it might be better to use the "middle" of the face:
        # face_pt = Point(x + w/2, y + h/2)
        if face_pt.is_close_to(A):
            print("face at point A")
        if face_pt.is_close_to(B):
            print("face at point B")
        if face_pt.is_close_to(middle_of_imag):
            print("face somewhat in the middle")
        # if x is between 250 & 320 AND y is between 80 & 240: send 'fire'
        if (250 <= x <= 320) and (80 <= y <= 240):
            print("send fire")
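A frame returned by cv2.VideoCapture.read() is a plain NumPy array whose shape attribute is (height, width, channels), which gives a reliable way to compute the frame center used in the loop. A small self-contained sketch (the Point class is repeated here so it runs on its own, and the blank 640x480 frame is a stand-in for a real capture):

```python
import numpy as np

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def is_close_to(self, other_pt, error=5):
        # within `error` pixels on BOTH axes
        return (abs(self.x - other_pt.x) <= error
                and abs(self.y - other_pt.y) <= error)

# Stand-in for a frame from videoWeb.read(): 480 rows, 640 columns, 3 channels
frame = np.zeros((480, 640, 3), dtype=np.uint8)
h, w = frame.shape[:2]
middle = Point(w / 2, h / 2)              # Point(320.0, 240.0)

face = Point(318, 243)                    # a detection near the center
print(face.is_close_to(middle))           # both axes within 5 px
print(Point(10, 20).is_close_to(middle))  # far away on both axes
```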
While this does work, it needs to work in a range. Instead of servo_x being set at 123, it would need to be all numbers <123 and <250, as an example.
– thomas c
Mar 22 at 9:54
Unfortunately I don't think there is space for solving your whole problem here, and besides, that would deprive you of the great feeling you'll get once you solve it ;-)
– Doddie
Mar 22 at 10:14
Yeah I've spent I don't even know how many hours searching and trying different things to solve this one. I'm just going to scrap this python project and go a different direction.
– thomas c
Mar 22 at 11:07
Hmm, have you had any luck without worrying about the servo position? E.g. have 2 different sounds that play if there is a face in either of two locations? I'll make an edit to show what I mean
– Doddie
Mar 22 at 11:11
Servo position really doesn't matter. The 'def angle_servo(angle)' functions just read the position of the face on the screen and send a command (e.g. b'5') to the Arduino; the 'Left/Right/Up/Down' text is only printed for calibration purposes. The parts of the code that matter in this instance are "elif angle>250 & angle<320: ser.write(b'0'); print("Stop")" and "elif angle>80 & angle<240: ser.write(b'5'); print("Stop")".
– thomas c
Mar 22 at 11:22
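A side note on the 'elif angle>250 & angle<320' pattern quoted here: in Python, & is bitwise AND and binds more tightly than the comparison operators, so the expression parses as angle > (250 & angle) < 320 rather than a range test. A small demonstration, using angle = 100, which is outside the intended band yet still satisfies the buggy form:

```python
angle = 100  # clearly outside the intended 250..320 band

# & binds tighter than > and <, so this is angle > (250 & angle) < 320
buggy = angle > 250 & angle < 320
# a chained comparison is the range test that was probably intended
fixed = 250 < angle < 320

print(buggy)  # True  (250 & 100 == 96, and 100 > 96 < 320 holds)
print(fixed)  # False
```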
edited Mar 22 at 11:48
answered Mar 22 at 8:48
Doddie