Camera pose estimation giving wrong results


I am trying to estimate relative camera movement based on matching points in two different images, much like what is described here:
Camera pose estimation: How do I interpret rotation and translation matrices?



But the estimated translation and rotation do not make sense.



I use synthetic input to make sure all points are valid and perfectly positioned.



10 x 10 x 10 points evenly spread within a cube.
(Cube plotted with blue front face, red back face, lighter top and darker bottom)



zeroProjection:
The camera is in front of the cube, pointing at the front face.



rotate90projection:
The camera is to the left of the cube, pointing at the left face.



I plot the two projections. You can easily verify visually that the camera has panned 90 degrees and moved diagonally in the x-z plane between the two projections.



In the code, the rotation (in degrees) is given as (0, -90, 0).

The translation is (0.7071, 0, 0.7071); since 0.7071 ≈ 1/√2, the camera moves a distance of exactly 1.



I then run findEssentialMat() and recoverPose() on the two 2D point sets to get rotation and translation estimates.



I expect to see the same translation and rotation I used to generate the images, but the estimates are completely wrong:



rotation estimate: (-74.86565284711004, -48.52201867665918, 121.26023708879158)
translation estimate: [[0.96576997]
[0.17203598]
[0.19414426]]
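
(As a side note, the essential matrix itself can be sanity-checked independently of recoverPose(): for correct matches, the normalized points must satisfy xB^T E xA ≈ 0. The helper below is only an illustrative sketch, not part of the pipeline; the function name is my own.)

import numpy as np

def epipolar_residuals(E, ptsA, ptsB, K):
    # Back-project pixels to normalized camera coordinates with K^-1,
    # then evaluate the epipolar constraint xB^T E xA for every match
    Kinv = np.linalg.inv(K)
    hA = np.hstack([ptsA, np.ones((len(ptsA), 1))])
    hB = np.hstack([ptsB, np.ones((len(ptsB), 1))])
    xA = (Kinv @ hA.T).T
    xB = (Kinv @ hB.T).T
    return np.einsum('ij,jk,ik->i', xB, E, xA)  # near zero if E fits the matches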


How can I recover the actual (0, -90, 0) rotation and (0.7071, 0, 0.7071) translation?



Complete code that displays the two cube images and prints out estimates:



import cv2
import numpy as np
import math


def cameraMatrix(f, w, h):
    # Pinhole intrinsics with the principal point at the image center
    return np.array([
        [f, 0, w/2],
        [0, f, h/2],
        [0, 0, 1]])


n = 10
f = 300
w = 640
h = 480
K = cameraMatrix(f, w, h)


def cube(x=0, y=0, z=0, radius=1):
    # n^3 points evenly spread within an axis-aligned cube centered at (x, y, z)
    c = np.zeros((n * n * n, 3), dtype=np.float32)
    for i in range(0, n):
        for j in range(0, n):
            for k in range(0, n):
                index = i + j * n + k * n * n
                c[index] = [i, j, k]
    c = 2 * c / (n - 1) - 1
    c *= radius
    c += [x, y, z]
    return c


def project3dTo2dArray(points3d, K, rotation, translation):
    imagePoints, _ = cv2.projectPoints(points3d,
                                       rotation,
                                       translation,
                                       K,
                                       np.array([]))
    p2d = imagePoints.reshape((imagePoints.shape[0], 2))
    return p2d


def estimate_pose(projectionA, projectionB):
    E, _ = cv2.findEssentialMat(projectionA, projectionB, focal=f)
    _, r, t, _ = cv2.recoverPose(E, projectionA, projectionB)
    angles, _, _, _, _, _ = cv2.RQDecomp3x3(r)
    print('rotation estimate:', angles)
    print('translation estimate:', t)


def main():
    c = cube(0, 0, math.sqrt(.5), 0.1)
    rotation = np.array([[0], [0], [0]], dtype=np.float32)
    translation = np.array([[0], [0], [0]], dtype=np.float32)
    zeroProjection = project3dTo2dArray(c, K, rotation, translation)
    displayCube(w, h, zeroProjection)

    rotation = np.array([[0], [-90], [0]], dtype=np.float32)
    translation = np.array([[math.sqrt(.5)], [0], [math.sqrt(.5)]], dtype=np.float32)
    print('applying rotation: ', rotation)
    print('applying translation: ', translation)
    rotate90projection = project3dTo2dArray(c, K, rotation * math.pi / 180, translation)
    displayCube(w, h, rotate90projection)

    estimate_pose(zeroProjection, rotate90projection)


def displayCube(w, h, points):
    img = np.zeros((h, w, 3), dtype=np.uint8)

    plotCube(img, points)

    cv2.imshow('img', img)
    k = cv2.waitKey(0) & 0xff
    if k == ord('q'):
        exit(0)


def plotCube(img, points):
    # Red back face
    cv2.line(img, tuple(points[n*n*(n-1)]), tuple(points[n*n*(n-1)+n-1]), (0, 0, 255), 2)
    cv2.line(img, tuple(points[n*n*(n-1)+n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (0, 0, 128), 2)
    cv2.line(img, tuple(points[n*n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)]), (0, 0, 200), 2)
    cv2.line(img, tuple(points[n*n*(n-1)+n-1]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (0, 0, 200), 2)

    # Gray connectors
    cv2.line(img, tuple(points[0]), tuple(points[n*n*(n-1)]), (150, 150, 150), 2)
    cv2.line(img, tuple(points[n-1]), tuple(points[n*n*(n-1)+n-1]), (150, 150, 150), 2)
    cv2.line(img, tuple(points[n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)]), (100, 100, 100), 2)
    cv2.line(img, tuple(points[n*(n-1)+n-1]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (100, 100, 100), 2)

    # Blue front face
    cv2.line(img, tuple(points[0]), tuple(points[n-1]), (255, 0, 0), 2)
    cv2.line(img, tuple(points[n*(n-1)]), tuple(points[n*(n-1)+n-1]), (128, 0, 0), 2)
    cv2.line(img, tuple(points[0]), tuple(points[n*(n-1)]), (200, 0, 0), 2)
    cv2.line(img, tuple(points[n-1]), tuple(points[n*(n-1)+n-1]), (200, 0, 0), 2)


main()









Tags: opencv, 3d-reconstruction, opencv-python






asked Mar 25 at 11:26, edited Mar 27 at 13:53
Ragnvald Tullenavn · 13 bronze badges












  • Have you verified the results are wrong and not your interpretation of the values according to my answer in the linked question?
    – oarfish, Mar 26 at 10:05

  • 8 points may not be enough. Updated code to use 1000 points. Also updated the code to use translation distance = 1, since recoverPose() only provides the translation direction unit vector. (Distance is unknown.) This is the information I want to recover: translation = (0.7071, 0, 0.7071), rotation = (0, -90, 0).
    – Ragnvald Tullenavn, Mar 27 at 14:03
1 Answer
It turned out to be a few minor bugs in my code (such as using the wrong principal point). The working code below shows three images.



The first is the cube displayed in front of the camera.
The second is the same cube under a different projection: the camera has been moved 1 unit and rotated around all 3 axes.
Camera rotation and translation are then estimated from the two projections.
The third shows the cube projected using the estimated rotation and translation.



Since the second and third images come out nearly identical, the estimation works.



import cv2
import numpy as np
import math


def cameraMatrix(f, w, h):
    return np.array([
        [f, 0, w/2],
        [0, f, h/2],
        [0, 0, 1]])


n = 10
f = 300
w = 640
h = 480
K = cameraMatrix(f, w, h)


def cube(x=0, y=0, z=0, radius=1):
    c = np.zeros((n * n * n, 3), dtype=np.float32)
    for i in range(0, n):
        for j in range(0, n):
            for k in range(0, n):
                index = i + j * n + k * n * n
                c[index] = [i, j, k]
    c = 2 * c / (n - 1) - 1
    c *= radius
    c += [x, y, z]
    return c


def project3dTo2dArray(points3d, K, rotation, translation):
    imagePoints, _ = cv2.projectPoints(points3d,
                                       rotation,
                                       translation,
                                       K,
                                       np.array([]))
    p2d = imagePoints.reshape((imagePoints.shape[0], 2))
    return p2d


def estimate_pose(projectionA, projectionB):
    # Pass the same principal point used in K; the earlier (buggy) version
    # left findEssentialMat() at its default pp of (0, 0)
    principal_point = (w/2, h/2)
    E, m = cv2.findEssentialMat(projectionA, projectionB, focal=f, pp=principal_point,
                                method=cv2.RANSAC, threshold=1, prob=0.999)
    _, r, t, _ = cv2.recoverPose(E, projectionA, projectionB, focal=f, pp=principal_point, mask=m)
    angles, _, _, _, _, _ = cv2.RQDecomp3x3(r)
    return angles, t


def main():
    c = cube(0, 0, math.sqrt(.5), 0.1)
    rotation = np.array([[0], [0], [0]], dtype=np.float32)
    translation = np.array([[0], [0], [0]], dtype=np.float32)
    zeroProjection = project3dTo2dArray(c, K, rotation, translation)
    displayCube(w, h, zeroProjection)

    rotation = np.array([[10], [-30], [5]], dtype=np.float32)
    translation = np.array([[math.sqrt(.7)], [0], [math.sqrt(.3)]], dtype=np.float32)

    print('applying rotation: ', rotation)
    print('applying translation: ', translation)
    movedprojection = project3dTo2dArray(c, K, rotation * math.pi / 180, translation)
    displayCube(w, h, movedprojection)

    estRot, estTra = estimate_pose(zeroProjection, movedprojection)
    print('rotation estimate:', estRot)
    print('translation estimate:', estTra)

    # Re-project the cube with the estimated pose; this should match image 2
    rotation = np.array([[estRot[0]], [estRot[1]], [estRot[2]]], dtype=np.float32)
    translation = np.array([[estTra[0]], [estTra[1]], [estTra[2]]], dtype=np.float32)
    estimateProjection = project3dTo2dArray(c, K, rotation * math.pi / 180, translation)
    displayCube(w, h, estimateProjection)


def displayCube(w, h, points):
    img = np.zeros((h, w, 3), dtype=np.uint8)

    plotCube(img, points)

    cv2.imshow('img', img)
    k = cv2.waitKey(0) & 0xff
    if k == ord('q'):
        exit(0)


def plotCube(img, points):
    # Red back face
    cv2.line(img, tuple(points[n*n*(n-1)]), tuple(points[n*n*(n-1)+n-1]), (0, 0, 255), 2)
    cv2.line(img, tuple(points[n*n*(n-1)+n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (0, 0, 128), 2)
    cv2.line(img, tuple(points[n*n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)]), (0, 0, 200), 2)
    cv2.line(img, tuple(points[n*n*(n-1)+n-1]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (0, 0, 200), 2)

    # Gray connectors
    cv2.line(img, tuple(points[0]), tuple(points[n*n*(n-1)]), (150, 150, 150), 2)
    cv2.line(img, tuple(points[n-1]), tuple(points[n*n*(n-1)+n-1]), (150, 150, 150), 2)
    cv2.line(img, tuple(points[n*(n-1)]), tuple(points[n*n*(n-1)+n*(n-1)]), (100, 100, 100), 2)
    cv2.line(img, tuple(points[n*(n-1)+n-1]), tuple(points[n*n*(n-1)+n*(n-1)+n-1]), (100, 100, 100), 2)

    # Blue front face
    cv2.line(img, tuple(points[0]), tuple(points[n-1]), (255, 0, 0), 2)
    cv2.line(img, tuple(points[n*(n-1)]), tuple(points[n*(n-1)+n-1]), (128, 0, 0), 2)
    cv2.line(img, tuple(points[0]), tuple(points[n*(n-1)]), (200, 0, 0), 2)
    cv2.line(img, tuple(points[n-1]), tuple(points[n*(n-1)+n-1]), (200, 0, 0), 2)


main()
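
For reference, a minimal sketch of how the estimate can be checked numerically (assuming these lines are appended at the end of main() above, where estRot and estTra are in scope; since recoverPose() only returns a unit translation vector, only directions are compared):

    # The applied translation (sqrt(.7), 0, sqrt(.3)) already has norm 1,
    # so it is directly comparable to the unit vector from recoverPose()
    applied_t = np.array([math.sqrt(.7), 0, math.sqrt(.3)])
    est_dir = estTra.ravel() / np.linalg.norm(estTra)
    # Cosine similarity near +/-1 means the directions agree
    # (the sign of t recovered from an essential matrix is ambiguous)
    print('translation direction agreement:', float(applied_t @ est_dir))
    # Crude per-axis comparison of the Euler angles (degrees)
    print('rotation difference (deg):', np.array(estRot) - np.array([10, -30, 5]))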





answered Apr 9 at 8:37, edited Apr 9 at 13:33
Ragnvald Tullenavn · 13 bronze badges



