MATH1002: Linear Algebra
Assignment 2
Semester 1, 2022
1. In this question, you will be using the following trigonometric identities:
cos²α + sin²α = 1   (1)
cos(α + β) = cos α cos β − sin α sin β   (2)
sin(α + β) = sin α cos β + cos α sin β   (3)
where α, β ∈ ℝ. You do not need to prove these identities. You may also use without proof the fact that the set
{ (cos α, sin α)ᵀ : α ∈ ℝ }
is exactly the set of unit vectors in ℝ².
Now for any real number α, define
R_α = [cos α  −sin α; sin α  cos α]
(a) Prove that for all α, β ∈ ℝ,
R_α R_β = R_{α+β}
(b) Using part (a), or otherwise, prove that R_α is invertible and that R_α⁻¹ = R_{−α}, for all α ∈ ℝ.
(c) Prove that for all α ∈ ℝ and all x, y ∈ ℝ²,
(R_α x) · (R_α y) = x · y
(d) Suppose A is a 2 × 2 matrix such that (Ax) · (Ay) = x · y for all x, y ∈ ℝ². Must it be true that A = R_α for some α ∈ ℝ? Either prove this, or give a counterexample (including justification).
(e) Let B = [a  b; c  d] be any 2 × 2 matrix.
(i) Show that there are real numbers u₁₁ and α such that (a, c)ᵀ = u₁₁ (cos α, sin α)ᵀ.
(ii) Let α ∈ ℝ. Use the invertibility of R_α to prove that there are unique u₁₂, u₂₂ ∈ ℝ such that
(b, d)ᵀ = u₁₂ (cos α, sin α)ᵀ + u₂₂ (−sin α, cos α)ᵀ
(iii) Use parts (i) and (ii) to show that B can be expressed in the form
B = R_α U
for some α ∈ ℝ and some upper-triangular matrix U.
(iv) Suppose that B = R_α U = R_β V, where α, β ∈ ℝ and U and V are upper-triangular. Prove that if B is invertible, then U = ±V.
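The identities in parts (a)–(c) and the factorisation in part (e) can be sanity-checked numerically. The sketch below assumes NumPy; it is not a substitute for the algebraic proofs the assignment asks for. Part (e) is exactly the 2 × 2 QR factorisation with the orthogonal factor forced to be a rotation.

```python
import numpy as np

def R(alpha):
    # The rotation matrix R_alpha from the assignment
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

# Part (a): R_alpha R_beta = R_{alpha+beta}
a, b = 0.7, -1.3
assert np.allclose(R(a) @ R(b), R(a + b))

# Part (b): R_alpha^{-1} = R_{-alpha}
assert np.allclose(np.linalg.inv(R(a)), R(-a))

# Part (c): R_alpha preserves dot products
x, y = np.array([2.0, 1.0]), np.array([-1.0, 3.0])
assert np.isclose((R(a) @ x) @ (R(a) @ y), x @ y)

# Part (e): B = R_alpha U via QR; numpy gives B = QU with Q orthogonal.
B = np.array([[1.0, 2.0], [3.0, 4.0]])
Q, U = np.linalg.qr(B)
if np.linalg.det(Q) < 0:
    Q[:, 1] *= -1   # flip a column of Q and the matching row of U:
    U[1, :] *= -1   # the product QU is unchanged, but now det(Q) = +1
alpha = np.arctan2(Q[1, 0], Q[0, 0])
assert np.allclose(Q, R(alpha)) and np.allclose(R(alpha) @ U, B)
```

The column/row sign flip is needed because negating all of a 2 × 2 matrix does not change its determinant.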
2. Some parts of this question refer to the attached research paper:
Mili Shah (2013), Solving the Robot-World/Hand-Eye Calibration Problem Using the Kronecker Product, Journal of Mechanisms and Robotics, Volume 5, Issue 3.
(a) A rotation matrix is a 3 × 3 matrix R such that det(R) = 1 and R⁻¹ = Rᵀ. Let R and S be rotation matrices, and let t, u ∈ ℝ³.
(i) Prove that RS is a rotation matrix.
(ii) Given R = [r₁₁ r₁₂ r₁₃; r₂₁ r₂₂ r₂₃; r₃₁ r₃₂ r₃₃] and t = (x, y, z)ᵀ ∈ ℝ³, we write [R t; 0 1] for the 4 × 4 matrix
r₁₁ r₁₂ r₁₃ x
r₂₁ r₂₂ r₂₃ y
r₃₁ r₃₂ r₃₃ z
0   0   0   1
From page 1 of the attached paper, the product
[R t; 0 1] [S u; 0 1]
is a matrix of the form [R′ v; 0 1], where R′ is a rotation matrix and v ∈ ℝ³. Write down expressions for R′ and v in terms of R, S, t and u. For this question only, you do not need to give any reasons for your answer.
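The 4 × 4 block structure can be checked numerically. A small NumPy sketch (the two rotations below are arbitrary examples about the z- and x-axes; the closed-form expressions for R′ and v are left for the question):

```python
import numpy as np

def homogeneous(R, t):
    # Build the 4x4 matrix [[R, t], [0, 1]]
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

c, s = np.cos(0.5), np.sin(0.5)
R_ = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
S_ = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation about x
t_ = np.array([1.0, 2.0, 3.0])
u_ = np.array([-1.0, 0.5, 0.0])

P = homogeneous(R_, t_) @ homogeneous(S_, u_)
Rp, v = P[:3, :3], P[:3, 3]              # the blocks R' and v
assert np.allclose(P[3], [0, 0, 0, 1])   # bottom row is preserved
assert np.allclose(Rp @ Rp.T, np.eye(3)) # R' is orthogonal ...
assert np.isclose(np.linalg.det(Rp), 1.0)  # ... with det = 1, so a rotation
```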
(b) Suppose a robot is posed n times, leading to the equations
R_{Aj} R_X = R_Y R_{Bj}
where R_{Aj}, R_X, R_Y and R_{Bj} are rotation matrices, for j = 1, 2, …, n. Write a few sentences to summarise the results of the attached paper by Shah on:
(i) the number of poses needed to obtain unique matrices R_X and R_Y which satisfy these equations; and
(ii) how the position errors for the method presented in the attached paper compare with those of earlier methods.
…simultaneous closed-form solutions, which solve the orientational component and positional component at the same time. Many of these solutions are based on the closed-form solutions of the related hand-eye calibration problem A_j X = X B_j, which are formulated using either angle-axis formulations for rotations [11–13], Lie group theory [14], quaternions [15–21], screw theory [22], or Kronecker products [23].
Historically, the separable solutions to the robot-world/hand-eye calibration problem are formulated using quaternions [4,8]. These quaternion solutions are clean and efficient but can have stability issues, as will be discussed in Sec. 5. In addition, errors computed from first solving the orientational component can get passed to the positional component. Simultaneous solutions do not have this problem; however, the optimal orientations (RX and RY) that are calculated may be negatively influenced by noise from the positional component [21]. Furthermore, simultaneous solutions may not live in the actual search space of possible applicable solutions [23]. For example, Li et al. [9] formulate a simultaneous solution to the robot-world/hand-eye calibration problem using the Kronecker product, which follows the methodology of solving the hand-eye calibration problem of Andreff et al. [23]. The resulting solution for the optimal orientations (RX and RY) may not necessarily consist of rotation matrices. Li et al. suggest calculating the best orthogonal approximation to guarantee a rotation. However, they do not update the positional approximation (tX and tY) afterwards. This can lead to errors in the optimal positional approximation, as will be shown in Sec. 5. In this paper, we create a stable separable closed-form solution that combines the quaternion work of [4,8] with the Kronecker work of [9,23]. Though this method is an example of a separable closed-form solution, and thus suffers from the problem that errors obtained from first calculating the orientational component get passed to the positional component, the resulting positional errors may be less than those of the simultaneous methods. Examples of this phenomenon will be shown in Sec. 5. For completeness, a full mathematical analysis of the problem, including minimal requirements to find a closed-form solution using the Kronecker product that were not discussed in Ref. [9], will also be presented.
It should be noted that this analysis was inspired by the proofs shown in Refs. [4,8], which solved A_j X = Y B_j using quaternions, and in Ref. [23], which solved A_j X = X B_j using the Kronecker product.
This paper is organized as follows: Sec. 2 will give qualifications and methodology for calculating the optimal rotations RY and RX, Sec. 3 will give qualifications and methodology for calculating the optimal translations tY and tX, Sec. 4 will describe error metrics, and Sec. 5 will describe experiments illustrating the effectiveness of the Kronecker product for solving the robot-world/hand-eye calibration problem. Here, ‖·‖ denotes the Frobenius norm, so
‖A‖ = √(tr(A Aᵀ)) = √(tr(Aᵀ A))
where ᵀ denotes the transpose operator and tr(·) denotes the matrix trace operation. The determinant of a matrix A is represented as det(A), vectorizing a matrix A column-wise is represented as vec(A), and the symbol ⊗ denotes the Kronecker product. Here, the Kronecker product of an m × n matrix A with a p × q matrix B is defined as the mp × nq matrix
a₁₁B … a₁ₙB
 ⋮   ⋱   ⋮
aₘ₁B … aₘₙB
Some properties of the Kronecker product that will be useful in the proofs of this paper are
(1) (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹
(2) (A ⊗ B)ᵀ = Aᵀ ⊗ Bᵀ
(3) (A ⊗ B)(C ⊗ D) = AC ⊗ BD
(4) If A is orthogonal and B is orthogonal, then their Kronecker product A ⊗ B is also orthogonal.
for matrices A, B, C, and D of appropriate dimensions [24].
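Properties (1)–(4) are easy to confirm numerically; a sketch assuming NumPy, whose `np.kron` implements exactly the Kronecker product defined above:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))

# (1) (A (x) B)^{-1} = A^{-1} (x) B^{-1}  (A, B invertible)
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
# (2) (A (x) B)^T = A^T (x) B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# (3) (A (x) B)(C (x) D) = AC (x) BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# (4) orthogonal (x) orthogonal is orthogonal
Q1, _ = np.linalg.qr(A)
Q2, _ = np.linalg.qr(B)
K = np.kron(Q1, Q2)
assert np.allclose(K @ K.T, np.eye(K.shape[0]))
```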
2 Finding RY and RX
This section presents the methodology and qualifications for obtaining a unique solution for RY and RX. To begin, notice that the orientational component
R_{Aj} R_X = R_Y R_{Bj}   (1)
is equivalent to
R_{Aj} R_X R_{Bj}ᵀ = R_Y
since R_{Bj} is an orthogonal matrix. Therefore, the orientational component (1) can be represented as either
(R_{Bj} ⊗ R_{Aj}) vec(R_X) − vec(R_Y) = 0
or
[−I   R_{Bj} ⊗ R_{Aj}] [vec(R_Y); vec(R_X)] = 0   (3)
Here, we use the fact that if AXB = C for an unknown matrix X, then the problem can be rewritten as the linear system
(Bᵀ ⊗ A) vec(X) = vec(AXB) = vec(C)
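This vec identity is the workhorse of the whole derivation, so it is worth seeing once in the concrete: a NumPy sketch (note that vec is column-wise, i.e. Fortran-order flattening):

```python
import numpy as np

# vec() is column-wise vectorization: flatten in Fortran (column-major) order
vec = lambda M: M.flatten(order="F")

rng = np.random.default_rng(2)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))

# The identity vec(AXB) = (B^T (x) A) vec(X)
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(A @ X @ B))
```

Using row-major (C-order) flattening instead would silently transpose the roles of A and B, a common bug when translating such formulas into code.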
Note that once the rotation matrix R_Y is known, the positional component (2), R_{Aj} t_X + t_{Aj} = R_Y t_{Bj} + t_Y, can be represented as the linear system
[I   −R_{Aj}] [t_Y; t_X] = t_{Aj} − R_Y t_{Bj}   (4)
The following lemma will be useful in characterizing a unique solution for R_Y and R_X.
LEMMA 2.1. The matrices R_{Bj}ᵀ R_{Bk} and R_{Aj}ᵀ R_{Ak} have the same eigenvalues for j, k = 1, 2, …, n. Furthermore, these eigenvalues can be represented as {1, e^{iθ}, e^{−iθ}}.
Proof. We assume that
R_{Aj} R_X = R_Y R_{Bj} ⇔ R_{Aj} R_X R_{Bj}ᵀ = R_Y
for j = 1, 2, …, n. But then
R_{Aj} R_X R_{Bj}ᵀ = R_Y = R_{Ak} R_X R_{Bk}ᵀ ⇒ R_{Ak}ᵀ R_{Aj} = R_X R_{Bk}ᵀ R_{Bj} R_Xᵀ
Therefore, R_{Bk}ᵀ R_{Bj} and R_{Ak}ᵀ R_{Aj} are similar matrices and thus have the same eigenvalues. Furthermore, since both of these matrices are rotations, their eigenvalues can be represented as {1, e^{iθ}, e^{−iθ}}.
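The eigenvalue claim in the lemma can be observed directly: a NumPy sketch using an example rotation about the z-axis by θ = 0.8 (any rotation works, since every rotation is conjugate to one of this form):

```python
import numpy as np

theta = 0.8
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# A rotation matrix has eigenvalues {1, e^{i theta}, e^{-i theta}}
eigs = np.sort_complex(np.linalg.eigvals(Rz))
expected = np.sort_complex(np.array([1, np.exp(1j * theta), np.exp(-1j * theta)]))
assert np.allclose(eigs, expected)
```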
Using the above lemma, the minimum number of pose measurements can now be given. We should note that these qualifications are similar to the qualifications of uniqueness shown for the quaternion method of [8]. However, the proofs here are derived using the Kronecker product instead of quaternions.
THEOREM 2.2. The minimum number n of pose measurements necessary to obtain a unique solution to linear system (3) is n = 3.
Proof. Consider the case where n = 2. Then the linear system (3) becomes
[−I   R_{B1} ⊗ R_{A1}; −I   R_{B2} ⊗ R_{A2}] [vec(R_Y); vec(R_X)] = 0
which clearly is a square system. However, the dimension of the nullspace is at least three. This can be seen by first noticing that
rank [−I   R_{B1} ⊗ R_{A1}; −I   R_{B2} ⊗ R_{A2}] = rank [−I   R_{B1} ⊗ R_{A1}; 0   R_{B1} ⊗ R_{A1} − R_{B2} ⊗ R_{A2}]
by elementary matrix row operations. Hence, the resulting matrix is block-triangular and
rank(R_{B1} ⊗ R_{A1} − R_{B2} ⊗ R_{A2})
= rank((R_{B1} ⊗ R_{A1})(I − R_{B1}ᵀ R_{B2} ⊗ R_{A1}ᵀ R_{A2}))
= rank(I − R_{B1}ᵀ R_{B2} ⊗ R_{A1}ᵀ R_{A2}) ≤ 6
This is a consequence of the previous lemma: since R_{B1}ᵀ R_{B2} and R_{A1}ᵀ R_{A2} have the same eigenvalues {1, e^{iθ}, e^{−iθ}}, at least three of the eigenvalues of R_{B1}ᵀ R_{B2} ⊗ R_{A1}ᵀ R_{A2} are 1. Therefore,
rank(R_{B1} ⊗ R_{A1} − R_{B2} ⊗ R_{A2}) ≤ 6.
Consequently, the dimension of the nullspace is greater than one, so the solution cannot be unique, and thus the number of pose measurements must satisfy n ≥ 3.
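The rank deficiency in the n = 2 case can be observed numerically. A NumPy sketch, assuming noiseless consistent poses (`random_rotation` is our helper, not from the paper):

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix, sign-fixed so that det = +1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

vec = lambda M: M.flatten(order="F")   # column-wise vec

rng = np.random.default_rng(3)
RX, RY = random_rotation(rng), random_rotation(rng)
RA = [random_rotation(rng) for _ in range(2)]
RB = [RY.T @ A @ RX for A in RA]       # consistent poses: RAj RX = RY RBj

# Stack the two block rows [-I  RBj (x) RAj] of linear system (3)
I9 = np.eye(9)
M = np.vstack([np.hstack([-I9, np.kron(RB[j], RA[j])]) for j in range(2)])

# The true solution lies in the nullspace ...
assert np.allclose(M @ np.concatenate([vec(RY), vec(RX)]), 0)
# ... but the 18x18 system has rank at most 15: nullity >= 3, not unique
assert np.linalg.matrix_rank(M) <= 15
```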
Even when n = 3, there are situations when the linear system does not have a unique solution, as illustrated in the following theorem.
THEOREM 2.3. Assume n = 3. If the principal axes (up to sign) of R_{A2}ᵀ R_{A1} and R_{A3}ᵀ R_{A1} are not equal, then the linear system (3) has a unique solution.
Proof. For n = 3, the linear system (3) becomes
(R_{B1} ⊗ R_{A1}) vec(R_X) − vec(R_Y) = 0
(R_{B2} ⊗ R_{A2}) vec(R_X) − vec(R_Y) = 0
(R_{B3} ⊗ R_{A3}) vec(R_X) − vec(R_Y) = 0
which implies
(R_{B1} ⊗ R_{A1}) vec(R_X) = (R_{B2} ⊗ R_{A2}) vec(R_X) = (R_{B3} ⊗ R_{A3}) vec(R_X)
But then
(R_{B1}ᵀ R_{B2} ⊗ R_{A1}ᵀ R_{A2}) vec(R_X) = vec(R_X)
(R_{B1}ᵀ R_{B3} ⊗ R_{A1}ᵀ R_{A3}) vec(R_X) = vec(R_X)
which is equivalent to finding the nullspace of
[I − R_{B1}ᵀ R_{B2} ⊗ R_{A1}ᵀ R_{A2}; I − R_{B1}ᵀ R_{B3} ⊗ R_{A1}ᵀ R_{A3}] vec(R_X) = 0   (5)
This problem appears in the work of Ref. [23], which searches for the solution of the similar problem R_A R_X = R_X R_B. In that work, the problem is reformulated as
[I − R_{B1} ⊗ R_{A1}; I − R_{B2} ⊗ R_{A2}] vec(R_X) = 0
and a unique solution is shown to exist only if the principal axes of R_{A1} and R_{A2} are nonparallel. For problem (5), this is equivalent to stating that the principal axes of R_{A1}ᵀ R_{A2} and R_{A1}ᵀ R_{A3} are not equal (up to sign).
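For generic poses (rotation axes in general position), the nullspace of system (5) is one-dimensional, spanned by vec(R_X). A NumPy sketch with noiseless consistent data (`random_rotation` is our helper, not from the paper):

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix, sign-fixed so that det = +1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(4)
RX, RY = random_rotation(rng), random_rotation(rng)
RA = [random_rotation(rng) for _ in range(3)]   # generic, non-aligned axes
RB = [RY.T @ A @ RX for A in RA]                # consistent: RAj RX = RY RBj

# Linear system (5): two stacked 9x9 blocks
I9 = np.eye(9)
M = np.vstack([I9 - np.kron(RB[0].T @ RB[j], RA[0].T @ RA[j])
               for j in (1, 2)])

# vec(RX) is in the nullspace, and generically spans it (rank 9 - 1 = 8)
assert np.allclose(M @ RX.flatten(order="F"), 0)
assert np.linalg.matrix_rank(M) == 8
```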
We now concentrate on finding an efficient unique solution for linear system (3). For n ≥ 3, linear system (3) becomes rectangular, and therefore the nullspace of the corresponding normal equation
[nI   −Σ_{j=1}^n R_{Bj} ⊗ R_{Aj}; −Σ_{j=1}^n R_{Bj}ᵀ ⊗ R_{Aj}ᵀ   nI] [vec(R_Y); vec(R_X)] = 0   (6)
has to be considered. Note that the normal equation for a linear system Ax = b is defined as AᵀAx = Aᵀb.
THEOREM 2.4. The solutions vec(R_Y) and vec(R_X) of the linear system (6) are proportional to the left singular vector u_n and right singular vector v_n corresponding to the singular value n of
K = Σ_{j=1}^n R_{Bj} ⊗ R_{Aj}
respectively. The resulting R_X and R_Y can be calculated as
R_X = a V_X
R_Y = b V_Y
where V_X = vec⁻¹(v_n), V_Y = vec⁻¹(u_n), and
a = sign(det(V_X)) |det(V_X)|^{−1/3}
b = sign(det(V_Y)) |det(V_Y)|^{−1/3}
Proof. Breaking up the linear system (6) leads to two equations:
n vec(R_Y) − K vec(R_X) = 0
−Kᵀ vec(R_Y) + n vec(R_X) = 0
Solving the first equation yields
vec(R_Y) = (1/n) K vec(R_X)
and substituting this expression into the second equation yields
n² vec(R_X) = Kᵀ K vec(R_X)
Similarly, we can show that
n² vec(R_Y) = K Kᵀ vec(R_Y)
Therefore, vec(R_X) is proportional to the eigenvector corresponding to the eigenvalue n² of KᵀK, and vec(R_Y) is proportional to the eigenvector corresponding to the eigenvalue n² of KKᵀ. These vectors can efficiently be computed by taking the singular value decomposition of K. Specifically, vec(R_Y) is proportional to the left singular vector u_n and vec(R_X) is proportional to the right singular vector v_n corresponding to the singular value n. Let V_Y = vec⁻¹(u_n) and V_X = vec⁻¹(v_n). The proportionality constants can be determined by noting that det(R_X) = 1 = det(R_Y), since R_X and R_Y are rotation matrices. Therefore, since
a V_X = R_X
b V_Y = R_Y
the proportionality constants are
a = sign(det(V_X)) |det(V_X)|^{−1/3}
b = sign(det(V_Y)) |det(V_Y)|^{−1/3}
Here, we use the property that det(aX) = a³ det(X) for a given scalar a and 3 × 3 matrix X.
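The recipe of Theorem 2.4 can be sketched end-to-end in NumPy. This is a sketch under noiseless consistent poses, not the paper's reference implementation; `random_rotation` and `solve_rotations` are our names:

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix, sign-fixed so that det = +1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def solve_rotations(RA, RB):
    """Recover RX, RY from poses RAj RX = RY RBj via Theorem 2.4."""
    K = sum(np.kron(B, A) for A, B in zip(RA, RB))
    U, s, Vt = np.linalg.svd(K)        # singular values sorted descending
    # Theorem 2.5: take the singular vectors of the LARGEST singular value.
    # Column-wise vec means vec^{-1} is a column-major (Fortran) reshape.
    VY = U[:, 0].reshape((3, 3), order="F")
    VX = Vt[0, :].reshape((3, 3), order="F")
    a = np.sign(np.linalg.det(VX)) / abs(np.linalg.det(VX)) ** (1 / 3)
    b = np.sign(np.linalg.det(VY)) / abs(np.linalg.det(VY)) ** (1 / 3)
    return a * VX, b * VY

rng = np.random.default_rng(5)
RX, RY = random_rotation(rng), random_rotation(rng)
RA = [random_rotation(rng) for _ in range(3)]
RB = [RY.T @ A @ RX for A in RA]       # noiseless poses: RAj RX = RY RBj
RX_hat, RY_hat = solve_rotations(RA, RB)
assert np.allclose(RX_hat, RX) and np.allclose(RY_hat, RY)
```

With noiseless data the determinant normalization recovers R_X and R_Y exactly, including the overall sign ambiguity of the singular vectors.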
In practice, the method above may not give accurate solutions due to noise. The method guarantees that the computed R_X and R_Y have determinant 1. However, the orthogonality of the matrices R_X and R_Y computed from the method may be lost due to noise. Therefore, it may be beneficial to re-orthogonalize the computed matrices to guarantee that they are indeed rotations. In addition, noise in the data may make it difficult to find the eigenvector corresponding to a specific value. As a result, the next theorem proves that n² is the largest possible eigenvalue of KᵀK, and hence n is the largest singular value of K. Therefore, in practice, instead of searching for the singular vectors corresponding to the singular value n, one should search for the singular vectors corresponding to the largest singular value of K.
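One standard way to re-orthogonalize (the paper does not commit to a specific one here) is the SVD-based orthogonal Procrustes projection, which gives the nearest rotation in the Frobenius norm; a NumPy sketch:

```python
import numpy as np

def nearest_rotation(M):
    # Project M onto SO(3): for the SVD M = U S V^T, the closest rotation
    # in Frobenius norm is U diag(1, 1, det(U V^T)) V^T
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Perturb a rotation slightly, then re-orthogonalize
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R0 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
noisy = R0 + 1e-3 * np.ones((3, 3))            # no longer orthogonal
R1 = nearest_rotation(noisy)
assert np.allclose(R1 @ R1.T, np.eye(3))       # orthogonality restored
assert np.isclose(np.linalg.det(R1), 1.0)      # and det = +1
```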
THEOREM 2.5. The largest possible eigenvalue of KᵀK is n².
Proof. To show that n² is the largest possible eigenvalue of KᵀK, first note that KᵀK is a real symmetric matrix. Consequently, by the Rayleigh quotient,
xᵀ Kᵀ K x = λ_max
if x is a unit eigenvector corresponding to the largest eigenvalue λ_max of KᵀK. But
λ_max = xᵀ Kᵀ K x = Σ_{j=1}^n Σ_{k=1}^n xᵀ (R_{Bj}ᵀ R_{Bk} ⊗ R_{Aj}ᵀ R_{Ak}) x = Σ_{j=1}^n Σ_{k=1}^n xᵀ y_{j,k} ≤ n²
where
y_{j,k} = (R_{Bj}ᵀ R_{Bk} ⊗ R_{Aj}ᵀ R_{Ak}) x
However, in the last theorem it was shown that n is a singular value of K. Thus, n² is an eigenvalue of KᵀK. Moreover,
λ_max = n²
Note that in this proof we used the fact that y_{j,k} is a unit vector, since R_{Bj}ᵀ R_{Bk} ⊗ R_{Aj}ᵀ R_{Ak} is an orthogonal matrix and hence preserves length. Therefore, xᵀ y_{j,k} ≤ 1.
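For noiseless consistent data, the largest singular value of K is exactly n, which can be checked numerically; a NumPy sketch with n = 5 poses (`random_rotation` is our helper):

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix, sign-fixed so that det = +1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(6)
RX, RY = random_rotation(rng), random_rotation(rng)
n = 5
RA = [random_rotation(rng) for _ in range(n)]
RB = [RY.T @ A @ RX for A in RA]       # noiseless, consistent poses
K = sum(np.kron(B, A) for A, B in zip(RA, RB))

# Theorems 2.4/2.5: the largest singular value of K equals n
assert np.isclose(np.linalg.svd(K, compute_uv=False)[0], n)
```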
3 Finding t_X and t_Y
Once R_Y is calculated with the method outlined in Sec. 2 (Theorem 2.4), t_X and t_Y can be calculated. Specifically, t_X and t_Y are the solution to the linear system (4):
[I   −R_{Aj}] [t_Y; t_X] = t_{Aj} − R_Y t_{Bj}
Clearly, multiple measurements are necessary to obtain a unique solution for this problem. The following will give qualifications for uniqueness. It should be noted that the results of this section are similar to the results of solving AX = YB using quaternions shown in Ref. [8].
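Solving the stacked system (4) is an ordinary linear least-squares problem once R_Y is known. A NumPy sketch with noiseless consistent translations (`random_rotation` is our helper, not from the paper):

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix, sign-fixed so that det = +1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(7)
RY = random_rotation(rng)
tX, tY = rng.standard_normal(3), rng.standard_normal(3)
RA = [random_rotation(rng) for _ in range(3)]
tB = [rng.standard_normal(3) for _ in range(3)]
# Consistent data: tY - RAj tX = tAj - RY tBj
tA = [tY - A @ tX + RY @ b for A, b in zip(RA, tB)]

# Stack the block rows [I  -RAj] and right-hand sides of system (4)
M = np.vstack([np.hstack([np.eye(3), -A]) for A in RA])
rhs = np.concatenate([a - RY @ b for a, b in zip(tA, tB)])
sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
tY_hat, tX_hat = sol[:3], sol[3:]
assert np.allclose(tX_hat, tX) and np.allclose(tY_hat, tY)
```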
THEOREM 3.1. The minimum number n of pose measurements necessary to obtain a unique solution to linear system (4) is n ¼ 3.
Proof. Consider the case where n = 2. Then the linear system (4) becomes
[I   −R_{A1}; I   −R_{A2}] [t_Y; t_X] = [t_{A1} − R_Y t_{B1}; t_{A2} − R_Y t_{B2}]
which clearly is a square system. Therefore, uniqueness of the solution depends on the rank of the matrix
[I   −R_{A1}; I   −R_{A2}]