With real vectors and matrices, the transpose operation is simple and familiar, and it corresponds to what is mathematically called the adjoint. In the complex case, one must also conjugate the entries to keep the mathematical structure intact. This operator is called the hermitian of a matrix, and it is written with a star superscript, as in $A^*$.
A = rand(2,4) + 1i*rand(2,4)
A =
   0.8147 + 0.9575i   0.1270 + 0.1576i   0.6324 + 0.9572i   0.2785 + 0.8003i
   0.9058 + 0.9649i   0.9134 + 0.9706i   0.0975 + 0.4854i   0.5469 + 0.1419i
Aadjoint = A'
Aadjoint =
   0.8147 - 0.9575i   0.9058 - 0.9649i
   0.1270 - 0.1576i   0.9134 - 0.9706i
   0.6324 - 0.9572i   0.0975 - 0.4854i
   0.2785 - 0.8003i   0.5469 - 0.1419i
To get the plain transpose without conjugation, use the .' operator.
Atrans = A.'
Atrans =
   0.8147 + 0.9575i   0.9058 + 0.9649i
   0.1270 + 0.1576i   0.9134 + 0.9706i
   0.6324 + 0.9572i   0.0975 + 0.4854i
   0.2785 + 0.8003i   0.5469 + 0.1419i
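As a quick aside (an added check, not part of the original demonstration), the hermitian should agree with conjugating the plain transpose entrywise:

norm( A' - conj(A.') )   % exactly zero: the hermitian is the conjugated transpose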
If u and v are column vectors of the same length, then their inner product is $u^*v$. The result is a scalar.
u = [ 4; -1; 2+2i ], v = [ -1; 1i; 1 ]
innerprod = u'*v
u =
   4.0000 + 0.0000i
  -1.0000 + 0.0000i
   2.0000 + 2.0000i

v =
  -1.0000 + 0.0000i
   0.0000 + 1.0000i
   1.0000 + 0.0000i

innerprod =
  -2.0000 - 3.0000i
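Equivalently, the inner product is the sum of products of the conjugated entries of u with the entries of v. A quick check of that identity, added here as an aside:

sum( conj(u).*v )   % should reproduce innerprod above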
The inner product has geometric significance. It is used to define length through the 2-norm: the squared length of u is $u^*u$.
length_u_squared = u'*u
length_u_squared = 25
sum( abs(u).^2 )
ans = 25
norm_u = norm(u)
norm_u = 5
It also defines the angle between two vectors as a generalization of the familiar dot product.
cos_theta = (u'*v) / ( norm(u)*norm(v) )
cos_theta = -0.2309 - 0.3464i
The angle may be complex when the vectors are complex!
theta = acos(cos_theta)
theta = 1.7902 + 0.3479i
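For real vectors the cosine is real and lies in $[-1,1]$, so the angle is real as well. A small illustration with made-up real vectors (the names x and y are used only for this aside):

x = [1; 0; 1];  y = [0; 1; 1];           % real vectors chosen for illustration
cos_xy = (x'*y) / ( norm(x)*norm(y) )    % equals 1/2
theta_xy = acos(cos_xy)                  % pi/3, a real angle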
The operations of inverse and hermitian commute: the hermitian of the inverse equals the inverse of the hermitian.
A = rand(4,4)+1i*rand(4,4); (inv(A))'
ans =
   2.4392 + 2.1789i   4.2203 - 2.1410i  -5.6430 - 3.9836i  -1.0386 + 4.9812i
  -0.1291 - 0.5146i  -1.8260 - 1.0582i  -1.0840 + 3.6489i   3.6179 - 1.7832i
  -3.0514 - 0.3801i  -0.9192 + 3.9355i   6.9683 - 0.4530i  -2.9434 - 3.8565i
  -0.4551 - 0.0746i  -0.7963 + 1.3118i   1.9624 - 0.7105i  -0.6719 - 0.2405i
inv(A')
ans =
   2.4392 + 2.1789i   4.2203 - 2.1410i  -5.6430 - 3.9836i  -1.0386 + 4.9812i
  -0.1291 - 0.5146i  -1.8260 - 1.0582i  -1.0840 + 3.6489i   3.6179 - 1.7832i
  -3.0514 - 0.3801i  -0.9192 + 3.9355i   6.9683 - 0.4530i  -2.9434 - 3.8565i
  -0.4551 - 0.0746i  -0.7963 + 1.3118i   1.9624 - 0.7105i  -0.6719 - 0.2405i
So we just write $A^{-*}$ for either case.
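Instead of eyeballing the printed entries, we can confirm the agreement numerically; this added check should produce a difference at roundoff level:

norm( (inv(A))' - inv(A') )   % negligibly small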
Orthogonality, which is the multidimensional extension of perpendicularity, means that $\cos\theta=0$, i.e., that the inner product between the vectors is zero. A collection of vectors is orthogonal if they are all pairwise orthogonal.
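For example (a throwaway illustration, not used later), the vectors $[1;1]$ and $[1;-1]$ are orthogonal:

w = [1; 1];  z = [1; -1];
w'*z   % inner product is zero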
Don't worry about how we are creating the vectors here for now.
[Q,~] = qr(rand(5,3),0)
Q =
  -0.5813   0.1775   0.0673
  -0.3501  -0.4777   0.2848
  -0.0651  -0.3952  -0.9041
  -0.1779  -0.7018   0.2561
  -0.7097   0.3025  -0.1769
Since $Q^*Q$ is a matrix of all inner products between columns of Q, those columns are orthogonal if and only if that matrix is diagonal.
QhQ = Q'*Q
QhQ =
   1.0000   0.0000   0.0000
   0.0000   1.0000   0.0000
   0.0000   0.0000   1.0000
In fact we have a stronger condition here: the columns are orthonormal, meaning that they are orthogonal and each has 2-norm equal to 1.
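We can read the normalization off the diagonal above, or check the column norms directly (an added check; vecnorm is available only in newer MATLAB releases):

vecnorm(Q)   % 2-norm of each column; all equal to 1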
Given any other vector of length 5, we can compute its inner product with each of the columns of Q.
u = rand(5,1); c = Q'*u
c =
  -0.8950
  -0.4719
  -0.6467
We can then use these coefficients to find a vector in the column space of Q.
v = Q*c
v =
   0.3930
   0.3546
   0.8295
   0.3248
   0.6068
As explained in the text, $r=u-v$ is orthogonal to all of the columns of Q.
r = u-v; Q'*r
ans =
   1.0e-15 *
  -0.3608
   0.1110
   0.2776
Consequently, we have decomposed $u=v+r$ into the sum of two orthogonal parts, one lying in the range of Q.
v'*r
ans = 1.3184e-16
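Because v and r are orthogonal, the decomposition also satisfies the Pythagorean identity $\|u\|^2 = \|v\|^2 + \|r\|^2$, which we can check as an aside:

norm(u)^2 - ( norm(v)^2 + norm(r)^2 )   % zero up to roundoff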
We just saw that a matrix whose columns are orthonormal is pretty special. It becomes even more special if the matrix is also square, in which case we call it unitary. (In the real case, such matrices are confusingly called orthogonal. Ugh.) Say Q is unitary and $m\times m$. Then $Q^*Q$ is the $m\times m$ identity matrix, which means $Q^* = Q^{-1}$! It can't get much easier in terms of finding the inverse of a matrix.
[Q,~] = qr(rand(5,5)+1i*rand(5,5));
abs( inv(Q) - Q' )
ans =
   1.0e-15 *
   0.0555   0.1618   0.1570   0.0555   0.1144
   0.1241   0.1127   0.0416   0.1001   0.0191
   0.0964   0.2783   0.1144   0.0416   0.0878
   0.2483   0.0555   0.0785   0.2355   0.1388
   0.0747   0.1144   0.1430        0   0.2289
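A closely related property, added here as an extra demonstration: multiplication by a unitary matrix preserves the 2-norm, since $\|Qx\|^2 = x^*Q^*Qx = \|x\|^2$.

x = rand(5,1) + 1i*rand(5,1);   % a fresh random vector
norm(Q*x) - norm(x)             % zero up to roundoff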
The rank of Q is m, so continuing the discussion above, the original vector u lies in its column space. Hence the remainder is $r=0$, up to roundoff.
c = Q'*u;
v = Q*c;
r = u - v
r =
   1.0e-15 *
   0.0000 + 0.0647i
   0.0555 + 0.0625i
   0.2220 - 0.0640i
   0.2498 + 0.0499i
   0.1665 - 0.0122i
This is another way to arrive at a fact we already knew: multiplication by $Q^*=Q^{-1}$ changes the basis to the columns of Q.
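As a final added check: since Q is unitary, the coefficient vector c has the same 2-norm as u, so this change of basis preserves lengths.

norm(c) - norm(u)   % zero up to roundoff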