ML4360 - Computer vision
Direct Linear Transform
Concatenating the two constraint rows obtained from each point correspondence yields a matrix $A$, and we solve the constrained optimization problem

$$\tilde{h}^* = \underset{\tilde{h}}{\operatorname{argmin}} \; \lVert A\tilde{h} \rVert_2^2 \quad \text{subject to} \quad \lVert \tilde{h} \rVert_2 = 1$$

where we have fixed $\lVert \tilde{h} \rVert_2 = 1$ to exclude the trivial solution $\tilde{h} = 0$, as the solution is only defined up to scale.

The solution to the above optimization problem is the singular vector corresponding to the smallest singular value of $A$. The resulting algorithm is called the Direct Linear Transformation (DLT).
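As a sketch of DLT applied to homography estimation with NumPy (the function name and synthetic data are my own, not from the lecture), the minimizer is read off from the last row of $V^\top$ in the SVD of $A$:

```python
import numpy as np

def dlt_homography(x, y):
    """Estimate a 3x3 homography H with y ~ H x from N >= 4 correspondences (N x 2 arrays)."""
    A = []
    for (u, v), (up, vp) in zip(x, y):
        # Each correspondence contributes two linear constraints on h = vec(H).
        A.append([0, 0, 0, -u, -v, -1, vp * u, vp * v, vp])
        A.append([u, v, 1, 0, 0, 0, -up * u, -up * v, -up])
    # The right-singular vector of the smallest singular value minimizes
    # ||A h|| subject to ||h|| = 1.
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

# Synthetic check: recover a known homography from exact correspondences.
np.random.seed(0)
H_true = np.array([[1.0, 0.2, 5.0], [0.1, 0.9, -3.0], [0.001, 0.002, 1.0]])
x = np.random.rand(20, 2) * 100
xh = np.hstack([x, np.ones((20, 1))]) @ H_true.T
y = xh[:, :2] / xh[:, 2:]
H_est = dlt_homography(x, y)
H_est /= H_est[2, 2]  # a homography is only defined up to scale
```

With exact correspondences the estimate matches `H_true` up to numerical precision.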
Singular Value Decomposition
The SVD of an $m \times n$ matrix $A$ is the factorization

$$A = U \Sigma V^\top$$

where:
- $U$: $m \times m$ orthogonal matrix whose columns are the orthonormal eigenvectors of $AA^\top$
- $V^\top$: transpose of the $n \times n$ orthogonal matrix $V$, whose columns are the orthonormal eigenvectors of $A^\top A$
- $\Sigma$: an $m \times n$ matrix, actually a diagonal matrix with elements equal to the square roots of the positive eigenvalues of $AA^\top$ or $A^\top A$ (both matrices have the same positive eigenvalues anyway), padded with extra zero rows or zero columns.
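These properties are easy to sanity-check numerically with NumPy (the matrix here is arbitrary random data, purely for illustration):

```python
import numpy as np

np.random.seed(0)
A = np.random.rand(4, 3)  # m = 4, n = 3

U, s, Vt = np.linalg.svd(A)
assert U.shape == (4, 4) and Vt.shape == (3, 3)
assert np.allclose(U @ U.T, np.eye(4))    # U is orthogonal
assert np.allclose(Vt.T @ Vt, np.eye(3))  # V is orthogonal

# Squared singular values = positive eigenvalues of A^T A (equivalently A A^T).
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s ** 2, eigvals)

# Pad s into the m x n matrix Sigma to reconstruct A.
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)
```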
Pseudoinverse Matrix
Definition
For $A \in \mathbb{K}^{m \times n}$, the pseudoinverse $A^+ \in \mathbb{K}^{n \times m}$ is the unique matrix satisfying the four Moore–Penrose conditions:
- $A A^+ A = A$: $A A^+$ need not be the general identity matrix, but it maps all column vectors of $A$ to themselves.
- $A^+ A A^+ = A^+$: $A^+$ acts like a weak inverse of $A$.
- $(A A^+)^* = A A^+$: $A A^+$ is Hermitian. A Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose.
- $(A^+ A)^* = A^+ A$: $A^+ A$ is also Hermitian.
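The four conditions can be verified numerically with NumPy's pseudoinverse (a random real matrix for illustration, so the conjugate transpose reduces to the ordinary transpose):

```python
import numpy as np

np.random.seed(1)
A = np.random.rand(4, 3)
A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse

assert np.allclose(A @ A_pinv @ A, A)            # A A+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)  # A+ A A+ = A+
assert np.allclose((A @ A_pinv).T, A @ A_pinv)   # A A+ is Hermitian (symmetric here)
assert np.allclose((A_pinv @ A).T, A_pinv @ A)   # A+ A is Hermitian (symmetric here)
```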
Linear least-squares
The pseudoinverse provides a least squares solution to a system of linear equations. For $A \in \mathbb{K}^{m \times n}$ and $b \in \mathbb{K}^m$:
- We want to find an $x$ satisfying $\lVert Ax - b \rVert_2 \le \lVert Az - b \rVert_2$ for all $z \in \mathbb{K}^n$, where $\lVert \cdot \rVert_2$ denotes the Euclidean norm. Such a least-squares solution is given by $x = A^+ b$.
- This weak inequality holds with equality if and only if $z = A^+ b + (I - A^+ A)w$ for any vector $w$; this provides an infinitude of minimizing solutions unless $A$ has full column rank, in which case $I - A^+ A$ is a zero matrix.
Assume we want to solve $Ax = b$. Even when no exact solution exists, $x = A^+ b$ minimizes $\lVert Ax - b \rVert_2$, and among all minimizers it has the smallest Euclidean norm.
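For instance (a minimal NumPy sketch with made-up data), `np.linalg.pinv` and the direct solver `np.linalg.lstsq` agree on the least-squares solution of an overdetermined system:

```python
import numpy as np

np.random.seed(2)
A = np.random.rand(6, 3)  # overdetermined: 6 equations, 3 unknowns
b = np.random.rand(6)

x_pinv = np.linalg.pinv(A) @ b                   # least-squares solution via A+
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # direct least-squares solver

# A random 6x3 matrix has full column rank (almost surely),
# so the minimizer is unique and I - A+ A vanishes.
assert np.allclose(x_pinv, x_lstsq)
assert np.allclose(np.linalg.pinv(A) @ A, np.eye(3))
```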
Chamfer Distance
Chamfer Distance is widely used in 3D reconstruction to judge how close, on average, one point cloud is to another.

Given two point clouds $S_1, S_2 \subset \mathbb{R}^3$, the (symmetric) Chamfer distance is

$$d_{CD}(S_1, S_2) = \frac{1}{2}\left(\frac{1}{|S_1|}\sum_{x \in S_1}\min_{y \in S_2}\lVert x - y \rVert_2 + \frac{1}{|S_2|}\sum_{y \in S_2}\min_{x \in S_1}\lVert x - y \rVert_2\right)$$

In the following code, we use a KD-tree (scipy.spatial.KDTree) to find, for each point of one cloud, its nearest neighbor in the other.
from scipy.spatial import KDTree

def chamfer_distance(pcl_0, pcl_1):
    # pcl_0 and pcl_1 are two point clouds in 3D space, shaped (N, 3) and (M, 3)
    assert pcl_0.shape[-1] == 3
    assert pcl_1.shape[-1] == 3
    # Build a KD-tree over each cloud for fast nearest-neighbor queries.
    kd_pcl_0 = KDTree(pcl_0)
    kd_pcl_1 = KDTree(pcl_1)
    # For every point in one cloud, the distance to its nearest neighbor in the other.
    mindis_x2y, nearest_x2y = kd_pcl_1.query(pcl_0)
    mindis_y2x, nearest_y2x = kd_pcl_0.query(pcl_1)
    # Average the two directed mean distances.
    chamfer_dist = 0.5 * float(mindis_x2y.mean() + mindis_y2x.mean())
    assert isinstance(chamfer_dist, float)
    return chamfer_dist
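As a usage sketch, the KD-tree version can be cross-checked against a brute-force evaluation of the same formula (the function is repeated here so the snippet is self-contained; the point clouds are random test data):

```python
import numpy as np
from scipy.spatial import KDTree

def chamfer_distance(pcl_0, pcl_1):
    kd_pcl_0, kd_pcl_1 = KDTree(pcl_0), KDTree(pcl_1)
    mindis_x2y, _ = kd_pcl_1.query(pcl_0)
    mindis_y2x, _ = kd_pcl_0.query(pcl_1)
    return 0.5 * float(mindis_x2y.mean() + mindis_y2x.mean())

def chamfer_brute_force(pcl_0, pcl_1):
    # Full pairwise distance matrix, then the row/column minima.
    d = np.linalg.norm(pcl_0[:, None, :] - pcl_1[None, :, :], axis=-1)
    return 0.5 * float(d.min(axis=1).mean() + d.min(axis=0).mean())

np.random.seed(3)
p0, p1 = np.random.rand(50, 3), np.random.rand(60, 3)
assert np.isclose(chamfer_distance(p0, p1), chamfer_brute_force(p0, p1))
assert chamfer_distance(p0, p0) == 0.0  # a cloud is at zero distance from itself
```

The KD-tree version avoids the $O(NM)$ memory of the pairwise matrix, which matters for large clouds.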