
2. Extensions

In this project, I implement three extensions: example-based normal estimation [2], Hayakawa's normal estimation without a chrome ball [3], and the Frankot-Chellappa algorithm for depth estimation [5].

2.1 Example-based Normal Estimation
(Related files: eBNormal.cpp, PsmApp.cpp)

The example-based approach [2] exploits orientation consistency, i.e. the fact that two points with the same surface orientation reflect the same light toward the viewer. Assuming the test object "gray" is a perfect half-sphere, I use it as the reference object. For each point (pixel) on the surface of an object, the observation vector is I = [I1, I2, ..., If], where Ii is the grayvalue of the pixel in image i and f (=12 in the test data) is the number of images. Note that Ii should be normalized using all pixels in image i. To obtain the normal of a point on the query object, my implementation finds the k (=5 in my implementation) points on the reference object with the most similar observation vectors. The normal is then the average of the k normal vectors. The nearest-neighbor search is accelerated with the ANN library [6]. The normals of the reference object are pre-computed using a method similar to the normal estimation of the chrome ball (Section 1.1).

2.2 Normal Estimation without Lighting Directions
(Related files: normalsWoLight.cpp, PsmApp.cpp)

Hayakawa's algorithm [3] estimates the light source directions and surface normals simultaneously. Thus, the previous step of lighting direction estimation using a chrome ball (Section 1.1) is not needed. The algorithm is as follows.

First, construct the image data matrix

I = [I_ij]  (p x f),

where I_ij is the intensity (grayvalue) of pixel i in image j, f is the number of images, and p is the number of pixels in the mask. The matrix factors as I = RNMT, where R is the surface reflectance matrix (p x p, diagonal), N is the surface normal matrix (p x 3), M is the lighting direction matrix (3 x f), and T is the light-source intensity matrix (f x f). Denote the surface matrix S = RN and the light-source matrix L = MT, so that I = SL.

Second, perform the singular value decomposition (SVD) I = UΣV^T. Let U', Σ', V' be the sub-matrices of U, Σ, V corresponding to the 3 largest singular values, and define Ŝ = U'(Σ')^(1/2) and L̂ = (Σ')^(1/2)(V')^T. The final surface matrix and light-source matrix are S = ŜA and L = A^(-1)L̂, where A is an invertible 3 x 3 matrix determined in the next step.

Third, compute the matrix A using either the constraint that the relative surface reflectance is constant over six pixels, or the constraint that the relative light-source intensity is constant over six images. I use the first constraint, and the six pixels are manually selected. The formulation is to solve

ŝ_k^T B ŝ_k = 1,  k = 1,2,...,6,      (6)

where ŝ_k^T is the k-th row of Ŝ. Let B = AA^T; the symmetric matrix B has six unknown entries, which can be solved from the above linear equations. Then perform the SVD B = WΛW^T and take A = WΛ^(1/2).

Finally, the normals are obtained by row-normalizing S and the light-source directions by normalizing L. If the z components of most lighting directions are negative, let S = -S and L = -L.

I use the CLAPACK library [8] for the SVD and the linear-equation solver.

2.3 Frankot-Chellappa Algorithm for Depth Estimation
(Related files: FrankotChellappa.cpp, PsmApp.cpp)
It is easy to obtain gradients from normals. The Frankot-Chellappa algorithm [5] is a fast algorithm to compute depth from gradients. The most straightforward way of obtaining depths from 2D gradients is integration. However, integration along different paths gives different results, so it is sensitive to noise. Better methods include the least-squares fit (Section 1.4) and the Frankot-Chellappa algorithm.
The basic idea of the Frankot-Chellappa algorithm is to project the estimated surface slopes p(x,y) and q(x,y) of the surface f(x,y) onto the slopes p̃(x,y) = ∂f̃/∂x and q̃(x,y) = ∂f̃/∂y of a surface f̃(x,y) such that the integrability constraint

∂²f̃/∂x∂y = ∂²f̃/∂y∂x

holds and the distance error

W = ∬ ( |p(x,y) - p̃(x,y)|² + |q(x,y) - q̃(x,y)|² ) dx dy

is minimized.

Expand the integrable surface f̃(x,y) in a Fourier series

f̃(x,y) = ∑_w C(w) exp(j(wx·x + wy·y)),

where w = (wx, wy).

The optimal coefficients that minimize the distance error are

C(w) = (-j·wx·Cx(w) - j·wy·Cy(w)) / (wx² + wy²),

where Cx(w) and Cy(w) are the Fourier-series coefficients of p(x,y) and q(x,y).

My implementation uses the Discrete Fourier Transform (DFT) instead, i.e. Cx(w) = DFT(p(x,y)), Cy(w) = DFT(q(x,y)), and the recovered depth is f(x,y) = IDFT(C(w)).
The DFT is computed as a Fast Fourier Transform using the FFTW library [7].



Copyright (c) 2009 Wei Zhang