Publications by Year: 2014

A. Agaskar and Y. M. Lu, “Optimal hypothesis testing with combinatorial structure: Detecting random walks on graphs,” in Proc. of Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2014.

Suppose we have a time series of observations for each node in a network, and we wish to detect the presence of a particle undergoing a random walk on the network. If there is no such particle, then we observe only zero-mean Gaussian noise. If a particle is present, however, the observations at the node it currently occupies have an elevated mean. How well can we detect the particle at low signal-to-noise ratios? This is a special case of the problem of detecting hidden Markov processes (HMPs).

The performance metric we analyze is the error exponent of the optimal detector, which measures the exponential rate of decay in the miss probability if the false alarm probability is held fixed as the observation time increases. This problem exhibits deep connections to a problem in statistical physics: computing the free energy density of a spin glass. 

We develop a generalized version of the random energy model (REM) spin glass, whose free energy density provides a lower bound for our error exponent, and compute the bound using large deviations techniques. The bound closely matches empirical results in numerical experiments, and suggests a phase transition phenomenon: below a threshold SNR, the error exponent is nearly constant and near zero, indicating poor performance; above the threshold, there is rapid improvement in performance as the SNR increases. The location of the phase transition depends on the entropy rate of the Markov process.
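The optimal detector is a likelihood-ratio test for a hidden Markov process, which can be evaluated with the standard forward recursion. The following Python sketch is illustrative only, not the paper's analysis; it assumes unit-variance noise, an elevated mean mu at the occupied node, a uniform initial distribution, and a simple random walk on a cycle graph:

import numpy as np

def log_likelihood_ratio(Y, P, mu=1.0):
    """Log-likelihood ratio between H1 (walker present) and H0 (noise only).

    Y  : (T, n) array of observations, one row per time step.
    P  : (n, n) row-stochastic transition matrix of the walk on the graph.
    mu : assumed elevated mean at the node occupied by the particle.
    """
    T, n = Y.shape
    # Per-node log-likelihood ratio of a single observation:
    # log N(y; mu, 1) - log N(y; 0, 1) = mu*y - mu^2/2.
    log_w = mu * Y - 0.5 * mu**2
    alpha = -np.log(n) + log_w[0]              # uniform initial distribution
    for t in range(1, T):
        # forward recursion in the log domain (log-sum-exp for stability)
        m = alpha.max()
        alpha = np.log(np.exp(alpha - m) @ P) + m + log_w[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Toy experiment: random walk on a cycle of n nodes
rng = np.random.default_rng(0)
n, T, mu = 20, 200, 0.5
P = (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)) / 2
walk = np.mod(np.cumsum(rng.choice([-1, 1], size=T)), n)
Y1 = rng.standard_normal((T, n))
Y1[np.arange(T), walk] += mu                   # H1: particle present
Y0 = rng.standard_normal((T, n))               # H0: noise only
print(log_likelihood_ratio(Y1, P, mu), log_likelihood_ratio(Y0, P, mu))

Thresholding this statistic gives the Neyman-Pearson test whose miss probability decays at the error exponent studied in the paper.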

S. H. Chan and Y. M. Lu, “Efficient image reconstruction for gigapixel quantum image sensors,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 2014.

Recent advances in materials, devices and fabrication technologies have motivated a strong momentum in developing solid-state sensors that can detect individual photons in space and time. It has been envisioned that such sensors can eventually achieve very high spatial resolutions (e.g., 10^9 pixels/chip) as well as high frame rates (e.g., 10^6 frames/sec). In this paper, we present an efficient algorithm to reconstruct images from the massive binary bit-streams generated by these sensors. Based on the concept of alternating direction method of multipliers (ADMM), we transform the computationally intensive optimization problem into a sequence of subproblems, each of which has an efficient implementation in the form of polyphase-domain filtering or pixel-wise nonlinear mappings. Moreover, we reformulate the original maximum-likelihood estimation as maximum a posteriori estimation by introducing a total variation prior. Numerical results demonstrate the strong performance of the proposed method, which achieves several dB of improvement in PSNR and requires a shorter runtime compared to standard gradient-based approaches.

qis_image_reconstruction.pdf
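For intuition, here is a toy Python sketch of the ADMM splitting described in the abstract. It is a simplified stand-in, not the paper's algorithm: it assumes a single-photon threshold model P(bit = 1) = 1 - exp(-theta), solves the pixel-wise data-fit subproblem with a generic bounded scalar minimizer, and substitutes an off-the-shelf total-variation denoiser (skimage's denoise_tv_chambolle) for the polyphase-domain filtering step:

import numpy as np
from scipy.optimize import minimize_scalar
from skimage.restoration import denoise_tv_chambolle

def admm_qis(S, K, rho=1.0, tv_weight=0.1, iters=50):
    """Toy ADMM reconstruction from binary quanta counts.

    S : (H, W) array with the number of '1' bits observed at each pixel.
    K : number of binary frames per pixel.
    Assumed model: each bit equals 1 with probability 1 - exp(-theta).
    """
    theta = np.full(S.shape, 0.5)
    z = theta.copy()
    u = np.zeros_like(theta)

    def prox_likelihood(s, v):
        # Pixel-wise MAP step: negative log-likelihood plus the
        # quadratic ADMM coupling term, minimized over theta > 0.
        f = lambda t: (-s * np.log1p(-np.exp(-t)) + (K - s) * t
                       + 0.5 * rho * (t - v) ** 2)
        return minimize_scalar(f, bounds=(1e-6, 10.0), method="bounded").x

    prox_vec = np.vectorize(prox_likelihood)
    for _ in range(iters):
        theta = prox_vec(S, z - u)                                   # data-fit step
        z = denoise_tv_chambolle(theta + u, weight=tv_weight / rho)  # TV prior step
        u = u + theta - z                                            # dual update
    return z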
A. Agaskar, C. Wang, and Y. M. Lu, “Randomized Kaczmarz algorithms: Exact MSE analysis and optimal sampling probabilities,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 2014.

The Kaczmarz method, or the algebraic reconstruction technique (ART), is a popular method for solving large-scale overdetermined systems of equations. Recently, Strohmer et al. proposed the randomized Kaczmarz algorithm, an improvement that guarantees exponential convergence to the solution. This has spurred much interest in the algorithm and its extensions. We provide in this paper an exact formula for the mean squared error (MSE) in the value reconstructed by the algorithm. We also compute the exponential decay rate of the MSE, which we call the “annealed” error exponent. We show that the typical performance of the algorithm is far better than the average performance, and we define the “quenched” error exponent to characterize this typical performance. The quenched exponent is far harder to compute than the annealed one, but we provide an approximation that matches empirical results. We also explore how to optimize the row-selection probabilities to speed up the algorithm’s convergence.

randkac_globalsip14.pdf

(This paper received the Best Student Paper Award at GlobalSIP 2014.)
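As a point of reference, here is a minimal NumPy sketch of the randomized Kaczmarz iteration analyzed in the paper, using the standard row-selection probabilities proportional to the squared row norms; the optimized probabilities studied in the paper would simply replace the default p:

import numpy as np

def randomized_kaczmarz(A, b, iters=1000, p=None, rng=None):
    """Randomized Kaczmarz for a consistent overdetermined system Ax = b.

    p : row-selection probabilities; defaults to the choice proportional
        to the squared row norms of A.
    """
    rng = rng or np.random.default_rng()
    m, n = A.shape
    row_norms_sq = np.sum(A ** 2, axis=1)
    if p is None:
        p = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        # project the current iterate onto the hyperplane {x : a_i^T x = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# quick check on a consistent random system
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true, iters=5000, rng=rng)
print(np.linalg.norm(x_hat - x_true))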

J. Oñativia, Y. M. Lu, and P. L. Dragotti, “Finite Dimensional FRI,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, 2014.

Traditional Finite Rate of Innovation (FRI) theory has considered the problem of sampling continuous-time signals. This framework can be naturally extended to the case where the input is a discrete-time signal. Here we present a novel approach that combines the traditional FRI sampling scheme, based on the annihilating filter method, with the fact that, in this new setup, the null space of the problem to be solved is finite dimensional.

In the noiseless scenario, we show that this new approach is able to perfectly recover the original signal at the critical sampling rate. We also present simulation results in the noisy scenario, where this new approach improves performance in terms of the mean squared error (MSE) of the reconstructed signal when compared to the canonical FRI algorithms and compressed sensing (CS).

finite_fri.pdf
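The annihilating-filter step at the heart of FRI recovery is easy to illustrate. The noiseless NumPy sketch below (an illustration only, not the paper's full finite-dimensional algorithm) recovers a K-sparse discrete signal from its first 2K DFT coefficients:

import numpy as np

def annihilating_filter_recovery(X, K, N):
    """Recover the spike locations and amplitudes of a K-sparse length-N
    signal from its first 2K DFT coefficients X[0], ..., X[2K-1] (noiseless).
    """
    # Toeplitz system: sum_l h[l] X[m-l] = 0 for m = K, ..., 2K-1, with h[0] = 1
    T = np.array([[X[m - l] for l in range(1, K + 1)] for m in range(K, 2 * K)])
    rhs = -np.array([X[m] for m in range(K, 2 * K)])
    h = np.concatenate(([1.0], np.linalg.solve(T, rhs)))
    # The roots of the annihilating filter encode the spike locations.
    u = np.roots(h)
    locs = np.mod(np.round(-np.angle(u) * N / (2 * np.pi)).astype(int), N)
    # Amplitudes from a Vandermonde least-squares fit.
    V = np.exp(-2j * np.pi * np.outer(np.arange(2 * K), locs) / N)
    amps = np.linalg.lstsq(V, X[:2 * K], rcond=None)[0]
    order = np.argsort(locs)
    return locs[order], amps[order]

# Toy example: N = 64, K = 3 spikes
N, K = 64, 3
x = np.zeros(N)
x[[5, 20, 41]] = [1.0, -2.0, 0.7]
print(annihilating_filter_recovery(np.fft.fft(x)[:2 * K], K, N))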
S. H. Chan, T. Zickler, and Y. M. Lu, “Monte Carlo Non-Local Means: Random Sampling for Large-Scale Image Filtering,” IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3711-3725, 2014.

We propose a randomized version of the non-local means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo non-local means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.

mcnlm.pdf
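A minimal sketch of the sampling idea, assuming a uniform Bernoulli sampling pattern (the paper derives optimal, weight-dependent patterns) and estimating a single pixel:

import numpy as np

def mcnlm_pixel(noisy, i, j, patch=3, h=0.15, ratio=0.05, rng=None):
    """NLM estimate of pixel (i, j) using a random subset of candidate patches."""
    rng = rng or np.random.default_rng()
    r = patch // 2
    pad = np.pad(noisy, r, mode="reflect")
    ref = pad[i:i + patch, j:j + patch].ravel()
    H, W = noisy.shape
    # Bernoulli sampling of candidate pixels: each patch distance is
    # computed with probability `ratio` instead of exhaustively.
    mask = rng.random((H, W)) < ratio
    num = den = 0.0
    for y, x in np.argwhere(mask):
        cand = pad[y:y + patch, x:x + patch].ravel()
        w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2 * patch ** 2))
        num += w * noisy[y, x]
        den += w
    return num / den if den > 0 else noisy[i, j]

# Toy example: denoise the center pixel of a noisy square
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
print(mcnlm_pixel(noisy, 32, 32, rng=rng))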
P. L. Dragotti and Y. M. Lu, “On Sparse Representation in Fourier and Local Bases,” IEEE Transactions on Information Theory, vol. 60, no. 12, pp. 7888-7899, 2014.

We consider the classical problem of finding the sparse representation of a signal in a pair of bases. When both bases are orthogonal, it is known that the sparse representation is unique when the sparsity $K$ of the signal satisfies $K<1/\mu(\mathbf{D})$, where $\mu(\mathbf{D})$ is the mutual coherence of the dictionary. Furthermore, the sparse representation can be obtained in polynomial time by Basis Pursuit (BP), when $K<0.91/\mu(\mathbf{D})$. Therefore, there is a gap between the unicity condition and the one required to use the polynomial-complexity BP formulation. For the case of general dictionaries, it is also well known that finding the sparse representation under the only constraint of unicity is NP-hard. In this paper, we introduce, for the case of Fourier and canonical bases, a polynomial complexity algorithm that finds all the possible $K$-sparse representations of a signal under the weaker condition that $K<\sqrt{2}/\mu(\mathbf{D})$. Consequently, when $K<1/\mu(\mathbf{D})$, the proposed algorithm solves the unique sparse representation problem for this structured dictionary in polynomial time. We further show that the same method can be extended to many other pairs of bases, one of which must have local atoms. Examples include the union of Fourier and local Fourier bases, the union of discrete cosine transform and canonical bases, and the union of random Gaussian and canonical bases.

prosparse.pdf
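For the union of Fourier and canonical bases, the coherence quoted above is easy to check numerically. The snippet below (illustrative only) verifies that $\mu(\mathbf{D})=1/\sqrt{N}$, so uniqueness is guaranteed for $K<\sqrt{N}$ while the proposed algorithm reaches $K<\sqrt{2N}$:

import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # orthonormal DFT basis
D = np.hstack([F, np.eye(N)])            # union of Fourier and canonical bases

# Mutual coherence: largest |inner product| between distinct unit-norm columns.
G = np.abs(D.conj().T @ D)
np.fill_diagonal(G, 0)
mu = G.max()
print(mu, 1 / np.sqrt(N))                                    # both equal 0.125
print("unique if K <", 1 / mu, "; algorithmic bound: K <", np.sqrt(2) / mu)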
S. Maranò, D. Fäh, and Y. M. Lu, “Sensor Placement for the Analysis of Seismic Surface Waves: Source of Error, Design Criterion, and Array Design Algorithms,” Geophys. J. Int., vol. 197, no. 3, pp. 1566-1581, 2014.

Seismic surface waves can be measured by deploying an array of seismometers on the surface of the earth. The goal of such measurement surveys is, usually, to estimate the velocity of propagation and the direction of arrival of the seismic waves. In this paper, we address the issue of sensor placement for the analysis of seismic surface waves from ambient vibration wavefields. First, we explain in detail how the array geometry affects the mean-squared estimation error (MSEE) of parameters of interest, such as the velocity and direction of propagation, both at low and high signal-to-noise ratios (SNRs). Second, we propose a cost function suitable for the design of the array geometry with particular focus on the estimation of the wavenumber of both Love and Rayleigh waves. Third, we present and compare several computational approaches to minimize the proposed cost function. Numerical experiments verify the effectiveness of our cost function and resulting array geometry designs, leading to greatly improved estimation performance in comparison to arbitrary array geometries, both at low and high SNR levels.