Publications by Type: Conference Paper

2014
S. H. Chan and Y. M. Lu, “Efficient image reconstruction for gigapixel quantum image sensors,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 2014.

Recent advances in materials, devices and fabrication technologies have created strong momentum in developing solid-state sensors that can detect individual photons in space and time. It has been envisioned that such sensors can eventually achieve very high spatial resolutions (e.g., 10^9 pixels/chip) as well as high frame rates (e.g., 10^6 frames/sec). In this paper, we present an efficient algorithm to reconstruct images from the massive binary bit-streams generated by these sensors. Based on the alternating direction method of multipliers (ADMM), we transform the computationally intensive optimization problem into a sequence of subproblems, each of which has an efficient implementation in the form of polyphase-domain filtering or pixel-wise nonlinear mappings. Moreover, we reformulate the original maximum likelihood estimation as maximum a posteriori estimation by introducing a total variation prior. Numerical results demonstrate the strong performance of the proposed method, which achieves several dB of improvement in PSNR and requires a shorter runtime than standard gradient-based approaches.

qis_image_reconstruction.pdf
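
For intuition about the measurement model above, here is a minimal pixel-wise maximum-likelihood sketch (Python/NumPy), assuming a single-photon threshold of one and Poisson photon arrivals. It omits the total-variation prior and the ADMM splitting developed in the paper and serves only as a baseline illustration; the function name ml_intensity_from_bits is ours, not the paper's.

```python
import numpy as np

def ml_intensity_from_bits(bits):
    """Pixel-wise ML estimate of light intensity from binary frames.

    bits: array of shape (T, H, W), where each bit is 1 if at least one
    photon arrived at that pixel during the frame (Poisson model, so
    P(bit = 1) = 1 - exp(-lam)). Returns an (H, W) map of photons/frame.
    """
    p_hat = bits.mean(axis=0)                    # fraction of "1" readings per pixel
    p_hat = np.clip(p_hat, 0.0, 1.0 - 1e-6)      # avoid log(0) at saturation
    return -np.log1p(-p_hat)                     # invert P(1) = 1 - exp(-lam)

# toy usage: simulate binary frames for a smooth intensity field
rng = np.random.default_rng(0)
H, W, T = 64, 64, 200
lam = 0.5 + 2.0 * np.exp(-((np.arange(W) - W / 2) ** 2) / 200.0)[None, :] * np.ones((H, 1))
bits = (rng.poisson(lam, size=(T, H, W)) >= 1).astype(float)
lam_hat = ml_intensity_from_bits(bits)
print("mean abs error:", np.abs(lam_hat - lam).mean())
```

The paper replaces this independent per-pixel inversion with a total-variation-regularized MAP estimate solved via ADMM.
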
A. Agaskar, C. Wang, and Y. M. Lu, “Randomized Kaczmarz algorithms: Exact MSE analysis and optimal sampling probabilities,” in IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, 2014.

The Kaczmarz method, or the algebraic reconstruction technique (ART), is a popular method for solving large-scale overdetermined systems of equations. Recently, Strohmer et al. proposed the randomized Kaczmarz algorithm, an improvement that guarantees exponential convergence to the solution. This has spurred much interest in the algorithm and its extensions. In this paper, we provide an exact formula for the mean squared error (MSE) of the value reconstructed by the algorithm. We also compute the exponential decay rate of the MSE, which we call the “annealed” error exponent. We show that the typical performance of the algorithm is far better than the average performance, and we define the “quenched” error exponent to characterize this typical performance. The quenched exponent is far harder to compute than the annealed one, but we provide an approximation that matches empirical results. We also explore optimizing the algorithm’s row-selection probabilities to speed up its convergence.

randkac_globalsip14.pdf

(This paper received the Best Student Paper Award at GlobalSIP 2014.)
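
For reference, a minimal sketch of the randomized Kaczmarz iteration that the paper analyzes, with row-selection probabilities proportional to squared row norms (the Strohmer–Vershynin choice); the exact MSE formula and the annealed/quenched error exponents derived in the paper are not reproduced here.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, probs=None, rng=None):
    """Solve a consistent overdetermined system Ax = b by randomized Kaczmarz.

    At each step a row i is drawn with probability probs[i] (by default
    proportional to ||a_i||^2) and the iterate is projected onto the
    hyperplane {x : a_i^T x = b_i}.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_norms2 = (A ** 2).sum(axis=1)
    if probs is None:
        probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# toy usage on a random consistent system
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 50))
x_true = rng.standard_normal(50)
x_hat = randomized_kaczmarz(A, A @ x_true, n_iter=4000, rng=2)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```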

J. Oñativia, Y. M. Lu, and P. L. Dragotti, “Finite Dimensional FRI,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, 2014.

Traditional Finite Rate of Innovation (FRI) theory has considered the problem of sampling continuous-time signals. This framework can be naturally extended to the case where the input is a discrete-time signal. Here we present a novel approach which uses both the traditional FRI sampling scheme, based on the annihilating filter method, and the fact that in this new setup the null space of the problem to be solved is finite dimensional.

In the noiseless scenario, we show that this new approach is able to perfectly recover the original signal at the critical sampling rate. We also present simulation results in the noisy scenario, where this new approach improves performance in terms of the mean squared error (MSE) of the reconstructed signal when compared to canonical FRI algorithms and compressed sensing (CS).

finite_fri.pdf
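
As background, the sketch below shows the classical annihilating-filter (Prony-type) step that FRI recovery builds on: estimating the locations of K spikes in a length-N discrete signal from 2K+1 of its DFT coefficients. This is the textbook procedure, not the specific finite-dimensional algorithm proposed in the paper.

```python
import numpy as np

def annihilating_filter_locations(X, K, N):
    """Recover the K spike locations of a length-N sparse signal from its
    low-frequency DFT coefficients X[0..M-1], M >= 2K, via the annihilating
    filter method."""
    M = len(X)
    # Toeplitz system: sum_j h[j] * X[m - j] = 0 for m = K, ..., M-1
    T = np.array([[X[m - j] for j in range(K + 1)] for m in range(K, M)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                      # annihilating filter coefficients
    u = np.roots(h)                        # roots are exp(-2j*pi*t_k/N)
    t = (-np.angle(u) * N / (2 * np.pi)) % N
    return np.sort(np.round(t).astype(int) % N)

# toy usage: K spikes on a length-N grid, observed through 2K+1 DFT samples
rng = np.random.default_rng(3)
N, K = 128, 4
t_true = np.sort(rng.choice(N, size=K, replace=False))
a_true = rng.uniform(1.0, 2.0, K)
m = np.arange(2 * K + 1)
X = (a_true[None, :] * np.exp(-2j * np.pi * m[:, None] * t_true[None, :] / N)).sum(axis=1)
print("true:", t_true, " estimated:", annihilating_filter_locations(X, K, N))
```
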
2013
Y. M. Lu, “A Framework for Adaptive Parameter Estimation with Finite Memory,” in Proc. IEEE Global Conference on Signal and Information Processing, Austin, TX, 2013.
We consider the problem of estimating an unknown parameter from a finite collection of different statistical experiments. The measurements are taken sequentially. Based on the observations made so far, we adaptively select the next experiment that provides the most information about the parameter. Summarizing past information with finite memory, we present a general framework for efficient adaptive estimation, with the sensing schemes fully characterized by finite-state parametric Markov chains. We establish an analytic formula linking the asymptotic performance of adaptive estimation schemes to the steady-state distributions of the associated Markov chains. Consequently, finding optimal adaptive strategies can be reformulated as the problem of designing a (continuous) family of Markov chains with prescribed steady-state distributions. We also propose a quantitative design criterion for optimal sensing policies based on minimax ratio regret.
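
A small numerical illustration of the link stated above between a sensing policy and its steady-state distribution: given the row-stochastic transition matrix of a finite-state chain, the stationary distribution is the normalized left eigenvector at eigenvalue 1. The parametric birth–death chain below is a generic toy example, not the paper's construction.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic finite-state Markov chain with
    row-stochastic transition matrix P (left eigenvector at eigenvalue 1)."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def birth_death_chain(theta, n_states=5):
    """Toy parametric chain: move up one state w.p. theta/(1+theta), else down."""
    P = np.zeros((n_states, n_states))
    up = theta / (1.0 + theta)
    for s in range(n_states):
        P[s, min(s + 1, n_states - 1)] += up
        P[s, max(s - 1, 0)] += 1.0 - up
    return P

for theta in (0.5, 1.0, 2.0):
    print(theta, np.round(stationary_distribution(birth_death_chain(theta)), 3))
```
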
A. Agaskar and Y. M. Lu, “ALARM: A Logistic Auto-Regressive Model for binary processes on networks,” in Proc. IEEE Global Conference on Signal and Information Processing, Austin, TX, 2013.

We introduce the ALARM model, a logistic autoregressive model for discrete-time binary processes on networks, and describe a technique for learning the graph structure underlying the model from observations. Using only a small number of parameters, the proposed ALARM can describe a wide range of dynamic behavior on graphs, such as the contact process, voter process, and even some epidemic processes. Under ALARM, at each time step, the probability of a node having value 1 is determined by the values taken by its neighbors in the past; specifically, its probability is given by the logistic function evaluated at a linear combination of its neighbors' past values (within a fixed time window) plus a bias term. We examine the behavior of this model for 1D and 2D lattice graphs, and observe a phase transition in the steady state for 2D lattices. We then study the problem of learning a graph from ALARM observations. We show how a regularizer promoting group sparsity can be used to efficiently learn the parameters of the model from a realization, and demonstrate the resulting ability to reconstruct the underlying network from the data.

alarm_globalsip13.pdf
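
A minimal simulation of the ALARM dynamics described above, on a ring graph with a one-step memory window; the weight and bias values are arbitrary and chosen only for illustration, and the learning step (group-sparse regression of the parameters) is not included.

```python
import numpy as np

def simulate_alarm(adj, weight, bias, T, rng=None):
    """Simulate a one-step-memory ALARM process: at each time step, node i takes
    value 1 with probability sigmoid(bias + weight * sum of its neighbors'
    values at the previous step)."""
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    x = np.zeros((T, n), dtype=int)
    x[0] = rng.integers(0, 2, size=n)
    for t in range(1, T):
        z = bias + weight * (adj @ x[t - 1])
        p = 1.0 / (1.0 + np.exp(-z))
        x[t] = rng.random(n) < p
    return x

# ring graph with n nodes
n = 50
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = 1

x = simulate_alarm(adj, weight=2.0, bias=-2.0, T=200, rng=0)
print("fraction of ones over the last 5 steps:", x[-5:].mean(axis=1))
```
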
Z. Sadeghipoor, Y. M. Lu, and S. Susstrunk, “A novel compressive sensing approach to simultaneously acquire color and near-infrared images on a single sensor,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, 2013.

Sensors of most digital cameras are made of silicon that is inherently sensitive to both the visible and near-infrared (NIR) parts of the electromagnetic spectrum. In this paper, we address the problem of joint color and NIR acquisition. We propose a framework for the joint acquisition that uses only a single silicon sensor and a slightly modified version of the Bayer color-filter array that is already mounted in most color cameras. Implementing such a design for an RGB and NIR joint acquisition system requires minor changes to the hardware of commercial color cameras. One of the important differences between this design and the conventional color camera is the post-processing applied to the captured values to reconstruct full-resolution images. By using a CFA similar to Bayer, the sensor records a mixture of NIR and one color channel at each pixel. In this case, separating the NIR and color channels in different pixels is equivalent to solving an under-determined system of linear equations. To solve this problem, we propose a novel algorithm that uses tools developed in the field of compressive sensing. Our method results in high-quality RGB and NIR images (an average PSNR of more than 30 dB for the reconstructed images) and shows a promising path towards RGB and NIR cameras.
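
The channel-separation step described above boils down to solving an under-determined linear system under a sparsity prior. As a generic compressive-sensing illustration (not the authors' algorithm), the sketch below recovers a sparse vector from random linear measurements with iterative soft-thresholding (ISTA).

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold
    return x

# toy usage: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(4)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```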

S. H. Chan, T. Zickler, and Y. M. Lu, “Fast non-local filtering by random sampling: It works, especially for large images,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, 2013.

Non-local means (NLM) is a popular denoising scheme. Conceptually simple, the algorithm is computationally intensive for large images. We propose to speed up NLM by using random sampling. Our algorithm picks, uniformly at random, a small number of columns of the weight matrix, and uses these “representatives” to compute an approximate result. It also incorporates an extra column-normalization of the sampled columns, a form of symmetrization that often boosts the denoising performance on real images. Using statistical large deviation theory, we analyze the proposed algorithm and provide guarantees on its performance. We show that the probability of having a large approximation error decays exponentially as the image size increases. Thus, for large images, the random estimates generated by the algorithm are tightly concentrated around their limit values, even if the sampling ratio is small. Numerical results confirm our theoretical analysis: the proposed algorithm reduces the run time of NLM, and thanks to the symmetrization step, actually provides some improvement in peak signal-to-noise ratios.

random_nlm.pdf
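
A rough sketch of the column-sampling idea: approximate the full NLM weight matrix by computing weights only to a random subset of reference pixels and averaging over those representatives. The extra symmetrization step and the large-deviation guarantees discussed in the paper are omitted; the function sampled_nlm below is our own simplified construction.

```python
import numpy as np

def sampled_nlm(y, patch=5, h=0.2, n_samples=400, rng=None):
    """Approximate non-local means: each pixel is a weighted average of a random
    subset of reference pixels, with Gaussian weights on patch distances. This
    mimics keeping a few random columns of the full NLM weight matrix."""
    rng = np.random.default_rng(rng)
    H, W = y.shape
    r = patch // 2
    ypad = np.pad(y, r, mode='reflect')
    P = np.lib.stride_tricks.sliding_window_view(ypad, (patch, patch))
    P = P.reshape(H * W, patch * patch)                  # one patch per row
    idx = rng.choice(H * W, size=min(n_samples, H * W), replace=False)
    Q, v = P[idx], y.reshape(-1)[idx]                    # sampled patches and pixel values
    d2 = ((P ** 2).sum(1)[:, None] + (Q ** 2).sum(1)[None, :] - 2.0 * P @ Q.T) / P.shape[1]
    w = np.exp(-np.maximum(d2, 0.0) / h ** 2)
    w /= w.sum(axis=1, keepdims=True)                    # normalize the retained weights
    return (w @ v).reshape(H, W)

# toy usage: denoise a noisy piecewise-constant image
rng = np.random.default_rng(5)
clean = np.zeros((96, 96))
clean[:, 48:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = sampled_nlm(noisy, rng=6)
print("noisy MSE:", ((noisy - clean) ** 2).mean(), " denoised MSE:", ((den - clean) ** 2).mean())
```
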
A. Agaskar and Y. M. Lu, “Detecting random walks hidden in noise: Phase transition on large graphs,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, 2013.

We consider the problem of distinguishing between two hypotheses: that a sequence of signals on a large graph consists entirely of noise, or that it contains a realization of a random walk buried in the noise. The problem of computing the error exponent of the optimal detector is simple to formulate, but exhibits deep connections to problems known to be difficult, such as computing Lyapunov exponents of products of random matrices and the free entropy density of statistical mechanical systems. We describe these connections, and define an algorithm that efficiently computes the error exponent of the Neyman-Pearson detector. We also obtain a closed-form formula, derived from a statistical-mechanics approximation, for the error exponent on an arbitrary graph of large size. The derivation of this formula is not entirely rigorous, but it closely matches the empirical results in all our experiments. This formula explains a phase transition phenomenon in the error exponent: below a threshold SNR, the error exponent is nearly constant and close to zero, indicating poor performance; above the threshold, performance improves rapidly as the SNR increases. The location of the phase transition depends on the entropy rate of the random walk.

mcdet_icassp.pdf
Y. M. Lu, “Adaptive sensing and inference for single-photon imaging,” in Proc. 47th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, 2013.

In recent years, there have been increasing efforts to develop solid-state sensors with single-photon sensitivity, with applications ranging from bio-imaging to 3D computer vision. In this paper, we present adaptive sensing models, theory and algorithms for these single-photon sensors, aiming to improve their dynamic ranges. Mapping different sensor configurations onto a finite set of states, we represent adaptive sensing schemes as finite-state parametric Markov chains. After deriving an asymptotic expression for the Fisher information rate of these Markovian systems, we propose a design criterion for sensing policies based on minimax ratio regret. We also present a suboptimal yet effective sensing policy based on random walks. Numerical experiments demonstrate the strong performance of the proposed scheme, which expands the sensor dynamic ranges of existing nonadaptive approaches by several orders of magnitude.

adaptive_sensing.pdf
C. Hu, L. Cheng, J. Sepulcre, G. E. Fakhri, Y. M. Lu, and Q. Li, “Matched signal detection on graphs: Theory and application to brain network classification,” in Proc. 23rd International Conference on Information Processing in Medical Imaging (IPMI 2013), Asilomar, CA, 2013.
We develop a matched signal detection (MSD) theory for signals with an intrinsic structure described by a weighted graph. Hypothesis tests are formulated under different signal models. In the simplest scenario, we assume that the signal is deterministic, observed in noise, and lies in a subspace spanned by a subset of eigenvectors of the graph Laplacian; the conventional matched subspace detection can be easily extended to this case. Furthermore, we study signals with a certain level of smoothness. When the noise variance is negligible, the test turns out to be a weighted energy detector. More generally, we presume that the signal follows a prior distribution, which could be learned from training data. If an Ising model is adopted, the test statistic is then the difference of signal variations on the associated graph structures. The effectiveness of the MSD on graphs is evaluated both by simulation and on real data. In particular, we apply it to the network classification problem for Alzheimer’s disease (AD). The preliminary results demonstrate that our approach is able to exploit the sub-manifold structure of the data, and therefore achieves better performance than traditional principal component analysis (PCA).
graphmsd_impi13.pdf
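
In the simplest deterministic-signal scenario mentioned above, matched subspace detection amounts to measuring how much of the observation's energy lies in a low-frequency subspace of the graph Laplacian. The sketch below computes such a generic subspace-energy statistic on a random graph; it is an illustration of the idea, not the specific tests derived in the paper.

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W from a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def subspace_energy_statistic(x, W, k):
    """Fraction of the observation's energy lying in the span of the k
    lowest-frequency Laplacian eigenvectors."""
    _, U = np.linalg.eigh(laplacian(W))
    c = U[:, :k].T @ x
    return (c @ c) / (x @ x)

# toy usage: smooth graph signal plus noise (H1) vs. pure noise (H0)
rng = np.random.default_rng(7)
n, k = 60, 8
W = (rng.random((n, n)) < 0.1).astype(float)
W = np.triu(W, 1)
W = W + W.T
_, U = np.linalg.eigh(laplacian(W))
smooth = U[:, :k] @ rng.standard_normal(k)            # signal in the low-frequency subspace
noise = rng.standard_normal(n)
print("H1 statistic:", subspace_energy_statistic(smooth + 0.5 * noise, W, k))
print("H0 statistic:", subspace_energy_statistic(noise, W, k))
```
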
C. Hu, L. Cheng, J. Sepulcre, G. E. Fakhri, Y. M. Lu, and Q. Li, “A graph theoretical regression model for brain connectivity learning of Alzheimer's disease,” in Proc. International Symposium on Biomedical Imaging (ISBI), San Francisco, CA, 2013.

Learning functional brain connectivity is essential to the understanding of neurodegenerative diseases. In this paper, we introduce a novel graph regression model (GRM) which regards the imaging data as signals defined on a graph and optimizes the fitness between the graph and the data, with a sparsity-level regularization. The proposed framework features a nice interpretation in terms of low-pass signals on graphs, and is more generic compared with previous statistical models. Results based on the data illustrate that our approach can obtain a very close reconstruction of the true network. We then apply the GRM to learn the brain connectivity of Alzheimer’s disease (AD). Evaluations performed on PET imaging data of 30 AD patients demonstrate that the connectivity patterns discovered are easy to interpret and consistent with known pathology.

graphrecon_isbi_reprocessed.pdf
2012
C. Hu and Y. M. Lu, “Adaptive time-sequential binary sensing for high dynamic range imaging,” in Proc. SPIE Conference on Advanced Photon Counting Techniques VI, Baltimore, MD, 2012.

We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick the optimal threshold for that intensity, and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the constant single-photon threshold strategy considered in previous work, the proposed scheme attains orders of magnitude improvement in sensor dynamic range.
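
To make the oracle-threshold observation concrete, the snippet below evaluates the per-frame Fisher information of a one-bit quantized Poisson measurement with threshold q, namely I_q(lambda) = p'(lambda)^2 / (p(lambda)(1 - p(lambda))) with p(lambda) = P(Poisson(lambda) >= q), and compares the best threshold's information with the 1/lambda information of an ideal unquantized Poisson sensor. The formula is standard; the paper's adaptive updating rule itself is not reproduced here.

```python
import numpy as np
from scipy.stats import poisson

def one_bit_fisher_info(lam, q):
    """Per-frame Fisher information about lam carried by the one-bit reading
    B = 1{Poisson(lam) >= q}: I_q(lam) = p'(lam)^2 / (p (1 - p)),
    where p = P(N >= q) and p'(lam) = exp(-lam) * lam^(q-1) / (q-1)!."""
    p = poisson.sf(q - 1, lam)          # P(N >= q)
    dp = poisson.pmf(q - 1, lam)        # derivative of P(N >= q) w.r.t. lam
    return dp ** 2 / np.clip(p * (1.0 - p), 1e-300, None)

thresholds = np.arange(1, 2001)
for lam in np.logspace(-1, 3, 9):
    best = one_bit_fisher_info(lam, thresholds).max()   # oracle threshold for this lam
    print(f"lam={lam:8.2f}   best one-bit FI={best:.3e}   ideal Poisson FI={1.0 / lam:.3e}")
```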

C. Hu, L. Cheng, and Y. M. Lu, “Graph-based regularization for color image demosaicking,” in Proc. IEEE International Conference on Image Processing, Orlando, FL, 2012.

We present a novel regularization framework for demosaicking by viewing images as smooth signals defined on weighted graphs. The restoration problem is formulated as a minimization of variation of these graph-domain signals. As an initial step, we build a weight matrix which measures the similarity between every pair of pixels, from an estimate of the full color image. Subsequently, a two-stage optimization is carried out: first, we assume that the graph Laplacian is signal dependent and solve a non-quadratic problem by gradient descent; then, we pose a variational problem on graphs with a fixed Laplacian, subject to the constraint of consistency given by available samples in each color channel. Performance evaluation shows that our approach can improve existing demosaicking methods both quantitatively and visually, by reducing color artifacts.

graphdemo_icip.pdf
Z. Sadeghipoor, Y. M. Lu, and S. Süsstrunk, “Optimal spectral sensitivity functions for single sensor color imaging,” in Proc. SPIE Conference on Digital Photography VIII, Burlingame, 2012.

A cost-effective and convenient approach for color imaging is to use a single sensor and mount a color filter array (CFA) in front of it, such that at each spatial position the scene information in one color channel is captured. To estimate the missing colors at each pixel, a demosaicing algorithm is applied to the CFA samples. Besides the filter arrangement and the demosaicing method, the spectral sensitivity functions of the CFA filters considerably affect the quality of the demosaiced image. In this paper, we extend the algorithm presented by Lu and Vetterli, originally proposed for designing the optimum CFA, to compute the optimum spectral sensitivities. The proposed algorithm solves a constrained optimization problem to find optimum spectral sensitivities and the corresponding linear demosaicing method. An important constraint for this problem is the smoothness of spectral sensitivities, which is imposed by modeling these functions as a linear combination of several smooth kernels. Simulation results verify the effectiveness of the proposed algorithm in finding optimal spectral sensitivity functions, which outperform measured camera sensitivity functions.

A. Agaskar and Y. M. Lu, “Uncertainty principles for signals defined on graphs: Bounds and characterizations,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, 2012.

The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of "spread" in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.

graph_icassp12.pdf
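
The two notions of spread can be evaluated directly. Following the general definitions sketched in the abstract (up to normalization conventions, which may differ from the paper's), the graph spread about a vertex u0 is the f^2-weighted mean squared geodesic distance to u0, and the spectral spread is the Rayleigh quotient f^T L f / f^T f. The toy example below compares a delta signal with a smooth signal on a cycle graph.

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplacian of the n-node cycle graph."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0
    return np.diag(W.sum(1)) - W

def graph_spread(f, u0, dist):
    """f^2-weighted mean squared geodesic distance to vertex u0."""
    return (dist[u0] ** 2 * f ** 2).sum() / (f ** 2).sum()

def spectral_spread(f, L):
    """Rayleigh quotient f^T L f / f^T f (graph-frequency localization)."""
    return (f @ L @ f) / (f @ f)

n, u0 = 64, 0
L = cycle_laplacian(n)
d = np.minimum(np.arange(n), n - np.arange(n))          # distances from vertex 0 on the cycle
dist = np.array([np.roll(d, i) for i in range(n)])      # dist[i, j] = geodesic distance d(i, j)

delta = np.zeros(n)
delta[u0] = 1.0                                         # perfectly localized in the vertex domain
smooth = np.cos(2 * np.pi * np.arange(n) / n)           # smooth, low-graph-frequency signal
for name, f in [("delta", delta), ("smooth", smooth)]:
    print(name, "graph spread:", round(graph_spread(f, u0, dist), 3),
          "spectral spread:", round(spectral_spread(f, L), 3))
```
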
See also the extended journal version: A. Agaskar and Y. M. Lu, “A Spectral Graph Uncertainty Principle,” IEEE Transactions on Information Theory, to appear.
Y. Xiong and Y. M. Lu, “Blind estimation and low-rate sampling of sparse MIMO systems with common support,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, 2012.

We present a blind estimation algorithm for multi-input and multi-output (MIMO) systems with sparse common support. Key to the proposed algorithm is a matrix generalization of the classical annihilating filter technique, which allows us to estimate the nonlinear parameters of the channels through an efficient and noniterative procedure. An attractive property of the proposed algorithm is that it only needs the sensor measurements at a narrow frequency band. By exploiting this feature, we can derive efficient sub-Nyquist sampling schemes which significantly reduce the number of samples that need to be retained at each sensor. Numerical simulations verify the accuracy of the proposed estimation algorithm and its robustness in the presence of noise.

mc_sparse_icassp2012.pdf
2011
A. Agaskar and Y. M. Lu, “An uncertainty principle for functions defined on graphs,” in Proc. SPIE Conf. on Wavelets and Sparsity, San Diego, 2011.

The classical uncertainty principle provides a fundamental tradeoff in the localization of a function in the time and frequency domains. In this paper we extend this classical result to functions defined on graphs. We justify the use of the graph Laplacian’s eigenbasis as a surrogate for the Fourier basis for graphs, and define the notions of "spread" in the graph and spectral domains. We then establish an analogous uncertainty principle relating the two quantities, showing the degree to which a function can be simultaneously localized in the graph and spectral domains.

graphuncert.pdf
See also the extended journal version: A. Agaskar and Y. M. Lu, “A Spectral Graph Uncertainty Principle,” IEEE Transactions on Information Theory, to appear.
Z. Sadeghipoor, Y. M. Lu, and S. Susstrunk, “Correlation-based joint acquisition and demosaicing of visible and near-infrared images,” in Proc. IEEE International Conference on Image Processing, Brussels, 2011.

Joint processing of visible (RGB) and near-infrared (NIR) images has recently found some appealing applications, which makes the joint capture of a pair of visible and NIR images an important problem. In this paper, we propose a new method to design color filter arrays (CFA) and demosaicing matrices for acquiring NIR and visible images using a single sensor. The proposed method modifies the optimum CFA algorithm proposed in [1] by taking advantage of the NIR/visible correlation in the design process. Simulation results show that by applying the proposed method, the quality of demosaiced NIR and visible images is increased by about 1 dB in peak signal-to-noise ratio over the results of the optimum CFA algorithm. It is also shown that better visual quality can be obtained by using the proposed algorithm.

visnir_icip11.pdf
Y. M. Lu and M. Vetterli, “Sparse spectral factorization: Unicity and reconstruction algorithms,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, 2011.

Spectral factorization is a classical tool in signal processing and communications. It also plays a critical role in X-ray crystallography, in the context of phase retrieval. In this work, we study the problem of sparse spectral factorization, aiming to recover a one-dimensional sparse signal from its autocorrelation. We present a sufficient condition for the recovery to be unique, and propose an iterative algorithm that can obtain the original signal (up to a sign change, time-shift and time-reversal). Numerical simulations verify the effectiveness of the proposed algorithm.

sparsesf_icassp11.pdf
Y. M. Lu, P. L. Dragotti, and M. Vetterli, “Localizing point sources in diffusion fields from spatiotemporal measurements,” in Proc. Int. Conf. Sampling Theory and Applications (SampTA), Singapore, 2011.

Consider a diffusion field induced by a finite number of localized and instantaneous sources. In this paper, we study the problem of estimating these sources (including their intensities, spatial locations, and activation time) from the spatiotemporal samples taken by a network of spatially distributed sensors. We propose two estimation algorithms, depending on whether the activation time of the sources is known. For the case of known activation time, we present an annihilating filter based method to estimate the Euclidean distances between the sources and sensors, which can be subsequently used to localize the sources. For the case of a single source but with unknown activation time, we show that the diffusion field at any spatial location is a scaled and shifted version of a common prototype function, and that this function is the unique solution to a particular differential equation. This observation leads to an efficient algorithm that can estimate the unknown parameters of the source by solving a system of linear equations. For both algorithms proposed in this work, the minimum number of sensors required is d + 1, where d is the spatial dimension of the field. This requirement is independent of the number of active sources.

point_diffusion_sampta11.pdf
