Monday, January 31, 2011

CS: Fault Identification via Non-parametric Belief Propagation, Recovery of Functions of many Variables via Compressive Sensing, Sparse Signal Recovery and Dynamic Update of the Underdetermined System, A Nonlinear Approach to Dimension Reduction, Heat Source Identification Based on L1 Constrained Minimization

 Has Compressive Sensing peaked? This is certainly not the impression you get from reading today's contributions.

Dror let me know of his new paper: Fault Identification via Non-parametric Belief Propagation by Danny Bickson, Dror Baron, Alexander Ihler, Harel Avissar, Danny Dolev. The abstract reads:
We consider the problem of identifying a pattern of faults from a set of noisy linear measurements. Unfortunately, maximum a posteriori probability estimation of the fault pattern is computationally intractable. To solve the fault identification problem, we propose a non-parametric belief propagation approach. We show empirically that our belief propagation solver is more accurate than recent state-of-the-art algorithms including interior point methods and semidefinite programming. Our superior performance is explained by the fact that we take into account both the binary nature of the individual faults and the sparsity of the fault pattern arising from their rarity.
The NBP solver is here.
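To get a feel for why exact MAP estimation is intractable here, note that it requires scoring every one of the 2^N binary fault patterns. Below is a minimal brute-force sketch of that setup for a tiny problem (all parameter values are made up for illustration; this is not the authors' NBP solver, which is precisely what avoids this exponential search):

```python
import numpy as np
from itertools import product

# Toy fault identification setup: x is a sparse binary fault pattern observed
# through noisy linear measurements y = A x + w. For tiny n the (generally
# intractable) MAP estimate can be computed by brute force, scoring each
# pattern by its Gaussian likelihood plus a sparsity-promoting Bernoulli prior.
rng = np.random.default_rng(7)
n, m, p_fault, sigma = 10, 6, 0.1, 0.1
A = rng.standard_normal((m, n))
x_true = (rng.random(n) < p_fault).astype(float)
y = A @ x_true + sigma * rng.standard_normal(m)

def neg_log_posterior(x):
    x = np.asarray(x, dtype=float)
    prior = np.sum(x) * np.log(1.0 / p_fault - 1.0)   # cost per active fault
    return np.sum((y - A @ x) ** 2) / (2 * sigma**2) + prior

x_map = min(product([0, 1], repeat=n), key=neg_log_posterior)  # 2^n candidates
print(np.asarray(x_map), x_true)
```

With n = 10 this enumerates 1,024 patterns; at n = 100 it would be 2^100, which is exactly the regime where a belief propagation approximation earns its keep.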


Next, Recovery of Functions of many Variables via Compressive Sensing. The abstract reads:
Recovery of functions of many variables from sample values usually suffers the curse of dimensionality: the number of required samples scales exponentially with the spatial dimension. In order to avoid this severe bottleneck, one needs to impose further structural properties of the function to be recovered apart from smoothness. Here, we build on ideas from compressive sensing and introduce a function model that involves “sparsity with respect to dimensions” in the Fourier domain. Using recent estimates on the restricted isometry constants of measurement matrices associated to randomly sampled trigonometric systems, we show that the number of required samples scales only logarithmically in the spatial dimension provided the function to be recovered follows the newly introduced high-dimensional function model.
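As a toy illustration of why such a model escapes the curse of dimensionality (my own sketch, not the paper's algorithm): a function of d variables that is a short sum of single-coordinate cosines can be identified from random samples using a candidate dictionary whose size is linear, not exponential, in d.

```python
import numpy as np

# Hypothetical example: d = 20 variables, but the function is a sum of s = 3
# cosines, each depending on a single coordinate ("sparsity with respect to
# dimensions"). We sample at m random points and recover the active
# (coordinate, frequency) pairs by orthogonal matching pursuit.
rng = np.random.default_rng(0)
d, s, m, n_freq = 20, 3, 80, 4

support = [(int(rng.integers(d)), int(rng.integers(1, n_freq + 1))) for _ in range(s)]
f = lambda x: sum(np.cos(k * x[..., j]) for j, k in support)

X = rng.uniform(0, 2 * np.pi, (m, d))     # random sample locations
y = f(X)

# One column per (coordinate, frequency) pair: d * n_freq columns in total.
atoms = [(j, k) for j in range(d) for k in range(1, n_freq + 1)]
A = np.column_stack([np.cos(k * X[:, j]) for j, k in atoms])

resid, sel = y.copy(), []                 # plain OMP
for _ in range(s):
    sel.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    resid = y - A[:, sel] @ coef
print(sorted(atoms[i] for i in sel), sorted(support))   # recovered vs. true
```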

Sparse Signal Recovery and Dynamic Update of the Underdetermined System by M. Salman Asif and Justin Romberg. The abstract reads:
Sparse signal priors help in a variety of modern signal processing tasks. In many cases, a sparse signal needs to be recovered from an underdetermined system of equations, for instance in the sparse approximation of a signal with an overcomplete dictionary or the reconstruction of a sparse signal from a small number of linear measurements. The reconstruction problem typically requires solving an ℓ1 norm minimization problem. In this paper we present homotopy based algorithms to update the solution of some ℓ1 problems when the system is updated by adding new rows or columns to the underlying system matrix. We also discuss a case where these ideas can be extended to accommodate more general changes in the system matrix.
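The paper tracks the exact ℓ1 solution path via homotopy; as a simpler illustration of why dynamic updates pay off at all, here is a sketch (my own, not the authors' algorithm) that warm-starts a plain ISTA solver when a new measurement row arrives:

```python
import numpy as np

def ista(A, y, lam, x0, n_iter=500):
    """Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1, started from x0."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(1)
m, n, k = 40, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = ista(A, y, lam=0.01, x0=np.zeros(n))

# A new measurement row arrives: rather than solving from scratch, warm-start
# from the previous solution -- far fewer iterations are needed.
a_new = rng.standard_normal(n) / np.sqrt(m)
A2, y2 = np.vstack([A, a_new]), np.append(y, a_new @ x_true)
x_updated = ista(A2, y2, lam=0.01, x0=x_hat, n_iter=50)
print(np.flatnonzero(np.abs(x_updated) > 1e-3))
```
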
A Nonlinear Approach to Dimension Reduction by Lee-Ad Gottlieb, Robert Krauthgamer. The abstract reads:
The ℓ2 flattening lemma of Johnson and Lindenstrauss [JL84] is a powerful tool for dimension reduction. It has been conjectured that the target dimension bounds can be refined and bounded in terms of the intrinsic dimensionality of the data set (for example, the doubling dimension). One such problem was proposed by Lang and Plaut [LP01] (see also [GKL03, Mat02, ABN08, CGT10]), and is still open. We prove another result in this line of work: the snowflake metric d^{1/2} of a doubling set S ⊂ ℓ2 can be embedded with arbitrarily low distortion into ℓ2^D, for dimension D that depends solely on the doubling constant of the metric. In fact, the target dimension is polylogarithmic in the doubling constant. Our techniques are robust and extend to the more difficult spaces ℓ1 and ℓ∞, although the dimension bounds here are quantitatively inferior to those for ℓ2.

Finally, Heat Source Identification Based on L1 Constrained Minimization. The abstract reads:
The inverse problem of finding sparse initial data from the solutions to the heat equation is considered. The initial data is assumed to be a sum of an unknown but finite number of Dirac delta functions at unknown locations. Point-wise values of the heat solution at only a few locations are used in an L1 constrained optimization to find such sparse initial data. A concept of domain of effective sensing is introduced to speed up the already fast Bregman iterative algorithm for L1 optimization. Furthermore, an algorithm which successively adds new measurements at intelligent locations is introduced. By comparing the solutions of the inverse problem that are obtained from different numbers of measurements, the algorithm decides where to add new measurements in order to improve the reconstruction of the sparse initial data.
I note from the paper:

In compressed sensing [7], we can solve an L0 problem by solving its L1 relaxation when the associated matrix has the restricted isometry property (RIP) [6]. The heat operator does not satisfy the RIP, but we can adopt the idea of substituting L0 with L1 for sparse optimization. We will show numerical results which indicate the effectiveness of this strategy. Our approach to this problem is to solve an L1 minimization problem with constraints. We apply the Bregman iterative method [9, 13] to solve the constrained problem as a sequence of unconstrained subproblems. To solve these subproblems, we use the greedy coordinate descent method developed in [11] for solving the L1 unconstrained problem, which was shown to be very efficient for sparse recovery.
It is nice to see this fact acknowledged.
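For readers who have not seen it, here is a minimal sketch of the Bregman iterative method mentioned in the excerpt, applied to a generic min ||u||_1 subject to Au = f problem (a toy stand-in; I use an ISTA inner solver rather than the paper's greedy coordinate descent):

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_unconstrained(A, f, mu, u0, n_iter=200):
    """Inner solver: min_u ||u||_1 + (mu/2)*||A u - f||^2 via ISTA."""
    L = mu * np.linalg.norm(A, 2) ** 2
    u = u0.copy()
    for _ in range(n_iter):
        u = shrink(u - mu * A.T @ (A @ u - f) / L, 1.0 / L)
    return u

def bregman(A, f, mu=50.0, n_outer=20):
    """Bregman iteration for min ||u||_1 s.t. A u = f: solve a sequence of
    unconstrained subproblems, adding the residual back to the data."""
    u, fk = np.zeros(A.shape[1]), f.copy()
    for _ in range(n_outer):
        u = l1_unconstrained(A, fk, mu, u)
        fk = fk + (f - A @ u)                # "add back the residual" step
    return u

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
u_true = np.zeros(100)
u_true[rng.choice(100, 4, replace=False)] = 1.0
print(np.flatnonzero(bregman(A, A @ u_true) > 0.5))   # recovered support
```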



An LANL/UCSD newsletter features some hardware performing some compressive sensing:

Embedded computing and sensing are entrenched in many facets of daily life. Embedded devices must be able to collect large amounts of data from multiple sources, and then present the user with an “executive summary” of the observations. A user can then use this distilled information to quickly plan a course of action. It is therefore imperative that new methods are explored for distilling data to a form that is suitable for a user. Furthermore, the prevalence of wireless communications demands that relevant information be pre-extracted from high-dimensional data in order to reduce bandwidth, memory, and energy requirements. Applications of interest to the EI such as structural health monitoring (SHM) and treaty verification typically require the collection of data from large arrays of wireless sensor nodes at high data rates. Data from different sensors must be combined in order to draw inferences about the state of the system under observation. The sensor nodes used to collect these data typically have severe data storage and energy constraints. Wireless transmission of data must be executed in a thoughtful manner and only the most relevant data should be committed to memory. Recently, compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that, by directly collecting a relatively small number of coefficients, it is possible to reconstruct the original measurement. The coefficients are derived from linear combinations of measurements. Conveniently, most signals found in nature are indeed sparse, with the notable exception of noise. The findings of the compressed sensing community hold great potential for changing the way data are collected. EI and ISR researchers (David Mascarenas, Chuck Farrar, Don Hush, James Thieler) have begun exploring the possibility of incorporating compressed sensing principles into SHM and treaty verification wireless sensor nodes. Advantages offered by compressed sensing include lower energy use for data collection and transmission, as well as reduced memory requirements. Compressed coefficients also have the interesting property that they are democratic, in the sense that no individual coefficient has any more information than any other. In this way they exhibit some robustness against data corruption. It is also worth noting that compressed sensing versions of conventional statistical signal processing techniques have been adopted by EI based on the use of smashed filters. The extension of statistical signal processing to the compressed domain helps facilitate the transition of the SHM strategies to take advantage of compressed sensing techniques. Currently, a digital version of the compressed sensor is being developed onboard a microcontroller (shown in the figure). This compressed sensor node is being tested for an SHM application requiring acceleration measurements, as well as a CO2 climate treaty verification application. The prototype compressed sensor is capable of collecting compressed coefficients from measurements and sending them to an off-board processor for reconstruction. In addition, the smashed filter has also been implemented onboard the embedded compressed sensor node. Preliminary results have shown that the smashed filter successfully distinguishes between the damaged and undamaged states using only 1/32 the number of measurements used in the conventional matched filter.
EI plans to extend compressive sensing to the mobile-host wireless sensing network architectures that have been studied in the past, as compressed sensing holds great promise for distilling data collected from wireless sensor networks.
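For the curious, the smashed filter mentioned above boils down to running a matched filter directly on the stored random projections, with no reconstruction step. A toy sketch (my own illustration; the templates and parameters are made up):

```python
import numpy as np

# The node keeps only M random projections of an N-sample record, and the
# smashed filter classifies by correlating candidate templates in the
# compressed domain, through the same projection matrix.
rng = np.random.default_rng(3)
N, M = 1024, 32                                      # 1/32 of the samples
Phi = rng.choice([-1.0, 1.0], (M, N)) / np.sqrt(M)   # cheap +/-1 projections

t = np.arange(N) / N
templates = {"undamaged": np.sin(2 * np.pi * 50 * t),
             "damaged":   np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)}

x = templates["damaged"] + 0.1 * rng.standard_normal(N)   # observed record
y = Phi @ x                                               # stored coefficients

# Nearest template after projecting through the same Phi.
scores = {name: -np.linalg.norm(y - Phi @ s) for name, s in templates.items()}
print(max(scores, key=scores.get))                        # -> "damaged"
```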

Sunday, January 30, 2011

Compressive Sensing Landscape version 0.2

Upon a commenter's remark on the obvious need for mentioning statistics, I just updated the mental picture of the communities involved in contributing to compressive sensing. This is version 0.2; does anybody have a better one?

Landscape - version 0.1

Here is a mental picture of the communities more or less involved in contributing to compressive sensing. This is version 0.1; does anybody have a better one?

Friday, January 28, 2011

CS: Op-ed, some videos, Compressive Sensing Using the Entropy Functional, A Message-Passing Receiver for BICM-OFDM

Every so often, I get an email asking me to link back to certain sites. In the latest instance, NB is listed in the "Rest of the Best" category, whatever that means. Here is what struck me: the definition:
Math blogs are a great way for professors, scientists, and researchers to convey the purpose of their work to a broader audience. Some bloggers offer an introduction to mathematical concepts and current research, while others post updates on their research and interests. The following 50 blogs are the cream of the crop: entertaining and informative posts that have something to offer for math geeks of all stripes.
All I know is that Nuit Blanche ain't:
  • about Math
  • about conveying my work most of the time.
  • there to offer an introduction to current research most of the time.
  • there to post updates on my interests.
It's a chronicle... and, from my point of view, a good way to broker important and honest discussions.

One of you just contacted me and wondered if this paper was an instance of compressive sensing. I'll have to read the paper first. In the meantime, we have several videos, two papers and two announcements for talks. Enjoy.

First, the videos. The last one isn't about CS, but rather about how to use a Kinect sensor to perform some SLAM computations, wow.

Compressive Estimation for Signal Integration in Rendering

Robustness of Compressed Sensing Parallel MRI in the Presence of Inaccurate Sensitivity Estimates

6D SLAM with RGB-D Data

The papers:
First, Compressive Sensing Using the Entropy Functional. The abstract reads:
In most compressive sensing problems the l1 norm is used during the signal reconstruction process. In this article the use of an entropy functional is proposed to approximate the l1 norm. A modified version of the entropy functional is continuous, differentiable and convex. Therefore, it is possible to construct globally convergent iterative algorithms using Bregman's row action D-projection method for compressive sensing applications. Simulation examples are presented.
I wonder when a solver will be available.

Next, A Message-Passing Receiver for BICM-OFDM. The abstract reads:
We propose a factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM). In contrast to existing designs, ours is capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths. In order to exploit these channel-tap structures, we adopt a two-state Gaussian mixture prior in conjunction with a Markov model on the hidden state. For loopy belief propagation, we exploit a "generalized approximate message passing" (GAMP) algorithm recently developed in the context of compressed sensing, and show that it can be successfully coupled with soft-input soft-output decoding, as well as hidden Markov inference, through the standard sum-product framework. For N subcarriers and M bits per subcarrier (and any channel length L < N), the resulting JCED-GAMP scheme has a computational complexity of only O(N log2 N + N 2^M). Numerical experiments show that our scheme yields BER performance within 1 dB of the known-channel bound and 4 dB better than decoupled channel-estimation-and-decoding via LASSO.

At UT, there is going to be a talk from somebody in industry on his use of compressed sensing and related techniques:

Detection of Patterns in Networks
ECE Seminar Series

Friday, February 4, 2011
12:00 - 2:00 PM
ENS 637

Dr. Randy C. Paffenroth

Numerica Corporation
Abstract

We discuss the development and application of a mathematical and computational framework for detecting and classifying weak, distributed patterns in sensor networks. The work being done at Numerica demonstrates the effectiveness of space-time inference on graphs, robust matrix completion and second order analysis in the detection and classification of distributed patterns that are not discernible at the level of individual nodes. Our focus is on cyber security scenarios where computer nodes (such as terminals, routers and servers) are sensors that provide measurements of packet rates, user activity, central processing unit usage, etc. When viewed independently, they cannot provide a definitive determination of the underlying pattern, but when fused with data from across the network – both spatially and temporally – the relevant patterns emerge. The clear underlying suggestion is that only detectors and classifiers that use a rigorous mathematical analysis of temporal measurements at many spatially distributed points in the network can identify network attacks. This research builds upon work in compressed sensing and robust matrix completion and is an excellent example of industry-academic collaboration.

whereas at the University of Illinois, there will be:

GE/IE 590 Seminar-Large-Scale Optimization with Applications in Compressed Sensing
Speaker Professor Fatma Kilinc-Karzan
Date Feb 3, 2011
Time 4:00 pm
Location 101 Transportation Building
Cost Free
Sponsor ISE
Contact Holly Tipsword
E-Mail tippy6@illinois.edu
Phone 217-333-2730
In this talk, we will cover some of the recent developments in large-scale optimization motivated by the compressed sensing paradigm. Sparsity plays a key role in dealing with high-dimensional data sets. Exploiting this fact, compressed sensing suggests a new paradigm by directly acquiring and storing only a few linear measurements (possibly corrupted with noise) of high-dimensional signals, and then using efficient ℓ1-recovery procedures for reconstruction. Successful applications of this theory range from MRI image processing to statistics and machine learning. This talk will have two main parts. In the first part, after presenting the necessary background from compressed sensing, we will show that prior results can be generalized to utilize a priori information given in the form of sign restrictions on the signal. We will investigate the underlying conditions allowing successful ℓ1-recovery of sparse signals and show that these conditions, although difficult to evaluate, lead to sufficient conditions that can be efficiently verified via linear or semidefinite programming. We will analyze the properties of these conditions, and describe their limits of performance. In the second part, we will develop efficient first-order methods with both deterministic and stochastic oracles for solving large-scale well-structured convex optimization problems. As an application of this theory, we will show how large-scale problems originating from compressed sensing fall into this framework. We will conclude with numerical results demonstrating the effectiveness of our algorithms.

Thursday, January 27, 2011

Around the blogs in 80 hours.


I have changed the underlying template of the blog in order to have a larger width for every entry. One can now also re-tweet entries seamlessly. Let me know if this improves the reading of the blog.

Here is a small selection of blog entries mixing both hardware and theoretical elements you have seen mentioned on Nuit Blanche.

Greg: MIT IAP '11 radar course SAR example, imaging with coffee cans, wood, and the audio input from your laptop 
Gustavo (Greg's student): Cleaner Ranging and Multiple Targets
Anand: Privacy and entropy (needs improvement)
Vladimir (ISW): Albert Theuwissen Reports from EI 2011 - Part 1 and 3D Sensing Forum at ISSCC 2011
Suresh

Jordan: Compressed sensing, compressed MDS, compressed clustering, and my talk tomorrow
Bob: Paper of the Day (Po'D): Performance Limits of Matching Pursuit Algorithms Edition
Terry: An introduction to measure theory
Arxiv blog: The Nuclear Camera Designed to Spot Hidden Radiation Sources 

This last entry pointed to this paper on arxiv that featured a position-sensitive radiation camera. I noted in this paper the following interesting graphs:

as they point to the ability to readily perform spectrum unfolding. More on this later.




Credit: NASA/JPL/University of Arizona (view from Mars Orbiter and view from the ground of the same scene)

Wednesday, January 26, 2011

Postdoc Position at EPFL on Compressed Sensing for Interferometric Imaging


Yves Wiaux just sent me the following:

POSTDOCTORAL POSITION at EPFL ON COMPRESSED SENSING FOR INTERFEROMETRIC IMAGING

Announcement: January 25 2011
Application Deadline: March 1 2011
Starting date: as soon as June 1 2011

A postdoctoral position on "Compressed sensing imaging techniques for radio interferometry" is available, as soon as in June 2011, at the Signal Processing Laboratories (SP Lab, splabs.epfl.ch) of EPFL. The opening relies on new funding obtained by Dr Yves Wiaux and Pierre Vandergheynst at the Swiss National Science Foundation (SNSF, www.snf.ch), with the aim of strengthening the activities of the BASP (lts2www.epfl.ch/~ywiaux/index-baspnode) research node in astrophysical signal processing, in collaboration with the Signal Processing Laboratory 2 (LTS2, lts2www.epfl.ch).

The position is open to any dynamic and highly qualified candidate with a Ph.D. in Electrical Engineering, Physics or equivalent, and with a strong background in signal processing, specifically in compressive sampling. Competence in programming (MATLAB, C) is required. Knowledge of signal processing on the sphere, or of radio-interferometric imaging, is a plus. The successful candidate will be in charge of the collaborations between the BASP node and the international radio astronomy community. In addition to his/her main activities, he/she will also be welcome to collaborate with other researchers of the BASP node and of the SP Lab on signal processing for magnetic resonance imaging.

The appointment is initially for one year, with the possibility of renewal based upon performance. Note that EPFL offers very attractive salaries.

Requests for further information and applications (including cover letter, CV, and three reference letters) should be sent to Dr Y. Wiaux directly by email (yves.wiaux[at]epfl.ch), ideally by March 1 2011 (though applications will be considered until the position is filled).

Thanks Yves..... and Pierre. It is now listed on the compressive sensing jobs page.


Image Credit: NASA/JPL/Space Science Institute
W00066496.jpg was taken on January 21, 2011 and received on Earth January 23, 2011. The camera was pointing toward SATURN at approximately 2,594,465 kilometers away, and the image was taken using the CL1 and BL1 filters.

Tuesday, January 25, 2011

CS: The really long post of the week: Lost in Translation, lots of papers and summer jobs

Today we have a really long post but first, I wanted to mention the following: 
* Bob's entry on Path-finding back to Copenhagen where he talks about the A* algorithm and greedy algorithm. 

Dataset B: Coded Exposure Images using Canon Camera and Ferro-Electric Shutter
(Download complete matlab code and all four input files)
  • Indoor Toy Train (118 pixel blur)
  • Outdoor Green Taxi (235 pixel blur)
  • Outdoor White Taxi (66 pixel blur)
  • Outdoor Car License Plate (60 pixel blur)


* Three of you kindly pointed out that the French newsletter of MATLAB came out with a Cleve's Corner article on compressed sensing. We mentioned it before, but what got my attention is the title "La note de Cleve - une reconstruction « magique » : la détection compressée". I am not happy with the translation, because before you do some detection, you really need to acquire a signal, a concept that seems to have been lost on MathWorks France. Translation mishaps are the reason I had asked Emmanuel for a translation of the term before it got to be maimed back into French. It's really "Acquisition Comprimée". I have written to the PR firm and to Cleve Moler, the founder of MathWorks. Let's see how much time it takes to remove this abomination. In the meantime, Cleve kindly responded with

Thanks.  I agree that Candes is particularly well qualified to make this decision.
 -- Cleve
 
But the newsletter still has the wrong translation, muuuhhh. I like Cleve and his software, but maybe we should call their software Mathlabs or something.

* While we are on the subject of language, I think I should be able to learn Spanish just by reading this introduction to compressed sensing (the whole document is Algoritmos y aplicaciones de Compressive Sensing by María José Díaz Sánchez), though I really need to hear someone speak it. It is fascinating how domain expertise really allows one to hop back and forth between languages. Maybe we should have a Rosetta stone explaining compressed sensing in many languages, including Indonesian.

* Stephane Chretien updated his page to include code for his recent paper. From his page:

Sparse recovery with unknown variance: a LASSO-type approach. (with Sebastien Darses) PDF. Submitted. An illustration using NESTAv1.1 of Lemma 5.3 about the support of the LASSO solution can be found here. The results can be checked using the modification of S. Becker and J. Bobin's Matlab code available here.

* Yesterday I mentioned a presentation by Vivek Goyal, Alyson Fletcher, Sundeep Rangan entitled The Optimistic Bayesian: Replica Method Analysis of Compressed Sensing

Now let's get on with the new papers that appeared on various webpages and on arxiv:

The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time spectrally-sparse signals. They extend the standard CS paradigm from sampling discrete, finite dimensional signals to sampling continuous and possibly infinite dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. Specifically, we show that the RD and the MWC are both based on the general concept of random filtering, but that the sampling functions characterising the systems differ significantly. We next demonstrate a previously unreported model sensitivity the MWC has to short duration signals that resembles the known sensitivity the RD has to nonharmonic tones. We also show that block convolution is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. This aspect is lacking in the RD, but we use it to propose a new CS based acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results.
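As a reminder of the structure being compared, here is a toy sketch of the idealized discrete-time RD front end (the accumulate-and-dump model; my own illustration): the input is multiplied by a pseudorandom ±1 chipping sequence and block-summed down to the sub-Nyquist rate.

```python
import numpy as np

rng = np.random.default_rng(6)
N, R = 512, 16                      # Nyquist-rate samples, downsampling factor
x = np.cos(2 * np.pi * 37 * np.arange(N) / N)       # a spectrally sparse input
chips = rng.choice([-1.0, 1.0], N)                  # random demodulation
y = (x * chips).reshape(-1, R).sum(axis=1)          # integrate and dump: N/R samples

# The equivalent measurement matrix acting on the length-N Nyquist signal --
# a banded, random-filtering structure rather than a dense random matrix.
Phi = np.zeros((N // R, N))
for i in range(N // R):
    Phi[i, i * R:(i + 1) * R] = chips[i * R:(i + 1) * R]
assert np.allclose(Phi @ x, y)
```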

In this paper, we consider power spectral density estimation of bandlimited, wide-sense stationary signals from sub-Nyquist sampled data. This problem has recently received attention from within the emerging field of cognitive radio for example, and solutions have been proposed that use ideas from compressed sensing and the theory of digital alias-free signal processing. Here we develop a compressed sensing based technique that employs multi-coset sampling and produces multi-resolution power spectral estimates at arbitrarily low average sampling rates. The technique applies to spectrally sparse and nonsparse signals alike, but we show that when the wide-sense stationary signal is spectrally sparse, compressed sensing is able to enhance the estimator. The estimator does not require signal reconstruction and can be directly obtained from a straightforward application of nonnegative least squares.
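The last sentence is worth unpacking. In a hedged toy version (my own; it ignores the multi-coset front end), the autocorrelation is linear in the nonnegative power spectrum, so a few estimated correlation lags plus nonnegative least squares yield a PSD estimate with no signal reconstruction:

```python
import numpy as np
from scipy.optimize import nnls

# r(tau) = sum_f p(f) * cos(2*pi*f*tau): the correlation lags are linear in
# the band powers p(f) >= 0, so NNLS recovers them directly.
rng = np.random.default_rng(4)
freqs = np.linspace(0.05, 0.45, 41)          # candidate frequency grid
p_true = np.zeros(freqs.size)
p_true[[8, 30]] = [1.0, 0.5]                 # two occupied bands

lags = np.arange(25)                         # correlation lags we estimate
G = np.cos(2 * np.pi * np.outer(lags, freqs))
r = G @ p_true + 0.01 * rng.standard_normal(lags.size)   # noisy lag estimates

p_hat, _ = nnls(G, r)                        # nonnegative least squares
print(np.flatnonzero(p_hat > 0.1))           # -> indices at or near 8 and 30
```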


Sparse Interactions: Identifying High-Dimensional Multilinear Systems via Compressed Sensing by Bobak Nazer and Robert D. Nowak. The abstract reads:
This paper investigates the problem of identifying sparse multilinear systems. Such systems are characterized by multiplicative interactions between the input variables with sparsity meaning that relatively few of all conceivable interactions are present. This problem is motivated by the study of interactions among genes and proteins in living cells. The goal is to develop a sampling/sensing scheme to identify sparse multilinear systems using as few measurements as possible. We derive bounds on the number of measurements required for perfect reconstruction as a function of the sparsity level. Our results extend the notion of compressed sensing from the traditional notion of (linear) sparsity to more refined notions of sparsity encountered in nonlinear systems. In contrast to the linear sparsity models, in the multilinear case the pattern of sparsity may play a role in the sensing requirements.

Structured sublinear compressive sensing via dense belief propagation by Wei Dai, Olgica Milenkovic, Hoa Vin Pham. The abstract reads:
Compressive sensing (CS) is a sampling technique designed for reducing the complexity of sparse data acquisition. One of the major obstacles for practical deployment of CS techniques is the signal reconstruction time and the high storage cost of random sensing matrices. We propose a new structured compressive sensing scheme, based on codes of graphs, that allows for a joint design of structured sensing matrices and logarithmic-complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with Orthogonal Matching Pursuit (OMP) methods. For more elaborate greedy reconstruction schemes, we propose a new family of dense list decoding belief propagation algorithms, as well as reinforced- and multiple-basis belief propagation algorithms. Our simulation results indicate that reinforced BP CS schemes offer very good complexity-performance tradeoffs for very sparse signal vectors.
The next abstract reads:
This paper presents an average case denoising performance analysis for the SP, CoSaMP and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random white Gaussian noise and has a K-sparse representation with respect to a known dictionary D. The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Similar RIP-based analysis was carried out previously for the Dantzig Selector (DS) and Basis Pursuit (BP). Past work also considered a mutual-coherence-based analysis of OMP and thresholding. The performance guarantees developed in this work resemble those obtained for the relaxation-based methods (DS and BP), suggesting that the performance is independent of the contrast and magnitude of the sparse representation entries, which is not true for OMP and thresholding.
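As a reminder, IHT, the simplest of the three algorithms analyzed, is just a gradient step followed by hard thresholding to the k largest entries. A minimal sketch:

```python
import numpy as np

def iht(A, y, k, n_iter=200):
    """Iterative Hard Thresholding: gradient step, then keep the k
    largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    for _ in range(n_iter):
        x = x + mu * A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-k]   # all but the k largest entries
        x[small] = 0.0
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.05 * rng.standard_normal(60)   # noisy measurements
print(np.flatnonzero(iht(A, y, k=5)))             # recovered support
```
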
Sparse Recovery, Kashin Decomposition and Conic Programming by Alexandre d'Aspremont. The abstract reads:
We produce relaxation bounds on the diameter of arbitrary sections of the l1 ball in R^n. We use these results to test conditions for sparse recovery.
CSSF MIMO RADAR: Low-Complexity Compressive Sensing Based MIMO Radar That Uses Step Frequency by Yao Yu, Athina P. Petropulu, H. Vincent Poor. The abstract reads:
A new approach is proposed, namely CSSF MIMO radar, which applies the technique of step frequency (SF) to compressive sensing (CS) based multi-input multi-output (MIMO) radar. The proposed approach enables high resolution range, angle and Doppler estimation, while transmitting narrowband pulses. The problem of joint angle-Doppler-range estimation is first formulated to fit the CS framework, i.e., as an L1 optimization problem. Direct solution of this problem entails high complexity as it employs a basis matrix whose construction requires discretization of the angle-Doppler-range space. Since high resolution requires fine space discretization, the complexity of joint range, angle and Doppler estimation can be prohibitively high. For the case of slowly moving targets, a technique is proposed that achieves significant complexity reduction by successively estimating angle-range and Doppler in a decoupled fashion and by employing initial estimates obtained via matched filtering to further reduce the space that needs to be digitized. Numerical results show that the combination of CS and SF results in a MIMO radar system that has superior resolution and requires far less data as compared to a system that uses a matched filter with SF.
A Novel Approach for Fast Detection of Multiple Change Points in Linear Models by Xiaoping Shi, Yuehua Wu, Baisuo Jin. The abstract reads:
A change point problem occurs in many statistical applications. If there exist change points in a model, it is harmful to make a statistical analysis without any consideration of the existence of the change points, and the results derived from such an analysis may be misleading. There is a rich literature on change point detection. Although many methods have been proposed for detecting multiple change points, using these methods to find multiple change points in a large sample does not seem feasible. In this article, a connection between multiple change point detection and variable selection through a proper segmentation of data sequence is established, and a novel approach is proposed to tackle the multiple change point detection problem via the following two key steps: (1) apply the recent advances in consistent variable selection methods such as SCAD, adaptive LASSO and MCP to detect change points; (2) employ a refinement procedure to improve the accuracy of change point estimation. Five algorithms are hence proposed, which can detect change points with much less time and more accuracy compared to those in the literature. In addition, an optimal segmentation algorithm based on residual sum of squares is given. Our simulation study shows that the proposed algorithms are computationally efficient with improved change point estimation accuracy. The new approach is readily generalized to detect multiple change points in other models such as generalized linear models and nonparametric models.

Peak Reduction and Clipping Mitigation by Compressive Sensing by Ebrahim B. Al-Safadi, Tareq Y. Al-Naffouri. The abstract reads:
  This work establishes the design, analysis, and fine-tuning of a Peak-to-Average-Power-Ratio (PAPR) reducing system, based on compressed sensing at the receiver of a peak-reducing sparse clipper applied to an OFDM signal at the transmitter. By exploiting the sparsity of the OFDM signal in the time domain relative to a pre-defined clipping threshold, the method depends on partially observing the frequency content of extremely simple sparse clippers to recover the locations, magnitudes, and phases of the clipped coefficients of the peak-reduced signal. We claim that in the absence of optimization algorithms at the transmitter that confine the frequency support of clippers to a predefined set of reserved-tones, no other tone-reservation method can reliably recover the original OFDM signal with such low complexity.
    Afterwards we focus on designing different clipping signals that can embed a priori information regarding the support and phase of the peak-reducing signal to the receiver, followed by modified compressive sensing techniques for enhanced recovery. This includes data-based weighted ℓ1 minimization for enhanced support recovery and phase-augmentation for homogeneous clippers, followed by Bayesian techniques.
    We show that using such techniques for a typical OFDM signal of 256 subcarriers and 20% reserved tones, the PAPR can be reduced by approximately 4.5 dB with a significant increase in capacity compared to a system which uses all its tones for data transmission and clips to such levels. The design is hence appealing from both capacity and PAPR reduction aspects.


Remarks on the Restricted Isometry Property in Orthogonal Matching Pursuit algorithm by Qun Mo, Yi Shen. The abstract reads:
  This paper demonstrates theoretically that if the restricted isometry constant $\delta_K$ of the compressed sensing matrix satisfies $$ \delta_{K+1} < \frac{1}{\sqrt{K}+1}, $$ then a greedy algorithm called Orthogonal Matching Pursuit (OMP) can recover a signal with $K$ nonzero entries in $K$ iterations. In contrast, matrices are also constructed with restricted isometry constant $$ \delta_{K+1} = \frac{1}{\sqrt{K}} $$ such that OMP can not recover $K$-sparse $x$ in $K$ iterations. This result shows that the conjecture given by Dai and Milenkovic is ture.

 I am sure they meant True, not ture.


Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements by Vijayaraghavan Thirumalai, Pascal Frossard. The abstract reads:
The distributed representation of correlated images is an important challenge in applications such as multi-view imaging in camera networks or low complexity video coding. This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the problem where images are encoded with compressed linear measurements, which are used for estimation of the correlation between images at decoder. We propose a geometry-based correlation model in order to describe the correlation between pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. These prominent visual features are estimated with a sparse approximation of a reference image by a dictionary of geometric basis functions. The corresponding features in the other images are then identified from the compressed measurements. The correlation model is given by the relative geometric transformations between corresponding features. We thus formulate a regularized optimization problem for the estimation of correspondences where the local transformations between images form a consistent motion or disparity map. Then, we propose an efficient joint reconstruction algorithm that decodes the compressed images such that they stay consistent with the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in video sequences or multi-view data. In addition, the proposed reconstruction strategy provides effective decoding performance that compares advantageously to distributed coding schemes based on disparity or motion learning and to independent coding solution based on JPEG-2000.
Efficient Image Reconstruction Under Sparsity Constraints with Application to MRI and Bioluminescence Tomography by Matthieu Guerquin-Kern, J.-C. Baritaux, Michael Unser. The abstract reads:
Most bioimaging modalities rely on indirect measurements of the quantity under investigation. The image is obtained as the result of an optimization problem involving a physical model of the measurement system. Due to the ill-posedness of the above problem, the impact of the noise on the reconstructed images must be controlled. The recent emphasis in biomedical image reconstruction is on regularization schemes that favor sparse solutions, which renders the optimization problem non-smooth. In this work, we show how step-size adaptation can be used to speed up the most recent multi-step algorithms (e.g. FISTA) employed in sparse image recovery. We present experiments in MRI and Fluorescence Molecular Tomography with specifically tailored step-adaptation strategies. Our results demonstrate the possibility of an order-of-magnitude speed enhancement over state-of-the-art algorithms.


In this paper we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted ℓ1 minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted ℓ1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that when i.i.d. random Gaussian measurement matrices are used, the weighted ℓ1 minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted ℓ1 minimization outperforms the regular ℓ1 minimization substantially. We also generalize our results to signal vectors with an arbitrary number of subclasses for sparsity.

Penalized least squares regression is often used for signal denoising and inverse problems, and is commonly interpreted in a Bayesian framework as a Maximum A Posteriori (MAP) estimator, the penalty function being the negative logarithm of the prior. For example, the widely used quadratic program (with an $\ell^1$ penalty) associated to the LASSO / Basis Pursuit Denoising is very often considered as MAP estimation under a Laplacian prior in the context of additive white Gaussian noise (AWGN) reduction. This paper highlights the fact that, while this is {\em one} possible Bayesian interpretation, there can be other equally acceptable Bayesian interpretations. Therefore, solving a penalized least squares regression problem with penalty $\phi(x)$ need not be interpreted as assuming a prior $C\cdot \exp(-\phi(x))$ and using the MAP estimator. In particular, it is shown that for {\em any} prior $P_X$, the minimum mean square error (MMSE) estimator is the solution of a penalized least square problem with some penalty $\phi(x)$, which can be interpreted as the MAP estimator with the prior $C \cdot \exp(-\phi(x))$. Vice-versa, for {\em certain} penalties $\phi(x)$, the solution of the penalized least squares problem is indeed the MMSE estimator, with a certain prior $P_X$. In general $dP_X(x) \neq C \cdot \exp(-\phi(x))dx$.
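To make the first interpretation concrete in the AWGN setting the abstract refers to (my notation): with $y = x + w$, $w \sim \mathcal{N}(0, \sigma^2 I)$ and prior $p_X(x) = C\cdot\exp(-\phi(x))$, the MAP estimator is exactly a penalized least squares problem,

$$\hat{x}_{\mathrm{MAP}}(y) = \arg\max_{x}\; p_X(x)\, p_W(y - x) = \arg\min_{x}\; \frac{1}{2\sigma^2}\,\|y - x\|_2^2 + \phi(x),$$

and the Laplacian prior $\phi(x) = \lambda\|x\|_1$ yields the LASSO / Basis Pursuit Denoising objective. The paper's point is that the same variational form can also arise as the MMSE estimator under a different prior.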

Sparsity Equivalence of Anisotropic Decompositions by Gitta Kutyniok. The abstract reads:
Anisotropic decompositions using representation systems such as curvelets, contourlets, or shearlets have recently attracted significantly increased attention due to the fact that they were shown to provide optimally sparse approximations of functions exhibiting singularities on lower dimensional embedded manifolds. The literature now contains various direct proofs of this fact and of related sparse approximation results. However, it seems quite cumbersome to prove such a canon of results for each system separately, while many of the systems exhibit certain similarities. In this paper, with the introduction of the concept of sparsity equivalence, we aim to provide a framework which allows categorization of the ability for sparse approximations of representation systems. This framework, in particular, enables transferring results on sparse approximations from one system to another. We demonstrate this concept for the example of curvelets and shearlets, and discuss how this viewpoint immediately leads to novel results for both systems.

Behind a paywall:

Automatic Container Code Recognition Using Compressed Sensing Method by Chien-Cheng Tseng and Su-Ling Lee. The abstract reads:
In this paper, an automatic container code recognition method is presented by using compressed sensing (CS). First, the compressed sensing approach which uses the constrained L1 minimization method is reviewed. Then, a general pattern recognition framework based on CS theory is described. Next, the CS recognition method is applied to construct an automatic container code recognition system. Finally, the real-life images provided by trading port of Kaohsiung are used to evaluate the performance of the proposed method.
Here is a list of papers that caught my attention as they were directly or indirectly related to compressive sensing:

Computational Cameras: Approaches, Benefits and Limits by Shree K. Nayar. The abstract reads:
A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras have been demonstrated - some designed to achieve new imaging functionalities, and others to reduce the complexity of traditional imaging. In this article, we describe how computational cameras have evolved and present a taxonomy for the technical approaches they use. We explore the benefits and limits of computational imaging, and discuss how it is related to the adjacent and overlapping fields of digital imaging, computational photography and computational image sensors.
We consider a generalization of the classical group testing problem. Let us be given a sample contaminated with a chemical substance. We want to estimate the unknown concentration c of this substance in the sample. There is a threshold indicator which can detect whether the concentration is at least a known threshold. We consider both the case when the threshold indicator does not affect the tested units and the more difficult case when the threshold indicator destroys the tested units. For both cases, we present a family of efficient algorithms each of which achieves a good approximation of c using a small number of tests and of auxiliary resources. Each member of the family provides a different tradeoff between the number of tests and the use of other resources involved by the algorithm. Previously known algorithms for this problem use more tests than most of our algorithms do.




Identification and Classification Problems on Pooling Designs for Inhibitor Models by Huilan Chang, Hong-Bin Chen, Hung-Lin Fu. The abstract reads:

Pooling designs are common tools to efficiently distinguish positive clones from negative clones in clone library screening. In some applications, there is a third type of clones called "inhibitors" whose effect is in a sense to obscure the positive clones in pools. Various inhibitor models have been proposed in the literature. We address the inhibitor problems of designing efficient nonadaptive procedures for both identification and classification problems, and improve previous results in three aspects:
* The algorithm that is used to identify the positive clones works on a more general inhibitor model and has a polynomial-time decoding procedure that recovers the set of positives from the knowledge of the outcomes.
* The algorithm that is used to classify all clones works in one-stage, i.e., all tests are arranged in advance without knowing the outcomes of other tests, along with a polynomial-time decoding procedure.
* We extend our results to pooling designs on complexes where the property to be screened is defined on subsets of biological objects, instead of on individual ones.

Group testing is a search paradigm where one is given a population S of n elements and an unknown subset P ⊆ S of defective elements, and the goal is to determine P by performing tests on subsets of S. In classical group testing a test on a subset Q ⊆ S receives a YES response if Q ∩ P ≠ ∅, and a NO response otherwise. In group testing with inhibitors (GTI), identifying the defective items is more difficult due to the presence of elements called inhibitors that interfere with the queries so that the answer to a query is YES if and only if the queried group contains at least one defective item and no inhibitor. In the present article, we consider a new generalization of the GTI model in which there are two unknown thresholds h and g and the response to a test is YES both in the case when the queried subset contains at least one defective item and less than h inhibitors, and in the case when the queried subset contains at least g defective items. Moreover, our search model assumes that no knowledge on the number |P| of defective items is given. We derive lower bounds on the minimum number of tests required to determine the defective items under this model and present an algorithm that uses an almost optimal number of tests.

This paper studies the "explanation problem" for tree- and linearly-ordered array data, a problem motivated by database applications and recently solved for the one-dimensional tree-ordered case. In this paper, one is given a matrix A whose rows and columns have semantics: special subsets of the rows and special subsets of the columns are meaningful, others are not. A submatrix in A is said to be meaningful if and only if it is the cross product of a meaningful row subset and a meaningful column subset, in which case we call it an "allowed rectangle." The goal is to "explain" A as a sparse sum of weighted allowed rectangles. Specifically, we wish to find as few weighted allowed rectangles as possible such that, for all i,j, a_{ij} equals the sum of the weights of all rectangles which include cell (i,j).
In this paper we consider the natural cases in which the matrix dimensions are tree-ordered or linearly-ordered. In the tree-ordered case, we are given a rooted tree T1 whose leaves are the rows of A and another, T2, whose leaves are the columns. Nodes of the trees correspond in an obvious way to the sets of their leaf descendants. In the linearly-ordered case, a set of rows or columns is meaningful if and only if it is contiguous.
For tree-ordered data, we prove the explanation problem NP-Hard and give a randomized 2-approximation algorithm for it. For linearly-ordered data, we prove the explanation problem NP-Hard and give a 2.56-approximation algorithm. To our knowledge, these are the first results for the problem of sparsely and exactly representing matrices by weighted rectangles.

Asynchronous Code-Division Random Access Using Convex Optimization by Lorne Applebaum, Waheed U. Bajwa, Marco F. Duarte, Robert Calderbank. The abstract reads:
Many applications in cellular systems and sensor networks involve a random subset of a large number of users asynchronously reporting activity to a base station. This paper examines the problem of multiuser detection (MUD) in random access channels for such applications. Traditional orthogonal signaling ignores the random nature of user activity in this problem and limits the total number of users to be on the order of the number of signal space dimensions. Contention-based schemes, on the other hand, suffer from delays caused by colliding transmissions and the hidden node problem. In contrast, this paper presents a novel asynchronous (non-orthogonal) code-division random access scheme along with a convex optimization-based MUD algorithm that overcomes the issues associated with orthogonal signaling and contention-based methods. Two key distinguishing features of the proposed algorithm are that it does not require knowledge of the delay or channel state information of every user and it has polynomial-time computational complexity. The main analytical contribution of this paper is the relationship between the performance of the proposed MUD algorithm and two simple metrics of the set of user codewords. The study of these metrics is then focused on two specific sets of codewords, random binary codewords and specially constructed algebraic codewords, for asynchronous random access. The ensuing analysis confirms that the proposed scheme together with either of these two codeword sets significantly outperforms the orthogonal signaling-based random access in terms of the total number of users in the system.
Following are announcements for three talks. Some have passed, but what is important are the abstracts of the talks:

iCME Colloquium, Resolution Analysis for Compressive Imaging
Information

* Wednesday, March 2, 2011
* 4:15 pm - 5:05 pm
* Bldg. 380, Rm. 380/380X

Albert Fannjiang (University of California, Davis)
In this talk we will focus on the issue of resolution when compressed sensing techniques are applied to imaging and inverse problems. The physical process of measurement consists of illumination/emission, wave propagation and reception/sampling, giving rise to measurement matrices more structured than those typically studied in the compressed sensing literature. We will show how to achieve the diffraction limit in diffraction tomography, inverse scattering and radio interferometry by using random sampling. We will also discuss the superresolution effect resulting from near-field measurements, random illumination and multiple measurements in conjunction with random sampling.
Fast dimensionality reduction: improved bounds and implications for compressed sensing
Colloquium | January 20, 2011 | 4:10-5 p.m. | 60 Evans Hall
Speaker: Rachel Ward, Courant Institute, New York University. Sponsor: Department of Mathematics
Embedding high-dimensional data sets into subspaces of much lower dimension is important for reducing storage cost and speeding up computation in several applications, including numerical linear algebra, manifold learning, and computer science. The relatively new field of compressed sensing is based on the observation that if the high-dimensional data are sparse in a known basis, they can be embedded into a lower-dimensional space in a manner that permits their efficient recovery through l1-minimization. We first give a brief overview of compressed sensing, and discuss how certain statistical procedures like cross validation can be naturally incorporated into this set-up. The latter part of the talk will focus on a near equivalence of two fundamental concepts in compressed sensing: the Restricted Isometry Property and the Johnson-Lindenstrauss Lemma; as a consequence of this result, we can improve on the best-known bounds for dimensionality reduction using structured, or fast linear embeddings. Finally, we discuss the Restricted Isometry Property for structured measurement matrices formed by subsampling orthonormal polynomial systems, and high-dimensional function approximation from a few samples.
Event Contact: 510-642-6550

Speaker: Qiyu Sun, Approximation Property in Compressive Sampling
Wednesday, January 19, 2011
2:00 PM to 4:00 PM
3076 Duncan Hall
Rice University
6100 Main St
Houston, Texas, USA

The null space property and the restricted isometry property for a measurement matrix are two basic properties in compressive sampling, and are closely related to the sparse approximation. In this talk, we introduce the sparse approximation property for a measurement matrix, a weaker version of the restricted isometry property and a stronger version of the null space property. We show that the sparse approximation property for a measurement matrix could be an appropriate condition to consider stable recovery of any compressible signal from its noisy measurements.

Finally, some summer jobs:

Internship Opportunity (2011 summer) at Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA (http://www.merl.com/)

Posting Date: Jan 23, 2011

Internship Positions: Multiple

MERL is the North American corporate research and development organization of the Mitsubishi Electric Corporation (MELCO). Based in Cambridge, MA, MERL offers a challenging and stimulating research environment and opportunity to pursue independent projects and receive authorship on publications. The imaging group at MERL does research concerning information capture/extraction/analysis and display including computer vision, computational photography, multi-projector systems, image processing, machine learning and 3D graphics & modeling.

MERL invites applications for summer internships in 2011 in the imaging group. Graduate students pursuing a Ph.D. degree in EE, CS or a related discipline are encouraged to apply. Publications at the flagship conferences (ICCV/CVPR/SIGGRAPH) as a result of the internship are highly encouraged. Development will be a mix of prototyping in Matlab and/or C/C++, and strong programming skills are required.

Specific research topics of interest include sparse/dense 3D reconstruction, stereo and multi-view stereo, SLAM, structured light scanning, wide-angle imaging using catadioptric sensors and 1D signal processing on DSPs. Applicants are expected to be familiar with the fundamentals of computer vision and image/video processing, along with sufficient knowledge in these areas. Prior publication in these areas is a plus.

Please submit resumes to Amit Agrawal. For more information about past projects, please visit www.amitkagrawal.com

Monday, January 24, 2011

"...I found this idea of CS sketchy,..."

Yesterday I stated the following:

"...Anyway, in seismology it was known since at least 1973 that l_1 minimization was promoting sparsity but nobody really knew why and what to make of it in the sense that none of the sensors were modified as a result of this finding. What the papers of  Tao, Candes, Romberg [1] and Donoho [2] did was give a sense of the kind of acquisition that would be acceptable ( measurement matrix satisfying RIP, KGG....) which is the reason we can contemplate building hardware..."

 I added the emphasis after a commenter stated:

Dear Igor

"but nobody really knew why and what to make of it in the sense that none of the sensors were modified as a result of this finding"

I respectfully disagree
1) L1 in seismic was for deconvolution, nothing to do with CS.
2) The "Golden Oldies" section of Donoho's publications is worth reading, in particular "Superresolution via Sparsity Constraints", "Uncertainty Principles and Signal Recovery". It is quite clear that even in 1990 Donoho's understanding of L1 (and other sparsity promoting functionals like entropy) and its relation with L0 was almost perfect.

To which I responded:

Hello,

You seem to be arguing about the first part of the sentence, but are you sure you are arguing about the second part of that sentence (which is really the point I was making)? Namely:

"..in the sense that none of the sensors were modified as a result of this finding.."

Cheers,

I realize that most of you in the signal processing community will feel strongly about what I am about to say, but this is mostly because you have had a very raw deal all these years. Other people take data and then tell you: please clean it up, I'll come back in the morning to pick it up when you're done. Would you want your kids to have that kind of a job? It reminds me of being a thermal engineer in the space business; they would always lament being the last ones to add anything to the design, as it generally amounts to putting wrinkled cloth on top of something. At dinner parties, you have a dry martini in one hand and you look sharp and/or gorgeous while people laugh at your cynical view of what you are doing. You are so funny being the center of attention, but deep down you feel dirty just explaining it.

Since this is the second time I have seen this type of blockage (the first one was here), let me be forthright about what I think on the matter:
Wavelets alone are not important
l_1 solvers promoting sparsity alone are not important

However, these unimportant pieces of the puzzle crystallize into something important because of these two papers [1][2]. I realize that l_1's promotion of sparsity was discovered empirically beforehand, I realize that wavelets were developed in the late 1980s, and I realize that they both made the work of signal processing people tremendously easier, but neither of these findings amounts to much unless you can change the sensors and the way we (literally) see the world. I could go further:

Fractals alone are not important
Structured sparsity solvers alone are not important
....


[1] E. J. Candès, J. Romberg and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59:1207-1223, 2006. (pdf)
[2] D. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289-1306, 2006.

Sunday, January 23, 2011

Islands of Knowledge

[Yesterday's poll is at the end of this entry.] [Update: I have highlighted the most important part of a sentence.]

On the NIPS videos entry, I said:

"...Of particular interest is a question asked to Francis Bach at the end of his presentation where it looks like known bounds are O(p^6) whereas is own empirical experience seems to show an O(p^2) bound, mmmuh, a little bit like what happened in the 1970s' when the seismic folks thought the l_1 norm was a good proxy for l_0........"

To which a youngster commented:
"...I wonder if you'd be kind enough to elucidate what happened in the 70s for those of us not old enough to remember, and not well-versed enough in seismology?.."

I don't think I qualify for either of these statements, but I'll take them to mean that I know everything, muaaaahahahahahahah... Anyway, in seismology it was known since at least 1973 that l_1 minimization was promoting sparsity but nobody really knew why and what to make of it in the sense that none of the sensors were modified as a result of this finding. What the papers of Tao, Candes, Romberg [1] and Donoho [2] did was give a sense of the kind of acquisition that would be acceptable (measurement matrices satisfying RIP, KGG, ...), which is the reason we can contemplate building hardware. From [3]:

[Figure from [3] not reproduced here.]

[1] E. J. Candès, J. Romberg and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59:1207-1223, 2006. (pdf)
[2] D. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289-1306, 2006.
[3] Vivek Goyal, Alyson Fletcher, Sundeep Rangan. The Optimistic Bayesian: Replica Method Analysis of Compressed Sensing.
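
For readers who would like to see the l_1-promotes-sparsity phenomenon for themselves, here is a minimal sketch in Python (mine, not taken from [1], [2] or [3]; the dimensions and random seed are arbitrary). It draws a Gaussian measurement matrix, which is known to satisfy the RIP with high probability, and solves basis pursuit recast as a linear program:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                        # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)        # k-sparse ground truth

A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian matrix: RIP with high probability
b = A @ x_true                              # m << n noiseless measurements

# Basis pursuit, min ||x||_1 subject to Ax = b, rewritten as a linear
# program via the split x = u - v with u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(x_hat - x_true).max())

With these proportions (80 measurements for an 8-sparse vector of dimension 256), the error typically sits at the solver's tolerance; shrink m and the l_1 solution eventually stops matching the sparse one, which is precisely the regime the RIP-type conditions delineate.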

Saturday, January 22, 2011

Question of the Day: Does this patent cover a hardware-based computational photography solution?

In Is Turbulence Ripe for Compressive Imaging?, I suggested that an imager mounted on airplanes could be used to detect microbursts. Boeing seems to have filed a patent for a similar idea (Boeing invention lets planes spot dangerous turbulence) on July 17, 2009.

Now here is my generic question (you can answer in the poll below): is this claim (a good idea, by the way) too tied to a specific technology, or would a computational photography or random lens imager avoid fitting the description of the system in the claims? (I am specifically thinking of claim 3.)

"......1. A clear air turbulence detection system, comprising:an image capturing element;a lens having a focal length adapted to focus a scene onto said image capturing element such that a combination of said lens and said image capturing element are adapted to optically resolve a visual distortion of a feature in said scene due to turbulence; and a  processor adapted to process an image of said scene from said image capturing element, said processor adapted to compare a plurality of said images to detect a change in refraction of light received from said feature due to turbulence and produce an indication of an area of turbulence.

2. The clear air turbulence detection system of claim 1, wherein said image capturing element is a CCD camera.

3. The clear air turbulence detection system of claim 1, wherein said lens is a telephoto lens.

4. The clear air turbulence detection system of claim 1, wherein said combination of said lens and said image capturing element is adapted to have a minimum resolving capability of between approximately 10 microradians of angle and approximately 100 microradians of angle.

5. The clear air turbulence detection system of claim 1, wherein the clear air turbulence system is mounted on a moving platform and wherein said processor is further adapted to transform a scene in a first image relative to a scene in a second image such that a feature in said first image approximates a position, a size and an orientation of said feature in said second image.

6. The clear air turbulence detection system of claim 5, wherein said processor is adapted to transform said scene in said first image using a transformation selected from the group consisting of translating said scene, scaling said scene, rotating said scene, and projecting said scene.

7. The clear air turbulence detection system of claim 5, further comprising: an inertial navigation device providing an orientation data of said feature; and a global positioning system data providing a position of said moving platform, and wherein said processor is adapted to process said global positioning system data and said orientation data to query a geographic information system source for an image data of said feature, and wherein said processor is adapted to compare an image of said feature captured by said image capturing element with said image data of said feature from said geographic information system source to detect a change in spatial information content of said feature due to an area of turbulence.

8. The clear air turbulence detection system of claim 1, wherein said processor detects said change in refraction of said feature due to an area of turbulence by performing a power spectral density operation selected from the group consisting of measuring a decrease in a high spatial frequency content of a measured power spectral density of a feature of an image compared with a calculated power spectral density of said feature of said image, and an increase in a high frequency content of a measured temporal power spectral density of a feature over a plurality of images compared with a calculated power spectral density of said feature over said plurality of images.

9. The clear air turbulence detection system of claim 1, wherein said processor detects turbulence by detecting angular blurring in an image.

10. The clear air turbulence detection system of claim 1, wherein said processor detects turbulence by detecting temporal flickering between images.

11. The clear air turbulence detection system of claim 1, wherein said processor detects a range to said turbulence by tracking a size of an area of scintillation through a plurality of images to determine a rate of change in an angular extent of said area of scintillation.

12. The clear air turbulence detection system of claim 1, wherein said turbulence indication is selected from the group consisting of an audible alarm, a graphical display of said turbulence, a graphical overlay superimposed on a real-time image, an input to an autopilot navigation system, and a broadcasted electronic message.

13. A method of detecting clear air turbulence, comprising: capturing a first image of a scene; capturing a second image consisting essentially of said scene; selecting a feature present in said first image and said second image; registering said first image to said second image such that said feature in said first image has approximately a same position, scale, and orientation as said feature in said second image; comparing said feature in said first image with said feature in said second image to determine a change to a portion of said feature between said first image and said second image; and displaying a turbulence indication based upon said change to said feature.

14. The method of detecting clear air turbulence of claim 13, wherein said first image and second image are adapted to resolve a minimum angular resolution of approximately 10 microradians to approximately 100 microradians.

15. The method of detecting clear air turbulence of claim 13, further comprising: removing a change to a portion of said feature caused by moving objects.

16. The method of detecting clear air turbulence of claim 13, further comprising: detecting an angular blurring in an image; and displaying a turbulence indication based upon said angular blurring.

17. The method of detecting clear air turbulence of claim 13, further comprising: detecting a temporal flickering in a set of images; and displaying a turbulence indication based upon said temporal flickering.

18. The method of detecting clear air turbulence of claim 13, wherein said change to a portion of said feature is an increase in a high temporal frequency content of a power spectrum distribution of said feature.

19. The method of detecting clear air turbulence of claim 13, further comprising: computing a power spectral density of said feature of said second image; computing a power spectral density of said feature of said first image; and comparing said power spectral density of said first image and said second image to determine said turbulence indication.

20. The method of detecting clear air turbulence of claim 13, further comprising: receiving an orientation data of said orientation of said scene of said first image relative to said feature; receiving a position data of a distance to said feature of said first image; and transforming said scene of said second image to have approximately a same position, scale, and orientation as said feature in said first image.

21. The method of detecting clear air turbulence of claim 20, further comprising: querying a geographic information service for a data correlating to said feature; and computing said feature of said second image from the data of said geographic information service.

22. The method of detecting clear air turbulence of claim 13, wherein said displaying a turbulence indication is selected from the group consisting of playing an audible alarm, displaying a graphical display of said turbulence, superimposing a graphical overlay on a real-time image, sending a message to an autopilot navigation system, and broadcasting an electronic message.

23. An aircraft with a turbulence detection system, comprising: a turbulence detection system, further comprising a camera system and an image processing system; an aircraft, said aircraft adapted to mount said camera system; and a turbulence alerting system displaced in a cockpit of said aircraft, said turbulence alerting system in electronic communication with said image processing system.

24. The aircraft of claim 23, wherein said camera system further comprises: a CCD camera having a pixel size and focal length adapted to resolve visual distortions in a scene imaged by said CCD camera that are caused by turbulent air.

25. The aircraft of claim 24, wherein said image processing system is adapted to receive a plurality of images of said scene from said camera system, process said plurality of images, and produce an indication of turbulence.

26. The aircraft of claim 25, wherein said image processing system is adapted to select a selected image from said plurality of images, to select a reference image with which to compare said selected image, to perform a geometric image transformation selected from the group consisting of translating a scene, scaling a scene, rotating a scene, and projecting a scene, said geometric image transformation registering scenes from two different perspectives with one another, and to compare said selected image to said reference image to produce said indication of turbulence.

27. The aircraft of claim 26, wherein said reference image is selected from the group consisting of a second selected image from said plurality of images, a previously stored image of the scene, an image of the scene from a geographic information system, and an image of a feature in the scene from a geographic information system.

28. The aircraft of claim 25, wherein said image processing system detects a change selected from the group consisting of an angular blurring of a feature in an image, a temporal flickering of a feature in a set of images, a change in size of an area of scintillation of a feature in an image, a change in high temporal frequency content of a feature in an image, and a change in power spectral density of a feature in an image.

29. The aircraft of claim 23, wherein said turbulence alerting system receives said indication of turbulence from said image processing system, and said turbulence alerting system is selected from the group consisting of an audible alarm, a graphical display of said turbulence, a graphical overlay superimposed on a real-time image, an input to an autopilot navigation system, and a broadcasted electronic message...."
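
Claims 5, 6, and 13 hinge on registering one frame to another before comparing features. As a toy illustration of that registration step (my own sketch, not Boeing's implementation; a real system would also need the scaling, rotation, and projection of claim 6), here is an integer-translation registration by phase correlation, in Python with numpy:

import numpy as np

def estimate_shift(ref, img):
    # Phase correlation: the peak of the inverse FFT of the normalized
    # cross-power spectrum gives the translation between the two frames.
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-12           # keep only the phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Map peaks in the wrapped half of the array to negative shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx                            # np.roll(img, (dy, dx), (0, 1)) ~ ref

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, (3, -5), axis=(0, 1))     # second frame, shifted by (3, -5)
print(estimate_shift(ref, img))              # -> (-3, 5), the aligning shift

Once the frames are registered, the comparison itself can happen in the frequency domain, which is where the next sketch comes in.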
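
Claims 8, 18, and 19 then compare the power spectral density of a feature across frames: turbulence shows up as extra high-frequency temporal content (flicker). Here is a crude stand-in (again my own sketch, not the patented method; the frame rate, cutoff frequency, and threshold ratio are made-up parameters), in Python with numpy:

import numpy as np

def high_freq_energy(trace, fs=30.0, f_cut=5.0):
    # Temporal PSD energy of an intensity trace above f_cut Hz.
    trace = np.asarray(trace, dtype=float)
    trace = trace - trace.mean()                    # drop the DC term
    psd = np.abs(np.fft.rfft(trace)) ** 2           # one-sided power spectrum
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return psd[freqs > f_cut].sum()

def flicker_detected(frames, reference_frames, patch, ratio=2.0):
    # Flag turbulence-like flicker when the high-frequency temporal content
    # of a feature patch exceeds `ratio` times that of the same patch in a
    # calm reference sequence. frames: (T, H, W) grayscale stacks.
    rs, cs = patch
    measured = frames[:, rs, cs].mean(axis=(1, 2))        # mean intensity per frame
    reference = reference_frames[:, rs, cs].mean(axis=(1, 2))
    return high_freq_energy(measured) > ratio * high_freq_energy(reference)

# Synthetic check: a calm sequence vs. one with an 8 Hz flicker added.
rng = np.random.default_rng(0)
t = np.arange(64) / 30.0                                  # 64 frames at 30 fps
calm = rng.normal(100.0, 0.5, (64, 32, 32))
turb = calm + 5.0 * np.sin(2 * np.pi * 8.0 * t)[:, None, None]
print(flicker_detected(turb, calm, (slice(8, 24), slice(8, 24))))  # True

Whether such a software pipeline behind a generic imager, or a random lens imager in front of it, would still read on claims written around a telephoto lens and a CCD is exactly the question of the poll.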







