Advances in Structured Compressed Sensing

Reference format: Liu Fang, Wu Jiao, Yang Shuyuan, Jiao Licheng. Advances in structured compressed sensing. Acta Automatica Sinica, 2013, 39(12): 1980-1995. Supported by the 111 Project (B07048), the Program for Changjiang Scholars and Innovative Research Team in University (IRT1170), the Research Fund for the Doctoral Program of Higher Education (20110203110006), and the Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education. Compressed observation, sparse representation, and signal optimization and reconstruction are the three basic modules of CS and the three important directions of CS theory research. Sparsity of the signal is the prerequisite of CS, incoherent observation is the key to CS, and nonlinear optimization is the means by which CS reconstructs the signal.

2 Low-Dimensional Structural Models of Signals. Generally speaking, models containing prior knowledge are very helpful for finding or processing the signals we are interested in, and the signals we study often have a variety of underlying low-dimensional structures; that is, the degrees of freedom of a high-dimensional signal are usually far fewer than the dimension of the signal itself. In recent years there has been much research on low-dimensional structural models of signals. This section introduces several signal structure models commonly used in CS.

2.1 Sparse Signal Model. The sparse signal model is the simplest model commonly used in signal processing, and traditional CS theory is built on it. Mathematically, a signal $f \in \mathbb{R}^N$ is said to be $k$-sparse when its vector of transform coefficients $\alpha$ under a certain basis or dictionary contains at most $k$ non-zero entries, i.e., $\|\alpha\|_0 \le k$ ($k \ll N$). Sparsity shows that in many cases high-dimensional signals actually contain only a small amount of information, far below their dimension. Most signals in actual scenes are not exactly sparse, but can be well approximated by $k$-sparse signals; such signals are said to be compressible. The set of all $k$-sparse signals is written $\Sigma_k = \{\alpha \in \mathbb{R}^N : \|\alpha\|_0 \le k\}$. Reconstructing the signal from compressed measurements is an underdetermined problem, which many researchers have studied both theoretically and algorithmically over the past decade. The theory shows that, under the sparse model constraint, every $k$-sparse signal is uniquely recoverable from its measurements when $\mathrm{spark}(\Phi) > 2k$, where $\mathrm{spark}(\Phi)$ denotes the smallest number of linearly dependent columns of the observation matrix $\Phi$; the number of measurements needed to guarantee unique recovery, as few as $2k$, is thus far lower than the signal dimension.
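To make this concrete, the following minimal sketch (ours, not from the paper; all sizes are illustrative) generates a $k$-sparse signal, takes on the order of $k\log(N/k)$ i.i.d. Gaussian measurements, and recovers it with a basic orthogonal matching pursuit loop:

```python
import numpy as np

def omp(Phi, y, k):
    """Basic orthogonal matching pursuit: greedily pick the column of Phi
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, k = 256, 8
m = 4 * k * int(np.log(N / k))        # on the order of k*log(N/k) measurements
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # i.i.d. Gaussian observation matrix
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat))      # near zero when m is large enough
```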

2.3 Union of Subspaces Model. For $N$-dimensional $k$-sparse signals with certain structures, the signal structure can be well characterized by constraining the signal support to a smaller subset of the possible supports. For example, when the non-zero coefficients of the signal appear in some aggregated form, the structure of the signal can be described by a union of subspaces model. The union of subspaces model is an extension of the sparse model; it can characterize more types of signals, including both finite- and infinite-dimensional ones, and can be extended to infinite-dimensional spaces.

In the union of subspaces model, if we know that the signal $x$ lies in one of $L$ possible subspaces $U_1, \dots, U_L$, then $x$ is located in the union of these $L$ subspaces, i.e., $x \in U = \bigcup_{i=1}^{L} U_i$, where each $U_i$ ($1 \le i \le L$) is a $k$-dimensional subspace of $\mathbb{R}^N$ corresponding to a particular placement of the $k$ non-zero coefficients in $x$. Compared with the set containing all possible $N$-dimensional $k$-sparse signals (constructed as the union of all $C_N^k$ such subspaces), $L$ is often far smaller than $C_N^k$. There is no unified method for dealing with all union models; researchers have carried out theoretical and applied studies on signal sampling and recovery under several special types of union of subspaces models. The simplest union model is the finite union of subspaces (FUS) model, in which both the number of subspaces and their dimensions are finite.
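A small numerical illustration of this count (parameters ours): the unstructured $k$-sparse model in $\mathbb{R}^N$ is a union of $C_N^k$ subspaces, a number that dwarfs the $L$ subspaces a structured support model actually needs.

```python
from math import comb

N, k = 256, 8
print(comb(N, k))   # about 4.1e14 k-dim subspaces in the unstructured sparse model
# A structured support model (e.g., tree-structured or block supports) replaces
# this union with a far smaller family of L subspaces, L << C(N, k).
```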

A special case of the FUS model is the structured sparse supports model. This model takes advantage of additional information about the support, such as the positions of the non-zero elements of the vector, so that the union $U$ contains only some of the possible subspaces. A typical structured sparse support model is the tree-structured supports model. Wavelet bases provide sparse or compressible representations for smooth and piecewise-smooth signals, including natural images, and the wavelet coefficients of these signals and images naturally form a tree structure, with the large coefficients clustering along the branches of the tree, as shown in the figure ((a) wavelet tree structure of a one-dimensional signal; (b) wavelet quadtree structure of a two-dimensional image). Therefore it suffices to represent the signal with the union of only those subspaces corresponding to the tree structure.

Another special case of the FUS model is the sparse sum of subspaces model. In this model, each subspace $U_i$ forming the union is a direct sum of $k$ low-dimensional subspaces chosen from a given set $\{W_1, \dots, W_L\}$ with $\dim(W_j) = d_j$. Different subspaces $U_i$ thus correspond to sums of different subsets of $k$ subspaces taken from the $L$ given subspaces. When $\dim(W_j) = 1$, the model degenerates into the standard sparse model. From this, a block sparsity model can be obtained, in which some blocks of a vector are zero and the other blocks are not. As an example of a block sparse vector, consider a vector divided into 5 blocks, where the shaded region represents the 10 non-zero elements of the vector, occupying 2 blocks; $d_l$ denotes the number of elements contained in the $l$-th block. When all $d_l = 1$, block sparsity degenerates to standard sparsity. In the field of statistics, the properties of block sparse models have been studied extensively. In addition, block sparse models have been used in applications such as DNA microarray analysis, sparse communication channel equalization, and source localization.
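As a toy illustration of the block-sparse structure just described (block sizes and values are ours, chosen to match the 5-block, 10-nonzero example in the text):

```python
import numpy as np

d = 5                                   # elements per block
x = np.zeros(25)                        # a length-25 vector split into 5 blocks
rng = np.random.default_rng(1)
for active_block in (1, 3):             # indices of the two non-zero blocks
    x[active_block * d:(active_block + 1) * d] = rng.standard_normal(d)

blocks = x.reshape(5, d)
block_support = np.flatnonzero(np.linalg.norm(blocks, axis=1) > 0)
print(block_support, np.count_nonzero(x))   # -> [1 3] 10
```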

For block sparse vectors, the standard RIP of traditional CS is extended to the $(U, \delta)$-RIP for the FUS model. It has been proved that if the constant $\delta$ is sufficiently small, reconstruction algorithms designed for the FUS model correctly recover block sparse vectors, and the number of measurements required to ensure stable recovery has been given. Coherence is likewise generalized in the sparse sum of subspaces model, where the block-coherence of a matrix is defined. Adding internal structure to the subspaces, such as sparsity within each subspace, is equivalent to adding a regularization term representing sparsity to the optimization for a single block, yielding a multi-level sparse structure model. This model has been successfully applied to source identification and separation problems.
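A minimal sketch of computing block-coherence as defined by Eldar et al. (the matrix here is random and purely illustrative): for unit-norm columns grouped into blocks of size $d$, $\mu_B = \max_{l \ne r} \frac{1}{d}\,\rho(D[l]^{\mathrm{T}} D[r])$, where $\rho$ is the spectral norm.

```python
import numpy as np

def block_coherence(D, d):
    """Block-coherence of D whose columns form consecutive blocks of size d."""
    n_blocks = D.shape[1] // d
    blocks = [D[:, l * d:(l + 1) * d] for l in range(n_blocks)]
    mu = 0.0
    for l in range(n_blocks):
        for r in range(n_blocks):
            if l != r:
                rho = np.linalg.norm(blocks[l].T @ blocks[r], 2)  # spectral norm
                mu = max(mu, rho / d)
    return mu

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 12))
D /= np.linalg.norm(D, axis=0)          # normalize columns to unit norm
print(block_coherence(D, d=4))
```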

The union of subspaces models above, with finite dimensions and finite numbers of subspaces, mainly rely on discretization of the analog input and do not consider the actual hardware system. In order to truly implement low-rate sampling and reconstruction of structured analog signals in hardware, more complex union of subspaces models have been studied. These include models with a finite number of subspaces of infinite dimension, models with finite-dimensional subspaces of infinite number, and models in which both the subspace dimensions and their number are infinite.

Since low-rate sampling is performed on analog signals represented by a union of subspaces, the methods used to solve this problem differ essentially from the methods for discretized signals in the finite union of subspaces models described above. The two main frameworks for dealing with sub-Nyquist sampling of analog signals are Xampling and finite rate of innovation (FRI). The Xampling framework deals primarily with analog signals that can be represented as a union of finitely many infinite-dimensional subspaces, such as the multiband model, in which the analog signal consists of a finite sum of band-limited components; the signal components usually have relatively small bandwidth but are distributed over a large frequency range. Another type of signal that can be represented by a union of subspaces is the class of signals with a finite rate of innovation. Depending on the specific structure, this model corresponds to a union of an infinite or finite number of finite-dimensional subspaces and can characterize many signals with low degrees of freedom. In this case each subspace corresponds to a certain choice of parameter values; when the set of possible parameter values is infinite, the number of subspaces in the union is also infinite. With the aid of such analog unions of subspaces, analog signals can be sampled and processed at low rates, and effective hardware can be designed with standard analog components such as modulators, low-rate analog-to-digital converters (ADCs), and low-pass filters to implement the analog front end, which facilitates the development of analog CS frameworks from theory to practical application.

2.4 Low-Rank Matrix Model. The sparsity of a matrix is mainly manifested in two aspects: 1) sparsity of the matrix elements, i.e., the matrix has few non-zero elements; 2) sparsity of the matrix singular values, i.e., the matrix has very few non-zero singular values, which means the rank of the matrix is very small; such a matrix is called a low-rank matrix.

For a matrix $X \in \mathbb{R}^{N_1 \times N_2}$, the set of low-rank matrices can be expressed as $\{X : \mathrm{rank}(X) \le r\}$, and the singular value decomposition of $X$ is $X = U \Sigma V^{\mathrm{T}}$. In recent years, low-rank matrix reconstruction has become a hot topic in machine learning, signal processing, and computer vision; matrix recovery and matrix completion can be seen as generalizations of CS reconstruction from one-dimensional signals to two-dimensional matrices.
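The following small numpy sketch (sizes and rank are illustrative choices of ours) shows the singular value view of low-rankness: truncating the SVD gives the best rank-$r$ approximation.

```python
import numpy as np

rng = np.random.default_rng(3)
N1, N2, r = 60, 40, 3
X = rng.standard_normal((N1, r)) @ rng.standard_normal((r, N2))  # rank-3 matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.sum(s > 1e-8))              # numerical rank -> 3

X_r = U[:, :r] * s[:r] @ Vt[:r]      # best rank-r approximation (Eckart-Young)
print(np.linalg.norm(X - X_r))       # ~0 for an exactly rank-r matrix
```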

Under the low-rank matrix constraint, the matrix completion problem is expressed as $\min_X \mathrm{rank}(X)$ s.t. $P_\Omega(X) = P_\Omega(M)$, where $\Omega$ is the index set of the known elements of the matrix and the projection operator is defined by $[P_\Omega(X)]_{ij} = X_{ij}$ if $(i,j) \in \Omega$ and $0$ otherwise. Some work considers the sparsity of the matrix elements and of the matrix singular values simultaneously, leading to a sparse plus low-rank matrix model for the matrix recovery problem: $\min_{L,S} \|L\|_* + \lambda \|S\|_1$ s.t. $X = L + S$, where $\lambda > 0$ is a regularization parameter and $\|\cdot\|_*$ denotes the nuclear norm. Model (11) is often referred to as robust principal component analysis (RPCA). On the basis of RPCA, the low-rank representation (LRR) problem for low-rank sparse matrix decomposition has been proposed. The LRR model is $\min_Z \mathrm{rank}(Z)$ s.t. $X = DZ$, where $D \in \mathbb{R}^{N_1 \times n}$ is a dictionary of the linear data space and $n$ is the number of atoms in the dictionary. Similar to the $\ell_0$-minimization problem of CS, replacing $\mathrm{rank}(Z)$ with the nuclear norm converts the above problems into convex optimization problems that can be solved.
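As one concrete instance of these ideas, the sketch below implements a bare-bones singular-value-thresholding iteration for matrix completion; the step threshold, iteration count, and stopping rule are simplistic assumptions of ours for illustration, not the tuned algorithms surveyed here.

```python
import numpy as np

def complete(M_obs, mask, tau=5.0, n_iter=500):
    """Minimal soft-impute-style matrix completion.
    mask[i, j] is True where the entry of M is observed."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X[mask] = M_obs[mask]                   # enforce the known entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)            # soft-threshold singular values
        X = (U * s) @ Vt
    X[mask] = M_obs[mask]
    return X

rng = np.random.default_rng(4)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))  # rank-3 truth
mask = rng.random(M.shape) < 0.5                                 # observe half
M_hat = complete(np.where(mask, M, 0.0), mask)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))             # small rel. error
```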

3 Structured Compressed Sensing. Traditional CS uses sparsity as the only prior information in signal acquisition and reconstruction. Structured CS introduces structural priors into the three basic modules of traditional CS, giving structured observation, structured dictionaries, and structured signal reconstruction. As the theoretical framework of structured CS shows, structured CS builds on structured sparse representations and adopts structured observations matched to the signal; under structured priors, it reconstructs a wider range of signal classes more effectively. Next, we introduce the three basic problems of structured CS theory in detail, in connection with the low-dimensional structural models of signals given in the previous section.

(Figure: the structured compressed sensing framework.)

3.1 Structured Observation Matrix. Traditional CS uses random observation matrices to ensure that RIP and incoherence hold with high probability when reconstructing signals from the low-dimensional measurements $y$; however, when the signal dimension is high, random matrices entail high storage and computational complexity and are difficult to implement.

In some specific applications, the type of observation matrix is usually limited by the sensor's perception mode and capability. At the same time, to reduce the number of measurements and to sample analog signals, we also hope that the observation matrix matches the signal. Therefore, compared with traditional CS, structured CS uses structured observation matrices that match the signal structure or the sensor's perception mode. At present, structured observation matrices mainly include subsampled incoherent bases, structured undersampled matrices, subsampled circulant matrices, and separable matrices. A detailed overview of the theory and hardware implementation of structured observation matrices can be found in the literature.

Sampling with a subsampled incoherent basis first selects an orthogonal basis that is incoherent with the sparse basis and then randomly selects a subset of the signal's coefficients under this orthogonal basis as the CS measurements. There are two main types of applications of subsampled incoherent bases. In the first type, the acquisition hardware is limited to direct measurement in a transform domain; the most common examples are magnetic resonance imaging (MRI), tomography, and optical microscopy. In these applications, the measurements obtained from the hardware correspond to two-dimensional continuous Fourier transform coefficients of the image (an example of MRI sampling and CS reconstruction is shown). The second type of application designs new acquisition devices that can obtain the projections of the signal onto a set of vectors; for example, the single-pixel camera (as shown) obtains projections of an image onto a set of vectors with binary elements.
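A subsampled incoherent basis is easy to emulate numerically; the sketch below (sizes arbitrary, ours) takes CS measurements of a spike-sparse signal by keeping a random subset of its coefficients under an orthonormal Fourier basis.

```python
import numpy as np

rng = np.random.default_rng(5)
N, m = 256, 64
x = np.zeros(N)
x[rng.choice(N, 8, replace=False)] = 1.0   # sparse in the identity (spike) basis

F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # orthonormal Fourier basis, incoherent with spikes
rows = rng.choice(N, m, replace=False)     # keep a random subset of transform coefficients
Phi = F[rows]                              # the subsampled incoherent observation matrix
y = Phi @ x                                # m complex CS measurements
print(y.shape)                             # -> (64,)
```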

In addition, this type of structured observation matrix has been used to design acquisition devices for periodic multitone analog signals; the resulting acquisition device is called the random sampling ADC. In some applications, the measurements obtained by the acquisition equipment cannot directly correspond to the coefficients of the signal under a specific transform; the observations obtained are instead linear combinations of multiple signal coefficients, and the resulting CS observation matrix is called a structured undersampled matrix. Structured undersampled matrices have been used to design compressed information acquisition devices that collect periodic multitone analog signals (as shown). With this framework and improved recovery algorithms, frequency-sparse signals over wider bands can be sampled.

(Figures: single-pixel camera imaging example; compressive ADC sampling model with filter and pseudo-random sequence.) The circulant structure is also used for CS observation matrices. It first appeared in channel estimation and multi-user detection in the communications field, where the signals to be estimated (the channel impulse response and the multi-user activity pattern) are given sparse priors, and these signals are convolved with the impulse response of the sampling hardware before measurement. Since convolution is equivalent to multiplication in the Fourier transform domain, multiplication via the fast Fourier transform can speed up the CS recovery process.
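To illustrate why circulant observations are fast (a toy example of ours): applying a circulant matrix is a circular convolution, computable with FFTs in $O(N \log N)$ instead of $O(N^2)$.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 512
h = rng.standard_normal(N)              # first column of the circulant matrix
x = rng.standard_normal(N)

# Circulant multiply via FFT: C @ x == ifft(fft(h) * fft(x))
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

C = np.array([np.roll(h, j) for j in range(N)]).T   # explicit circulant, for checking
print(np.allclose(C @ x, y_fft))                    # -> True
```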

For multidimensional signals, separable matrices provide computationally very efficient compressed measurements, for example for hypercube sampling of multidimensional data. These matrices have a concise mathematical form as Kronecker products, with each factor matrix acting along one dimension of the signal and reflecting the feature structure of that dimension. Separable CS observation matrices are mainly used for multidimensional signals such as video sequences and hyperspectral data cubes. Using a separable observation matrix, the single-pixel camera has been extended to a single-pixel hyperspectral camera.
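A short sketch of the separability point (dimensions arbitrary, ours): measuring a 2-D signal with $A$ along one dimension and $B$ along the other equals applying the single Kronecker-product matrix $B \otimes A$ to the vectorized signal.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 32))        # measures the first dimension
B = rng.standard_normal((6, 24))        # measures the second dimension
X = rng.standard_normal((32, 24))       # a 2-D signal, e.g., a small image

Y = A @ X @ B.T                               # separable measurement, cheap to apply
y = np.kron(B, A) @ X.flatten(order="F")      # equivalent Kronecker form on vec(X)
print(np.allclose(Y.flatten(order="F"), y))   # -> True
```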

3.2 Structured Sparse Representation. The sparse representation of the signal is the premise of applying CS theory. Choosing an appropriate dictionary $\Psi$, so that the signal has higher sparsity under $\Psi$, can improve the efficiency of signal perception. The research of Candes et al. and Donoho showed that only $m = O(k \log(N/k))$ independent and identically distributed Gaussian measurements are needed to accurately reconstruct, with high probability, $k$-sparse signals or compressible signals that are well approximated by sparse signals. It can be seen that the higher the sparsity of the signal under the dictionary, the fewer observations are needed to reconstruct the signal accurately. Therefore, in CS, we try to use or design dictionaries under which the signal has a sparse representation.

There are usually two methods for constructing a dictionary: 1) analytical methods that construct the dictionary with mathematical tools, giving a fixed dictionary; and 2) learning-based methods that construct a dictionary matched to specific signal data. Traditional CS often uses fixed dictionaries, but for signals with complex structures such dictionaries are not flexible enough to achieve sufficient sparsity. Structured CS enriches the content of the dictionary by concatenating fixed dictionaries and by learning dictionaries with specific structures, realizing adaptive sparse representation of signals.

3.2.1 Fixed Dictionaries. Orthogonal dictionaries are the fixed-form dictionaries traditionally used by CS in its early stages, usually described by their fast algorithms, such as the orthonormal dictionaries obtained from the Fourier transform, the discrete cosine transform, and the wavelet transform. These transforms have simple constructions, fast implementations, and low representation complexity; when the signal features coincide with the atomic features of the dictionary, an efficient and accurate representation is obtained. But for complex signals such as images, orthogonal dictionaries cannot represent them flexibly enough to attain sufficient sparsity. A large number of studies have shown that overcomplete redundant dictionaries, including Curvelets, Contourlets, and Bandelets, can represent signals more flexibly and achieve higher sparsity. In the field of CS, Candes et al. proved theoretically that, under certain conditions, signals that are sparse under an overcomplete redundant dictionary can be accurately reconstructed.

Signals in the real world have complex structures and can be seen as composed of components of various structural types, such as transient and steady-state parts in audio signals, or edges, textures, and smooth parts in natural images. These structural types are completely different from one another, and a dictionary suited to one cannot effectively represent the others. Such signals mixed from different structures can be effectively represented by a dictionary formed by concatenating orthogonal bases. When the coherence of a dictionary concatenated from orthogonal bases equals $1/\sqrt{N}$, the concatenated dictionary is said to be (completely) incoherent, and the sparse representation of the signal satisfies the exact reconstruction conditions. Common orthogonal-basis concatenated dictionaries include those composed of different orthogonal wavelet bases and those composed of wavelet and Curvelet functions. Gribonval et al. gave conditions under which a concatenated dictionary yields a unique sparse representation, showing that concatenated dictionaries made up of non-orthogonal dictionaries, such as concatenations of biorthogonal wavelet bases, also perform well in sparsely representing signals containing multiple structures. Concatenation enriches the content of the dictionary so that each structure in the signal can be sparsely represented in its corresponding sub-dictionary; however, applying a concatenated dictionary still requires that the characteristics of the signal be consistent with those of the dictionary, otherwise a satisfactory representation is hard to obtain.
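For instance (a small numerical check of ours): the concatenation of the identity (spike) basis and the orthonormal Fourier basis attains mutual coherence $1/\sqrt{N}$, the fully incoherent case mentioned above.

```python
import numpy as np

N = 64
I = np.eye(N)                               # spike basis
F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # orthonormal Fourier basis
D = np.hstack([I, F])                       # concatenated (cascade) dictionary

G = np.abs(D.conj().T @ D)                  # Gram matrix magnitudes
np.fill_diagonal(G, 0)                      # ignore self-inner-products
print(G.max(), 1 / np.sqrt(N))              # both -> 0.125
```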

3.2.2 Structured Dictionary Learning. The dictionaries above are fixed: once the atom type is determined it does not change. Selecting a dictionary in CS requires more or less prior information about the signal, and when the signal under study changes, the chosen dictionary does not necessarily fit. Hence adaptive structured dictionary learning methods have appeared for obtaining optimal sparse representations of signals. These methods concentrate on learning a dictionary from a large number of training samples. The mathematical model of dictionary learning is
$$\min_{D, X} \|F - DX\|_F^2 \quad \text{s.t.} \quad \|x_i\|_0 \le k, \ i = 1, \dots, L, \qquad (13)$$
where the matrix $F \in \mathbb{R}^{N \times L}$ is the training sample set with $f_i$ its $i$-th column, $D \in \mathbb{R}^{N \times M}$ is the unknown dictionary, and $X \in \mathbb{R}^{M \times L}$ is a sparse matrix whose column vectors are the sparse representations, under the dictionary, of the corresponding columns of $F$.

The dictionary learning problem (13) is a non-convex combinatorial optimization problem. Classical algorithms for solving it include the MOD (method of optimal directions) algorithm and the K-SVD (K-singular value decomposition) algorithm. The MOD algorithm alternates between sparse coding and dictionary updating. In the sparse coding step the dictionary is fixed and each signal is sparsely coded independently; in the dictionary update step the dictionary is updated with the analytical solution of the quadratic optimization problem (13). The MOD algorithm requires only a few iterations to converge; although the matrix inversion makes each iteration relatively expensive, overall MOD is a very effective method. The K-SVD algorithm uses a dictionary update rule different from MOD's, updating the atoms of the dictionary (the column vectors) one by one; K-SVD updates each atom together with its corresponding sparse coefficients in the current iteration, making the algorithm more efficient. Compared with analytical dictionary construction, these dictionary learning algorithms obtain more effective dictionaries and better performance in practical applications.
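A compact sketch of the MOD alternation (reusing the `omp` routine from the sketch in Section 2.1; the initialization and iteration count are arbitrary choices of ours, not the tuned algorithm):

```python
import numpy as np

def mod_learn(F, M, k, n_iter=20, rng=np.random.default_rng(8)):
    """Method of optimal directions: alternate sparse coding with a
    least-squares dictionary update D = F X^T (X X^T)^(-1)."""
    D = rng.standard_normal((F.shape[0], M))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding step: code each column of F independently (OMP as above).
        X = np.column_stack([omp(D, f, k) for f in F.T])
        # Dictionary update step: analytic minimizer of ||F - D X||_F^2.
        D = F @ X.T @ np.linalg.pinv(X @ X.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12   # renormalize the atoms
    return D

# Usage: learn a 2x-overcomplete dictionary from a training matrix F of
# shape (N, 500), e.g. D = mod_learn(F, M=2 * F.shape[0], k=5)
```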

There is much research on structured dictionary learning, which adds structural information among the dictionary elements during learning to obtain a structured sparse representation of the signal. Training a dictionary that is a concatenation of unitary matrices was the first attempt to learn a structured overcomplete dictionary; this structure ensures that the trained dictionary is a tight frame and reduces the computational complexity of dictionary learning.

In the sparse coding step, the algorithm efficiently computes a sparse representation under the dictionary formed by concatenating the $L$ unitary matrices; in the dictionary update step, it updates the $L$ unitary matrices alternately and iteratively. Owing to the relatively strict model used, this method cannot represent very flexible structures well in practice. The double sparsity dictionary learning method learns a dictionary by imposing a sparse model on the atoms of the trained dictionary under a known base dictionary: each atom of the trained dictionary is represented as a sparse combination of atoms of the given dictionary. This method constructs the dictionary adaptively on the one hand and effectively improves the efficiency of dictionary learning on the other. The image signature dictionary (ISD) is a shift-invariant dictionary whose atoms are sub-blocks of an image. It describes invariance in block form, requires few training samples, and converges quickly during learning; under this structure, dictionaries with atoms of different sizes can be trained. A dictionary learning method has also been proposed for finding the optimal block sparse representation of a given sample set; it treats the block structure of the dictionary as an unknown prior, infers the block structure from the data, and adapts the dictionary accordingly. The proposed Block K-SVD (BK-SVD) algorithm updates the block structure and the dictionary alternately and iteratively: in the block structure update step, atoms are merged gradually according to the similarity of the sample subsets they represent; in the dictionary update step, a generalized form of the K-SVD algorithm is adopted, and the block-sparse dictionary is obtained by updating the dictionary blocks in turn. With block size 1, it reduces to the K-SVD algorithm. The BK-SVD algorithm does not consider structural information between atoms. Based on BK-SVD, Li et al. added a local graph model of the atoms within each block, combined the block sparsity constraint with a graph regularization term to obtain a dictionary learning model, and solved the model by alternately updating the block sparse coding and the dictionary. The resulting dictionary not only guarantees block sparsity but also preserves the local geometric structure between atoms, effectively reducing the correlation between dictionary blocks. In addition, Jenatton et al. proposed a hierarchical dictionary learning method based on tree-structured sparse regularization. This method incorporates into dictionary learning a tree-structured hierarchical model reflecting the correlation between dictionary elements, uses a primal-dual method to compute the proximal operator of the tree-structured sparse regularizer, and solves the tree sparse decomposition of the signal with an accelerated gradient method. In recent years, domestic scholars have also constructed and applied new structured dictionary learning methods to problems such as image denoising and inpainting, including sparse representation of signals over tree-structured redundant dictionaries, dictionary learning based on local similarity clustering of image blocks, dictionary learning based on non-local joint sparse approximation, and adaptive multi-component sparse representation of image structures.

3.3 Structured Signal Reconstruction. 3.3.1 Traditional CS reconstruction based on the standard sparse prior. Under the sparse model constraint, the traditional CS reconstruction problem can be expressed as the non-convex $\ell_0$-norm optimization problem
$$\min_{x} \|x\|_0 \quad \text{s.t.} \quad y = \Phi x. \qquad (14)$$
The most primitive approach to this $\ell_0$ optimization problem searches for the sparsest solution vector consistent with the linear measurements. For $N$-dimensional $k$-sparse signals, this requires an exhaustive search over all $C_N^k$ possible supports, making the problem NP-hard. For this reason, many feasible algorithms have been developed to replace $\ell_0$-norm optimization.

One common method is to replace the $\ell_0$ norm with the $\ell_1$ norm, converting the problem into $\ell_1$-norm minimization, a convex optimization problem. Theoretical studies show that, under certain conditions, the $\ell_1$-minimization problem is equivalent to the $\ell_0$ problem: the convex optimization also yields the sparsest solution. Algorithms for solving this problem include basis pursuit (BP) and related pursuit algorithms. Based on multi-level sparse prior assumptions, a fast Bayesian CS (BCS) algorithm can be obtained by the sparse Bayesian learning (SBL) method. A detailed review of traditional CS reconstruction algorithms can be found in the literature.
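A sketch of the $\ell_1$ relaxation (the LP formulation is standard; the implementation and sizes are ours): $\min \|x\|_1$ s.t. $\Phi x = y$ becomes a linear program by splitting $x = u - v$ with $u, v \ge 0$.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi @ x = y as an LP in (u, v) with x = u - v."""
    m, n = Phi.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(9)
n, m, k = 128, 40, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.linalg.norm(x - x_hat))          # near zero when m is large enough
```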

The dimension of the feasible solution space of the traditional CS inverse problem based on the standard sparse model grows exponentially with the signal dimension, so the selection of a feasible solution has excessive freedom. In practical applications, this excessive freedom often leads to unstable algorithms and inaccurate estimates. To overcome this problem, structured CS introduces a structural model of the signal and uses it as a prior to constrain the feasible solution space of the CS inverse problem. Compared with traditional CS, structured CS effectively reduces the number of compressed measurements and improves reconstruction quality, and it extends compressed sensing from finite-dimensional signals to infinite-dimensional signals. In the following, we introduce structured reconstruction of finite-dimensional signals based on the MMV model, on the union of subspaces model, and on prior regularization.

3.3.2 Structured CS reconstruction based on the MMV model. Generalizing the single measurement vector (SMV) problem (14) to multiple measurement vectors yields the following MMV sparse recovery optimization problem: $\min_X \|X\|_{p,0}$ s.t. $Y = \Phi X$, where $\|X\|_{p,0}$ counts the rows of $X$ with non-zero $\ell_p$ norm. In such formulations the non-zero entries of the solution matrix are implicitly assumed independent and identically distributed, an assumption that is inappropriate in many practical scenarios; for example, at high sampling rates the amplitudes of successive samples of a signal source are strongly correlated. The algorithms proposed by Zhang et al. and Cho et al. learn this temporal-spatial structure during sparse recovery by building an autoregressive (AR) model of the signal source. Although these algorithms outperform MMV algorithms that ignore the temporal-spatial structure, their low efficiency limits their application. Zhang et al. then proposed the temporal MMV sparse Bayesian learning (TMSBL) algorithm based on the spatio-temporal correlation structure of signal sources, which improves both the quality and the efficiency of joint sparse reconstruction of multiple signals. In addition, Zhang et al. improved the iteratively reweighted algorithm M-FOCUSS and proposed the iteratively reweighted tMFOCUSS algorithm based on the spatio-temporal correlation structure of the signal source. Wu et al. designed a multivariate compressed sampling method for images that converts the CS reconstruction of wavelet-domain images into an MMV problem. The clustering of image wavelet coefficients gives the coefficients in a neighborhood at the same scale a significant statistical correlation structure. Within a Bayesian framework, they used a multivariate scale mixture model to model the intra-scale correlation structure of the wavelet coefficients, combined it with CS reconstruction, and proposed a fast multivariate pursuit algorithm (MPA) to solve the MMV problem.
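A minimal simultaneous-OMP-style sketch for the MMV problem (a generic greedy baseline of ours, not the TMSBL or MPA algorithms surveyed above): the atom whose aggregate correlation across all measurement vectors is largest is selected at each step.

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: pick the atom whose correlation with the residual,
    aggregated across all measurement vectors, is largest; then re-fit."""
    residual, support = Y.copy(), []
    for _ in range(k):
        scores = np.linalg.norm(Phi.T @ residual, axis=1)  # aggregate over columns of Y
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coef
    X_hat = np.zeros((Phi.shape[1], Y.shape[1]))
    X_hat[support] = coef
    return X_hat

rng = np.random.default_rng(10)
n, m, k, n_vec = 128, 40, 5, 8
X = np.zeros((n, n_vec))
X[rng.choice(n, k, replace=False)] = rng.standard_normal((k, n_vec))  # shared support
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
print(np.linalg.norm(X - somp(Phi, Phi @ X, k)))
```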

A more detailed overview of CS reconstruction based on the MMV model can be found in the literature.

3.3.3 Structured CS reconstruction based on the union of subspaces model. Recently, tree structure models have been used in some reconstruction algorithms and have achieved better performance than traditional CS reconstruction. The model-based CoSaMP algorithm proposed by Baraniuk et al. and the tree-based OMP (TOMP) algorithm of La et al. are based on traditional greedy algorithms. Compared with standard greedy algorithms, these algorithms determine the signal support through the tree structure, narrowing the search range and effectively improving the sparsity of the reconstructed signal. Lian Qiusheng et al. improved these algorithms, proposing an alternating convex projection algorithm based on the dual-tree wavelet general hidden Markov tree (HMT) model and an iterative hard thresholding algorithm based on a tree structure model of the wavelet coefficients. The tree-structured wavelet CS (TSW-CS) algorithm based on the wavelet HMT model, proposed by He et al., uses the wavelet HMT structure to obtain probability estimates of the image wavelet coefficients through Bayesian learning. The HMT-based iteratively reweighted $\ell_1$ (HMT+IRWL1) algorithm proposed by Duarte et al. constructs a weighting scheme from the HMT model of the wavelet coefficients, effectively increasing the sparsity of the reconstructed coefficients and improving reconstruction accuracy. Zhao Wei et al. used a 4-state wavelet HMT model to improve the HMT+IRWL1 algorithm. Beyond these algorithms, there are belief propagation and message passing algorithms that use Markov chains, Markov fields, or Markov trees as structured probabilistic priors, as well as multi-task CS based on Bayesian multi-level models. In statistics, for the case of group or block structure correlations between sparse coefficients, Yuan et al. extended the Lasso algorithm to Group Lasso, and Jacob et al. and Jenatton et al. extended Group Lasso by adding other types of structural sparsity to the model, giving more complex sparse regularization conditions. Eldar et al. treated the reconstruction of block sparse signals as a mixed $\ell_2/\ell_1$ norm optimization problem solved by convex optimization; the proposed algorithm can be seen as a generalization of the BP algorithm to block sparse signal reconstruction. Eldar et al. also extended the MP and OMP algorithms to the block sparse matching pursuit (BlockMP) and block sparse orthogonal matching pursuit (BlockOMP) algorithms. In addition, Fu Ning et al. proposed a block-sparsity adaptive iterative algorithm and a subspace-based CS reconstruction algorithm for block sparse signals.
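A sketch of the block-greedy idea behind BlockOMP (our simplified version, assuming equal block sizes): whole blocks, rather than single atoms, are selected at each step.

```python
import numpy as np

def block_omp(Phi, y, d, n_blocks_to_pick):
    """Greedy block selection: choose the block of d columns whose
    aggregate correlation with the residual is largest, then re-fit."""
    n = Phi.shape[1]
    residual, chosen_cols = y.copy(), []
    for _ in range(n_blocks_to_pick):
        corr = Phi.T @ residual
        scores = [np.linalg.norm(corr[b * d:(b + 1) * d]) for b in range(n // d)]
        b = int(np.argmax(scores))
        chosen_cols += list(range(b * d, (b + 1) * d))
        coef, *_ = np.linalg.lstsq(Phi[:, chosen_cols], y, rcond=None)
        residual = y - Phi[:, chosen_cols] @ coef
    x_hat = np.zeros(n)
    x_hat[chosen_cols] = coef
    return x_hat

# Usage (with a Gaussian Phi of shape (m, 25) and the 2-block-sparse x
# built in the Section 2.3 sketch):
#   x_hat = block_omp(Phi, Phi @ x, d=5, n_blocks_to_pick=2)
```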

3.3.4 Structured CS reconstruction based on prior regularization. In addition to the above two types of reconstruction algorithms using signal structure models, there is a class of structured CS reconstruction algorithms based on prior regularization. These methods are mostly used for image reconstruction. The structural priors used are derived from the image itself, for example the edges and textures of the image, the neighborhood structure of image pixels, and the non-local similarity of image sub-blocks; iterative methods are often used to adaptively learn the parameters of the structural prior regularization model while synchronously realizing image restoration.

Wu et al. added image edge information to the MMV sparse reconstruction process and proposed an edge-guided MPA algorithm, which obtains high-quality reconstructions of images with strong edges. Wu et al. also used an autoregressive model of pixels in the spatial domain to propose a reconstruction algorithm based on an autoregressive image model; this algorithm significantly improves the recovery of image edges, textures, and other detailed information.

On this basis, they further improved the performance of autoregressive-model-based image reconstruction by adding learning of the non-local similarity of the image. Peyré et al., Zhang et al., and Chen Shuzhen et al. proposed iterative non-local regularization image reconstruction algorithms that alternate between image reconstruction and learning the optimal non-local graph matched to the image structure, better reconstructing the edges and textures of natural images. The best basis compressed sensing proposed by Peyré learns over a tree-structured dictionary and obtains the optimal orthogonal basis, yielding an image-adaptive regularization prior model and adaptive reconstruction. Duarte-Carvajalino et al. proposed an image restoration algorithm that synchronously learns the prior structure and the observation matrix. Yu et al. proposed an adaptive sparse domain selection and adaptive regularization dictionary learning algorithm for image restoration; the algorithm clusters the image blocks and uses PCA to learn a sub-dictionary for each cluster, representing the structure of the image well. Zhou et al. proposed a non-parametric multi-level Bayesian dictionary learning method for image restoration; under this model, no prior knowledge of the training samples is needed to adaptively obtain sparse representations of image blocks under a low-dimensional subset of the learned dictionary elements, and structural models of the image, such as cluster structure and spatial structure, can easily be combined, in the form of stochastic processes, with the hierarchical Bayesian model to improve CS reconstruction performance.

4 Summary and Outlook. This paper has elaborated the basic models and key technologies involved in structured compressed sensing along the three basic aspects of compressed sensing theory, and has summarized the latest research results of structured CS theory. The introduction of more complex structural models has greatly advanced the practical application capabilities of compressed sensing, and the new structured CS theoretical framework has extended the types of signals that can be handled. Although there is already much research on structured compressed sensing and many achievements have been made, many problems remain to be solved.

Adaptive observation matrix learning. Different observation schemes have a significant impact on the number of measurements required for CS reconstruction and on the recovery performance of the algorithm. The random observations used in conventional CS and the structured observations determined by the sensor perception mode in structured CS usually have a fixed form and cannot adapt to the internal structure of complex signals. For example, in distributed and multi-task compressed sensing of array signals and video images, fixed random observations are often used to sample each signal source or image frame independently, without taking into account the correlations within and between the signal sources or image frames. At present, the learning and design of adaptive observations is limited to preliminary theoretical studies, and simulation experiments have verified its effectiveness in reducing the measurement rate and improving reconstruction quality. Therefore, for different applications, designing optimal adaptive structured observation matrices and simple sampling implementation techniques is an important means of expanding the scope of CS applications, and further research is needed.

Compressed sensing reconstruction based on kernel methods. It is well known that compressed sensing recovers high-dimensional signals from low-dimensional observations, and the price paid is a nonlinear optimization recovery process.

In recent years, some scholars have sought analytical solutions in order to build more practical and efficient compressed sensing schemes. It has been pointed out that, by choosing a proper kernel mapping and forming a sparse representation of the signal in the feature space, the least squares method can be used to obtain an analytical solution; this is called kernel compressed sensing. Kernel compressed sensing is essentially compressed sensing under a nonlinear sparse representation. It not only avoids the iterative nonlinear optimization process but also can recover the signal with fewer observations than a linear sparse representation requires. Studying the construction and implementation of structured models under this theory is also a development direction for future structured compressed sensing.

Non-convex structured compressed sensing reconstruction. The original problems of traditional CS reconstruction and structured CS reconstruction are non-convex optimization problems under the $\ell_0$ norm and are NP-hard. The greedy algorithms represented by matching pursuit and orthogonal matching pursuit, and the thresholding algorithms represented by iterative hard thresholding, can be regarded as methods that directly solve the $\ell_0$ problem. However, neither the greedy algorithms nor the thresholding algorithms can guarantee convergence to the global optimum. At present, scholars have begun to use natural computing methods to solve the non-convex optimization problem of compressed sensing reconstruction.

References cited above include:
Zhao Lu, Ma Ronghua, Xue Chaogai, Li Hengjian. Signal sparse decomposition based on tree-structured redundant dictionary orthogonal matching pursuit. Journal of Yangzhou University (Natural Science Edition), 2011, 14(4): 52-55, 82
Xu Jian, Chang Zhiguo. Clustering-based adaptive image sparse representation algorithm and its application. 光…
Hu Zhengping, Liu Wen, Xu Chengqian. Image inpainting algorithm based on a global sparse representation model with classification-learned dictionaries. Mathematics in Practice and Theory, 2011
Li Min, Cheng Jian, Li Xiaowen, Yue Xiang. Image inpainting with non-locally learned dictionaries. Journal of Electronics & Information Technology
Sun Yubao, Xiao Liang, Wei Zhihui, Liu Qingshan. Structure-adaptive subspace matching pursuit algorithm for sparse representation of images. Chinese Journal of Computers, 2012
Yang Hairong, Zhang Cheng, Ding Dawei, Wei Sui. Compressed sensing theory and reconstruction algorithms. Acta Electronica Sinica, 2011, 39(1): 142-148
Wang Fasong, Zhang Linrang, Zhou Yu. Multiple measurement vector model and algorithm analysis for compressed sensing. Journal of Signal Processing, 2012, 28(6): 785-792
Lian Qiusheng, Wang Yan. Image compressed sensing based on the dual-tree wavelet general hidden Markov tree model
Lian Qiusheng, Xiao Ying. Image compressed sensing algorithm based on wavelet tree structure and iterative shrinkage
Zhao Yijiu, Wang Houjun, Dai Zhijian. Wavelet-domain compressed sampling signal reconstruction method based on the hidden Markov tree model. Journal of Electronic Measurement and Instrument, 2010
Fu Ning, Cao Liran, Peng Xiyuan. Subspace-based compressed sensing reconstruction algorithm for block sparse signals
Fu Ning, Qiao Liyan, Cao Liran. Block-sparsity adaptive iterative algorithm for compressed sensing. Acta Electronica Sinica, 2011, 39(3A): 75-79
Chen Shuzhen, Li Guangyao, Lian Qiusheng. Image compressed sensing based on non-local similarity and alternating iterative optimization. Journal of Signal Processing, 2012
Yang Li. Compressed sensing reconstruction based on Ridgelet redundant dictionaries and genetic evolution. Xidian University, China, 2012
Ma Hongmei. Compressed sensing reconstruction based on Curvelet redundant dictionaries and immune clonal optimization. Xidian University, China, 2012
Gao Guodong. Compressed sensing image reconstruction based on alternating learning and immune optimization. Xidian University, China, 2012

Liu Fang. Professor at the School of Computer Science and Technology, Xidian University. She received her master's degree from the School of Computer Science, Xidian University in 1995. Her research interests include signal and image processing, learning theory and algorithms, and pattern recognition.

Wu Jiao. Lecturer at the College of Science, China Jiliang University. She received her Ph.D. degree from the School of Computer Science, Xidian University in 2012. Her research interests include image processing, machine learning, and statistical learning theory and algorithms. Corresponding author of this paper.

Yang Shuyuan. Professor at the School of Electronic Engineering, Xidian University. She received her Ph.D. degree from the School of Electronic Engineering, Xidian University in 2005. Her research interests include machine learning, multiscale analysis, and compressed sampling.

Jiao Licheng. Distinguished professor at the School of Electronic Engineering, Xidian University. He received his Ph.D. degree from Xi'an Jiaotong University in 1990. His research interests include signal and image processing, natural computation, and intelligent information processing.
