Latest Issue

    Vol. 20, Issue 12, 2015
    • Review of remote sensing image change detection

      Tong Guofeng, Li Yong, Ding Weili, Yue Xiaoyang
      Vol. 20, Issue 12, Pages: 1561-1571(2015) DOI: 10.11834/jig.20151201
      Abstract: In recent years, remote sensing technology has developed rapidly, and remote sensing image technology has been applied in more and more fields, especially land and resources management, Earth's surface change monitoring, and agroforestry monitoring. At present, relatively few reviews are organized around the change detection process; most cover only change information extraction methods. To give researchers a more comprehensive understanding of the theory, workflow, and open problems of remote sensing image change detection, this paper presents a detailed review in which a large number of change detection algorithms are summarized, classified, and compared. The review is organized along the flow of change detection technology, with emphasis on the development status and trends of the image segmentation, feature extraction, and classification algorithms used in the change information extraction step. Most change detection methods perform well only under specific conditions: no generic algorithm exists, and existing algorithms have shortcomings in efficiency, accuracy, and intelligence, with most addressing relatively scattered problems and theories. Combining these open problems with the current state of development under the influence of big data technology, the future of the field is forecast from five aspects: data types, preprocessing methods, change information extraction methods, algorithm efficiency, and theoretical innovation. Remote sensing image change detection has high research value in many areas, but its present limitations require further study, including creative rethinking of current research hot spots and the introduction of deep learning and other emerging techniques.
      Keywords: remote sensing; information extraction; change detection; research situation; prospect; review
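
      As an illustration of the simplest change information extraction pipeline this kind of survey covers (pixel-wise differencing of co-registered acquisitions followed by automatic thresholding), here is a minimal sketch; the function name and the Otsu threshold are illustrative assumptions, not the method of any particular paper reviewed:

```python
import numpy as np
from skimage.filters import threshold_otsu

def difference_change_map(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """Baseline change detection: absolute difference of two co-registered
    acquisitions, followed by a data-driven (Otsu) global threshold."""
    diff = np.abs(img_t2.astype(np.float64) - img_t1.astype(np.float64))
    return diff > threshold_otsu(diff)   # True where the scene changed
```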
    • Li Xuchao, Ma Songyan, Bian Suxuan
      Vol. 20, Issue 12, Pages: 1572-1582(2015) DOI: 10.11834/jig.20151202
      Abstract: To overcome the texture loss and false edges caused by a bounded variation regularizer, a mixed regularization model in a tight frame domain that protects image texture information and reduces false edges is proposed, and an alternating-direction iterative multiplier algorithm is introduced to solve it. First, in the tight frame domain, for images degraded by system blur and Poisson noise, the fitting term is described by the Kullback-Leibler function, the mixed regularization terms are composed of the semi-norm of the bounded variation function and the L1 norm, and the fitting and weighted regularization terms constitute the energy functional of the regularization model. Second, the existence and uniqueness of the solution of the mixed regularization model are analyzed. Third, the minimization problem of the mixed regularization model is decomposed into four easily solved sub-problems by introducing auxiliary variables and applying the alternating-direction iterative multiplier algorithm. Finally, an effective optimization algorithm is constructed by alternately iterating over the four sub-problems. The mixed regularization in the tight frame domain effectively overcomes the texture information loss and false edges caused by the bounded variation regularizer. Compared with traditional algorithms, the proposed algorithm increases the peak signal-to-noise ratio by approximately 0.1 dB to 0.7 dB. Compared with other regularization models, the proposed model protects image texture information, alleviates false edges, achieves higher peak signal-to-noise ratio and structural similarity index measure values, and restores images degraded by system blur and Poisson noise.
      Keywords: tight frame domain; hybrid regularization model; alternating direction iteration algorithm; image restoration
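
      In symbols, the energy functional described above plausibly takes the following form, where H is the blurring operator, f the observed image, W the tight-frame analysis operator, and λ1, λ2 the regularization weights; the exact arrangement of the weights is an assumption inferred from the abstract:

```latex
\min_{u \ge 0} \; \mathrm{KL}(Hu, f) + \lambda_1 \, |u|_{\mathrm{BV}} + \lambda_2 \, \| W u \|_1 ,
\qquad
\mathrm{KL}(Hu, f) = \int_\Omega \left( Hu - f \log Hu \right) dx
```

      The alternating-direction splitting presumably introduces auxiliary variables for Hu, Wu, and the BV term, yielding the four sub-problems mentioned in the abstract.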
    • Application of color entropy to image quality assessment

      Xu Lin, Chen Qiang, Wang Qing
      Vol. 20, Issue 12, Pages: 1583-1592(2015) DOI: 10.11834/jig.20151203
      Abstract: An improved no-reference image quality assessment metric, IQALE, is proposed in this paper. Color space carries a great deal of image information, and the Lab color space is close to the human visual system. Therefore, to improve metric accuracy, this paper adds the a and b channels of the Lab color space to the spatial-spectral entropy-based quality (SSEQ) algorithm. Information entropy is an image feature that has been widely studied in recent years and can be applied effectively to image quality assessment. Information entropy is extracted in both the color and gray spaces, and the image features and mean opinion score (MOS) values are then trained and tested via a support vector machine (SVM). Results on the LIVE, TID2008, MICT, CSIQ, and IVC databases demonstrate that adding the information of the Lab color space improves metric accuracy and that the IQALE algorithm outperforms recent popular no-reference image quality assessment algorithms. Moreover, to test the scalability of the proposed metric, a database-independence experiment is conducted on the five image databases. According to the results, the IQALE method achieves better and more stable accuracy by adding the color entropy feature, and the database-independence experiment also shows the good robustness of the method. Furthermore, IQALE generalizes well across every distortion type.
      Keywords: image quality assessment; color space; information entropy; color entropy
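
      A minimal sketch of the color entropy feature itself, assuming scikit-image's rgb2lab conversion; the choice of the a and b channels follows the abstract, while the helper names and the 256-bin histogram are illustrative:

```python
import numpy as np
from skimage.color import rgb2lab

def channel_entropy(channel: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of one image channel."""
    hist, _ = np.histogram(channel, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # discard empty bins
    return float(-(p * np.log2(p)).sum())

def lab_color_entropies(rgb: np.ndarray) -> tuple:
    """Entropies of the a and b chrominance channels of Lab space."""
    lab = rgb2lab(rgb)
    return channel_entropy(lab[..., 1]), channel_entropy(lab[..., 2])
```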
    • Shi Yonggang, Wang Dongqing, Liu Zhiwen
      Vol. 20, Issue 12, Pages: 1593-1601(2015) DOI: 10.11834/jig.20151204
      Abstract: Satisfactory segmentation of hippocampal subfields is difficult to obtain with most existing multi-atlas segmentation methods because of the tiny volume and complex structure of the hippocampus. A segmentation method for hippocampal subfields based on sparse representation and dictionary learning is proposed. Sparse representation and dictionary learning models are constructed, and patches extracted from registered atlases are used for dictionary learning to determine the label of each voxel in the target image. In addition, local binary pattern (LBP) features of the labeled atlases are exploited to improve the discrimination of the learned dictionary. The label of a voxel is acquired after the sparse representation of its patch in the target image over the learned dictionary is solved. Finally, a correction method based on atlas priors is applied to mislabeled voxels. Quantitative and qualitative comparisons demonstrate that the proposed method, which achieves an average Dice similarity coefficient (DSC) of 0.890 for the larger hippocampal subfields, outperforms typical multi-atlas approaches. The proposed method segments hippocampal subfields from MR brain images with high accuracy and robustness, providing a favorable basis for the diagnosis of neurodegenerative diseases.
      Keywords: segmentation of hippocampal subfields; sparse representation; dictionary learning; multi-atlas; local binary patterns; patch
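
      A heavily simplified sketch of the patch-labeling idea (sparse coding a target patch over atlas patches and letting the coefficients vote), assuming scikit-learn's sparse_encode; the real method also learns a compact dictionary and appends LBP features, which are omitted here:

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def label_voxel(patch, atlas_patches, atlas_labels, n_nonzero=5):
    """Sparse-code a target patch over atlas patches (the 'dictionary');
    absolute coefficients vote for the labels of the selected atoms."""
    code = sparse_encode(patch.reshape(1, -1), atlas_patches,
                         algorithm='omp', n_nonzero_coefs=n_nonzero)[0]
    votes = {}
    for w, lab in zip(np.abs(code), atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)
```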
    • Watershed algorithm with threshold mark for color image segmentation

      Zhang Haitao, Li Yanan
      Vol. 20, Issue 12, Pages: 1602-1611(2015) DOI: 10.11834/jig.20151205
      Abstract: In view of the segmentation problems caused by the traditional watershed algorithm, an improved watershed algorithm for color image segmentation based on threshold marking is proposed. The method applies the watershed transform directly to the original gradient image rather than to a simplified image, so that the loss of edge information is avoided. By computing the color morphological gradient image with structuring elements of different sizes, it resolves the conflict between preserving edges and simplifying the image. The algorithm also provides automatic threshold selection and marker extraction: local minima related to the objects are extracted from the low-frequency components of the gradient, the binary image formed by these minima is imposed on the original gradient image as markers, and watershed segmentation is then applied to the modified gradient. In simulation experiments, the proposed algorithm is compared with similar segmentation methods. It obtains accurate, continuous, closed boundaries for different segmented RGB color images, yields the minimum number of segments consistent with human vision, and improves efficiency. The method adaptively extracts markers without prior knowledge, effectively solves the over-segmentation problem of the watershed transform, improves segmentation performance compared with the traditional algorithm, and shows good applicability and robustness. It can be applied to machine vision, biomedicine, and hyperspectral remote sensing image segmentation.
      Keywords: color image segmentation; multi-scale gradient; maximum entropy threshold; Butterworth low-pass filter; marker extraction; watershed algorithm
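
      A minimal marker-controlled watershed sketch in the spirit of the pipeline above, using scipy and scikit-image; the fixed marker threshold stands in for the paper's automatic maximum-entropy threshold, and a single structuring element stands in for the multi-scale morphological gradient:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2gray
from skimage.morphology import dilation, disk, erosion
from skimage.segmentation import watershed

def marker_watershed(rgb: np.ndarray, marker_thresh: float = 0.05):
    """Marker-controlled watershed: label low-gradient zones as markers,
    then flood the morphological gradient image from those markers."""
    gray = rgb2gray(rgb)
    se = disk(2)                                   # structuring element
    gradient = dilation(gray, se) - erosion(gray, se)
    markers, _ = ndi.label(gradient < marker_thresh)
    return watershed(gradient, markers)
```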
    • Improved K-means active contour model

      Zhang Qianying, Wu Jitao, Xie Xiaozhen, Wang Xiaotao
      Vol. 20, Issue 12, Pages: 1612-1618(2015) DOI: 10.11834/jig.20151206
      Abstract: Active contour models (ACMs) are efficient frameworks for image segmentation because they provide smooth, closed contours that recover object boundaries with sub-pixel accuracy. Region-based ACMs use regional statistical information as an additional constraint to stop the contours on the boundaries of the desired objects. One of the most popular region-based ACMs is the C-V model, which has been successfully used in two-phase segmentation under the assumption that each image region is statistically homogeneous. However, typical region-based models do not work well on images with intensity inhomogeneity because they rely on the uniformity of intensities. This paper presents a new level-set-based K-means active contour model that can segment images with intensity inhomogeneity. The model is derived from a linear level-set-based K-means model constructed by studying the properties of the Euler-Lagrange equation of the C-V model. The underlying C-V model consists of a fitting term and a regularization term, and the fitting term corresponds to classical K-means. When the parameters are fixed, all pixels in an image share an identical cluster threshold and the evolution function has a quadratic form; as a result, ordinary K-means and the associated ACM cannot process images with intensity inhomogeneity. After analyzing the reasons for these problems of the C-V model, a novel active contour based on a modified K-means is proposed. Compared with K-means using fixed parameters, the new K-means contains a variable-weight coefficient matrix, which can assign different values to different pixels; the resulting K-means overcomes the drawbacks of the C-V model. Moreover, a local adaptive weighting (LAW) function is defined that identifies the cluster threshold of each pixel according to its neighborhood statistics. This threshold protects the model from the influence of intensity inhomogeneity and enables successful segmentation of inhomogeneous images. The LAW-based model successfully detects objects in a noisy synthetic image with intensity inhomogeneity. Experimental results on medical images show that, compared with the local binary fitting (LBF) model, the local image fitting (LIF) model, and the local correntropy-based K-means model, the proposed model yields competitive results. Furthermore, even with deliberately poor initial contours, the proposed model still segments inhomogeneous images correctly, whereas the LBF and LIF models are easily trapped in local minima; the proposed model is therefore robust to contour initialization. Because it uses fixed-weight parameters, the typical C-V model may fail to detect meaningful objects in images with intensity inhomogeneity. This paper proposes a modified K-means-based active contour employing a variable-weight coefficient matrix, whose entries can be chosen to suit specific images, together with a LAW function for segmenting inhomogeneous images. Experimental results indicate that the proposed model effectively processes images with intensity inhomogeneity and is robust to the position of the initial curve.
      Keywords: image segmentation; active contour; level set method; C-V model; K-means; intensity inhomogeneity
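
      For reference, the C-V energy the abstract builds on has the standard form below; with λ1 and λ2 fixed, its fitting term is exactly the two-class K-means objective, which is the term the paper replaces with a per-pixel variable-weight coefficient matrix:

```latex
E(c_1, c_2, \phi) = \mu \, \mathrm{Length}(\phi = 0)
  + \lambda_1 \int_{\phi > 0} | I(x) - c_1 |^2 \, dx
  + \lambda_2 \int_{\phi < 0} | I(x) - c_2 |^2 \, dx
```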
    • Integration of global and local correntropy image segmentation algorithm

      Huang Yang, Guo Lijun, Zhang Rong
      Vol. 20, Issue 12, Pages: 1619-1628(2015) DOI: 10.11834/jig.20151207
      Abstract: The local correntropy-based K-means (LCK) model can segment an image that contains unknown noise and has an uneven gray distribution; however, its segmentation result is sensitive to the initial contour. To solve this problem, a new dynamic model based on global correntropy-based K-means (GCK) and LCK is presented. First, the GCK algorithm is proposed by introducing correntropy into the Chan-Vese (CV) model and improving it. A global and local correntropy-based K-means (GLCK) model is then obtained by combining GCK and LCK dynamically so as to retain the advantages of each. The GLCK model is not a simple linear combination of the two models; it completes segmentation in two steps. First, the GCK model is used to segment the image and obtain its general outline. Second, using the GCK result as the initial contour, the image is segmented finely by LCK. To improve segmentation accuracy, a dynamic combination algorithm is designed that automatically controls the moment at which the GCK model hands over to the LCK model. The segmentation results of the proposed method are compared with those of three similar methods, namely the LCK, local binary fitting, and CV models, on natural and synthetic images. The results show that the proposed model is more robust than the other three. Segmenting two natural images from the BSD library and using the Jaccard similarity ratio for quantitative analysis yields accuracy rates of 91.37% and 89.12%. The proposed algorithm effectively segments medical images and structurally simple natural images with unknown noise and an uneven gray distribution, and the result is robust to the initial outline.
      Keywords: correntropy; variational method; level set; dynamic combination
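
      For reference, correntropy between two variables is conventionally defined with a Gaussian kernel of width σ, which is the similarity measure both the GCK and LCK terms build on:

```latex
V_\sigma(X, Y) = \mathbb{E}\left[ \kappa_\sigma(X - Y) \right],
\qquad
\kappa_\sigma(t) = \exp\!\left( -\frac{t^2}{2\sigma^2} \right)
```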
    • Progressive iterative impulse noise detection

      Sun Jinguang, Huang Xu
      Vol. 20, Issue 12, Pages: 1629-1638(2015) DOI: 10.11834/jig.20151208
      Abstract: Impulse noise is a main cause of low image quality, and filtering impulse noise has long been a research hotspot in image processing. On the basis of a theoretical analysis of current switching filtering algorithms in terms of detection time, detection accuracy, and recovery strategy, this study proposes a progressive iterative impulse noise detection algorithm that achieves a strong recovery effect on noisy images. First, gray-level histograms, which have global statistical significance, are used to identify the gray-value boundaries between impulse-noise pixels and true pixels; with these boundaries, suspected noise points can be distinguished from real points. Second, a local structure significance measure is used to identify and classify the noise points among the suspected points, and these points are saved in a table G. Finally, according to the noise types recorded in table G, three different strategies are used to remove the noise points. Experiments on three representative images with different noise densities and intensities show that the detection time of the proposed method is 520 times and 15 times faster than those of two current classic algorithms, respectively. Furthermore, the proposed method reaches a detection accuracy of 99%, recovers images with excellent visual quality, and enhances the peak signal-to-noise ratio to 12 dB. The proposed algorithm protects image detail and recovers the original features of the image while filtering impulse noise, and it compensates for the disadvantages of current switching filters in detection time, detection accuracy, and peak signal-to-noise ratio.
      Keywords: progressive iteration; impulse noise; detection accuracy; time complexity; peak signal-to-noise ratio
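
      A minimal switching-filter sketch illustrating the detect-then-recover idea; fixed gray-value boundaries and a single median recovery strategy stand in for the paper's histogram-derived boundaries, local-structure test, and three type-specific strategies:

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img: np.ndarray, low: int = 0, high: int = 255):
    """Switching filter sketch: only pixels at the extreme gray values are
    treated as impulse candidates and replaced by their local median."""
    suspects = (img <= low) | (img >= high)
    out = img.copy()
    out[suspects] = median_filter(img, size=3)[suspects]
    return out
```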
    • Detection and description of scale-invariant keypoints in log-polar space

      Tao Tao, Zhang Yun
      Vol. 20, Issue 12, Pages: 1639-1651(2015) DOI: 10.11834/jig.20151209
      Abstract: The internationally popular scale-invariant feature transform (SIFT) algorithm and its improved variants rely on the difference-of-Gaussian (DoG) function for keypoint detection and description. However, the DoG function loses high-frequency image information, which leads to a sharp decline in matching performance as image deformation increases. Building on previous research on images in log-polar space, a new algorithm for keypoint detection and description in log-polar space is developed in this study; it fully preserves image information and thus overcomes this drawback of the SIFT family of algorithms. The algorithm converts the circular image block centered on a sample point in Cartesian space into a rectangular image block in log-polar space and performs keypoint detection and descriptor extraction on the derived rectangular block. For keypoint detection, a window of constant width moves along the log axis of the radial gradient image in the log-polar space of the sample point to decide whether the sample point is a keypoint and to compute its characteristic scales. When a sample point is accepted as a keypoint, a descriptor is extracted at the location of the characteristic scale with the locally maximal window response. The descriptor is a 192-dimensional vector based on the magnitude and orientation of the grayscale gradient of the rectangular image block in log-polar space; it is invariant to changes in scale, orientation, and intensity. The SIFT algorithm, the speeded-up robust features (SURF) algorithm, and the proposed algorithm are compared on the dataset and performance evaluation indices proposed by Mikolajczyk. The results demonstrate that the proposed algorithm has significant advantages in correspondences, repeatability, correct matches, and matching score. Classical image matching algorithms operate in Cartesian space, and their performance on deformed images, for example under scale change, is limited. The algorithm formulated here works in log-polar space: first, converting the circular image block around a sample point into a rectangular block in log-polar space avoids the high-frequency information loss that the DoG function causes during keypoint detection; second, extracting the keypoint descriptors from the derived rectangular block significantly reduces the variance between images. In sum, the proposed algorithm significantly improves image matching performance by transforming images from Cartesian space into log-polar space.
      Keywords: computer vision; image matching; log-polar space; scale-invariant keypoint; descriptor
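
      A minimal sketch of the Cartesian-to-log-polar resampling the algorithm is built on; the grid sizes and nearest-neighbour interpolation are illustrative assumptions:

```python
import numpy as np

def logpolar_patch(img, cx, cy, r_max, n_rho=32, n_theta=64):
    """Resample the circular neighbourhood of (cx, cy) onto a rectangular
    (log-radius, angle) grid by nearest-neighbour lookup."""
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing='ij')
    xs = np.clip(np.rint(cx + R * np.cos(T)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(cy + R * np.sin(T)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]   # shape (n_rho, n_theta)
```

      In this representation a scale change of the neighbourhood becomes a shift along the log-radius axis and a rotation becomes a shift along the angle axis, which is what makes the constant-width window search along the log axis meaningful.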
    • Pavement crack detection algorithm based on sub-patch discriminant analysis

      Qian Bin, Tang Zhenmin, Xu Wei, Tao Yuting
      Vol. 20, Issue 12, Pages: 1652-1663(2015) DOI: 10.11834/jig.20151210
      Abstract: Extracting crack information efficiently and effectively remains a challenging task because of illumination variation, lane markings, and stains in pavement images. In this paper, we propose a novel pavement crack detection method based on sub-patch discriminant analysis to address this problem. First, an intensity-compensation-based grayscale correction algorithm is presented to weaken uneven illumination, and a sparse autoencoder model is then applied to extract sub-patch features. Second, to extract more discriminative features, a new two-class iterative discriminant analysis is proposed, in which projection and clustering steps are performed alternately to update the inter-class distances of the sub-classes of all crack patches until convergence. Finally, a nearest-neighbor classifier is adopted in the discriminative subspace for classification. As the distribution of samples in the transformed subspace approaches the true distribution through the iterative process, the discrimination of the features is enhanced significantly. A series of experiments shows that the proposed method achieves high recognition rates: up to 95.5% on the benchmark dataset and 90.9% on a practical highway dataset. The method aims to extract highly discriminative features for sub-patches of road images. Its three main steps, namely grayscale correction, sparse autoencoding, and iterative discriminant feature extraction, make it highly robust and adaptive to road images with several kinds of heavy noise. The final classification is performed in the learned low-dimensional subspace. Extensive experimental results on the two datasets demonstrate that the proposed method generally outperforms existing related algorithms.
      Keywords: crack detection; discriminant analysis; grayscale correction; sparse autoencoder
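
      A minimal sketch of an intensity-compensation style grayscale correction (the first step above), assuming a large mean filter as the illumination estimator; the paper's actual compensation rule may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def compensate_illumination(img: np.ndarray, block: int = 64) -> np.ndarray:
    """Estimate slowly varying illumination with a large mean filter and
    re-centre every pixel on the global mean intensity."""
    background = uniform_filter(img.astype(np.float64), size=block)
    corrected = img.astype(np.float64) - background + img.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)
```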
    • Multi-scale saliency detection based on composition prior

      Wang Jiaojiao, Liu Zhengyi
      Vol. 20, Issue 12, Pages: 1664-1673(2015) DOI: 10.11834/jig.20151211
      Abstract: Saliency detection is a fundamental part of computer vision applications; its goal is to detect the important pixels or regions of an image that most attract human visual attention. Recently, boundary priors and background information have been used to enhance saliency detection, and such methods achieve state-of-the-art results, suggesting that the boundary prior is effective. Whereas most existing bottom-up methods compute saliency from the contrast between salient objects and their surrounding regions, the boundary prior characterizes the spatial layout of image regions with respect to the image boundaries. Inspired by this idea, we propose an image composition prior for saliency detection. Observation of images shows that salient objects are usually placed in central regions while the background lies near the boundaries, and that images usually follow composition rules such as the rule of thirds. Our composition prior assumes that objects are distributed near the composition lines: we select regions near the composition lines as initial seeds and compute saliency according to feature relevance. Specifically, we first segment the image at multiple scales and construct a closed-loop graph in which each node is a superpixel. Second, we use the nodes near the composition lines as queries, extract their features, rank the relevance of all other regions by manifold ranking, and compute saliency from the ranking result. Third, we iteratively refine the saliency from the perspectives of both object and background and assign a saliency value to each pixel. Considering the distinctness of different pixels within the same region, we correct each pixel's saliency by adding a correction value based on its distance to the feature center. Finally, saliency detection is carried out by integrating the multi-scale saliency maps. In comparison experiments on the MSRA-1000, CSSD, and ECSSD datasets, our method performs well against state-of-the-art methods, obtaining the highest precision on the three datasets (92.6%, 89.2%, and 76.6%, respectively). The average run time for a single image is 0.692 s, which retains an advantage over other algorithms. Human vision tends to detect saliency from regions near composition lines rather than image boundaries, and the composition prior detects saliency according to this mechanism. The experimental results demonstrate that detecting saliency from the perspective of image composition is reasonable and that the composition prior improves detection accuracy.
      Keywords: saliency detection; multi-scale; composition prior; rule of thirds; manifold ranking
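
      A minimal sketch of the manifold ranking step used to score region relevance, in the standard closed form f* = (I - αS)^-1 y; the affinity matrix W over superpixels and the seed choice along composition lines are supplied by the surrounding pipeline:

```python
import numpy as np

def manifold_rank(W: np.ndarray, seeds, alpha: float = 0.99) -> np.ndarray:
    """Closed-form manifold ranking: relevance of every graph node to the
    query seeds, f* = (I - alpha * S)^-1 y with S the normalised affinity."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D @ W @ D
    y = np.zeros(W.shape[0])
    y[list(seeds)] = 1.0              # indicator vector of the query nodes
    return np.linalg.solve(np.eye(W.shape[0]) - alpha * S, y)
```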
    • Chen Ying, Huo Zhonghua
      Vol. 20, Issue 12, Pages: 1674-1683(2015) DOI: 10.11834/jig.20151212
      Abstract: Person re-identification is important in video surveillance systems because it reduces the human effort of searching for a target among a large number of video sequences. The task is difficult because of variations in lighting conditions, background clutter, changes in viewpoint, and differences in pose. Most studies tackle the problem by designing a feature representation, a metric learning method, or a discriminative learning method. Visual salience has recently been exploited in discriminative learning because salient regions help humans distinguish targets efficiently. Addressing the problem of inconsistent salience between matched patches, this study proposes a multi-directional salience similarity evaluation approach for person re-identification based on metric learning; the method is robust to viewpoint and background variations. First, the salience of image patches is obtained by fusing inter-salience and intra-salience, both estimated by manifold ranking. The visual similarity between matched patches is then established by multi-directional weighted fusion of salience according to the distribution of the four saliency types of the matched patches, with the weight of each direction learned by metric learning on the basis of structural SVM ranking. Finally, a comprehensive similarity measure for image pairs is formed. The proposed method is evaluated on two public benchmark datasets (VIPeR and ETHZ), and the experimental results show that it achieves excellent re-identification rates compared with similar algorithms and is insensitive to background variations. On the VIPeR dataset, with half of the dataset sampled as training data, the proposed method outperforms existing learning-based methods by 30% at rank 1 (the correct matched pair) and 72% at rank 15 (the expectation of a match within the top 15). The method achieves state-of-the-art performance even when the number of training pairs is small. For generalization, experiments on the ETHZ dataset show that the method outperforms existing feature-design-based and supervised-learning-based methods on all three sequences, demonstrating its practical significance. The multi-directional weighted fusion of salience yields a comprehensive description of the saliency distribution of image pairs and a comprehensive similarity measure. The proposed method enables person re-identification across large-scale, non-overlapping, multi-camera views, improves discrimination and re-identification accuracy, and is strongly robust to background changes.
      Keywords: person re-identification; metric learning; salience feature; ranking
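
      A toy sketch of the underlying intuition that matched patches should agree in both appearance and salience; the actual method fuses four directional salience weights learned by structural SVM ranking, which this single scalar score does not capture:

```python
import numpy as np

def salience_weighted_similarity(feat_a, feat_b, sal_a, sal_b, sigma=1.0):
    """Toy patch-pair score: Gaussian feature affinity, emphasised when
    both patches are salient (salient regions should match salient regions)."""
    d2 = float(np.sum((np.asarray(feat_a) - np.asarray(feat_b)) ** 2))
    return sal_a * sal_b * np.exp(-d2 / (2.0 * sigma ** 2))
```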
    • Zhao Limei, Jia Weimin, Wang Biaobiao, Yu Qiang
      Vol. 20, Issue 12, Pages: 1684-1688(2015) DOI: 10.11834/jig.20151213
      Abstract: Principal component analysis (PCA) plays an important role in image processing and machine learning because of its simplicity and effectiveness. In image analysis, a 2D image is usually reshaped into a 1D vector, which leads to high dimensionality and destroys intrinsic spatial information. Two-dimensional PCA (2DPCA) avoids the high-dimensional problem of vector-based methods and largely preserves spatial information. However, 2DPCA minimizes the mean square error (MSE) and is therefore sensitive to outliers. In this paper, a robust 2DPCA algorithm based on the maximum correntropy criterion (MCC), 2DPCA-MCC, is proposed for face recognition; it reduces the effect of outliers and improves recognition accuracy remarkably. In 2DPCA-MCC, the objective function is formulated with the MCC, a useful measure for handling non-zero-mean data. Because the correntropy objective is a nonlinear optimization problem, it is solved iteratively within the half-quadratic (HQ) optimization framework: at each iteration, the nonlinear problem is reduced to a weighted PCA problem, and the correntropy objective increases monotonically. The new 2DPCA algorithm can thus handle non-centered data and naturally estimates the data mean. Face recognition experiments were conducted on the ORL database. In each experiment, 5 training images were randomly selected from the 10 face images of each individual, and the data were not zero-centered. To test robustness to outliers, 20, 40, and 60 percent of the 5 training images were occluded with dots of random location and size. The proposed algorithm improves recognition accuracy by nearly 10, 19, and 30 percent, respectively, over the original 2DPCA algorithm at these occlusion levels. MSE is a simple measure for the objective function, but MSE-based algorithms can only handle zero-mean data. To address this, the proposed robust 2DPCA algorithm uses the MCC, solved via the HQ optimization framework. The experimental results on the ORL database show that 2DPCA-MCC is more robust to outliers and achieves better recognition results than the original MSE-based 2DPCA algorithms, without the zero-mean restriction.
      Keywords: maximum correntropy criterion; principal component analysis; robust; information theoretic; outliers
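
      For orientation, a sketch of the classical (MSE-based) 2DPCA that the paper robustifies; 2DPCA-MCC replaces the uniform average in G below with HQ-updated per-sample weights, which is omitted here:

```python
import numpy as np

def twodpca(images, k):
    """Classical 2DPCA: project each image matrix A onto the top-k
    eigenvectors of the image covariance G = E[(A - Abar)^T (A - Abar)]."""
    A = np.stack(images).astype(np.float64)          # (n, h, w)
    Abar = A.mean(axis=0)
    G = np.mean([(a - Abar).T @ (a - Abar) for a in A], axis=0)
    _, vecs = np.linalg.eigh(G)                      # ascending eigenvalues
    X = vecs[:, -k:]                                 # (w, k) projection axes
    return [a @ X for a in A]                        # (h, k) feature matrices
```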
    • Gao Hongmin, Li Chenming, Wang Yan, Xie Kewei, Chen Linghui, He Zhenyu
      Vol. 20, Issue 12, Pages: 1689-1698(2015) DOI: 10.11834/jig.20151214
      Abstract: The high dimensionality of hyperspectral remote sensing images causes information redundancy and data processing complexity, leading to heavy computing workloads and low application accuracy; the dimensionality of hyperspectral data should therefore be reduced before analysis. Classification is an important means of acquiring information, but existing image classification methods that identify ground objects at the pixel and object levels have low accuracy when the training samples suffer strong noise interference. This interference decreases when similar objects are merged into large collections and classified according to the spectral and spatial characteristics of each collection. This paper proposes a double-population hybrid search strategy for dimension reduction based on differential evolution and particle swarm optimization with hybrid encoding. In this strategy, a support vector machine is adopted as the classifier, combined with multiple-instance learning, to improve classification accuracy, reduce dimensionality, and construct a wrapper-type classification model. Experiments were conducted on AVIRIS images. The results show that the proposed method obtains a classification accuracy of 96.03% with small training samples, 0.62% higher than the best accuracy among similar hybrid-encoding classification methods. Under strong noise interference in the training samples, the noise can be viewed as a specific form of the "ambiguity" in the training bags of multiple-instance learning. The proposed method achieves high classification accuracy with small training samples and significantly alleviates strong noise interference.
      Keywords: hyperspectral remote sensing image; classification; particle swarm optimization; differential evolution; multiple instance learning; hybrid encoding
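
      A minimal sketch of the wrapper-type fitness that a DE/PSO band-selection loop of this kind would evaluate, assuming scikit-learn's SVC; the double-population hybrid search itself is not shown:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_subset_fitness(X, y, mask, cv=3):
    """Wrapper fitness of a binary band-selection mask: mean cross-validated
    SVM accuracy using only the selected spectral bands."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0                    # empty subsets are worthless
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')
    return float(cross_val_score(clf, X[:, mask], y, cv=cv).mean())
```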
    • Song Fang, Li Yong, Yu Tao
      Vol. 20, Issue 12, Pages: 1699-1704(2015) DOI: 10.11834/jig.20151215
      Abstract: In InSAR phase unwrapping, errors arise when the unwrapping path passes through residue points; our objective is to confine such errors to local regions. The InSAR coherence map is used as the reliability map for phase unwrapping and is quantized to a given number of levels. The residues of the wrapped phase are located, and their reliability is set to the lowest level. Phase unwrapping is then carried out after reliability reordering. A classification-table-based algorithm is adopted in place of the slow classical reliability-guided phase unwrapping algorithm. Experiments verify the proposed method: it is faster than the classical reliability-guided algorithm, the InSAR phase unwrapping results show improved accuracy, and error propagation is effectively controlled. The experimental results show that the proposed method is accurate and applicable to InSAR phase unwrapping.
      Keywords: InSAR; reliability map; phase unwrapping; residues
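
      A minimal sketch of the residue detection step, which integrates wrapped phase differences around every 2x2 pixel loop (the standard test: a nonzero loop sum of ±2π marks a residue, whose reliability the method then sets to the lowest level):

```python
import numpy as np

def find_residues(phase: np.ndarray) -> np.ndarray:
    """Integrate wrapped phase differences around every 2x2 pixel loop;
    a nonzero sum (+-2*pi) marks a positive or negative residue."""
    wrap = lambda x: (x + np.pi) % (2.0 * np.pi) - np.pi
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left -> right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, downwards
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, upwards
    return np.rint((d1 + d2 + d3 + d4) / (2.0 * np.pi)).astype(int)
```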