Latest Issue

    Vol. 20, Issue 1, 2015
    • High-quality depth map reconstruction combining stereo image pair

      Yang Yuxiang, Gao Mingyu, Yin Ke, Wu Zhanxiong
      Vol. 20, Issue 1, Pages: 1-10(2015) DOI: 10.11834/jig.20150101
      Abstract: The capability to capture depth information of static real-world objects has achieved increased importance in many fields of application, such as manufacturing and prototyping, as well as in the design of virtual worlds for movies and games. A time-of-flight camera can conveniently obtain scene depth images. However, the resolution of a depth image is low and cannot satisfy actual requirements because of hardware limitations. Stereo matching algorithms are classical methods used to obtain depth images, but they are significantly limited in practical applications because of the occlusion between left and right images and non-textured areas. In this study, we propose a novel method to obtain a high-resolution, high-quality depth map by combining stereo matching with the use of a time-of-flight camera. We formulate a non-local adaptive weighting filter and obtain an initial high-resolution depth map using the low-resolution depth map from the time-of-flight camera. Then, we use the initial depth map and a local stereo matching algorithm to construct adaptive weights for stereo matching and obtain a raw depth map. Given that discontinuities in range and color tend to co-align, we construct a local weighting filter using the raw depth map and the features of a high-resolution color image to reinforce the preservation of fine details. Experiments demonstrate that our approach can obtain an excellent high-resolution range image. Comparison experiments on peak signal-to-noise ratio and error rate show that our method can reconstruct high-quality depth maps. The proposed method produces sharper edges and more accurate details than other state-of-the-art approaches.
      Keywords: depth map; time-of-flight camera; stereo matching; adaptive weighting match; local weighting filter
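As a loose illustration of the guided-upsampling idea in this abstract, the sketch below upsamples a low-resolution ToF depth map using joint-bilateral-style weights from a registered high-resolution color image. It is a minimal stand-in for the paper's non-local adaptive weighting filter, assuming `color_hr` is a float H×W×3 array; all names and parameters are illustrative, and the per-pixel loop is written for clarity, not speed.

```python
import numpy as np

def guided_depth_upsample(depth_lr, color_hr, scale, sigma_s=3.0, sigma_c=10.0, radius=4):
    """Upsample a low-resolution depth map using a high-resolution color image
    as guidance (joint-bilateral-style weighting)."""
    h, w = color_hr.shape[:2]
    depth_nn = np.kron(depth_lr, np.ones((scale, scale)))[:h, :w]  # naive nearest-neighbor upsample
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
            spatial = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))        # spatial closeness
            cdiff = color_hr[y0:y1, x0:x1] - color_hr[y, x]
            chroma = np.exp(-np.sum(cdiff**2, axis=-1) / (2 * sigma_c**2))  # color similarity
            wgt = spatial * chroma
            out[y, x] = np.sum(wgt * depth_nn[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```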
    • Tang Songze, Xiao Liang, Huang Wei, Liu Pengfei
      Vol. 20, Issue 1, Pages: 11-19(2015) DOI: 10.11834/jig.20150102
      Abstract: Regularization-based reconstruction is an important single-image super-resolution (SR) method. This class of methods aims to design effective image priors and incorporate them into a regularization framework to enhance edge- and texture-preserving capabilities during the SR process. In this study, a global and local structural content adaptive regularization model is proposed to solve the single-image SR problem. This model combines the global non-Gaussian statistics of the image gradient with the orientation-adaptive regression property of the local structure. A generalized Gaussian distribution is applied to fit the heavy-tailed distribution of the image gradient. A global content-based sparsity measure, an ℓp norm (0 < p < 1), is constructed under the maximum a posteriori probability framework. The anisotropic correlation of local content is employed to construct an adaptive regression prior of the local structure based on the Geman-McClure function. Finally, a half-quadratic penalty method and a variable splitting technique are used to solve the model effectively. For an objective assessment, experimental results demonstrate that the quality of the SR images obtained by the proposed method is better than that of images obtained by other methods in terms of peak signal-to-noise ratio and structural similarity. For a subjective evaluation, the proposed method retains edges and image details effectively. The proposed adaptive regularization method can preserve edges and image details effectively in single-image super resolution.
      Keywords: super-resolution; regularization; sparsity; structure orientation-adaptive regression
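The regularized-reconstruction idea can be made concrete with a small sketch. The gradient-descent loop below minimizes a data term plus an ℓp gradient prior (0 < p < 1); it is a generic stand-in, not the paper's model (which adds the Geman-McClure local regression prior and solves via half-quadratic splitting). `downsample`/`upsample` are user-supplied operators, e.g. built from scipy.ndimage.zoom, and the adjoint of downsampling is approximated by the upsampler.

```python
import numpy as np

def lp_super_resolve(y, downsample, upsample, n_iter=200, lam=0.02, p=0.8,
                     step=0.5, eps=1e-3):
    """Gradient-descent sketch of lp-regularized SR:
    minimize ||D(x) - y||^2 + lam * sum |grad x|^p  with 0 < p < 1."""
    x = upsample(y)                                    # initial guess: interpolated LR image
    for _ in range(n_iter):
        data_grad = upsample(downsample(x) - y)        # fidelity gradient (adjoint ~ upsample)
        gx = np.diff(x, axis=1, append=x[:, -1:])      # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        wx = p * (np.abs(gx) + eps) ** (p - 1) * np.sign(gx)   # smoothed d|t|^p/dt
        wy = p * (np.abs(gy) + eps) ** (p - 1) * np.sign(gy)
        prior_grad = (np.roll(wx, 1, axis=1) - wx) + (np.roll(wy, 1, axis=0) - wy)
        x = x - step * (data_grad + lam * prior_grad)
    return x
```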
    • Image local blur measurement based on BP neural network

      Huang Shanchun, Fang Xianyong, Zhou Jian, Shen Feng
      Vol. 20, Issue 1, Pages: 20-28(2015) DOI: 10.11834/jig.20150103
      Abstract: Existing blur metrics for locally blurred images have difficulty measuring flat textured areas. Thus, a back-propagation (BP) neural network-based image local blur measurement method is proposed to overcome this limitation. A new unified blur feature based on all singular values and the number of non-zero discrete cosine transform (DCT) coefficients is presented. This feature measures sharpness in both the spatial and frequency domains. Different singular values reflect the distribution of information at different scales, which varies differently after blurring. The number of non-zero DCT coefficients depicts the information lost in the high-frequency domain. Their combination can capture the blurring effect in flat textured areas. A BP neural network-based classifier is trained to predict the blur measurement of each block on the basis of this feature. The method distinguishes the flat textured areas and blurred areas of a single locally blurred image better than existing methods. According to the recall-precision curve, a statistical experiment on multiple locally blurred images shows that the proposed method obtains higher precision than existing methods. Therefore, the proposed method measures local blur more effectively than existing methods, particularly in flat textured areas.
      Keywords: back-propagation neural network; blur measurement; singular values; DCT coefficient
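A plausible rendering of the unified blur feature (all singular values plus the count of non-zero DCT coefficients) for one block might look like the sketch below; a BP network such as sklearn's MLPClassifier could then be trained on these vectors. The threshold and normalization are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.fft import dctn

def block_blur_feature(block, dct_thresh=1e-3):
    """Unified blur feature for one grayscale image block: all singular values
    (scale-space distribution) plus the fraction of non-zero DCT coefficients
    (surviving high-frequency content)."""
    b = block.astype(float)
    s = np.linalg.svd(b, compute_uv=False)                    # spatial-domain part
    coeffs = dctn(b, norm='ortho')                            # frequency-domain part
    n_nonzero = np.count_nonzero(np.abs(coeffs) > dct_thresh)
    return np.concatenate([s / (s.sum() + 1e-12), [n_nonzero / b.size]])
```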
    • Cheng Li, Yao Wei, Li Bo
      Vol. 20, Issue 1, Pages: 29-38(2015) DOI: 10.11834/jig.20150104
      Abstract: In optical character recognition, skew is typically introduced into document images during acquisition. Fast and accurate skew detection is important for applying skew correction to tilted document images, and thus facilitates subsequent processing. An improved projection profile-based approach, called two-stage projection histogram variance, is proposed in this study. The angle space is discretized at a certain step length within the scope of a predetermined value. Projection histograms of the number of dark pixels are obtained at each possible angle. The variances of all histograms and their maximal difference values are calculated, and the angle that corresponds to the maximal difference is selected as a rough estimate of the skew angle. New histograms are then computed in the same manner, but the angle space is discretized at the increment of the detection precision between the sum and difference of the rough skew estimate and the step length used for the first histograms. The maximal variance of these histograms is calculated, and the corresponding angle is taken as the final skew angle estimate. The proposed algorithm can be applied to all kinds of complex document images. The mean and maximal absolute values of the error of the algorithm do not exceed 0.5° and 0.7°, respectively, and the maximal variance of the error does not exceed 0.1. Thus, the proposed algorithm exhibits the most concentrated error distribution compared with other methods. Furthermore, the processing speed of the proposed algorithm is fast: skew detection for a document image with 2480×3508 pixels can be accomplished within 200 ms. Test results show that the proposed algorithm exhibits fast running speed, high precision, a wide detection scope, strong resistance to noise, and excellent adaptation to complex document layouts.
      Keywords: skew detection; projection histogram; variance; Radon transform; down-sample
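The two-stage coarse-to-fine search is straightforward to sketch. The version below simply maximizes the variance of the row-projection histogram at each stage (the paper ranks angles by the maximal variance difference, a detail omitted here); `binary` is assumed to be a dark-pixel mask, and the angle range and step sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def projection_variance(binary, angle):
    """Variance of the row-projection histogram after rotating by `angle` degrees."""
    rotated = rotate(binary, angle, reshape=False, order=0)
    return np.var(rotated.sum(axis=1))

def detect_skew(binary, max_angle=15.0, coarse_step=1.0, fine_step=0.1):
    """Two-stage coarse-to-fine skew search: text lines align with image rows
    at the angle that maximizes the projection variance."""
    coarse = np.arange(-max_angle, max_angle + coarse_step, coarse_step)
    a0 = max(coarse, key=lambda a: projection_variance(binary, a))   # stage 1
    fine = np.arange(a0 - coarse_step, a0 + coarse_step + fine_step, fine_step)
    return max(fine, key=lambda a: projection_variance(binary, a))   # stage 2
```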
    • Liu Zi, Song Xiaoning, Tang Zhenmin
      Vol. 20, Issue 1, Pages: 39-49(2015) DOI: 10.11834/jig.20150105
      Abstract: The success of sparse representation in image reconstruction has triggered research on sparse representation-based classification (SRC). A strict and standard dictionary creates sparse coefficients but may not achieve better classification accuracy in the SRC model. Recent research reveals that the collaborative representation (CR) mechanism, rather than the ℓ1-norm sparsity constraint, improves face recognition (FR) accuracy. Therefore, constructing a rational and optimal dictionary for the SRC model is a challenging task. We propose a novel SRC fusion method for face recognition that uses a dynamical class-elimination mechanism, which strengthens the ability of collaborative representation, and a greedy search (GS) strategy. The proposed method involves two aspects: training variable selection and classification strategy, and sparse coding coefficient decomposition. The training variable selection and classification strategy aims to represent a query sample as a linear combination of the most informative training samples and to exploit an optimal representation of the training samples from the classes with the major relevant contributions. Instead of eliminating several classes at one time, we eliminate classes one by one with the GS sparse coding process until the ideal number of classes is obtained. In the context of the proposed method, an important goal is to select a subset of variables that provides a descriptive representation of the sparse category knowledge structure, and we develop a heuristic learning strategy to achieve this goal. The method converts the original classification problem into a simpler one that contains a relatively small number of classes. The remaining final training samples are used for classification and to produce the best representation of the test sample. The literature validates that the CR mechanism has a more significant role in the SRC scheme than the sparsity constraint; however, the sparsity constraint cannot be removed from the SRC scheme. We introduce a greedy search method, i.e., error-constrained orthogonal matching pursuit (OMP), to integrate the sparsity and CR mechanisms and to solve the sparse decomposition problem. The regularization term has a twofold role: first, it makes the least squares solution stable; second, it introduces a certain amount of "sparsity" to the solution, although this sparsity is weaker than that induced by the ℓ1 norm. The SRC model shows that a test sample need not be represented by all training samples but can be sparsely represented over a dictionary. Therefore, we follow two key points: the CR mechanism to improve classification, and the sparsity constraint condition. In the SRC analysis, a test sample is represented as a linear combination of all the training samples, and the coefficient of a training sample in the linear combination acts as the weight of that training sample. The method assigns the test sample to the class that produces the minimum representation residual. A smaller coefficient denotes that some training samples contribute less; these training samples are inconclusive for classification. A class that contributes little to representing the test sample can be assigned a zero coefficient, and the linear combination of all the remaining training samples is reassessed. The remaining training samples are informative for exploiting an optimal representation of the test sample with the major relevant contributions. The proposed method removes only one candidate class per iteration: the removed class is the one farthest from the test sample in the linear combination that represents it. The classes that are very far from the test sample are excluded first, and the classification decision depends only on the remaining classes. Experiments conducted on the ORL, FERET, and AR face databases demonstrate the effectiveness of the proposed method, with recognition rates of up to 97.88%, 67.95%, and 94.50%, respectively. A novel SRC fusion method using a dynamical class-elimination mechanism and a greedy search strategy is thus developed. The method establishes a heuristic iterative algorithm to obtain the supervised sparse representation of the test sample and then exploits an optimal representation of training samples from the classes with the major relevant contributions. The test sample is classified into the class whose contribution has the minimum error. In this method, the sparse factors exerting influence on the structure knowledge can be incorporated into the sparse representation via iterative measurement of the sparsity constraint. Experimental results show the feasibility and effectiveness of the proposed method. This type of sparse learning method is a new attempt compared with traditional popular representation methods in image recognition, can be clearly interpreted, and can be successfully applied to classification. Future work can place emphasis on the quality of dictionary learning; introducing fuzzy weights into each atom, giving atoms a varying fuzzy degree of popularity, is also worth studying.
      Keywords: sparse representation; greedy algorithm; face recognition; classifications
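A rough sketch of the one-class-at-a-time elimination loop, using plain least squares in place of the paper's error-constrained OMP; `D` is a column-wise dictionary of training samples and `labels` holds the per-column class labels. Scoring each class's contribution by its coefficient energy is an assumption, not the paper's exact criterion.

```python
import numpy as np

def src_class_elimination(D, labels, y, n_keep=5):
    """Greedy class-elimination sketch: repeatedly solve a least-squares
    collaborative representation over the remaining classes and drop the
    class whose coefficients contribute least, until n_keep classes remain."""
    labels = np.asarray(labels)
    classes = list(np.unique(labels))
    while len(classes) > n_keep:
        mask = np.isin(labels, classes)
        coef, *_ = np.linalg.lstsq(D[:, mask], y, rcond=None)
        contrib = {c: np.sum(coef[labels[mask] == c] ** 2) for c in classes}
        classes.remove(min(contrib, key=contrib.get))      # drop weakest class
    best, best_res = None, np.inf                          # final decision:
    for c in classes:                                      # minimum class-wise residual
        Dc = D[:, labels == c]
        cc, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        res = np.linalg.norm(y - Dc @ cc)
        if res < best_res:
            best, best_res = c, res
    return best
```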
    • Sketch face recognition combining AdaBoost and blocked LBP

      Zhou Xi, Cao Lin
      Vol. 20, Issue 1, Pages: 50-58(2015) DOI: 10.11834/jig.20150106
      Abstract: Sketch face recognition, a type of heterogeneous face recognition, is a difficult research area in criminal investigation. Blocked local binary pattern (LBP) features are used according to the characteristics of sketch face recognition, and the features that can discriminate between the sketch face image and the visible face image are extracted by using the AdaBoost algorithm. After registering the sketch image and the visible image, the image is divided into blocks, and the LBP histogram of each block is calculated. This LBP histogram is used to select the features of each block. The log probability statistics of the sketch and visible images are calculated, and features are extracted by using the AdaBoost algorithm. Features that discriminate effectively are chosen step by step and are used in unknown sketch face recognition. Crossover and non-crossover experiments are conducted on an existing sketch database, and the recognition rates are 99% and 100%, respectively. The results prove that the proposed algorithm is an effective sketch face recognition approach. This method can be used in sketch face recognition after optimization.
      Keywords: sketch face recognition; blocking LBP; AdaBoost algorithm; feature extraction
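The blocked-LBP feature extraction can be sketched with scikit-image; sklearn's AdaBoostClassifier could then select discriminative features from these vectors. The grid size and LBP settings below are illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def blocked_lbp_histograms(image, grid=(8, 8), n_points=8, radius=1):
    """Divide a grayscale face image into blocks and compute a normalized
    uniform-LBP histogram per block; a boosting stage can then pick the
    most discriminative block features."""
    lbp = local_binary_pattern(image, n_points, radius, method='uniform')
    n_bins = n_points + 2                       # uniform patterns + one non-uniform bin
    gh, gw = grid
    bh, bw = image.shape[0] // gh, image.shape[1] // gw
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / (hist.sum() + 1e-12))
    return np.concatenate(feats)
```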
    • Multi-cue object tracking based on motion consistency in a random field

      Chen Chenshu, Zhang Jun, Xie Zhao, Gao Jun
      Vol. 20, Issue 1, Pages: 59-71(2015) DOI: 10.11834/jig.20150107
      Abstract: The relationships among different cues are established to improve the robustness of a tracking method. A simple but effective model is utilized so that the tracking method is easy to implement. A motion-consistency constraint is proposed among the objects represented by different cues. A chain-structured Markov random field is used to express the objects represented by different cues and the constraint among them. The tracking problem is thereby converted into a simple optimization of the target function of a Markov random field. The cues used in the experiment are the luminance histogram, the oriented gradient histogram, and the local binary pattern. A comparison between several state-of-the-art tracking methods and the proposed method on 15 video sequences shows the effectiveness of the latter. The proposed method has low position error and high tracking accuracy when an object is affected by occlusion, motion blur, illumination changes, and clutter. The motion-consistency constraint enhances the relationships among different cues to a certain degree. Expressing the constraint and the objects represented by different cues through a chain-structured Markov random field improves the robustness of the tracking method and makes it easy to implement.
      Keywords: object tracking; multi cues; motion consistency; random field model
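Because the cues and the motion-consistency constraint form a chain-structured MRF, the minimum-energy assignment can be found exactly by dynamic programming. The sketch below is a generic Viterbi-style solver under assumed inputs: `unary[t][k]` is the matching cost of candidate location k under cue t, and `candidates[t]` holds the (x, y) positions of those candidates; the paper's actual potentials are not reproduced.

```python
import numpy as np

def fuse_cues_chain_mrf(unary, candidates, lam=1.0):
    """Exact MAP inference on a chain MRF over cues: the pairwise term
    penalizes positional (motion) inconsistency between adjacent cues."""
    T, K = len(unary), len(candidates[0])
    cost = np.array(unary[0], dtype=float)
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        prev = np.asarray(candidates[t - 1], dtype=float)
        curr = np.asarray(candidates[t], dtype=float)
        pair = lam * np.linalg.norm(curr[None, :, :] - prev[:, None, :], axis=-1)
        total = cost[:, None] + pair              # K_prev x K_curr
        back[t] = np.argmin(total, axis=0)
        cost = total[back[t], np.arange(K)] + np.asarray(unary[t], dtype=float)
    states = [int(np.argmin(cost))]               # backtrack the optimum
    for t in range(T - 1, 0, -1):
        states.append(int(back[t][states[-1]]))
    return states[::-1]
```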
    • Improved tracking algorithm with background-weighted histogram

      Tian Hao, Ju Yongfeng, Meng Fankun, Li Fufan
      Vol. 20, Issue 1, Pages: 72-84(2015) DOI: 10.11834/jig.20150108
      Abstract: The mean shift (MS) object tracking algorithm with a corrected background-weighted histogram (CBWH) only updates the CBWH and lacks an object template update. Moreover, it exhibits poor robustness in cases of object occlusion. Our algorithm exploits the reliability of the Kalman filter (KF) in object state prediction and parameter updating, and applies two layers of the KF framework to MS with CBWH. The first KF layer, which predicts the object location, achieves adaptive tracking by relating the KF noise to the Bhattacharyya coefficient, and thus reduces the effect of occlusion on the tracking results. The second KF layer, which updates the object template, synchronizes the updates of the object template and the CBWH by filtering each nonzero element in the object template, and consequently reduces the effect of changes in object features on the tracking results. Under background interference, occlusion, and characteristic change, the average tracking errors of our algorithm, MS with CBWH, and traditional MS are 5.43, 19.2, and 51.43, respectively. This result shows that the tracking precision of our algorithm is the highest. Our algorithm also performs well in real time. By adding two layers of the KF framework to MS with CBWH, our algorithm addresses the weaknesses of the initial algorithm, which does not provide a template update and exhibits poor robustness in cases of object occlusion. The effectiveness of our algorithm is verified in the experiments.
      Keywords: object tracking; mean shift; corrected background-weighted histogram; two layers of the Kalman filter; Bhattacharyya coefficient; template update
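A minimal sketch of the first KF layer under a constant-velocity model, with the measurement noise inflated when the Bhattacharyya coefficient `rho` of the mean-shift result is low (e.g. under occlusion); the exact noise-coefficient relationship used in the paper is not reproduced here.

```python
import numpy as np

class PositionKF:
    """Constant-velocity Kalman filter: trust the mean-shift measurement
    less when its Bhattacharyya coefficient rho is small."""
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])        # [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                               # observe position only
        self.Q, self.r = q * np.eye(4), r

    def step(self, z_ms, rho):
        self.x = self.F @ self.x                            # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        R = (self.r / max(rho, 1e-3)) * np.eye(2)           # low rho -> large R
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z_ms) - self.H @ self.x)   # update
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```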
    • Double-sided shred restoration based on English letter features

      Zhou Yifan, Wang Songjing, Huang Yongbin
      Vol. 20, Issue 1, Pages: 85-94(2015) DOI: 10.11834/jig.20150109
      Abstract: By combining an image processing technique with English letter features, we propose a new algorithm based on clustering and global optimization for double-sided shred restoration. The image processing technique is applied to eliminate the parts of letters at different height levels. The parameters (pixel differences) that describe the matching degree of adjacent shreds are obtained from the preprocessed images, and the parameters (correlation coefficients) of the matching degree between shreds and rows are obtained from the post-processed images. Using these parameters, the optimization problem is divided into two sub-problems. The first is to establish a global optimal clustering model that minimizes the maximum pixel difference. The second is to translate the problem of matching adjacent shreds in the same row into a traveling salesman problem (TSP); a global optimization model is developed to solve the TSP for each row. Our simulation results demonstrate that the proposed image processing technique considerably reduces the negative influence of letter parts at different height levels. The two feature parameters capture most of the matching-degree information, and recovery accuracy reaches over 90%. This study presents an efficient algorithm for shred restoration based on clustering and global optimization. Experimental results show that the proposed algorithm significantly reduces the complexity of the optimization problem while achieving good restoration results, and it has practical significance in restoring the output of paper shredders.
      Keywords: shreds restoration; clustering; global optimization; traveling salesman problem; image processing techniques
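The per-row matching step can be sketched as an edge-difference cost plus a TSP heuristic. The greedy nearest-neighbour ordering below stands in for the paper's global TSP model, and the whitest-left-edge start rule assumes dark text on a light background; all names are illustrative.

```python
import numpy as np

def edge_cost(left, right):
    """Matching cost between two grayscale shreds: pixel difference between
    the right edge column of `left` and the left edge column of `right`."""
    return float(np.abs(left[:, -1].astype(int) - right[:, 0].astype(int)).sum())

def order_row_greedy(shreds):
    """Nearest-neighbour heuristic for the per-row TSP: start from the shred
    whose left edge is blankest (the page margin) and always append the
    cheapest-to-attach remaining shred."""
    start = int(np.argmax([s[:, 0].mean() for s in shreds]))   # whitest left edge
    order, remaining = [start], set(range(len(shreds))) - {start}
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: edge_cost(shreds[last], shreds[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```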
    • Fast SURF keypoint image registration algorithm using fusion features

      Luo Tianjian, Liu Binghan
      Vol. 20, Issue 1, Pages: 95-103(2015) DOI: 10.11834/jig.20150110
      Abstract: Existing image registration algorithms produce poor results when the color content of the original images is simple, and their time complexity is high. Thus, we propose a fast speeded-up robust features (SURF) keypoint registration algorithm that uses fusion features. The color invariance margin and the central symmetry local binary pattern (CS-LBP) are first extracted as fusion features, and the variance of the quantized color histogram is calculated as the weight of the fusion features. SURF keypoints and descriptors are extracted on the grayscale image. The nearest-neighbor matching method provides rough matches, and an improved random sample consensus (RANSAC) algorithm refines them. The least square method (LMS) then estimates the transform relation used for registration. Experimental results show that the proposed algorithm can extract robust SURF features from the fusion features, and using these features for image registration improves precision by 5% and decreases time complexity by 15%. The method is thus applicable to images in a wide range of situations. The proposed algorithm obtains enough keypoints to improve precision and robustness, and the improved RANSAC decreases time complexity and the number of iterations.
      Keywords: speeded-up robust features keypoints; color invariance margin; central symmetry-local binary patterns texture; random sample consensus; least square method
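A plain OpenCV pipeline approximating the matching stages named above (SURF keypoints, nearest-neighbour rough matching, RANSAC refinement); the paper's fusion-feature weighting and LMS step are omitted. Note that SURF_create lives in opencv-contrib-python and may be unavailable in patent-restricted builds.

```python
import cv2
import numpy as np

def register_surf(img1, img2, ratio=0.75):
    """Register grayscale img1 onto img2: SURF keypoints, ratio-test rough
    matching, then RANSAC homography as the fine-matching stage."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]               # rough matches
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)      # fine matches
    return cv2.warpPerspective(img1, H, img2.shape[1::-1]), H
```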
    • Cross-based bidirectional adaptive window for stereo matching

      Fu Limei, Peng Guohua
      Vol. 20, Issue 1, Pages: 104-112(2015) DOI: 10.11834/jig.20150111
      Abstract: Region-based local matching methods are the simplest and most effective stereo matching algorithms. Considering the problem of window selection in local methods, we propose a cross-based bidirectional adaptive window matching algorithm. In this algorithm, we construct the support window adaptively by a cross-based bidirectional search, which is based on the correlation of intensity and disparity in the image patches, and obtain a mask window. Integral images are adopted to calculate the matching costs in the mask window, and a disparity map is thereby obtained. Two post-processing steps are implemented: Union Jack-shaped voting and bilateral filtering. The proposed method is applied to different stereo images, and match windows adapted to the image structures are obtained. The matching accuracy is increased by 30% for Teddy compared with the original cross-based method, and the two-step disparity post-processing preserves image edges well. Experimental results show that the proposed algorithm alleviates the depth edge expansion problem introduced by regular windows and improves the robustness and depth accuracy of the algorithm.
      Keywords: disparity map; cross-based bidirectional; adaptive window; Union Jack shape voting; bilateral filtering
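The cross construction at the heart of such methods can be sketched for a grayscale image: each arm grows while the intensity difference to the anchor pixel stays below a threshold. The thresholds are illustrative, and the paper's bidirectional search and mask-window logic are not reproduced.

```python
import numpy as np

def cross_arms(img, y, x, tau=20, max_arm=17):
    """Grow the four arms of the cross at (y, x): extend while the intensity
    difference to the anchor stays below tau (the color-similarity rule of
    cross-based cost aggregation). Returns (up, down, left, right) lengths."""
    h, w = img.shape
    anchor = int(img[y, x])

    def grow(dy, dx):
        n = 0
        while n < max_arm:
            yy, xx = y + (n + 1) * dy, x + (n + 1) * dx
            if not (0 <= yy < h and 0 <= xx < w):
                break
            if abs(int(img[yy, xx]) - anchor) > tau:
                break
            n += 1
        return n

    return grow(-1, 0), grow(1, 0), grow(0, -1), grow(0, 1)
```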
    • Luo Nan, Sun Quansen, Chen Qiang, Ji Zexuan, Xia Deshen
      Vol. 20, Issue 1, Pages: 113-124(2015) DOI: 10.11834/jig.20150112
      Abstract: In many computer vision tasks, one of the core steps is to set up reliable point correspondences between two images. Although image matching methods based on local descriptors have been well studied, they are usually unable to find the correct corresponding points in images containing repetitive patterns, even when the viewpoint changes are very small. Because of the local information ambiguities of images containing repetitive patterns, false matches are easily produced by local-feature-based image matching algorithms. Meanwhile, matching algorithms that incorporate global features still depend on a main orientation obtained from local information, so they also tend to mismatch images with repetitive patterns. Coping with this challenging matching task is therefore meaningful, since such repetitive patterns widely exist in real-world images of artificial objects and scenes. To solve this problem, a novel image matching algorithm based on pair-wise feature points is proposed in this paper. First, the FAST detector, an effective and efficient feature detection method, is adopted to estimate the locations of the feature points. Then, the direction vector between the pair-wise points is used as the main orientation, which provides the right direction for both local and global feature description. In addition, the local DAISY descriptor and an improved global context descriptor are used in the proposed algorithm to improve the matching ability. We evaluate the proposed method on both simulated and real images against several state-of-the-art algorithms. In the simulated-image experiments, the proposed method outperforms the others in the mean and standard deviation of the matching accuracy. In the real-image experiments, the test datasets contain stereo matching images and remote sensing images; the average matching correct rate of the proposed algorithm exceeds 88%, at least 26% higher than that of the other classical matching methods. Experiments on both simulated and real images, as well as comparisons with state-of-the-art methods, demonstrate the effectiveness and robustness of the proposed method. Moreover, the proposed algorithm is an effective approach to the problem of matching images with repetitive patterns.
      Keywords: image matching; repetitive patterns; pair-wise feature points; local feature; global feature
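The key trick, replacing the gradient-based main orientation with the direction between paired points, can be sketched as follows, pairing each FAST keypoint with its nearest neighbour; how the paper actually forms the pairs may differ, so treat the pairing rule as an assumption.

```python
import numpy as np

def pairwise_orientations(points):
    """For each keypoint, take the direction vector to its nearest keypoint
    as the descriptor's main orientation, replacing the gradient-based
    orientation that repetitive patterns make ambiguous."""
    pts = np.asarray(points, dtype=float)              # (N, 2) keypoint coords
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                        # exclude self-pairing
    partner = np.argmin(d, axis=1)                     # nearest neighbour as pair
    vec = pts[partner] - pts
    return np.arctan2(vec[:, 1], vec[:, 0])            # orientation in radians
```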
    • Fu Xiaowei, Wang Yi, Chen Li, Tian Jing
      Vol. 20, Issue 1, Pages: 125-131(2015) DOI: 10.11834/jig.20150113
      Abstract: Ultrasonography is one of the most important modalities in medical imaging, and medical ultrasound images play a significant role in medical imaging techniques. However, medical ultrasound images are always contaminated by speckle noise, a granular pattern whose visual effect differs from that of point-like Gaussian white noise. Speckle noise seriously degrades the quality of medical ultrasound images: in a contaminated image, the observer has difficulty discriminating fine details and structural features, which hinders the application of ultrasound images in clinical diagnosis and treatment. In this paper, a quantum-inspired diffusion coefficient is introduced to address the challenge of despeckling while preserving the edge details and structural features of ultrasound images. The proposed method improves the diffusion coefficient in the traditional P-M (Perona-Malik) equations by drawing on foundational concepts from quantum theory, and an anisotropic diffusion model is built on the basis of the traditional P-M equations. The proposed quantum-inspired diffusion coefficient changes with the gradient direction to take advantage of the better directional selectivity of wavelet coefficients, and the improved anisotropic diffusion model strengthens the optimization of this coefficient. Thus, a novel quantum-inspired partial differential equation-based despeckling method for medical ultrasound images is proposed. Experiments are conducted on both images with simulated speckle noise and real medical ultrasound images to compare the proposed method with other classic despeckling methods. Among all compared methods, the proposed method obtains the best objective evaluations, such as signal-to-noise ratio, edge preservation measure, structural similarity index, and equivalent number of looks. Experimental results on both simulated and real images demonstrate that the proposed method efficiently reduces speckle noise while maintaining the edges, details, and structure of the images. In summary, an effective despeckling method inspired by quantum theory and based on partial differential equations is proposed for medical ultrasound images, and experiments demonstrate its effectiveness. The proposed method effectively reduces speckle noise in medical ultrasound images while maintaining details, edges, and structure, and good despeckling results are obtained. The introduction of quantum theory not only provides a solution to present medical ultrasound image despeckling problems but also inspires researchers to apply the theory to various medical image processing methods. A new path of interdisciplinary exploration can be opened in medical image processing research, and even in other complicated problems in various disciplines, by combining quantum-inspired theory or other advanced theories from multiple fields to gain better processing performance.
      Keywords: quantum-inspired; partial differential equation; medical ultrasound images; despeckling
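The underlying P-M iteration is easy to sketch; the paper's contribution is the quantum-inspired diffusion coefficient, which is left here as a pluggable `coeff` function with the classical rational coefficient as a placeholder.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=30.0, dt=0.2, coeff=None):
    """Perona-Malik-style diffusion; `coeff` maps gradient magnitude to a
    diffusion coefficient (the paper substitutes a quantum-inspired form)."""
    if coeff is None:
        coeff = lambda g: 1.0 / (1.0 + (g / kappa) ** 2)   # classical P-M choice
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u     # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (coeff(np.abs(dn)) * dn + coeff(np.abs(ds)) * ds
                   + coeff(np.abs(de)) * de + coeff(np.abs(dw)) * dw)
    return u
```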
    • Ye Zhen, He Mingyi
      Vol. 20, Issue 1, Pages: 132-139(2015) DOI: 10.11834/jig.20150114
      Abstract: High spectral dimensionality and strong interband correlation hinder classification of hyperspectral data. To improve classification accuracy, a hyperspectral decision fusion classification method based on principal component analysis (PCA) and windowed wavelet transform is proposed in this study. A correlation coefficient matrix is used to group the original hyperspectral data, and PCA is applied to reduce the spectral dimensions of the data in each group. The proposed windowed wavelet transform method is used to extract spatial features, and a linear opinion pool is employed to fuse the classification results from multiple classifiers. On two hyperspectral data sets from different sensors, the proposed algorithm obtains higher classification accuracy and Kappa coefficient than five existing algorithms; its classification accuracy exceeds that of the support vector machine with radial basis function kernel (SVM-RBF) by approximately 8%. Experimental results show that the proposed method can exploit spectral-spatial information from hyperspectral imagery, improve classification accuracy efficiently, and provide outstanding classification performance under small sample sizes and noisy environments.
      Keywords: hyperspectral classification; principal component analysis; wavelet transform; decision fusion
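A condensed sketch of the band-grouping and per-group PCA stage, with contiguous band splitting standing in for the paper's correlation-matrix grouping; the windowed wavelet features and the linear-opinion-pool fusion are not shown, and the group/component counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def group_and_reduce(cube, n_groups=4, n_comp=5):
    """Group spectrally adjacent (highly correlated) bands and run PCA per
    group. cube: H x W x B hyperspectral image; returns H x W x (n_groups*n_comp)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    feats = []
    for g in np.array_split(np.arange(b), n_groups):    # contiguous band groups
        feats.append(PCA(n_components=n_comp).fit_transform(X[:, g]))
    return np.concatenate(feats, axis=1).reshape(h, w, -1)
```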
    • Ma Xiaoshuang, Shen Huanfeng, Yang Jie, Zhang Liangpei
      Vol. 20, Issue 1, Pages: 140-150(2015) DOI: 10.11834/jig.20150115
      Abstract: The presence of speckle noise degrades the quality of polarimetric synthetic aperture radar (PolSAR) images, and suppressing speckle is an essential preprocessing step when using SAR data. In this study, a technique based on a nonlocal weighted linear minimum mean squared error (LMMSE) filter is proposed for filtering speckle in polarimetric SAR images. The concept of nonlocal means is employed to evaluate the weights of the samples in the LMMSE estimator. In the pixel sample selection process, the polarimetric scattering property and the heterogeneity of the neighboring patch of the processed pixel are utilized to discard unrelated pixels. The algorithm preserves point targets and is accelerated by adaptively changing the size of the patch window. Experiments on polarimetric SAR images show that the quality of the images filtered by the proposed algorithm is improved. Compared with images filtered by traditional LMMSE filters, the equivalent number of looks of images filtered by the proposed method increases by over 10 looks for single-look and multi-look speckled images, and the peak signal-to-noise ratio increases by 5.8 dB. The overall classification accuracy of the filtered images is higher than 83%. The proposed method is also more computationally efficient than nonlocal means algorithms. The proposed filter effectively reduces speckle and preserves edges, features, and the polarimetric scattering mechanism, and thus supports the efficient use of SAR data.
      Keywords: polarimetric synthetic aperture radar; speckle noise; nonlocal means; linear minimum mean squared error filter
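The nonlocal-weighted LMMSE estimate for a single pixel might be sketched as below for a single-channel intensity image; the paper operates on full polarimetric covariance data and adds scattering-based sample rejection, which is omitted. The multiplicative-noise variance model sigma_n^2 * m^2 is an assumption, as are all parameter values.

```python
import numpy as np

def nl_lmmse_pixel(y, cy, cx, search=10, patch=3, h2=100.0, sigma_n2=0.05):
    """Nonlocal-weighted LMMSE for one pixel: patch-similarity weights (as in
    nonlocal means) supply the weighted mean m and variance v that feed the
    LMMSE estimate m + var_x/(var_x + noise)*(y0 - m). Assumes (cy, cx) is at
    least patch//2 pixels from the image border."""
    H, W = y.shape
    p = patch // 2
    ref = y[cy - p:cy + p + 1, cx - p:cx + p + 1]
    ws, vals = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = cy + dy, cx + dx
            if not (p <= yy < H - p and p <= xx < W - p):
                continue
            cand = y[yy - p:yy + p + 1, xx - p:xx + p + 1]
            ws.append(np.exp(-np.sum((ref - cand) ** 2) / h2))
            vals.append(y[yy, xx])
    ws, vals = np.array(ws), np.array(vals)
    m = np.sum(ws * vals) / ws.sum()                  # weighted mean
    v = np.sum(ws * (vals - m) ** 2) / ws.sum()       # weighted variance
    var_x = max(v - sigma_n2 * m ** 2, 0.0)           # signal variance estimate
    return m + var_x / (var_x + sigma_n2 * m ** 2 + 1e-12) * (y[cy, cx] - m)
```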
    • Fast visualization method for UWB SAR images based on 3σ

      Li Chao, Li Yueli, An Daoxiang, Wang Guangxue
      Vol. 20, Issue 1, Pages: 151-158(2015) DOI: 10.11834/jig.20150116
      Abstract: Visualization of ultra-wideband synthetic aperture radar (UWB SAR) data involves mapping high-dynamic-range amplitude values to the gray values of a lower-dynamic-range display device. This step is vital in the processing of UWB SAR images. However, traditional visualization methods are unsuitable for UWB SAR images because they do not consider the characteristics of UWB SAR data and because they require long processing times. To compress the dynamic range of UWB SAR data in a shorter time, a new fast visualization method based on screening the data of UWB SAR images is proposed. In the new method, the distributions of low-frequency UWB SAR data and gray-value images are first discussed to obtain a reasonable distribution model for UWB SAR images. A 3σ criterion and an amended mapping function are used to screen the image data, so the high dynamic range of amplitude values is compressed into a small dynamic range. The quality of the resulting UWB SAR images is evaluated to determine which image allows the human visual system to obtain more geographic information. The proposed method requires less time than the original method, and its performance indicators are better. Dark pixels are stretched appropriately, and bright details are preserved. Moreover, the images produced by the new method are well suited to the human visual system. Therefore, this method can play a major role in the real-time processing of UWB SAR image data.
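A minimal sketch of the 3σ screening idea: amplitudes beyond mean + 3·std are clipped before linear quantization to 8 bits, so rare bright scatterers no longer swamp the gray-level range. The paper's amended mapping function is not reproduced, and the clipping rule here is only the textbook 3σ criterion.

```python
import numpy as np

def sar_3sigma_quantize(amplitude):
    """Clip SAR amplitudes above mean + 3*std, then map linearly to 8-bit gray."""
    a = np.abs(amplitude).astype(float)
    hi = a.mean() + 3.0 * a.std()          # 3-sigma upper screening bound
    a = np.clip(a, 0.0, hi)
    return (255.0 * a / (hi + 1e-12)).astype(np.uint8)
```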
        