Latest Issue

    Volume 21, Issue 5, 2016
    • Image engineering in China: 2015

      Zhang Yujin
      Vol. 21, Issue 5, Pages: 533-543(2016) DOI: 10.11834/jig.20160501
      Abstract: This is the twenty-first in the annual series of bibliographic surveys on image engineering in China. The main purpose of this survey is to capture the up-to-date development of image engineering in China, to provide a convenient literature-searching facility for readers working in related areas, and to supply useful recommendations for journal editors and potential authors of papers. Considering the wide distribution of related publications in China, 723 references on image engineering research and techniques were carefully selected from 2975 research papers published in 148 issues of a set of 15 Chinese journals. These 15 journals are considered important journals in which papers concerning image engineering are of higher quality and relatively concentrated. The selected references are classified first into 5 categories (image processing, image analysis, image understanding, technique application, and survey) and then into 23 specialized classes according to their main contents (the same as in the last 10 years). Analyses and discussions of the statistics on the classification results, by journal and by category, are also presented. According to the analysis of the 2015 statistics, image analysis is receiving the most attention, the number of publications on target tracking has increased significantly, image segmentation and edge detection are still the focus of research, and image matching and fusion in remote sensing, mapping, and other applications have once again become a hot spot. This work presents a general and readily accessible picture of the progress, in both depth and breadth, of image engineering in China in 2015.
      Keywords: image engineering; image processing; image analysis; image understanding; technique application; literature survey; literature statistics; literature classification; bibliometrics
    • Object-tracking method based on improved cost-sensitive Adaboost

      Xue Yizhe, Wang Tuo
      Vol. 21, Issue 5, Pages: 544-555(2016) DOI: 10.11834/jig.20160502
      Abstract: Visual tracking is one of the most active computer vision research topics because of its wide range of applications. Currently, target-tracking problems are often solved through online learning and detection methods. A tracking task can be considered a binary classification problem solved using an online learning method. However, in the process of online learning, classifier training takes a considerable amount of time to improve recognition accuracy. In this study, a method using the Adaboost algorithm is proposed to solve this problem. The algorithm initially trains weak classifiers on a certain number of beginning frames and subsequently performs only as a detector, without further training, to address real-time and accuracy requirements. The Haar feature needs to be simplified because its computational cost remains a burden for real-time tracking; thus, we remove the Haar orientation to facilitate calculation. Positive samples, i.e., samples containing the target, are always the minority in tracking; as a result, the training samples are imbalanced. Accordingly, the algorithm needs to focus more on the positive samples to achieve a higher detection rate. The equal treatment of false positives and false negatives by Adaboost may no longer be appropriate. In this case, we choose a cost-sensitive Adaboost to achieve a higher detection rate for positives. Furthermore, given that misclassified samples appear more often because of the complex environment in visual tracking, we add a new parameter to the sample weight-updating formula of the cost-sensitive Adaboost to give more weight to misclassified samples, which consequently receive more focus from the classifier. Finally, we propose a tracking method based on the simplified Haar feature as the descriptor and the improved cost-sensitive Adaboost as the classifier with an online learning strategy. In our experiments, we compared our method with two state-of-the-art algorithms and the original cost-sensitive method in both accuracy and processing speed. We tested the different methods on 20 benchmark video sequences. In terms of accuracy, the average representative precision of our method is approximately 26% higher than that of the compressive tracking method and approximately 11% higher than that of the original cost-sensitive method. In terms of processing speed, the average frame rate of our method is approximately 38% faster than that of the compressive tracking method. Our method is based on a modified cost-sensitive Adaboost that focuses more on the minority positive samples to improve the detection rate. The proposed method performs well in terms of accuracy and speed, especially for non-rigid objects such as human bodies.
      Keywords: object tracking; cost-sensitive Adaboost; simplified Haar feature; parameter for weight updating; online learning; non-rigid objects
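      The core modification described in the abstract is a cost-sensitive re-weighting of misclassified samples during boosting. The sketch below illustrates one round of such an update with an asymmetric cost and an extra emphasis factor for misclassified samples; the cost values, the emphasis factor beta, and the decision-stump weak learner are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cost_sensitive_boost_round(X, y, w, cost_pos=2.0, cost_neg=1.0, beta=1.5):
    """One round of a cost-sensitive AdaBoost-style update (illustrative sketch).

    X : (n, d) feature matrix, y : labels in {-1, +1}, w : current sample weights.
    cost_pos/cost_neg : asymmetric misclassification costs (positives weighted more).
    beta : extra emphasis factor for misclassified samples (assumed parameter).
    """
    n, d = X.shape
    best = None
    # Weak learner: a decision stump, i.e. the single-feature threshold with lowest weighted error.
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, thr, sign, pred)
    err, j, thr, sign, pred = best
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)         # weak-classifier weight
    cost = np.where(y == 1, cost_pos, cost_neg)   # asymmetric per-sample cost
    miss = (pred != y)
    # Misclassified samples receive both the cost factor and the extra emphasis beta.
    w = w * np.exp(-alpha * y * pred * cost)
    w[miss] *= beta
    return w / w.sum(), (j, thr, sign, alpha)
```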
    • Image super-resolution using two-channel convolutional neural networks

      Xu Ran, Zhang Junge, Huang Kaiqi
      Vol. 21, Issue 5, Pages: 556-564(2016) DOI: 10.11834/jig.20160503
      Abstract: Traditional example-based super-resolution methods adopt image-gradient features for low-resolution images and thus cannot characterize the low-resolution space satisfactorily. To address this issue, this paper proposes a novel unified framework for image super-resolution that effectively combines example-based methods with deep learning models. The proposed method consists of three main stages: a low- and high-resolution similarity-learning stage, a high-resolution patch-dictionary-learning stage, and a high-resolution patch-generating stage. In the first stage, two different convolutional neural networks are proposed for learning a novel similarity metric between high- and low-resolution image patches. In the second stage, the high-resolution patch dictionaries are learned from training sets. In the last stage, the high-resolution patches are generated based on the learned similarities between the input low-resolution patch and the atoms in the high-resolution patch dictionary. Experimental results on several commonly adopted datasets show that the proposed two-channel model quantitatively and qualitatively outperforms other methods. The proposed two-channel model preserves more detailed information and reduces ringing artifacts in the resulting images.
      Keywords: image super-resolution; pair-wise convolutional neural networks; two-channel convolutional neural networks; patch similarity learning
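      The similarity-learning stage can be pictured as a two-branch (two-channel) network that scores how well a low-resolution patch matches a candidate high-resolution dictionary atom. The PyTorch sketch below is a hypothetical illustration of that idea only; the layer sizes, patch sizes, and regression head are assumptions and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class TwoChannelSimilarityNet(nn.Module):
    """Illustrative two-branch network scoring LR/HR patch similarity (assumed layout)."""
    def __init__(self):
        super().__init__()
        # Separate convolutional branches for the LR and HR patches.
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
        self.lr_branch = branch()
        self.hr_branch = branch()
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, lr_patch, hr_patch):
        f_lr = self.lr_branch(lr_patch).flatten(1)   # (N, 64)
        f_hr = self.hr_branch(hr_patch).flatten(1)   # (N, 64)
        # Concatenate the two channels and regress a similarity score.
        return self.head(torch.cat([f_lr, f_hr], dim=1)).squeeze(1)

# Usage: score a batch of gray-scale patch pairs (resized to a common size in this sketch).
net = TwoChannelSimilarityNet()
lr = torch.randn(8, 1, 16, 16)
hr = torch.randn(8, 1, 16, 16)
scores = net(lr, hr)   # shape (8,)
```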
    • Ren Fuji, Li Yanqiu, Xu Liangfeng, Hu Min, Wang Xiaohua
      Vol. 21, Issue 5, Pages: 565-573(2016) DOI: 10.11834/jig.20160504
      Abstract: The LBP (local binary pattern) operator is sensitive to edges and noise. Thus, this study proposes a new descriptor called the uniform local mean pattern (ULMP). Considering the complementarity of global and local features for recognition, this study proposes a face recognition method based on the ULMP description and double weighted decision fusion for classification. First, we use the ULMP operator to derive the code map of the entire image. The eight binary codes are obtained by comparing eight average pixel values with the center pixel. Each of the eight values is the average pixel value in one of eight directions: three horizontal, three vertical, and two diagonal directions. Each binary code is multiplied by the corresponding weight coefficient and summed to derive the ULMP coding value of the center pixel and, in turn, the code map of the entire image. Then, the code map is divided into equal sub-blocks, and the histogram of each sub-block is computed to obtain the local texture features. The global texture feature is obtained by concatenating the histograms of the different sub-blocks. To emphasize the importance of different sub-blocks in the final recognition, this study introduces cloud-model and structure-based classifiers, constructing sub-image sets to obtain the weight of each sub-block. In the testing phase, the statistical characteristics of each block of a test sample are combined with a BP neural network to determine the posterior probability of each category. We use the weights calculated by the cloud model, fused with a linear weighted decision, to derive the local classification results. After obtaining the local and global classification results, weighted integration is conducted to obtain the final recognition result. Experimental results on the ORL and Yale face databases show that the ULMP exhibits good recognition performance. The average recognition rate is 95.9% on the ORL database with five test samples; the proposed approach improves on the recognition rates of LBP, MCT (modified census transform), LGP (local gradient patterns), ULBP, and CSLBP by 11.3%, 10.6%, 9.5%, 8.9%, and 3.9%, respectively. The average recognition rate is 97.4% on the Yale database with five test samples, improving on LBP, MCT, LGP, ULBP, and CSLBP by 18.9%, 17.7%, 17.1%, 10.7%, and 0.7%, respectively. At the same time, the double weighted decision fusion scheme proposed in this study achieves average recognition rates of 98.5% and 98.34% on the ORL and Yale databases, respectively; these rates are higher than those of any single module. In this study, a new texture-extraction operator called ULMP is proposed. The ULMP handles noise and edge information well and achieves a high recognition rate, making it suitable for extracting facial features. At the same time, the method employs the cloud model to derive weights that better integrate the local classifiers, which improves the overall performance of the system and yields a higher recognition rate. The ULMP is compared with other methods to verify its validity. The results show that the double weighted fusion model is a precise and effective method for face recognition.
      Keywords: face recognition; local mean pattern (LMP); double weighted decision fusion; cloud model
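      As described in the abstract, each pixel is encoded by comparing eight directional mean values with the center pixel and weighting the resulting bits by powers of two, in the style of LBP. The numpy sketch below illustrates that coding idea with directional means taken along the eight neighborhood directions up to a small radius; the exact neighborhood geometry and averaging windows used by the authors are assumptions here.

```python
import numpy as np

# Eight sampling directions (assumed): the 8-neighborhood around the center pixel.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def ulmp_code(img, r=2):
    """Illustrative uniform-local-mean-pattern coding (sketch, not the authors' exact operator).

    For every pixel, the mean gray value of the pixels lying along each of the eight
    directions (up to radius r) is compared with the center pixel; the eight resulting
    bits are weighted by powers of two to form an LBP-like code in [0, 255].
    """
    img = img.astype(np.float64)
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            c, bits = img[y, x], 0
            for k, (dy, dx) in enumerate(DIRS):
                vals = [img[y + s * dy, x + s * dx] for s in range(1, r + 1)]
                if np.mean(vals) >= c:      # directional mean vs. center pixel
                    bits |= (1 << k)        # weight the bit by 2**k
            code[y, x] = bits
    return code
```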
    • K-steps stabilization based on automatic clustering of shoeprint images

      Wang Xinnian, Shu Yingying
      Vol. 21, Issue 5, Pages: 574-587(2016) DOI: 10.11834/jig.20160505
      Abstract: Shoeprints serve as vital evidence in forensic investigations, and automatically clustering the massive numbers of shoeprint images accumulated over long periods has become one of the urgent tasks of criminal investigation technology. Unlike other image sets, the number of shoeprint categories is not only considerable but also unknown. Shoeprint images are distributed inhomogeneously and sparsely in the feature space, and the quantity of each class is low. For these reasons, most existing clustering algorithms cannot satisfactorily cluster shoeprints. In this study, an automatic clustering method is proposed to divide shoeprint sets effectively based on an analysis of the distribution of shoeprint images in the feature space. Through statistics on labeled shoeprint-image databases, we found that shoeprint sets of different patterns do not intersect, and a blank region, where no shoeprints exist, is present between every two classes. Blank regions are called margins in this paper. The core objective of the proposed algorithm is to find the margins between classes and use them to divide a shoeprint set. The process involves the following steps: 1) dividing the shoeprint set with monotonically increasing or decreasing thresholds, which are used to decide whether two shoeprint images belong to the same cluster; 2) searching for clusters that do not change over K consecutive partitions; 3) outputting the stable clusters and removing the shoeprints belonging to them from the dataset; 4) choosing the next threshold and dividing the remaining dataset; and 5) returning to step 2) until the remaining set is empty. Experimental results on two kinds of publicly available databases and one real shoeprint database comprising 5792 images show that the proposed algorithm outperforms state-of-the-art clustering algorithms on common clustering evaluation measures. The precision and F-measure of the proposed algorithm on the real shoeprint database are approximately 99.68% and 95.99%, respectively. In this study, based on the distribution of shoeprint images in the feature space, an automatic clustering algorithm that searches for margins between clusters to divide a dataset is proposed. The proposed algorithm achieves comparable or even better performance than its competitors when clustering a shoeprint dataset. Experiments have also shown that the performance of the algorithm is less sensitive to its parameter and to the shape of the clusters. The algorithm can also be applied to clustering other image datasets with characteristics similar to those of shoeprint images.
      Keywords: shoeprint image; clustering; margin; K-steps stabilization; reachable radius; cluster aggregation tree; clusters of arbitrary shape
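      The five numbered steps in the abstract amount to sweeping a linking threshold monotonically, emitting any cluster that stays unchanged for K consecutive partitions, and repeating on the remainder. The Python sketch below captures only that control flow, with single-linkage partitioning standing in for the per-threshold clustering; the distance function, threshold schedule, and K are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def k_steps_clustering(X, thresholds, K=3):
    """Illustrative sketch of the K-steps-stabilization control flow.

    X          : (n, d) feature matrix.
    thresholds : monotonically increasing linking thresholds (steps 1 and 4).
    K          : number of consecutive partitions a cluster must survive unchanged (step 2).
    """
    remaining = np.arange(len(X))
    output, history = [], []
    for t in thresholds:
        if len(remaining) < 2:
            break
        # Steps 1/4: partition the remaining set with the current threshold (single linkage here).
        labels = fcluster(linkage(pdist(X[remaining]), method="single"), t=t, criterion="distance")
        partition = {frozenset(remaining[labels == c].tolist()) for c in np.unique(labels)}
        history.append(partition)
        # Step 2: clusters appearing unchanged in the last K partitions are stable.
        if len(history) >= K:
            stable = set.intersection(*history[-K:])
            for cl in stable:
                # Step 3: output the stable cluster and remove its members from the dataset.
                output.append(sorted(cl))
                remaining = np.array([i for i in remaining if i not in cl])
            if stable:
                history.clear()   # membership indices changed, so restart the stability window
    # Step 5 stops when the remaining set is empty; any leftovers are returned as one group here.
    if len(remaining):
        output.append(sorted(int(i) for i in remaining))
    return output
```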
    • Dong Junning, Yang Cihui
      Vol. 21, Issue 5, Pages: 588-594(2016) DOI: 10.11834/jig.20160506
      Abstract: Aiming at reducing the high computation cost of the classic Gaussian mixture model (GMM), we propose a GMM with a spatial constraint (SCGMM). The main reason for the high computation cost of the GMM is that all pixel models are computed at every frame, and a large part of this computation is useless to the GMM. Thus, the SCGMM focuses on reducing the number of models involved in the computation of the GMM. For the three parts of the GMM, three methods are used to reduce its computation cost. In the initialization part, a fast initialization method is used to shorten the initial modeling process. The initialization of the GMM requires a large amount of statistical information from all pixels; each pixel must be involved in all operations, so the per-frame computation cost of the initialization part cannot be obviously reduced. For this reason, a simple adaptive learning rate is applied to reduce the number of frames required for initialization. In the moving-object detection part, a double background model is adopted. The moving-object detection result of the first, adaptive background model is used as the spatial constraint of the GMM to remove redundant GMM computation in regions without moving objects, while the GMM detection method is still used in regions that may contain moving objects to maintain accuracy. Therefore, the advantage of the SCGMM in the moving-object detection part is that it reduces the number of pixels involved in the GMM computation while maintaining the accuracy of the GMM. In the parameter-update part, multi-strategy adaptive model updating is adopted: the final moving-object detection result is used as the spatial constraint to reduce the number of pixels involved in parameter updates, and an adaptive learning rate and periodic global updates are applied to improve detection accuracy. With these methods, the performance of the GMM is evidently optimized. Experimental results show that the SCGMM has better performance and accuracy than GMM, CodeBook, GMG, and MODGMM (mean of deviation GMM), with processing speed increased by more than three times. Notably, the processing speed of the SCGMM is more than six times that of the GMM, and the percentage of pixels involved in the complex computation process is less than 20%. Compared with the GMM, the SCGMM offers better real-time performance and better accuracy in fixed-camera scenes.
      Keywords: moving object detection; Gaussian mixture models; spatial constraint
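      The double-background idea above can be approximated by letting a cheap first pass propose candidate motion regions and constraining the mixture-model output to those regions. The OpenCV sketch below uses frame differencing as the cheap first pass and OpenCV's MOG2 as a stand-in mixture model; the dilation size and thresholds are assumed values, and the paper's own GMM implementation (which skips the per-pixel model update outside the constrained regions, rather than merely masking the output) is not reproduced here.

```python
import cv2
import numpy as np

def spatially_constrained_step(prev_gray, gray, frame, mog2, diff_thresh=15):
    """One frame of a spatially constrained background-subtraction step (illustrative sketch)."""
    # Cheap first pass: frame differencing proposes regions that may contain motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, candidate = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    candidate = cv2.dilate(candidate, np.ones((15, 15), np.uint8))  # widen candidate regions

    # Second pass: mixture-model foreground, kept only inside the candidate regions.
    # (A true SCGMM would also skip the model update outside these regions to save computation.)
    fg = mog2.apply(frame)
    fg[candidate == 0] = 0
    return fg

cap = cv2.VideoCapture("video.avi")   # hypothetical input sequence
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = spatially_constrained_step(prev_gray, gray, frame, mog2)
    prev_gray = gray
```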
    • Object level saliency detection by hierarchical information fusion

      Li Bo, Jin Lianbao, Cao Junjie, Leng Chengcai, Lu Chunyuan, Su Zhixun
      Vol. 21, Issue 5, Pages: 595-604(2016) DOI: 10.11834/jig.20160507
      Abstract: Saliency detection, which is based on the simulation of human visual attention, is an important way to help computer sensors understand the world. Saliency detection has many applications in computer vision, such as image segmentation, image retrieval, and retargeting. However, saliency detection is a challenging computer vision task, and most existing saliency algorithms can only detect pixels or regions of interest. A new method based on hierarchical information fusion is proposed in this study to distinguish the salient object from complex background regions and to guarantee the uniformity of patches within the same object. The proposed method differs from state-of-the-art methods in that it uses mid-level superpixels and object-level regions to adjust the raw saliency map. First, edge-preserving filtering is adopted as a pretreatment, and the mid-level superpixels are then generated by the simple linear iterative clustering algorithm. Second, the mid-level raw saliency map is obtained by the saliency filter and adjusted by two priors, which reduce the influence of complex background regions. Afterward, the mid-level superpixels are clustered into object-level segments by spectral clustering, and an object boundary prior is defined to enhance the consistency of the saliency map. Finally, the saliency labels are diffused from superpixels to object-level regions by heat diffusion. Evaluation experiments against 16 other methods are conducted on the benchmark MSRA1000 database using the precision-recall curve and the F-measure score. By utilizing mid-level superpixels and object-level clustered regions, our method can reflect the hierarchical relationship between patches and objects well. The experimental results also show that our method is applicable to multi-target saliency detection.
      Keywords: saliency detection; hierarchical information fusion; edge-preserving filter; heat diffusion
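      The pipeline above first builds mid-level superpixels with SLIC and then groups them into object-level regions with spectral clustering before diffusing saliency. The sketch below shows only those two structural steps with scikit-image and scikit-learn; the feature used for clustering (mean Lab color), the number of superpixels, and the number of object-level segments are assumptions made for illustration.

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import slic
from sklearn.cluster import SpectralClustering

def object_level_regions(path, n_superpixels=300, n_regions=8):
    """Mid-level superpixels grouped into object-level regions (illustrative sketch)."""
    img = io.imread(path)
    labels = slic(img, n_segments=n_superpixels, compactness=10, start_label=0)

    # Describe each superpixel by its mean Lab color (assumed feature).
    lab = color.rgb2lab(img)
    n_sp = labels.max() + 1
    feats = np.array([lab[labels == i].mean(axis=0) for i in range(n_sp)])

    # Group superpixels into object-level segments with spectral clustering.
    sc = SpectralClustering(n_clusters=n_regions, affinity="rbf", gamma=0.05, random_state=0)
    region_of_sp = sc.fit_predict(feats)
    return labels, region_of_sp[labels]   # per-pixel superpixel labels and region labels
```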
    • Image saliency detection using regional covariance analysis

      Zhang Xudong, Lyu Yanyan, Miao Yongwei, Hao Pengyi, Chen Jiazhou
      Vol. 21, Issue 5, Pages: 605-615(2016) DOI: 10.11834/jig.20160508
      Abstract: The purpose of image saliency detection is to obtain high-quality saliency maps that reflect the significance of different image areas. Based on the saliency map, the visually salient regions of the input images can be processed efficiently, which benefits various applications, such as image segmentation, object detection, and object recognition. According to the theoretical analysis of regional covariance, the intrinsic properties of image superpixels can be described by a high-dimensional covariance matrix, and thus the dissimilarity between two image superpixels can be determined by the regional covariance distance. Using regional covariance analysis, a novel method for image saliency detection is proposed. First, the input image is preprocessed by superpixel segmentation. Then, the saliency of the superpixels is calculated using the regional covariance distance. Finally, the superpixel saliency is up-sampled to determine the saliency of the image pixels. In this study, we test 200 images selected from the THUS10000 dataset for saliency analysis and compare against 4 different detection schemes. Experimental results show that our saliency maps are close to the manually calibrated ground truth. Our method can effectively estimate the saliency of input images with complex backgrounds or with similar colors between foreground and background. By combining the high-dimensional intrinsic properties of image pixels and superpixels, our approach not only avoids the negative effect of single noisy pixels but also improves the accuracy of saliency detection. Moreover, by using the covariance matrices of image superpixels, the final saliency map is robust to the number of feature points, the order of image pixels, and illumination. The regional-covariance-based image saliency map can be applied to salient object extraction and image segmentation.
      Keywords: saliency analysis; regional covariance; superpixels; saliency map; image segmentation
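      The dissimilarity between two superpixels is measured here by a distance between their feature covariance matrices. A minimal sketch of computing a region covariance descriptor and a log-Euclidean distance between two regions follows; the per-pixel feature vector (position, Lab color, gradient magnitude) and the particular matrix metric are assumed choices, and the paper may use a different feature set or distance.

```python
import numpy as np
from scipy.linalg import logm
from skimage import color, filters

def region_covariance(img_rgb, mask):
    """Covariance descriptor of the pixels selected by a boolean mask (illustrative sketch)."""
    lab = color.rgb2lab(img_rgb)
    grad = filters.sobel(lab[..., 0])            # gradient magnitude of the lightness channel
    ys, xs = np.nonzero(mask)
    # Assumed per-pixel feature vector: (x, y, L, a, b, |grad|)
    F = np.stack([xs, ys,
                  lab[ys, xs, 0], lab[ys, xs, 1], lab[ys, xs, 2],
                  grad[ys, xs]], axis=1).astype(np.float64)
    return np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])   # regularized covariance

def log_euclidean_distance(C1, C2):
    """Log-Euclidean distance between two covariance matrices."""
    return np.linalg.norm(np.real(logm(C1)) - np.real(logm(C2)), ord="fro")
```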
    • Foreground discrimination in local model-matching tracking

      Liu Daqian, Liu Wanjun, Fei Bowen
      Vol. 21, Issue 5, Pages: 616-627(2016) DOI: 10.11834/jig.20160509
      Abstract: Under a complex background, most traditional model-matching tracking methods consider only the characteristics of the moving target without fully utilizing the relationship between the moving target and the image, especially when the target is occluded during tracking. Consequently, these methods drift easily and sometimes lose the moving target. To solve these problems, a novel object-tracking approach based on foreground discrimination with local model matching is proposed. First, the algorithm selects previous frames of the image sequence for tracking training, and each image frame is divided into superpixel blocks. Second, a vector cluster is composed of all superpixel blocks, and an object model containing superpixel blocks is established by the discriminative appearance model. Finally, the algorithm takes the object model as a matching model, adopts expectation maximization to estimate the foreground information, and uses foreground discrimination to perform local model matching; the tracked object is thereby determined. Compared with other leading tracking algorithms, the proposed algorithm can accurately and effectively adapt to complex changes in the target state of a video scene through foreground discrimination and local model matching and adequately solves the problem of tracking drift under various uncertain factors. The algorithm achieves the same or even higher tracking accuracy than existing model-matching tracking methods. For the video sequences Girl, Lemming, Liquor, Shop, Woman, Bolt, CarDark, David, and Basketball, the average center errors are 9.76, 28.65, 19.41, 5.22, 8.26, 7.69, 8.13, 11.36, and 7.66, respectively, and the tracking overlap ratios are 0.69, 0.61, 0.77, 0.74, 0.80, 0.79, 0.79, 0.75, and 0.69, respectively. The experimental results indicate that the proposed algorithm can adaptively update noise-model parameters in real time, accurately estimate the foreground information of images for different image sequences, eliminate background interference, and achieve good tracking accuracy and adaptability under partial occlusion, target deformation, illumination changes, and complex backgrounds.
      Keywords: foreground discrimination; superpixel; local model; expectation maximization; object tracking
    • Yang Jun, Yan Han, Wang Maozheng
      Vol. 21, Issue 5, Pages: 628-635(2016) DOI: 10.11834/jig.20160510
      Abstract: A new fused feature descriptor based on heat kernel and wave kernel signatures is proposed to compute shape correspondences between isometric 3D models. First, the Laplace operators of the 3D models are computed to obtain eigenvectors and eigenvalues, which serve as the basic quantities for computing the heat kernel and wave kernel signatures of the source and target models, respectively. Second, the heat kernel and wave kernel signatures are fused into a new feature descriptor, which serves as a constraint for points sampled uniformly at random. Finally, the shape correspondences between the models are computed by a minimum-value matching algorithm. Experimental results show that the accuracy of the correspondences computed with the proposed feature-descriptor constraint increases by 19.429% and 4.857% on average with respect to the correspondences computed with heat kernel signature and wave kernel signature constraints, respectively. The fused feature descriptor presented in this article is applicable to computing correspondences between isometric or approximately isometric 3D models. Compared with descriptors that use only heat kernel or wave kernel signatures, the fused feature descriptor obtains more accurate correspondences.
      Keywords: isometric model; feature descriptor; heat kernel signatures; wave kernel signatures; Laplace operator; shape correspondence
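      Both signatures are functions of the Laplace–Beltrami eigenvalues and eigenvectors, so once the eigendecomposition is available they can be computed per vertex and combined into a fused descriptor. The numpy sketch below uses the standard HKS and WKS formulas; the time/energy sampling and the simple concatenation used for fusion are assumptions, not necessarily the weighting adopted in the paper.

```python
import numpy as np

def heat_kernel_signature(evals, evecs, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2  (standard formula).

    evals: (k,), evecs: (n_vertices, k), times: (T,) -> returns (n_vertices, T)."""
    return (evecs ** 2) @ np.exp(-np.outer(evals, times))

def wave_kernel_signature(evals, evecs, energies, sigma):
    """WKS(x, e) ~ sum_i phi_i(x)^2 * exp(-(e - log lambda_i)^2 / (2 sigma^2)), normalized."""
    log_ev = np.log(np.maximum(evals, 1e-12))                                    # (k,)
    gauss = np.exp(-(energies[None, :] - log_ev[:, None]) ** 2 / (2 * sigma ** 2))  # (k, E)
    wks = (evecs ** 2) @ gauss                                                    # (n_vertices, E)
    return wks / np.maximum(gauss.sum(axis=0), 1e-12)                             # per-energy normalization

def fused_descriptor(evals, evecs, times, energies, sigma):
    """Assumed fusion: concatenation of HKS and WKS, each row scaled to unit norm."""
    hks = heat_kernel_signature(evals, evecs, times)
    wks = wave_kernel_signature(evals, evecs, energies, sigma)
    hks /= np.linalg.norm(hks, axis=1, keepdims=True) + 1e-12
    wks /= np.linalg.norm(wks, axis=1, keepdims=True) + 1e-12
    return np.hstack([hks, wks])
```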
    • Liu Zhi, Xiao Kai, Chen Xiaoyan, Jiang Ping, Xie Jin
      Vol. 21, Issue 5, Pages: 636-645(2016) DOI: 10.11834/jig.20160511
      Abstract: This study presents a new weighted rational spline interpolation surface based on the values and partial derivatives of the functions being interpolated and discusses its properties, including the local constraint control of the surfaces. In one construction, a rational cubic interpolation spline is built in one parametric direction and then extended to a bivariate rational interpolation spline surface in the other direction; in the other construction, a second bivariate rational interpolation spline surface is obtained in the reverse order. Finally, a new weighted rational interpolation spline surface is generated by weighting the two different interpolation surfaces. This study discusses several properties of the interpolating function, such as the bases of the interpolation, boundedness, the properties of the integral weighted coefficients, and the error between the interpolating function and the function being interpolated. By selecting suitable parameters and weight coefficients, local constraint control in the interpolating region can be achieved without changing the interpolation data. Experimental results illustrate that the new weighted rational interpolation spline surface possesses good constraint-control properties.
      Keywords: rational interpolation spline; local control; integral weighted coefficient; weight
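      The final surface is a convex combination of the two directionally constructed surfaces. A minimal formulation of that weighting, under the assumption of a single scalar weight, could be written as follows; the exact parameterization and weight functions in the paper may differ.

```latex
% Illustrative form of the weighted rational interpolation surface (assumed notation):
% P_1(u,v) is the surface built first in one parametric direction, P_2(u,v) the one
% built in the reverse order, and \omega is the weight coefficient.
S(u,v) \;=\; \omega\, P_{1}(u,v) \;+\; (1-\omega)\, P_{2}(u,v), \qquad 0 \le \omega \le 1 .
```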
    • Lin Mudan, Yang Feng, Liang Shujun, Zhao Haisheng, Huang Zheng, Cui Kai
      Vol. 21, Issue 5, Pages: 646-656(2016) DOI: 10.11834/jig.20160512
      Abstract: This paper presents an improved method based on sequential learning and prior shape information for detecting the external elastic membrane (EEM) in intravascular ultrasound (IVUS) images, to overcome interference factors such as calcified plaque and acoustic shadow. Multi-class multi-scale stacked sequential learning is applied to divide an IVUS image into seven tissue types. Subsequently, critical points on the external elastic membrane border are selected based on the classification results and the prior shape information of the vessel. Finally, a snake model combined with the gradient and phase information of the IVUS images is used to obtain the final external elastic membrane border. In the experiments, the algorithm was applied to 153 typical IVUS images from 22 in vivo clinical IVUS sequences. Statistical results show that the average JACC measure of the EEM borders detected by the algorithm is 88.5%; thus, the algorithm can meet clinical demands, and its performance is better than those of algorithms reported in recent studies in China. The proposed automatic algorithm is simple and effective. Compared with existing Chinese algorithms, it has improved capability to recognize calcified plaque, fibrous plaque, and acoustic shadow and can be applied to IVUS images with calcified plaque, fibrous plaque, or catheter eccentricity.
      Keywords: intravascular ultrasound; external elastic membrane; sequential learning; shape information
    • Li Xuan, Liu Yunqing, Bian Chunjiang, Mao Bonian
      Vol. 21, Issue 5, Pages: 657-664(2016) DOI: 10.11834/jig.20160513
      Abstract: Automatic inshore ship detection from remote sensing imagery has many important applications, such as ship change detection and harbor dynamic surveillance. Stable inshore ship detection performance is vital to the analysis of ship changes and the effectiveness of harbor surveillance. Ship detection using optical remote sensing images has been a hot research topic. However, detecting inshore ships with traditional area-based methods is difficult because the gray scale and texture characteristics of inshore ships are similar to those of the shore. Therefore, we propose a method of inshore ship detection using local salient characteristics. First, a binary image is obtained by water-land segmentation preprocessing. Then, line segments are extracted from the binary image as the local salient features used to detect ship targets. Next, the line segment extraction result is combined with the ship-bow detection result to generate the ship detection model. Finally, the ship targets are obtained by calculating the ship geometric size and analyzing the environmental information. Experimental results indicate that the proposed inshore ship detection method is effective and adapts robustly to complex backgrounds and mooring orientations. The detection result is more accurate than those of traditional methods, and the recognition rate is 100%. Under complex background environments and other interferences, this method maintains a high recognition rate with high robustness and adaptability.
      Keywords: optical remote sensing images; inshore ship; target detection; line segment; local salient characteristics
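      The method treats straight line segments in the binarized water-land image as the local salient feature of ship hulls. A small OpenCV sketch of that extraction step is shown below; the Otsu binarization, Canny edges, and probabilistic Hough parameters are assumed stand-ins for the paper's own segmentation and line detector.

```python
import cv2
import numpy as np

def extract_candidate_segments(gray, min_len=40, max_gap=5):
    """Binarize a sea/land scene and extract straight line segments (illustrative sketch)."""
    # Water/land segmentation stand-in: Otsu thresholding of the gray-scale image.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Local salient feature: straight line segments along candidate hull edges.
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min_len, maxLineGap=max_gap)
    segments = [] if lines is None else [tuple(l[0]) for l in lines]

    # Simple geometric screening stand-in: keep segments whose length is plausible for a hull.
    def length(s):
        x1, y1, x2, y2 = s
        return np.hypot(x2 - x1, y2 - y1)
    return [s for s in segments if min_len <= length(s) <= 10 * min_len]
```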
    • Shang Ren, Hei Baoqin, Li Shengyang, Qin Bangyong, Zhang Jiuxing
      Vol. 21, Issue 5, Pages: 665-673(2016) DOI: 10.11834/jig.20160514
      Abstract: The clarity of remote sensing images is one of the typical indicators for evaluating the image quality of an Earth-observation payload. The sharpness of an imaging payload can be determined from the changing gray values at the edges of objects. Currently, however, studies on in-orbit tests of Earth-observation payloads and quality evaluation of remote sensing images focus mostly on whether the clarity values of the remote sensing images reach a certain standard; very few of these studies further analyze the factors influencing the clarity of a remote sensing image. To address this issue, we assessed the clarity values of short-wave infrared (SWI) images from the Tiangong-1 hyperspectral payload over a long time series and analyzed the factors influencing clarity. First, we calculated clarity using an improved clarity algorithm based on image-edge detection. Then, the corresponding Tiangong-1 hyperspectral payload engineering parameters of the remote sensing images were obtained. Finally, association rule mining was performed with the classic Apriori algorithm on the clarity values of the SWI images of the Tiangong-1 hyperspectral payload over the long time series and their corresponding engineering parameters. Association rules were regarded as candidate strong association rules when their support and confidence measures exceeded the set minimum thresholds, and only the candidate strong association rules further verified by the lift and cosine interest measures were adopted. The factors influencing the clarity of the SWI images of the Tiangong-1 hyperspectral payload were analyzed using the strong association rules and three-dimensional scatter plots. After testing a considerable amount of data, the results show that the clarity values of the SWI images of the Tiangong-1 hyperspectral payload are good. The factors influencing the clarity of these images include solar angle, integration time, and platform stability (including the stability of the pitch, yaw, and rolling angles). Solar angle is positively correlated with clarity: when the solar angle is larger than 65°, the clarity values tend to be higher, and when the solar angle is less than 30°, the clarity values tend to be lower. When the solar angle is between 30° and 65°, platform stability is positively correlated with clarity. Integration time has a negative relationship with remote sensing image clarity. The method of analyzing the factors influencing the clarity of the SWI images of the Tiangong-1 hyperspectral payload over a long time series is feasible and effective, and the relationships between the clarity values of these images and their corresponding engineering parameters have been identified. The scope of engineering parameters should be expanded to identify quantitative and more significant relationships. Furthermore, the association rule mining method could be helpful in analyzing the influencing factors for other remote sensing image evaluation indicators.
      Keywords: clarity; long time series; association rules; hyperspectral images; engineering parameters; influencing factors
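      Rule screening in this analysis relies on four standard measures: support, confidence, lift, and cosine. The small Python sketch below computes them for a single rule A => B over a set of transactions; the toy discretized parameters and clarity levels are made up purely for illustration.

```python
import math

def rule_measures(transactions, A, B):
    """Support, confidence, lift, and cosine for the rule A => B (standard definitions)."""
    n = len(transactions)
    A, B = set(A), set(B)
    nA  = sum(1 for t in transactions if A <= set(t))
    nB  = sum(1 for t in transactions if B <= set(t))
    nAB = sum(1 for t in transactions if (A | B) <= set(t))
    support    = nAB / n
    confidence = nAB / nA if nA else 0.0
    lift       = (nAB * n) / (nA * nB) if nA and nB else 0.0
    cosine     = nAB / math.sqrt(nA * nB) if nA and nB else 0.0
    return support, confidence, lift, cosine

# Toy usage (hypothetical discretized engineering parameters => clarity level):
transactions = [
    {"solar_angle>65", "stability=high", "clarity=high"},
    {"solar_angle>65", "stability=low",  "clarity=high"},
    {"solar_angle<30", "stability=high", "clarity=low"},
    {"30<solar_angle<65", "stability=high", "clarity=high"},
]
print(rule_measures(transactions, {"solar_angle>65"}, {"clarity=high"}))
```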
    • Lin Chenxi, Zhou Yi, Wang Shixin, Liu Wenliang, Tian Ye, Zhang Yannan
      Vol. 21, Issue 5, Pages: 674-682(2016) DOI: 10.11834/jig.20160515
      Abstract: With the rapid development of the social economy and urbanization, built-up land has increased continuously, promoting the use of remote sensing images for built-up area extraction, which lays the foundation for detecting buildings and deriving indices such as building density and population density. The variogram has proven to be an effective methodology for extracting texture characterization with high precision when applied to urban built-up area extraction from high-resolution synthetic aperture radar (SAR) images. However, limited by factors such as imagery resolution and the structural features of rural built-up areas, the traditional variogram approach suffers from a high false alarm rate when extracting rural built-up areas over large extents from middle- and high-resolution SAR images. To extract rural built-up areas precisely, ensuring a high detection rate while lowering the false alarm rate, this study proposes a new method for determining the threshold based on an iterative parameter. Through the established brightness threshold, qualifying pixels are assigned a weight that increases the variogram value for pixels reaching the standard in all four directions (which can be considered built-up areas) and restrains the increase for pixels fitting only one direction or no direction (which can be considered non-built-up areas); thus, the confusion between built-up areas and their counterparts is reduced. The experiment employed a Radarsat-2 image as the data source. According to the features of the variogram, the cross-polarization images (VH and HV bands) and their principal component analysis transformation, which show a high discrepancy between built-up and non-built-up areas, are selected as input data. The first step is to determine the threshold and weight based on the iterative parameter. The second step is to extract the feature vectors of the image using the variogram. The third step is binarization with the FCM classifier. The final step is to decide whether to recalculate according to the extraction precision. The results indicate that the mean detection rates of the improved variogram method on experimental areas 1 and 2 are 91.58% and 90.11%, respectively, whereas the mean false alarm rates are 19.83% and 31.87%, respectively. Compared with the traditional method and the minimum-distance method, the proposed method not only maintains a relatively high detection rate but also decreases the false alarm rate significantly. The precision of rural built-up area extraction basically satisfies the requirements of practical application. However, the method still has several drawbacks. First, an "edge effect", in which the extracted built-up areas are wider than their actual boundaries, accompanies almost every part of the extracted built-up areas. Meanwhile, the methodology cannot distinguish regions sharing similar texture features with built-up areas, such as bright thread-like roads and blocky arable land, from genuine built-up areas, leading to false detections. To solve these problems, a method to restrain the high variogram values adjacent to the boundaries of built-up areas and additional preprocessing are needed to ensure that the selected texture features characterize only the built-up areas.
      Keywords: texture analysis; variogram; SAR image; rural built-up area; iterative parameter
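      The texture feature at the heart of the method is a variogram evaluated along several directions within each pixel's local window. A numpy sketch of a four-direction, single-lag variogram follows; the window size, lag, and the way the four directional values would be combined into a feature vector are assumptions made only for illustration.

```python
import numpy as np

# Four sampling directions (assumed): horizontal, vertical, and the two diagonals.
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def _pair_views(win, dy, dx):
    """Aligned views z(p) and z(p + (dy, dx)) over all valid pixels p of the window."""
    H, W = win.shape
    y0, y1 = max(0, -dy), min(H, H - dy)
    x0, x1 = max(0, -dx), min(W, W - dx)
    return win[y0:y1, x0:x1], win[y0 + dy:y1 + dy, x0 + dx:x1 + dx]

def directional_variogram(img, y, x, half=7, lag=2):
    """Single-lag variogram gamma_d = 0.5 * mean((z(p) - z(p + lag*d))^2) in each of the
    four directions, computed on the local window around pixel (y, x). Illustrative sketch;
    window size and lag are assumed, not the paper's values."""
    win = img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    gammas = []
    for dy, dx in DIRS:
        a, b = _pair_views(win, lag * dy, lag * dx)
        gammas.append(0.5 * np.mean((a - b) ** 2))
    return np.array(gammas)   # high values in all four directions suggest built-up texture
```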