Latest Issue

    Vol. 21, Issue 12, 2016
    • Survey on anti-forensics techniques of digital image

      Wang Wei, Zeng Feng, Tang Min, Chen Junjie, Li Hongjun
      Vol. 21, Issue 12, Pages: 1563(2016) DOI: 10.11834/jig.20161201
      Abstract: At present, digital images are suffering a serious credibility crisis. While existing passive forensic techniques can successfully detect many standard image manipulations, most do not account for the possibility that a forger with advanced knowledge of signal processing may attempt to disguise the forgery. As a result, researchers have begun studying anti-forensic operations capable of deceiving forensic techniques. This study reviews state-of-the-art image anti-forensic techniques (including the reasons for their rise, implementation principles, technical characteristics, and application prospects) and summarizes the challenges and opportunities associated with them. Most studies on passive forensics of image forgeries have focused on identifying different tampering traces and inherent characteristics. The present study examines these characteristics to review and compare previous work on image anti-forensics. Based on the forensic characteristics they target, image anti-forensic techniques are grouped into three research topics: hiding left traces, inherent-characteristic forgery, and anti-forensic detection. The related research progress and challenges are surveyed for each topic. The main problems in current research on image anti-forensics are pointed out, and further research directions are presented.
      Keywords: anti-forensics; passive forensics; image forensics; image forgery; anti-forensics detection
    • Ma Jinxiang, Fan Xinnan, Wu Zhixiang, Zhang Xuewu, Shi Pengfei, Wang Gengren
      Vol. 21, Issue 12, Pages: 1574(2016) DOI: 10.11834/jig.20161202
      Abstract: Extracting clear crack images from underwater dam images is a fundamental step in digital imaging applications and a crucial prerequisite for understanding real underwater dam scenes. However, complex underwater conditions, such as uneven illumination, low signal-to-noise ratio, and low contrast, make the task more challenging than in other settings. We propose a self-adaptive image enhancement algorithm for underwater dam crack images based on an improved dark channel prior. Global illumination balancing and noise suppression are conducted to deal with uneven illumination while maintaining the original image textures; underwater dam images under different illumination conditions correspond to different brightness-balance layers. An accurate estimate of the denoised image is obtained on the basis of the improved dark channel prior and guided filtering. The proposed algorithm improves the original dark-channel-prior method in terms of optical transmittance, which is determined by the absolute difference between the global atmospheric light and the dark channel values: if this difference exceeds a threshold, the pixel lies in the dark zone; otherwise it lies in the bright zone, and the transmittance formula varies accordingly. A self-adaptive parameter-calculation method based on the 3σ principle is used to compute the image stretch points and enhance the image. A series of evaluation measures, namely mean, variance, peak signal-to-noise ratio (PSNR), contrast, and information entropy, is used to assess the enhancement algorithms. The proposed algorithm is tested on underwater dam crack images under different uneven illumination conditions. Two typical images, a medium-sized crack image and a small crack image, are selected as research objects, and the enhancement effects are compared with those of other algorithms such as histogram equalization, homomorphic filtering, multiscale retinex, and multiscale retinex with color restoration. To test robustness, salt-and-pepper noise and Gaussian noise of different levels are added to the original images to evaluate the anti-interference ability of the proposed algorithm. The proposed algorithm achieves the best enhancement among the compared methods and shows strong robustness against these specific interferences. Its PSNR values on the two original images are 42.77 and 41.49, higher than those of the other algorithms. Simulation results show that the proposed algorithm effectively suppresses noise, enhances the quality of underwater dam crack images, adapts well to different brightness conditions, and is feasible and efficient for underwater dam crack image enhancement.
      Keywords: image processing; image enhancement; dark channel prior; illumination balance processing; self-adaptive
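      A minimal sketch, not the authors' exact pipeline, of the dark-channel-prior machinery this abstract builds on; the window size `patch` and weight `omega` are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum filter.
    return minimum_filter(img.min(axis=2), size=patch)

def transmittance(img, patch=15, omega=0.95):
    # Atmospheric light A is taken from the brightest dark-channel pixels;
    # then t = 1 - omega * dark_channel(img / A), as in the standard prior.
    dark = dark_channel(img, patch)
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]   # top 0.1 percent
    A = img.reshape(-1, 3)[idx].max(axis=0)
    return 1.0 - omega * dark_channel(img / A, patch)

img = np.random.rand(64, 64, 3)     # stand-in for a float RGB image in [0, 1]
t = transmittance(img)
```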
    • Visibility restoration algorithm of dust-degraded images

      Zhi Ning, Mao Shanjun, Li Mei
      Vol. 21, Issue 12, Pages: 1585(2016) DOI: 10.11834/jig.20161203
      Abstract: Images captured during sandstorms frequently exhibit undesirable color cast and reduced contrast, and are therefore unsuitable for object identification and further image processing. To address these problems, a new visibility restoration algorithm based on color adjustment and contrast enhancement is proposed. Restoring dust-degraded images involves two main problems: color cast and contrast enhancement. Through analysis of the color histograms of a large number of dust-degraded images, we summarize three characteristics: aggregation, order, and deviation. We then adopt a normal distribution model to describe each color channel. Considering the difference between degraded and clear images, we calculate an extension coefficient from the green-channel histogram and adjust the value ranges of the image accordingly. These steps preliminarily eliminate the color cast, but the contrast remains relatively low. To further deal with overall dimness, low contrast, and noise, an improved enhancement algorithm based on singular value decomposition is used to obtain the final result. To verify the effectiveness of the proposed algorithm, four other methods are compared with it. The proposed method effectively corrects the color cast, improves the contrast of dust images, and enhances the overall visual effect. By exploiting the three-channel color histogram distribution of dust-degraded images, it achieves fast and efficient color correction, and the singular value information of the image in the frequency domain is used to enhance contrast. As verified by extensive visibility restoration experiments, the method handles different levels of dust degradation and thus has strong applicability.
      Keywords: dust images; color cast; Gaussian model; singular value decomposition
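      A simplified sketch of the color-correction idea, assuming each channel is roughly Gaussian and using the green channel as the reference, as the abstract describes; the exact extension-coefficient formula in the paper may differ:

```python
import numpy as np

def correct_color_cast(img):
    # Model each channel's histogram as Gaussian (the abstract's normal
    # distribution model) and align the red/blue statistics to the green
    # channel, which is least affected by the dust-induced cast.
    mu_g, sigma_g = img[..., 1].mean(), img[..., 1].std()
    out = np.empty_like(img)
    for c in range(3):
        mu, sigma = img[..., c].mean(), img[..., c].std()
        out[..., c] = (img[..., c] - mu) * (sigma_g / max(sigma, 1e-6)) + mu_g
    return np.clip(out, 0.0, 1.0)

img = np.random.rand(64, 64, 3)
img[..., 2] *= 0.4                      # simulate a blue-deficient dust cast
corrected = correct_color_cast(img)
```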
    • Binary quantized SIFT feature descriptors for optimized image stitching

      Li Qian, Jiang Zetao
      Vol. 21, Issue 12, Pages: 1593(2016) DOI: 10.11834/jig.20161204
      Abstract: A novel binary local feature descriptor based on SIFT is proposed to avoid the high computational and memory costs of SIFT and the low discriminative power and robustness of binary descriptors such as BRIEF, ORB, BRISK, and FREAK. The traditional SIFT feature space and the distribution of its feature vectors are analyzed theoretically and experimentally, and the SIFT algorithm is improved by combining the advantages of binary descriptors. Unlike traditional binary descriptors, the components of each SIFT feature vector are sorted by magnitude, and the median value is selected as the quantization threshold to transform the high-dimensional floating-point SIFT vector into a bit vector. Similarity between key points is evaluated by the Hamming distance instead of the Euclidean distance to improve matching efficiency. The binary descriptor is then divided into two parts that are matched separately at the matching stage, eliminating invalid matching feature points and further reducing matching time. Extensive experiments on large databases demonstrate the strong discriminative power and robustness of the quantization method. The proposed binary feature descriptor achieves low memory cost and high matching efficiency while maintaining strong discriminative power and robustness, thereby addressing the computational complexity of SIFT and the weaknesses of binary descriptors. Moreover, an average of 77.5% of invalid matching key points is eliminated, reducing the number of RANSAC iterations. The proposed quantization algorithm can be used for fast image matching and fast image stitching to improve their efficiency.
      Keywords: SIFT (scale-invariant feature transform); binary feature descriptors; robustness; discriminative power; fast image stitching
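      The median-threshold quantization and Hamming matching described above can be sketched directly; the random stand-in descriptors are illustrative:

```python
import numpy as np

def binarize_sift(desc):
    # Quantize each 128-D SIFT descriptor with its own median: components
    # above the median map to 1, the rest to 0, giving a 128-bit signature.
    med = np.median(desc, axis=-1, keepdims=True)
    return (desc > med).astype(np.uint8)

def hamming(a, b):
    # Hamming distance between two binary descriptors (replaces Euclidean).
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
d1, d2 = rng.random(128), rng.random(128)   # stand-ins for real SIFT vectors
b1, b2 = binarize_sift(d1), binarize_sift(d2)
print(hamming(b1, b2))
```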
    • Yun Yao, Yang Jianwei, Zhang Liang
      Vol. 21, Issue 12, Pages: 1602(2016) DOI: 10.11834/jig.20161205
      Abstract: Global image registration, which aims to find a transformation aligning two images, can be approximated by estimating the parameters of affine deformations. Some existing methods are inapplicable to binary images, and others are computationally expensive. In this paper, we modify the definition of the image centroid and propose the concept of a generalized centroid, defined by a modified repeated integral; the traditional centroid is a special case of it. To maintain the affine deformation relation, we present the condition that the generalized centroid must satisfy. We then propose an algorithm that estimates the parameters of affine deformations by combining generalized centroids. The basic idea is to find three pairs of corresponding points in the original image and the deformed image, and to use these pairs to establish equations that determine the affine parameters. The proposed centroids are applicable not only to gray images but also to binary images. Compared with the cross-weighted moment method, the proposed method requires less computation while achieving a recovery effect that is not significantly different. Compared with the method of constructing a nonlinear equation group from image moments, the proposed method estimates the affine parameters well. Overall, the method applies to both gray and binary images, recovers deformations better, and requires less computation.
      Keywords: image registration; generalized centroids; affine transformation; parametric estimation of affine transformations; modified repeated integral
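      The generalized centroids themselves require the paper's modified repeated integral, but once three corresponding point pairs are in hand, the affine parameters follow from a linear system; a sketch:

```python
import numpy as np

def affine_from_3_points(src, dst):
    # Solve [x', y'] = A @ [x, y] + t from three point correspondences.
    # Six unknowns (a11, a12, a21, a22, tx, ty); three pairs give six equations.
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        M[2 * i]     = [x, y, 0, 0, 1, 0]
        M[2 * i + 1] = [0, 0, x, y, 0, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    a11, a12, a21, a22, tx, ty = np.linalg.solve(M, b)
    return np.array([[a11, a12], [a21, a22]]), np.array([tx, ty])

src = [(0, 0), (1, 0), (0, 1)]
dst = [(1, 2), (3, 2), (1, 5)]        # translation plus axis scaling
A, t = affine_from_3_points(src, dst)
```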
    • Image defog algorithm based on variogram and morphological filter

      Liu Wanjun, Zhao Qingguo, Qu Haicheng
      Vol. 21, Issue 12, Pages: 1610(2016) DOI: 10.11834/jig.20161206
      Abstract: To address the blurring and low contrast of images captured by outdoor visual systems in bad weather, a new fast image defogging algorithm called IDA_VAM is presented, based on the variogram and a multiple-structure-element morphological open-and-close filter. The algorithm first uses the variogram to obtain an accurate atmospheric light value, and then applies the morphological open-and-close filter with multiple structure elements to the minimum channel map to obtain a rough scattering map. The transmittance map is estimated and corrected, and a bilateral filter is used for smoothing. The recovered image is obtained through the physical model, and color adjustment yields a bright, clear, fog-free image. Compared with other defogging algorithms, the proposed method effectively removes fog from images containing close-range content, strong perspective, and bright areas. The information entropy increases by 38.0%, the contrast by 34.1%, and the definition by 134.5% relative to the compared methods, with a better restoration effect that yields a more natural, bright, haze-free image. Extensive experimental results show that the algorithm effectively recovers the color and definition of foggy images with close-range content, perspective, and bright areas in non-complex scenes. Clear, natural fog-free images with highly visible details are obtained, and the time complexity of IDA_VAM is linear in the number of image pixels, thereby meeting real-time requirements.
      Keywords: image defogging; variogram; multiple-structure-element morphological open-and-close filter; bilateral filter; color adjustment
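      A rough sketch of the minimum-channel open-and-close filtering step, with scipy's grey-scale morphology as a stand-in for the paper's multiple-structure-element filter; the sizes and the transmittance constant are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def rough_scatter_map(img, sizes=(3, 7, 11)):
    # Open-close filtering of the minimum channel map with several
    # structuring-element sizes, averaged; a crude stand-in for the paper's
    # multiple-structure-element morphological open-and-close filter.
    m = img.min(axis=2)
    smoothed = [grey_closing(grey_opening(m, size=s), size=s) for s in sizes]
    return np.mean(smoothed, axis=0)

img = np.random.rand(64, 64, 3)
V = rough_scatter_map(img)
t = np.clip(1.0 - 0.95 * V, 0.05, 1.0)   # illustrative transmittance estimate
```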
    • Pavement crack detection algorithm for linear array CCD images

      Jia Di, Song Weidong, Dai Jiguang, Zhu Hong
      Vol. 21, Issue 12, Pages: 1623(2016) DOI: 10.11834/jig.20161207
      Abstract: Grading road cracks is one of the basic tasks in highway maintenance. At present, relevant departments mainly use line-array cameras for road image acquisition. Because the recognition of road cracks is affected by many factors (such as shadows of trees and vehicles, illumination changes, grease, branches, straw, and various types of garbage), the accuracy of automatic crack identification is reduced, and manual methods are still used to evaluate road grade. In this paper, a new method for identifying cracks in road images is proposed. Given the large size of the collected images and the problem of uneven illumination, each image is first divided into blocks, and each block is pre-processed with the CV model to obtain preliminary segmentation results. Cracks in linear-array CCD images are identified by the following features: 1) cracks occupy a small portion of a patch; 2) cracks have poor continuity in these images; 3) the ratio of crack width to length is small; and 4) segments of the same crack share a consistent trend. To exploit the last two characteristics, an ellipse-fitting method is used to calculate the directions of the preliminary detection results, and the candidate regions are divided into four direction categories. In each category, the centroid of each region is computed and a vector table between the centroids is established. A recursive algorithm is designed to evaluate collinearity and produce the crack-detection results, and accurate cracks are obtained by iterative refinement in the original image. A total of 100 images containing road cracks are selected from 2 000 road images; according to the serial numbers, 5 groups are taken at equal intervals from the images containing no cracks, so that each group of the data set contains 200 images. The performance of the algorithm is evaluated using classification statistics: true positives, false positives, false negatives, and true negatives reached more than 95%, and the execution time for crack detection and extraction is approximately 1 minute. Experimental results show that the algorithm not only identifies cracks effectively but also overcomes interference from various factors, giving it potential for practical implementation.
      Keywords: pavement crack detection; ellipse fitting; centroid; interference; level set
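      The direction of a candidate region can be computed from second-order central moments, which is equivalent to fitting an ellipse; a sketch (binning into the four direction categories is omitted):

```python
import numpy as np

def region_orientation(mask):
    # Orientation of the ellipse fitted to a binary region via second-order
    # central moments; used here to group candidate crack regions by direction.
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # radians in (-pi/2, pi/2]

mask = np.zeros((32, 32), dtype=bool)
mask[np.arange(32), np.arange(32)] = True            # a 45-degree "crack"
angle = np.degrees(region_orientation(mask))         # close to 45
```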
    • Salient object detection based on background learning

      Xiang Dao, Hou Saihui, Wang Zilei
      Vol. 21, Issue 12, Pages: 1634(2016) DOI: 10.11834/jig.20161208
      Abstract: Salient object detection aims to identify the spatial locations and scales of the most attention-grabbing objects in an image, which is helpful in various computer vision tasks such as object recognition, adaptive image display, and object detection. Different from eye-fixation saliency prediction, salient object detection emphasizes the saliency and wholeness of the detected objects; dealing with cluttered backgrounds and the diversity of object parts within an image has therefore always been a major challenge. Bottom-up visual saliency is commonly characterized by the contrast of primitive image features at the pixel or superpixel level, because contrast is the predominant factor in human cognition. In the literature, local or global contrast is usually adopted to derive the saliency map directly, where the contrast of a region is calculated by comparing its features with those of reference regions. However, such methods may fail to detect whole salient objects, especially in complicated images; we attribute this failure to unreasonable settings of the contrast reference regions. To enhance the wholeness of detected salient objects, an explicit background-driven method is proposed in which the background prior is comprehensively utilized in saliency estimation and optimization. To obtain the background regions of an image for contrast estimation, deep convolutional neural networks are first used to learn a background map representing the likelihood of each region belonging to the background; background regions are then segmented from this map with a simple thresholding strategy and used as references for region contrast computation. To enhance consistency between foreground and background regions, enhanced graph-based optimization is adopted to propagate saliency along the graph. Besides the conventional local connections of a k-regular graph, prior connections to virtual nodes and non-local connections between nodes belonging to background regions are added to the graph to embed the learned background prior. To verify the effectiveness of the proposed method, comprehensive experiments were conducted on four public saliency detection datasets, namely ASD, SED, SOD, and THUS-10000, and the results were compared with those of nine state-of-the-art methods using four indicators (precision, recall, F-measure, and MAE). The average scores of our method were 0.873 6, 0.795 2, 0.844 1, and 0.112 2, respectively, outperforming the other popular methods; the best results on all datasets were achieved by the proposed method, demonstrating its effectiveness and superiority. Restricting the contrast reference regions to the background significantly improves contrast-based saliency estimation; the background regions of an image can be effectively learned by convolutional neural networks; and enhanced graph-based optimization can fuse the saliency confidences of different parts of the same object to discover the whole salient object, generating a more consistent and background-suppressed saliency map. Experimental results show that the proposed method can be successfully used for salient object detection and object segmentation in natural images.
      Keywords: salient object detection; background learning; background prior; convolutional neural networks; enhanced graph-based optimization
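      A toy sketch of the core idea, using a learned background likelihood to restrict the contrast reference regions; the feature choice and threshold are illustrative assumptions, and the CNN and graph optimization are out of scope here:

```python
import numpy as np

def saliency_from_background(features, bg_prob, thresh=0.5):
    # features: (N, d) mean feature per superpixel; bg_prob: (N,) learned
    # background likelihood. Regions confidently labeled background act as
    # the contrast references; saliency = mean feature distance to them.
    bg = features[bg_prob > thresh]
    if len(bg) == 0:
        return np.zeros(len(features))
    d = np.linalg.norm(features[:, None, :] - bg[None, :, :], axis=2)
    s = d.mean(axis=1)
    return (s - s.min()) / (s.max() - s.min() + 1e-9)   # scale to [0, 1]

features = np.random.rand(50, 3)        # e.g., mean Lab color per region
bg_prob = np.random.rand(50)            # stand-in for the CNN's output map
sal = saliency_from_background(features, bg_prob)
```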
    • Dai Rong, Xiao Changyan
      Vol. 21, Issue 12, Pages: 1644(2016) DOI: 10.11834/jig.20161209
      Abstract: Batch counting of thin sheet products such as paper is widely applied in industry. To solve the problem of machine-vision quantity measurement for very thin paper stacks, a robust image counting algorithm based on a global period constraint and local pattern correlation is presented. 1D profiles are extracted along the stack-height direction and denoised with Fourier spectrum analysis and a comb filter to preserve the useful periodic signal. Each candidate sheet is located with a traditional peak-finding algorithm. An optimal peak template is constructed, and an improved normalized cross-correlation function is presented to calculate, by local matching, the correlation coefficient between the template and the original signal. This reduces false detections from complex factors such as rugged edges, varying thickness and gaps, and irregular arrangement. The collinearity and similar wave shape of the signal are utilized to further suppress clutter, and the final count is obtained from optimal statistics over the counts of different profiles. For comparison, the proposed method and several traditional algorithms were used in counting experiments with different types of paper sheets with thickness varying from 0.08 mm to 0.23 mm. The proposed algorithm eliminates interference more effectively than the other methods, with comparatively low miss and false alarm rates. It achieves very high detection accuracy for paper sheets thicker than 0.08 mm and has good real-time performance, making it suitable for in-line industrial applications with high accuracy requirements.
      Keywords: sheet counting; spectrum analysis; template matching; computer vision
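      A condensed sketch of the period-constrained counting idea, with an FFT band-pass standing in for the comb filter and plain peak finding in place of the template-correlation refinement; both substitutions are simplifications:

```python
import numpy as np
from scipy.signal import find_peaks

def count_sheets(profile, period_hint):
    # 1) Keep only spectral components near the stack's known period (a crude
    #    comb-filter stand-in), 2) find candidate peaks, one per sheet edge.
    f = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(len(profile))
    f0 = 1.0 / period_hint
    keep = np.abs(freqs - f0) < 0.5 * f0       # pass band around the period
    keep |= freqs == 0
    clean = np.fft.irfft(f * keep, n=len(profile))
    peaks, _ = find_peaks(clean, distance=int(0.6 * period_hint))
    return len(peaks), peaks

t = np.arange(1000)
profile = np.sin(2 * np.pi * t / 10) + 0.3 * np.random.randn(1000)
n, peaks = count_sheets(profile, period_hint=10)   # n should be near 100
```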
    • Video key frame selection based on mapping and clustering

      Wang Ronggui, Hu Jiangen, Yang Juan, Xue Lixia, Zhang Qingyang
      Vol. 21, Issue 12, Pages: 1652(2016) DOI: 10.11834/jig.20161210
      Abstract: Increasing public awareness of and interest in access to visual information drives the creation of new technologies for representing, indexing, and retrieving multimedia data. For large image and video libraries, efficient algorithms are necessary to enable fast browsing and access, and video abstraction plays an important role in multimedia data processing and computer vision. In clustering-based approaches, video frames are clustered on global or local image features and representative key frames are obtained; however, most existing methods need the number of clusters in advance, and adaptive methods are inefficient at finding cluster centers. This paper presents a method for video key frame selection based on mapping and clustering. The differences between images are used to map each image to a corresponding point in 2D space, and the relative positions and field densities of the points are used to cluster them. Based on the classification results, a representative frame set is selected to constitute the video summary. The Olivetti face database and the Open Video database were used to test the proposed algorithm. The video summary results showed a precision of 66% and a recall of 74%. The value was 11%. Experimental results show that the proposed method can effectively identify image categories and can thus be used to quickly obtain the key frames of a video.
      Keywords: mapping; clustering; key frame; video summary; image density
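      A sketch of the mapping step, here classical multidimensional scaling over pairwise frame differences, with a simple neighbor-count density in place of the paper's field density; both substitutions are assumptions for illustration:

```python
import numpy as np

def mds_2d(D):
    # Classical multidimensional scaling: embed items in 2D so that their
    # Euclidean distances approximate the given difference matrix D.
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double centering
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:2]                # two largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

def local_density(pts, radius):
    # Neighbor count within `radius`; high-density points far from denser
    # points are natural cluster centers (key-frame candidates).
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    return (d < radius).sum(axis=1) - 1

frames = np.random.rand(30, 256)                 # stand-in frame histograms
D = np.linalg.norm(frames[:, None] - frames[None, :], axis=2)
pts = mds_2d(D)
rho = local_density(pts, radius=np.median(D) / 4)
```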
    • Robust visual tracking via fast deep learning

      Dai Bo, Hou Zhiqiang, Yu Wangsheng, Hu Dan, Fan Shunyi
      Vol. 21, Issue 12, Pages: 1662(2016) DOI: 10.11834/jig.20161211
      Abstract: Deep learning-based trackers usually achieve high tracking precision and strong adaptability in different scenarios. However, because the number of parameters is large and fine-tuning is challenging, their time complexity is high. To improve efficiency, we propose a tracker based on fast deep learning through the construction of a new network with less redundancy. The feature extractor plays the most important role in a visual tracking system; based on deep learning theory, we build a deep neural network to describe the essential features of images. Fast deep learning is achieved by restricting the network size, and with the help of a GPU (graphics processing unit), the time complexity of network training is reduced to a large extent. Under a particle filter framework, the proposed method combines the deep feature extractor with a support vector machine scorer to distinguish the target from the background. The condensed network structure reduces model complexity; compared with other deep learning-based trackers, the proposed method achieves higher efficiency, with a frame rate of 22 frames per second on average. Experiments on an open tracking benchmark demonstrate that both the robustness and the timeliness of the proposed tracker are promising when the target's appearance changes through translation, rotation, or scaling, or under interference such as illumination change, occlusion, and cluttered background. However, the tracker is not robust enough when the target moves fast, or when motion blur or similar objects are present.
      Keywords: visual tracking; deep learning; support vector machine (SVM); particle filter; autoencoder
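      A generic sampling-importance-resampling particle filter skeleton, with a toy scoring function standing in for the autoencoder-plus-SVM scorer; this sketches the framework only, not the authors' network:

```python
import numpy as np

def particle_filter_step(particles, weights, score_fn, motion_std=4.0):
    # One tracking step: resample particles by weight, diffuse them with a
    # random-walk motion model, then re-score each candidate state.
    n = len(particles)
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx] + np.random.randn(n, 2) * motion_std
    scores = np.array([score_fn(p) for p in particles])
    weights = np.exp(scores - scores.max())        # softmax-style weighting
    return particles, weights / weights.sum()

# Toy run: particles should converge near the (static) target position.
target = np.array([50.0, 80.0])
score = lambda p: -np.linalg.norm(p - target) / 10.0
particles = np.random.rand(200, 2) * 100
weights = np.ones(200) / 200
for _ in range(20):
    particles, weights = particle_filter_step(particles, weights, score)
estimate = (particles * weights[:, None]).sum(axis=0)
```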
    • GCT transform and similarity determination of geometry shapes

      Wu Shaogen, Wang Kang, Lu Lijun, Liu Yaqin
      Vol. 21, Issue 12, Pages: 1671(2016) DOI: 10.11834/jig.20161212
      Abstract: Human perception can easily determine the similarity of two shapes, but this remains an open issue for computers. Computer vision applications need to classify shapes, determine their similarity, and produce results that correspond with human judgments, and these issues have not been fully addressed by existing shape similarity algorithms. The geometry complex transform (GCT), a method of transforming a geometric shape from its planar coordinates into a multidimensional complex vector space, is used to convert the similarity determination of two geometric shapes into that of two complex vectors. The GCT transform is information-preserving, which means the original shape of an object can be reconstructed, and it is invariant to translation, scale, and rotation. Aside from determining the similarity of two geometric shapes in correspondence with human judgments, the method can also compute the rotation angle and scale factor between shapes. Theoretical proof and experiments show that the GCT transform is feasible, effective, and efficient for the class of shapes whose centroid lies in the inner region and for which any line through the centroid intersects the contour at exactly two points. For such shapes, GCT computes similarity with the same result as a human observer.
      Keywords: GCT transform; geometric shape; complex space; feature vector; shape similarity determination
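      A toy sketch of the complex-vector view of a contour, assuming the contour is sampled at equal angular steps about the centroid; the actual GCT is richer, but the invariances can already be seen:

```python
import numpy as np

def complex_shape(contour):
    # A contour sampled at equal angular steps about its centroid, written
    # as a complex vector and normalized for translation and scale.
    z = contour[:, 0] + 1j * contour[:, 1]
    z = z - z.mean()                       # translation invariance
    return z / np.linalg.norm(z)           # scale invariance

def similarity(z1, z2):
    # |<z1, z2>| is invariant to rotation of either shape; the phase of the
    # inner product recovers the rotation taking shape 1 to shape 2.
    inner = np.vdot(z1, z2)
    return abs(inner), np.angle(inner)

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
rot = np.pi / 6
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
s, angle = similarity(complex_shape(circle), complex_shape(circle @ R.T))
# s is close to 1 (same shape); angle is close to the applied rotation.
```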
    • Parameter selection of shape-adjustable interpolation curve and surface

      Yan Lanlan, Li Shuiping
      Vol. 21, Issue 12, Pages: 1685(2016) DOI: 10.11834/jig.20161213
      Abstract: Because the parameters in most interpolation basis functions are global, the shape of the resulting interpolation curves and surfaces cannot be adjusted locally. In addition, when interpolation curves and surfaces are shape-adjustable, we must consider how to choose the parameters to obtain an ideal shape. To address this, this paper proposes a new construction method for interpolation curves and surfaces with the following advantages: it requires no reverse calculation of control points, it contains local shape parameters, it has an explicit expression, and it can reconstruct certain conic sections. We also present a shape parameter selection scheme that can be easily applied. The method is based on the expression of the classical cubic Hermite interpolation curve in Bernstein basis form. The Bernstein basis functions are replaced by a set of trigonometric basis functions that have been proven totally positive in the literature. To ensure the interpolation property, the expression of the curve is adjusted according to the endpoint properties of the trigonometric basis. The derivative vectors at the interpolation data are assigned, with parameters incorporated in them, and continuity between adjacent curve segments is considered. A new trigonometric interpolation curve is thus obtained that can be rearranged as a linear combination of the interpolation data and a set of interpolation basis functions with simple expressions. The curve contains a set of local shape parameters: changing one parameter affects the shape of only one curve segment. Adjacent curve segments are geometrically continuous, and the curve can reconstruct an ellipse. For different goals, three criteria for selecting the shape parameter are provided, each with a formula that can be used directly. The corresponding interpolation surface has properties similar to those of the curve. The parameter selection scheme turns the design of parameter-dependent interpolation curves from trial and error into a determinate procedure, yielding satisfactory results. The construction method of the interpolation basis is general and can be used to construct other basis functions with similar properties.
      Keywords: piecewise curve and surface; interpolation; trigonometric basis; shape parameter; parameter selection
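      For reference, the classical starting point named in this abstract, the cubic Hermite segment rewritten in the Bernstein basis, takes the form below; the paper then substitutes a totally positive trigonometric basis for the Bernstein functions and folds the local shape parameters into the assigned derivative vectors.

```latex
% Cubic Hermite segment on [0,1] through P_0, P_1 with end derivatives m_0, m_1,
% written in the Bernstein basis B_i^3(t) = \binom{3}{i} t^i (1-t)^{3-i}:
H(t) = P_0\,B_0^3(t)
     + \Bigl(P_0 + \tfrac{1}{3}m_0\Bigr)B_1^3(t)
     + \Bigl(P_1 - \tfrac{1}{3}m_1\Bigr)B_2^3(t)
     + P_1\,B_3^3(t)
```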
    • Yang Na, Feng Yun, Wei Ying
      Vol. 21, Issue 12, Pages: 1696(2016) DOI: 10.11834/jig.20161214
      Abstract: Given the fuzziness and unevenness of infant brain MR images, their enhancement has become an important topic. The traditional fractional differential algorithm essentially expands the difference between adjacent gray pixels, but its order fluctuates strongly in areas of intense gray change, leading to over-enhancement and the introduction of noise; as a result, the enhancement of infant brain MR images is often limited or overdone. To solve these problems, we propose an adaptive fractional differential MR image enhancement algorithm based on non-local means. The Otsu algorithm and texture roughness are used to determine the initial fractional order: the pixel gray values in the Otsu algorithm are replaced with the average gradient matrix, and the image is divided by the resulting threshold into two parts, a texture section and an edge section. Roughness describes the size and distribution of texture elements; the larger the base elements and the farther apart they are, the rougher the texture. Roughness is smaller in smooth regions of the image and relatively larger in richly textured regions. To suppress noise interference, texture information over a larger range is integrated through non-local means, which determines the final fractional order. The order of the current search block is determined from the initial order matrix, and non-local means filtering is applied to the order matrix of the search area: the weight contribution of a neighborhood block to the central block is determined by the similarity of their order matrices, large when their structures are similar and small otherwise. This effectively reduces order mutations caused by noise, edges, and other factors, and retains image details after filtering. Finally, the fractional-order filter is applied to the original image to obtain the enhanced image. Information entropy, average gradient, and spatial frequency are used as statistical indexes. Experimental results show that the algorithm has superior enhancement performance: information entropy is higher by 0.2% to 12%, average gradient by 5% to 59%, and spatial frequency by 6% to 59% than those of the compared algorithms. Only the information entropy is slightly lower than that of the enhancement algorithm based on fractional-order differentiation and wavelet decomposition, and the average gradient and spatial frequency are slightly lower than those of the adaptive fractional-order differential algorithm. The proposed algorithm enhances texture details and suppresses new noise better than the compared algorithms, applies to general fuzzy images as well, and has good application potential.
      Keywords: image enhancement; self-adaption; fractional differential; non-local means; infant brain MR image
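      The discrete fractional differential behind such enhancement is usually realized through the Grünwald-Letnikov expansion; a minimal sketch with a fixed order (the paper chooses the order adaptively per block and smooths it with non-local means):

```python
import numpy as np

def gl_coefficients(v, n):
    # Grünwald-Letnikov coefficients c_k = (-1)^k * binom(v, k), computed by
    # the recurrence c_k = c_{k-1} * (1 - (v + 1) / k); the usual discrete
    # mask behind fractional-differential image enhancement.
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (v + 1.0) / k)
    return c

def fractional_diff_1d(signal, v, n=5):
    # Convolve a 1D signal with the truncated GL mask of order v.
    return np.convolve(signal, gl_coefficients(v, n), mode="same")

row = np.sin(np.linspace(0, 4 * np.pi, 200))
enhanced = fractional_diff_1d(row, v=0.5)   # order would vary per pixel
```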
    • Classification of hyperspectral image based on double L2 sparse coding

      Liu Yang, Ji Xiaofei, Wang Yangyang
      Vol. 21, Issue 12, Pages: 1707(2016) DOI: 10.11834/jig.20161215
      Abstract: To improve the classification accuracy of hyperspectral images, double L2 sparse coding is proposed in this paper. Pre-processing is first conducted on the hyperspectral image, adequately integrating its spatial and spectral information. Based on spatial continuity, L2 sparse coding is introduced to reconstruct each pixel of the hyperspectral image: a pixel is represented by a linear combination of all pixels in its neighborhood, a representation that integrates spatial and spectral information and benefits classification. L2 sparse coding is then used to classify the hyperspectral image according to reconstruction error, and the coding coefficients are incorporated into the classification rule because they carry discriminative information. Experiments were conducted on the publicly available AVIRIS hyperspectral image database. To validate the effectiveness of the proposed method, comparisons with SVM, KNN, and L1 sparse coding were carried out on both the original and reconstructed images. The proposed method outperformed the earlier approaches and improved classification accuracy effectively, reaching 99.44%. It can thus be effectively applied to hyperspectral image classification.
      Keywords: sparse coding; L2 sparse regularization; hyperspectral image; image reconstruction; image classification
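      The L2 coding step has a closed form (ridge regression), and classification by class-wise reconstruction error can be sketched as follows; the dictionary, labels, and regularization weight are illustrative:

```python
import numpy as np

def l2_code(D, y, lam=0.1):
    # Ridge-regularized coding: argmin_a ||y - D a||^2 + lam ||a||^2 has the
    # closed-form solution below (the "L2 sparse coding" of the abstract).
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

def classify(D, labels, y, lam=0.1):
    # Assign y to the class whose training atoms reconstruct it best.
    a = l2_code(D, y, lam)
    classes = np.unique(labels)
    errs = [np.linalg.norm(y - D[:, labels == c] @ a[labels == c])
            for c in classes]
    return classes[int(np.argmin(errs))]

rng = np.random.default_rng(1)
D = rng.random((30, 8))                    # columns: 8 neighborhood pixels
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = D[:, 1] + 0.01 * rng.random(30)        # test pixel near a class-0 atom
print(classify(D, labels, y))              # expected: 0
```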