Latest Issue

    Vol. 19, No. 2, 2014
    • Cognitive reasoning method for behavior understanding

      Tao Linmi, Yang Zhuoning, Wang Guojian
      Vol. 19, Issue 2, Pages: 167-174(2014) DOI: 10.11834/jig.20140201
      Abstract: Human behavior understanding is a challenging area in computer vision and machine intelligence. A basic problem in this area is the semantic gap between observable actions and human behavior, which should be bridged by context-based reasoning. In this paper, we propose a method for modeling everyday knowledge about actions, behavior, the environment, and their relationships. A novel progressive reasoning method, built on an extensible environment and action model, is then used to stride over the semantic gap. First, models of the relations among features, complex features, and behaviors are built. Feature extraction modules process the continuous sensor data and forward the results to the reasoning module, in which a set of candidate behaviors is selected via the feature-behavior models. The behavior set is then fed back as a condition on feature extraction, directing the search for further features that support or discriminate among the candidate behaviors. A system was developed that continuously and progressively reasons about human behavior using the proposed models. Preliminary experiments show that the system can continuously understand concurrent human behaviors.
      Keywords: behavior understanding; cognitive reasoning; feature-behavior relation; context awareness
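      A minimal sketch of the feedback loop the abstract describes, with hypothetical feature names and toy lookup-table models standing in for the authors' learned feature-behavior models: extract features, select candidate behaviors, then condition a second extraction pass on the candidate set.

```python
# Hypothetical sketch of the progressive reasoning loop; the tables below
# are toy stand-ins, not the paper's actual models.
FEATURE_BEHAVIOR = {            # feature -> behaviors it supports
    "at_stove": {"cooking"},
    "holding_cup": {"drinking", "cooking"},
    "sitting": {"reading", "watching_tv"},
}
BEHAVIOR_DISCRIMINATORS = {     # behavior -> extra features worth extracting
    "cooking": ["pan_on_stove"],
    "drinking": ["cup_at_mouth"],
}

def extract_features(frame, requested=None):
    """Stand-in for real feature extractors; returns observed feature names."""
    observed = set(frame["base_features"])
    if requested:                         # conditioned second pass
        observed |= set(frame["extra_features"]) & set(requested)
    return observed

def reason(frame):
    feats = extract_features(frame)
    candidates = set().union(*(FEATURE_BEHAVIOR.get(f, set()) for f in feats))
    # Feed the candidate set back: extract only discriminating features.
    wanted = [f for b in candidates for f in BEHAVIOR_DISCRIMINATORS.get(b, [])]
    feats |= extract_features(frame, requested=wanted)
    # Keep behaviors whose discriminators (if any) were actually observed.
    return {b for b in candidates
            if all(f in feats for f in BEHAVIOR_DISCRIMINATORS.get(b, []))}

frame = {"base_features": ["at_stove", "holding_cup"],
         "extra_features": ["pan_on_stove"]}
print(reason(frame))   # {'cooking'}
```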
    • Deep learning and its new progress in object and behavior recognition

      Zheng Yin, Chen Quanqi, Zhang Yujin
      Vol. 19, Issue 2, Pages: 175-184(2014) DOI: 10.11834/jig.20140202
      Abstract: Deep learning is a new research area in machine learning. Extracting features by deep learning for visual object recognition and behavior recognition currently attracts considerable attention. To draw more attention from the research community to deep learning, and to push forward the research frontier of object and behavior recognition, we give an overview of recent progress in deep learning and its application to visual object and behavior recognition. First, we give a general introduction to deep learning, covering its background, main concepts, and principles. Then, recent progress on applying deep learning to visual object recognition and behavior recognition is presented. We discuss the differences between deep learning and conventional neural networks as well as the advantages and disadvantages of deep learning, and point out the main open problems in deep learning theory. This paper should help the research community apply deep learning to visual object and behavior recognition.
      Keywords: deep learning; object recognition; behavior recognition; computer vision
    • Yang Chunling, Wu Juan
      Vol. 19, Issue 2, Pages: 185-193(2014) DOI: 10.11834/jig.20140203
      Abstract: In Wyner-Ziv (WZ) distributed video coding, to describe the changing characteristics of the correlation noise in each residual sub-band more accurately, we propose a correlation noise-modeling algorithm based on improved fuzzy C-means (FCM) clustering. For each decoded sub-band, residual coefficient eigenvectors are formed from the residual coefficients of adjacent sub-bands and then clustered into different categories by an improved FCM algorithm. To avoid the overflow problem caused by categories containing only a few samples, a threshold method is adopted to estimate the correlation noise parameter used to decode the corresponding sub-band. The reconstructed sub-band is then used to update the eigenvectors of the next sub-band, yielding more accurate eigenvectors; all sub-bands are decoded with the same process. Because the FCM algorithm is sensitive to the initial clustering centers, a random membership-degree matrix is generated before iteration, which alleviates this sensitivity to some extent. Balancing performance against complexity, the sub-band residual coefficients are clustered into eight classes. Experimental results show that the method accurately models the different channel noise characteristics of different regions within a frame, and that the performance advantage grows with the complexity of the video motion. Compared with the sub-band-level Laplacian method and the Laplacian-Cauchy mixture model, the new model achieves better rate-distortion performance, improving the average online rate-distortion performance by up to 1 dB.
      Keywords: Wyner-Ziv distributed video coding; correlation noise modeling; fuzzy C-means clustering; eigenvectors; Laplacian parameter
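      A minimal numpy sketch of the two core steps under assumptions the abstract does not pin down (toy data, Euclidean distance, fuzzifier m = 2): fuzzy C-means clustering started from a random membership matrix, and per-cluster Laplacian parameter estimation guarded by a sample-count threshold.

```python
import numpy as np

def fcm(X, c=8, m=2.0, iters=50, seed=0):
    """Fuzzy C-means with a random initial membership matrix (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), centers

def laplacian_alpha(residuals, labels, c=8, min_samples=30, default=1.0):
    """Per-cluster Laplacian scale alpha = sqrt(2/var); fall back to a
    default when a cluster has too few samples (the 'overflow' guard)."""
    alphas = np.full(c, default)
    for k in range(c):
        r = residuals[labels == k]
        if len(r) >= min_samples and r.var() > 0:
            alphas[k] = np.sqrt(2.0 / r.var())
    return alphas

rng = np.random.default_rng(1)
X = rng.laplace(scale=rng.choice([0.5, 2.0], 500)[:, None], size=(500, 4))
labels, _ = fcm(X, c=8)
print(laplacian_alpha(X[:, 0], labels))
```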
    • Wang Jiaoyu, Xu Xiaohong, Shen Renming, Liao Chongyang, Yang Xun
      Vol. 19, Issue 2, Pages: 194-201(2014) DOI: 10.11834/jig.20140204
      Abstract: To reduce the large data volumes in video processing, we combine a first-order autoregressive moving average (ARMA) video model with compressed sensing theory and propose a compressed sensing video model based on the first-order ARMA. The main idea is to make full use of video sparsity and inter-frame coherence under the compressed sensing framework, dividing the video into a static part and a dynamic part that are sampled simultaneously but processed separately to obtain the key model parameters. We also discuss the construction conditions of the model and provide concrete guidelines for using it with provable performance. Experiments show that the data volume can be reduced substantially and that the reconstructed video remains robust even at compression ratios of 100 to 200, with good compression performance on smooth video.
      Keywords: video processing; compressed sensing; sparse representation; autoregressive moving average model
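      A toy sketch of the idea, not the authors' exact model (hypothetical static/dynamic split, Gaussian measurement matrix): treat each frame as a static background plus a sparse dynamic residual, compress the residual as y = Φx, and recover it with orthogonal matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                      # signal length, measurement count

background = rng.random(n)           # static part, estimated once
frame = background.copy()
idx = rng.choice(n, 10, replace=False)
frame[idx] += rng.random(10)         # dynamic part is spatially sparse

residual = frame - background        # first-order prediction residual
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # CS measurement matrix
y = Phi @ residual                   # m measurements instead of n samples

def omp(Phi, y, k):
    """Orthogonal matching pursuit, assuming known sparsity k."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        x, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x
    xhat = np.zeros(Phi.shape[1])
    xhat[support] = x
    return xhat

xhat = omp(Phi, y, 10)
print(np.allclose(xhat, residual, atol=1e-6))  # True with high probability
```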
    • Xu Shaoping, Yang Rongchang, Liu Xiaoping
      Vol. 19, Issue 2, Pages: 201-210(2014) DOI: 10.11834/jig.20140205
      Abstract: Image quality assessment (IQA) aims to use computational models to measure image quality consistently with subjective evaluations. The well-known structural similarity (SSIM) index moved IQA from the pixel-based stage to the structure-based stage, and the information content weighted SSIM algorithm (IW-SSIM), a multi-scale information content weighting approach built on a Gaussian scale mixture (GSM) model of natural images, was proposed recently and achieved significant and consistent improvements over SSIM-based IQA algorithms. The success of IW-SSIM can be understood as an effective combination of several approaches proven useful in IQA research: multi-scale image decomposition with scale-variant weighting, SSIM-based local quality measurement, and information-theoretic analysis of visual information content and fidelity. Aiming to improve the local distortion measurement of IW-SSIM, we propose a novel algorithm called information content weighted gradient salience SSIM (IW-GS-SSIM). Human vision perceives different physical stimuli (e.g., luminance) nonlinearly, as theorized and empirically established by the psychologist E. H. Weber; Weber's law is empirical, but its effectiveness has been proven in several application fields. We assume that Weber's law also applies to image gradient magnitude, which we verified empirically on subject-rated IQA databases. Because the human visual system (HVS) responds to brightness stimuli largely in accordance with Weber's law, the proposed algorithm performs only one filtering pass to quickly compute the contrast between the current pixel and its background, which, after nonlinear mapping, serves as a dimensionless measure of the visual significance of the gradient structure. The gradient magnitude combined with Weber visual significance forms the basis for characterizing local image quality; the two play complementary roles, and the combined measure replaces the SSIM local distortion metric of the IW-SSIM algorithm. Extensive experiments on six benchmark IQA databases demonstrate that, as a local distortion metric, GS-SSIM evaluates noise and blurring distortions more accurately than SSIM, and that IW-GS-SSIM is often superior and otherwise comparable to other representative IQA algorithms in terms of the correlation between objective quality values and subjective scores, validating it as a robust IQA algorithm. The underlying principle of IW-GS-SSIM is that the HVS perceives an image mainly through its salient low-level features; implementing the perceptual nonlinearity of gradient magnitude yields a better local quality measure. Our results support the general principle that HVS properties of image gradients should be explored and incorporated when designing new IQA algorithms, and we expect this work to offer new insights to researchers interested in image quality assessment.
      Keywords: image quality assessment; information content weighting; Weber's law; visual salience; gradient structure
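      A minimal sketch of a Weber-style gradient salience map (constants are hypothetical; the paper's exact nonlinear mapping is not given in the abstract): gradient magnitude divided by a one-pass local background estimate, a dimensionless contrast in the spirit of Weber's ΔI/I.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def weber_gradient_salience(img, bg_size=7, eps=1.0):
    """Dimensionless gradient salience: |grad I| / local background."""
    img = img.astype(np.float64)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    grad = np.hypot(gx, gy)
    background = uniform_filter(img, size=bg_size)  # one filtering pass
    return grad / (background + eps)                # Weber-style ratio

img = np.tile(np.linspace(0, 255, 64), (64, 1))    # smooth luminance ramp
print(weber_gradient_salience(img).mean())
```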
    • Fei Xuan, Wei Zhihui, Xiao Liang, Li Xingxiu
      Vol. 19, Issue 2, Pages: 211-218(2014) DOI: 10.11834/jig.20140206
      Abstract: Breaking the limitations of the traditional Shannon-Nyquist sampling theorem, compressed sensing is a recent paradigm that allows a signal to be sampled at sub-Nyquist rates together with a recovery methodology that incurs no loss. Compressed sensing is related to other topics in signal processing; in particular, imaging techniques with a strong affinity to it include coded aperture and computational photography. Compressed sensing image reconstruction has attracted widespread attention in recent years, and total variation (TV) regularization, which describes the sparsity of the image gradient, has been widely used for it. Inspired by these ideas, we propose a novel compound regularized compressed sensing image reconstruction model based on optimally reweighted TV. Our model builds on the classical TV regularization recovery model with several improvements. First, because TV regularization favors piecewise constant solutions, the reconstructed image tends to be over-smoothed and details such as edges and textures are lost; to overcome this drawback, the gradient information of the image is used to estimate weights, yielding a reweighted TV-based reconstruction model. Then, to reduce the influence of noise and other degradations, we introduce a TV denoising (Rudin-Osher-Fatemi, ROF) model into the optimal estimation of the weights. Next, priors capturing characteristics of natural images, namely nonlocal structure similarity and local regression, are introduced into the reconstruction model to preserve image details, producing a compound regularized optimization model based on optimally reweighted TV. Finally, the optimization model reduces to a series of convex minimization problems that can be solved efficiently by combining the projection method with the operator splitting method, leading to fast and easy-to-code algorithms. The conventional TV reconstruction model exploits little prior information about the different characteristics of natural images and is easily affected by noise or other degradations; the proposed compound regularized model integrates the gradient sparsity prior (as the reweighted TV regularizer, which preserves smooth areas and strong edges), the nonlocal structure similarity prior, and the local regression prior (which improve weak edges and textures). Experimental results indicate that the proposed method achieves more refined reconstructions and improves on related TV-based methods in both objective criteria and visual fidelity.
      Keywords: compressed sensing; reweighted total variation; nonlocal structure similarity; local regression
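      A small sketch of the reweighting idea (epsilon and the denoiser setting are hypothetical; a Gaussian blur stands in for the ROF denoising step): weights inversely proportional to the gradient magnitude of a denoised estimate, so strong edges are penalized less in the next TV solve.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reweighted_tv_weights(u, eps=1e-2, denoise_sigma=1.0):
    """w = 1 / (|grad(denoise(u))| + eps): small weights on edges keep
    them sharp; large weights on flat regions keep smoothing there."""
    u_dn = gaussian_filter(u.astype(np.float64), denoise_sigma)
    gx = np.diff(u_dn, axis=1, append=u_dn[:, -1:])   # forward differences
    gy = np.diff(u_dn, axis=0, append=u_dn[-1:, :])
    return 1.0 / (np.hypot(gx, gy) + eps)

u = np.zeros((32, 32))
u[:, 16:] = 1.0                       # one vertical edge
w = reweighted_tv_weights(u)
print(w[16, 14:18].round(2))          # weights dip at the edge columns
```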
    • False contour suppression with anisotropic adaptive filtering

      Ni Jing, Wang Shuozhong, Liao Chun, Zeng Xing
      Vol. 19, Issue 2, Pages: 219-226(2014) DOI: 10.11834/jig.20140207
      Abstract: Image processing operations such as contrast enhancement, re-quantization, and compression often produce false contours: unrealistic edges in areas that are actually smooth, which degrade image quality. To suppress false contours and improve image quality, we propose an anisotropic adaptive filtering technique based on an analysis of the local characteristics of image edges and false contours. Edges are detected with the Canny operator and flat areas are identified; edges in smooth areas are judged to be false contours, yielding a false contour map. The direction and density of the false contours are then computed to provide a basis for selecting the filtering parameters: contour directions are quantized to eight angles, and the filtering kernel scales take six different values according to contour density. To preserve real edges and avoid blurring fine details, edges of sufficient strength are extracted and dilated to form a protection mask. The method effectively reduces false contour artifacts while preserving fine details, and experimental results show it outperforms other methods in terms of peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The proposed adaptive filtering algorithm removes false contours caused by excessive enhancement or improper quantization while keeping true edges and fine details intact, improving the visual quality of images.
      Keywords: false contour suppression; edge detection; anisotropic filtering; adaptive filtering
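      A sketch of the false-contour map construction with illustrative thresholds (not the paper's values): Canny edges intersected with low-variance (flat) regions, minus a dilated strong-edge protection mask.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_dilation
from skimage import feature

def false_contour_map(img, flat_var=300.0):
    """Edges lying in low-variance regions are judged false contours;
    strong edges are dilated into a protection mask. All thresholds are
    illustrative, not the paper's."""
    img = img.astype(np.float64)
    g = img / 255.0
    # Sensitive pass catches weak banding edges.
    edges = feature.canny(g, low_threshold=0.01, high_threshold=0.03)
    local_var = uniform_filter(img**2, 9) - uniform_filter(img, 9) ** 2
    flat = local_var < flat_var
    strong = feature.canny(g)                       # default thresholds
    protect = binary_dilation(strong, iterations=2)
    return edges & flat & ~protect

# Re-quantizing a smooth ramp to steps of 32 creates false contours.
ramp = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))
banded = (ramp // 32) * 32
print(false_contour_map(banded).sum() > 0)          # True
```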
    • Zheng Yue, Cheng Hong, Sun Wenbang
      Vol. 19, Issue 2, Pages: 227-233(2014) DOI: 10.11834/jig.20140208
      Abstract: Most algorithms for finding the optimal seam-line emphasize only the minimum difference between pixels rather than the integrity of objects, so the mosaicking result damages objects and loses image information. We put forward a novel algorithm for finding the optimal seam-line based on the shortest distance between the targets and each pixel in the 8-neighborhood. First, the image is divided into targets and background by morphological processing. Then, the points closest to the targets are selected as splice points according to the rule of avoiding cutting through targets, and the optimal seam-line is obtained by linking the splice points. The seam-line retrieved by the proposed algorithm improves structural similarity and correlation, and reduces the dislocation degree by more than 5%. The algorithm preserves the integrity of targets better than conventional algorithms: obvious dislocations are removed and gray-level mutations are visually weakened, improving the quality of the mosaicked image to a satisfying effect.
      Keywords: seam-line removal; geometric dislocation; optimal seam-line; improved A* algorithm; neighborhood
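      A compact sketch of the core idea with a toy mask and a hypothetical cost (the paper links splice points; here a dynamic-programming seam approximates that): steer the seam along pixels farthest from target objects using a distance transform.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def seam_avoiding_targets(target_mask):
    """Top-to-bottom seam maximizing distance to targets (8-neighbor
    moves), via dynamic programming on cost = -distance."""
    cost = -distance_transform_edt(~target_mask)   # distance to targets
    h, w = cost.shape
    acc = cost.copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for i in range(h - 2, -1, -1):                 # backtrack
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(acc[i, lo:hi]))
    return seam

mask = np.zeros((60, 60), bool)
mask[20:40, 25:45] = True                 # a target object
seam = seam_avoiding_targets(mask)
print(mask[np.arange(60), seam].any())    # False: seam avoids the target
```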
    • Zeng Jiexian, Li Weiye
      Vol. 19, Issue 2, Pages: 234-242(2014) DOI: 10.11834/jig.20140209
      Abstract: Corners are an important feature of objects in images and are widely used in computer vision; many applications rely on successful corner detection, such as image matching, panoramic stitching, 3D modeling, object recognition, and motion tracking. The multi-scale algorithm based on curvature scale space (CSS) is generally regarded as an effective method for finding corners, since it can detect both fine and coarse features. However, different choices of the scale result in missed or wrong corners: a high scale easily misses real corners, while a low scale detects more wrong ones. We therefore describe a new corner detection method based on curvature scale space and Freeman chain-code direction statistics, building on the improved CSS corner detector proposed by He et al. First, the Canny edge detector produces a binary edge map, and the curvature of each edge is computed at a relatively low scale to pick a candidate corner set; this yields enough corners but also wrong ones, such as rounded corners and false corners caused by sharply varying edges. Then, to remove the wrong corners, we propose an adaptive threshold combined with chain-code direction statistics: a threshold is computed adaptively from the mean curvature within a region of support, rounded corners are removed by comparing candidate curvatures against this threshold, and false corners are eliminated by evaluating the direction difference of each candidate using Freeman chain-code direction statistics. The algorithm is evaluated on detection accuracy, wrong detections, missed detections, and computing time. The experimental results show that, compared with other algorithms, the new method has high accuracy and a low error rate without a significant increase in computing time. In summary, the method adopts a low curvature scale to detect more corners and reduce missed detections, deletes rounded corners via the adaptive threshold, and eliminates false corners via chain-code direction statistics, thereby improving the accuracy of corner detection.
      Keywords: corner detection; curvature scale space; Freeman code; direction statistics histogram
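      A small sketch of the adaptive-threshold step (the coefficient and region of support are illustrative; the paper's values are not given in the abstract): compute contour curvature by finite differences and keep candidates whose curvature exceeds a multiple of the local mean curvature.

```python
import numpy as np

def contour_curvature(x, y):
    """Curvature of a parametric contour via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def adaptive_corners(x, y, window=10, c=1.5):
    """Keep points whose curvature exceeds c times the local mean
    curvature; this rejects rounded corners on gently curving arcs."""
    k = contour_curvature(x, y)
    local_mean = np.convolve(k, np.ones(2 * window + 1) / (2 * window + 1),
                             "same")
    return np.where(k > c * local_mean)[0]

# An L-shaped polyline: high curvature only at the 90-degree bend.
t = np.linspace(0, 1, 50)
x = np.r_[t, np.ones(50)]
y = np.r_[np.zeros(50), t]
print(adaptive_corners(x, y))   # indices cluster around the bend (~50)
```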
    • Palm-vein recognition based on oriented features

      Zhou Yujia, Liu Yaqin, Yang Feng, Huang Jing
      Vol. 19, Issue 2, Pages: 243-252(2014) DOI: 10.11834/jig.20140210
      Abstract: Vein pattern recognition is one of the newest biometric techniques under research today; biometric systems based on palm veins acquired from infrared images offer a high level of accuracy. Oriented features are generally extracted as the information for palm-vein recognition, but their use is not always effective or computationally simple. To reduce computational complexity while making full use of palm-vein information, a fast and robust approach based on oriented features is proposed. Images from the PolyU multispectral palmprint ROI database are used as the originals, followed by nonlinear enhancement so that vein patterns can be observed more clearly: background intensity profiles are estimated by dividing each image into 32x32 blocks, computing the average gray level of each block, and resizing the result to the original image size by bicubic interpolation; this estimate is subtracted from the original ROI image, and contrast limited adaptive histogram equalization (CLAHE) is applied to obtain the normalized, enhanced palm-vein image. Vessels in palm-vein images can be approximated by small line segments, and the Radon transform is an effective tool for identifying such line-like features; based on a modified local pattern, an orientation filter derived from the Radon transform is improved to extract features. Since veins appear darker in the images, the line direction yielding the minimum filter response is encoded as the dominant direction, and the resulting feature matrix is encoded with 3-bit binary numbers. The similarity between probe and gallery images (matching scores in [0, 1]) is calculated by combining the Hamming distance with global matching, and probe images are identified by thresholding the matching scores. The modified local pattern extracts palm-vein texture information well: by optimizing the orientation filter, the equal error rate (EER) on the PolyU database is improved to 0.0002%, confirming the rotation invariance of the proposed approach. Feature extraction and matching take 11 ms and 4 ms per image, respectively, and the method also performs well when image enhancement is poor. Compared with other palm-vein recognition methods, the experimental results confirm the superiority and robustness of the proposed approach in both computational complexity and the use of oriented palm-vein information.
      Keywords: palm-vein recognition; oriented features; neighborhood template; Hamming distance
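      A toy sketch of orientation coding and Hamming matching (simple line-mean filters stand in for the paper's Radon-derived orientation filters; all parameters are hypothetical): the darkest line direction at each pixel becomes a 3-bit code, and two code maps are compared bitwise.

```python
import numpy as np
from scipy.ndimage import correlate

def orientation_code(img, n_dirs=8, size=9):
    """3-bit dominant-direction code: index of the line filter with the
    minimum response (veins are dark)."""
    responses = []
    c = size // 2
    for k in range(n_dirs):
        theta = np.pi * k / n_dirs
        kern = np.zeros((size, size))
        for t in range(-c, c + 1):           # rasterize a line segment
            r = int(round(c + t * np.sin(theta)))
            col = int(round(c + t * np.cos(theta)))
            kern[r, col] = 1.0
        kern /= kern.sum()
        responses.append(correlate(img.astype(float), kern))
    return np.argmin(np.stack(responses), axis=0).astype(np.uint8)

def hamming_score(code_a, code_b):
    """Fraction of differing bits over the 3-bit codes (0 = identical)."""
    diff = np.bitwise_xor(code_a, code_b)
    bits = ((diff[..., None] >> np.arange(3)) & 1).sum()
    return bits / (3 * diff.size)

rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 50 + 100
img[30:34, :] -= 60                          # a dark horizontal "vein"
code = orientation_code(img)
print(hamming_score(code, code))             # 0.0 for a self-match
```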
    • Efficient cascade classifier for object tracking in complex conditions

      Jiang Weijian, Guo Gongde
      Vol. 19, Issue 2, Pages: 253-265(2014) DOI: 10.11834/jig.20140211
      Abstract: We improve the TLD algorithm and propose a local and global search based on the sliding-window method, an integral histogram filter, and a random Haar-like feature filter to solve the drift problem of traditional tracking algorithms in complex conditions. First, the integral histogram filter rejects sliding-window patches as quickly as possible to relieve the feature-matching load of the subsequent filters. Then, the random Haar-like feature filter addresses the drift problem, which causes a loss of accuracy when tracking objects under complex conditions (multiple objects, occlusion, fast movement). Finally, we combine the filters of the TLD algorithm with the two proposed filters into a cascade. Experimental results show that, compared with traditional tracking algorithms, the proposed approach is robust and accurate in both stable backgrounds and complex conditions, and achieves the best computing speed thanks to the local and global search. The method detects multi-scale objects accurately across different environments and under object deformation, and combining the global and local search strategies effectively reduces the time consumption, achieving real-time object tracking.
      Keywords: computer vision tracking; Haar-like features; cascade classifier; TLD algorithm; integral histogram
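      A minimal sketch of the integral-histogram trick (bin count and patch coordinates are hypothetical): precompute one cumulative-sum table per bin, then read any rectangle's histogram in O(bins) time for fast patch rejection.

```python
import numpy as np

def integral_histogram(img, bins=16):
    """One cumulative-sum table per gray-level bin."""
    b = np.minimum((img.astype(int) * bins) // 256, bins - 1)
    onehot = (b[..., None] == np.arange(bins)).astype(np.int32)
    ih = onehot.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ih, ((1, 0), (1, 0), (0, 0)))  # zero row/col for queries

def region_hist(ih, r0, c0, r1, c1):
    """Histogram of img[r0:r1, c0:c1] in O(bins) via 4 table lookups."""
    return ih[r1, c1] - ih[r0, c1] - ih[r1, c0] + ih[r0, c0]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (120, 160))
ih = integral_histogram(img)
h = region_hist(ih, 10, 20, 40, 60)
ref = np.histogram(img[10:40, 20:60], bins=16, range=(0, 256))[0]
print(np.array_equal(h, ref))                    # True
```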
    • Local feature descriptor based on mean normalized contrast

      Yan Xuejun, Zhao Chunxia, Yuan Xia, Xu Dan, Liu Fan
      Vol. 19, Issue 2, Pages: 266-274(2014) DOI: 10.11834/jig.20140212
      Abstract: Because it uses contrast values and a two-bin contrast histogram, the contrast context histogram (CCH) descriptor is low-dimensional and efficient to compute. However, the CCH contrast value is calculated between every pixel in a local region and the region's center pixel, making the contrast values sensitive to intensity changes at the center; the stability of the center intensity depends on the positioning accuracy of the local feature detector, which is susceptible to noise, image transformation and distortion, scale-space discretization, and other factors. Furthermore, CCH contrast values are not invariant to constant illumination changes, because the center intensity usually differs from the mean intensity of the local region. As a result, CCH cannot match the performance of the standard scale invariant feature transform (SIFT) descriptor, while SIFT itself is high-dimensional and computationally expensive. To solve these problems, a fast, low-dimensional local descriptor called the mean normalized contrast context histogram (MN-CCH) is proposed. MN-CCH first computes the Gaussian-weighted mean of the whole local region, with the Gaussian kernel centered on the region, and normalizes the region by this weighted mean to obtain normalized contrast values. The region is then divided into 32 sub-regions in a log-polar coordinate system, and a two-bin positive-negative contrast histogram is built for each sub-region. To handle linear lighting changes, the 64-dimensional MN-CCH vector is normalized to a unit vector. Because the Gaussian-weighted mean is used, MN-CCH contrast values are more stable under feature location error than the center-point intensity used by CCH, and the normalized contrast values are invariant to constant illumination changes, which further improves performance. Image matching experiments on the Mikolajczyk image transformation dataset show that MN-CCH outperforms CCH under most image transformations, especially on natural and textured images, and overcomes CCH's low recall at low 1-precision and its sensitivity to noise. The proposed 64-dimensional MN-CCH is competitive with the 128-dimensional SIFT descriptor in image matching, and on the PCA-SIFT retrieval dataset it achieves the same retrieval accuracy as SIFT, higher than CCH. MN-CCH's descriptor construction and matching time is essentially identical to CCH's, since the two methods share the same dimension and complexity, and it is more efficient than SIFT, whose tangential calculations during descriptor construction are time-consuming and whose dimension is higher. In summary, MN-CCH outperforms CCH in both image matching and retrieval by fixing CCH's sensitivity to feature location error and its lack of constant-illumination invariance, while offering faster descriptor construction and a lower dimension than SIFT at competitive accuracy. As a result, MN-CCH is well suited to applications where computing speed and storage are at a premium, such as real-time mobile robot navigation, visual SLAM, and so on.
      Keywords: local descriptor; contrast context histogram; image matching; image retrieval
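      A condensed sketch of the MN-CCH construction under assumed parameters (patch size, Gaussian sigma, and an 8-angular by 4-radial log-polar grid): normalize a patch by its Gaussian-weighted mean, then accumulate positive and negative contrasts per sub-region into a 64-dimensional unit vector.

```python
import numpy as np

def mn_cch(patch, n_ang=8, n_rad=4):
    """64-dim MN-CCH-style descriptor: 32 log-polar sub-regions, each
    with a positive and a negative contrast bin, L2-normalized."""
    patch = patch.astype(np.float64)
    h, w = patch.shape
    ys, xs = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    g = np.exp(-((ys - cy)**2 + (xs - cx)**2) / (2 * (h / 2.0)**2))
    contrast = patch - (g * patch).sum() / g.sum()  # Gaussian-weighted mean
    r = np.hypot(ys - cy, xs - cx)
    ang = (np.arctan2(ys - cy, xs - cx) + np.pi) / (2 * np.pi)
    a_bin = np.minimum((ang * n_ang).astype(int), n_ang - 1)
    r_bin = np.minimum((np.log1p(r) / np.log1p(r.max()) * n_rad).astype(int),
                       n_rad - 1)
    sub = (a_bin * n_rad + r_bin).ravel()
    pos = np.zeros(n_ang * n_rad)
    neg = np.zeros(n_ang * n_rad)
    np.add.at(pos, sub, np.maximum(contrast, 0).ravel())
    np.add.at(neg, sub, np.minimum(contrast, 0).ravel())
    d = np.concatenate([pos, neg])
    return d / (np.linalg.norm(d) + 1e-12)

patch = np.random.default_rng(0).random((32, 32)) * 255
d1, d2 = mn_cch(patch), mn_cch(1.7 * patch + 10)   # linear lighting change
print(d1.shape, np.allclose(d1, d2))                # (64,) True
```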
    • Zhang Li, Li Yuanyuan, Yang Yan, Tan Jieqing
      Vol. 19, Issue 2, Pages: 275-282(2014) DOI: 10.11834/jig.20140213
      Abstract: In computer aided design, a new data fitting technique, progressive iterative approximation (PIA), has been proposed and has attracted plenty of attention. By adjusting the control points iteratively, PIA provides a straightforward way to generate a sequence of curves or surfaces with increasing fitting precision. A curve (or tensor product surface) has the PIA property as long as its basis is normalized and totally positive and the corresponding collocation matrix is nonsingular. To extend the scope of the PIA property, this paper focuses on triangular surfaces and collocation matrices that are not totally positive, and shows that the PIA property may still hold. Given a set of triangular basis functions and corresponding parameter values, one obtains the collocation matrix B of the basis at those parameters; forming the matrix I - B and computing its spectral radius, the basis over the triangular domain is said to have the PIA property if this spectral radius is less than 1. If the collocation matrix is diagonally dominant or generalized diagonally dominant with positive diagonal entries, then the real parts of its eigenvalues are positive, and we prove that in this case the corresponding basis on the triangular domain possesses the PIA property, which we call the generalized PIA property. This establishes the relationship between (generalized) diagonal dominance of the collocation matrix and the PIA property of the basis. The Said-Ball basis has many good properties, such as shape preservation and the convex hull property, and is superior to the Bernstein basis in recursive evaluation and degree elevation or reduction, so developing the Said-Ball basis on the triangular domain is worthwhile for free-form surface design. Taking the Said-Ball basis as an example, we find that it has the generalized PIA property on the triangular domain, and numerical examples confirm the theory. The main contribution of this paper is extending the PIA property to generalized blending bases over a triangular domain via a progressive iterative scheme for triangular Said-Ball surfaces: given scattered data points forming an initial control mesh, the limit surface interpolates the original points through an iterative sequence of triangular Said-Ball patches. Numerical examples with low-degree triangular Said-Ball bases under uniform and non-uniform parameters are given, and the method generalizes to other generalized Ball bases on the triangular domain.
      Keywords: progressive iterative approximation; generalized diagonal dominance; Said-Ball basis; triangular domain
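      A short numerical sketch of the stated criterion with a toy collocation matrix (not a Said-Ball one): the PIA iteration converges when the spectral radius of I - B is below 1, which holds for this diagonally dominant B with positive entries.

```python
import numpy as np

def has_pia_property(B):
    """PIA criterion from the abstract: spectral radius of I - B < 1."""
    rho = max(abs(np.linalg.eigvals(np.eye(len(B)) - B)))
    return rho, rho < 1

# A row-stochastic, diagonally dominant collocation matrix (toy example).
B = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.70, 0.15],
              [0.10, 0.20, 0.70]])
rho, ok = has_pia_property(B)
print(round(float(rho), 3), ok)   # spectral radius < 1 -> convergence

# One PIA step moves control points by the current interpolation error.
pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])   # data to interpolate
ctrl = pts.copy()
for _ in range(50):
    ctrl += pts - B @ ctrl           # error feedback; converges since ok
print(np.allclose(B @ ctrl, pts))    # True: limit interpolates the data
```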
    • Target feature based multi-resolution volume rendering

      Guo Siqi, Lu Cai, Nie Xiaoyan
      Vol. 19, Issue 2, Pages: 283-289(2014) DOI: 10.11834/jig.20140214
      Abstract: Multi-resolution volume rendering is an effective way to render massive data volumes. Traditional methods assume the data contains homogeneous regions of small variance or Shannon entropy, whose resolution can be reduced to shrink the overall data volume; for data such as seismic volumes, however, there are few homogeneous regions, and traditional methods can hardly achieve multi-resolution. In this paper, a target-feature-based multi-resolution volume rendering method is proposed on top of the traditional approach. Its basic idea is to use a target feature of the data as a guide to locate target areas: in some cases, Shannon entropy and variance reflect only numerical changes, not the information the user actually needs, so this method uses the target feature to find the regions of interest to the user and then appropriately reduces the resolution of the non-target areas after processing the data in the traditional way, improving the compression ratio. The method fits the data into available memory by reducing the amount of data in non-critical regions while protecting the target regions as far as possible, achieving multi-resolution volume rendering without losing the critical information of the target areas. For a given data budget, it obtains better renderings by dropping information in non-target areas while keeping the drawing quality of the target regions high. Experimental results show that, compared with traditional methods, the proposed method renders critical areas better while further reducing the amount of data used for rendering.
      Keywords: multi-resolution; volume rendering; massive data; target feature
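      A toy sketch of the selection step only (block size, quantile rule, and the target_score callback are all made up; no renderer involved): score blocks with a target feature instead of raw variance, and downsample only the blocks that score low.

```python
import numpy as np

def multires_blocks(vol, target_score, block=8, keep_frac=0.25):
    """Keep full resolution for top-scoring blocks; halve the rest.
    target_score(block_data) -> float stands in for a domain feature
    (e.g., a seismic attribute) rather than plain variance/entropy."""
    bz, by, bx = (s // block for s in vol.shape)
    scores = np.empty((bz, by, bx))
    for i, j, k in np.ndindex(bz, by, bx):
        blk = vol[i*block:(i+1)*block, j*block:(j+1)*block,
                  k*block:(k+1)*block]
        scores[i, j, k] = target_score(blk)
    cutoff = np.quantile(scores, 1 - keep_frac)
    out = {}
    for i, j, k in np.ndindex(bz, by, bx):
        blk = vol[i*block:(i+1)*block, j*block:(j+1)*block,
                  k*block:(k+1)*block]
        out[(i, j, k)] = blk if scores[i, j, k] >= cutoff else blk[::2, ::2, ::2]
    return out

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32)).astype(np.float32)
vol[8:16, 8:16, 8:16] += 3.0                  # the "target" structure
parts = multires_blocks(vol, target_score=lambda b: float(b.mean()))
full = sum(p.size == 8**3 for p in parts.values())
print(full, len(parts))                       # 16 of 64 blocks kept full
```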
    • HASM optimization based on the improved difference scheme

      Zhao Mingwei, Yue Tianxiang, Zhao Na
      Vol. 19, Issue 2, Pages: 290-296(2014) DOI: 10.11834/jig.20140215
      Abstract: To reduce error in spatial surface simulation, the high accuracy surface modeling (HASM) method, based on the fundamental theorem of surface theory, has been developed over the past decades. Numerical tests have shown HASM to be much more accurate than classical methods such as kriging, inverse distance weighting (IDW), and splines, and surface modeling of digital elevation models and spatial simulation of soil properties likewise indicate that HASM increases interpolation accuracy. However, HASM's huge time consumption limits its application at large scales; numerical analysis shows that the main cost lies in the outer iteration process, so an effective way to improve simulation efficiency is to speed up the outer iteration. We therefore propose a new way to construct the difference equations of HASM: the new difference equation employs five grid points to compute the first derivative, whereas the original method uses only three. First, we prove by formula derivation that the modified difference equation improves simulation accuracy. Then we design numerical tests that vary the sampling rate (0.5%, 1%, and 2%) and the spatial position of the sample points, including a test that removes key points of the study area (the valley bottom of the simulated surface) to analyze how missing key sample points affect the original and new methods. The results show that HASM with the new difference equation improves the single-iteration accuracy significantly, and the contrast experiments indicate that the modified HASM reduces the influence of missing key sample points, which matters in practice because it is often impossible to identify all key sample points. Notably, the modified HASM obtains these advantages without additional memory. Furthermore, the study reveals that alternating the modified and original difference equations in the outer iteration speeds up the growth of simulation accuracy, which helps reach a specified accuracy quickly in practical applications. In summary, compared with the original HASM, the modified algorithm improves single-iteration accuracy, performs better when key sample points are missing, and, used alternately with the original scheme, accelerates error reduction in the outer iteration.
      Keywords: high accuracy surface modeling (HASM); difference scheme; numerical test; simulation accuracy
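      A short numerical illustration (not the paper's derivation) of why a five-point first-derivative stencil raises per-step accuracy over the three-point one: central differences of f(x) = sin x show the O(h^2) versus O(h^4) error gap.

```python
import numpy as np

def d1_three_point(f, x, h):
    """Classical central difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_five_point(f, x, h):
    """Five-point central difference, O(h^4)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, exact = 1.0, np.cos(1.0)
for h in (0.1, 0.05, 0.025):
    e3 = abs(d1_three_point(np.sin, x, h) - exact)
    e5 = abs(d1_five_point(np.sin, x, h) - exact)
    print(f"h={h:<6} 3-point err={e3:.2e}  5-point err={e5:.2e}")
# Halving h cuts the 3-point error ~4x but the 5-point error ~16x.
```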
    • Method of modeling and visualization for repairing of osteoarticular defect

      Xing Huijun, Yang Jian, Li Qin
      Vol. 19, Issue 2, Pages: 297-304(2014) DOI: 10.11834/jig.20140216
      Abstract: The main problem in treating joint bone injury is the lack of accurate joint models to guide repair surgery. In this paper, we study modeling and visualization techniques for quantifying osteoarticular defects accurately. The key techniques, including bone image enhancement, multi-modal image fusion, segmentation, and the modeling and measurement of bone structure, can visually show the three-dimensional geometry of the defect and provide a quantitative standard for doctors to diagnose and treat articular defects. The involved techniques are analyzed in depth, and the results show that the method provides accurate three-dimensional quantitative models. CT- and MRI-based three-dimensional modeling offers an accurate and effective way to evaluate osteoarticular defects, making the method valuable for assisting doctors in diagnosing defects of the glenoid and humeral head, and its future development will play an important role in repairing osteoarticular defects.
      Keywords: joint bone; segmentation; modeling; quantitative analysis
    • Zhang Jianwei, Fang Lin, Chen Yunjie, Zhan Tianming
      Vol. 19, Issue 2, Pages: 305-312(2014) DOI: 10.11834/jig.20140217
      Abstract: In this paper, we propose a local statistical geodesic active contour (GAC) image segmentation method. First, local intensity statistics are assumed to follow a Gaussian distribution, and a directional driving term is established to reduce the effect of intensity inhomogeneity. Second, a local statistical GAC energy functional based on this hypothesis is constructed; minimizing it guides the evolution curve toward object boundaries to segment regions of interest (ROI). Finally, the method is implemented with a binary level set function to improve efficiency and stability. Experiments on medical images show that the algorithm segments ROIs quickly and accurately.
      Keywords: medical image; signed pressure force function; local statistical information; specific target segmentation
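      A simplified binary-region stand-in for the level-set evolution (window size and test image are made up; the paper's energy and driving term are not reproduced here): each pixel joins the region whose local mean it is closer to, which tolerates a smooth intensity bias.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_region_segment(img, init_mask, win=15, iters=20):
    """Binary region evolution driven by local Gaussian statistics:
    a pixel joins the region whose *local* mean it is closer to."""
    img = img.astype(np.float64)
    m = init_mask.astype(np.float64)
    for _ in range(iters):
        in_mean = uniform_filter(img * m, win) / (uniform_filter(m, win) + 1e-9)
        out_mean = uniform_filter(img * (1 - m), win) / (
            uniform_filter(1 - m, win) + 1e-9)
        m = ((img - in_mean) ** 2 < (img - out_mean) ** 2).astype(np.float64)
    return m.astype(bool)

# A bright disk on a background with a smooth illumination bias.
yy, xx = np.mgrid[:100, :100]
img = 0.004 * xx + np.where((yy - 50)**2 + (xx - 50)**2 < 400, 1.0, 0.0)
init = (yy > 40) & (yy < 60) & (xx > 40) & (xx < 60)   # seed inside disk
seg = local_region_segment(img, init)
truth = (yy - 50)**2 + (xx - 50)**2 < 400
print((seg == truth).mean() > 0.95)   # close to the true disk
```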
    • Zhou Yuwei, Chen Qiang, Sun Quansen, Hu Baopeng
      Vol. 19, Issue 2, Pages: 313-321(2014) DOI: 10.11834/jig.20140218
      Abstract: In remote sensing applications such as visual interpretation, it is necessary to improve the visual quality of remote sensing images. An enhancement approach based on the dark channel prior and bilateral filtering is proposed. To address the high computational complexity of soft matting in the dark channel prior model, bilateral filtering is used to estimate the atmospheric veil and thereby obtain the refined transmission map of He's model. Because color distortion is observed when the dark channel prior is applied directly to remote sensing image enhancement, an improved algorithm for calculating the transmission map is presented, which adjusts the pixel values of the depth image so that none exceeds one. Finally, the enhanced remote sensing image is obtained from the depth image and the dark channel prior model. Experimental results demonstrate that the proposed algorithm increases image contrast effectively; comparisons with enhancement models based on SSR with bilateral filtering, four-scale Retinex, histogram equalization, and MSRCR validate its effectiveness. The proposed model makes enhanced remote sensing images better aligned with human visual characteristics and more convenient for visual interpretation, and it is feasible for remote sensing image visualization enhancement.
      Keywords: dark channel prior; bilateral filtering; image enhancement; remote sensing image
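      A compact sketch of the dark channel prior pipeline (a Gaussian blur stands in for the bilateral refinement, and omega, patch size, and the test scene are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def dark_channel(img, patch=15):
    """Per-pixel min over RGB, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def enhance(img, omega=0.95, t0=0.1):
    dark = dark_channel(img)
    # Atmospheric light: mean color of the brightest dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission from the dark channel of img/A, then smoothed
    # (Gaussian here; the paper refines with a bilateral filter).
    t = 1.0 - omega * dark_channel(img / A)
    t = np.clip(gaussian_filter(t, 3), t0, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0, 1)

rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3)) * 0.6
hazy = scene * 0.5 + 0.45                   # low-contrast, veiled image
out = enhance(hazy)
print(float(hazy.std()), float(out.std()))  # contrast increases
```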
    • Liu Hui, Zhou Kefa, Wang Jinlin, Wang Shanshan
      Vol. 19, Issue 2, Pages: 322-327(2014) DOI: 10.11834/jig.20140219
      Abstract: To improve the quality of multispectral and panchromatic satellite image fusion, a method combining the non-subsampled contourlet transform (NSCT), a pulse coupled neural network (PCNN), and the IHS (intensity-hue-saturation) transform is proposed. First, the IHS transform of the multispectral image is computed to extract the I component, which is then enhanced using principal component analysis to obtain a new component I'. Second, the panchromatic image and the I' component are decomposed by NSCT, and edge gradient information is used to trigger the PCNN, producing the low-frequency and high-frequency coefficients of the fused image. Finally, the fused image is obtained by applying the inverse NSCT and inverse IHS transforms. Experimental results on ZY-1 02C satellite data show that the proposed method improves spatial resolution while preserving spectral information and achieves a better fusion effect: the NSCT- and IHS-based fusion method outperforms common image fusion methods in both visual effect and objective evaluation.
      Keywords: image fusion; non-subsampled contourlet transform; IHS transform; pulse coupled neural network
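      A minimal sketch of the IHS step alone, with NSCT and PCNN omitted (a simple linear IHS model and histogram-matched intensity substitution; the paper fuses I' and the panchromatic band through NSCT coefficients instead of this direct swap):

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Linear IHS pan-sharpening: I = mean(R, G, B); add (pan' - I) to
    each band, where pan' is pan matched to I's mean and std."""
    ms = ms.astype(np.float64)
    I = ms.mean(axis=2)
    pan = pan.astype(np.float64)
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * I.std() + I.mean()
    return np.clip(ms + (pan_m - I)[..., None], 0, 1)

rng = np.random.default_rng(0)
detail = rng.random((128, 128))                    # high spatial detail
ms = np.stack([0.3 * detail + c for c in (0.2, 0.3, 0.1)], axis=2)
fused = ihs_fuse(np.clip(ms, 0, 1), detail)
print(fused.shape, float(np.corrcoef(fused.mean(2).ravel(),
                                     detail.ravel())[0, 1]))  # ~1.0
```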
    • Segmented 2DPCA algorithm for band selection of hyperspectral image

      Vol. 19, Issue 2, Pages: 328-332(2014) DOI: 10.11834/jig.20140220
      Abstract: Hyperspectral remote sensing is one of the breakthroughs in earth observation. Hyperspectral images have a contiguous spectral range and narrow spectral intervals; typically, each pixel is represented by hundreds of values corresponding to different wavelengths. With the wide application of hyperspectral images in fields such as surveillance, geology, environmental monitoring, and meteorology, the high dimensionality and huge data volume have become a key problem, and feature selection, especially band selection, plays an important role in hyperspectral image processing. To reduce the inter-spectral dimensionality effectively, a segmented two-dimensional principal component analysis (2DPCA) algorithm for band selection of hyperspectral images is proposed. The algorithm builds on the traditional 2DPCA feature extraction method and combines the advantages of 2DPCA and band selection. The band selection process has two steps: first, the spectral bands are grouped into clusters based on the correlation coefficients between them; then, 2DPCA-based band selection is performed separately on the bands of each group. In the second step, the image data are rearranged so that the covariance matrix has dimensions corresponding to the number of bands in each group; the number of selected bands is determined by the cumulative contribution rate of the principal components, and bands are selected according to how much information each band contributes to those components. The method only needs to compute the eigenvalues and eigenvectors of the covariance matrix, without transforming the original image matrix, so its computational load is small and the physical characteristics of the original image are unchanged. In the experiments, the original data are divided into four groups according to the inter-spectral correlation coefficients; the proposed algorithm extracts 18 bands from the full 189 in 34.937 seconds, effectively reducing the spectral dimension, while PCA, segmented PCA, and 2DPCA extract 13, 16, and 11 bands in 192.375, 50.829, and 92.453 seconds, respectively. The bands selected by the 2DPCA algorithm are representative, contain rich information, and clearly reflect the differences between pixel vectors of different surface features; the reduced-band image can be used in subsequent processing such as classification and recognition without adverse effects. Building on the 2DPCA feature extraction algorithm, we implement both 2DPCA-based and segmented 2DPCA-based band selection and compare them with the existing PCA-based and segmented PCA-based algorithms in terms of the selected bands, time consumption, and pixel-vector curves. The experimental results show that the proposed algorithm runs faster than the traditional algorithms and that the selected bands preserve the locally important information of the original image well, demonstrating that the proposed algorithm is feasible, efficient, and of high practical value.
      Keywords: hyperspectral image; two-dimensional principal component analysis (2DPCA); band selection; band grouping
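      A toy sketch of the two-step procedure (synthetic cube, simple contiguous grouping by neighbor correlation, loading-magnitude selection; thresholds and the energy criterion are hypothetical): group correlated bands, then pick the bands that load most on the leading principal components of each group.

```python
import numpy as np

def group_bands(X, thresh=0.9):
    """Split contiguous bands where neighboring-band correlation drops.
    X: (n_pixels, n_bands)."""
    corr = np.corrcoef(X.T)
    cuts = [0] + [b + 1 for b in range(X.shape[1] - 1)
                  if corr[b, b + 1] < thresh] + [X.shape[1]]
    return [list(range(a, b)) for a, b in zip(cuts[:-1], cuts[1:])]

def select_in_group(X, bands, energy=0.95):
    """Eigendecompose the group covariance; keep one band per leading
    component (the band with the largest loading)."""
    C = np.cov(X[:, bands].T)
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    k = int(np.searchsorted(np.cumsum(w[order]) / w.sum(), energy)) + 1
    picked = {bands[int(np.abs(V[:, j]).argmax())] for j in order[:k]}
    return sorted(picked)

rng = np.random.default_rng(0)
base = rng.random((500, 3))                  # three spectral "materials"
mix = np.repeat(np.eye(3), 10, axis=0)       # 30 bands, 3 correlated runs
X = base @ mix.T + 0.05 * rng.normal(size=(500, 30))
groups = group_bands(X)
selected = sorted(b for g in groups for b in select_in_group(X, g))
print(len(groups), selected)                 # 3 groups, one band per group
```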