Abstract: Data granularity and transmission order are two basic questions in transferring streaming maps progressively over the Web. Map data with high granularity can produce high-resolution animation, but the data volume increases greatly. The transmission order reflects the coarse-to-fine sequence of the data stream in different domains. This study investigates the hierarchical structure of map data organization and presents a three-level granularity classification, namely the feature level, the object level and the geometric detail level. Through example analysis, the study discusses the application of the three granularity partitions in progressive transmission, as well as strategies to reduce data volume. For the analysis of transfer order, a matrix of semantics and scale is built to describe the map data set, and two kinds of transmission order are offered for different progressive transfer processes, namely the semantic-priority order and the scale-priority order.
Keywords: progressive transfer;map generalization;streaming media;web GIS;spatial data granularity
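The two transmission orders described above can be sketched as a simple sort over tagged map records. This is an illustrative sketch only; the field names and rank values below are hypothetical, not taken from the paper:

```python
# Hypothetical map records: each object carries a semantic-priority rank
# and a scale (level-of-detail) level; lower numbers stream first.
records = [
    {"feature": "building", "semantic_rank": 2, "scale_level": 1},
    {"feature": "road",     "semantic_rank": 0, "scale_level": 2},
    {"feature": "river",    "semantic_rank": 1, "scale_level": 0},
]

# Semantic-priority order: stream every level of the most important
# feature class first, then move on to the next class.
semantic_first = sorted(records, key=lambda r: (r["semantic_rank"], r["scale_level"]))

# Scale-priority order: stream the coarsest level of every class first,
# then progressively refine all classes together.
scale_first = sorted(records, key=lambda r: (r["scale_level"], r["semantic_rank"]))
```

The two sorts differ only in which key dominates, which is exactly the choice between refining one theme at a time and refining the whole map from coarse to fine.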
Abstract: This paper presents an efficient way to simplify large-scale vector maps while keeping their original topological relationships. By using the frame buffer and the Voronoi diagram, we achieve rapid simplification of large-scale vector maps while avoiding topological errors, including intersections, self-intersections, changes of point sidedness and changes of polygon adjacency. This study will help improve the accuracy and efficiency of multi-scale, large-scale vector data generalization.
Abstract: Multi-representation of spatial data is one of the hot topics in modern GIS. This paper proposes a bi-hierarchical multi-scale model that represents large vector data with two hierarchies, namely spatial elements and vertex coordinates, so as to improve visualization efficiency. First, spatial elements are taken as the minimal units and organized with a multi-scale index to describe the changes in the quantity or properties of elements brought by scale changes. Then vertex coordinates are regarded as the minimal units and tagged with scale-hierarchy labels to describe the geometric changes arising at different scales. With the support of PostgreSQL, this paper implements the bi-hierarchical multi-scale model by extending the index and functions, and develops the searching algorithms. Finally, a 1:10,000 scale land-use dataset is taken as an example to validate the model and the searching algorithms. The results show that the visualization and transmission efficiency of large vector data can be greatly improved with the proposed model and searching algorithms.
Keywords: multi-representations;multi-scale database;multi-scale index
Abstract: This paper presents the concept and key techniques of the progressive transmission of spatial data over the Internet, summarizes the state of the art of related studies, and comments on the latest research progress worldwide. Finally, the paper concludes with an outlook on several key techniques of the progressive transmission of spatial data over the Internet.
Abstract: Land-use data generalization is one of the basic operations for deriving multi-scale data from a comprehensive land-use database. Acquiring data characteristics and converting them into generalization rules used to implement data generalization is an effective way to improve the objectivity and validity of land-use data generalization. This paper focuses on the extraction and application of region-dependent rules for land-use data generalization. First, the objectives and hierarchy of the rules are clarified. Then, spatial association analysis and landscape indices are employed to set the importance rank, minimum area threshold and spatial distribution pattern of each land-use class. Finally, a case study is undertaken to show the application of these rules using 1:10 000 land-use data around Jiufeng county of Hubei, China.
关键词:multi-scale land-use data;data generalization;generalization rules;spatial association analysis;landscape index
Abstract: This paper is the fourteenth annual literature bibliography on computer graphics engineering in China, covering 2008. We collected and classified most of the important papers in the computer graphics field published in Chinese, selecting references from 11 important Chinese journals published in 2008 and classifying them into categories according to their contents. Based on this overview and analysis, we found that the number of researchers and developers engaged in computer-graphics-related fields has been increasing greatly over the past 14 years, many conferences are held each year, and many high-level achievements have been made in China. In addition, computer graphics continues to develop, deriving new research topics and directions, including cross-disciplinary ones.
Abstract: Many algorithms in computer vision assume a pin-hole camera model, yet an image is produced by both the projective transform and lens distortion, and fish-eye or wide-angle lenses have large distortion. Unlike traditional methods, which need much information about the scene, a novel approach is presented that uses the geometric invariability of lines to perform camera self-calibration from a single image. To correct lens distortion and compute vanishing points from the image, projective invariants are used: collinear points should lie on the same line, and parallel lines should intersect at one point. A metric invariant is also used: the included angle of orthogonal straight lines should be a right angle. In this way the camera lens distortion and internal parameters are calibrated. Results on both simulated and real images show that the method is accurate and reliable.
Abstract: Combining sparse Bayesian learning (SBL) with compressed sensing (CS), a new method for reconstructing compressed images from contaminated measurements is presented. The method regards image reconstruction as a linear regression model and the image to be reconstructed as the unknown weights of that model. Through sparse Bayesian learning, the weights are endowed with a prior conditional probability density function, which limits the complexity of the model and introduces hyper-parameters. By maximizing the marginal likelihood function of the hyper-parameters, the optimal weights, i.e. the reconstructed image, are acquired. At the same time, the method provides the posterior probability density and error bars of the estimated weights, from which the uncertainty of the reconstruction can be deduced. Experimental results show that the new method achieves exact reconstruction and, at the same relative reconstruction error, is superior to basis pursuit in reconstruction time and to orthogonal matching pursuit in the number of measurements.
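The abstract above compares SBL against orthogonal matching pursuit (OMP) as a baseline. A minimal sketch of OMP itself (not the paper's SBL method) shows what that baseline does: greedily pick the dictionary column most correlated with the residual and refit by least squares. The matrix sizes and sparsity below are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the column of A most
    correlated with the residual, then refit on the support by least squares."""
    residual = y.astype(float).copy()
    support = []
    x_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Illustrative compressed-sensing setup: 20 measurements of a
# 40-dimensional signal with 3 nonzero coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(40)
x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

Unlike SBL, this greedy estimate comes with no posterior density or error bars, which is precisely the gap the paper's Bayesian treatment fills.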
Abstract: The image pyramid is an important tool for processing and analyzing digital images. This study focuses on image pyramids and develops an approach that reduces their data volume and solves the problem of precision truncation. Based on the same idea, we propose two kinds of image pyramid: the mean-like pyramid and the Gaussian-like pyramid. In both, father pixels are obtained by simple calculations on their son pixels; there is no truncation in these calculations, and no extra bits are needed to store decimals. Moreover, since some son pixels can be recovered from their father pixels, they can be dropped when storing a pyramid, so the number of pixels that need to be stored equals only that of the original image. Theoretical analysis and experiments on precision, data volume and creation speed show that the proposed pyramid structures have good overall performance.
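The father/son relationship behind a mean pyramid, and the recoverability that lets son pixels be dropped, can be sketched as follows. This is a plain 2x2 mean pyramid for illustration, not the paper's exact mean-like or Gaussian-like construction:

```python
import numpy as np

def mean_pyramid_level(img):
    """One coarser pyramid level: each father pixel is the exact mean of
    its 2x2 block of son pixels (no truncation when working in floats)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
level1 = mean_pyramid_level(img)

# One son pixel per block need not be stored: it is recoverable from its
# father and the other three sons, since father = (s0+s1+s2+s3)/4.
recovered = 4 * level1[0, 0] - (img[0, 1] + img[1, 0] + img[1, 1])
```

Dropping one son per 2x2 block is what keeps the stored pixel count equal to that of the original image.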
Abstract: An image can be decomposed into a homogeneous component u and an oscillatory component v using variational methods. However, the solutions of traditional variational methods suffer from contrast loss and the staircasing effect. We propose a locally adaptive variational decomposition method based on the L1-norm. Using the L1-norm as the fidelity term, we obtain a solution that preserves the edges of the original image and is contrast invariant. At the same time, the adaptive function introduced here reduces the staircasing effect on the homogeneous component u. Numerical results are presented, showing that the new method outperforms traditional methods on several types of images for image decomposition and denoising.
Abstract: With the emergence of HDTV, aspect ratio conversion (ARC) is being applied ever more widely in related fields. In this paper, a new aspect ratio conversion algorithm based on human visual perception is proposed. First, a fuzzy clustering algorithm based on the modified partition fuzzy degree is applied to cluster motion vectors adaptively, and the image's motion center of gravity (MCOG) is then estimated from the clustering result. Moreover, to suppress the dithering artifact, a region-splitting approach based on the MCOG and the center region of the image is developed. Finally, different regions of the image are scaled with different ratios. Experimental results illustrate that the proposed method is well matched to human visual perception and demonstrate its superiority.
Keywords: aspect ratio conversion;fuzzy cluster algorithm;region of interest;visual perception
Abstract: Based on the characteristics of difference histograms, we propose a reversible data hiding algorithm using difference expansion. To decrease the embedding distortion, we present three schemes: interleaved shifting of the outer regions of the histogram, dividing the outer regions into segments using zero points, and a payload-dependent overflow location map. The first scheme controls the number of selected differences; the second decreases the shifting distance of the region movement; the last generates a compact compressed overflow location map. Experimental results verify that, compared with other typical reversible algorithms in the literature (e.g., Thodi et al.'s and Tian's algorithms), the proposed algorithm has better overall performance. In particular, it has advantages at low and middle embedding rates.
Keywords: reversible data hiding;lossless watermark;histogram shifting;JBIG compression
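The core difference-expansion operation of Tian's scheme, which the abstract uses as a comparison baseline, fits in a few lines: keep the integer average of a pixel pair, double the difference, and append the payload bit. A minimal sketch:

```python
def de_embed(x, y, b):
    """Embed bit b into pixel pair (x, y) by difference expansion
    (Tian's scheme): keep the integer average, expand the difference."""
    l = (x + y) // 2          # integer average (floor)
    h = x - y                 # difference
    h2 = 2 * h + b            # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pair from the marked pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    b = h2 & 1                # the appended bit is the LSB of h2
    h = h2 >> 1               # floor division restores the difference
    return (l + (h + 1) // 2, l - h // 2), b

marked = de_embed(10, 7, 1)   # -> (12, 5)
```

Because the average l is invariant under embedding, extraction is exactly invertible; overflow handling and the location map (the paper's focus) are omitted here.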
Abstract: According to the nearest-neighbor relationship, the labeled value of every codeword in a codebook labeled by the nearest-neighbor method differs from that of its nearest codeword. Based on this, and using the index matrix, up to m bits of data can be embedded by selectively changing at most m/2 indexes in every n = 2^m VQ-compressed indexes. Computer simulation showed that the method, used for information hiding in VQ-compressed images, has larger capacity and better secrecy.
Abstract: Focusing on the problem of copyright protection for two-dimensional engineering graphics, an information hiding algorithm based on the HVS and dimension characteristics is proposed. The ordered set of dimension entities is first acquired from the engineering graphic. The message is then encrypted with a binary chaotic sequence generated by the Logistic map. The color and line-weight attributes of selected dimension entities are slightly modified according to the HVS and the encrypted message. Simulation results show that the proposed algorithm is robust against attacks such as rotation, translation, uniform scaling, non-uniform scaling and noise.
Keywords: information hiding;two-dimensional engineering graphics;HVS;dimension character
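The Logistic-map encryption step described above can be sketched directly: iterate the map, threshold to get a binary keystream, and XOR it with the message bits. The initial value and control parameter below are illustrative, not the paper's:

```python
def logistic_bits(x0, mu, n):
    """Binary chaotic sequence from the Logistic map x <- mu*x*(1-x),
    thresholded at 0.5 (x0 and mu here are illustrative parameters)."""
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_crypt(msg_bits, x0=0.3, mu=3.99):
    """XOR the message with the chaotic keystream; applying the same
    function twice with the same parameters decrypts."""
    key = logistic_bits(x0, mu, len(msg_bits))
    return [m ^ k for m, k in zip(msg_bits, key)]

cipher = xor_crypt([1, 0, 1, 1, 0])
```

The sensitivity of the Logistic map to x0 and mu is what makes the keystream act as a secret key.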
Abstract: This paper introduces a slice-parallel real-time AVS encoding algorithm. The advantages of a slice-parallel real-time AVS encoder are high encoding speed, low delay and low processing complexity. The paper also puts forward a method for reducing slice artifacts. Experimental results demonstrate that implementing a real-time AVS SD encoder in this way is entirely feasible.
Abstract: To guarantee high compression efficiency and video quality, robust rate-distortion optimization (RDO) is employed in the H.264/AVC video coding standard, which was jointly developed by ITU-T and MPEG. Under RDO, all prediction modes must be tried exhaustively to find the best mode. As a result, the computational complexity of mode decision is extremely high, making it unsuitable for real-time services such as video conferencing. This paper proposes a fast intra-prediction mode decision algorithm based on the discrepancy within the video picture. In addition, a direction-vector-based preferred-mode selection method is introduced to reduce the number of candidate modes to be tested. Experimental results demonstrate that the proposed algorithm reduces processing time by 40%-50% compared with the RDO-optimized mode decision, with little quality degradation.
Abstract: The JVT-G012 scheme has two limitations. First, the linear MAD prediction model performs poorly under high motion or scene changes. Second, H.264 employs much more complex motion compensation strategies, and a higher percentage of bits is required to encode non-texture data, so simply predicting non-texture bits from previously coded frames is no longer efficient. In this paper, a macroblock-based histogram of difference is first introduced, and bits are allocated rationally among coding units according to image complexity. Furthermore, a novel rate control scheme based on a spatio-temporal MAD prediction model is presented, and a more flexible non-texture bit prediction scheme is derived. Experiments show that the proposed algorithm achieves a bit rate closer to the target and provides improved visual quality and higher PSNR.
Keywords: H.264/AVC;rate control;histogram of difference;spatio-temporal MAD prediction model
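The linear MAD prediction model criticized above has the form MAD_n ≈ a1·MAD_{n−1} + a2, with the coefficients refitted by least squares over a window of past frames. A minimal sketch of that baseline refit (not the paper's spatio-temporal replacement), with illustrative data:

```python
import numpy as np

def fit_mad_model(mad_history):
    """Least-squares refit of the JVT-G012-style linear MAD predictor
    MAD_n ~ a1 * MAD_{n-1} + a2 over a window of past frame MADs."""
    prev = np.asarray(mad_history[:-1], dtype=float)
    cur = np.asarray(mad_history[1:], dtype=float)
    A = np.column_stack([prev, np.ones_like(prev)])
    (a1, a2), *_ = np.linalg.lstsq(A, cur, rcond=None)
    return a1, a2

# A smoothly increasing MAD history fits the linear model exactly.
a1, a2 = fit_mad_model([2.0, 3.0, 4.0, 5.0, 6.0])
```

When motion is high or the scene changes, consecutive MADs no longer satisfy such a linear relation, which is exactly the failure mode the paper addresses.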
Abstract: Traditional watermarking techniques based on fractal coding generally use a 0-1 sequence as the watermark and cannot embed a gray image as the watermark. Based on orthogonal fractal coding, a new fast watermarking technique that can embed gray images is proposed in this article. Because orthogonal fractal decoding is a mean-invariant iteration, the watermark can be embedded into the fractal decoding parameters directly. Experimental results show that embedding a gray image has hardly any effect on the fractal-decoded image. Moreover, the scheme is more robust than traditional techniques against many attacks, such as cropping, low-pass filtering and JPEG compression.
Keywords: fractal image coding;digital watermark;fractal orthogonal transform
Abstract: In this paper, a robust image watermarking detection method based on support vector regression (SVR) is proposed. First, six combined low-order image moments are taken as the feature vector and the geometric transformation parameters are regarded as the training target; an appropriate kernel function is selected for training, yielding an SVR model. Second, the combined moments of the test image are taken as the input vector, the output is predicted using the trained SVR, and geometric correction is performed on the test image using the obtained geometric transformation parameters. Finally, the digital watermark is extracted from the corrected test image. Experimental results show that the proposed watermarking detection algorithm is robust not only against common signal processing such as filtering, sharpening, noise addition and JPEG compression, but also against geometric attacks such as rotation, translation, scaling, cropping and their combinations.
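A sketch of the kind of low-order geometric moments that could form such a feature vector (the paper's exact six combined moments are not specified here, so this only illustrates raw moments and the centroid they yield):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of I(x, y) * x^p * y^q,
    with x the column index and y the row index."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    return float((img * (x ** p) * (y ** q)).sum())

def centroid(img):
    """Image centroid (m10/m00, m01/m00) - the simplest moment-derived
    feature, which shifts predictably under translation."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

spot = np.zeros((5, 5))
spot[2, 4] = 1.0              # single bright pixel at row 2, column 4
cx, cy = centroid(spot)
```

Because such moments respond systematically to rotation, translation and scaling, a regressor trained on them can estimate the transformation parameters needed for geometric correction.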
Abstract: A novel method that selects approximate bases of the high-dimensional feature space based on a measure of linear independence is proposed; combined with a partial reduction strategy, it yields the sparse least squares support vector regression machine (SLS-SVRM). In addition, a recursive trick is used to accelerate the construction of SLS-SVRM. SLS-SVRM markedly decreases the number of support vectors without loss of predictive accuracy. Finally, three UCI (University of California, Irvine) datasets confirm the effectiveness of the proposed model.
Keywords: least squares support vector regression machine;linear independency;approximate bases;partial reduction
Abstract: As an unsupervised learning method, the locally linear embedding (LLE) algorithm aims at nonlinear dimensionality reduction. Since LLE has several disadvantages, a new method, robust linear embedding based on a kernel function, is presented. First, the kernel function is used to adjust the Euclidean distance between data points, improving the performance and the range of application of LLE. Second, an improved weight matrix W, which is insensitive to noise, is adopted. It is shown that the actual computation of the subspace reduces to a standard eigenvalue problem. The proposed method was tested and evaluated on the Yale and AT&T face databases, using nearest neighbor (NN) classifiers. The experimental results show that the improved algorithm performs well under changes in pose, lighting condition, facial expression and the number of training samples.
Keywords: manifold learning;high-dimensional data;dimensionality reduction;kernel function
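One standard way to "adjust the Euclidean distance with a kernel function", as the abstract describes, is the kernel-induced distance d(x_i, x_j)² = k(x_i,x_i) − 2k(x_i,x_j) + k(x_j,x_j). Whether the paper uses exactly this construction is an assumption; the sketch below shows the idea with a Gaussian kernel:

```python
import numpy as np

def kernel_distance(X, sigma=1.0):
    """Kernel-induced pairwise distances: the Euclidean distance in the
    feature space implied by a Gaussian kernel, computable from the
    kernel matrix alone. A candidate input for LLE's neighbour search."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-sq / (2.0 * sigma ** 2))                  # Gaussian kernel
    d2 = np.diag(K)[:, None] - 2.0 * K + np.diag(K)[None, :]
    return np.sqrt(np.maximum(d2, 0.0))                   # clamp fp noise

X = np.random.default_rng(2).random((6, 3))
D = kernel_distance(X)
```

Because the Gaussian kernel saturates for distant points, the induced distance compresses large Euclidean gaps, which can make the neighborhood graph less sensitive to outliers and noise.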
Abstract: The appearance of false lines is one of the major disturbing factors when extracting artificial targets from high-resolution images: it increases the complexity of the computation and reduces the stability of the extraction result. A robust two-step method is proposed to filter the false lines produced by edge-based line extraction algorithms. Based on an analysis of the characteristics of false lines, and considering that lines extracted from artificial targets are notably regular, the method operates at two phases of the line extraction process. First, before grouping edges by collinearity, 12 models are employed to filter out a proportion of the edges from which false lines might later be fitted. Second, after line extraction, the similarity strength of the image characteristics close to each line is defined; according to it, the extracted lines are classified as true or false, and the false lines are filtered out. Extensive experiments show that this method filters out the majority of the false lines, which correspond to image noise or irregular natural objects, while almost all true lines corresponding to artificial targets are preserved.
Abstract: Strip surface detection is one of the basic processes of strip quality control, yet existing methods cannot meet the accuracy and real-time requirements of industrial sites. To solve these problems, this paper proposes a detection method for primary strip surface defects based on the local binary pattern (LBP) algorithm. First, the LBP value of each pixel in the strip image is calculated by a fast LBP algorithm. Then, by constructing the LBP histogram, information about the principal edge points belonging to different types of defects is obtained. After thresholding, the existence and location of defects in the image are determined. Experimental results show that the proposed method not only achieves high accuracy and real-time performance in primary strip surface defect detection, but also offers reliable structural and statistical feature information for further defect classification.
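The LBP value mentioned above is computed per pixel by comparing it with its 8 neighbours and packing the comparison results into one byte. A minimal 3x3 variant (the paper's fast algorithm is not reproduced here):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: for each interior pixel, compare the 8 neighbours
    with the centre (>= counts as 1) and pack the bits into a code 0..255."""
    img = img.astype(np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

flat = lbp_image(np.full((5, 5), 7))          # flat region: all bits set
```

Defect-free strip surface gives near-uniform LBP codes, so defects stand out as anomalous bins in the LBP histogram, which is what the subsequent thresholding exploits.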
Abstract: This paper improves a motion detection method that combines frame difference and background subtraction, in three areas: gray transform and neighborhood correlation coefficients integrated with gray-value information effectively solve the misdetection of moving targets; a motion evaluation step introduced into the combination strategy of frame difference and background subtraction solves the missed detection of slowly moving targets; and a running-average updating method for the background model prevents the model from degrading. Experimental results show that the improved method significantly alleviates the misdetection of moving targets, the missed detection of slowly moving targets and background model degradation.
Abstract: To measure the temperature of a radiation-image target in non-contact soft measurement of temperature fields based on a CCD image sensor, the measured target must be recognized accurately from the radiation images. This is difficult because of the various interferences present in radiation images captured at industrial sites. A classification recognition method is proposed. Through multi-spectral segmentation, various high-temperature noises in the radiation color image are reduced or even eliminated. Then, with an improved Otsu segmentation algorithm, the interference of smog is eliminated. Finally, mathematical morphology is applied to the segmented image to remove isolated regions and narrow holes and to smooth the image's edges. Experimental results show that the method accurately recognizes high-temperature melt targets in high-temperature radiation images with various interferences and has excellent practicability.
Keywords: target recognition;multi-spectrum segmentation;Otsu method;radiation temperature measurement
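The classical Otsu method that the paper improves upon picks the gray-level threshold maximizing the between-class variance of the histogram. A minimal sketch of the baseline (not the paper's improved variant):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: return the gray level t maximizing the between-class
    variance omega0*omega1*(mu0-mu1)^2 of the two classes it induces."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))       # cumulative first moment
    mu_t = mu[-1]                            # global mean gray level
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return int(np.argmax(sigma_b))

# A cleanly bimodal image: the threshold must fall between the two modes.
bimodal = np.array([10] * 50 + [200] * 50)
t = otsu_threshold(bimodal)
```

Plain Otsu assumes a roughly bimodal histogram; smog and high-temperature noise blur that bimodality, which motivates the paper's multi-spectral pre-segmentation and improved variant.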
Abstract: Exploiting the Contourlet transform's advantages of multi-scale analysis, localization, directionality and anisotropy, a remote-sensing multi-spectral and panchromatic image fusion algorithm based on morphology and the Contourlet transform is developed. First, the multi-spectral image undergoes the IHS transform. Second, the panchromatic image and the intensity component I of the multi-spectral image are decomposed into the Contourlet domain, and fusion is performed in the different subbands: a new edge fusion rule based on the morphological gradient operator is adopted in the lowpass subbands, and regional standard variance is adopted in the highpass subbands. Furthermore, a morphology-based consistency check is proposed. Finally, the stretched grayscale fused image replaces the original intensity component, and the final fused image is obtained through the inverse IHS transform. Experimental results show that the proposed fusion method integrates the information and retains the features of the source images more effectively than traditional algorithms.
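The replace-the-intensity-component idea at the heart of IHS fusion can be sketched in its simplest linear form, with I taken as the channel mean. This is a deliberately simplified stand-in for the paper's full IHS + Contourlet pipeline:

```python
import numpy as np

def intensity_substitution_fusion(rgb, pan):
    """Simplified component-substitution fusion: compute the intensity
    I = (R+G+B)/3 of the multi-spectral image and substitute the
    panchromatic band for it, adding the difference to every channel."""
    I = rgb.mean(axis=2)
    return rgb + (pan - I)[:, :, None]

rng = np.random.default_rng(1)
rgb = rng.random((4, 4, 3))   # illustrative multi-spectral image
pan = rng.random((4, 4))      # illustrative panchromatic band
fused = intensity_substitution_fusion(rgb, pan)
```

After substitution, the fused image's intensity equals the panchromatic band exactly, which is the invariant the more elaborate Contourlet-domain fusion also preserves while treating edges and detail subbands separately.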
Abstract: To build high-quality panoramic images, this paper presents the concept of a mosaicing graph and three rules for high-quality panoramas. An image registration algorithm based on blocking and spatial clustering is used to calculate the registration position and to evaluate the registration quality of image pairs, which provides the edge weights of the mosaicing graph. A mosaicing method based on the minimum routing cost spanning tree is then proposed: by constructing the minimum routing cost spanning tree of the mosaicing graph, the globally optimal position of every image is calculated and the panoramic image is created. In the case study, the proposed method produces high-quality results.
Keywords: image mosaicing;panoramic image;mosaicing graph;spatial cluster;minimum routing cost spanning tree
Abstract: A fast image mosaic algorithm based on feature point matching is presented in this paper, with two notable contributions. The first is a new filtering method for matching points: pairs of correlated feature points are chosen with a clustering algorithm, addressing a disadvantage of the RANSAC algorithm and enhancing the efficiency of the algorithm. The second is a new image blending method that combines an optimal-path best-matched line with a pixel brightness weighting function in the HSI color space; as a result, the ghosting phenomenon is removed and a brightness-blended image is achieved. Experimental results show that this is a robust, well-matched image mosaic algorithm: pre-selecting data with the clustering algorithm greatly enhances efficiency, ghosting is prevented, and the images blend naturally in the stitching areas.
Keywords: image mosaic;clustering;optimum path;brightness weighting function
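The brightness-weighted blending idea can be illustrated with the simplest version: linear feathering across the overlap strip, where the left image's weight falls from 1 to 0. This is a generic stand-in, not the paper's optimal-path, HSI-space scheme:

```python
import numpy as np

def feather_blend(left, right):
    """Linear feathering of two overlapping grayscale strips: the weight
    of the left image decreases from 1 to 0 across the overlap width,
    so the seam transitions smoothly instead of showing a hard edge."""
    w = left.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :]   # per-column weight
    return alpha * left + (1.0 - alpha) * right

blended = feather_blend(np.ones((2, 5)), np.zeros((2, 5)))
```

Weighting along a best-matched seam line, as the paper does, serves the same purpose while also routing the seam away from moving objects, which is what suppresses ghosting.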
Abstract: Aiming at fast automatic matching of artificial target points in digital close-range photogrammetry, a new matching method based on two-image space intersection is presented, taking into account the epipolar constraint in object space. First, a set of initially matched image points is found by calculating the shortest distance between two image rays. Second, the coordinates of the corresponding object points are computed through two-image space intersection, and these points are grouped according to the distances between them. Finally, image points with a high rate of coordinate error, or lying in the same image, are eliminated, and homologous image points are found according to the numbers of image points corresponding to the object points. Experiments prove the advantages of the matching method: high speed, a high matching rate and a low mismatching rate. It can also provide precise initial values for the subsequent bundle adjustment.
关键词:digital industry photogrammetry;artificial target point;matching;space intersection
Abstract: An image annotation method based on mutual information and constrained clustering is proposed. Semantic constraints are used to improve the information bottleneck method, which is employed to cluster the segmented regions; relationships between image semantic concepts and region clusters are then established. For an un-annotated image, a new method is proposed to calculate the conditional probability of each semantic concept, considering both the prior knowledge from the training images and the low-level features of the segmented regions. Finally, the image regions are automatically annotated with the keywords of maximal conditional probability. The proposed method was implemented and tested on an image database of about 500 images. The experimental results show that the proposed method outperforms other approaches.
Abstract: Quadratic hyperbolic polynomial basis functions with multiple shape parameters are presented in this paper; they possess most of the properties of quadratic non-uniform B-spline basis functions. Based on these basis functions, quadratic hyperbolic polynomial curves with multiple shape parameters are constructed. The curves are C1-continuous for a non-uniform knot vector. With different values of the shape parameters, the shapes of the curves can be adjusted globally or locally. Without using multiple knots or solving equations, the curves can directly interpolate given control points or control polygon edges. Hyperbolic polynomial curves can also represent hyperbolas exactly.
Keywords: B-spline curve;hyperbolic polynomial curve;multiple shape parameters;global or local adjustment;interpolation
Abstract: Models of forestry scenes normally contain a huge number of facets because of the complexity of forest geometry. How to simplify such models while preserving their visual effects to an acceptable degree has long been a hot problem in computer graphics. This paper presents an approach suited to implementing a walkthrough system for natural scenes containing massive woods, using the 3D modeling software 3ds Max for scene construction and OSG for real-time rendering and walkthrough. Taking good advantage of existing software, the system realizes the walkthrough of natural scenes containing massive woods, together with collision detection during the walkthrough. Experimental results show that the system achieves the high frame rate required for real-time walkthrough while keeping good visual effects.