Latest Issue

    Vol. 8, Issue 4, 2003
    • Overview of the True Three-Dimension Volumetric Display Technologies

      Vol. 8, Issue 4, Pages: 361(2003) DOI: 10.11834/jig.200304137
      Abstract: At present, 3D effects are obtained by rotating 2D images on a planar screen, but such effects are limited: they provide only psychological depth cues, not physical depth. To achieve true 3D effects, new volumetric 3D display technologies are emerging. Volumetric display is based on the "voxel" (volumetric pixel). This paper explains the main types of 3D volumetric display technology. Among them, a technique called "true 3D volumetric display" is superior: its design produces real depth cues, so in principle an observer can view the 3D image from any angle, without special glasses, and receive the corresponding depth cues. It includes two realization methods: swept-volume display and static-volume display. The paper emphasizes the basic theory, history, and current state of volumetric display technology, then introduces the main performance measures and compares the performance of several products; in particular, the principle of rotating-screen imaging is described briefly. Finally, applications of true 3D volumetric display are illustrated. In China, related research and applications are still absent, whereas some applications are already in use abroad, such as air traffic control in civil aviation and product exhibition in advertising.
      Keywords: Computer image processing; Volumetric display; True 3D; Rotating screen; Overview
      Updated: 2024-05-07
    • Digital Watermarking Techniques for Image Authentication

      Vol. 8, Issue 4, Pages: 367(2003) DOI: 10.11834/jig.200304138
      Abstract: The growth of networked multimedia systems has made digital data acquisition, exchange, and transmission simple, but the ease of copying and editing also facilitates unauthorized use, misappropriation, and misrepresentation, making it necessary to authenticate multimedia data. Authenticating digital images with watermarking is currently a research hotspot. According to the objective of authentication, an image authentication system can be classified as complete verification or content verification. Watermarking for complete verification (fragile watermarking) treats image data as an untouchable message: the data to be authenticated must be exactly identical to the original. Content verification is characteristic of multimedia data authentication; watermarking for content verification (semi-fragile watermarking) accepts "information-preserving" image manipulations, such as compression and format conversion. This paper presents a general framework of digital watermarking for image authentication, discusses the fundamental requirements and common attacks, introduces existing algorithms and analyzes their advantages and disadvantages, reviews the current state of the techniques, and proposes several research topics for the next stage.
      Keywords: Computer image processing; Digital image authentication; Fragile watermarking; Semi-fragile watermarking
    • An Image Encryption Algorithm Based on Chaotic Sequences

      Vol. 8, Issue 4, Pages: 374(2003) DOI: 10.11834/jig.200304139
      Abstract: Chaotic sequences have several useful properties: they are easy to generate, sensitively dependent on their initial parameters, and resemble white noise, yet they can be reproduced exactly from the initial condition, and their discrete mapping sequences share these properties. Because of this, chaotic systems can be used for image encryption and image preprocessing. In this paper, an image encryption algorithm based on a chaotic system is proposed. First, with the initial parameter serving as the key, a chaotic sequence is generated and mapped to a discrete 2K-valued sequence; according to this discrete sequence, the gray value of each pixel is modified pseudo-randomly. If an attacker does not know the key, the encrypted image looks like white noise and cannot be reconstructed. Many image encryption methods exist and different algorithms give different results, but so far there is no suitable way to evaluate the quality of image encryption. Based on local image variance and the human visual system, an effective definition of encryption degree is proposed. Preliminary results are satisfactory: the security of the image depends entirely on the key, and even a slightly wrong key yields a completely different result. Hence, the algorithm can be used to encrypt image data.
      Keywords: Computer image processing; Image encryption; Chaotic sequences; Scrambling transformation; Disorder degree
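As a rough illustration of the keystream idea in this abstract: the sketch below drives a logistic map from a numeric key and combines the resulting bytes with the pixels. The XOR step, the key value, and the parameter r = 3.99 are this sketch's assumptions, not the paper's exact 2K-valued mapping.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x), returning n values in (0, 1)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image, key=0.3141592, r=3.99):
    """XOR each pixel with a keystream byte derived from the chaotic sequence."""
    flat = image.ravel()
    stream = (logistic_sequence(key, r, flat.size) * 256).astype(np.uint8)
    return (flat ^ stream).reshape(image.shape)

# XOR is self-inverse, so decryption is the same call with the same key
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc = encrypt(img)
dec = encrypt(enc)
```

A slightly different key produces a completely different keystream, which matches the key sensitivity the abstract claims.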
    • Vol. 8, Issue 4, Pages: 379(2003) DOI: 10.11834/jig.200304140
      Abstract: Image quality metrics can be used to optimize image compression algorithms and improve image quality. Perceptual quality metrics, founded on the human visual system (HVS), form a close link between subjective and objective assessment and reflect the human visual response to image distortion. In recent years, the visual perception characteristics closely related to image quality have been researched thoroughly, driven by new progress in HVS modeling, and many seemingly effective perceptual quality metrics for still-image compression have been proposed. This paper presents a nearly comprehensive survey of how visual perception characteristics are applied in perceptual quality metrics, identifies the factors important for predicting image quality accurately and robustly, and summarizes the research achievements in this field. However, the development of computational HVS models is still in its infancy, and many issues remain to be investigated. First, more psychophysical experiments with natural images are needed to model the more complex phenomena that occur in them. Second, more psychophysical experiments should focus on supra-threshold measurements, because quality metrics and compression are often applied above threshold. Finally, HVS models should be expressed analytically with the aid of the latest mathematical tools so that general image quality metrics can be developed.
      Keywords: Computer image processing; Image quality assessment; Image compression; Visual perception; Human visual system (HVS); Image quality metric
    • A Study on Iris Image Quality Evaluation

      Vol. 8, Issue 4, Pages: 387(2003) DOI: 10.11834/jig.200304141
      Abstract: After an iris image is acquired, its quality should first be evaluated to judge whether it can be used for iris recognition. In this paper, iris image quality evaluation is divided into three parts: iris detection, calculation of the iris area covered by the eyelids or lying outside the image, and definition (sharpness) evaluation. A potential iris is detected by template matching, and characteristic iris features are used to verify its presence. The boundary between the iris and the eyelids is located to obtain the iris area covered by the eyelids, and the iris area outside the image is then computed from the features of the iris and pupil boundaries. The average height of the pupil boundary is calculated to evaluate the definition of the iris image. Experimental results indicate that high-quality iris images can be picked out effectively.
        
    • A New Fast Fractal Coding Approach Based on Wavelet Decomposition

      Vol. 8, Issue 4, Pages: 392(2003) DOI: 10.11834/jig.200304142
      Abstract: To reduce image coding time, a new fast fractal method based on wavelet decomposition is presented. The algorithm works in two respects. First, because the energy of a wavelet-transformed image is distributed nonuniformly across subbands, the low-frequency region, where most of the energy concentrates, is treated as an image in its own right and encoded with the traditional fractal method. Second, exploiting the similarity of subband images across channels after wavelet decomposition, a local optimal search replaces the global search mechanism: the fractal parameters of the whole image are obtained by suitable scale transforms of those of the low-frequency region. Analysis and results show that, at a fixed compression ratio, the encoding speed is improved greatly while the quality of the reconstructed image is well retained. Notably, the asymmetry between the encoding and decoding processes is reduced drastically.
      Keywords: Computer image processing; Image compression; Fractal coding; Wavelet decomposition; Low-frequency region
    • High Fidelity Compression Algorithm Based on Limiting Image Grey Error

      Vol. 8, Issue 4, Pages: 398(2003) DOI: 10.11834/jig.200304144
      Abstract: To resolve the contradiction between the need for high image quality and low data rates in transmission and storage for satellite remote sensing, a new approach is proposed that improves JPEG-LS with adaptive block coding and multi-mode adaptive quantization. The result is a new visually lossless coding algorithm, LIGE (Limiting Image Grey Error), whose performance is much better than that of JPEG-LS. Compared with the DWT-based SPIHT, LIGE has the following outstanding characteristics. Given a threshold Q, the image distortion and the PSNR of the reconstructed image can be predicted in advance, which guarantees the required reconstructed image quality and avoids excessive loss of original image information. The algorithm involves no floating-point calculation or transform, so the coder compresses about three times faster than SPIHT. At a compression ratio of 4:1, the PSNR of images reconstructed by LIGE is higher than that of SPIHT; for remote sensing images in particular, the reconstructed quality is even better. The results will be beneficial to the development of Chinese satellite communication systems.
      Keywords: Computer image processing; Compression; Adaptive; Block; Prediction; JPEG-LS
    • Vol. 8, Issue 4, Pages: 403(2003) DOI: 10.11834/jig.200304145
      Abstract: Strictly speaking, image magnification (image zoom or image resampling) is an ill-posed problem. Based on various image models, several methods have been developed, such as bilinear interpolation, bicubic spline interpolation, fractal-based interpolation, and wavelet-based interpolation. The central problem of image magnification is how to obtain good visual resolution; in wavelet-based methods, it is how to construct the detail coefficients of the magnified image. The existing method interpolates the coefficients directly in the transform domain, but experiments show that the magnified images have poor visual quality, and experiments on one-dimensional signals in this paper illustrate that the method is not reasonable. Two further questions are also discussed: how to evaluate an image magnification method, and how to choose appropriate wavelets in wavelet-based methods. Comparison and analysis show that the best way to evaluate an image magnification method is subjective assessment; the objective measures mentioned in this paper are suitable only for the traditional interpolation methods. Since no satisfactory method yet exists for constructing the detail coefficients of magnified images, how to choose appropriate wavelets remains a question deserving deeper research.
      Keywords: Computer image processing; Image magnification; Wavelet transform; Zoom; Interpolation; Resampling
    • Image Classification for Image Compression and Compression Result Forecast

      Vol. 8, Issue 4, Pages: 409(2003) DOI: 10.11834/jig.200304146
      Abstract: Image data contain a considerable amount of redundant information, which makes image compression possible. The redundancy, spatial redundancy in particular, varies from image to image, so it is worthwhile to study the spatial redundancy of images to be compressed and to reduce the arbitrariness in selecting compression methods. In this paper, a novel idea of image classification for image compression is proposed and its algorithm presented. The algorithm considers the distribution of wavelet high-frequency coefficients and defines an edge active measure (EAM) to describe the nature of an image. Using EAM, images can be classified and the compression result forecast. Experiments show that the classification and forecast are meaningful and correspond to human visual understanding. The idea is of great value for selecting and optimizing compression algorithms for different purposes.
      Keywords: Computer image processing; Image compression; Image EAM; Image classification; Wavelet transform
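The abstract does not give the exact EAM formula, but one plausible reading, assumed here purely for illustration, is a statistic of the high-frequency wavelet coefficients. The sketch below uses the mean absolute value over the three detail subbands of a one-level Haar transform; an image with sharp edges scores higher than a flat one.

```python
import numpy as np

def haar_highpass_energy(img):
    """One-level Haar transform; return the mean absolute coefficient over the
    three detail subbands (an assumed stand-in for the paper's EAM)."""
    img = img.astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a + b - c - d) / 4.0   # row-pair difference
    hl = (a - b + c - d) / 4.0   # column-pair difference
    hh = (a - b - c + d) / 4.0   # diagonal difference
    return np.mean(np.abs(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])))

flat = np.full((8, 8), 128)                    # uniform image: no edges
edges = np.zeros((8, 8)); edges[:, 3:] = 255   # sharp vertical edge
```

Under this reading, `haar_highpass_energy(flat)` is exactly zero while the edge image scores strictly higher, so thresholding the measure separates "smooth" from "edge-active" images.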
    • Adaptive Segmentation of Video Shot Based on Wavelet

      Vol. 8, Issue 4, Pages: 415(2003) DOI: 10.11834/jig.200304147
      Abstract: With the development of multimedia technology, the amount of video data is growing explosively. Because video data are unstructured, they must be segmented in order to be indexed and browsed, and shots are a good choice of basic unit. Owing to the varied content of video and the different types of shot transition, common threshold-based methods do not perform well. A novel wavelet-based shot boundary detection approach is proposed to overcome these difficulties: it treats shot-change detection as detecting singularities of the frame-to-frame difference function, which makes threshold selection easier and yields good performance. The first step is to select a video segment and extract the frame-to-frame difference curve; to speed up segmentation, an intensity-histogram-based content description is adopted, which is both sufficient and efficient. The next step is to analyze the frame-to-frame difference function by wavelet transform, filter the noise, and find the modulus maxima. Finally, the shot boundaries are located accurately by tracking. The method segments video data adaptively and detects all kinds of shot boundaries.
      Keywords: Computer image processing; Shot; Difference of content; Multiresolution analysis; Lipschitz constant
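The first step described above, an intensity-histogram frame-to-frame difference curve, can be sketched as follows. Only this curve is shown; the paper's wavelet analysis and modulus-maxima tracking are not reproduced, and the simple argmax on a synthetic clip below stands in for them.

```python
import numpy as np

def hist_difference(frames, bins=16):
    """Frame-to-frame intensity-histogram difference curve (L1 distance)."""
    hists = [np.histogram(f, bins=bins, range=(0, 256))[0] for f in frames]
    return np.array([np.abs(h2 - h1).sum() for h1, h2 in zip(hists, hists[1:])])

# synthetic clip: 5 dark frames, then an abrupt cut to 5 bright frames
dark = [np.full((8, 8), 10) for _ in range(5)]
bright = [np.full((8, 8), 200) for _ in range(5)]
curve = hist_difference(dark + bright)
```

On this clip the curve is zero everywhere except at index 4, the cut between frames 4 and 5; a singularity detector on the curve would flag the same position.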
    • Vol. 8, Issue 4, Pages: 422(2003) DOI: 10.11834/jig.200304148
      Abstract: Fast, automatic segmentation of video objects is a key technology in object-based video coding such as MPEG-4. In this paper, a new algorithm for extracting moving objects based on spatio-temporal information is proposed. First, a binary motion image is obtained with a higher-order statistics detection method and motion information from multiple frames; the higher-order statistics detector removes noise and disturbance effectively. Then, an improved watershed algorithm is proposed to segment the motion region and its surrounding areas; compared with the traditional watershed algorithm, it reduces both over-segmentation and computational load. Finally, the moving object is extracted by projecting the spatial and temporal segmentation results onto each other. To test the performance of the proposed algorithm, simulations were performed on the test sequences Sussie and Missa, and the good experimental results show that the algorithm is efficient.
      Keywords: Computer image processing; Video object; Limited region segmentation; Improved watershed algorithm
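The binary motion image in the first step can be illustrated with plain absolute frame differencing; this is a deliberately simplified stand-in for the paper's higher-order-statistics detector, and the threshold value is an assumption of this sketch.

```python
import numpy as np

def motion_mask(prev, curr, thresh=20):
    """Binary motion image from absolute frame differencing
    (a simplified stand-in for a higher-order-statistics detector)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > thresh).astype(np.uint8)

prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 180   # a small bright object appears
mask = motion_mask(prev, curr)
```

The resulting mask marks exactly the changed pixels; in the paper this mask would then seed the improved watershed segmentation.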
    • Human Motion Tracking with Weak Prediction

      Vol. 8, Issue 4, Pages: 427(2003) DOI: 10.11834/jig.200304150
      Abstract: Capturing rapid human motion with self-occlusion has long been a challenge. Current algorithms cannot track such motion: rapidly moving features leave the small interest-region search window, and the positions of occluded features are difficult to estimate. In this paper, we present a robust human motion tracking algorithm with weak prediction. Instead of predicting the position of each human feature, the region of the whole body is estimated, and candidate features are extracted by an overall search within the estimated region. A multi-resolution search strategy improves the efficiency of the overall search: the initial candidate feature set is extracted from the low-resolution image and successively refined at higher resolution levels. To establish the correspondence between candidate and actual features, an adaptive Bayes classifier is constructed from time-varying models of feature attributes, namely color and motion, and a hierarchical human feature model is adopted to verify and complete the feature correspondence. Experiments demonstrate the effectiveness of the algorithm.
      Keywords: Computer image processing; Motion capture; Human motion tracking; Multi-resolution tracking; Feature correspondence
    • Vol. 8, Issue 4, Pages: 434(2003) DOI: 10.11834/jig.200304151
      Abstract: Complex wavelets provide both shift invariance and good directional selectivity, which the traditional wavelet transform lacks, but they do not satisfy the condition of perfect reconstruction. The dual-tree complex wavelet transform (DTCWT), which employs a dual tree of wavelet filters to obtain the real and imaginary parts of the complex wavelet coefficients, solves this problem. In this paper, the principle of the DTCWT is discussed and the directional characteristics of its twelve high-frequency subbands are studied. Based on these good directional characteristics, we propose a directional filtering method for enhancing curve-like texture images: filtering preserves the information along the local dominant direction in each subband of the wavelet domain while removing noise distributed in other directions. The method is shown to be less complex, since it avoids frequency-domain and statistical estimation of the characteristics of signal and noise, and to offer better directional selectivity than the real wavelet transform. Experimental results on texture image enhancement demonstrate that the method is efficient and well suited to complicated texture images.
      Keywords: Computer image processing; Image enhancement; Dual-tree complex wavelet transform; Directional filtering
    • Vol. 8, Issue 4, Pages: 441(2003) DOI: 10.11834/jig.200304152
      Abstract: A novel blind source separation (BSS) algorithm combining a genetic algorithm with independent component analysis (ICA) is proposed after an analysis of the ICA method. The proposed algorithm addresses the local optima in which ordinary numerical solutions are easily trapped. In the genetic algorithm, kurtosis is adopted as the fitness function, the elitist model is introduced, and the offspring population is dynamically supplemented by a migration operation. Simulation 1 separates a mixture of three images and a noise signal; simulation 2 separates a mixture of two image signals (sub-Gaussian) and two voice signals (super-Gaussian). The image separation simulations show that, given an adequate population size and number of generations, the proposed algorithm achieves blind signal separation and reaches the global optimum. Compared with the extended-infomax BSS method, the proposed method achieves better separation of mixtures of sub-Gaussian and super-Gaussian signals.
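The kurtosis fitness function mentioned in this abstract can be sketched directly; the sample sizes and distributions below are illustrative. Normalized kurtosis is zero for a Gaussian, negative for sub-Gaussian sources (such as the image signals above), and positive for super-Gaussian sources (such as the voice signals), which is what lets it score candidate unmixing directions.

```python
import numpy as np

def kurtosis(x):
    """Normalized kurtosis: ~0 for Gaussian data, negative for sub-Gaussian
    (e.g. uniform, theory -1.2), positive for super-Gaussian (e.g. Laplacian, +3)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(0)
uniform = rng.uniform(-1, 1, 100_000)   # sub-Gaussian sample
laplace = rng.laplace(0, 1, 100_000)    # super-Gaussian sample
```

In a GA wrapper, each chromosome would encode an unmixing vector and |kurtosis| of the projected signal would serve as its fitness.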
        
    • Vol. 8, Issue 4, Pages: 447(2003) DOI: 10.11834/jig.200304153
      Abstract: Mathematical morphology has become an important research field in computer image processing and has been applied effectively in remote sensing image analysis. This paper introduces mathematical morphology into DEM construction and provides an algorithm in which homologous points are recognized automatically from stereo images by image matching, the 3D coordinates of ground points are extracted, and the DEM is then built by morphological transformation, injecting new vigor into automatic DEM construction. The paper discusses decomposing and reconstructing an image with orthogonal wavelets, and feature extraction and matching that combine image and feature information. An algorithm is presented for automatically building the Thiessen polygons, Delaunay triangulated irregular network (TIN), and regular grid (GRID) of the DEM from the discrete ground points obtained by stereo matching. Experiments show that the algorithm has a simple data structure, high speed, efficiency, and high accuracy.
      Keywords: Computer image processing; Image matching; Mathematical morphological transformation; Multi-resolution feature extraction; DEM; Thiessen polygon; Delaunay triangular net
    • Speckle Reduction for SAR Images Using Edge Directions in Wavelet Domain

      Vol. 8, Issue 4, Pages: 453(2003) DOI: 10.11834/jig.200304154
      Abstract: A filter for speckle reduction in SAR images is proposed. On each level of wavelet decomposition, three images are used: the original image and two others obtained by rotating the original by 45° and -45°, giving 12 subbands in total. Four of these subbands, the HL and LH subbands of the original image and the HL subbands of the two rotated images, are used for edge detection, while the LL, HL, LH, and HH subbands of the original image are used for synthesis. Using each point's four wavelet coefficients in the four detection subbands, the edge direction property of the point in the original image is captured, and edges are then detected by setting a proper threshold. Speckle can thus be reduced while edges are well preserved: the wavelet coefficients in the synthesis subbands corresponding to non-edge points are set to zero, while those corresponding to edge points are retained. To detect certain oscillating edges, the filter is further improved by combining it with the traditional thresholding method. Simulations on synthetic images indicate that the new filter outperforms the traditional wavelet-domain hard- and soft-thresholding methods.
      Keywords: Computer image processing; Synthetic aperture radar (SAR); Speckle; Wavelet; Edge direction
    • A Fast Algorithm of Decomposing Polyhedrons by Using Loop-chain

      Vol. 8, Issue 4, Pages: 459(2003) DOI: 10.11834/jig.200304155
      Abstract: Based on loop chains, a fast algorithm is presented for decomposing an arbitrary polyhedron into convex polyhedra without adding new vertices, with a final number of convex polyhedra close to the minimum. A loop is a sequence of vertices that starts from one vertex, proceeds from each vertex to an adjacent one, and returns to the starting vertex; the vertices of a loop define a partition plane. The loop chain with the least perimeter is selected from all loops formed by the edges and diagonal edges of the polyhedron, and the polyhedron is decomposed by the series of planes defined by the loops' edges. Experiments show that the method decomposes an arbitrary polyhedron into nearly the minimum number of convex polyhedra without adding new vertices, handles most kinds of polyhedra (e.g., a polyhedron with an inner hole), and has low computational cost. Convex decomposition of 3D objects has long been a research direction in computational geometry and is widely used in pattern recognition, animation, and CAD.
      Keywords: Computer graphics; Fast algorithm; Polyhedron; Convex decomposition; Loop chain
    • Matching 2D Polygonal Arcs Based on Junction as Feature Sets

      Vol. 8, Issue 4, Pages: 464(2003) DOI: 10.11834/jig.200304156
      Abstract: Feature selection is a key problem in matching 2D polygonal arcs; the optimal feature set should capture both geometry and topology. In this paper, we focus on junctions of polygonal arcs, i.e., two line segments that meet at a single point, and use them as the feature set; its benefits include the geometric attributes and the topological structure of the polygonal arcs. The number of features used in matching must be reduced to a minimum without losing geometric information useful for 2D reconstruction. 2D polygonal arcs are represented by this feature set, and the representation is invariant to translation and rotation. The 2D polygonal-arc matching task is thereby reduced to a 1D string matching problem. A simple matching algorithm is proposed and applied to the matching of non-occluded and occluded 2D polygonal arcs. Experiments with different classes of polygonal arcs show that the matching algorithm is efficient, robust to digitization errors and noise, and performs well on simple closed curves represented by polygonal arcs.
        
    • Vol. 8, Issue 4, Pages: 468(2003) DOI: 10.11834/jig.200304157
      Abstract: Via topological mapping, testing whether a set of points lies inside a polygon can be converted into comparing the positions of projected points on a projection line. First, the center point for the topological mapping is determined; then the vertices of the polygon are mapped onto the projection line. For each point of the set, the positions of its mapped point and of the vertices' mapped points determine which wedge contains it, i.e., the region between the two rays from the center point through the endpoints of one polygon edge. Two cases then follow, according to whether the point lies inside that edge's bounding box: if the point is outside the box, only a comparison of its position with the box is needed; if it is inside or on the boundary of the box, an additional cross-product calculation is required. By precomputation over the polygon's vertices, the algorithm greatly reduces the calculation for each point of the set. Experiments show that the algorithm is fast, robust, and easy to implement.
      Keywords: Computer graphics; Point; Set of points; Polygon; Topological mapping; Mapping point; Projection line
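For comparison, the standard baseline this paper speeds up can be sketched as an even-odd ray-casting test; this is the textbook method, not the paper's topological-mapping algorithm, and the polygon below is an arbitrary example.

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray-casting test: cast a horizontal ray from (px, py) and
    count crossings with the polygon's edges; odd count means inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):           # edge straddles the ray's height
            # x-coordinate where the edge crosses the horizontal line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

This runs a full edge loop per query point; the paper's contribution is precomputing per-vertex data once so that each of many query points needs far less work.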
    • Quick Volume Rendering for Data of Dam Seismic Response

      Vol. 8, Issue 4, Pages: 472(2003) DOI: 10.11834/jig.200304158
      Abstract: The regions where a field value concentrates cannot be judged exactly from the static pictures of classical volume rendering. To remedy this shortcoming, this paper presents a quick volume rendering method for finite-element data fields of dam seismic response. The method chooses the slicing direction according to the element surface whose normal makes the smallest angle with the view direction, slices the elements into many quadrilaterals, blends the colors of these quadrilaterals with OpenGL blending, and then superimposes all elements one by one. The method accelerates volume rendering, so dynamic volume-rendered pictures can be observed and the concentration regions of the field value analyzed clearly.
        
    • Vol. 8, Issue 4, Pages: 476(2003) DOI: 10.11834/jig.200304159
      Abstract: Discovering spatial association rules in spatial databases is a very important data mining task. In this paper, a two-stage strategy for discovering spatial association rules in geographical databases is proposed. The spatial computational overhead is greatly reduced by top-down refinement of spatial predicate granularities and multiple recursions of the single-level Boolean association rule discovery step, which is the key step of the algorithm. The single-level Boolean association rule mining algorithm, FPT-Generate, is detailed: it uses a frequent-item prefix tree (FIPT) to compress and project frequent itemsets, and discovers association rules by growing a frequent pattern tree (FPT) with depth-first search, generating rules without candidate generation and without redundant database scans. Implementation optimizations such as pseudo-projection and pruning, dynamic threading and hashing, and disk-based partitioning are also discussed. Experiments show that spatial association discovery systems powered by FPT-Generate are much more time-efficient and space-scalable than those powered by the classical algorithm, Apriori. Finally, a spatial association rule discovery system, SmartMiner, has been developed on top of MapInfo Professional.
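To make the frequent-itemset task concrete, the toy sketch below enumerates all itemsets meeting a minimum support by brute force; the transaction data are invented for illustration. This exhaustive candidate enumeration is exactly the cost that FP-tree-style algorithms such as the paper's FPT-Generate avoid.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Count every itemset in every transaction and keep those whose
    support count reaches min_support (exponential brute force)."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

db = [("beach", "hotel"), ("beach", "hotel", "park"), ("park",)]
freq = frequent_itemsets(db, min_support=2)
```

An FP-growth-style miner would instead build a compressed prefix tree over the same transactions and grow frequent patterns by depth-first search, never materializing infrequent candidates like ("beach", "park").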
        