Latest Issue

    Vol. 9, No. 11, 2004
    • An Overview of Performance Evaluation in Content-based Image Retrieval

      Vol. 9, Issue 11, Pages: 1271(2004) DOI: 10.11834/jig.2004011244
      Abstract: Driven by the professional and diverse demands of image retrieval, content-based image retrieval (CBIR) has matured considerably, and more and more commercial and research systems are being developed. Since any technique is advanced by performance evaluation in its research area, studying standards of performance evaluation in CBIR is imperative for developing effective image retrieval applications. Problems such as a common image database for performance comparison and a means of obtaining relevance judgments for queries are explained. This paper reviews the methods of performance evaluation in content-based image retrieval proposed in the literature and attempts to identify future directions. It also recommends that the content-based retrieval research community establish a standard test-bed for evaluating image retrieval effectiveness. Further work is needed to better involve users in the evaluation process, because the ultimate aim is to measure the usefulness of a system to its users; interactive performance evaluations covering several levels of feedback and user interaction remain to be developed.
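Standard retrieval-effectiveness measures such as precision and recall underpin most of the evaluation methods this abstract surveys. A minimal sketch (function name and data are illustrative, not from the paper):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for a single query.

    retrieved: ranked list of image ids returned by the system
    relevant:  ground-truth set of relevant image ids
    """
    relevant = set(relevant)
    hits = sum(1 for img in retrieved if img in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 5 retrieved images are relevant; 4 images are relevant in total.
p, r = precision_recall(["a", "b", "c", "d", "e"], {"a", "c", "e", "z"})
```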
        
      Updated: 2024-05-07
    • Color Grading Method Based on Perceptually Uniform Color Space

      Vol. 9, Issue 11, Pages: 1277(2004) DOI: 10.11834/jig.2004011245
      Abstract: Automatic visual inspection is one of the most important areas of machine vision, and color grading is a typical task within it, broadly used for ceramic tiles, lumber, and similar products. To grade quickly, accurately, and automatically, a color grading method based on a perceptually uniform color space is put forward according to the characteristics of the human visual system. First, color data in an RGB-like color space are transformed into the perceptually uniform CIE 1976 L*a*b* color space. In that space, a 2-D RWM (radius weighted mean) cut algorithm is applied to extract the dominant colors (DC); it is insensitive to lighting change and has lower computational complexity than the 3-D RWM cut algorithm. Then, with the DC set as the color feature, a novel color distance metric, the mapping color difference, is proposed that agrees with the characteristics of the human visual system, and its relation to the average color difference is analyzed. Finally, using the mapping color difference as the distance metric, a minimum distance classifier is adopted for color grading. Experimental results show that the proposed method is effective and encouraging.
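The first step described here, transforming RGB data into the perceptually uniform CIE 1976 L*a*b* space, can be sketched as follows (this assumes sRGB input and a D65 white point; the paper does not specify the RGB variant):

```python
def rgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB pixel to CIE 1976 L*a*b* (D65 white point)."""
    def lin(c):  # undo the sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> CIE XYZ
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> L*a*b*, normalized by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Color differences (and the paper's mapping color difference built on them) are then measured between these L*a*b* triples rather than raw RGB values.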
        
    • The Research of Moving Objects Detection Using HOS

      Vol. 9, Issue 11, Pages: 1284(2004) DOI: 10.11834/jig.2004011247
      Abstract: In the first part of the article, a third-order correlation transformation method is applied to detect moving targets in video frame sequences. The obvious advantage of this approach is that a target retains the same eigenvalues when it is displaced, rotated, or resized, so it is especially suitable for situations where the moving target undergoes displacement, rotation, or scaling across the video sequence. The second part of the article presents an effective fast algorithm for the third-order correlation transformation; with it, the computation time is reduced to less than 6% of that of the ordinary algorithm. In the third part, experiments test the algorithms and theory developed in the earlier parts: different poses of the same target in different images are detected successfully, and moving vehicles are detected in expressway video sequences. A comparison with the ordinary frame-difference method shows that the proposed method is more accurate and less prone to confusion.
        
    • Skew Document Image Detection Method Based on Windows Transform

      Vol. 9, Issue 11, Pages: 1290(2004) DOI: 10.11834/jig.2004011249
      Abstract: During OCR (optical character recognition) scanning, document images are almost always placed somewhat slantwise. When the skew is large enough, it degrades document analysis and lowers recognition accuracy, since layout analysis and character recognition algorithms are very sensitive to page skew; skew detection is therefore a very important step in the preprocessing stage of document analysis. In this paper, a skew detection method based on window analysis is presented. First, it chooses suitable windows lying inside the layout of the printed page rather than in the margins. Then, according to the kind of content (tables, text lines, images, etc.), it preprocesses the window images with different methods. To reduce the heavy computation, the third step blurs the text lines and images within each window. The fourth step detects the edges of the blurred regions. Finally, a straight line is fitted to the edges and the skew angle is obtained. Experimental results show that this method detects the skew angles of many kinds of document images efficiently and accurately, and that it has good adaptability.
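The final step, fitting a straight line to the detected edge points and reading off the skew angle, can be sketched with an ordinary least-squares fit (a generic stand-in; the paper's exact fitting procedure is not reproduced here):

```python
import math

def skew_angle_degrees(points):
    """Fit y = a*x + b to edge points by least squares and
    return the corresponding skew angle in degrees."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return math.degrees(math.atan(num / den))

# Edge points along a line tilted by roughly 5 degrees.
pts = [(x, 0.0875 * x) for x in range(100)]
angle = skew_angle_degrees(pts)
```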
        
    • Vol. 9, Issue 11, Pages: 1294(2004) DOI: 10.11834/jig.2004011250
      Abstract: Collision detection is a prevalent problem in computer graphics, and a fast, accurate, and practical collision detection algorithm matters for any application. In this paper, a new intersection algorithm for simple planar polygons, based on a 2D axis-aligned bounding rectangle data structure, is presented for polygons undergoing simple, non-deforming motion. A new partition strategy for geometric figures according to axial monotony, together with a pre-check between 2D axis-aligned bounding rectangles, efficiently reduces the number of edge pairs that must be examined, so the algorithm can terminate promptly. After partitioning along a coordinate axis, interference checking between monotone chains proceeds. A novel search method based on the sweep-line principle drastically reduces the number of collision tests for both segment pairs and bounding volume pairs, further speeding up execution. The exact intersections, as well as the first occurrence of intersection between two objects in a dynamic environment, are obtained with fewer arithmetic operations. The experimental results indicate that the complexity is far less than O(NP×NQ) for generic polygons, and even asymptotically O(NP+NQ) for two convex polygons, where NP and NQ denote the vertex counts of the two polygons P and Q respectively. It is thus a fast and efficient algorithm.
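The bounding-rectangle pre-check that lets the algorithm reject distant polygon pairs cheaply can be sketched as follows (illustrative names, not the paper's code):

```python
def bounding_rect(poly):
    """Axis-aligned bounding rectangle (xmin, ymin, xmax, ymax) of a polygon."""
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def rects_overlap(a, b):
    """Cheap rejection test run before any exact edge-pair intersection:
    two axis-aligned rectangles are disjoint iff they are separated
    along the x-axis or along the y-axis."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

P = [(0, 0), (2, 0), (1, 2)]
Q = [(5, 5), (6, 5), (6, 6)]
```

Only pairs whose rectangles overlap go on to the monotone-chain and sweep-line stages.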
        
    • Vol. 9, Issue 11, Pages: 1304(2004) DOI: 10.11834/jig.2004011251
      Abstract: The main characteristic of constructive neural networks is that the network is built step by step while processing a given data set: its structure and parameters are discovered by learning rather than fixed in advance. By introducing kernel functions for the non-linear transform, a support vector machine (SVM) maps the input space into a high-dimensional kernel space and then seeks the best linear separating plane in that new space; the resulting classification function is formally similar to a neural network. The constructive kernel covering algorithm (CKCA) combines constructive learning methods of neural networks, such as the covering algorithm, with the kernel-function methods of SVM. CKCA first maps the input data set into a kernel space and then classifies the data set there using a covering algorithm. The method features low computation, strong constructive ability, and good visibility, so it is suitable for problems such as classifying vast high-dimensional data sets and image recognition. In this paper, CKCA is used to recognize slanted or blurred car plate characters, and the results are satisfactory.
        
    • Vol. 9, Issue 11, Pages: 1309(2004) DOI: 10.11834/jig.2004011252
      Abstract: Assessing the performance of a display such as a TV set depends strongly on human visual perception, which in turn directs the research and development of displays. Visual perception is influenced by cultural background, and studying these differences helps in understanding how to improve display characteristics. In this paper a dedicated experiment is performed: a set of still pictures is shown on a pair of colour TV sets, and the evaluators are asked to assess image colorfulness under different white point settings and different contrast levels. Analysis of the evaluation results indicates that most people prefer the more colour-saturated picture, and from the results the preferred white point setting and contrast can be obtained. The evaluators are divided into two groups, experts and non-experts; the results show that the mean score from experts is smaller than that from non-experts. The results are also compared with a similar experiment done by Philips Research, yielding an analytical result for the study of visual perception across the different cultural backgrounds of eastern and western countries.
        
    • Vol. 9, Issue 11, Pages: 1314(2004) DOI: 10.11834/jig.2004011253
      Abstract: Conventional matching methods are easily affected by scene occlusion, lighting, and noise; moreover, the correspondence between model and image must be built explicitly, which complicates the matching process. The Hausdorff distance is therefore used, and the drawbacks of the conventional partial Hausdorff distance are analyzed and corrected. To achieve fast image matching, the concept of information measures is introduced to extract edge characteristic points after edge detection, similarity measures are constructed from a modified Hausdorff distance, and a new matching strategy based on information measures and the Hausdorff distance is proposed. In this method, a pre-matching step discards unimportant regions using simple global information, such as the proportion of pixels within a preset gray-level range or above a preset information-measure value, which greatly speeds up matching. The proposed strategy improves noise resistance and, to some extent, gives criteria for parameter selection. In addition, the method matches occluded images correctly and overcomes mismatches induced by noise, spurious edge segments, and outlier points. Experimental results demonstrate that the proposed strategy is feasible and effective.
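The partial (ranked) directed Hausdorff distance this line of work builds on replaces the maximum nearest-neighbour distance with a quantile, which is what gives robustness to occlusion and outliers. A brute-force sketch (the paper's specific modification is not reproduced here):

```python
import math

def partial_hausdorff(A, B, frac=0.9):
    """Directed partial Hausdorff distance from point set A to point set B:
    the frac-th ranked nearest-neighbour distance instead of the maximum,
    so a fraction (1 - frac) of outlier points in A is tolerated."""
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    k = max(0, min(len(dists) - 1, int(frac * len(dists)) - 1))
    return dists[k]

A = [(0, 0), (1, 0), (10, 0)]   # one outlier point at (10, 0)
B = [(0, 0), (1, 0)]
```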
        
    • Fast Algorithm for Mesh Denoising

      Vol. 9, Issue 11, Pages: 1320(2004) DOI: 10.11834/jig.2004011254
      Abstract: Mesh denoising is an essential step in creating clean 3D models. By adapting bilateral filtering from image denoising to 3D mesh denoising, Fleishman et al. proposed a simple and fast anisotropic mesh denoising algorithm, but it is not efficient or stable enough. For these reasons, this paper proposes replacing the Gaussian kernel used in bilateral filtering with a quasi-Cauchy kernel and a Taylor polynomial approximation. At the same time, some implementation details are improved. Together these changes make the algorithm more efficient and stable. Finally, the choice of parameters that achieves the best results is demonstrated.
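For intuition, here is a 1-D bilateral filter sketch in which the Gaussian weight can be swapped for a quasi-Cauchy weight 1 / (1 + t²); the paper's exact kernel and Taylor approximation are not reproduced, so treat the 'cauchy' form as an assumption:

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.5, radius=3, kernel="gauss"):
    """1-D bilateral filter; kernel='cauchy' uses the cheaper
    quasi-Cauchy weight 1 / (1 + t^2) in place of exp(-t^2 / 2)."""
    def w(t, sigma):
        t /= sigma
        if kernel == "gauss":
            return math.exp(-0.5 * t * t)
        return 1.0 / (1.0 + t * t)

    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            # spatial closeness times signal-value similarity
            wt = w(i - j, sigma_s) * w(signal[i] - signal[j], sigma_r)
            num += wt * signal[j]
            den += wt
        out.append(num / den)
    return out
```

On a mesh, the same weighting is applied to vertex offsets along the normal rather than to a 1-D signal.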
        
    • Color Image Retrieval Method Based on Wavelet Multiresolution Analysis

      Vol. 9, Issue 11, Pages: 1326(2004) DOI: 10.11834/jig.2004011255
      Abstract: Vast numbers of images have emerged with the prevalence of multimedia technology and the spread of the Internet. Since the traditional keyword-based retrieval approach cannot meet the demands of image data retrieval, content-based image retrieval (CBIR) has become the current research focus. Among content-based retrieval technologies, feature extraction, of color, texture, shape, and so on, is the most important; but each feature captures only one aspect of image similarity, so how best to represent images has become an important research topic in CBIR. We propose an image representation based on the color and texture features of an image: the histogram in HSV color space serves as the color feature, and the detail coefficients of a multiresolution wavelet representation serve as the texture feature, taking full advantage of the rich representation of color and the statistics of the wavelet coefficients. Comparative experiments on retrieval recall over various image types and feature combinations show that this representation is feasible and valid for image retrieval.
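The color half of the proposed feature, a quantized HSV histogram, together with a standard histogram-intersection similarity, can be sketched as follows (bin counts are illustrative; the paper's quantization is not specified here):

```python
import colorsys

def hsv_histogram(pixels, bins=(8, 3, 3)):
    """Normalized, quantized HSV histogram of a list of 8-bit RGB pixels."""
    hist = [0.0] * (bins[0] * bins[1] * bins[2])
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hi = min(int(h * bins[0]), bins[0] - 1)
        si = min(int(s * bins[1]), bins[1] - 1)
        vi = min(int(v * bins[2]), bins[2] - 1)
        hist[(hi * bins[1] + si) * bins[2] + vi] += 1.0
    return [c / len(pixels) for c in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```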
        
    • Vol. 9, Issue 11, Pages: 1331(2004) DOI: 10.11834/jig.2004011256
      Abstract: Concerning the image representation and operation problems of traditional image processing, the novel idea of expressing an image in the complex plane is proposed. A method for mapping an image to complex numbers is presented, and the operations of addition, subtraction, multiplication, and complex conjugation of images are defined from a new point of view. Combined with existing threshold access structure schemes, this leads to a proposal for complex-number-based secret sharing schemes for color image sharing. Experimental results validate that the approach effectively solves problems such as the size of the shares and the number of colors.
        
    • Vol. 9, Issue 11, Pages: 1336(2004) DOI: 10.11834/jig.2004011257
      Abstract: Existing polygon filling algorithms, including the scanline algorithm, seed filling, and algorithms based on the two classical filling theories, are analyzed and compared in this paper. A new filling algorithm is put forward based on a thorough analysis of the relations between each vertex and its adjacent edges. The algorithm first divides all vertices of a polygon into five types, then transforms the polygon into unit areas, simple triangles and trapezoids, by the lines passing through those vertices. Using the characteristics of the slanted edges, filling the unit areas can replace multiplication and division with addition and subtraction, which decreases the time and complexity of filling the whole polygon. This paper explains the design and operation of the algorithm and presents the data structures storing the vertex and unit-area information. Finally, experimental results show that the new algorithm has high efficiency and good stability.
        
    • A Fine Granularity Video Coding Algorithm Based on Multi-transformation

      Vol. 9, Issue 11, Pages: 1342(2004) DOI: 10.11834/jig.2004011258
      Abstract: With the widespread growth of video applications, developing fine-granularity scalable video compression algorithms has become urgent; among them, the discrete wavelet transform and the matching pursuit (MP) transform are the most popular. However, the complexity of MP video coding and the ringing and rippling artifacts of wavelet-based video coding are obstacles to many video applications. In this context, an algorithm based on both the discrete wavelet and MP transforms is presented in this paper for fine-granularity video coding. Taking every eight frames as a unit, the algorithm applies motion prediction and removes inter-frame redundancy with a 1-dimensional wavelet transform along the motion-vector direction, yielding one low-frequency and seven high-frequency frames. The low-frequency frame is further processed by a 2-dimensional wavelet transform to remove intra-frame redundancy, while the seven high-frequency frames are coded by matching pursuit. A new motion prediction and pixel regulation strategy is also presented; MP atoms are allocated according to the characteristics of human vision and the residual energy after motion estimation. Experiments on its performance and an analysis of computational complexity indicate that the algorithm balances recovered video quality, computational complexity, and granularity control.
        
    • A New Objective Assessment Method for Image Coding Quality

      Vol. 9, Issue 11, Pages: 1348(2004) DOI: 10.11834/jig.2004011259
      Abstract: To assess image coding quality objectively and accurately, a new objective assessment approach based on importance measures and fuzzy integrals is proposed in this paper. In the first step, the errors in edge, texture, and flat regions are computed separately, assessed according to an assessment function, and combined into a global evaluation using an importance measure over the edge, texture, and flat regions. In the second step, an importance measure is established from the positions where the errors occur, and a fine-grained evaluation is acquired by a fuzzy integral over all pixel errors in the image. In the third step, a final evaluation is formed from the evaluations obtained in the previous two steps. Experimental results show that, in terms of correlation coefficient, the proposed approach is clearly better than alternatives such as PQS, PSNR, and WMSE.
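PSNR, one of the baselines the proposed measure is compared against, is computed from the mean squared error between the original and the coded image:

```python
import math

def psnr(original, coded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, coded)) / len(original)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

PSNR treats every pixel error alike, which is exactly the weakness that region- and position-dependent importance measures are meant to fix.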
        
    • Non-progressive Mesh Compression Based on Wavelet Transform

      Vol. 9, Issue 11, Pages: 1356(2004) DOI: 10.11834/jig.2004011260
      Abstract: A non-progressive triangle mesh compression method based on the wavelet transform is proposed in this paper. It uses remeshing to remove most of the connectivity information, then compresses the geometry information with a wavelet transform, exploiting its strong decorrelation power. After remeshing and transformation, all wavelet coefficients are scanned in a fixed order into a sequence, then quantized and arithmetic coded. Because remeshing yields an adaptive semi-regular sampling pattern, an adaptive subdivision-information coding algorithm is also designed so that the decoder knows at which vertex each wavelet coefficient is located. Experimental results show that the proposed method achieves better rate-distortion performance than the well-known Edgebreaker method; the compression ratio is about 200:1 for complex meshes acquired by a 3D scanner under 10-bit quantization, more than twice that of the Edgebreaker method.
        
    • Vol. 9, Issue 11, Pages: 1362(2004) DOI: 10.11834/jig.2004011261
      Abstract: Descriptive models of housing spatial features based on the Geography Markup Language (GML), and their application, are presented in this paper. The types of housing spatial features and the relationships between them are first analyzed, and a new model for describing housing features is then proposed using the concept of a standard cell, with a view to reducing storage redundancy and improving query efficiency. Based on this, GML-based descriptions of housing spatial features and the relationships between them are put forward. The housing spatial feature models are defined as NatureStoreyModel, LogicalStoreyModel, StandardCellModel, CellModel, and LineModel, and the relationships are defined as natureStoreyMember, logicalStoreyMember, standardCellMember, and lineMember. A schema defining the GML structure is given, incorporating both the geometry and the attributes of housing features. An application system for managing housing information is developed on the basis of Java, the Document Object Model (DOM), and the Simple API for XML (SAX). The proposed models are tested through this system, and the results confirm their capability.
        
    • A Study on TGIS Based on Dynamic Multilevel Base State with Amendments

      Vol. 9, Issue 11, Pages: 1369(2004) DOI: 10.11834/jig.2004011262
      Abstract: Although the base state with amendments model can store change data with low redundancy, querying and restoring historical information based on it is inefficient. Considering the cohesion of spatio-temporal data and of spatial properties, this paper introduces a method for designing and building a spatio-temporal database based on a dynamic multilevel base state with amendments (DMBSA) model, derived from analyzing the characteristics of land parcel change in an object-oriented temporal GIS (TGIS). The DMBSA model is derived from the base state with amendments model and overcomes its shortcomings. Finally, a self-developed land cadastral information system (ReGIS) based on the above model and methods is presented. The ReGIS application shows that the DMBSA model can store and manage spatio-temporal data with low redundancy and trace historical information efficiently.
        
    • Vol. 9, Issue 11, Pages: 1376(2004) DOI: 10.11834/jig.2004011265
      Abstract: Fusion of remote sensing images has great application potential. With the development of quantitative remote sensing, fusion is required not only to improve spatial detail but also to preserve the spectral information of the multispectral bands. The principles and methods of two fusion algorithms, SFIM (smoothing filter-based intensity modulation) and the Gram-Schmidt transform, are described. In a case study on an urban IKONOS image, visual judgment, quantitative statistical parameters, and graph comparisons are used to assess the two algorithms, which are also compared with the traditional IHS transform and PC (principal component) transform methods. The results show no distinct difference in the spatial detail gained; in terms of spectral fidelity, however, both the IHS and PC methods are the worst, the Gram-Schmidt method is better, and the SFIM method is the best.
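SFIM injects spatial detail by modulating each multispectral pixel with the ratio of the panchromatic pixel to its local mean; wherever the pan image is locally flat the ratio is 1, so the local spectral values pass through unchanged. A per-pixel sketch (full-image handling and the smoothing-window size are omitted):

```python
def box_mean(img, i, j, radius=1):
    """Local mean of a 2-D list around (i, j): the smoothed pan value."""
    rows = range(max(0, i - radius), min(len(img), i + radius + 1))
    cols = range(max(0, j - radius), min(len(img[0]), j + radius + 1))
    vals = [img[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)

def sfim_pixel(ms, pan, pan_smooth):
    """SFIM fusion of one pixel: the multispectral value modulated by
    the ratio of the pan pixel to its local mean."""
    return ms * pan / pan_smooth
```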
        
    • Vol. 9, Issue 11, Pages: 1386(2004) DOI: 10.11834/jig.2004011263
      Abstract: Array for Real-time Geostrophic Oceanography (ARGO) marine observation data have attracted strong attention in marine research, yet processing software for them is still scarce. The format, records, and working mode of ARGO data, collected by satellite and by in situ measurement in the ocean, are introduced, and the characteristics, spatial 3D structure, and spatio-temporal variation of ARGO data are analyzed. A practical optimized storage model for ARGO data based on Oracle is presented, and the efficient storage and access of vast ARGO data along with multi-source, multi-dimensional spatio-temporal data are discussed. An ARGO data processing system (ARGOGIS) with independent copyright, oriented toward ocean data management, is developed with COM modules. The ARGOGIS platform has been used by several marine research departments.
        
    • Vol. 9, Issue 11, Pages: 1392(2004) DOI: 10.11834/jig.2004011264
      Abstract: Linear pixel unmixing is a straightforward and efficient approach to the spectral decomposition of remotely sensed data. Since it was proposed several years ago, the orthogonal subspace projection approach has been widely investigated and used in linear pixel unmixing. A main drawback to its operational use is that the required spectral prior knowledge cannot be retrieved automatically, correctly, and completely. To overcome the problem of unknown prior endmembers in an image data set, this paper presents an unsupervised orthogonal subspace projection (UOSP) algorithm that retrieves one endmember at a time by searching for the maximal pixel vector in an orthogonally projected image. If that pixel is spatially cohesive, it is taken as an endmember, and its effect is then removed by orthogonal subspace projection to obtain the next projected image. Experiments on PHI hyperspectral data show that the UOSP algorithm is an efficient and precise approach for retrieving endmembers and unmixing mixed pixels automatically.
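The core operator in each UOSP iteration is the orthogonal subspace projector P = I - U (U^T U)^-1 U^T, which annihilates everything in the span of the endmembers already found, so the next maximal pixel vector must be a new endmember. A sketch using NumPy (variable names are illustrative):

```python
import numpy as np

def osp_projector(U):
    """Orthogonal subspace projector P = I - U (U^T U)^-1 U^T for the
    subspace spanned by the columns of U (the endmembers found so far)."""
    U = np.asarray(U, dtype=float)
    return np.eye(U.shape[0]) - U @ np.linalg.inv(U.T @ U) @ U.T

# Components inside the endmember subspace are annihilated; the
# residual is what the next maximal-pixel search operates on.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
residual = osp_projector(U) @ np.array([2.0, 3.0, 4.0])
```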
        